SidewaysFancyPrance t1_jdnuhar wrote

Thanks to short attention spans and humanity's astounding ability to adapt to new circumstances, CEOs are embracing the "random bullshit, go!" method of distracting and deflecting in order to dissipate any accumulating pushback against their decisions. They just need to ride it out. Generative text AI is a nice, cheap, controllable way to help with that. It's designed to sound good but not actually mean anything, and is easily disavowed if it acts up.

Look at how quickly everyone's gotten over the fact that Twitter has been turned into a megaphone for narcissistic billionaires and Nazis. We've all just sort of accepted it as the new normal. Sure, lots of people are still mad and thinking about it, but there are no longer enough of them to effect any change.


SidewaysFancyPrance t1_jddxd0n wrote

It's the tracking. I don't think my HOA should be able to share my car's location in my private community with the police. If you live near one of these HOAs, the police could know every time you drive to or from your home. You're OK with that? Personal tracking that will never be used to help you, but only harm you?

Imagine if a group of thieves managed to gain access to this data. They'd know exactly when to rob people.

Now I'll wait for someone to inform me that a random person's right to monitor and track all of my movements is "freedom" but my desire to keep my location private is "communism" or something.


SidewaysFancyPrance t1_jd9dxb0 wrote

What will art look like in 100 years, when it's just AI copying AI copying AI copying AI...copying AI copying artists from today? Because many potential artists will eventually stop learning art, because people will stop paying for art after AI drops the value they place on it. Sure, some artists will keep their crafts alive, because actual human art will be prized by the wealthy, but the number of paying art jobs will fall over time. Back to the old days of patronage.


SidewaysFancyPrance t1_jcyr5rk wrote

Human brains are hackable. AI will be tuned to hack the brain the same way advertising and marketing people have been doing for decades, but it will be refined and bespoke for each target. It will be cheap, it won't sleep, and it can be cloned to scale up massively.

AI is going to be a horribly negative force in society. The AI we are seeing now is capital that will be built, owned, and IP-protected by capitalists to enrich themselves at great cost to society.

If an AI can do something good for society at any point in the future, a capitalist will figure out the value of that good, try to capture and monopolize it to control scarcity and make it more valuable, then bleed the hell out of it to extract profit.


SidewaysFancyPrance t1_jcl2n3h wrote

Not just that, they don't want regulations to clamp down again in response to this. So they're pivoting to "oh shit, we stepped in it, but we'd rather pretend everything's OK than have regulators step in, because I want easy, free money for my next startup."

This is what happens in an environment where people can fail spectacularly and still walk away rich and personally unaffected. They just look for the next grift.


SidewaysFancyPrance t1_jcblo4m wrote

I'm hoping there's financial incentive for the companies who have been snapping up homes to start selling them at a loss. If companies get tax breaks for sitting on empty homes so they can wait for prices to go back up, I'll be pretty pissed. They're manipulating the market for profit, and people are homeless.


SidewaysFancyPrance t1_jcbeccw wrote

AI arms race, which is 100% about money and greed. Morality and ethics slow things down. Regulations slow things down. Wall Street and investors are watching for a prospective winner to bet big on. Tech is stale, old news. They need the next big win.

AI domination is going to be "first past the post" and all these companies know it. We're going to hear all kinds of BS about how they're trying to be ethical/etc but they really want to be first and snap up lucrative deals with their protected IP. What was open will stop being open very quickly once a public tech demo like this gets attention, and be strictly protected for that reason.


SidewaysFancyPrance t1_jae8gki wrote

For artistic purposes where there is no "right" or "wrong" way to do something, it's great. Like for upscaling a personal photo, but maybe not for a news broadcast showing an AI's prediction of what a crime suspect looks like based on a blurry security cam, where the AI is just pulling from random people's faces it trained on. There are pitfalls to people using/creating/consuming AI-generated data/content without attempting to understand the nuance and implications.


SidewaysFancyPrance t1_jadyt2z wrote

ChatGPT is at least one step removed from the actual source material, and ChatGPT isn't trying to be "right." You should just bypass ChatGPT and go to actual source material instead of asking a language AI to try to summarize it for you, knowing that it will often confidently present you with wrong information.


SidewaysFancyPrance t1_jadfj50 wrote

When you think about it, the kind of AI we're talking about isn't designed to be accurate, it's designed to make your brain satisfied with what it sees. This is why I think AI is being introduced irresponsibly to average citizens: ChatGPT has disclaimers, but people don't care about accuracy either. It's like a brain hack, a Jedi mind trick to make people think the output is good because it uses what seem like the right words in the right order. It's a confidence man at its core.

As long as the person likes the upscaled photo it spits out, that's all the AI really wanted. It's not going to be a CSI enhancer.


SidewaysFancyPrance t1_j8nvonl wrote

Weak AI is good enough to sorta replace workers, in areas where accuracy is not super important (customer-facing stuff, where people are already used to corporations providing minimal/poor service).

If you train your customers to accept less and less every year, then eventually replacing an underpaid, poorly-trained human with a weak AI is not going to change much except save money. AI is going to end up in places C-levels already didn't care about and were strangling.


SidewaysFancyPrance t1_j8imlbj wrote

They are targeting people who are driving and want to quickly get some food without leaving their car. Promoting app ordering for a drive-thru doesn't sit right - and can be dangerous if people feel they need to place their order while driving. I understand why they want this tech, because there will always be customer demand for voice ordering in drive-thrus. It's just sad that society is pushing so hard towards eliminating human interfaces and jobs, just so a few people can get a little richer.


SidewaysFancyPrance t1_j7nfwxy wrote

This is why they don't allow spare lithium batteries in checked baggage anymore - too dangerous. If this had happened in the cargo hold, it could have taken down the whole plane. They need to be in the cabin as carry-on, where a fire can be detected and addressed quickly (like in this case).


SidewaysFancyPrance t1_j5zvwkp wrote

Yeah, I just did my first rebuild in many years. Upgraded from an i7-4770K and GTX 1080 to an i7-13700KF and RTX 3070.

My goal was a major performance upgrade without resorting to major power consumption increases. I only needed a 600W PSU for my build and it screams. I could have afforded more, but I don't feel comfortable running that hot, especially with all the melted cables I was seeing with 4090s. That just seems like the industry moving in the wrong direction.


SidewaysFancyPrance t1_j5zbws6 wrote

I'm really not looking forward to the inevitable: when AI starts doing a lot of this low-level/foundational work, people are going to see it as a wonderful scapegoat for anything that goes wrong in their projects, or for malicious people to shift liability. "The AI got the analysis wrong" or "The AI missed this parameter" or whatnot. At some point there will be a court case where real people were harmed, and the AI gets blamed while the people responsible are let off the hook.

AI needs accountability, and that is woefully missing right now. We already have half of the country wanting AIs to be racist and tell horrible jokes on command, and get mad when the AI refuses to do that. I don't see how our society can handle the AI implementations we're witnessing. It's not going to go well.


SidewaysFancyPrance t1_j5u35jv wrote

At some point this gets super unsafe, if they can't perform maintenance or replace parts/bulbs (or if they do it anyway on live lines). After a year, some higher authority needs to step in and resolve this ASAP, replacing the system if they have to.


SidewaysFancyPrance t1_j250ofp wrote

This is pure terrorism, which benefits Russia in many ways. Strategically, it diverts a lot of Ukrainian resources towards protecting and evacuating civilians from areas where they should be pretty safe, away from the front lines. Russia wants to disrupt Ukraine's advantages on the battlefield (better training, better equipment, higher morale, stronger will to fight) by keeping them spread thinly and reacting to civilian catastrophes.

Attacking civilian populations is a war crime for good reason: it's effective and is a cheap/obvious strategy for anyone who does not care if the enemy population lives or dies. It's for truly evil people.