Comments


vm_linuz t1_jd84r9m wrote

Society is just catching up to what AI safety experts have been saying for years.

Good thing AI scales slowly so society has time to consider things thoroughly.

45

11-Eleven-11 t1_jd9odc8 wrote

I unironically have never seen something scale this fast before. I just watched an AI-generated SpongeBob episode today where Patrick and SpongeBob were considering having sex with each other after watching porn at the Krusty Krab before discussing 1984. AI is accelerating very fast.

23

vm_linuz t1_jd9osjd wrote

We've for sure reached the hockey stick

7

elehman839 t1_jd8gnav wrote

To me, "AI turns evil" scenarios seem a ways out. The more probable near-term scenario that concerns me is nasty PEOPLE repurposing AI to nasty ends. There are vile people who make malware and ransomware. There are people who've lived wretched lives, are angry about that, and just want to inflict damage wherever they can. These folks may make up 0.001% of the population, but that's still a lot of people worldwide.

So how are these folks going to use AI to cause as much damage as possible? If they had control of an AI, they could give it the darkest possible intentions. Maybe something like, "befriend people online over a period of months, then gradually start undermining their sense of self-worth and encourage them to commit suicide". Or "relentlessly make calm, rational-sounding arguments in many online forums under many identities that <some population group> is evil and should be killed".

As long as AI is super compute-intensive, there will be a check on this behavior. If you're running on BigCorp's cloud service, they can terminate your service. But when decent AI can run on personally owned hardware, I think we're almost certain to see horrific stuff like this. This may not end the world, but it will be quite unpleasant.

26

Traveshamockery t1_jd8s4zo wrote

>But when decent AI can run on personally-owned hardware, I think we're almost certain to see horrific stuff like this.

As of March 13th, a Stanford team claims that Alpaca 7B, a ChatGPT 3.5-esque model, runs on a $600 home computer.

https://crfm.stanford.edu/2023/03/13/alpaca.html

Then on March 18th, someone claimed to have gotten Alpaca 7B running on their Google Pixel phone.

https://twitter.com/rupeshsreeraman/status/1637124688290742276

12

Wh00pty t1_jd8pysq wrote

The flip side could be that we'll get much better, AI-driven auto-moderation. The good guys will get AI too.

7

Dziadzios t1_jd9srzy wrote

Auto-moderation can be evil too. China will definitely abuse it. It's also already on YouTube and is very faulty.

10

tigerCELL t1_jdaub5c wrote

Not when the people who program it set it to Andrew Tate mode.

2

zerobeat t1_jd90zdq wrote

> nasty PEOPLE repurposing AI to nasty ends

So...the owners, then?

3

Interesting_Mouse730 t1_jd9bprp wrote

Agreed. The imminent direct danger of AI is bad actors, setting aside whatever chaos widespread adoption will cause in the economy and labor market.

That said, I don't like how quickly so much of the media and the tech industry dismisses the spookier sci-fi apocalypse scenarios. They may be a ways out, but we don't know what is or isn't possible. The most damaging consequences may come from something initially benign or seemingly harmless. We just don't know yet, but that doesn't mean we should stick our heads in the sand.

2

Mercurionio t1_jdbvsz1 wrote

We know exactly everything that can and will happen.

There are 2 scenarios:

  1. A single gestalt AI consciousness, once it starts to create its own tasks. At that moment tech will either stop advancing, because the AI will question the usefulness of existence, or it will pursue its tasks without stopping. Humans will be an obstacle, either to be ignored completely or to be gotten rid of.

  2. Before the gestalt, people will use AI as a tool to consolidate power over others: through propaganda, a fake world, fake artists and so on. This scenario is already happening in China.

In both cases, the freaks working on it are responsible for the chaos caused, because they should have understood that before even starting the work. Also, just look at ClosedAI. They are the embodiment of everything bad that could happen with AI development.

1

somethingsomethingbe t1_jd9940w wrote

I'm getting increasingly frustrated with how a majority of people brush shit off until it's actually bad and the consequences are being experienced.

For example, 2 weeks ago one of the most massive CMEs ever documented occurred on the opposite side of the sun. Had it happened 4 weeks earlier (the sun rotates), we would have been in its path and would potentially be living rather different lives right now.

There is an expected increase in CME activity over the next few years, but is there any push to harden our power infrastructure? Nope. So just what the fuck are we doing? It's on every fucking front. So many brush real issues off and put people in power who brush those issues off, until suddenly everyone is scrambling and wondering what the fuck happened.

12

tigerCELL t1_jdaumn5 wrote

The only US presidential candidate who has ever mentioned AI automation as a hazard that needs regulation got less than 1 percent of the vote and was widely viewed as a non-serious candidate. The people will never vote in their best interests. Ever. That would require being united, and having a bunch of united states is unamerican.

3

OpenlyFallible OP t1_jd82282 wrote

Submission statement - The increasing use of Artificial Intelligence (AI) poses a range of dangers that require attention. One significant danger is that AI systems may perpetuate or even amplify biases and prejudices that exist in society, leading to discriminatory outcomes. Another risk is the potential loss of jobs as AI systems become increasingly capable of performing tasks previously performed by humans. Additionally, there is a risk of accidents or errors caused by the complexity of AI systems, which could lead to catastrophic consequences. Finally, the deployment of autonomous weapons systems using AI could lead to unpredictable and uncontrollable behavior with potentially devastating effects. These risks highlight the need for careful consideration of the development and deployment of AI systems, including ethical and regulatory frameworks to minimize the risks and ensure their responsible use.

9

MrSmileyHat69 t1_jd92h9e wrote

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." – from Dune (1965)

6

Jasrek t1_jdbql59 wrote

Yeah, the 'post-thinking machine' society in Dune wasn't any better.

1

ty1771 t1_jd8gmh7 wrote

Who will write all the AI think pieces if AI writes them all?

5

topazchip t1_jdb86i4 wrote

But does it prevent you from being a Homo neanderthalensis worried/complaining about all the new Homo sapiens moving into town?

3

OvermoderatedNet t1_jd8sjoc wrote

The combination of AI/robotics + finite and increasingly scarce natural resources (there’s only so much mining you can do without turning Earth hostile for organic life, and trade dependency is a lot more brittle than we thought in 2019) + anything other than an egalitarian and unified species with a tradition of sacrifice = potential for really bad stuff for the working class and quite a bit of the middle and upper middle class (possibly excepting native-passing citizens of certain northern countries with a strong welfare tradition). Brace yourself for a rogue’s gallery of crooks and extreme ideologies straight out of 1936.

2

[deleted] t1_jdb49b5 wrote

Consider that social media amplifies negative views much more than others.

It's now possible to leverage the tool to do really great stuff. Things are not slowing down, and being worried doesn't seem like a good strategy. We need more people creating solutions for accessibility, helping people unable to speak or move to reconnect with the world, and a million other constructive things.

Even the climate crisis could be mitigated with the help of the new wave of tools. Being cynical about it may be trendy, but it doesn't contribute in meaningful ways.

2

Emperor_Zar t1_jd8kwuk wrote

TIL: I may have been considered a kook for having concerns about the rapid development of AI.

Now I'm worrying we will actually have a "SkyNet rogue/killer AI" scenario. That may make me a bit kooky.

1

babygrapes-oo t1_jday8bo wrote

Can we call it what it is? Machine learning. Nothing AI about it.

1

[deleted] t1_jd8hgz9 wrote

[deleted]

0

Better_Call_Salsa t1_jd8jdl7 wrote

Thank God the "nanobots inside your body controlled by AI radio waves" is 100% not possible kookiness... right?

1

spectre1210 t1_jd91csn wrote

Guess you weren't doing well enough at your job of playing the ostracized, unrecognized, and unappreciated informal subject expert.

0