
shoegraze t1_j9s1hd0 wrote

What I’m hoping is that EY’s long-term vision of AI existential risk is thwarted by the inevitable near-term issues that will come to light and be raised to major governments and powerful actors, who will then mount a “collective action” type of response similar to what happened with nukes. The difference is that a random 15-year-old can’t build a nuke, but they can buy a bunch of AWS credits and start training a misaligned model.

What you mention about a ChatGPT-like system getting plugged into the internet is exactly what Adept AI is working on. It makes me want to bang my head against the wall. We can say goodbye soon to a usable internet because power-seeking people with startup founder envy are going to just keep ramping these things up.

In general, though, I think my “timelines” are a bit longer than EY’s / EA’s for a doomsday scenario. LLMs are just not going to be the paradigm that brings “AGI,” but they’ll still do a lot of damage in the meantime. Yann had a good paper about what other pieces we might need to get to a dangerous, agentic AI.

9

CactusOnFire t1_j9s4b0f wrote

>We can say goodbye soon to a usable internet because power-seeking people with startup founder envy are going to just keep ramping these things up.

This might seem like paranoia on my part, but I am worried about AI being leveraged to "drive engagement" by stalking and harassing people.

"But why would anyone consider this a salable business model?" It's no secret that the entertainment on a lot of different websites are fueled by drama. If someone were so inclined, AI and user metrics could be harvested specifically to find creative and new ways to sow social distrust, and in turn drive "drama" related content.

I.e., building recommendation engines specifically to show people things the system predicts will make them angry, so that they engage with them at length and produce a larger corpus of words that can be datamined for advertising analytics.
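To make that mechanism concrete, here's a toy sketch of what such an engagement-maximizing recommender could look like. Every name, function, and "model" below is a hypothetical stand-in for illustration, not anything a real platform is known to run:

```python
# Hypothetical sketch: a recommender that ranks content by how much angry,
# wordy engagement it is predicted to generate. All models here are crude
# placeholders standing in for trained classifiers/regressors.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    text: str


def predicted_anger(user_profile: dict, item: Item) -> float:
    """Placeholder for a trained classifier: crude keyword heuristic here."""
    triggers = user_profile.get("hot_button_topics", [])
    hits = sum(topic in item.text.lower() for topic in triggers)
    return min(1.0, 0.1 + 0.3 * hits)


def predicted_reply_length(user_profile: dict, item: Item) -> float:
    """Placeholder for a model estimating how many words the user writes back."""
    return 5.0 + 100.0 * predicted_anger(user_profile, item)


def rank_feed(user_profile: dict, candidates: list[Item], top_k: int = 10) -> list[Item]:
    # Score each candidate by anger * expected text produced, i.e. optimize
    # for the volume of mineable text rather than for the user's wellbeing.
    scored = sorted(
        candidates,
        key=lambda it: predicted_anger(user_profile, it)
        * predicted_reply_length(user_profile, it),
        reverse=True,
    )
    return scored[:top_k]


if __name__ == "__main__":
    profile = {"hot_button_topics": ["ai", "politics"]}
    feed = [
        Item("a", "Cute dog pictures"),
        Item("b", "Why AI politics will ruin everything"),
    ]
    for item in rank_feed(profile, feed):
        print(item.item_id, item.text)
```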

11

FeministNeuroNerd t1_j9syj6j wrote

Pretty sure that's already happened... (the algorithm showing things to make you angry so you write a long response)

4

icedrift t1_j9uwkrx wrote

I agree with all of this, but it's already been done. Social media platforms already use engagement-driven algorithms that instrumentally converge on recommending reactive content.

Cambridge Analytica also famously preyed on user demographics to feed carefully tailored propaganda to voters in swing states during the 2016 election.

3