
Additional-Escape498 t1_j9rq3h0 wrote

EY tends to go straight to superintelligent AI robots making you their slave. I worry about problems that'll happen a lot sooner than that. What happens when we have semi-autonomous infantry drones? How much more aggressive will US/Chinese foreign policy get when China can invade Taiwan with BigDog robots that have machine guns attached? What about when ChatGPT is combined with Toolformer, can write to the internet instead of just reading it, and starts doxxing you when it throws a temper tantrum? What about when rich people can use something like that to flood social media with bots that spew disinformation about a political candidate they don't like?

But part of the lack of concern about AGI among ML researchers is that during the last AI winter we rebranded to "machine learning" because "AI" was such a dirty word. I remember as recently as 2015 at ICLR/ICML/NIPS you'd get side-eye for even bringing up AGI.

193

icedrift t1_j9s5640 wrote

What freaks me out the most are the social ramifications of AIs that pass as humans to the majority of people. We're still figuring out how to interact healthily with social media, and soon we're going to be interacting with entirely artificial content that we'll anthropomorphize and attribute to other humans. In the US we're dealing with a crisis of trust and authenticity, and I can't imagine generative text models are going to help with that.

78

VirtualHat t1_j9vnc3y wrote

Yes, it's worse than this too. We usually associate well-written text with accurate information. That's because, generally speaking, most people who write well are highly educated and have been taught to be critical of their own writing.

Text generated by large language models is atypical in that it's written like an expert but is not critical of its own ideas. We now have an unlimited amount of well-written, poor-quality information, and this is going to cause real problems.

13

darthmeck t1_j9utkza wrote

Very well articulated.

4

icedrift t1_j9uuocn wrote

Appreciate it! Articulation isn't a strong suit of mine but I guess a broken clock is right twice a day

3

memberjan6 t1_j9uko15 wrote

Here's how I would score this passage based on the nine emotions:

- Anger: 0 - There's no indication of anger in this statement.
- Fear: 3 - The passage expresses a sense of worry and concern about the social ramifications of AI that pass as humans, which may reflect some level of fear.
- Joy: 0 - There's no expression of joy in this statement.
- Sadness: 0 - There's no indication of sadness in this statement.
- Disgust: 0 - There's no expression of disgust in this statement.
- Surprise: 0 - There's no indication of surprise in this statement.
- Trust: 1 - The passage expresses a concern about a crisis of trust and authenticity in the US, which may reflect some level of trust.
- Anticipation: 0 - There's no expression of anticipation in this statement.
- Love: 0 - There's no expression of love in this statement.

Please keep in mind that these scores are subjective and based on my interpretation of the text. Different people may score the passage differently based on their own perspectives and interpretations.

Source: chatgpt
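(If you want to reproduce this kind of scoring programmatically rather than through the web UI, here's a rough sketch using the OpenAI Python client. The model name, prompt wording, and 0-5 scale are my own guesses for illustration, not what actually produced the scores above.)

```python
# Rough sketch: asking a chat model to score a passage on nine emotions.
# Model choice and prompt are assumptions, not what generated the scores above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

passage = (
    "What freaks me out the most are the social ramifications of AIs that "
    "pass as humans to the majority of people..."
)

prompt = (
    "Score the following passage from 0 to 5 on each of these nine emotions: "
    "anger, fear, joy, sadness, disgust, surprise, trust, anticipation, love. "
    "Give a one-sentence justification for each score.\n\n" + passage
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```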

0

maxToTheJ t1_j9rwaum wrote

I worry about a lot of bad AI/ML made by interns making decisions that have a huge impact, like in the justice system, real estate, etc.

38

Appropriate_Ant_4629 t1_j9slydb wrote

> I worry about a lot of bad AI/ML made by interns making decisions that have a huge impact, like in the justice system, real estate, etc.

I worry more about those same AIs made by senior architects, principal engineers, and technology executives rather than by interns. It's those older and richer people whose values are more likely to be archaic and racist.

I think the most dangerous ML models in the near term will be made by highly skilled and competent people whose goals aren't aligned with the bulk of society.

Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.

35

maxToTheJ t1_j9u6gj5 wrote

> Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.

Those examples are what I was alluding to, with maybe a little too much hyperbole, when I said "interns". The most senior or best people are absolutely not building those models. Those models are being built by contractors who are subcontracting the work out, which means they're being built by people who are not getting paid well, i.e. not senior or experienced folks.

Those jobs aren't exciting and aren't rewarded financially by the market. I understand that I'm not personally helping the situation, but I'm not going to take a huge pay cut to work on those problems, especially when that pay cut would come at my expense for the benefit of contractors who have historically been scummy.

−2

Appropriate_Ant_4629 t1_j9ubt3u wrote

> I understand that I'm not personally helping the situation, but I'm not going to take a huge pay cut to work on those problems, especially when that pay cut would come at my expense

I think you have this backwards.

Investment Banking and the Defense Industry are two of the richest industries in the world.

> Those models are being built by contractors who are subcontracting the work out, which means they're being built by people who are not getting paid well, i.e. not senior or experienced folks.

The subcontractors for that autonomous F-16 fighter in the news last month are not underpaid, nor are the Palantir guys making the software that decides whom autonomous drones hit, nor are the people building the ML models that guide the real-estate investment corporations that bought a quarter of all homes this year.

It's the guys trying to do charitable work using ML (counting endangered species in national parks, etc.) who are far more likely to be the underpaid interns.

3

maxToTheJ t1_j9vpq25 wrote

> The subcontractors for that autonomous F-16 fighter in the news last month are not underpaid, nor are the Palantir guys making the software that decides whom autonomous drones hit, nor are the people building the ML models that guide the real-estate investment corporations that bought a quarter of all homes this year.

You are conflating the profits of the corporations with the wages of the workers. You are also conflating "Investment Banking" with "Retail Banking": the person building lending models isn't getting the same TC as someone at Two Sigma.

None of the places you list (retail banking, defense) are the highest-comp employers. They may be massively profitable, but that doesn't necessarily translate into wages.

0

Top-Perspective2560 t1_j9umv6r wrote

My research is in AI/ML for healthcare. One thing people forget is that everyone is concerned about AI/ML, and no one is happy to completely delegate decision-making to an ML model. Even where we have models capable of making accurate predictions, there are so many barriers to trust, e.g. the black-box problem and a general lack of explainability, that these models are relegated to decision support at best and completely ignored at worst. I actually think that's a good thing to an extent: the barriers to trust are, for the most part, absolutely valid and rational.

However, the idea that these models are just going to be running amok is a bit unrealistic, I think: people are generally very cautious about AI/ML, especially laymen.
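To make the distinction concrete, here's a minimal sketch of the "decision support" pattern I mean: the model returns a risk score together with simple per-feature attributions, and the clinician reviews both before deciding. The data, feature names, and model below are entirely synthetic and chosen only for illustration; this is not code from any real clinical system.

```python
# Toy decision-support sketch: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "lab_marker"]  # made-up features

# Synthetic "patients" and outcomes, purely for illustration.
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.8, 1.2, -0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def decision_support(patient):
    """Return a risk score plus per-feature contributions for a clinician to review."""
    risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
    # For a linear model, coefficient * feature value is a simple, honest attribution.
    contributions = dict(zip(feature_names, model.coef_[0] * patient))
    return risk, contributions

risk, reasons = decision_support(X[0])
print(f"Predicted risk: {risk:.2f}")      # surfaced to the clinician...
print("Contributing factors:", reasons)   # ...alongside the reasons;
# the final decision stays with the human.
```

The point isn't the model, it's the framing: the output is presented as evidence for a human reviewer, not as a verdict.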

1

Mefaso t1_j9s66qq wrote

>I remember as recently as 2015 at ICLR/ICML/NIPS you’d get side-eye for even bringing up AGI.

You still do, imo rightfully so

15

starfries t1_j9ufa2h wrote

Unfortunately, there are just too many crackpots in that space. It's like bringing up a GUT (grand unified theory) in physics: a worthwhile goal, but you're sharing the bus with too many crazies.

6

uristmcderp t1_j9scj7t wrote

None of those concerns have to do with the intrinsic nature of machine learning, though. Right now it's another tool that can automate tasks previously thought impossible to automate, and sometimes it does those tasks much better than humans could. It's another wave like the Industrial Revolution and the assembly line.

Some people will inevitably use this technology to destroy things on a greater scale than ever before, like using the assembly line to mass produce missiles and tanks. But trying to put a leash on the technology won't accomplish anything because technology isn't inherently good or evil.

Now, if the state of ML were such that sentient AI was actually on the horizon, not only would this way of thinking be wrong, we'd need to rethink the concepts of humanity and morality altogether. But it's not. Not until these models manage to improve at tasks they were not trained to do. Not until these models become capable of accurately evaluating their own performance without human input.

10

FeministNeuroNerd t1_j9sy939 wrote

I don't think this is about sentience, unless you're using that as a synonym for "general intelligence" rather than "conscious awareness"?

6

shoegraze t1_j9s1hd0 wrote

What I'm hoping is that EY's long-term vision of AI existential risk is thwarted by the inevitable near-term issues that will come to light and be raised to major governments and powerful actors, who will then mount a "collective action" type of response similar to what happened with nukes, etc. The difference is that any old 15-year-old can't just buy a bunch of AWS credits and start training a misaligned nuke.

What you mention about a ChatGPT-like system getting plugged into the internet is exactly what Adept AI is working on. It makes me want to bang my head against the wall. We can say goodbye soon to a usable internet because power-seeking people with startup-founder envy are going to just keep ramping these things up.

In general, though, I think my "timelines" for a doomsday scenario are a bit longer than EY's / EA's. LLMs are just not going to be the paradigm that brings "AGI," but they'll still do a lot of damage in the meantime. Yann had a good paper about what other factors we might need to get to a dangerous, agentic AI.

9

CactusOnFire t1_j9s4b0f wrote

> We can say goodbye soon to a usable internet because power-seeking people with startup-founder envy are going to just keep ramping these things up.

This might seem like paranoia on my part, but I am worried about AI being leveraged to "drive engagement" by stalking and harassing people.

"But why would anyone consider this a salable business model?" It's no secret that the entertainment on a lot of different websites is fueled by drama. If someone were so inclined, AI and user metrics could be harnessed specifically to find creative new ways to sow social distrust and, in turn, drive "drama"-related content.

I.e., creating recommendation engines specifically to show people things it predicts will make them angry, so that they engage with them at length and a larger corpus of words gets exchanged that can be data-mined for advertising analytics.
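To make the mechanism concrete, here's a toy sketch of the kind of ranking objective I mean. The field names, weights, and items are all invented for illustration; this is not any real platform's ranking code. The point is just that optimizing purely for predicted engagement can surface outrage bait without anyone ever writing "make people angry" into the objective.

```python
# Toy engagement-maximizing ranker. All numbers and field names are made up.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float    # model's estimate of click-through
    predicted_comments: float  # model's estimate of comment volume
    predicted_outrage: float   # sentiment model's estimate of anger provoked

def engagement_score(item: Item) -> float:
    # Nothing here says "show people things that make them angry",
    # but if angry users comment more, outrage-heavy items win the ranking anyway.
    return 1.0 * item.predicted_clicks + 3.0 * item.predicted_comments

feed = [
    Item("Cute otter compilation",        predicted_clicks=0.30, predicted_comments=0.02, predicted_outrage=0.01),
    Item("Outrageous take on local news", predicted_clicks=0.25, predicted_comments=0.20, predicted_outrage=0.90),
]

for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.2f}  {item.title}")
# The outrage-bait item ranks first purely because comments are weighted heavily,
# and those comment threads are exactly the text that gets mined for ad analytics.
```

Feed the resulting comment threads into advertising analytics and you have exactly the incentive structure described above.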

11

FeministNeuroNerd t1_j9syj6j wrote

Pretty sure that's already happened... (the algorithm showing things to make you angry so you write a long response)

4

icedrift t1_j9uwkrx wrote

I agree with all of this, but it's already been done. Social media platforms already use engagement-driven algorithms that instrumentally arrive at recommending reactive content.

Cambridge Analytica also famously preyed on user demographics to feed carefully tailored propaganda to swing states in the 2016 election.

3

Leptino t1_j9s73ml wrote

One would have to consider the ultimate consequences (including paradoxical ones) of those things too. Like, would it really be catastrophic if social media became unusable for the average user? The 1990s are usually considered the last halcyon era... Maybe that's a feature, not a bug!

As far as drone swarms go, those are definitely terrifying, but then there will be drone-swarm countermeasures. Also, is it really much more terrifying than Russia throwing wave after wave of humans at machine-gun nests?

I view a lot of the ethics concerns as a bunch of people projecting their fears into a complicated world, and then drastically overextrapolating. This happened with the industrial age, electricity, the nuclear age and so on and so forth.

8

Soc13In t1_j9stb29 wrote

Much like that, there are issues with what recommender systems are recommending, how credit models are scoring, and why your resume is discarded without ever being seen by a human, along with lots of other minor, mundane, daily things that we take for granted and that are actually dystopian for the people on the short end of the stick. These are systems that need to be fine-tuned, yet we treat their judgements as if they were carved into commandment stones. The AI dystopia is already real.

5

MuonManLaserJab t1_j9udmcp wrote

> EY tends to go straight to superintelligent AI robots making you their slave.

First, I don't think he ever said that they will make us slaves, except possibly as a joke at the expense of people who think the AI will care about us or need us enough to make us slaves.

Second, I am frustrated by the fact that you seem to think that only the short-term threats matter. Which is the more short-term threat: nuclear contamination from the destruction of the ZPP in Ukraine, or all-out nuclear war? Contamination is more likely, but that doesn't mean we wouldn't be stupid to ignore the farther-off yet incredibly catastrophic outcome of nuclear war. Why can't you be worried about short-term AI issues and also acknowledge the slightly longer-term risk of superintelligent AI?

This is depressingly typical as an attitude and not at all surprising as the top comment here, unfortunately.

4

abc220022 t1_j9t681p wrote

The shorter-term problems you mention are important, and I think it would be great for technical and policy-minded people to try to alleviate such threats. But it's also important for people to work on the potential longer term problems associated with AGI.

OpenAI, and organizations like them, are racing towards AGI; it's literally in their mission statement. The current slope of ML progress is incredibly steep. Seemingly every week, some major ML lab comes up with an incredible new capability from only minor tweaks to the underlying transformer paradigm. The longer this continues, the more impressive these capabilities look, and the longer scaling curves keep going with no clear ceiling, the more likely it looks that AGI will come soon, say within the next few decades. And if we do succeed at making AI as capable as or more capable than us, then all bets are off.

None of this is a certainty. One of Yudkowsky's biggest flaws, imo, is the certainty with which he makes claims backed by little rigorous argument. But given recent discoveries, the probability of a dangerous long-term outcome is high enough that I'm glad we have people working on a solution to this problem, and I hope more people will join in.

1

Tseyipfai t1_j9tvfby wrote

Re: things that will happen rather soon, I think it's important that we also look at AI's impact on nonhuman animals, which I argued in this paper. AI-controlled drones are already killing animals, some AIs are being used in factory farming, and language models are showing speciesist patterns that might reinforce people's bad attitudes toward some animals (ask ChatGPT for recipes for dog meat vs. chicken meat, or just google "chicken" to see whether you get mainly the animal or their flesh).

Actually, even for things that could happen in the far future, I think it's extremely likely that what AI will do will immensely impact nonhuman animals too.

1

bohreffect t1_j9uy9ko wrote

>What about when ChatGPT

I mean, we're facing even more important dilemmas right now with ChatGPT's safety rails. What is it allowed to talk about, and what not? What truths are verboten?

If the plurality of Internet content is written by these sorts of algorithms, which have hardcoded "safety" layers, then the dream of truly open access to information that was the Internet will be that much closer to death.

0