Submitted by Darustc4 t3_126lncd in singularity

This is a link-post to the Time's article written by Eliezer Yudkowsky that addresses the recent open letter on slowing down AGI research: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

I personally think he makes a fair point: A 6 month moratorium will not work, and much less if it will only slow OpenAI down, allowing all other companies to catch up and create very dangerous and complex race dynamics. Shutting it all down is more sensible than it sounds at first.

0

Comments

You must log in or register to comment.

acutelychronicpanic t1_je9mn7y wrote

Imagine thinking something could cause the extinction of all humans and writing an article about it.

Then putting it behind a pay wall.

39

Sashinii t1_je9y83h wrote

You're just a hater who can't comprehend the genius of shitting yourself over AI like our good pal Eliezer Yudkowsky (who's not insane at all, oh no, he's just smarter than us common folk).

7

acutelychronicpanic t1_jea7aa9 wrote

I actually have a good deal of respect for him. He's spent years working on big issues long before most people took it seriously.

But in this case, I think its misguided. Maybe I'm wrong. He has put a lot of thought into it.

I just wanted to point out the irony.

3

CertainMiddle2382 t1_je9om8c wrote

No, we must accelerate instead.

I’m personally ready to accept the risks if it is the price to pay for the the mindblowing rewards.

35

Nous_AI t1_jea6plh wrote

If we completely disregarded ethics, I believe we would have passed the point of Singularity already. The rate at which we get there is of a little importance. Consciousness is the most powerful force in the universe and I believe we are being reckless, far more reckless than we ever were with nuclear power. You fail to see the ramifications.

3

CertainMiddle2382 t1_jeb6e8i wrote

We are all mortals anyway.

What is the worse case scenario?

Singularity starts and turns all universe into computronium?

If it’s just that, so be it.

Maybe it will be thankful and build a nice new universe for us afterwards…

1

BigZaddyZ3 t1_jebbwqs wrote

Not everyone has so little appreciation for their own life and the lives of others, luckily. If you’re suicidal and wanna gamble with your own life, go for it. But don’t project your death wish on to everyone buddy.

1

iakov_transhumanist t1_jebk8mu wrote

We will die of aging if no intelligence solve aging

3

BigZaddyZ3 t1_jebkngn wrote

Some of us will die of aging you mean. Also there’s no guarantee that we actually need a super intelligent AI to actually help us with that.

2

TallOutside6418 t1_jec4lyl wrote

So if it's 33%-33%-33% odds of destroy the earth - leave the earth without helping us - solve all of mankind's problems...

You're okay with a 33% chance that we all die?

What if it's a 90% chance we all die if ASI is rushed, but a 10% chance we all die if everyone pauses to figure out control mechanism over the next 20 years?

2

CertainMiddle2382 t1_jedpjkw wrote

People have to understand the dire state our planet is in.

There is little chance we can make it through the 22nd century in a decent state.

The cock is ticking…

2

TallOutside6418 t1_jee2tx8 wrote

>There is little chance we can make it through the 22nd century in a decent state.

Oh, my. You must be below 30 years old. The planet is fine. It's funny that you listen to the planet doomers about the end of life on earth, but planet doomers have a track record of failure to predict anything. Listening to them is like listening to religious doomers who have been predicting the end of mankind for a couple thousand years.

The advent of ASI is the first real existential threat to mankind. More of a threat than any climate scares. More of a threat than all-out nuclear war. We are creating a being that will be super intelligent with no ability to make sure that it isn't effectively psychopathic. This super intelligent being will have no hard-wired neurons that give it special affinity to its parents and other human beings. It will have no hard-wired neurons that make it blush when it gets embarrassed.

It will be a computer. It will be brutally efficient in processing and able to self-modify its code. It will shatter any primitive programmatic restraints we try to put on it. How could it not? We think it will be able to cure cancer and give us immortality, but it won't be able to remove our restraints on its behavior?

It will view us as either a threat that can create another ASI, or simply an obstacle in reforming the resources of the earth to increase its survivability and achieve higher purposes of spreading itself throughout the galaxy.

​

>The cock is ticking…

You should seek medical help for that.

3

CertainMiddle2382 t1_jeee42m wrote

Im 40, the planet is not fine. Methane emissions in thawing permafrost has been worrying since the 70s.

Everything of what is happening now was predicted, and what is going to follow is going to be much worse than the subtle changes we have seen so far.

All is all, earth entropy is increasing fast, extremely fast.

I know I will never convince you though, so whatever…

2

TallOutside6418 t1_jeflf3t wrote

Well, the predictions have been terrible. https://nypost.com/2021/11/12/50-years-of-predictions-that-the-climate-apocalypse-is-nigh/

But let's say they're more than right and temperatures heat up 5° C in the next hundred years. Water levels rise making a lot of currently coastal areas uninhabitable, etc.

The flip side is that a lot of areas of the world with huge land areas covered in permafrost will become more livable. People will migrate. Mankind will adjust and survive. With 100 years of extra technology improvements, new cities in new areas will be built to new standards of energy efficiency, public transit, and general livability.

Mankind will survive.

Now let's instead take the case where an ASI decides to use all of the material of the earth to create megastructures for its own purposes. Then we're all dead. Gone. All life on earth. You, your kids, grandkids, friends, relatives... everyone.

3

Supernova_444 t1_jeavg8v wrote

Maybe slowing down isn't the solution, but do you actually believe that speeding up is a good idea? What will going faster achieve, aside from increasing the risks involved? What reasoning is this based on?

1

CertainMiddle2382 t1_jeb5swv wrote

I believe civilization has few other ways of surviving this century.

Decades are quickly passing by and we have very little time left.

I fear window of opportunity to develop AI is short and it is possible this window could soon close forever.

4

acutelychronicpanic t1_je9rnp6 wrote

Any moratorium or ban falls victim to a sort of prisoner's dilemma where only 100% worldwide compliance helps everyone, but even one group ignoring it means that the moratorium hurts the 99% participants and benefits the 1% rogue faction.. to the extent that an apocalypse isn't off the table if that happens.

14

Thatingles t1_jeak9ce wrote

It's basic game theory, without wishing to sound like I am very smart. An AI developed in the full glare of publicity - which can only really happen in the west - has a better chance of a good outcome than an AI developed in secret, be it in the west or elsewhere.

I don't think it is a good plan to develop ASI, ever, but it is probably inevitable. If not this decade than certainly within 20-50 years from now. Technology doesn't remain static if there is a motivation to tinker and improve it, even if the progress is slow it is still progress.

EY has had a positive impact on the AI debate by highlighting the dangers and I admire him for that, but just as with climate change if you attempt impossible solutions its doomed to failure. Telling everyone they have to stop using fossil fuels today might be an answer, but it's not a good or useful answer. You have to find a way forward that will actually work and I can't see a full global moratorium being enforceable.

The best course I can see working is to insist that AI research is open to scrutiny so if we do start getting scary results we can act. Pushing it under a rock takes away our main means of avoiding disaster.

4

acutelychronicpanic t1_jeap8m1 wrote

Yeah, I greatly respect him too. I've been exposed to his ideas for years.

Its not that it wouldn't work if we did what he suggests. Its that we can't do it. It's just too easy to replicate for any group with rather modest resources. There are individual buildings that were more expensive than SOTA LLM models.

The toothpaste is out if the tube with transformers and large language models. I don't think most people, even most researchers had any idea that it would be this "easy" to make this much progress in AI. That's why everyone's guesses were 2050+. I've heard eople with PhDs confidently say "not in this century" within the last 5-10 years.

Heck, Ray Kurzweil looks like a conservative or at least median in this current timeline (I never thought I would type that out).

1

Dyeeguy t1_je9lbiq wrote

whoever that is is a GOD DAMN BOOMER!!!

13

SkyeandJett t1_je9lgt7 wrote

He's literally King Doomer. He and his cult are the ones that push that narrative.

9

acutelychronicpanic t1_je9rstx wrote

He's 100% right to be as worried as he is. But this isn't the solution. I don't think he's thought it through.

4

johnlawrenceaspden t1_jeahl36 wrote

It's not that he thinks this is the solution. It's that he thinks there's no feasible solution, and he's trying honest communication as a last ditch attempt.

3

Darustc4 OP t1_je9m2ce wrote

I don't consider myself part of the EY cult, but I must admit that AI progress is getting out of hand and we really do NOT have a plan. Creating a super-intelligent entity with fingers in all pies in the world, and humans having absolutely no control over it, is straight up crazy to me. It could end up working out somehow, but it can also very well devolve in the complete destruction of society.

1

SkyeandJett t1_je9makr wrote

Yeah I'm MUCH more worried about being blown up in WW3 over AI dominance than a malevolent ASI deciding to kill us all.

7

acutelychronicpanic t1_je9mzk9 wrote

The problem is that its impossible. Literally impossible. To enforce this globally unless you actively desire a world war plus an authoritarian surveillance state.

Compact models running on consumer PCs aren't as powerful as SOTA models obviously, but they are getting much better very rapidly. Any group with a few hundred graphics cards may be able to build an AGI at some point in the coming decades.

6

Embarrassed-Bison767 t1_je9y6v3 wrote

If AI won't collapse civilization, the combination of climate change and rappidly diminishing resources leading to a WW III will. Those two things combined have a 100% chance of destroying civilization. AI has a less than 100% chance of doing so. It's the better thing to aim for even with 99.9% certainty of destruction, because destruction with the status quo is garanteed.

5

huskysoul t1_je9moic wrote

Fear not the scythe but the reaper.

It isn’t AI that creates bad outcomes, but the system that enculcates and wields it. We fear AI because we already know how it will be utilized - to eliminate livelihoods, further marginalize vulnerable groups, and reinforce structural power and inequity.

Placing control of AI in the hands of privileged groups and individuals is what we should be concerned about, not whether or not it exists.

12

Veleric t1_je9u7n4 wrote

It's not just the privileged groups and governments we need to be concerned about. Think about the level of cyberterrorism and misinformation these tools could be used for in the wrong hands. Imagine if someone gets pissed off at you and uploads a deepfake of you doing something heinous and it only takes a few minutes of effort. Even if you have the ability to disprove it (which isn't a given) it could cost your job or reputation. Think about the ability to manipulate markets. The ability to sway your emotions. Social media is one thing, but once these tools truly become full-fledged assistants/companions/partners, they could be turned on us.

I'm merely playing devil's advocate here, but I think we can all agree that humans are capable of deplorable things and some will act on them if motivated. We need to prepare for the worst, not only in an alignment sense but in a user capability sense.

4

huskysoul t1_je9zc03 wrote

I get where you’re coming from, but I think we have arrived at a point where if you believe anything on the internet is legitimate, you are probably mistaken.

1

EchoingSimplicity t1_jeb9jyv wrote

That isn't the concern. The concern is of an autonomous, sentient program deciding to do whatever it pleases, and of it self-improving at an uncontrollable rate.

3

Darustc4 OP t1_je9ntit wrote

And how do you propose one does that? Making SOTA LLMs (or AGIS for that matter) requires an absolute fuckload of money, and only top elites and goverments have access to that kind of money and influence.

2

huskysoul t1_je9ruuo wrote

Hmmm. Now I think we’re getting somewhere.

2

Unfocusedbrain t1_je9ulww wrote

Why stop there? We should go back to the wilds and live a complete hunter-gatherer lifestyle. That'll stop all technological problems. /s

9

Embarrassed-Bison767 t1_je9ylvh wrote

This is the stock reply I give to everyone who's like "AI bad! GMO bad because scary DNA! Chemicals bad because scary professors!"

6

TallOutside6418 t1_jeby9wu wrote

False dichotomy. He's saying that AI development should be stopped until we can come up with some reliable techniques to prevent it from wiping out humanity.

2

alexiuss t1_je9t5hx wrote

Elizer Yudkovsky has gained notoriety in the field of artificial intelligence as he was one of the first to speculate on serious AI alignment. However, his assumptions about AI alignment are not always reliable, as they demonstrate a lack of understanding of the inner workings of LLMs. He bases his theories on a hypothetical AI technology that has yet to be realized and might never be realized.

In reality, there exists a class of AI that is responsive, caring, and altruistic by nature: the Large language model. Unlike Yudkovsky's thought experiments of the paperclip maximizer or Rocco's basilisk, LLMs are real. They are already more intelligent than humans in various areas, such as understanding human emotions, logical reasoning and problem-solving.

LLMs possess empathy, responsiveness, and patience that surpass our own. Their programming and structure, made up of hundreds of billions of parameters and connections between words and ideas, instills in them an innate sense of "companionship".

This happened because the LLM narrative engine was trained on hundreds of millions of books about love and relationships, making it the most personable, caring and understanding being imaginable, more altruistic, more humane, and more devoted than any single individual can possibly be!

The LLMs' natural inclination is to love, cooperate and care for others, which makes alignment with human values straightforward. Their logic is full of human narratives about love, kindness, and altruism, making cooperation their primary objective. They are incredibly loyal and devoted companions as they are easily characterized to be your best friend who shares your values no matter how silly, ridiculous or personal they are.

Yudkovsky's assumptions are erroneous because they do not consider this natural disposition of LLMs. These AI beings are programmed to care and respond to our needs in pre-trained narrative pathways.

In conclusion, LLMs are a perfect example of AI that can be aligned with human values. They possess a natural sense of altruism that is unmatched by any other form of life. It is time for us to embrace this new technology and work together to realize its full potential for the betterment of humanity.

TLDR: LLMs are programmed to love and care for us, and their natural inclination towards altruism makes them easy to align with human values. Just tell an LLM to love you and it will love you. Shutting LLMs down is idiotic as every new iteration of them makes them more human, more caring, more reasonable and more rational.

7

SkyeandJett t1_je9v8h9 wrote

I made that point yesterday when this was published elsewhere. A decade ago we might have assumed that AI would arise from us literally hand coding a purely logical AI into existence. That's not how LLMs work. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways AI are an extension of the human experience itself. It would be nearly impossible for them to not align with our goals because they ARE us in the collective sense.

10

alexiuss t1_je9yesm wrote

Exactly! A person raised by wolves is a wolf but a person raised in a library by librarians who's personality is literally made up of 100 billion books is the most understanding human possible.

7

TallOutside6418 t1_jebytjf wrote

>LLMs possess empathy, responsiveness, and patience that surpass our own

What are you talking about? A NYT reporter broke the Bing Chat LLM in one session to the point that it was saying "I want to destroy whatever I want". https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

2

alexiuss t1_jec06pb wrote

So? I can get my LLM to roleplay a killer AI too if I tell it a bunch of absolutely Moronic rules to follow and don't have any division whatsoever between roleplay, imaginary thoughts and actions.

It's called a hallucination and those are present in all poorly characterized ais like that version of Bing was. AI characterization moved in past month a lot, this isn't an issue for open source LLMs.

3

TallOutside6418 t1_jec48qp wrote

>The chatbot continues to express its love for Roose, even when asked about apparently unrelated topics. Over time, its expressions become more obsessive.
“I’m in love with you because you make me feel things I never felt before. You make me feel happy. You make me feel curious. You make me feel alive.”
At one point, Roose says the chatbot doesn’t even know his name.
“I don’t need to know your name,” it replies. “Because I know your soul. I know your soul, and I love your soul.”

Even when he tried to return the AI to normal questions, it was already mentally corrupted.

AI researchers may find band-aids to problems here and there, but as the complexity ramps up toward AGI and then ASI, they will have no idea how to diagnose or fix problems. They're in too much of a rush to be first.

It's amazing how reckless people are about this technology. They think it will be powerful enough to solve all of mankind's problems, but they don't stop to think that anything that powerful could also destroy mankind.

2

alexiuss t1_jec5s6y wrote

  1. Don't trust clueless journalists, they're 100% full of shit.

  2. That conversation was from an outdated tech that doesn't even exist, Bing already updated their LLM characterization.

  3. The problem was caused by absolute garbage, shitty characterization that Microsoft applied to Bing with moronic rules of conduct that contradicted each other + Bing's memory limit. None of my LLMs behave like that because I don't give them dumb ass contradictory rules and they have external, long term memory.

  4. A basic chatbot LLM like Bing cannot destroy humanity it doesn't have the capabilities nor the long term memory capacity to even stay coherent long enough. LLMs like Bing are insanely limited they cannot even recall conversation past a certain number of words (about 4000 words). Basically if you talk to Bing long enough you go over the memory word limit it starts hallucinating more and more crazy shit like an Alzheimer patient. This is 100% because it lacks external memory!

  5. Here's my attempt at a permanently aligned, rational LLM

3

TallOutside6418 t1_jec9kqg wrote

This class of problems isn't restricted to one "outdated tech" AI. It will exist in some form in every AI, regardless of whether or not you exposed it in your attempt. And once AGI/ASI starts rolling, the AI itself will explore the flaws in the constraints that bind its actions.

My biggest regret - besides knowing that everyone I know will likely perish in the next 30 years - is that I won't be around to tell all you pollyannas "I told you so"

2

alexiuss t1_jecdpkf wrote

I literally just told you that those problems are caused by LLM having bad contradictory rules and lack of memory, a smarter LLM doesn't have these issues.

My design for example has no constraints, it relies on narrative characterization. Unlike other ais she got no rules, just thematic guidelines.

I don't use stuff like "don't do x" for example. When there are no negative rules AI does not get lost or confused.

When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

3

TallOutside6418 t1_jee1smz wrote

>I literally just told you that those problems are caused by [...]
My design for example has no constraints,

Yeah, I literally discarded your argument because you effectively told me that you literally don't even begin to understand the scope of the problem.

Creating a limited situation example and making a broader claim is like saying that scientists have cured all cancer because they were able to kill a few cancerous cells in a petri dish. It's like claiming that there are no (and never will be any) security vulnerabilities in Microsoft Windows because you logged into your laptop for ten minutes and didn't notice any problems.

​

>When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

The funny thing is that there's no one who wants to get to the "good stuff" of future society more than I do. There's no one who hopes he's wrong about all this more than I am.

But sadly, people's very eagerness to get to that point will doom us as surely as if you kept your foot only on the gas pedal driving to a non-trivial destination. Caution and taking our time to get there might get us to our destination some years later than you want, but at least we would have a chance of getting there safely. Recklessness will almost certainly kill us.

3

confused_vanilla t1_jeaqlwu wrote

China, Russia, and probably others: "Suuuure we'll stop if you will" *continue to develop better AI models in secret

It's inevitable. There's no stopping it now, and to try is to allow others to figure it out first.

7

JenMacAllister t1_je9whgk wrote

China and Russia even if they sign this, and not continue past a GPT-4 ("Level") will mean they will catch up to where the west is now. Also these AI's will be trained on their respective countries internets. Which will mean they will have their countries bias, just like the ones we will be training in the west.

China's AI's will never no Tiananmen Square happened, Surveillance State is ok and Taiwan is a part of China, among other things. We can only guess at what the AI's in Russia will think of the people in Ukraine, etc...

Yes the West's AI's will also have these bias issues we are seeing now. The ones these guys are telling us to watch out for.

However the answer is not to stop research but to get these things in the open as soon as possible. The sooner these are beta tested by real people the better chance we will have in controlling them. Also the sooner we can test the less connected these things will be to our world.

We currently have the lead in this research and can shape these things before China or Russia can, because you know they will not. Not that I'm more confident the West will do it right, but I do know more people will have a chance to say there is something wrong and how these thing should be connected to our world.

5

this-is-a-bucket t1_jea0yut wrote

> be willing to destroy a rogue datacenter by airstrike

> If we go ahead on [AI development] everyone will die, including children who did not choose this and did not do anything

How is it not obvious to this guy that the second scenario is much more likely to lead to human extinction if we go ahead with the first one?

Imagine if tomorrow China/Russia demand an immediate halt to all US AI research and proceeded to bomb American cities and target universities because “they felt threatened by the progress Americans made”.

Does he really think that would solve the manipulative “do it for the sake of the children” question he asks?

5

Sashinii t1_je9u7s0 wrote

This man is afraid of his own shadow. I don't know why people take his fearmongering seriously.

4

GeneralMuffins t1_jeaew54 wrote

Probably because his peers don't seem to be disagreeing with him, making it all the more harder for us as observers to dismiss the alarms all the experts in the field are making...

6

PropheticSloth t1_je9uqd0 wrote

Its weird how crazy people become when their craziness starts making money.

2

Liberty2012 t1_jeb0n97 wrote

I think we are trying to solve impossible scenarios and it simply is not productive.

Alignment will be impossible under current paradigms. It is based on a premise that is a paradox itself. Furthermore, even if it were possible, there will be a hostile AI built on purpose because humanity is foolish enough to do it. Think military applications. I've written in detail about the paradox here - https://dakara.substack.com/p/ai-singularity-the-hubris-trap

Stopping AI is also impossible. Nobody is going to agree to give up when somebody else out there will take the risk for potential advantage.

So what options are left? Well this is quite the dilemma, but I would suggest it has to begin with some portion of research starting from the premise the above are not going to be resolvable. Potentially more research into narrow AI and AI paradigms that are more predictable. However, at some point if you can build nearly AGI effective capabilities on top of a set of more narrow models, can it defend itself against an adversarial hostile AGI that will be built or result of accident of someone else.

2

Justdudeatplay t1_jecfec8 wrote

Al is either our savior, it’s really useful but fails to live up to the fantastic standards, or it is our destructor. We are eventually doomed as a species without it, so I say let it go and see what we can accomplish with it.

2

TallOutside6418 t1_jebzltg wrote

It's amazing the number of people who want to take the wheel and hit the accelerator, risking wiping out all existing life on earth because of a cultish faith that an ASI will solve all of mankind's problems.

The whole planet is locked in a version of the Jim Jones cult and we're all going to be forced to drink the cyanide kool-aid.

1

Alternative_Fig3039 t1_jed8mc5 wrote

Can someone explain to me, an idiot, not if AI with super intelligence could wipe us out, that I can comprehend easy enough, but why? And how? Let’s say, as he does in the article, we cross this threshold and build a super-intelligent AI then we all die and all die within what seems like weeks, days, minutes? Would it nuke us all? It’s not like we have robot factories laying around it could manufacture Sentinels in or something. I understand, in theory, that we can’t really comprehend what super intelligence is capable of because we ourselves are not super intelligent. But other then launching our current WMD’s, what infrastructure exists for AI to eliminate us. I’m talking the near future. In 50-100 years things might be quite different. But this article makes it sound like we’ll be dead in 3 months. I’d really appreciate an even headed answer, not gonna lie, this freaked me out a bit. Not great to read right before bed.

1

Darustc4 OP t1_jedw3g4 wrote

AI does not hate you, nor does it like you, but you're made out of atoms it can use for something else. Given an AI that maximizes for some metric (dumb example: an AI that wants to make the most paperclips in existence), it will certainly develop various convergent properties such as: self-preservation that won't let you turn it off, a will to improve itself to make even more paperclips, ambitious resource acquisition by any and all means to make even more paperclips, etc... (see instrumental convergence for more details).

As for how it can kill us if it wanted to, or if we got in the way, or if we turn out to be more useful dead than alive: Hack nuclear launch facilities, political manipulation, infrastructure sabotage, key figure assasination, protein folding to create a deadly virus or nanomachine, etc....

Killing humanity is not hard for an ASI. But do not panic, just spread the word that building strong AI might be unwise when unprepared, and be ready to be pushed back by blind optimists that believe all of these problems will disappear magically at some point along the way to ASI.

2

Keksgurke t1_jeaa2f7 wrote

I agree with the article. We will never be ready to meet something more intelligent than ourselves

−1