Submitted by No-Performance-8745 t3_125f3x5 in singularity

Recently, the Future of Life Institute (FLI) released an open letter calling on AI labs to "pause for at least 6 months the training of AI systems more powerful than GPT-4." If you are unaware, FLI is a not-for-profit aimed at ensuring positive outcomes for the human race, one subgoal of which is preventing its extinction; something they (and many others) have identified as a potential outcome of transformative artificial intelligence (TAI). There is already a post here for general discussion of the letter, and this post is not meant to duplicate it.

Many have noticed that influential figures who stand to gain an exceptional amount from the development of artificial intelligence (e.g. Emad Mostaque) have signed this letter, and are curious as to why, postulating that perhaps they know more than we do, or that the signatures are fake, etc. If you are asking these questions, I ask you to consider the possibility that these people are genuinely worried about the outcome of TAI, and that even the people who stand to gain the most from it still fear it.

To give credence to this, I point you to the fact that Victoria Krakovna (a research scientist at DeepMind) is a board member of FLI, that OpenAI has acknowledged the existential risks of TAI, and that the field of AI safety exists and is populated by people who fear a negative TAI outcome. This is not to say that we should never build TAI, only that we should build it once we know it will be safe. If all that takes is a few more years without TAI, and it could prevent the extinction of the human race, maybe we should consider it.

I so badly want AGI, just like almost everyone in this community, and I want a safe iteration of it ASAP; but it is also critical to consider things like "maybe the DeepMind research scientist is correct", "maybe OpenAI isn't handling safety responsibly" and "what if this could go wrong?".

If you have read this and are thinking something like "why would an AGI ever want to exterminate humanity?", "why would we build an AGI that would do that?" or something along those lines, then you are asking the right questions! Keep asking them, and get engaged with safety research. There is no reason why safety and capabilities need to be opposed or separate; we should all be working toward the same goal: safe TAI.

I wrote the paragraphs above because of how I interpreted the top comments on the post, and I think that regardless of whether you believe an open letter like this could ever succeed in slowing down a technology as valuable as AI, we should not dismiss it. Most of the people proposing ideas like this open letter love AI and want safe TAI just as much as the next singularitarian; they simply think it should be pursued in a safe, scientific and responsible manner.

12

Comments


Sure_Cicada_4459 t1_je4fln6 wrote

It reeks of sour grapes. Not only are many of the signatures fake, which straight up puts this into at best shady af territory, but there is literally zero workable plan for after the 6 months, hell, even during it. No criteria for what counts as "enough" of a pause, and no one to decide them. And that also ignores that PAUSING DOESN'T WORK: there are all kinds of open source models out there, and the tech is starting to move away from "large = better". It's FOMO + desperate power grab + neurotic unfalsifiable fears. I am not saying x-risk is 0, but drastic action needs commensurate evidence. I get that tail risks are hard to get evidence for in advance, but we have seen so many ridiculous claims of misalignment, like people coaxing ChatGPT or Bing into no-no talk and claiming "it's aggressively misaligned", while at the very same time saying "it's hallucinating and doesn't understand anything about reality". Everything about this signals to me motivated reasoning, fear of obsolescence, and projection of one's own demons onto a completely alien class of mind.

13

CravingNature t1_je4n2x2 wrote

I agree with their concerns but I think creating a Manhattan Project type group for the alignment problem is a better solution. Our government and allies should be funding this and working together to be sure we get this right.

Pausing gives others, without such concerns, 6 months to advance.

4

y53rw t1_je4se3n wrote

Creating a Manhattan Project type group is not an alternative solution to a pause. They are complementary.

2

CravingNature t1_je4t6d6 wrote

Agreed, but I don't think everyone will pause, only the ethical.

1

Iffykindofguy t1_je59lk0 wrote

John Wick, The Continental, Massage therapist

Do you think the real John Wick signed it too?

edit: We've already had people come out saying their names were faked on the letter

4

Sigma_Atheist t1_je40kfl wrote

I'm mostly concerned with complete economic collapse

2

CertainMiddle2382 t1_je433vr wrote

Everything about AI will provide better economic productivity.

Nothing will collapse except useless jobs.

8

No-Performance-8745 OP t1_je42cpu wrote

Economic concerns are another issue posed by TAI, and I believe a pause on capabilities research could be of great benefit here too, giving us time to plan economically for a post-TAI society. I would, however, urge you to consider the existential threat of AGI as a greater potential negative than economic collapse, and as something that could become very real very soon. I also think that many efforts toward preventing existential catastrophes will help us, in both a regulatory and a technical sense, to combat economic crises too.

It is very likely that similar (if not identical) organizations could be used to combat both of these issues, meaning that by setting up measures against existential risks, we are doing the same for economic ones (and vice-versa).

1

1a1b t1_je4jmho wrote

Isn't this a fake thing? No one has admitted to having anything to do with it.

1

IronJackk t1_je5dsh2 wrote

They want to pause it so they have time to use it for their personal benefit.

1

dkajare t1_jeeojcj wrote

So is the website legit at all? How do you know it isn't just more fake news?

1

TorchNine t1_je4dd7q wrote

I think Emad Mostaque's signature is hypocritical. GPT is helping many people, including me. Stability AI's Stable Diffusion is also helping a lot of people, but on the other hand it is causing a lot of problems, such as image copyright violations. To me, his signature shows that he found a way to blame others without reflecting on his own company's problems.

0

ArgeenPom t1_je4qqh3 wrote

It appears hypocritical at first, but Emad's twitter thread about signing makes sense given that he prefers small models, no? The letter only calls for pausing the development of the largest language models, models that can't run on consumer hardware anyway. Copyright is indeed a different issue that we can work to improve on, regardless of model size.

2

TorchNine t1_je5275b wrote

I don't understand how he can't even resolve Stable Diffusion's own copyright issues, yet acts like he's "knocking down the ladder" behind MLLMs' achievements. I'd rather he write an open letter to the many artists who have blamed Stable Diffusion than engage in this hypocrisy.

2

ArgeenPom t1_je57r7q wrote

Understandable, sorry to hear that. I hope that as SD and art generators in general mature, we find a way to satisfy both AI enthusiasts like myself and regular artists. I imagine it's going to be difficult, though I believe in the general good of many folks, e.g. at Stability AI, OpenAI, even various public officials/regulators. Here's hoping.

1