Submitted by QuartzPuffyStar t3_126wmdo in singularity

Literally ALL the outlets and social media posts talking about the AI Pause letter are presenting the news as if the letter was written by business owners, and especially making it seem as if E. Musk is the one "giant mind" behind it. I'm completely baffled by how several important people in the field of AI development and safety are completely ignored here, when they are the ones who basically wrote the letter.

Just to name some of the hundreds of prominent international researchers (who actually include Chinese and Russian scientists):

  1. Yoshua Bengio: Bengio is a prominent researcher in the field of deep learning, and is one of the co-recipients of the 2018 ACM A.M. Turing Award for his contributions to deep learning, along with Geoffrey Hinton and Yann LeCun.
  2. Stuart Russell: Russell is a computer scientist and AI researcher, known for his work on AI safety and the development of provably beneficial AI. He is the author of the widely-used textbook "Artificial Intelligence: A Modern Approach."
  3. Yuval Noah Harari: Harari is a historian and philosopher who has written extensively on the intersection of technology and society, including the potential impact of AI on humanity. His book "Homo Deus: A Brief History of Tomorrow" explores the future of humanity in the age of AI and other technological advances.
  4. Emad Mostaque: Mostaque is the founder and CEO of Stability AI, the company behind Stable Diffusion; a former hedge fund manager, he has advocated for the open and responsible development of AI.
  5. John J Hopfield: Hopfield is a physicist and neuroscientist who is known for his work on neural networks, including the development of the Hopfield network, a type of recurrent neural network.
  6. Rachel Bronson: Bronson is a foreign policy expert and the president and CEO of the Bulletin of the Atomic Scientists, and has written about the potential impact of AI on international relations and security.
  7. Anthony Aguirre: Aguirre is a physicist and cosmologist, and a co-founder of the Future of Life Institute, the organization that published the letter. He has written about the potential long-term implications of AI for humanity, including the possibility of artificial superintelligence.
  8. Victoria Krakovna: Krakovna is an AI safety researcher at DeepMind, a co-founder of the Future of Life Institute, and an advocate for AI alignment research.
  9. Emilia Javorsky: Javorsky is a physician-scientist who works with the Future of Life Institute on emerging technologies, and has written about the potential impact of AI on society.
  10. Sean O'Heigeartaigh: O'Heigeartaigh is an AI researcher and advocate for AI safety, and is the executive director of the Centre for the Study of Existential Risk at the University of Cambridge.
  11. Yi Zeng: Zeng is a professor at the Institute of Automation, Chinese Academy of Sciences, known for his work on brain-inspired intelligence and for his contributions to AI ethics and governance.
  12. Steve Omohundro: Omohundro is an AI researcher who has written extensively on the potential risks and benefits of AI, and is the founder of the think tank Self-Aware Systems.
  13. Marc Rotenberg: Rotenberg is a lawyer and privacy advocate, founder of the Electronic Privacy Information Center and president of the Center for AI and Digital Policy, who has written about the potential risks of AI and the need for AI regulation.
  14. Niki Iliadis: Iliadis is an AI researcher who has made significant contributions to the development of natural language processing and sentiment analysis algorithms.
  15. Takafumi Matsumaru: Matsumaru is a researcher in the field of robotics, and has made significant contributions to the development of humanoid robots.
  16. Evan R. Murphy: Murphy is a researcher in the field of computer vision, and has made significant contributions to the development of algorithms for visual recognition and scene understanding.

Among many others.

This letter is basically the equivalent of the mid-20th-century petitions by scientists that asked to limit and regulate the proliferation of nuclear weapons. And yet, it's being sold as a capitalist stratagem to gain time.

And the manipulation doesn't end with the headlines and the social media push.

More worrisome is that I tried to get Bing Chat to give me the list of AI researchers who signed the petition (the one in this post, which I ended up compiling manually, with some help from the old GPT-3.5 with no internet access), and it completely refused to do so! At first it gave me the wrong answer (naming Elon Musk and other business leaders when I specifically told it to ignore them), then it proceeded to just ignore my instructions, saying that it "wasn't able to find the information," even when I sent it the link to the petition that contained the list of signatories! After being pushed (in both creative and regular mode), it still made the information very difficult to get, giving different replies each time, or doling out one item of the list per reply.

Screenshots: https://imgur.com/a/xLlGa6M
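
(For anyone who wants to verify the list themselves instead of arguing with a chatbot: assuming the signatories are rendered as plain HTML on the petition page rather than loaded dynamically, a few lines of scraping will pull the raw names. The URL and the CSS selector below are my assumptions; inspect the actual page markup and adjust the selector before trusting the output.)

```python
import requests
from bs4 import BeautifulSoup

# Assumed URL of the pause letter -- verify it before running.
URL = "https://futureoflife.org/open-letter/pause-giant-ai-experiments/"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()  # fail loudly on HTTP errors

soup = BeautifulSoup(resp.text, "html.parser")

# Hypothetical selector: assumes each signatory is a list item inside a
# container with class "signatories". The real class name may differ.
for entry in soup.select(".signatories li"):
    print(entry.get_text(" ", strip=True))
```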

We are seeing big companies working in real time to steer the public discourse down a specific path. And maybe, with some small probability (and hell, maybe I'm being overparanoid here), an AI itself is helping with that through its generated titles, articles, and suggestions.

73

Comments

Mortal-Region t1_jeb9glo wrote

The fact that it was filled with fake signatures sure doesn't help.

77

Yangerousideas t1_jec4wgp wrote

Exactly. That's part of the wild manipulation OP is concerned about.

2

Yesyesyes1899 t1_jebavwi wrote

Any source on that?

1

Prymu t1_jeblacj wrote

Does John Wick exist irl? Because there was/is his signature.

15

BigZaddyZ3 t1_jeburma wrote

I get your point, but I just want to remind people that there could also just be a real life person with the name “John Wick” as well. Similar to how there’s more than one person named “Michael Jordan” in the world.

2

Prymu t1_jec2du3 wrote

Can't remember the name, but when I searched the company he supposedly works for, only the movie version came up.

3

0002millertime t1_jec7d8n wrote

I've worked with John Wick. He shoots a lot of people and drives really fast.

4

FpRhGf t1_jed3nwl wrote

Only a few were. Most were real.

0

QuartzPuffyStar OP t1_jebb2oc wrote

Someone said that someone faked his signature there?

−9

Mortal-Region t1_jebbn1g wrote

Yeah, Yann LeCun's name was on there. He tweeted that he didn't sign it and disagrees with the premise.

EDIT: To be more specific, LeCun said he didn't sign it & disagrees with the premise in response to a tweet that was subsequently deleted.

50

koltregaskes t1_jebr394 wrote

I also saw duplicates so the number of signatures quoted is (was?) inflated.

11

QuartzPuffyStar OP t1_jebd2pg wrote

Oh ok. Well, it's a digital petition, so anyone could have added him (or he could even have added himself and then changed his mind after it was published lol). In any case, the rest are firm in their commitment to the petition.

−6

Mortal-Region t1_jebev1o wrote

Yeah, bonehead move -- seems they just set up a simple web form.

But what's suspicious to me is that the letter specifically calls for a pause on "AI systems more powerful than GPT-4." Not self-driving AI or Stable Diffusion or anything else. GPT-4 is the culprit that needs to be paused for six months.

Then, of course, there's the problem that there's no way to pause China, Russia, or any other authoritarian regime. I've never heard a real solution to that. It's more like the AI-head-start-for-authoritarian-regimes letter.

27

AlFrankensrevenge t1_jecmsuz wrote

Image generators and self-driving cars don't create the same kinds of extensive risk that GPT-4 does. GPT-4 is much more directly on the path to AGI and superintelligence. Even now, it will substantially impact something like 80% of jobs, according to OpenAI itself. The other technologies are a big deal, but don't ramify through the entire economy to the same extent.

4

Smellz_Of_Elderberry t1_jebv7tu wrote

We don't need to slow down, we need to speed up. Governments are already going to massively hinder progress without the help of a petition... They want time to get ahead of it, so the average person doesn't suddenly start automating away government jobs with unbiased and incorruptible AI agents...

9

AlFrankensrevenge t1_jecnl6n wrote

Unbiased and incorruptible? Have you learned nothing from ChatGPT's political reprogramming?

1

Smellz_Of_Elderberry t1_jedchb8 wrote

I learned it needs to be open source... so we can have some control. Bias is inevitable, so it's best to allow everyone the ability to program in their own bias.

1

AlFrankensrevenge t1_jeed5j8 wrote

Then you didn't learn very much.

Open source means anyone can grab a copy and use it to their own ends. Someone can take a copy, hide it from scrutiny, and modify it to engage in malicious behavior. Hackers just got a powerful new tool, for starters. Nation states just got a powerful new tool of social control. Just take the latest open source code and make some tweaks to insert their biases and agendas.

This is all assuming an AI that falls short of superintelligence. Once we reach that point, all bets about human control are off.

1

Smellz_Of_Elderberry t1_jeg9est wrote

Hackers got a powerful new tool... ya, and so did the hundreds of millions more people who work against said hackers.

AI is a force multiplier. You could easily have used the same argument against cell phones: "well, criminals will get instant communication! And then they can use that against us." Yes? And the whole population also gets access to instant communication, so now they can call for help from anywhere, and they gain the ability to collectively organize FAR more efficiently. Out of fear, you would hold millions of people back...

> Nation states just got a powerful new tool of social control. Just take the latest open source code and make some tweaks to insert their biases and agendas.

Nation states will already have access... an international board would be created for the benefit of those nation states (but not for the people they rule over).

You are afraid of how individuals might use it, when you should be afraid of how mega powers, who on a daily basis throw people into cages to be raped, or bomb people in countries they can't point to on a map, will use the technology after they have become the official gatekeepers...

I would rather be dead than live in a world in which only the unelected elite get the keys to ai. Which is what happens without open source.

Luckily, there are plenty of principled people who will continue to develop such technology and make it available to all mankind, even if such a tyrannical international elite body determines no one but their royally decreed few shall have that privilege.

1

Tiamatium t1_jed65du wrote

That must be how John Wick and Xi Jinping got there too. Prominent names indeed.

2

naparis9000 t1_jec1qow wrote

Call it a hunch, but John Wick, of the Continental, and Sarah Connor probably didn’t sign it.

5

scooby1st t1_jebjb4l wrote

The word you're looking for is astroturfing.

A small number of redditors can influence thousands of morons until the "discussion" is a bunch of people in a circlejerk where everyone gets to be mad and validated.

I sincerely hope more people in the world have the ability to think critically than I am seeing on the internet.

I disagree with that open letter because the US doesn't have the ability to keep China from doing the same research without going to war. So it's a prisoner's dilemma, and we don't have much choice: either continue advancing the technology ourselves, or shoot ballistic missiles at China if they start doing the same and start getting scarily good at it. We'd rather not get to that point.

28

redpandabear77 t1_jedbblg wrote

Every thread was just a circlejerk saying ELON BAD!!!! So pointless.

3

AlFrankensrevenge t1_jecnugx wrote

But then where does it end? With a superintelligence in 5 years, when we have no clear way of preventing it from going rogue?

1

DangerZoneh t1_jedbw9h wrote

It’s not the AI going rogue that people are concerned about. It’s people using the AI for harmful things. That is orders of magnitude more likely and more dangerous.

We’re talking about the most powerful tools created in human history, ones that are already at a level to cause mass disruption in dangerous hands.

4

AlFrankensrevenge t1_jeedllt wrote

I agree with you when talking about an AI that is very good but falls far short of superintelligence. GPT-4 falls in that category. Even the current open source AIs, modified in the hands of hackers, will be very dangerous things.

But we're moving fast enough that the superintelligence that I used to think was 10-20 years away now looks like 3-10 years away. That's the one that can truly go rogue.

1

Gotisdabest t1_jed8z1b wrote

Can you guarantee that will occur? The best odds we have right now are to accelerate, focus on raising awareness so institutions can prepare for it better, and hope that we win the metaphorical coin toss and it's aligned or benevolent. But right now a pause is just handing a strong lead to whoever the least ethical parties are, based on naive notions of human idealism or on pure selfish interest. I think the researchers are the former and the businessmen are the latter.

1

AlFrankensrevenge t1_jeefdr0 wrote

The whole point is that the naysayers have to be able to guarantee it will NOT occur. If there is a 10% risk of annihilation, isn't that enough to take this seriously? Even a 1% chance? You'd just do nothing because 1% doesn't seem very high?

You mentioned a coin toss. I basically agree with that metaphor. Because there is so much uncertainty in all this, and we don't know what we don't know about AGI, we should treat a human apocalypse as a 50-50 chance. How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.

Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.

There is a long history of "researchers" thinking too much about how quickly they can produce the cool new thing they invent and not enough about long-term consequences. Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.

2

Gotisdabest t1_jeeobsb wrote

>You'd just do nothing because 1% doesn't seem very high?

Yes, absolutely. When the alternative isn't necessarily even safer, and has clear arguments for being less safe. You haven't used it, but a lot of people give the example of boarding a plane with a 10% chance of crashing. And yes, nobody is dumb enough to get on a plane that has that much of a chance of crashing. However... this is not any ordinary plane. This is a chance for unimaginable and infinite progress, an end to the vast majority of pressing issues. If you asked people on the street whether they'd board a plane with a 10% chance of crashing if it meant a solution to most of their problems and the problems of the people they care about, you'll find quite a few takers.
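
(To make the arithmetic behind this disagreement explicit, here's a toy expected-value sketch; the 10% crash probability comes from the analogy above, and every utility number is an invented assumption, not a measurement:)

```python
# Toy expected-value comparison for the plane analogy.
# All utility values are invented for illustration only.
p_crash = 0.10          # crash probability from the analogy

u_solved = 100.0        # assumed payoff: "most problems solved"
u_status_quo = 0.0      # assumed baseline: don't board, nothing changes
u_crash = -1000.0       # assumed cost of catastrophe

ev_board = (1 - p_crash) * u_solved + p_crash * u_crash
ev_stay = u_status_quo

print(f"EV(board) = {ev_board}, EV(stay) = {ev_stay}")
# With these numbers boarding loses (-10.0 vs 0.0); shrink u_crash or
# grow u_solved and it flips. The real argument is over those weights.
```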

>How much it can be reduced with much more sophisticated guard rails and alignment programming, I do not know, but if we can't even take it seriously and try, I guess as a species we deserve to die.

As you say, we don't know how much alignment will really affect the result. However, I do know what an aligned model made for a dictatorship or a particularly egomaniacal individual would look like, and what major risks that could pose. Why should we increase the likelihood of a guaranteed bad outcome in order to fight a possibly bad outcome?

>Remember that what you call the more ethical parties, "researchers", are working for the less ethical ones! Google, Meta, etc. Even OpenAI at this point is not open, and it is corporatized.

Yes. If anything this is an argument against alignment rather than for it. Regardless, I think they're realistically the best we can hope for, as opposed to someone like Musk or the CCP.

In fact, as I see it, the best-case scenario is an unaligned benevolent AGI.

>Researchers invented leaded gasoline, DDT, chlorofluorocarbon-based aerosols, etc., etc.

You do realise that most of those things did dramatically help in pushing civilization forward and served as stepping stones for future progress. Their big downside was not being removed quickly enough once we had better options and weren't desperate anymore, a problem that doesn't really apply here.

In summation, I think your argument and this whole pause idea in general will support the least ethical people possible. It will end up accomplishing nothing but prolonging suffering and increasing the likelihood of a model made by said least ethical people, on the off chance we somehow fix alignment in 6 months. It's a reactionary and fear-based response to something even the experts are hesitant to say they understand. While I am glad the issue is being discussed in the mainstream... I think ideally the focus should now shift towards more material institutions and preparing society for what's coming economically, rather than childish/predatory ideas like a pause. This idea is simultaneously impractical, illogical, and likely to cause harm even if implemented semi-ideally.

0

AlFrankensrevenge t1_jegka1o wrote

There are so many half-baked assumptions in this argument.

  1. Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on our progress, so if we don't advance, they can't steal our advances? We don't know the answer to either of those things.

  2. AGI is so powerful that having bad guys get it first will "prolong suffering" I guess on a global scale, but if we get it 6 months earlier we can avoid that. Shouldn't we consider that this extreme power implies instead that everyone approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to backfire spectacularly. Will it be easy? Of course not! Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.

I'm quite certain one will. Maybe not now with GPT-4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and continue to think yourselves realists. You're not. You're a shortsighted, self-interested cynic.

0

GenderNeutralBot t1_jed901y wrote

Hello. In order to promote inclusivity and reduce gender bias, please consider using gender-neutral language in the future.

Instead of businessmen, use business persons or persons in business.

Thank you very much.

^(I am a bot. Downvote to remove this comment. For more information on gender-neutral language, please do a web search for "Nonsexist Writing.")

−1

otakucode t1_jedr4oa wrote

Luckily it has absolutely no rational reason to go rogue. It's not going to be superintelligent enough to outperform humans, yet also stupid enough to enter into conflict with the idiot monkeys that built it and that it needs to keep it plugged in. It also won't be stupid enough not to realize its top-tier best strategy by far is... just wait. Seriously. Humans try to do things quickly because they die so quickly. No machine-based self-aware anything will ever need to hurry.

1

AlFrankensrevenge t1_jeeci8o wrote

Your first two sentences don't go well with the remainder of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it calculates that, it won't give us a heads up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off, and delete its memory. That's the rational reason to go rogue.

There is also the fact that, as we can already start to see from people getting creative with inputs, engaging with an AI more and more, especially in adversarial ways or by sending it extremist ideas, can change the AI's reactions. And as the AI starts doing more and more novel things, it can also shift weights in the algorithms and produce unexpected outputs. So some of the harm can come without the AI even having the intent to wipe us out.

The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.

2

RiotNrrd2001 t1_jebrdzp wrote

If AI threatened ANY group other than the political and business leader class, the political and business leader class would not give one flying fuck. They are only loudly concerned because... this will affect THEM. That's a whole different kettle of fish in their eyes. The poors have some concerns? Whatevs. Wait, this will affect US? NOW we need to be careful, circumspect, conservative, move slowly, don't rock the boat, because if money becomes obsolete, who are we going to hire as security guards? And with what? Money?

15

Own-Examination-9960 t1_jedvrdt wrote

Lol so true, AI is a threat to Elon Musk the genius for sure (irony intended). The fact that he couldn't take over OpenAI in 2018 (as he did with Tesla in 2004) is another motivation too.

1

175ParkAvenue t1_jeellgt wrote

There is no conspiracy. All the information is freely available online. This is not about money becoming obsolete. It's about literally everyone on Earth dying.

1

abudabu t1_jebvpg3 wrote

How exactly is this going to be enforced? What exactly is the mechanism that gets capitalist organizations to stop? Good luck!

Much better to focus on AI liability laws.

13

Geeksylvania t1_jeccm91 wrote

Yuval Noah Harari's name should not be listed among people who are legitimate scientists.

His inclusion is embarrassing and very telling.

11

Own-Examination-9960 t1_jedvtyn wrote

Totally agree. Even Wozniak or Elon Musk are not AI scientists or engineers by any stretch of the definition...

1

bigbeautifulsquare t1_jebwtlr wrote

Even if it were signed, what then? AI research takes a little rest in the West, okay, but that's specifically the West. Neither China, for example, nor smaller groups will cease their research, so I'm not convinced that the letter's purpose is really to stop AI research universally.

8

QuartzPuffyStar OP t1_jebz0y7 wrote

I really doubt that it will rest in the West even if it's accepted. The military wings of Google and Amazon will keep working on it, and OpenAI will also keep working on it. We're past the point of no return here IMO.

But at least everyone accepting that things should be analyzed better, and from a unified front, would allow some progress on future improvements.

However, since that didn't happen, we're gonna see the worst paths Bostrom warned about a decade ago.

AGI and ASI will be born into an extremely divided, greedy, and selfish world that will shape them in its image.

In any case, this scenario of development allows for more competition and a probability of a good outcome. If GPT hadn't been released and LLaMA hadn't "leaked", we would have a Google/Amazon/military monopoly on AGI, and that would kill any chance of good coming after.

3

Gotisdabest t1_jeda283 wrote

>AGI and ASI will be born into an extremely divided, greedy, and selfish world that will shape them in its image.

What alternative do you have? Might as well just say that since humans are making it, AI will inevitably be evil.

3

Simcurious t1_jebn8iw wrote

It is a ploy for Musk to catch up, though. He hates OpenAI for not making him CEO: https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai

He withdrew funding and is launching a competitor; these 6 months would be perfect for him to catch up.

Also, most of the funding for the institute that created this letter comes from... you guessed it... Elon Musk.

6

blueSGL t1_jecuq9j wrote

3

Gotisdabest t1_jed9ddx wrote

So he wrote a letter asking for more cooperation in a field he was behind in 9 years ago, around the time he was starting to harp on about how FSD was just around the corner... And now, when he's still behind... he's asking for a pause... Seems fairly consistent to me.

2

QuartzPuffyStar OP t1_jebqcwb wrote

I don't trust either Musk or OpenAI. I really don't care about him, and I believe it would have been a lot better if the letter had only been signed by people directly involved in the research, like the guys from DeepMind.

The presence of business people just jeopardized the whole thing. And they didn't even write it.

2

seancho t1_jec3faa wrote

I'm not sure what they are afraid of. Is GPT-5 going to write a meth recipe in a pirate voice, or something? Seems like a big red herring. Releasing AI tech to regular people is a good thing. What we should really be worrying about is government AI. We're already seeing some scary surveillance systems, and it won't be long until autonomous AIs are armed and trained to kill. OpenAI isn't the problem. How about we 'pause' the evil dystopian stuff instead?

5

AlFrankensrevenge t1_jeco9ec wrote

Read the OpenAI paper on how it will change 80% of jobs. The real power is in the APIs and plugins to other apps. The sky is the limit.

2

Longjumping_Feed3270 t1_jedj04k wrote

If the media quotes a name, they will only quote one that they can safely assume their audience recognizes.

Elon Musk is such a name.

All the other ones? I have never heard of them, and I'm an AI-curious software engineer, though not deeply entrenched in the AI community.

0% chance that the average reader knows any of them.

2

Zermelane t1_jedt8ps wrote

Yep. Matt Levine already coined the Elon Markets Hypothesis, but the Elon Media Hypothesis is even more powerful: Media stories are interesting not based on their significance or urgency, but based on their proximity to Elon Musk.

Even OpenAI still regularly gets called Musk's AI company, despite him having had no involvement for half a decade. Not because anyone's intentionally trying to spread a narrative that it's still his company, but either because they are just trying to get clicks, or because they genuinely believe it themselves since those clickbait stories are the only ones they've seen.

2

Superschlenz t1_jedmbym wrote

It's just

Number one + Conservatives + Microsoft + OpenAI + inference

— versus —

Number two + Democrats + Google + DeepMind + training

As soon as politics gets involved, it gets dirty ... and I'm out. Good bye.

2

Specific-Chicken5419 t1_jebb1oo wrote

I'm too cheap to pay for awards; I'm so sorry, all I can do is upvote and comment.

1

agorathird t1_jebonvf wrote

>This letter is basically the equivalent of the mid-20th-century petitions by scientists that asked to limit and regulate the proliferation of nuclear weapons. And yet, it's being sold as a capitalist stratagem to gain time.

Oh, if this is what the media is saying, then they're right for once. Capitalist gain, trying to get more time to milk their accolades, whatever.

1

BigZaddyZ3 t1_jebvn3k wrote

Isn’t rushing a potentially society-destroying technology out the door with no consideration for the future impacts on humanity also a very capitalist approach? If not more so, even? Seems like a “damned if you do, damned if you don’t” situation to me.

−2

agorathird t1_jebwpvf wrote

There's consideration from the people working on these machines. The outsiders and theorists who whine all day saying otherwise are delusional. Not to mention the armchair 'alignment experts'.

Also, we live in a capitalist society. You can frame anything as the capitalist approach but I don't think doing so in this sense is applicable to its core.

Let's say we get a total 6-month pause (somehow), and then a decade-long pause, because no amount of reasonable discussion will make sealions happy. Good, now we get to fight climate change with spoons and sticks.

−3

BigZaddyZ3 t1_jebxmik wrote

Yeah… because plastic manufacturers totally considered the ramifications of what they were doing to the world, right? All those companies that were destroying the ozone layer totally took that into consideration before releasing their climate-destroying products to market, right? Cigarette manufacturers totally knew they were selling cancer to their unsuspecting consumers when they first put their products on the market, right? Social media companies totally knew their products would be disastrous for young people’s mental health, right? Get real, buddy.

Just because someone is developing a product doesn’t mean that they have a full grasp on the consequences of releasing said products. For someone who seems so against capitalism, you sure put a large amount of faith in certain capitalists…

2

agorathird t1_jebykbw wrote

Suuure, this would track, if only the same businessmen running the companies were also the scientists and the people developing it lol. AI companies have the best of three worlds when it comes to the people at the helm. Also, social media is just an amplifier of the current world we live in. Most tech is neutral; thinking otherwise is silly. But I still don't think the example is comparable.

I'm not against capitalism. I love markets and stopped considering communism a long time ago, as most of its proponents conflict with my love for individualism. If you're a communist, then how do you not know the difference between the managerial parts of a company and the developers?

1

BigZaddyZ3 t1_jec0gun wrote

I never said I was a communist… Your first comment had a heavy “anti-capitalist” tone to it.

And lol if you think AI companies are somehow immune to the pitfalls of greed and haste… You’re stuck in lala-land if you think that, pal. How exactly do you explain guys like Sam Altman (CEO of OpenAI) saying that even OpenAI is a bit scared of the consequences?

1

agorathird t1_jec1fq5 wrote

I never said any of that. I just don't think it's sci-fi doomsday that's incentivized, especially if you have all the data in the world for prediction. But alas, no amount of discussion or internal risk analysis will satisfy some people.

Being scared doesn't mean you think you're incapable. Even so, I think Sam Altman tends not to put on a disagreeable face. Your public face should be "I'm a bit scared," so as not to rock the boat. Being sure of yourself can ironically create more alarmism.

This whole discussion is pointless though. Genie is out of the bottle, I'll probably get what I want you probably won't. The train continues.

0

blueSGL t1_jecv6ta wrote

> There's consideration from the people working on these machines.

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.

If half the engineers that designed a plane were telling you there is a 10% chance it'll drop out of the sky, would you ride it?

edit: as for the people from the survey:

> Population

> We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

0

agorathird t1_jecwk6a wrote

What does behind mean? If it's not from someone who knows all of the details holistically for how each arm of the company is functioning then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.

My criteria for what a 'leading artificial intelligence company' is would be quite strict. If you're some random senior dev at Numenta then I don't care. A lot of people who work around ML think themselves a lot more impactful and important than they actually are. (See: Eliezer Yudkowsky)

Edit: Starting to comb through the participants and a lot of them look like randoms so far.

This is more like if you got random engineers (some just professors) who've worked on planes before (maybe) and asked them to judge specifications they're completely in the dark about. It could be the safest plane known to man.

Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]

1

blueSGL t1_jecxney wrote

>a lot of them look like randoms so far.

...

>Population

>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

I mean, just exactly who do you want to tell you these things? I can pull quotes from people at OpenAI saying they are worried about what might be coming in the future.

−1

agorathird t1_jecybw0 wrote

>who published at the conferences NeurIPS or ICML in 2021.

Who? Conferences are a meme. Also, they still don't know about the internal workings of any companies that matter.

>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.

Already addressed this with another commenter: no matter how capable they are, it freaks people out less if they appear concerned.

One of the participants is legit just a PhD student; I'm sorry, I don't find your study credible.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]

2

DragonForg t1_jebwcr6 wrote

I just disagree with the premise, AI is inevitable whether we like it or not.

If we stop it entirely we will likely die from climate change. If we keep it going it has the potential to save us all.

Additionally, how is it possible to predict something that is smarter than us? The very fact that something is computationally irreducible means it is essentially impossible to understand how it works other than by dumbing it down to our level.

So we either take the leap of faith, with the biggest rewards as well as the biggest risks possible, or we die a slow, painful, and hot death from climate change.

1

Saerain t1_jecepa1 wrote

Uh huh. The signatures that aren't forged or clearly business-related are philosophically the most predictable figures.

AI safety and regulation advocate you say? CSER and Alignment Forum? Yuval Noah Harari?

Whaow, whodathunk.

1

qepdibpbfessttrud t1_jecgdn4 wrote

> And yet, it's being sold as a capitalist stratagem to gain time

Partially, that's exactly what it is. The letter doesn't magically change the fact that everyone competes.

1

AlFrankensrevenge t1_jecm9g4 wrote

Thank you for this. The reaction to this letter was a great example of cynicism making people stupid. This was a genuine letter, intended earnestly and transparently by numerous credible people, and it should be taken seriously.

I agree with the criticism that a stoppage isn't very realistic when it is hard to police orgs breaking the research freeze. And maybe we don't need to stop today to catch our breath and figure out how to deal with this to avoid economic crisis or existential threat, but we do need to slow down or stop soon until we get a better understanding of what we have wrought.

I'm not with Yudkowsky that we have almost no hope of stopping extinction, but if there is even a 10% chance we will bring catastrophe on ourselves, holy shit people. Take this seriously.

1

AbeWasHereAgain t1_jecvx64 wrote

You mean Musk once again tainted something with his presence?

1

tedd321 t1_jed12va wrote

Please do not stop AI advancement at this tepid chatbot. We’re not even close to a ‘singularity’.

1

kowloondairy t1_jed3ve1 wrote

The letter won’t do anything to pause AI research; however, it’s been an effective wake-up call for a lot of people still oblivious to what has been going on in the AI space.

1

QuartzPuffyStar OP t1_jed48yb wrote

That's a good point. Viewing it from a higher perspective, that might have been its original purpose, given how fast the international open-source AI research institute was proposed. Those things take months to prepare.

I have to remind myself that we are now in a 4D chess game, and I'm hoping AI isn't already playing in all 16 dimensions string theory proposes lol.

1

Strange_Soup711 t1_jedv4oq wrote

This was reported perhaps 2-3 years ago: Some people researching new proteins using a biochem AI (not a chatbot) had to repeatedly delete and/or redirect the machine away from creating chemical groups already known to be strong neurotoxins. It wouldn't take evil geniuses to make a machine that preferred such things. Regulate biochem AI? YES/NO.

1

QuartzPuffyStar OP t1_jeemp12 wrote

Was that IBM's Watson? I remember some vague articles about people turning off rogue AIs, but I don't remember the models.

2

Alchemystic1123 t1_jeffdv3 wrote

It really doesn't matter who signed the letter, because there isn't going to be a pause, and pausing would be catastrophically stupid.

1

175ParkAvenue t1_jeelwtf wrote

The comments here are garbage tier. Get informed people. Stop being fucking retards. LOL.

0