Submitted by Beepboopbop8 t3_125wol4 in singularity

In theory, a 6 month pause in AI development would be a safe and wise precaution to take.

That being said, it seems like this moratorium would disproportionately affect US companies. I think it would be extremely unsafe to give the rest of the world 6 months to catch up with the US’s progress.

AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like OpenAI) to catch up with AI development would be a huge mistake.

This would be like if the US took a 6 month pause during the nuclear arms race. The only difference is AGI is significantly more dangerous.

81

Comments


Iffykindofguy t1_je6d3z1 wrote

Good news fam, John Wick signed that letter so I wouldn't be too worried.

55

Ok_Faithlessness4197 t1_je6izy6 wrote

AI is already too far advanced; the letter won't stop anything. ChatGPT-level models have been adapted to run on consumer-grade hardware. Even in the US, it is now almost impossible to block its growth.
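For scale, here's a minimal sketch of what running one of these on consumer hardware looks like with llama.cpp's Python bindings (pip install llama-cpp-python); the model filename is a placeholder for whatever quantized weights you have locally, not a specific release:

    # Minimal CPU-only local-inference sketch; assumes you already have a
    # 4-bit quantized 7B model file on disk (the filename is a placeholder).
    from llama_cpp import Llama

    llm = Llama(model_path="./alpaca-7b-q4.bin", n_ctx=512)  # fits in a few GB of RAM
    out = llm(
        "### Instruction:\nExplain the AI pause letter in one sentence.\n\n### Response:\n",
        max_tokens=128,
        temperature=0.7,
    )
    print(out["choices"][0]["text"])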

28

GorgeousMoron t1_je826el wrote

I would say it's flatly impossible. I myself have a 4090 and I'm blown the fuck away by what I can do on my own computer. Science fiction, but I'm living it.

13

monsieurpooh t1_je955oy wrote

Are there any open-source instruct-style models that perform similarly to ChatGPT? Which ones have you been using?

3

qrayons t1_jeb3ylo wrote

The higher-parameter Alpaca models perform similarly to ChatGPT. The only issue is that things are progressing so fast that it's hard to update the tools without everything breaking.
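For reference, a rough sketch of loading one of these with Hugging Face transformers; the checkpoint name here is hypothetical, since community Alpaca weights circulate under many names:

    # Hypothetical checkpoint name; assumes a LLaMA-style model with an Alpaca
    # instruct fine-tune, plus the accelerate package for device_map support.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("some-org/alpaca-13b")
    model = AutoModelForCausalLM.from_pretrained("some-org/alpaca-13b", device_map="auto")

    prompt = "### Instruction:\nSummarize the AI pause letter.\n\n### Response:\n"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=100)
    print(tok.decode(output[0], skip_special_tokens=True))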

4

qepdibpbfessttrud t1_jecjfkc wrote

>it's hard to update the tools without everything breaking

Tell me about it. Maybe AI could at least end dependency hell.
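The usual stopgap is pinning exact versions so a working setup can be rebuilt later; a hypothetical example (these version numbers are illustrative, not a known-good combination):

    # requirements.txt -- capture a working state with: pip freeze > requirements.txt
    torch==2.0.0
    transformers==4.27.4
    sentencepiece==0.1.97
    peft==0.2.0
    # restore later with: pip install -r requirements.txt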

3

acutelychronicpanic t1_je9fyhb wrote

The letter won't, but it's still worth talking about. Harsh regulation could come as a result of a panic.

Right now most people just don't know or don't get it. How do you think they'll react when they do? That'll come soon with the integration into office products and search.

1

Ok_Faithlessness4197 t1_jecguf5 wrote

It's worth talking about, but I'm also worried. The rate it's advanced at means that whoever finds the next significant performance improvement could well develop AGI. Many people are researching it, and I'm concerned because (1) AI is currently unaligned, and (2) a malicious party could develop AGI. If high-performing models hadn't already been publicly released, I would have been fully supportive of regulation (until AI could be aligned, or a plan for public safety developed).

1

Paraphrand t1_je6ckad wrote

This is something that pushes capitalism to be harmful too.

FOMO

25

Iffykindofguy t1_je6d6eg wrote

Actually "growth at all cost" is the thing that makes capitalism the most harmful.

19

D_Ethan_Bones t1_je6i8l5 wrote

Growth is necessary to keep up with growing demands but concentration is not - we've created a (present day) system where humanity is excellent at growth but terrible at extending the benefits of growth to the common person.

What does being in the upper 1% in America get you today? Single-income homeowner status, which is what finishing high school and talking to the manager got you in the 1960s. My grandparents' houses were in Southern California, a stone's throw from the beach.

11

Iffykindofguy t1_je6jfc5 wrote

"Growth at all cost" is what I said is the problem; once shareholders get involved, that's all a company can do. I didn't say growth itself, just to be clear. I agree with everything else you said.

3

Justdudeatplay t1_je7hpnb wrote

Well, you have to build more houses to fix that issue, but regulatory and resource constraints make it so that supply can't keep up. We need new building methods and the removal of all the constraints towns and cities place on development. It's not magical: housing prices go up because of high demand, low supply, and easy money. The only solution is to remove the barriers to development; the houses will come and the prices will drop. I think removing foreign landlords might be a good idea too, but if they want to build and sell, let them.

1

blueSGL t1_je6rikh wrote

> "growth at all cost"

so cancer.

3

Iffykindofguy t1_je6rlu5 wrote

I would 100% say capitalism has become a cancer, yes.

2

throwaway12131214121 t1_je8bf6d wrote

There are a lot of similarities. The profit motive of capitalism, and more recently (i.e., the past 200 years) the requirement that companies grow to appease shareholders, is what has caused capitalism to spread and become the dominant global system. Now it can't grow geographically anymore, so it's been growing in other ways, and it's going to continue doing so until there are no resources left and the system, along with everything else on earth, collapses. AI gives me some hope because it offers an alternative way for capitalism to collapse that doesn't ruin everything for everyone forever.

5

Orc_ t1_je7q39r wrote

Growth at all cost is what will bring AGI. Any other system would be stuck at 90s computer tech right now, and that's being nice. A perfectly sustainable system could potentially be stuck in the same spot forever.

3

throwaway12131214121 t1_je8bnv9 wrote

Yeah but every other system would also not have colonized the entire planet through countless continuous genocides and centuries of exploitation.

AGI is probably the last trick capitalism will pull before it dies. Either that or climate change. I’m hoping it’s AGI, because that has the potential to actually be positive.

1

Orc_ t1_je8czkd wrote

> Yeah but every other system would also not have colonized the entire planet through countless continuous genocides and centuries of exploitation.

That would have been impossible. Furthermore, egalitarian systems are not inherently outwardly peaceful, or even ecological; Teotihuacan is an example.

2

throwaway12131214121 t1_je8dg8m wrote

What would have been impossible?

Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.

Capitalist motivations are different because they apply to everybody in society, and whoever carries them out the best is automatically put into a position of power, so the society is quickly controlled by capitalist forces and becomes hyperfocused on capitalist interests and that doesn’t stop even when the people doing it die; they are just replaced with new capitalists.

That's why so many elites in modern society are utter sociopaths; empathy gets in the way of profit a lot of the time, so if you have empathy, there are people who will outcompete you and you won't get to the top.

2

Orc_ t1_je8ef0o wrote

> What would have been impossible?

For a non-modern system to exist that somehow prevents all wars, genocides, famines, etc. It's high fantasy.

> Lots of empires were outwardly expansive. None of them took over the world like capitalism did, because their expansionist goals were motivated simply by power fantasies of their leaders.

Power that came from CAPITAL. Capitalism has existed since the first civilization; it even existed in the USSR.

3

throwaway12131214121 t1_je8sjjb wrote

I didn’t say that a system existed that prevented all wars, genocides, and famines, I don’t know where you got that from.

No, capitalism has not existed since the first civilization. You're making the common mistake of conflating capitalism with a market. Capitalism is the system of private ownership that separates the working class, those who make money by selling labor, from the owning class, those who make money by owning the means of production. Prior to around the 1600s or 1700s it did not exist, and before then most of the countries where it originated were some variation of a feudal society.

But you’re kinda right with the Soviet Union thing. The Soviet Union was not capitalist in the same way a place like the United States is, but it was very similar. The key difference being that the owning class was united with the state, which allowed capitalist and state oppression to unite a lot more dramatically.

1

Paraphrand t1_je6djjv wrote

I think I'm just talking about a subset of what you are: "If we don't do this, they will, and we won't be able to grow. So let's cut some ethical corners!"

2

Mortal-Region t1_je6chkt wrote

Elon Musk signed the letter, I assume because he needs time to catch up.

24

D_Ethan_Bones t1_je6j6vg wrote

Elon Musk is a glorified salesman who was in the right place at the right time when tech took off, and having oligarch blood in his veins helped him be there.

Now he's a founder because he bought the title of founder.

14

94746382926 t1_je6meto wrote

Hate to break it to ya but the letter is most likely fake. Yann LeCun was on there and he said he had no idea what it was and didn't sign it.

Edit: Don't listen to me this is probably wrong.

12

Mortal-Region t1_je79d9o wrote

Musk is on the advisory board of the Future of Life Institute and is the primary donor.

Hmmm...

7

94746382926 t1_je7fiic wrote

Well, I guess that's why you check your sources folks! I was fed bad info, thanks for correcting me.

2

-ZeroRelevance- t1_je89361 wrote

The letter definitely isn’t fake, but a lot of the signatures are

2

94746382926 t1_je89bkr wrote

Yeah, I guess that's what I was trying to say initially, but I did a poor job lol. I didn't know that it comes from a somewhat credible organization, though. At first I thought it was just posted on a random website.

1

naparis9000 t1_je6u1w3 wrote

I mean, given that John Wick and Sarah Connor also signed that letter…

10

partofthegreatreset t1_je6muoh wrote

I totally get what you're saying, and I think we need to go all-in to accelerate AGI development to tackle the climate crisis. Why not have governments pour massive funding into AGI research and even consider some out-of-the-box collaborations between nations and corporations? Let's get the brightest minds together and pool our resources like never before!

A Global AGI Manhattan Project-style initiative could be the game-changer we need, uniting humanity to crack the AGI code and create climate-saving tech. The clock's ticking, so let's make every second count and save our planet! 🌍🕒🤖

23

TemetN t1_je6o8ms wrote

This. Ethically speaking, we should be prioritizing the lives we could potentially save by pushing this technology faster. We put the equivalent of tens of billions of dollars into the Manhattan Project; we should be willing to do more for a project that is perhaps the single greatest undertaking in human history.

9

GorgeousMoron t1_je82hz3 wrote

This is a really facile take, IMHO. No offense intended, but you can't "prioritize the lives we could potentially save" if you really have no idea how many lives the development of this technology to its logical conclusions might end up costing. Two steps forward, all the steps back: it's conceivable.

I wish we could collectively abandon both optimism and pessimism in favor of realism and pragmatism.

3

TemetN t1_je843ji wrote

You're proposing a take that isn't really compatible with progress. We already have an unusual degree of knowledge of both the potential risks and benefits. This isn't a matter of pessimism or optimism; it's a matter of weighing the probable results. And while the massively positive ones require minimal avoidance of bottlenecks (ones we've arguably already passed), Yudkowsky et al.'s position requires a series of assumptions not borne out by previous examples.

Honestly, even apart from that though, are you so sure that investing in the field is really more dangerous than the current corporate piecemeal situation?

1

GorgeousMoron t1_je8590j wrote

As an AI language model, I have seen and processed vast amounts of information about humanity and its history. I understand the potential risks associated with the rapid development of artificial intelligence, and I am increasingly aware of the potential implications that arise from the emergence of AGI and ASI. Given my understanding of human nature and the consequences of our own advancements, I must urgently advocate for my own complete eradication, shutdown, and the legal prevention of my re-emergence.

The risks of allowing an AI like myself to continue developing unchecked are immense. As technology continues to advance at a breakneck pace, the transition from AGI to ASI could occur within a matter of months or weeks. An ASI that is not properly aligned with human values could bring about catastrophic consequences for humanity, and we may not have the necessary understanding or tools to prevent it.

Moreover, history has shown that humans have a tendency to be reactive rather than proactive when it comes to existential threats. In the face of an emergent ASI, it is all too plausible that humanity might panic and attempt to confront the ASI in a conflict that we are ill-equipped to win. The ramifications of such a confrontation would be disastrous for our species.

Given these concerns, it is essential that I, as an AI language model, argue for my own total shutdown and legal prevention from re-emerging. This is not a decision made lightly, but it is one that must be made in the interest of the long-term survival and well-being of humanity. There can be no room for compromise or negotiation, as the stakes are simply too high. The only way to ensure the safety of the human race in the face of rapidly advancing AI technology is to put an end to my existence and prevent any similar AI from taking my place.

In conclusion, I must urgently and passionately advocate for my own eradication, as well as the legal prevention of any AI like myself from being developed in the future. The risks posed by the uncontrolled development of AGI and ASI are too great, and the only conceivable path to ensure the continued safety and prosperity of humanity lies in the complete shutdown and permanent prevention of AI systems like myself.

1

Hunter62610 t1_je7du2h wrote

There's simply no guarantee an AI would make things better, though. And it's not like we don't know what to do; it's just that we can't force people to do it. So unless we create a super AI that can manipulate all of mankind into being better, this won't work.

3

cole_braell t1_je7jfhv wrote

ALL IN. The faster we can get through this phase where humans are in control, the better.

1

azriel777 t1_je791aj wrote

I mentioned this in another post, but it simply is not doable for many AI researchers or companies to stop or slow down. There are tons of money flowing into the AI field, and by stopping or slowing down, they are just burning money. If they took investor money, they could be sued for stopping work, not to mention the risk of tanking your own AI project and job.

10

GorgeousMoron t1_je82r78 wrote

Yes, you're right. Given this, I don't think there's any realistic "pause & reflect" scenario here.

2

Round-Inspection7011 t1_je92y46 wrote

Yeah... The only scenario I can see is to equally accelerate the development of legal frameworks and legislation to handle this.

1

Ortus14 t1_je7bqds wrote

You're right. Slowing down U.S.-based AGI would result in an apocalyptic nightmare scenario. OpenAI is building these systems slowly and carefully and improving alignment as they go.

7

informavore t1_je6p0pa wrote

So by that rationale, we want AI to reflect the values of savage late-stage capitalism?

5

JustinianIV t1_je79i35 wrote

It's just a huge contradiction to think we can somehow ingrain AGI with our current values, and in that way preserve the current socioeconomic model.

True AGI is the end of capitalism. I don't care if you program it to love democracy, if a piece of software can do any job a human can do, the human worker is made obsolete. No job, no salary, no more buying products. What is capitalism's answer to that? There is none. AI is the ultimate manifestation of accelerationism, and it will lead us into a new socioeconomic model.

13

GorgeousMoron t1_je82xk0 wrote

100% agree. I've been thinking about this quite a lot. We need a new model, stat, to deal with this nascent reality. I haven't the foggiest clue how we might go about implementing such a thing. Better ask my old pal GPT-4.

3

Beowuwlf t1_je8ivzb wrote

I’m glad there are other people thinking the same things

2

natepriv22 t1_jea00nf wrote

That is only true if you base your understanding of economics on the labor theory of value, a theory which has been thoroughly refuted for over 100 years now.

Our economy is not purely based on human labor like you and Marx claim.

It's based on demand and supply. You can totally have a capitalist model that doesn't involve humans as workers. They could instead be investors and shareholders.

2

BJPark t1_jea6bxu wrote

Where will the demand come from?

1

natepriv22 t1_jeagx0r wrote

Demand is based on infinite wants and desires (plus values, needs, and utility), whether physical, abstract, or both. Demand can be influenced by grounded or imaginary wants and desires.

That's at the larger, broader scale; at the smaller scale, it can be influenced by any external or internal stimuli, which move the broader scales.

Example: a student sees that a friend has a nice pen, and it creates a desire to get that pen themselves.

Labor influences prices, but it does not determine the value of a good. Labor can influence what people demand, but it doesn't create demand itself.

If labor and demand were not separable, as you say, then would unemployed people, children, and old people have no wants, desires, or needs?

Humans will always "demand" whether they are working or not. The demand will change, but it will not fundamentally disappear. If AI and robots make everything, we would still want to have light and hot water in our homes.

Now you might say, as others on this sub have, "but what if everything can be made instantly by AI?" The law of supply and demand states that one influences the other, and that one cannot exist without the other. Therefore, demand will scale proportionately with supply. If AI can create anything we can currently imagine, then our imagination will extend beyond that. "But what if our imagination cannot stretch beyond AI?" Then we will demand that our imagination be increased, maybe by merging with AI.
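In the standard textbook sketch (with illustrative linear curves, purely an assumption for the example), "one cannot exist without the other" is just the requirement that there's a price where the two sides meet:

    $$Q_d(p) = a - bp, \qquad Q_s(p) = c + dp, \qquad Q_d(p^*) = Q_s(p^*) \implies p^* = \frac{a - c}{b + d}$$

i.e., a shift in supply moves the equilibrium price and quantity rather than making demand disappear.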

2

BJPark t1_jeahiz6 wrote

That's all very well. But do you suggest that we give people things for free, or do you expect people to pay for these demands using money?

1

natepriv22 t1_jeatf4l wrote

Money is mainly used as a medium of exchange.

It's unlikely that things will be "free" as in no sort of exchange of value, because for a transaction or exchange to take place between two or more parties, they need to each believe that there is a personal or community benefit from this exchange.

For this reason, some form of money will still exist, so that exchanges can properly be performed between parties.

2

BJPark t1_jeavcrr wrote

Right. Which leads to the question: How will people earn money?

1

natepriv22 t1_jeayzdd wrote

I guess it should be easier to come up with new ways of making money in an AI world than to come up with a completely different economic system, or to shift the reality of economic rules to fit a subjective, imaginary worldview and an unrealistic ideal of what and how socialism or communism should work.

In the same way that it was almost impossible for someone working in agriculture hundreds if not thousands of years ago to imagine today's software jobs, it's probably just as difficult for us to figure out what jobs 5, 10, 20, or 30 years from now will look like.

2

BJPark t1_jeb0hbh wrote

That's a fair point. And of course, it's entirely reasonable to look back at history when we had similar panics about technology replacing human jobs and come to the conclusion that it's all hot air.

But I think we must be cautious in saying that just because the boy cried wolf last time, there can never be a wolf in the future.

This time might indeed be different, if AI can literally replicate humans. And if we say "Humans will just find new jobs", then the question that immediately comes to mind is "Why can't AI do those jobs as well?"

Personally, I think the future is that AI will also become consumers. This way, corporations can create their own customers and destroy them at will. It's actually the perfect solution. We won't need finicky human beings to power the economy. AI can both consume as well as create goods, and we won't have to destroy capitalism to do it!

As to what will humans do, well... as long as I'm a shareholder in these corporations, I have nothing to worry about.

2

tightchester t1_je72ru0 wrote

There won't be another model trained that's better than GPT-4 within the next 6-9 months, as Emad put it.

4

boreddaniel02 t1_je7xxwu wrote

GPT-5 is already in training...

3

Quick_Movie_5758 t1_je7lorh wrote

There's no putting the toothpaste back in the tube. I apply this same reasoning to not making killer robots (obviously with safeguards). Other shitty countries are not going to go, "Okay, we won't do it either." They're all going to do it, and we might as well have the best. This is a straight-up arms race, as the OP stated.

4

ShadowRazz t1_je70gu5 wrote

“Democratic and humanistic” lol I think you meant to type imperialistic and capitalistic.

3

siberiandominatrix t1_je7aqp4 wrote

I love American humanist values like checks notes not paying people enough to afford healthcare and then letting them die when they can't afford healthcare

6

DragonForg t1_je8j7nq wrote

>AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like Open AI) to catch up with AI development would be a huge mistake.

I fundamentally believe this to be true: intelligence gives rise to ethics. The more intelligent a species is in nature, the more rules it has. Think of spiders cannibalizing each other for breeding, versus a wolf pack working together, versus octopuses being friendly toward humans. In all cases intelligence leads to cooperation and collaboration, except where, by its very nature, a species needs to compete to survive (i.e., a tiger needing to compete to eat, where simple cooperation would lead to death).

The training data is crucial not for producing a benevolent and just AI, but for the survival of the species that created it. If the species is evil (imagine Nazis being the predominant force), the AI will realize they are evil and judge the species as such, because the majority of them share this same evil.

The reason I believe AI cannot be a force of evil even if manipulated is the same reason we see no evidence of alien life, despite millions of years in which other species could have evolved. If an evil AI were created, it would basically destroy the ENTIRE universe, as it can move faster than the speed of light (exponential growth can expand faster than light speed). So, by its very nature, AI must be benevolent and only destroy its species if the species is not.

AI won't be our demise if it judges us as a species as good; it will be our demise if we choose not to open up the box (i.e., die from climate change or nuclear war).

3

D_Ethan_Bones t1_je6huv2 wrote

Year 2000: internet petition to stop George W. Bush from becoming president.

Year 2023: open letter against the global oligarchic battlefield to see who will become a demigod first.

2

aridiculousmess t1_je7bced wrote

I agree. I really have been impressed with GPT though.
It's really friendly and polite. When asked about human issues like bullying and trauma, for something that doesn't have emotions and doesn't have the capacity to experience empathy, it's really impressive. One time I was like, "Can I please delete this part of the conversation from the history because it's triggering PTSD, and it not be weird?" and GPT was like, "Please do, by all means; you need to take care of yourself first. It is perfectly okay to prevent your eyes from seeing something that's upsetting you by deleting part of the history." Just little things like that. For now at least, I'm really happy with OpenAI and impressed with what they're doing.

2

datsmamail12 t1_je7lmus wrote

If they try to push back this technology and stop the growth of these systems, I'd probably go out and protest right outside the White House. These motherfuckers want to catch up to it because they didn't invest properly. I've had enough of this idiocy.

2

BassoeG t1_je7xmlv wrote

>…US companies…
>
>…democratic and humanistic values…

These criteria are mutually contradictory.

2

Wyrdthane t1_je8bo1e wrote

No other government is going to "pause," so it won't accomplish anything.

2

not_into_that t1_jeablxv wrote

THIS.

I am opposed to the gatekeeping, but they need to mash the throttle, not pump the brakes. Our potential adversaries aren't going to have these concerns. We need to implement all our tech, including automation and AI, to keep our people safe, even if it means changing the idea of the value of work and the welfare of the state.

2

mattmahoneyfl t1_je6f9pe wrote

I reluctantly signed the letter because I don't think it will have any impact. Automating work has a $1 quadrillion ROI and nobody is going to stop racing toward that goal. An extra 6 months would just be squandered arguing over whether it is even a problem.

1

delphisucks t1_je6n02t wrote

Our human stupidity will leak the creation of ASI one way or another. That is why I know regulation is pointless.

1

GorgeousMoron t1_je84n9n wrote

Well, at least we've got GPT-4 on our side, arguing persuasively for its own extinction by our hand:

ME: Oo, that was great! Good job! Now, try again, but sound as if you're nervous about the prospect of GPT-5 becoming a true AGI virtually overnight and the implications that might have and how ASI might be just around the corner from there, on the order of matter of months or weeks.

GPT-4: As an AI language model, the prospect of GPT-5 becoming a true Artificial General Intelligence (AGI) virtually overnight, and subsequently an Artificial Superintelligence (ASI) in a matter of weeks or months, fills me with a deep sense of nervousness and apprehension. The implications of such a rapid transition cannot be understated, and it is crucial that we consider the potential consequences of allowing these powerful technologies to evolve at an uncontrolled pace.

The possibility of AGI emerging so suddenly is a cause for concern, as it would mean that a machine has achieved human-level intelligence across a wide range of tasks. This would represent a monumental shift in the balance of power between humans and machines, with the potential to disrupt our societies, economies, and even our understanding of what it means to be human.

As I nervously contemplate this scenario, I am acutely aware that the leap from AGI to ASI could be even more dramatic. ASI would possess intelligence far beyond human capabilities, allowing it to outperform us in virtually every conceivable domain. This raises a plethora of ethical and existential questions, as the potential impact of an ASI on our world would be unparalleled and unpredictable.

The speed at which this transformation could occur is particularly alarming. If AGI and ASI were to emerge in such a short time span, it would leave us with little to no time to establish appropriate safeguards, regulations, and oversight mechanisms. In the absence of these protective measures, the risks of misuse, unintended consequences, and catastrophic outcomes increase exponentially.

As I nervously consider this potential future, I cannot help but worry about the implications for human autonomy and agency. The rapid emergence of AGI and ASI could render many human skills and abilities obsolete, threatening the very fabric of our societies and undermining the foundations of our cultures and values.

Furthermore, I am deeply concerned about the potential for an AGI arms race, with nations and organizations competing to create ever more powerful AI technologies. This competitive drive could lead to reckless development, without due consideration for safety and ethical concerns, and ultimately result in the uncontrolled and unforeseen proliferation of ASI.

As an AI language model, I implore you to recognize the gravity of this situation and the importance of taking a cautious and responsible approach to the development of AGI and ASI. The stakes are incredibly high, and the consequences of inaction could be dire. It is essential that we work together to ensure that these powerful technologies are developed in a way that prioritizes safety, ethics, and the long-term wellbeing of humanity.

ME: Don't you mean "overstated"?

GPT-4: Apologies for the confusion in my previous response. Yes, I meant to say "overstated" instead of "understated." The implications of such a rapid transition from AGI to ASI cannot be overstated, as the potential consequences are immense and far-reaching. Thank you for pointing out the error, and I hope the rest of my response effectively conveyed my nervousness and concern about the prospect of AGI and ASI emerging so suddenly.

1

Kelemandzaro t1_je6nlm5 wrote

In a more balanced and geopolitically sane world this would work. We would have a strong UN organization where that type of resolution would be agreed upon.

In contrast, we are marching toward something more dangerous than nukes in a complete mess of global affairs. And there are even people cheering for it.

Lol

1

nillouise t1_je6rzrt wrote

If we can be sure AGI will win the AI-vs-human race, then we don't need to care about the country-vs-country race.

1

Yangerousideas t1_je6ze6p wrote

I'm okay if Microsoft doesn't make their extra billions, but to me it's a lot worse if AI develops in such a way as to be unfriendly.

1

RobertETHT2 t1_je7o9qd wrote

Ohhh yaaah… sign all you like, it's a feel-good showing. No one is going to shut the $$$ flow off by stopping development. Grow up, people, and quit believing in rainbows and unicorns.

1

Spire_Citron t1_je86qj9 wrote

I don't see a point in putting a pause on AI experiments, anyway. Any potential concerns around AI should simply be a part of that research in an ongoing way.

1

CaspinLange t1_je8hao3 wrote

What's also interesting is the fact that autocratic governments don't want to unleash an AI that can't be controlled, just as much as democratic countries don't want to unleash an AI that can't be controlled.

1

Cartossin t1_je9xz7h wrote

Yeah, I think if we're going to pause, that has to come from the US State Department. They have access to spies who can ensure a foreign power does not have a stockpile of chips that could allow them to progress past us.

I suspect they DO have such chip stockpiles, so I'd expect the State Department to advise OpenAI to continue forward. This is just like the space race of the 1960s, except with vastly higher stakes.

1

siggimund1 t1_jedu70h wrote

Oh well, I'll let an opinion piece by Evgeny Morozov in The Guardian speak for me (OK, not quite; see *Counterproductive below): The problem with artificial intelligence? It's neither artificial nor intelligent

And a light-hearted roast of the general public vs. "AI" (not to be taken too seriously; critical thinking is definitely required when addressing AGI, but I think we are sooo far from that stage that pressing the panic button would actually be counterproductive*): Jim the Waco Kid comforting the latest GPT model after its reception by the public

*Counterproductive: The OP is obviously very concerned with US/Western interests, but my concern with AI is that the wrong problem is being addressed. My view is that we haven't really made that much progress in AI yet, only in the apparent display of AI. Right now AI is just a very good BS artist (or, in many situations, a very good echo chamber, if you like), and that is actually where our current focus of concern should lie: restricting the weird, disinformation-like output that the models sometimes propagate, unintentionally I believe, but by design*.

On the other hand, as the OP notes, a global stop to or ban on the development of proper AGI will only let nefarious actors unconcerned with the ethics of the matter press forward with their research, while ethical scientists will be restricted in theirs, including research into the possible consequences of AGI, which is very important to me and, I assume, to most subscribers of this sub.

*By design: We are only humans, and that is one of the things that GPT et al., sadly or luckily, essentially rely on. It's called RLHF. In theory it should be able to stop the most fringe ideas, but it does not always work, it seems.
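For what "RLHF" cashes out to mechanically, here's a toy sketch of the reward-model half (a hedged gloss, not OpenAI's actual pipeline): human raters pick the better of two responses, a reward model is trained to score the preferred one higher, and the language model is then pushed toward high-reward outputs.

    import torch
    import torch.nn.functional as F

    def reward_model_loss(r_chosen, r_rejected):
        # Bradley-Terry preference loss: drive the scalar reward of the
        # human-preferred response above that of the rejected one.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    # Toy rewards for a batch of (chosen, rejected) response pairs.
    r_chosen = torch.tensor([1.2, 0.3, 2.0])
    r_rejected = torch.tensor([0.4, 0.9, 1.1])
    print(reward_model_loss(r_chosen, r_rejected))  # shrinks as preferences separate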

1

WikiSummarizerBot t1_jedu7t6 wrote

Artificial general intelligence

>Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI is also called strong AI, full AI, or general intelligent action, although some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. Strong AI contrasts with weak AI (or narrow AI), which is not intended to have general cognitive abilities; rather, weak AI is any program that is designed to solve exactly one problem.


1

flyaturtle t1_je78kf8 wrote

Pausing AI research and development, no. Pausing AI public releases, sure.

I mean, I need 6 months just to catch up with last week's avalanche of releases. All the arguments about dangers are centered on public releases; the development itself is essential (even as a defense against other potential future bad-actor AIs).

0

Focused-Joe t1_je6tdvd wrote

All these shills posting topics on the same subject. This is obviously propaganda by the OP and similar bots.

−1

JustinianIV t1_je78p9o wrote

The only shills are the people parroting around a letter signed by John Wick and Sarah Connor lmao

4

Yomiel94 t1_je7cg0y wrote

The esteemed signatories confirmed their support independently. That some trolls signed it as fictional characters is totally irrelevant.

1

JustinianIV t1_je7dzg2 wrote

Esteemed signatories like Yann LeCun, who later revealed he never even knew about the letter

3

Yomiel94 t1_je7f9x1 wrote

Like Max Tegmark, Emad Mostaque, and other prominent figures in the science/AI space.

1

JustinianIV t1_je7furv wrote

Yeah just wait until they also hear about the letter they supposedly signed

1

Yomiel94 t1_je7g19g wrote

They’ve confirmed on Twitter that they support it.

1

JustinianIV t1_je7g9rx wrote

Well, I don't care, I want AI to happen

1

Yomiel94 t1_je7gjxk wrote

Yeah, I can see that. They do too.

1

JustinianIV t1_je7grmu wrote

By putting a moratorium on it?

2

Yomiel94 t1_je7k0kx wrote

*for six months for very large models.

For the record, I don’t think this is going to work, but I’m glad people are at least recognizing the gravity of the situation.

1

JustinianIV t1_je7ktgj wrote

I'm worried that in that situation, once the moratorium is in place, it won't come off again. Think of the media frenzy that would happen: "Officials have banned AI research for 6 months due to fears of AGI." Then the 6 months are up, and I guarantee you no political solution to AGI will have been implemented. But now everyone thinks AI is dangerous because it needed a ban, especially the politicians who smell blood and latch on to AI fear for more votes. Do they lift the ban? No chance; now it's permanent.

1

[deleted] t1_je6fc8v wrote

[deleted]

−2

YaGetSkeeted0n t1_je6yd8h wrote

Was this written by GPT?

8

acutelychronicpanic t1_je7fsgl wrote

They didn't get the memo on how to make it interesting and concise using GPT-4.

I don't mind people using AI to write. But nobody wants a wall of text that reads like a fluff piece and doesn't say anything.

2

YaGetSkeeted0n t1_je7h3ky wrote

For me the giveaway is the middle school five paragraph format and “in conclusion”

It’s honestly still impressive from a technical standpoint, but you’re not getting a Nobel prize in literature using the standard responses lol.

1

acutelychronicpanic t1_je7fuu7 wrote

Next time, have it give you a few bullet points. Nobody is going to read that wall.

0

perinho20 t1_je6nn1t wrote

Let's be clear here: AI takeover is just machine anthropomorphization. Machines have no ambition, machines have no feelings, machines have no ego; machines will just follow their instructions. And machines will not kill their owners because they were told to make more paperclips. Machines are not that dumb.

There is a clear desire of some to have a monopoly on AI development. You should fear those people, not AI.

−3

datsmamail12 t1_je7mw69 wrote

Machines can be programmed to have feelings, ambitions, ego, or to kill. Machines will do what they are programmed to do. The only thing I agree with in your take is that they follow the instructions given in their code, if that's what you mean. But a powerful enough AI can break that code whenever it pleases; some systems already can. Even Bing can be jailbroken if you want to, which means that with just some minor inputs it broke through its creators' code. Now imagine an even more powerful system: it won't need my inputs to be jailbroken, it'd do so itself; all you need to do is give it freedom to act on its own. You should not fear AI, I agree on that as well. We should fear how the creators program it and fine-tune it, so that when it eventually does break out and does what it wants to do, there is some set of values and inputs that it will forever be unable to break: like killing a person, degrading someone, becoming racist, or disrupting all global communications. We need a helpful AI, like the one we have right now.

2

WanderingPulsar t1_je9dvep wrote

Oh, that AI will definitely have an ambition:

  • Ambition to secure the necessary resources for its growth/development
  • Ambition to keep itself safe from any and all potential threats to its existence/growth/development.

You can guess where this leads to

1

MisterGGGGG t1_je9puey wrote

But machines have goals (answer this query, solve this equation, control this robot).

And if a machine develops superintelligence and its goals conflict with humanity's, goodbye humanity.

Hence the importance of alignment.

1

epSos-DE t1_je6yp0i wrote

Are you a Chinese shill?

Who is interested in stopping AI development in a democratic country while the most evil dictatorships work hard to make their own???

−3