Comments


just-a-dreamer- t1_j9ksfnt wrote

We already have like 10x enough nuclear weapons to wipe out humanity.

With AI we have the opportunity to venture towards a post scarcity society. To eradicate capitalism and establish abundance.

That is worth any risk in the long term.

22

jamesj OP t1_j9kt458 wrote

I agree that the reward is huge. I also think we can't stop progress or even slow it down. That means we need as many people as possible looking at the problems and making sure it is developed to all our benefit.

10

genericrich t1_j9l85sv wrote

Let's game this out:

  • A state (say, China) develops AGI in a lab.
  • The US government intelligence service learns of this.

What happens?

  • It is the doctrine of the US DOD that nobody can challenge our supremacy on the battlefield. AGI is a direct threat to that supremacy.

Another scenario:

  • Say a Silicon Valley company develops AGI. Is the US government going to let one just sit around where our adversaries can get it or learn from it or copy it?

These things (if they ever exist) will be massively destabilizing and could easily spark a war just by existing. They wouldn't have to even DO anything.

6

just-a-dreamer- t1_j9ldmqo wrote

So what? What is the US gonna do if China gets ahead? And vice versa?

You can't nuke China without ending the world, and China can't nuke the US. It has been this way for 50 years.

The only solution to internal trouble is the eradication of capitalism. That is a good thing we all should work towards anyway in human evolution.

What did the government do during the Covid lockdowns? Handed out cash left and right. People remember. The next successful political party will demand UBI.

4

Fhbob1988 t1_j9n2d1z wrote

That’s the reason AI is more dangerous. Your own argument for AI is also the argument why it’s more dangerous. Mutually assured destruction has kept humanity safe. The other guy knows they can’t press the button without killing themselves as well. ASI could kill us all without a second thought if not aligned properly.

1

genericrich t1_j9ljoto wrote

You're living in a dream world if you don't think the US would act to prevent China from exploiting an AGI against them. Which it would, if it had one. (Just like the USA would, if it had one).

UBI? Please. Never gonna happen. Listen to the GOP nutjobs whine about "communism" in the USA now, for basic shit like social security and medicare. They would have aneurysms if someone was legit pushing UBI.

−4

just-a-dreamer- t1_j9lkgdw wrote

And what is the US gonna do about it?

You can't invade China and you can't nuke China without ending the world. China is pretty safe. China can nuke every city in the US.

1

just_thisGuy t1_j9lctav wrote

I’d say AI might be the only way to solve the risk of nuclear weapons. Without AI we’d probably end up using them sooner or later; over a span of a few hundred years it’s almost guaranteed. The only thing that will stop it is some other very advanced technology, and AI.

4

turnip_burrito t1_j9nv18u wrote

Yes, without AI managing our affairs, we're playing a game of chicken with our existence. Every year maybe the % chance of annihilation is low, but a low chance will happen eventually given enough time.
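The "low chance will happen eventually" point is just compounding probability. A toy sketch (the 1% annual figure is my own assumption, purely for illustration):

```python
# Toy illustration: even a small yearly probability of catastrophe
# compounds toward near-certainty over a long enough horizon.

def cumulative_risk(p_yearly: float, years: int) -> float:
    """Probability the event happens at least once within `years` years,
    assuming independent, identically distributed annual risk."""
    return 1 - (1 - p_yearly) ** years

annual_risk = 0.01  # assumed 1% chance per year (hypothetical)
for horizon in (10, 100, 500):
    print(f"{horizon} years: {cumulative_risk(annual_risk, horizon):.1%}")
```

At an assumed 1% per year, the chance of at least one catastrophe passes 63% within a century and 99% within five.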

3

Fhbob1988 t1_j9n1udj wrote

Nukes require multiple humans to make multiple extremely difficult decisions to end the world. ASI can make the decision all on its own, and we have no idea if it would be a difficult choice for it.

2

Krishna_Of_Titan t1_j9n9min wrote

I agree that there's no stopping the progress of technology and that we should continue to pursue AGI for its potential benefits. However, to deny that AGI could be weaponized, and that it is a real threat, is kind of insane.

The thing that makes AGI more dangerous than nuclear weapons is that there's no "in your face" deterrent of an all-out catastrophic nuclear apocalypse as retribution. So there's a much higher willingness to actually use it. Look at how willingly China, Russia, and even the U.S. have used cyber attacks and cyber espionage without fear of retribution. For one, they believe they can act covertly with plausible deniability. For another, they believe they can harden their own systems against the full repercussions of retaliation.

Additionally, do you think Russia, China, or even the U.S. government are pursuing a post-scarcity economy or want to solve the world's major problems? Do you think these governments or corporations want to end world hunger or implement UBI? Our governments and corporations are run by traditionalists, capitalists, autocrats, and sociopaths. They are controlled by those seeking money and power. These people are absolutely not looking at AGI as a means to ending the need for money or the dissolution of their power.

Here's a short list of the ways AGI could be weaponized. Keep in mind that I'm not a superintelligent AGI that could think of a hundred more clever and sophisticated ways to weaponize itself in under 10 seconds.

AGI could be used to:

  • Crash stock markets and/or manipulate markets or individual stocks
  • Hack governments, corporations, and financial institutions
  • Perform advanced espionage, steal government secrets, steal corporate IP
  • Advance decryption capabilities
  • Covertly hack infrastructure such as power plants, water treatment facilities, or adversarial weapons systems
  • Identify weaknesses or rifts in foreign governments and institutions, or individuals in power that can be manipulated or blackmailed
  • Create sophisticated systems to track individuals or groups of people
  • Identify and more effectively manipulate large groups of people through social engineering
  • Create complex social engineering schemes on an individual level to penetrate government institutions or corporations
  • Create highly intelligent and/or highly accurate autonomous weapons systems
  • Design more sophisticated and capable weapons systems
  • Do much of the above in ways that make it difficult to trace back to the source

Please, use a little thought to consider the motives of those working to create AGI and the governments that may acquire it. Google and Microsoft are not looking to be the first corporations to end capitalism. Nor is any government looking to undermine the power or wealth of its shareholders.

If Germany had acquired nuclear weapons in quantity before the U.S. during WW2, do you think they would have been judicious in their use? Do you think they would have shown restraint? AGI potentially has the destructive power of nuclear weapons without the fallout that makes the planet uninhabitable. Combined with a major breakthrough in quantum computing, it may be irresistible to a foreign power seeking to finally tilt the balance of power greatly in its favor. Making all adversarial encryption obsolete could, by itself, do massive damage to a foreign government and economy and hand an incredibly unfair advantage to an adversary engaged in a cold war.

I'm not attempting to fear-monger. Hopefully, multiple nations will acquire AGI in a similar time frame and that will be enough of a deterrent. Or perhaps the powers that be will remain rational enough not to engage in the extremes of cold warfare and cyber warfare. If we're lucky, maybe it will even motivate the world's leaders to bring some order and stability to their foreign relationships, for fear the other side might achieve AGI first. Either way, I think it's a realistic threat that should be taken into consideration.

2

Wyrade t1_j9nzl26 wrote

Thank you, it was a very good read!

2

frobar t1_j9ldsvd wrote

If we're wiped out, I hope whatever replaces us is sentient at least. Dead algos roaming the universe feels kinda dumb.

1

Coderules t1_j9p2tmk wrote

I first stopped reading after just the first sentence. The part "The rise of transformer-based architectures, such as..." is just click-bait.

But I re-opened and continued.

Then at the part where they offer a skewed definition of Intelligence, "First, a few definitions. Intelligence, as defined in this article, is the ability to compress data describing past events, in order to predict future outcomes...". This is not correct. Why not just use some agreed-upon definition? Like "The ability to acquire and apply knowledge and skills."

I'm just stopping there. Calling BS.

We have already seen too many articles where people imply AI will bring about job losses, sending people into a panic. Then on the other side we have this type of BS that causes fear and more panic.

−2

jamesj OP t1_j9p3m4l wrote

>Then at the part where they offer a skewed definition of Intelligence, "First, a few definitions. Intelligence, as defined in this article, is the ability to compress data describing past events, in order to predict future outcomes...". This is not correct. Why not just use some agreed-upon definition? Like "The ability to acquire and apply knowledge and skills."
>
>I'm just stopping there. Calling BS.

This definition of intelligence comes from Juergen Schmidhuber, whose team was instrumental in the development of LSTMs and advances in deep learning in the 90s.

I recommend reading the paper; it's a very useful view of what the core of intelligence really is. https://arxiv.org/pdf/0812.4360.pdf

2