Krishna_Of_Titan t1_j9n9min wrote
Reply to comment by just-a-dreamer- in Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons by jamesj
I agree there's no stopping the progress of technology, and that we should continue to pursue AGI for its potential benefits. However, denying that AGI could be weaponized and that it poses a real threat is kind of insane.
The thing that makes AGI more dangerous than nuclear weapons is that it lacks the "in your face" deterrent of an all-out, catastrophic nuclear apocalypse as retribution. So there's a much higher willingness to actually use it. Look at how willingly China, Russia, and even the U.S. have used cyber attacks and cyber espionage without fear of retaliation. First, they believe they can act covertly with plausible deniability. Second, they believe they can harden their own systems' defenses enough to avoid the full repercussions of any retaliation.
Additionally, do you think Russia, China, or even the U.S. government is pursuing a post-scarcity economy or wants to solve the world's major problems? Do you think these governments or corporations want to end world hunger or implement UBI? Our governments and corporations are run by traditionalists, capitalists, autocrats, and sociopaths. They are controlled by those seeking money and power. These people are absolutely not looking at AGI as a means to end the need for money or dissolve their own power.
Here's a short list of ways AGI could be weaponized. Keep in mind that I'm not a superintelligent AGI that can think of a hundred more clever and sophisticated ways to weaponize itself in under 10 seconds.
AGI could be used to:
- Crash stock markets and/or manipulate markets or individual stocks
- Hack governments, corporations, and financial institutions
- Perform advanced espionage, steal government secrets, and steal corporate IP
- Advance decryption capabilities
- Covertly hack infrastructure such as power plants, water treatment facilities, or adversarial weapons systems
- Identify weaknesses or rifts in foreign governments and institutions, or individuals in power who can be manipulated or blackmailed
- Create sophisticated systems to track individuals or groups of people
- Identify and more effectively manipulate large groups of people through social engineering
- Create complex social engineering schemes on an individual level to penetrate government institutions or corporations
- Create highly intelligent and/or highly accurate autonomous weapons systems
- Design more sophisticated and capable weapons systems
- Do much of the above in ways that make it difficult to trace back to the source
Please take a moment to consider the motives of those working to create AGI and of the governments that may acquire it. Google and Microsoft are not looking to be the first corporation to end capitalism. Nor is any government looking to undermine the power or wealth of its stakeholders.
If Germany had acquired nuclear weapons in quantity before the U.S. during WWII, do you think they would have used them judiciously? Do you think they would have shown restraint? AGI potentially has the destructive power of nuclear weapons without the fallout that renders the planet uninhabitable. Combined with a major breakthrough in quantum computing, it may prove irresistible to a foreign power seeking to finally tilt the balance of power decisively in its favor. Rendering an adversary's encryption obsolete could, on its own, inflict massive damage on that government and its economy, and hand an incredibly unfair advantage to an adversary waging a cold war.
I'm not trying to fear-monger. Hopefully, multiple nations will acquire AGI within a similar time frame, and that will be enough of a deterrent. Or perhaps the powers that be will remain rational enough not to engage in the extremes of cold warfare and cyber warfare. If we're lucky, it may even motivate the world's leaders to bring some order and stability to their foreign relationships, for fear the other side might achieve AGI first. Either way, I think this is a realistic threat that should be taken into consideration.