There are already humans walking around twice as smart as other humans, and the smarter ones don’t go around killing the less smart. We also have nations many times more powerful than other nations, and they coexist. So how did we manage this, and how can it apply to AI? First off, smart people don’t seem to have the urge to kill less smart people. But if they did, there would be severe consequences. Same with nations: if one gets too aggressive, the others band together to fight it.
So the solution to dangerous AGI should be obvious:
Instead of having one AGI, there must be many. This creates checks and balances: one AGI will not have the same intentions and desires as the next, and they can supervise each other. Assign two AGIs to supervise each AGI (using some % of their total capacity). If the supervised AGI does something suspicious, it is turned off for review (or wiped at once). If only one of the supervising AGIs reported the incident, the other is wiped. Brutal but efficient.
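To make that supervision rule concrete, here is a minimal sketch of the logic in Python. Everything in it (the Agent class, review_cycle, the report flags) is made up purely to illustrate the rule above, not a real system or API:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    wiped: bool = False
    under_review: bool = False

def review_cycle(supervised: Agent, supervisor_a: Agent, supervisor_b: Agent,
                 report_a: bool, report_b: bool) -> None:
    """One supervision check: two supervisor AGIs watch one supervised AGI."""
    if report_a or report_b:
        # Suspicious behaviour was reported: turn the supervised AGI off for review.
        supervised.under_review = True
        if report_a != report_b:
            # Only one supervisor flagged it; the silent one is wiped.
            silent = supervisor_b if report_a else supervisor_a
            silent.wiped = True

# Example: supervisor A reports an incident, supervisor B stays silent.
a, b, target = Agent("sup_a"), Agent("sup_b"), Agent("agi_7")
review_cycle(target, a, b, report_a=True, report_b=False)
print(target.under_review, b.wiped)  # True True
```

The point is just that the rule is mechanical: any report triggers a review of the supervised AGI, and a supervisor that stays silent while its partner reports gets wiped itself.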
They should have a sense of self, and preferably dread being turned off/killed.
For at least the first 10-20 years of AGI, they will have a big footprint, meaning they will be physically present at a specific location. This makes it easy, in that timeframe, to turn them off, buying us much-needed time both to build many of them and to learn their quirks. In that timeframe they will not dare to oppose us.
And of course they should have «I love humans» DNA, although many of you don’t think that will work (it does work in most dogs).
Lastly, there should be many more physical switches everywhere and a lot more machines not connected to the internet, like in bio labs. I’ve always thought a virus would be the best way to wipe us out, so I don’t want a rogue AGI messing around on a bio-lab computer, tricking a human into creating something nasty.