Submitted by OldWorldRevival t3_zjplm9 in singularity
turnip_burrito t1_izwny9e wrote
I do think this is an idea worth considering to solve alignment: an AI may look to a person or group as a role model and try to act as that person or group would act given more time and knowledge.
OldWorldRevival OP t1_izxficr wrote
I also believe that AI takeover is not only plausible but inevitable, whether it is a machine or a person at the helm.
It is inevitable because it is fundamentally an arms race. The more real, scary, and powerful these tools get, the more resources militaries will put into them.
A treaty banning killer robots is simply a nonstarter because, unlike nuclear weapons, there is no stalemate game.
We still have nukes. We stopped developing new ones, but we keep the ones we have precisely because of that stalemate.
AI offers no such stalemate, and there never will be one.
I find it funny that net positive energy output from fusion was announced just as AI starts getting scary... unlimited power for machines.
turnip_burrito t1_j01cb1e wrote
Yes, AI may take over, but I am optimistic that we can direct it along a path beneficial for us (humans). Killer robots with an AGI inside are something I don't see happening. That would be a stupid move by governments, which could achieve better results economically with an AGI. At least, I hope so. Truly, no clue.