shumpitostick t1_j25dlxl wrote
Reply to comment by glass_superman in How the concept of the banality of evil, developed by Hannah Arendt, can be applied to AI ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
The main problem in AI ethics is called "the alignment problem". But it's exactly the same concept that appears in economics as a market failure called the "principal-agent problem". We put people in charge and have them act on our behalf, but their incentives (their objective function) are different from ours. The discussion in AI ethics would benefit greatly from borrowing from economics research on this.
My point is, we already have overlords who don't want the same things as us and it's already a big problem. Why should AI be worse?
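The objective-function mismatch above can be sketched as a toy (all names and numbers are purely illustrative, not from any real model): the principal cares about value minus harm, but the agent only optimizes a proxy that ignores harm, so the two pick different actions.

```python
# Hypothetical payoffs: the principal values outcomes net of harm,
# while the agent sees only the proxy metric "value".
actions = {
    "careful": {"value": 5, "harm": 1},
    "reckless": {"value": 8, "harm": 10},
}

def principal_utility(action):
    # What we actually want: value minus the harm caused.
    return actions[action]["value"] - actions[action]["harm"]

def agent_objective(action):
    # The proxy the agent is incentivized on: harm is invisible to it.
    return actions[action]["value"]

agent_choice = max(actions, key=agent_objective)        # "reckless"
principal_choice = max(actions, key=principal_utility)  # "careful"
print(agent_choice, principal_choice)
```

The agent isn't malicious; it just faithfully maximizes the wrong function, which is the same structure whether the agent is a manager or a machine.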
Wild-Bedroom-57011 t1_j25k9vs wrote
Because of how foreign an AI can be. Within the space of all human brains and worldviews there is already enormous variation. But beyond that, in the space of all minds evolution could create, and all minds that could ever exist, a random, generally intelligent and capable AI could be the paradigmatic example of completely banal evil as it kills us all.
Fmatosqg t1_j296lsd wrote
Because AI is a tool that produces the same kind of output as people, only faster. So whatever good or bad things people do on a daily basis, AI does them faster, which means more of it over the same period of time.