Rhiishere t1_j25wn9p wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Whoa, I didn’t know that! That’s freaking amazing in a very wrong way. I knew something along those lines was possible in theory, but I hadn’t heard of anyone actually doing it!
Rhiishere t1_j25vsg3 wrote
Reply to How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
That was a very interesting article. Something that has confused me, and always will, is the drive humans have to make machines like themselves. Morality, good and evil: those are all ideas specific to the human race. I’m not sure why we feel the need to impose them onto AI.
Rhiishere t1_j29aun0 wrote
Reply to comment by AndreasRaaskov in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Well, there’s some here that I agree with and some I don’t. You can’t program an AI to have the same moral code humans do. At the end of the day it’s a machine based on logic, and if our morals don’t align with that logic, then nobody wins.

For GPS it’s the same thing: you say it makes ethical choices unawares, but those are choices that are ethical to you and to other human beings in general. It doesn’t make “ethical choices”; it makes choices based on whatever benefits its algorithm best, whatever makes the most sense given the data it has received, the outlines of its job, and the needs of its users.

I’d even argue that it would be more dangerous if we tried to program simple systems with our morals. Tell a simple AI running a factory that it’s not okay to kill people, and it’s not going to understand that the way we do. In what part of the factory does that rule apply? What is the definition of killing? What traits do the people who shouldn’t be killed display? Going back to GPS: what makes one route more dangerous than another? What is the extreme that the definition of danger is limited to, and what is the baseline?

Even with the simplest moral input into the simplest AI, you have to explain, in the clearest and most extensive way, everything surrounding that moral that would just make sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn’t make sense even at the most basic level, and it wouldn’t go the way any human being would think it should.
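To make the GPS point concrete: a route planner doesn’t weigh “danger” the way a person would; it just minimizes a numeric score, and a human has to decide up front what counts as dangerous and how much it costs. Here’s a minimal, hypothetical sketch of that idea in Python; the names, fields, and weights are made up for illustration and don’t reflect any real navigation system.

```python
# Hypothetical route scorer: "ethics" reduced to hand-picked numbers.
# Every field, weight, and threshold here is an illustrative assumption,
# not any real GPS product's logic.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float      # estimated travel time
    crash_rate: float   # crashes per million trips on this route (assumed data)
    unlit_km: float     # kilometres of unlit road (assumed data)

def route_cost(r: Route, danger_weight: float = 5.0) -> float:
    """Lower is 'better'. 'Danger' exists here only as a number a human chose."""
    danger_score = r.crash_rate + 0.5 * r.unlit_km   # arbitrary definition of danger
    return r.minutes + danger_weight * danger_score

routes = [
    Route("highway", minutes=22, crash_rate=1.2, unlit_km=0.0),
    Route("shortcut", minutes=17, crash_rate=3.8, unlit_km=4.0),
]

# The planner just picks the minimum cost; it has no concept of "safe enough",
# only whatever trade-off the danger_weight encodes.
best = min(routes, key=route_cost)
print(best.name)
```

Change `danger_weight` and the system’s “ethical” behaviour changes, even though nothing about its logic did. That’s the sense in which it isn’t making moral choices at all.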