Rhiishere t1_j29aun0 wrote

Well there’s some here that I agree with and some I don’t. You can’t program an AI to have the same moral code humans do. At the end of the day it’s a machine based on logic, and if our morals don’t align with that logic then nobody wins.

For GPS it’s the same thing: you say it makes ethical choices without being aware of it, but those choices are only “ethical” to you and to human beings in general. It doesn’t make ethical choices, it makes choices based on whatever best serves its algorithm, whatever makes the most sense given the data it has received, the outline of its job, and the needs of its users.

I’d even argue it would be more dangerous if we tried to program simple systems with our morals. Tell a simple AI running a factory that it’s not okay to kill people, and it’s not going to understand that the way we do. In what part of the factory does that rule apply? What counts as killing? What traits do the people who shouldn’t be killed display? Going back to GPS: what makes one route more dangerous than another? Where does the definition of danger top out, and what’s the baseline? Even for the simplest moral rule in the simplest AI, you have to spell out, clearly and exhaustively, everything surrounding that moral, all the context that just makes sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn’t work even at the most basic level, and it wouldn’t go the way any human being would think it should.
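To make that concrete, here’s a minimal sketch, in Python, of what a GPS-style “decision” could look like under the hood. Everything in it is invented for illustration (the Route fields, the weights, the risk_score), not any real routing engine’s API:

```python
# Hypothetical sketch: how a GPS-style router "chooses" a route.
# All names, weights, and scores here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float      # estimated travel time
    miles: float        # distance
    risk_score: float   # someone already had to reduce "danger" to a number

def route_cost(route, time_weight=1.0, distance_weight=0.1, risk_weight=5.0):
    # Lower is "better". The "ethics" live entirely in these weights:
    # with risk_weight=5.0 the risky shortcut wins (cost 22.7 vs 24.8);
    # bump it to 10.0 and the highway wins instead.
    return (time_weight * route.minutes
            + distance_weight * route.miles
            + risk_weight * route.risk_score)

routes = [
    Route("highway", minutes=22, miles=18, risk_score=0.2),
    Route("shortcut past a bad intersection", minutes=17, miles=12, risk_score=0.9),
]

# The "decision" is just picking the minimum cost; the router never knows
# what "dangerous" means, only whatever value it was handed.
best = min(routes, key=route_cost)
print(best.name)
```

The point of the sketch: the machine never “chooses” the safe route, it minimizes a number, and that number reflects danger only to the exact extent somebody already quantified danger as a score and a weight.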

Rhiishere t1_j25wn9p wrote

Whoa, I didn’t know that! That’s freaking amazing, in a very wrong way! Like, I knew something along those lines was possible in theory, but I hadn’t heard of anyone actually doing it!

Rhiishere t1_j25vsg3 wrote

That was a very interesting article. Something that has always confused me, though, and always will, is the drive humans have to make machines like themselves. Morality, good and evil, those are ideas specific to the human race. I’m not sure why we feel the need to impose them on AI.
