
Rhiishere t1_j25vsg3 wrote

That was a very interesting article. Something that has always confused me, though, and always will, is the drive humans have to make machines like themselves. Morality, good and evil: those are ideas specific to the human race. I'm not sure why we feel the need to impose them onto AI.

3

AndreasRaaskov OP t1_j28802z wrote

Something that was in the original draft, and that I probably should have emphasised more, is that artificial intelligence is not like human intelligence. What AI does is solve one specific problem better than humans can, while being unable to do anything outside that specific problem.

A good example is the pathfinding algorithm in a GPS, which finds the fastest route from A to B. It is simple, widely used, and performs an intelligent task far faster, and sometimes better, than a human could.
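(For the curious, here is a minimal sketch of the kind of shortest-path search a GPS relies on, using Dijkstra's algorithm over a made-up toy road network. Real routing engines use far more elaborate versions of the same idea.)

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the lowest-cost route from start to goal.

    graph maps each node to a list of (neighbour, travel_time) pairs.
    """
    # Priority queue of (cost so far, current node, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, travel_time in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + travel_time, neighbour, path + [neighbour]))
    return None  # no route found

# Toy road network, edge weights in minutes (invented numbers)
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(shortest_route(roads, "A", "D"))  # (7, ['A', 'C', 'B', 'D'])
```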

However, my article was about how even simple systems can be dangerous if they don't have a moral code.

Take the GPS again. First of all, "death by GPS" is a real phenomenon: it happens because the GPS doesn't evaluate how dangerous a route may be.

But even in more mundane settings, we see a GPS make ethical choices without being aware that it is making them. Suppose, for example, a GPS finds two routes to your destination: one is shorter, while the other is longer but faster because it uses the highway. You could argue it should take the short road to minimise CO2 impact; you could also consider the highway more dangerous for the driver, while taking the slow road may put pedestrians at risk. Some of the newest GPS systems also consider overall traffic based on real-time data, and they sometimes face a choice where sending some cars along a longer road avoids congestion, sacrificing some people's time in order to make the overall transport time shorter.
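To make that concrete, here is a hypothetical sketch (all route names, numbers, and weights are invented) of how a routing cost function quietly encodes those trade-offs: whichever weights the designer picks, somebody's time, emissions, or safety gets priced in.

```python
routes = [
    # name,        minutes, km,  pedestrian_exposure (0-1)
    ("highway",    22,      30,  0.1),
    ("local road", 30,      21,  0.7),
]

def route_cost(minutes, km, pedestrian_exposure,
               w_time=1.0, w_co2=0.0, w_risk=0.0):
    """Lower is 'better' -- but better for whom depends entirely on the weights."""
    co2_proxy = km  # rough proxy: emissions scale with distance
    return w_time * minutes + w_co2 * co2_proxy + w_risk * 100 * pedestrian_exposure

# A time-only GPS picks the highway; add a CO2 weight and the answer flips to the
# local road; add a pedestrian-risk weight instead and the highway wins again.
for w_co2, w_risk in [(0.0, 0.0), (1.5, 0.0), (0.0, 0.5)]:
    best = min(routes, key=lambda r: route_cost(r[1], r[2], r[3], w_co2=w_co2, w_risk=w_risk))
    print(f"w_co2={w_co2}, w_risk={w_risk} -> {best[0]}")
```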

2

Rhiishere t1_j29aun0 wrote

Well, there's some here I agree with and some I don't. You can't program an AI to have the same moral code humans do. At the end of the day it's a machine based on logic, and if our morals don't align with that logic, then nobody wins.

For GPS it's the same thing. You say it makes ethical choices unawares, but those choices are only ethical to you and to other human beings in general. It doesn't make "ethical choices"; it makes choices based on whatever best serves its algorithm, whatever makes the most sense given the data it has received, the outline of its job, and the needs of its users.

I'd even argue it would be more dangerous if we tried to program simple systems with our morals. Tell a simple AI running a factory that it's not okay to kill people, and it's not going to understand that the way we do. In which parts of the factory does the rule apply? What counts as killing? What traits identify the people who shouldn't be killed? Going back to GPS: what makes one route "more dangerous"? Where does the definition of danger start, and where does it stop? Even the simplest moral input into the simplest AI requires spelling out, clearly and exhaustively, everything surrounding that moral that just makes sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn't make sense even at the most basic level, and it wouldn't play out the way any human would think it should.
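(A deliberately naive, purely hypothetical sketch of what "spelling everything out" looks like in practice; every constant below is a definition a human never has to make explicit.)

```python
# Hypothetical "don't endanger people" rule for a factory robot.
PERSON_CONFIDENCE_THRESHOLD = 0.8   # when does a sensor blob count as a "person"?
DANGER_DISTANCE_METRES = 1.5        # how close is "too close"?
MAX_SAFE_FORCE_NEWTONS = 50.0       # how hard is "harmful"?

def action_is_permitted(detected_objects, planned_force):
    """Return True if the planned action is 'safe' under the definitions above."""
    for obj in detected_objects:
        looks_like_person = obj["person_confidence"] >= PERSON_CONFIDENCE_THRESHOLD
        too_close = obj["distance_m"] < DANGER_DISTANCE_METRES
        if looks_like_person and too_close and planned_force > MAX_SAFE_FORCE_NEWTONS:
            return False  # veto the action
    return True

print(action_is_permitted(
    [{"person_confidence": 0.9, "distance_m": 1.0}], planned_force=80.0))  # False

# Every edge case the rule silently ignores: a confidence of 0.79, a person at 1.6 m,
# indirect harm, harm caused by *stopping*... the "moral" is only as good as the definitions.
```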

2