
FuturologyBot t1_ix622pa wrote

The following submission statement was provided by /u/Gari_305:


From the Article

>“In war, unexpected things happen all the time. Outliers are the name of the game and we know that current AIs do not do a good job with outliers,” says Batarseh.
>
>To trust AIs, we need to give them something to have at stake
>
>Even if we solve this problem, there are still enormous ethical problems to grapple with. For example, how do you decide whether an AI made the right choice when it decided to kill? It is similar to the so-called trolley problem that is currently dogging the development of automated vehicles. It comes in many guises but essentially boils down to asking whether it is ethically right to let an impending accident play out, in which a number of people could be killed, or to take some action that saves those people but risks killing a smaller number of other people. Such questions take on a whole new dimension when the system involved is actually programmed to kill.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/z0jvzm/part_of_the_kill_chain_how_can_we_control/ix5xz8k/
