sproingie t1_j16qxu3 wrote

> If full automation was an absolute necessity, why not have several different sentient AI evaluating it constantly to ensure that very outcome didn't happen?

It may be that the inner workings of the AI are so opaque that we'd have no clue how to test them for hidden motivations. I also have to imagine there are parties that want exactly such an outcome, and would thus let their AI have free rein to do whatever it wants.

It's not the potential sentience of AI that disturbs me so much as the question of "Who do they work for?"
