
sproingie t1_j16qxu3 wrote

> If full automation was an absolute necessity, why not have several different sentient AI evaluating it constantly to ensure that very outcome didn't happen?

It may be that the inner workings of an AI are so opaque that we'd have no idea how to test it for hidden motivations. I also have to imagine there are parties that want exactly that outcome, and would give their AI free rein to do whatever it wants.

It's not the potential sentience of AI that disturbs me so much as the question of "Who do they work for?"


SendMePicsOfCat OP t1_j16t6qp wrote

Aside from the potential hidden motivations, I'm totally with you. Bad humans are way more of a problem for AGI than bad AGI is.

As for the hidden motivations, I just have to disagree that there's any evidence or reason to believe synthetic sentience will lead to motives or goals of its own. I can understand if you personally disagree, but I remain unconvinced, and I'm honestly baffled by how many people would agree with you.
