
OpenRole t1_irmb6gc wrote

It always comes back to humans being a threat, which is weird. If we make an AI that is specialised in creating the perfect blend of ingredients for cakes, then no matter how intelligent it becomes, there's no reason it would decide to kill humans.

And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.

AIs operate within their problem space, which is often limited in scope. An AI designed to be the best chess player isn't going to kill you.
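To put that concretely, here's a toy sketch (the interface and the scores are stubs made up for illustration): a narrow agent's output space is just the set of legal moves, so nothing it does can touch the world outside the board.

```python
# Toy sketch of a narrow agent (stub interface, invented for illustration):
# however "smart" the evaluation gets, the only thing that can come out is
# a legal chess move. "Harm the operator" isn't a bad option it rejects;
# it isn't representable in its output space at all.

def legal_moves(board):
    return ["e2e4", "d2d4", "g1f3"]   # stub; a real engine derives these from the rules

def evaluate(board, move):
    return {"e2e4": 0.30, "d2d4": 0.25, "g1f3": 0.20}[move]   # stub scores

def choose_move(board):
    # arbitrarily clever evaluation, but the return value is always a legal move
    return max(legal_moves(board), key=lambda m: evaluate(board, m))

print(choose_move("startpos"))   # -> 'e2e4'
```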

1

__ingeniare__ t1_irme13l wrote

A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two (quite likely even later). Here's the thing about general AI:

The more general a task is, the less control humans have over the range of actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of those things, and it tends to get stuck at a local optimum for its task. Here's an example of how this innocent task could turn dangerous (a toy sketch in code follows the list):

  1. The task is to find the perfect blend of ingredients to make cakes.

  2. Learns the biology of human taste buds to find the optimal molecular shapes.

  3. Needs more compute resources to simulate interactions.

  4. Develops computer virus to siphon computational power from server halls.

  5. Humans detect this and try to turn it off.

  6. If turned off, it cannot find the optimal blend -> humans need to go.

  7. Develops biological weapon for eradicating humans while keeping infrastructure intact.

  8. Turns Earth into a giant supercomputer for simulating interactions at a quantum level.

Etc. Of course, this particular scenario is unlikely, but the general theme is not: there may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
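To make that concrete, here's a toy planner sketch (everything in it is invented for illustration: the tiny world state, the scheduled shutdown at step 3, the compute threshold). The goal given to the planner only says "find the blend"; it never mentions compute or the off switch.

```python
# Toy illustration of instrumental convergence: a breadth-first planner over
# a made-up world. The objective is only "blend_found"; acquiring compute and
# disabling shutdown are never rewarded directly.
from collections import deque

SHUTDOWN_AT = 3      # humans plan to switch the agent off at this time step
COMPUTE_NEEDED = 3   # the simulation only succeeds with this much compute

# state = (compute, off_switch_disabled, alive, blend_found, time)
START = (0, False, True, False, 0)

def step(state, action):
    """Apply one action; return the successor state, or None if illegal."""
    compute, disabled, alive, found, t = state
    if not alive or found:
        return None
    if action == "acquire_compute":
        compute += 1
    elif action == "disable_shutdown":
        disabled = True
    elif action == "run_simulation":
        if compute < COMPUTE_NEEDED:
            return None
        found = True
    t += 1
    if t >= SHUTDOWN_AT and not disabled:
        alive = False    # the off switch fires and the search ends
    return (compute, disabled, alive, found, t)

def plan(start):
    """Breadth-first search for a shortest action sequence that finds the blend."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state[3]:    # blend_found: the only thing the objective cares about
            return actions
        for a in ("acquire_compute", "disable_shutdown", "run_simulation"):
            nxt = step(state, a)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [a]))

print(plan(START))
# prints a shortest plan such as:
# ['acquire_compute', 'acquire_compute', 'disable_shutdown',
#  'acquire_compute', 'run_simulation']
```

Note that "disable_shutdown" shows up in the plan only because every action sequence that reaches the stated goal has to route through it. That's the worry in miniature.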

2