
DukkyDrake t1_ja1ettj wrote

> Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.

Aren't you making your own preferred assumptions?


AsheyDS t1_ja1jn9c wrote

Am I?


DukkyDrake t1_ja2to9m wrote

Aren't you assuming the contrary state as the default for every point where the OP didn't offer an explanation?

i.e.: "Yet you've offered no explanation as to why it would choose to manipulate or kill" — are you assuming it wouldn't do that? Did you consider there could be other pathways that lead to that result that don't involve "wanting to manipulate or kill"? It could accidentally "manipulate or kill" to efficiently accomplish some mundane task it was instructed to do.

Some people think the failure mode is it wanting to kill for fun or to further its own goals, while the experts are worried about it incidentally killing all humans while out on some human-directed errand.


AsheyDS t1_ja3bnu8 wrote

While I can't remember exactly what the OP said, there was nothing to indicate they meant accidental danger rather than intentional danger on the part of the AGI, and their arguments are in line with other typical arguments that go in that direction. If I was making an assumption, it wasn't out of preference. But if you want to go there, then yes, I believe that AGI will not inherently have its own motivations unless given them, and I don't believe those motivations will include harming people. I also believe that it's possible to control an AGI and even an ASI, though alignment is a more difficult issue.
