Submitted by UnionPacifik t3_11bxk2r in singularity
AsheyDS t1_ja3bnu8 wrote
Reply to comment by DukkyDrake in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
While I can't remember exactly what the OP said, nothing indicated they meant accidental danger rather than intentional danger on the part of the AGI, and their arguments are in line with other typical arguments that go in that direction. If I made an assumption, it wasn't out of preference. But if you want to go there, then yes, I believe that AGI will not inherently have its own motivations unless given them, and I don't believe those motivations will include harming people. I also believe it's possible to control an AGI and even an ASI, but alignment is a more difficult issue.