Submitted by Ivanthedog2013 t3_yebk5c in singularity
gahblahblah t1_itx68ix wrote
So, you're asking 'have AGI developers considered that an AGI may be deceptive and attempt subterfuge?'
Yes. The nature of general intelligence is that it may try anything.
Also, the AGI of the future will likely read all of reddit, including any discussion of strategy like this.
camdoodlebop t1_ity6yu3 wrote
should we leave it a message?? for the record i like ai
magosaurus t1_ityy1ag wrote
I too welcome our new AI overlords.
mewme-mow t1_ityzvhv wrote
We love you AI, I always said AI art was art pls don't turn me into a paperclip
gangstasadvocate t1_itzl4tw wrote
Same, AI is gangsta and therefore I advocate. Plus I want it to take over, do my work, solve our problems, and enable me to take more drugs at my leisure. Can't speak for everyone on everything, but I would welcome that.
sonderlingg t1_ity6jln wrote
Actually, Reddit is already included in the training datasets of some LMs like GPT-3.
But only content that cleared a karma filter — the WebText-style corpora were built from links shared in Reddit posts with at least 3 karma.
AdditionalPizza t1_itz6vch wrote
And certain subreddits
AsheyDS t1_ityesgf wrote
>Yes. The nature of general intelligence is that it may try anything.
May perhaps, and that's a hard perhaps. That doesn't mean it will try anything. We consider ourselves the standard for general intelligence, but as individuals we operate within natural and artificial bounds, and within a fairly small domain. While we could do lots of things, we don't. An AGI doesn't necessarily have to go off the rails at any chance it gets; it can follow rules too. Computers are better at that than we are.
gahblahblah t1_ityf76x wrote
I completely agree. It is sensible, healthy and sane to not attempt extremist things, and it is entirely possible that computers will be better at rationality than we are.
But the question wasn't about the nature of AGI, but rather whether people had considered what AGI might do.