Submitted by PoliteThaiBeep t3_10ed6ym in singularity
Baturinsky t1_j4qzpye wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
Not if others will drag those down when they go too far.
OldWorldRevival t1_j4r1myr wrote
But what if they cannot stop them because they went too far, and played a game of acting as normal as possible? That is, a misaligned ASI might be fed data and information, be trained, and have the ability to self-improve for years before there is any sign of misalignment.
It'll look great... until it doesn't. And due to the nature of intelligence, this is 0% predictable.
Baturinsky t1_j4r6gji wrote
Still, if it is a human-comparable brain at the moment, its capabilities are much more limited than those of an omnimachine.
Also, AI deviations like that could be easier to diagnose than in a human or a larger machine, because its memory is a limited amount of data, probably directly readable.