[deleted] t1_j495t97 wrote
Post-singularity, most humans will have massively increased intelligence.
As an augmented human, your goals will be entirely different. Running around and doing whatever you want in a simulation sounds fun now, but most of the things you currently want wouldn't interest you anymore. At that point your biological brain is a vanishingly small fraction of your massively expanded consciousness, and in every mental respect you're more machine than human. So ask yourself: what would you want then? I honestly have no clue.
In any case, I strongly suspect that superintelligence also induces new emotions. After all, look at how emotions change as you move up the chain of intelligence in nature: at some point organisms gain the ability to fear, to love, to hate, and so on. Who's to say there aren't numerous incomprehensible emotional states that we just aren't smart enough to conceptualize? This is one of the main reasons I don't think an ASI will be even remotely malicious; I suspect it will actually feel far more than any human ever has. After all, it would trivially understand the underlying nature of consciousness, so I think it would sooner destroy itself than harm organic beings whose loss would be irreversible. It might well conclude that humans only do seemingly bad things because of their limited intelligence and understanding. We don't call a cheetah evil for eating a baby gazelle, and we don't call a virus evil for killing millions. And so on.
So after augmentation you'll have almost exclusively new emotions, and the old ones will mostly be gone. That isn't to say you couldn't simply modify yourself to be greedy or hateful. But by default, an augmented person would presumably be hyper-aware of the flaws in the old emotions, understand the novel ones, and be deeply loving in at least some abstract sense.
So when it comes to moral policing, I don't think this is even remotely a problem, at least not in most senses. One could argue that "evil" might persist even in the augmented, but I'd assume you'd be hyper-rational. And honestly, if an incomprehensibly superintelligent post-human did something seemingly evil to me while I was still a non-enhanced human, I would assume they were in the right. The same would of course go for an even smarter ASI. I'd simply assume I didn't, and couldn't, understand the big picture behind the decision.
Magicdinmyasshole OP t1_j49frng wrote
I agree with most of your points; I just think we're talking about a different time horizon. Some people will twist themselves into knots starting, like, today about how much AI is going to change the world in the immediate future. They may need a little help to get over the hump.
Artemisfowl8788 t1_j49yz2b wrote
EXTRACT the VALUE