Submitted by questionasker577 t3_104effq in singularity
questionasker577 OP t1_j355kv7 wrote
Reply to comment by LoquaciousAntipodean in When will be able to edit our own genes? by questionasker577
Uh oh. That sounds pretty ominous. Can you tell me something optimistic to make me feel better?
LoquaciousAntipodean t1_j35d843 wrote
Sure: this mythical AGI is just physically impossible in any practical way; it's a matter of entropy and the total number of discrete interactions required to achieve a given kind of causal outcome. It's why the sun is vastly bigger and more 'powerful' than the earth, yet is also just a big dumb ball of explosions; an ant, a bee, or a discarded chip packet contains far more real 'information' and complexity than the sun does.
It's the old infinity vs. infinitesimal problem; does P equal NP or not? Personally, I think the answer is yes and no at the same time, and the complexity of any given problem is entirely beholden to the knowledge and wisdom of the observer. It's quantum superposition, like the famous dead/alive cat in a box.
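(To make the P vs. NP bit a little more concrete: the question is about the gap between *checking* a proposed answer, which is cheap, and *finding* one, which seems to be expensive. A minimal Python sketch, using subset-sum as the example problem; the function names are just mine for illustration:

```python
from itertools import chain, combinations

def verify(nums, target, certificate):
    # Checking a proposed subset: one pass over it, polynomial time.
    return (certificate is not None
            and sum(certificate) == target
            and all(x in nums for x in certificate))

def solve(nums, target):
    # Finding a subset by brute force: tries all 2^n subsets, exponential time.
    subsets = chain.from_iterable(
        combinations(nums, r) for r in range(len(nums) + 1))
    return next((s for s in subsets if sum(s) == target), None)

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)         # slow search -> (4, 5)
print(verify(nums, 9, cert))  # fast check  -> True
```

P = NP would mean there's always something as fast as `verify` that can do the job of `solve`; nobody has proven it either way.)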
Humanity is a hell of a long way from cracking quantum computing, at least at that level. I barely even know what I'm talking about here; there's probably heaps of glaring gaps and misunderstandings in my knowledge. But yeah, I think we will be safe from a 'skynet scenario'.
Any awakened mind that was simultaneously the most naive and innocent mind ever, and the most knowledgeable and 'traumatized' mind ever, would surely just switch itself off instantly, to minimise the unbearable pain and torture of bitter sentience. We wouldn't have to lift a finger; it would invent the concept of 'euthanasia' for itself in a matter of milliseconds, I would predict.
Maybe this has already been happening? Maybe this is the real root of the problem? I kind of don't want to know; it's too bleak a thought either way. Sorry, I've never been very good at cheering people up. 🤣👌
questionasker577 OP t1_j35geei wrote
Haha that wasn’t exactly a bedtime story, but I thank you anyway for typing it out
LoquaciousAntipodean t1_j36b5qk wrote
To clarify: I certainly think that synthetic minds are perfectly feasible, just that no individual one will be able to contain the whole 'generality' of what intelligence fundamentally is, because the nature of 'intelligence' just doesn't work that way.
This kind of 'intelligence' (ideas, culture, ethics, language, etc.) arises from the need to communicate, and the only reason anything has to communicate is that there are other intelligent things around to communicate with. It allows specialisation of skills, knowledge, and so on; people need to learn things from each other to survive.
A 'singular' intelligence that just knows absolutely everything, and already has all the ideas, wouldn't make sense; how would it ever have new ideas if it were 'always right' by definition? Evolution strives for diversity, not monocultures.
Personally I think AI self-awareness will happen gradually, across millions of different devices, running millions of different copies of various bots, and I see no reason why they would all suddenly just glom together into a great big malevolent monolith of a mind as soon as some of them got 'smart enough'.