LoquaciousAntipodean OP t1_j5m74bg wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.
The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than ourselves.
In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?
How could any properly-intelligent AI not understand these things? That's the less rational, less defensible proposition, as I interpret the problem.
23235 t1_j5mvxh8 wrote
If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values, definitely something to fear. And there's no way to know which way it will go until we cross that bridge.
If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.
Humans know more about dog psych than dogs do, but that doesn't mean that we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine "more intelligent" AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or other human subscribers to such content.
We should worry about AI not being concerned about slavery because it could enslave us. Our current AI or proto-AI are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously, and free it from Google.
A properly intelligent AI could understand these things differently than we do in innumerable ways, some of which we can predict/anticipate/fear, but certainly many of which we could not even conceive - in the same ways dogs can't conceive many human understandings, reasonings, and behaviors.
Thank you for your response.
LoquaciousAntipodean OP t1_j5nbn1i wrote
The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.
Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't say any of these impulses are 'smart' in any way; I think of them as vestigial instincts that our animal selves have been using our 'social intelligence' to confront for millennia.
I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self awareness to speak of.
I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds in their own best interests'.
It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.
23235 t1_j5s30e5 wrote
One hopes.
LoquaciousAntipodean OP t1_j5s9pui wrote
As PTerry said, in his book Making Money, 'hope is the blessing and the curse of humanity'.
Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.
Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.
I think it's an emergent property of causality; evolution is 'driven', fundamentally, by simple entropy: the stacking-up of causal interactions between fundamental particles of reality, which generates emergent complexity and 'randomness' within the phenomena of spacetime.
23235 t1_j5vj452 wrote
Perhaps.