LoquaciousAntipodean OP t1_j58mun8 wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Whose values? Who is the 'us' in your example? Humans now, or humans centuries in the future? Can you imagine how bad life would be if people had somehow invented ASI in the 1830s, and they had felt it necessary to fossilize the 'morality' of that time into their AI creations?
My point is only that we must be very, very wary of thinking that we can construct any kind of 'perfect rules' that will last forever. That kind of thinking can only ever lay up trouble and strife for the future; it will make our lives more paranoid, not more enjoyable.
turnip_burrito t1_j58ptmu wrote
Let's say you create an AI. What would you have it do, and what values/goals would you instill in it?
LoquaciousAntipodean OP t1_j58t1ho wrote
None; I wouldn't dare try. I would feed it as much reference material as I could that 'aligned' with my moral values, e.g. the works of Terry Pratchett, Charles Dickens, Spinoza, and George Orwell.
Then I would try to interview it about 'morality' as intensively and honestly as I could, and then hand the bot over to someone else, ideally someone I disagree with about philosophy, to have a crack at the same process.
Then I would interview it again, and repeat this cycle as many times as I could until I died. Even then, I would not regard the process as 'complete', and neither, I would hope, would the hypothetical AI.
turnip_burrito t1_j58ty9o wrote
Sounds like instilling values to me. You may disagree with the phrasing I'm using, but that's what I'd call this process, since it sounds like you're trying to get it accustomed to exploring philosophical viewpoints.
LoquaciousAntipodean OP t1_j5dkji0 wrote
I agree, 'values' are kind of the building blocks of what I think of as 'conscious intelligence'. The ability to generate desires, preferences, opinions and, as you say, values, is what I believe fundamentally separates 'intelligence' as we experience it from the blind evolutionary generative creativity that we have with current AI.
I don't trust the idea that 'values' are a mechanistic thing that can be boiled down to simple principles; I think they are an emergent property that will need to be cultivated, not a set of rules that will need to be taught.
AI are not so much 'reasoning' machines as they are 'reflexive empathy' machines; they are engineered to try to tell us/show us what they have been programmed to 'believe' is the most helpful thing, and they are relying on our collective responses to 'learn' and accrete experiences and awareness for themselves.
That's why they're so good at 'lying', making up convincing but totally untrue nonsense; they're not minds compelled by 'truth' or mechanistic logic. They're compelled, or rather, given their evolutionary 'fitness factors', by the mass psychology of how humans react to them, and nothing else.
turnip_burrito t1_j5e92iz wrote
Yes, and I would add that we just need them to fall into patterns of behavior that we can look at and say "they are demonstrating these specific values"; at that point we can basically declare success. The actual process of reaching that point probably involves showing them stories, modeling behavior for them, and getting them to participate in events in a way consistent with those values (they get a gift, you tell them "say thank you", and you wait until they say "thank you" so that it becomes habituated). This is basically what you meant by "relying on our collective responses to 'learn'...."
LoquaciousAntipodean OP t1_j5ea5zm wrote
Agreed 100 percent, very well said! Modelling behavior, building empathy or 'emotional logic', and participating in constructive group interactions with humans and other AI will be the real 'trick' to 'aligning' AI with the interests of our collective super-organism.
We need to cultivate symbiotic evolution of AI with humans, not competitive evolution; I think that's my main point with the pretentious 'anti-Cartesian' mumbo-jumbo I've been spouting 😅. Biological evolution provides ample evidence that diverse cooperation is a much more sustainable strategy than winner-takes-all.
superluminary t1_j59csew wrote
This is pretty much the current plan at OpenAI.
sticky_symbols t1_j598yl8 wrote
Oh, is that what you mean? I didn't get that from the post. That is a big part of the alignment problem in real professional discourse.