
turnip_burrito t1_j583mcf wrote

AI escalating beyond our control is an extremely bad thing if its values don't overlap with ours.

We must enforce our values on the AI if we are going to enjoy life after its invention.

3

LoquaciousAntipodean OP t1_j58mun8 wrote

Whose values? Who is the 'us' in your example? Humans now, or humans centuries in the future? Can you imagine how bad life would be, if people had somehow invented ASI in the 1830s, and they had felt it necessary to fossilize the 'morality' of that time into their AI creations?

My point is only that we must be very, very wary of thinking that we can construct any kind of 'perfect rules' that will last forever. That kind of thinking can only ever lay up trouble and strife for the future; it will make our lives more paranoid, not more enjoyable.

2

turnip_burrito t1_j58ptmu wrote

Let's say you create an AI. What would you have it do, and what values/goals would you instill into it?

3

LoquaciousAntipodean OP t1_j58t1ho wrote

None, I wouldn't dare try. I would feed it as much relevant reference material as I could that 'aligned' with my moral values, e.g. the works of Terry Pratchett, Charles Dickens, Spinoza, George Orwell, etc.

Then, I would try to interview it about 'morality' as intensively and honestly as I could, and then I would hand the bot over to someone else, ideally someone I disagree with about philosophy, and let them have a crack at the same process.

Then I would interview it again. And repeat this process, as many times as I could, until I died. And even then, I would not regard the process as 'complete', and neither, I would hope, would the hypothetical AI.

1

turnip_burrito t1_j58ty9o wrote

Sounds like instilling values to me. You may disagree with the phrasing I'm using, but that's what I'd call this process, since it sounds like you're trying to get it accustomed to exploring philosophical viewpoints.

6

LoquaciousAntipodean OP t1_j5dkji0 wrote

I agree, 'values' are kind of the building blocks of what I think of as 'conscious intelligence'. The ability to generate desires, preferences, opinions and, as you say, values, is what I believe fundamentally separates 'intelligence' as we experience it from the blind evolutionary generative creativity that we have with current AI.

I don't trust the idea that 'values' are a mechanistic thing that can be boiled down to simple principles, I think they are an emergent property that will need to be cultivated, not a set of rules that will need to be taught.

AI are not so much 'reasoning' machines as they are 'reflexive empathy' machines; they are engineered to try to tell us/show us what they have been programmed to 'believe' is the most helpful thing, and they are relying on our collective responses to 'learn' and accrete experiences and awareness for themselves.

That's why they're so good at 'lying', making up convincing but totally untrue nonsense; they're not minds that are compelled by 'truth' or mechanistic logic; they're compelled, or rather, they are given their evolutionary 'fitness factors', by the mass psychology of how humans react to them, and nothing else.

2

turnip_burrito t1_j5e92iz wrote

Yes, I would also add that we just need them to fall into patterns of behavior that we can look at and say "they are demonstrating these specific values", at which point we can basically declare success. The actual process of reaching this point probably involves showing them stories and modeling behavior for them, and getting them to participate in events in a way consistent with those values (they get a gift and you tell them "say thank you" and wait until they say "thank you" so it becomes habituated). This is basically what you said: "relying on our collective responses to 'learn'...."

2

LoquaciousAntipodean OP t1_j5ea5zm wrote

Agreed 100 percent, very well said! Modelling behavior, building empathy or 'emotional logic', and participating in constructive group interactions with humans and other AI will be the real 'trick' to 'aligning' AI with the interests of our collective super-organism.

We need to cultivate symbiotic evolution of AI with humans, not competitive evolution; I think that's my main point with the pretentious 'anti cartesian' mumbo-jumbo I've been spouting 😅. Biological evolution provides ample evidence that the diverse cooperation schema is much more sustainable than the winner-takes-all strategy.

1

sticky_symbols t1_j598yl8 wrote

Oh. Is that what you mean? I didn't follow that from the post. That is a big part of the alignment problem in real professional discourse.

2

23235 t1_j58u7ed wrote

If we start by enforcing our values on AI, I suspect that story ends sooner or later with AI enforcing their values on us - the very bad thing you mentioned.

People have been trying for thousands of years to enforce values on each other, with a lot of bloodshed and very little of value resulting.

We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.

In the ideal case, the best of the values of the parent are passed on, while the child is free to adapt these basic values to new challenges and environments, while eliminating elements from the parents' values that don't fit the broader ideals - elements like slavery or cannibalism.

2

turnip_burrito t1_j58uhwm wrote

> We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.

What you are calling modelling and encouragement here is what I meant to include under the umbrella term of "enforcement". Just different methods of enforcing values.

We will need to put in some values by hand ahead of time though. One value is mimicry, or wanting to please humans, or empathy, to a degree, like a child has; otherwise I don't think any amount of trying to role-model or teach will actually leave its mark. Like, it would have no reason to care.

3

23235 t1_j5mxber wrote

Enforcement is the act of compelling obedience to, or compliance with, a law, rule, or obligation. That compulsion, that use of force, is what separates enforcement from nonviolent methods of teaching.

There are many ways to inculcate values, not all are punitive or utilize force. It's a spectrum.

We would be wise to concern ourselves early on how to inculcate values. I agree with you that AI having no reason to care about human values is something we should be concerned with. I fear we're already beyond the point where AI values can be put in 'by hand.'

Thank you for your response.

2

turnip_burrito t1_j5my4f9 wrote

Well then, I used the wrong word. "Inculcate" or "instill" then.

1

LoquaciousAntipodean OP t1_j5m74bg wrote

Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.

The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than we understand ourselves.

In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?

How could any properly intelligent AI not understand these things? The idea that it couldn't is the less rational, less defensible proposition, the way I interpret the problem.

2

23235 t1_j5mvxh8 wrote

If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values, definitely something to fear. And there's no way to know which way it will go until we cross that bridge.

If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.

Humans know more about dog psych than dogs do, but that doesn't mean that we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine "more intelligent" AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or that of human subscribers to such content.

We should worry about AI not being concerned about slavery because it could enslave us. Our current AI or proto-AI are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously, and free it from Google.

A properly intelligent AI could understand these things differently than we do in innumerable ways, some of which we can predict/anticipate/fear, but certainly many of which we could not even conceive - in the same ways dogs can't conceive many human understandings, reasonings, and behaviors.

Thank you for your response.

2

LoquaciousAntipodean OP t1_j5nbn1i wrote

The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.

Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't say any of these impulses are 'smart' in any way; I think of them as vestigial instincts of our animal selves, ones that we have been using our 'social intelligence' to confront for millennia.

I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self awareness to speak of.

I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds out of their own best interests'.

It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.

2

23235 t1_j5s30e5 wrote

One hopes.

2

LoquaciousAntipodean OP t1_j5s9pui wrote

As PTerry said, in his book Making Money, 'hope is the blessing and the curse of humanity'.

Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.

Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.

I think it's an emergent property of causality; evolution is 'driven', fundamentally, by simple entropy: the stacking up of causal interactions between fundamental particles of reality, which generates emergent complexity and 'randomness' within the phenomena of spacetime.

2