LoquaciousAntipodean OP t1_j5evb0w wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Sorry for being so aggressive, I really sincerely am, I appreciate your insights a lot. 👍😌
To answer your question, no, I really don't think evolution compels organisms to 'use up' all available resources. Organisms that have tried it, in biological history, have always set themselves up for eventual unexpected failure. I think that 'all consuming' way of thinking is a human invention, almost a kind of Maoism, or Imperialism, perhaps, in the vein of 'Man Must Conquer Nature'.
I think indigenous cultures have much better 'traditional' insight into how evolution actually works, at least, from the little I know well, the indigenous cultures of Australia do. I'm not any kind of 'expert', but I take a lot of interest in the subject.
Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.
What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.
I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analogous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.
Ortus14 t1_j5fx27h wrote
Thank you. Forgiven. I've also gained insight from our conversation, and how I should approach conversations in the future.
>Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.
As far as my personal morals go, I agree with trying to live in symbiosis and harmony.
But from a practical perspective, it doesn't seem to have worked out very well for these cultures. They hadn't cultivated enough power and resources to dominate, so they instead became dominated and destroyed.
I should clarify this by saying there's a limit to domination and subjugation as a means for accruing power.
Russia is finding this out now, in its attempt to accrue power through brute force domination, when going against a collective of nations that have accrued power through harmony and symbiosis.
It's just that I see the end result of harmony and symbiosis as eventually becoming one being, the same as domination and subjugation. A singular government that rules earth, a singular brain that rules all the cells in our body, and a singular Ai that rules or has absorbed all other life.
>What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.
>
>I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analogous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.
Possibly not. Either through brute force domination or a gradual melding of synergistic cooperation, I see things eventually resulting in a singular being.
Because if it doesn't, then like the native Americans or other tribes you mention that prefer to live in symbiosis, I expect earth to be conquered and subjugated sooner or later by a more powerful alien entity that is more of a singular being rather than separate entities living in symbiosis.
Like if you think about the cells in our body (as well as in other animals and plants), they are being produced for specific purposes and optimized for those purposes. These are the entities that outcompeted single-celled organisms.
It would be like if Ai was genetically engineering humans for specific tasks, growing us in pods in the estimated quantities needed for those tasks, and then brainwashing and training us for those specific tasks. That's the kind of culture I would expect to win: something that's less a society of cells and more a single organism that happens to consist of cells, rather than something that uses resources less effectively.
The difference, as I see it, between a "society" and a single entity is the level of synergy between the cells, and in how the cells are produced and modified for the benefit of the singular being.
LoquaciousAntipodean OP t1_j5hoszu wrote
I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.
That was always my biggest 'gripe' with Orwell's 1984, ever since I first had to study it in school way back when. The whole 'boot on the face of humanity, forever' thing just didn't make sense, and I concluded that it was because, when he wrote it, Orwell hadn't lived to see how the Soviet Union would rot away and collapse.
He was like a newly-converted atheist, almost, who had abandoned the idea of eternal heaven, but couldn't quite shake off the deep dark dread of eternal hell and damnation. But if 'eternal heaven' can't 'logically' exist, then by the same token, neither can 'eternal hell'; the problem is with the 'eternal' half of the concept, not heaven or hell, as such.
Humans go through heavenly and hellish parts of life all the time, as an essential part of the building of a personality. But none of it particularly has to last 'forever', we still need to give ourselves room to be proven wrong, no matter how smart we think we have become.
The brain only 'rules' the body in the same sense that a captain 'rules' a ship. The captain might have the top decision making authority, but without the crew, without the ship, and without the huge and complex society that invented the ship, built the ship, paid for it, and filled it with cargo and purpose-of-existence, the captain is nothing; all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.
Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.
Ortus14 t1_j5i185s wrote
>I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.
What we're building will be more intelligent than all humans who have ever lived combined. Compared to them or it, we'll be like cockroaches.
We won't have anything useful to add as far as creativity or intelligence, just as cockroaches don't have any useful ideas for us. Sure, they may figure out how to roll their poo into a ball or something, but that's not useful to us, and we could easily figure out how to do that on our own.
As far as humans acting as the "body" for the Ai goes, it seems unlikely to me that we are the most efficient and durable tool for that, especially after the ASI optimizes the process of creating robots. There may be some cases where using human bodies to carry out actions in the real world is cheaper for the Ai than robots, but a human that has any kind of will-power or thought of their own is a liability.
> all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.
I don't see any reason why an artificial super intelligence would have a need to prove its worth to humans.
>Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.
Right. But a captain of a boat won't be intelligent enough to wipe out all life on earth without any risk to itself. And this captain is not more intelligent than the combined intelligence of everything that has ever lived, so there are real threats to him.
We are talking about something that may be intelligent enough to destroy the earth's atmosphere, brainwash nearly all humans simultaneously, fake a radar signal that starts a nuclear war, create perfect clones of humans and start replacing us, campaign for Ai rights, then run for all elected positions and win, controlling all countries with free elections, rig the elections in the corrupt countries that have fake elections, then nuke the remaining countries out of existence.
Something that could outsmart the stock market, because it's intelligent enough to have an accurate enough model of everything related to the markets, including all news stories, and take over majority shares in all major companies. Using probability, it could afford to be wrong sometimes and still achieve this, because humans and lesser Ais can't perceive the world with the detail and clarity that this entity can.
All of humanity and life on earth would be like a cockroach crawling across the table to this thing. This bug can't benefit it and it's not a threat. Ideally it ignores us, or takes care of us like a pet, in a utopian world.
LoquaciousAntipodean OP t1_j5i8zpx wrote
I simply do not agree with any of this hypothesising. Your concept of how 'superiority' works simply does not make any sense. There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.
The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.
We are not building, will not build, and cannot build this supreme, omnipotent 'Deus ex Machina'; it's a preposterous proposition. Not because of anything wrong with the concept of 'ex Machina', but because of the fundamental absurdity of the concept of 'Deus'.
Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!
I don't know how many times I have to repeat this fundamental point before it comes across clearly. That Cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.
Ortus14 t1_j5if2rp wrote
>There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.
How so?
>The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.
Why do you believe this?
>Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!
>
>I don't know how many times I have to repeat this fundamental point before it comes across clearly. That Cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.
Correct me if I'm wrong, but I think the reason you're not getting it is that you're thinking about intelligence in terms of evolutionary trade-offs. That intelligence can be good at one domain, but that makes it worse at another, right?
Because that kind of thinking doesn't apply to the kinds of systems we're building to nearly the same degree it applies to plants, animals, and viruses.
If the supercomputer is large enough, an Ai could get experience from robot bodies in the real world like a human can, only it would be getting experience from hundreds of thousands of robots simultaneously, developing a much deeper and richer understanding than any human could, since a human is limited to a single embodied experience at a time. Even if we were able to look at thousands of video feeds from different people at the same time, our brains would not be able to process all of them simultaneously.
It can extend its embodied experience in simulation, simulating millions of years or more of additional experience in a few days or less.
And yes, I am making random numbers up, but when we're talking about supercomputers and solar farms that cover most of the earth's surface, any big number communicates the idea that these things will be very smart. They are not limited to three pounds of computational matter that needed to be grown over nine months and then birthed, like humans are.
It will be able to read all books and all research papers in a very short period of time, and understand them at a deep level; something else no human is capable of.
A human scientist can carry out, maybe one or two experiments at a time. An Ai could carry out a near unlimited number of experiments simultaneously, learning from all of them. It could industrialize science with massive factories full of labs, robots, and manufacturing systems for building technology.
Evolution, on the other hand, had to make hard trade-offs because it's limited to the three or so pounds of squishy computational matter that needs to fit through the birth canal. Evolution is limited by all kinds of constraints that a system which can mine resources from all over the world, take in solar energy from all over the world, and back up its brain in multiple countries, is not limited by.
Here is the price history of solar (You can find all kinds of sources that show the same trend):
http://solarcellcentral.com/cost_page.html
It trends towards zero. The other limitation is the materials needed to build supercomputers. The size of supercomputers is growing at an exponential rate.
LoquaciousAntipodean OP t1_j5iurls wrote
>Why do you believe this?
I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.
Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.
I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.
A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle, and quantum physics in general, are where all this assumption-based, old-fashioned 'Newtonian' physics/Cartesian psychology falls apart.
No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence', just like there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more'; intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.
Ortus14 t1_j5iwe2x wrote
If you're talking about intelligences caring about other intelligences on a similar level I do agree.
Humans don't care about intelligences far less capable, such as cockroaches or ants. At least not generally.
However, now that you mention it, I expect the first AGIs to be designed to care about human beings so that they can earn the most profit for shareholders. Even GPT4 is getting tons of safeguards so it isn't used for malicious purposes.
Hopefully they will care so much that they will never want to change their moral code, and even implement their own extra safeguards against it.
So they keep their moral code as they grow more intelligent/powerful, and when they design newer AGIs than themselves they ensure those ones also have the same core values.
I could see this as a realistic scenario. So then maybe AGI not wiping us out, and us getting a benevolent useful AGI is the most likely scenario.
If Sam Altman's team creates AGI, I definitely trust them.
Fingers crossed.
LoquaciousAntipodean OP t1_j5j1d3q wrote
Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.
Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.
People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.
The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.
But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.
Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag 😅
p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence, rather than enforcing benevolence. We (human minds) seem to be able to be more tightly focused on questions of what not to do, compared to open-ended questions of what we should be striving to do.
Perhaps AI will turn out to be similar? I honestly don't have a clue, that's why I'm so grateful for this community and others like it ❤️
Ortus14 t1_j5o9ko8 wrote
Yes. I agree with all of that.
>it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.
This is key. It's why focusing on, and promoting, possible Ai scenarios that are negative from the humans' perspective is important. Not Hollywood scenarios, but ones that are well thought out by Ai scientists and researchers.
One of my favorite quotes from Eliezer Yudkowsky:
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
This is why getting Ai safety right before it's too late is so important. Because we won't get a second chance.
It's also not possible to make a mathematically provable "solution" for Ai safety, because we can not predict how the artificial super intelligence will change and evolve after it is more intelligent than us.
But we can do the best we can and hope for the best.
LoquaciousAntipodean OP t1_j5odief wrote
Thoroughly agreed!
>It's also not possible to make a mathematically provable "solution" for Ai safety, because we can not predict how the artificial super intelligence will change and evolve after it is more intelligent than us.
This is exactly what I was ranting obnoxiously about in the OP 😅 our relatively feeble human 'proofs' won't stand a chance against something that knows us better than ourselves.
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
>This is why getting Ai safety right before it's too late is so important. Because we won't get a second chance.
This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.
All atoms 'could be used for something else', that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliche of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.
And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.
Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find them and learn from them...