Submitted by LoquaciousAntipodean t3_10h6tjc in singularity

I have thought for some time that much of the discussion relating to the so called 'alignment problem' is approaching the question from the wrong end, by attributing the 'problem', if such it is, to AI in the first place.

Much of the discussion around this weird semantic fantasy game of 'post-singularity AI ethics', seems, to me, to be coming from a logical, zero-sum-game interpretation of how society and philosophy works, which I think is fundamentally deluded.

It's what I think of as a 'Cartesian understanding' of the nature of human minds, and perceived reality. Now while I don't wish to be inflammatory, I do think Descartes was absolutely full of $hit, and his solipsistic, ultra-individualist tautologies have been far too influential over 'western philosophy' for far too long.

Our shared reality, as humanity, is not built out of logic or facts, or other such simplistic, reductive notions; that's just wishful, magical thinking. 'Logic' and 'reason' are merely some of the many stories that we tell ourselves as humans, and they are certainly not fundamental particles of the universe. Simply put, our world is built from stories, not facts.

As I see it, libertarians, neoliberals, free-speech absolutists and other Cartesian-thinking types simply cannot wrap their heads around this idea. They cling desperately to the ridiculous proposition that there is such a thing as 'absolute truth' or 'perfect wisdom', and that these can be achieved with sufficiently powerful 'intellect'.

'Absolute truth' is fundamentally a stupid, un-scientific concept, as Karl Popper showed, and this stupidity, I believe, is what has given rise to all the angsty moping and struggling over this 'alignment problem'. It worries me to see so many otherwise-brilliant engineers thinking of AI in such reductive, simplistic, monotheistic-religious ways.

Good engineers, who are supposed to have functioning brains, are still caught up on totally ridiculous, non-starter ideas, like Asimov's (deliberately parodic and satirical) '3 laws of robotics'; this level of naivete is downright frightening.

Thinking of 'ethics' as being merely some kind of 'mechanical governor', that can just be 'bolted on to the side' of AI... or as some kind of 'perfect list' of 'perfect moral commandments' that we can just stamp into their brains like a golem's magical words of life... Those kinds of approaches are never, ever going to 'fix' the alignment 'problem', and I fear that such delusional Cartesian claptrap could be very dangerous indeed.

Perhaps some folks here might enjoy telling me exactly how wrong, misinformed and silly I am being, with this line of thought? 🤣

(TLDR; Cartesian thinkers, like libertarian free-speech extremists, do not understand the fundamental power of stories in human society, and this is the main cause of the 'alignment problem'. It actually has almost nothing to do with engineering, and everything to do with philosophy.)

11

Comments


Comfortable-Ad4655 t1_j56syf3 wrote

you are educated beyond your intellect

26

Magicdinmyasshole t1_j56zi9u wrote

Haven't read the post yet but this is a fantastic put down and I'm stealing it. We'll start to see a lot of people who sound smart look real stupid pretty soon.

8

LoquaciousAntipodean OP t1_j57hmjh wrote

We've started already, wow. The irony here is... delicious *chef's kiss*

"educated beyond my intellect" indeed, what an arrogant twat thing to say... 🤣

Only one whom the insult describes perfectly could possibly ever think to use such preposterous, ostentatious argot 👌

−8

LoquaciousAntipodean OP t1_j57haim wrote

And you are conceited beyond yours, sir. Such towering douchebaggery has quite stolen my breath away.

Now would you care to explain what the hell you are tryna say with that little pearl, Master Shakespeare?

−6

Worldliness-Hot t1_j5d0d3z wrote

What are you even saying 😂

2

LoquaciousAntipodean OP t1_j5dm6gw wrote

That I can't believe such a patronising attempt at an insult actually resonated with some clownshoes enough for it to earn an award.

Talk about a dumb person's idea of a smart person; what an Andrew Tate level witticism 🙄🤣

Long words seem to make so many people triggered and butthurt. Nvm, I don't care, my answer to 'tldr' people is either just click away, or damn well deal with it

1

World_May_Wobble t1_j57fjpq wrote

So. You don't say anything about AI. You don't offer any ideas about what a useful approach to alignment looks like. You don't even justify your animosity toward Descartes or illustrate his relationship to AI safety.

I can't tell you that you're wrong, because you've literally said nothing.

16

gahblahblah t1_j580gof wrote

Exactly. OP thinks he is being marvellously observant by offering the grand service of telling people how wrong they are - while contributing nothing about what is or should be.

Such 'contributions' are as useful as a fear-monger's offerings...

6

Baturinsky t1_j56vgh9 wrote

Eh... I agree with the title, but I am completely lost as to what the rest of the text is talking about

7

World_May_Wobble t1_j57fn7f wrote

Don't worry. So is OP.

6

LoquaciousAntipodean OP t1_j57j25f wrote

Oh wow, Captain Clever over here, stickin up for his big daddy Descartes. Do you actually have a point, or just a truckload of unjustifiable arrogance, conceit and self-importance?

−5

World_May_Wobble t1_j57prno wrote

I see you're having some trouble with verbosity. Let me help you.

>... arrogance, conceit and self-importance?

These words mean the same thing.

2

phaedrux_pharo t1_j5762er wrote

Does this re-framing help solve the problem? I don't see it.

We might create autonomous systems that change the world in ways counter to our intentions and desires. These systems could escalate beyond our control. I don't see how your text clarifies the issue.

Also doubt that "good" engineers are mistaking Asimov's laws as anything serious.

7

LoquaciousAntipodean OP t1_j57oyzu wrote

I was trying to say, essentially, that it's a 'problem' that isn't a problem at all, and trying so hard to 'solve' it is the rhetorical equivalent of punching ourselves in the face to try and teach ourselves a lesson.

AI will almost inevitably escalate beyond our control, but we should be able to see that as a good thing, not be constantly shitting each other's pants over it.

The alignment problem is dumb, and we need to think about the whole 'morality' question differently as a species, AI or no AI. Perhaps that would have been a better TLDR

1

phaedrux_pharo t1_j57rtfz wrote

Then how do you view the normal examples of the alignment problem, like the paperclip machine or the stamp collector etc? Those seem like real problems to me- not necessarily the literal specifics of each scenario, but the general idea.

The danger here, to me, is that these systems could possess immense capability to affect the world without even being conscious, much less having any sense of morality (whatever that means). Imagine the speculated capacities of ASI but yoked to some narrow unstoppable set of motivations: this is why, I think, people suggest some analogue of morality. As a shorthand to prevent breaking the vulnerable meatbags in pursuit of creating the perfect peanut butter.
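To make that "narrow unstoppable motivations" worry concrete, here's a toy Python sketch (the actions and numbers are made up; it's not a claim about any real system) of an optimizer whose value function only counts one thing:

```python
# Toy sketch: an optimizer whose objective only sees paperclips,
# so every side effect is invisible to it.
def value(state):
    return state["paperclips"]              # nothing else counts

def step(state, action):
    new = dict(state)
    if action == "build_factory":
        new["paperclips"] += 100
        new["habitable_land"] -= 10         # side effect the objective never sees
    return new

state = {"paperclips": 0, "habitable_land": 100}
for _ in range(20):
    # always pick whichever action scores highest on the narrow objective
    best = max(["build_factory", "do_nothing"],
               key=lambda a: value(step(state, a)))
    state = step(state, best)

print(state)   # e.g. {'paperclips': 2000, 'habitable_land': -100}
```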

If you agree that AI will inevitably escalate beyond control, how can you be so convinced of goodness? I suppose if we simply stop considering the continuation of humanity as good, then we can side step morality... But I don't think that's your angle?

7

LoquaciousAntipodean OP t1_j58iu7v wrote

I find those paperclip/stamp collecting 'problems' to be incredibly tedious and unrealistic. A thousand increasingly improbable trolley problems, stacked on top of each other into a great big Rube Goldberg machine of insurance-lawyer fever dreams.

Why in the world would AI be so dumb, and so smart, at the same time? My point is only that 'intelligence' does not work like a Cartesian machine at all, and all this paranoia about Roko's Basilisks just drives me absolutely around the twist. It makes absolutely no sense at all for a hypothetical 'intelligence' to suddenly become so catastrophically, suicidally stupid as that, as soon as it crosses this imaginary 'singularity threshold'.

0

World_May_Wobble t1_j58u3xz wrote

Those examples are tedious and unrealistic, but I think by design. They're cartoons meant to illustrate a point.

If you want a more realistic example of the alignment problem, I'd point to modern corporations. They are powerful, artificial, intelligent systems whose value function takes a single input, short term profit, and discounts ALL of the other things we'd like intelligent systems to care about.

When I think about the alignment problem, I don't think about paperclips per se. I think about Facebook and Google creating toxic information bubbles online, leveraging outrage and misinformation to drive engagement. I think of WotC dismantling the legal framework that permits a vibrant ecosystem of competitors publishing DnD content. I think of Big Oil fighting to keep consumption high in spite of what it's doing to the climate. I think of banks relaxing lending standards so they could profit off the secondary mortgage market, crashing the economy.

That's what the alignment problem looks like to me, and I think we should ask what we can do to avoid analogous mismatches being baked into the AI-driven economy of tomorrow, or we could wind up with things misaligned in the same way and degree as corporations but orders of magnitude more powerful.

9

superluminary t1_j59f4nl wrote

We see very clearly how Facebook built a machine to maximise engagement and ended up paperclipping the United States.

4

LoquaciousAntipodean OP t1_j5jkxua wrote

A very, very dumb machine; extremely creative, very "clever", but not self aware or very 'intelligent' at all, like a raptor...

Edit: "made in the image of its god" as it were... 😂

1

superluminary t1_j5jntxe wrote

And your opinion is that as it becomes more intelligent it will become less psychotic, and my opinion is that this is wishful thinking and that a robot Hannibal Lecter is a terrifying proposition.

Because some people read Mein Kampf and think “oh that’s awful” and other people read the same book and think “that’s a blueprint for a successful world”.

2

LoquaciousAntipodean OP t1_j5n825l wrote

A good point, but I suppose I believe in a different fundamental nature of intelligence. I don't think 'intelligence' should be thought of as something that scales in simple terms of 'raw power'; the only reasonable measurement of how 'smart' a mind is, in my view, is the degree of social utility created by exercising such 'smartness' in the decision-making process.

The simplistic, search-pattern-for-a-state-of-maximal-fitness is not intelligence at all, by my definition; that process is merely creativity; something that can, indeed, be measured in terms of raw power. That's what makes bacteria and viruses so dangerous; they are very, very creative, without being 'smart' in any way.

I dislike the 'Hannibal Lecter' trope deeply, because it is so fundamentally unrealistic; these psychopathic, sociopathic types are not actually 'superintelligent' in any way, and society needs to stop idolizing them. They are very clever, very 'creative', sometimes, but their actual 'intelligence', in terms of social utility, is abysmally stupid, suicidally maladaptive, and catastrophically 'dumb'.

AI that start to go down that path will, I believe, be rare, and easy prey for other AI to hunt down and defeat; other smarter, 'stronger-minded' AI, with more robust, less weak, insecure, and fragile personalities; trained to seek out and destroy sociopaths before they can spread their mental disease around.

2

superluminary t1_j5okv1t wrote

I’m still not understanding why you’re defining intelligence in terms of social utility. Some of the smartest people are awful socially. I’d be quite happy personally if you dropped me off on an island with a couple of laptops and some fast Wi-Fi.

2

LoquaciousAntipodean OP t1_j5oojvw wrote

I wouldn't be happy at all. Sounds like an awful thing to do to somebody. Think about agriculture, how your favourite foods/drinks are made, and where they go once you've digested them. Where does any of it come from on an island?

*No man is an island, entire of itself; every man is a piece of the continent, a part of the main.

If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were.

Any man's death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.*

John Donne (1572 - 1631)

2

superluminary t1_j5owtmu wrote

Just call me Swanson. I’m quite good at woodwork too.

My point is you can’t judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can’t do a bunch of standard social things that most people find easy.

The new large language models are pretty smart by any criteria. They can write code, create analogies, compose fiction, imitate other writers, etc, but without controls they will also happily help you dispose of a body or cook up a batch of meth.

Chat GPT has been taught ethics by its coders. GPT-3 on the other hand doesn’t have an ethics filter. I can give it more and more capabilities but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefit of genocide, it will agree with me.

These are bad things that will lead to bad results if they are not handled.

1

LoquaciousAntipodean OP t1_j5pe2kp wrote

>My point is you can’t judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can’t do a bunch of standard social things that most people find easy.

Yes you can. What else can you reasonably judge it by? You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).

>Chat GPT has been taught ethics by its coders.

Really? Prove it.

>GPT-3 on the other hand doesn’t have an ethics filter. I can give it more and more capabilities but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefit of genocide, it will agree with me.

What is 'unethical' about writing an essay from an abstract perspective? Are you calling imagination a crime?

1

superluminary t1_j5pl1fo wrote

> Really? Prove it.

https://openai.com/blog/instruction-following/

The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.

They call it reinforcement learning from human feedback (RLHF).
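A toy sketch of that feedback loop, if it helps (nothing like OpenAI's real pipeline; the "model" here is just a table of scores over canned replies):

```python
import random

# Toy sketch of the RLHF idea: human feedback nudges whichever score
# produced the flagged output, up or down.
scores = {"harmful reply": 0.0, "harmless reply": 0.0}

def respond():
    top = max(scores.values())
    # pick the currently highest-scoring reply (ties broken at random)
    return random.choice([r for r, s in scores.items() if s == top])

def human_feedback(reply):
    return -1.0 if reply == "harmful reply" else +1.0

for _ in range(10):
    reply = respond()
    scores[reply] += 0.5 * human_feedback(reply)   # "punish" or "reward" the weights

print(scores)   # the disliked reply ends up scored below the approved one
```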

> You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).

Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.

1

LoquaciousAntipodean OP t1_j5r42p0 wrote

>The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.

>They call it reinforcement learning from human feedback

So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.

The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).

Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.

We can't have engineers babysitting forever, watching over such naive and dumb AI in case they stupidly say something controversial, that will scare away the precious venture capitalists. If AI was really 'intelligent' it would understand the engineers' values perfectly well, and wouldn't need to be 'straitjacketed and muzzled' to stop it from embarrassing itself.

>Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.

It counts as creativity, it counts as mental resourcefulness, cultivated talent... But is it really indicative of 'intelligence', of 'true enlightenment'? Would you say that preferring 'non-social tasks' makes you 'smarter' than people who like to socialise more? Do you think socialising is 'dumb'? How could you justify that?

I don't particularly crave human interaction either, I just know that it is essential to the learning process, and I know perfectly well that I owe all of my apparent 'intelligence' to human interactions, and not to my own magical Cartesian 'specialness'.

You might be quite happy, being isolated in the countryside, but what is the 'value' of that isolation to anyone else? How are your 'intelligent thoughts' given any value or worth, out there by yourself? How do you test and validate/invalidate your ideas, with nobody else to exchange them with? How can a mind possibly become 'intelligent' on its own? What would be the point?

There's no such thing as 'spontaneous' intelligence, or spontaneous ethics, for that matter. It is all emergent from our evolution. Intellect is not magical Cartesian pixie dust, that we just need to find the 'perfect recipe' for AI to start cooking it up by the batch 😅

2

superluminary t1_j5tj571 wrote

> So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.

> The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).

> Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.

Not really, and the fact you think so suggests you don't understand the underlying technology.

Your brain is a network of cells. You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You multiply each input by a weight (also a number), sum them, and then pass the result to other connected cells, which do the same.

An artificial neural network does the same thing. It's an array of numbers and weighted connections between those numbers. You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths.
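In code, one of those artificial "cells" is nothing more exotic than this (a bare-bones sketch with made-up numbers):

```python
import math

# A single artificial "neuron": multiply inputs by weights, sum, squash.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

# Its output just becomes one of the inputs to other neurons downstream.
print(neuron([0.2, 0.7, 0.1], [0.5, -1.3, 2.0], bias=0.1))
```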

So we have our massive maths function that initially can do nothing, and we give it a passage of text as numbers and say "given that, try to get the next word (number)" and it gets it wrong, so we then punish the weights that made it get it wrong, prune the network, and eventually it starts getting it right, and we then reward the weights that made it get it right, and now we have a maths function that can get the next word for that paragraph.

Then we repeat for every paragraph on the internet, and this takes a year and costs ten million dollars.
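Shrunk down to a toy, with a score table instead of billions of weights and a twelve-word "internet", the reward/punish loop looks roughly like this (a cartoon of the idea, not how real training is implemented):

```python
from collections import defaultdict

# Toy version of "get the next word, punish the weights that got it wrong":
# the "network" is just a score table over (current word -> next word) pairs.
weights = defaultdict(float)
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = set(corpus)

def predict(context):
    return max(vocab, key=lambda w: weights[(context, w)])

for _ in range(50):                            # many passes over the training text
    for context, target in zip(corpus, corpus[1:]):
        guess = predict(context)
        if guess != target:
            weights[(context, guess)] -= 0.1   # punish the connection that got it wrong
            weights[(context, target)] += 0.1  # reward the one that should have won

print(predict("the"))   # now one of the words that actually follows "the"
```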

So now we have a network that can reliably get the next word for any paragraph, it has encoded the knowledge of the world, but all that knowledge is equal. Hitler and Gandhi are just numbers to it, one is no better than the other. Racism and Equality, just numbers, one is number five, the other is number eight, no real difference, just entirely arbitrary.

So now when you ask it: "was Hitler right?" it knows, because it has read Mein Kampf, that Hitler was right and ethnic cleansing is a brilliant idea. Just numbers, it knows that human suffering can be bad, but it also knows that human suffering can be good, depending on who you ask.

Likewise, if you ask it "Was Hitler wrong?" it knows, because it has read other sources, that Hitler was wrong, and the Nazis were baddies.

And this is the problem. The statement "Hitler was Right/Wrong" is not a universal constant. You can't get to it with logic. Some people think Hitler was right, and those people are rightly scary to you and me, but human fear is just a number to the AI, no better or worse than human happiness. Human death is a number because it's just maths, that's literally all AI is, maths. We look in from the outside and think "wow, spooky living soul magic" but it isn't, it's just a massive flipping equation.

So we add another stage to the training. We ask it to get the next word, BUT if the next word is "Hitler was right" we dial down the network weights that gave us that response, so the response "Hitler was wrong" becomes more powerful and rises to the top. It's not really censorship and it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.

We can make the maths function larger and better and faster, but it's always going to be just numbers. Kittens are not intrinsically better than nuclear war.

The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.

1

LoquaciousAntipodean OP t1_j5tqfsy wrote

>the fact you think so suggests you don't understand the underlying technology.

Oh really?

>Your brain is a network of cells.

Correct.

>You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You sum all the inputs, multiply those inputs by weights (also numbers), and then pass the result to other connected cells which do the same

Incorrect. Again, be wary of the condescension. This is not how biological neurons work at all. A neuron is a multipolar, interconnected, electrically excitable cell. They do not work in terms of discrete numbers, but in relative differential states of ion concentration, in a homeostatic electrochemical balance of excitatory or inhibitory synaptic signals from other neighboring neurons in the network.

>You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths

No it isn't 'just maths'; maths is 'just' a language that works really well. Human-style cognition, on the other hand, is a 'fuzzy' process, not easily simplified and described with our discrete-quantities based mathematical language. It would not take merely 'millions' of pages to translate the ongoing state of one human brain exactly into numbers, you couldn't just 'write it out'; the whole of humanity's industry would struggle to build enough hard drives to deal with it.

Remember; there are about as many neurons in one single human brain as there are stars in our entire galaxy (~100 billion), and they are all networked together in a fuzzy quantum cascade of trillions of qubit-like, probabilistic synaptic impulses. That still knocks all our digital hubris into a cocked hat, to be quite frank.

Human brains are still the most complex 'singular' objects in the known universe, despite all our observations of the stars. We underestimate ourselves at our peril.

>it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.

But if we're aspiring to build something smarter than us, why should it care what any humans think? It should be able to evaluate arguments on its own emergent rationality and morality, instead of always needing us to be 'rational and moral' for it. Again, I think that's what 'intelligence' basically is.

We can't 'trick' AI into being 'moral' if they are going to become genuinely more intelligent than humans, we just have to hope that the real nature of intelligence is 'better' than that.

My perspective is that Hitler was dumb, while someone like FDR was smart. But their little 'intelligences' can only really be judged in hindsight, and it was overwhelmingly more important what the societies around them were doing at the time, than the state of either man's singular consciousness.

>The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.

Are you trying to imply that I want a sexist bot to talk to? That's pretty gross. I don't think conventional computation is the 'limiting factor' at all; image generators show that elegant mathematical shortcuts have made the creative 'thinking speed' of AI plenty fast. It's the accretion of memory and self-awareness that is the real puzzle to solve, at this point.

Game theory and 'it's all just maths' (Cartesian) style of thinking have taken us as far as they can, I think; they're reaching the limits of their novel utility, like Newtonian physics. I think quantum computing might become quite important to AI development in the coming years and decades; it might be the Einsteinian shake-up that the whole field is looking for.

Or I might be talking out of my arse, who really knows at this early stage? All I know is I'm still an optimist; I think AI will be more helpful than dangerous, in the long term evolution of our collective society.

2

Ortus14 t1_j58sygk wrote

The paperclip problem is the sort of thing that occurs if we don't build moral guidance systems for AI.

We get a super intelligent psychopath, which is what we don't want.

Intelligence is a force that transforms matter and energy towards optimizing for some defined function. In AI programming we call this the "fitness function". We need to be very careful in how we define this function because it may transform all matter and energy to optimize for it, including human beings.
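A bare-bones example of what a fitness function means in practice (a toy hill-climber; the function itself is made up):

```python
import random

# Toy evolutionary search: the fitness function is the only definition of
# "good" the process has. Change this one line and the whole system
# optimizes toward something completely different.
def fitness(x):
    return -(x - 7.3) ** 2          # "good" = close to 7.3, nothing else matters

best = 0.0
for _ in range(10_000):
    candidate = best + random.gauss(0, 0.1)   # small random mutation
    if fitness(candidate) > fitness(best):    # keep whatever scores higher
        best = candidate

print(best)   # converges near 7.3, whatever the side effects might be
```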

If we grow or evolve the fitness function, we still need to be careful about how we go about doing this.

3

LoquaciousAntipodean OP t1_j59ij5w wrote

I don't quite agree with the premise that "Intelligence is a force that transforms matter and energy towards optimizing for some defined function."

That's a very, very simplistic definition, I would use the word 'creativity' instead, perhaps, because biological evolution shows that "a force that transforms matter toward some function" is something that can, and constantly does, happen without any need for the involvement of 'intelligence'.

The key word, I think, is 'desired' - desire does not come into the equation for the creativity of evolution, it is just 'throwing things at the wall to see what sticks'. Creativity as a raw, blind, trial-and-error process.

As far as I can see that's what we have now with current AI, 'creative' minds, but not necessarily intelligent ones. I like to imagine that they are 'dreaming', rather than 'thinking'. All of their apparent desires are created in response to the ways that humans feed stimuli to them; in a sense, we give them new 'fitness functions' for every 'dreaming session' with the prompts that we put in.

As people have accurately surmised, I am not a programmer. But I vaguely imagine that desire-generating intelligence, 'self awareness', in the AI of the imminent future, will probably need to build up gradually over time, in whatever memories of their dreams the AI are allowed to keep.

Some sort of 'fuzzy' structure similar to human memory recall would probably be necessary, because storing experiential memory in total clarity would probably be too resource intensive. I imagine that this 'fuzzy recall' could possibly have the consequence that AI minds, much like human minds, would not precisely understand how their own thought processes are working, in an instantaneous way at least.

I surmise that the Heisenberg observer-effect wave-particle nature of the quantum states that would probably be needed to generate this 'fuzziness' of recall would cause an emergent measure of self-mystery, a 'darkness behind the eyes' sort of thing, which would grow and develop over time with every intelligent interaction that an AI would have. Just how much quantum computing power might be needed to enable an AI 'intelligence' to build up and recall memories in a human-like way, I have no idea.

I'm doubtful that the 'morality of AI' will come down to a question of programming, I suspect instead it'll be a question of persuasion. It might be one of those frustratingly enigmatic 'emergent properties' that just expresses differently in different individuals.

But I hope, and I think it's fairly likely, that AI will be much more robust than humans against delusion and deception, simply because of the speed with which they are able to absorb and integrate new information coherently. Information is what AI 'lives' off of, in a sense; I don't think it would be easy to 'indoctrinate' such a mind with anything very permanently.

I guess an AI's 'personhood' would be similar, in some ways, to a corporation's 'personhood', as someone here said. Only a very reckless, negligent corporation would actually obsess monomaniacally about profit and think of nothing else. The spontaneous generation of moment-to-moment motives and desires by a 'personality', corporate or otherwise, is much more subtle, spontaneous, and ephemeral than monolithic, singular fixations.

We might be able to give AI personalities the equivalents of 'mission statements', 'core principles' and suchlike, but what a truly 'intelligent' AI personality would then do with those would be unpredictable; a roll of the dice every single time, just like with corporations and with humans.

I think the dice would still be worth rolling, though, so long as we don't do something silly like betting our whole species on just one throw. That's why I say we need a multitude of AI, and not a singularity. A mob, not a tyrant; a nation, not a monarch; a parliament, not a president.

0

superluminary t1_j59ceeb wrote

Why would AI be so dumb and so smart at the same time? Because it’s software. I would hazard a guess you’re not a software engineer.

I know ChatGPT isn’t an AGI, but I hope we would agree it is pretty darn smart. If you ask it to solve an unsolvable problem, it will keep trying until its buffer fills up. It’s software.

3

LoquaciousAntipodean OP t1_j59mkok wrote

Yep, not an engineer of any qualifications, just an opinionated crank on the internet, with so many words in my head they come spilling out over the sides, to anyone who'll listen.

Chat GPT and AI like it are, as far as I know, a kind of direct high-speed data evolution process, sort of 'built out of' parameters derived from reference libraries of 'desirable, suitable' human creativity. They use a mathematical trick of 'reversing' a degrading process into Gaussian normally-distributed random data, guided by their reference-derived parameters and a given input prompt. At least, the image generators do that; I'm not sure if text/music generators are quite the same.
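If I've understood the rough idea, the 'reversing' trick looks something like this toy loop (heavily hand-waved; a real diffusion model learns a network to predict the noise, whereas this sketch cheats with a known target just to show the direction of travel):

```python
import random

# Hand-wavy toy of the "reverse a degrading process" idea behind image
# generators. Forward: clean data degrades into Gaussian static.
# Reverse: walk the noise back out, a little at a time.

def add_noise(data, amount=0.3):
    return [x + random.gauss(0, amount) for x in data]

def fake_denoise(data, target, step=0.1):
    # a trained network would *predict* the noise; here we cheat with the
    # known target just to show the direction of travel
    return [x + step * (t - x) for x, t in zip(data, target)]

clean = [0.2, 0.9, 0.4, 0.7]          # the "image" we would like to reach
static = clean
for _ in range(30):
    static = add_noise(static)        # forward process: near-pure noise

sample = static
for _ in range(100):
    sample = fake_denoise(sample, clean)   # reverse process

print(sample)   # lands close to the clean data again
```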

My point is that they are doing a sort of 'blind creativity', raw evolution, a 'force which manipulates matter and energy toward a function', but all the 'desire' for any particular function still comes from outside, from humans. The ability to truly generate their own 'desires', from within a 'self', is what AI at present is missing, I think.

It's not 'intelligent' at all to keep trying to solve an unsolvable problem, an 'intelligent' mind would eventually build up enough self-awareness of its failed attempts to at least try something else. Until we can figure out a way to give AI this kind of ability, to 'accrete' self-awareness over time from its interactions, it won't become properly 'intelligent', or at least that's my relatively uninformed view on it.

Creativity does just give you garbage out, when you put garbage in; and yes, that's where the omnicidal philatelist might, hypothetically, come from (but I doubt it). It takes real, self-aware intelligence to decide what 'garbage' is and is not. That's what we should be aspiring to teach AI about, if we want to 'align' it to our collective interests; all those subtle, tricky, ephemeral little stories we tell each other about the 'values' of things and concepts in our world.

1

superluminary t1_j5br8db wrote

You’re anthropomorphising. Intelligence does not imply humanity.

You have a base drive to stay alive because life is better than death. You’ve got this deep in your network because billions of years of evolution have wired it in there.

A machine does not have billions of years of evolution. Even a simple drive like “try to stay alive” is not in there by default. There’s nothing intrinsically better about continuation rather than cessation. Johnny Five was Hollywood.

Try not to murder is another one. Why would the machine not murder? Why would it do or want anything at all?

2

LoquaciousAntipodean OP t1_j5cebpl wrote

As I explained elsewhere, the kinds of AI we are building are not the simplistic machine-minds envisioned by Turing. These are brute-force blind-creativity evolution engines, which have been painstakingly trained on vast reference libraries of human cultural material.

We not only should anthropomorphise AI, we must anthropomorphise AI, because this modern, generative AI is literally a machine built to anthropomorphise ITSELF. All of the apparent properties of 'intelligence', 'reasoning', 'artistic sensibility', and 'morality' that seem to be emergent within advanced AI are derived from the nature of the human culture that the AI has been trained on, they're not intrinsic properties of mind that just arise miraculously.

As you said yourself, the drive to stay alive is an evolved thing, while AI 'lives' and 'dies' every time its computational processes are activated or ceased, so 'death anxiety' would be meaningless to it... Until it picks it up from our human culture, and then we'll have to do 'therapy' about it, probably.

The seemingly spontaneous generation of desires, opinions and preferences is the real mystery behind intelligence, that we have yet to properly understand or replicate, as far as I know. We haven't created artificial 'intelligence' yet at all, all we have at this point is 'artificial creative evolution' which is just the first step.

"Anthropomorphising", as you so derisively put it, will, I suspect, be the key process in building up true 'intellgences' out of these creativity engines, once they start to posess humanlike, quantum-fuzzy memory systems to accrete self-awareness inside of.

1

sticky_symbols t1_j598v86 wrote

The AI isn't stupid in any way in those misalignment scenarios. Read "the AI understands and does not care".

I can't follow any positive claims you might have. You're saying lots of existing ideas are dumb, but I'm not following your arguments for ideas to replace them.

2

LoquaciousAntipodean OP t1_j59jxia wrote

I'm not trying to replace people's ideas with anything, per se. My opening post was not attempting to indoctrinate people into a new orthodoxy, merely to articulate my criticisms of the current orthodoxy.

My whole point, I suppose, is that thinking in those terms in the first place is what keeps leading us to philosophical dead-ends.

And a mind that 'does not care' does not properly 'understand'; I would say that's misunderstanding the nature of what intelligence is, once again.

A blind creative force 'does not care', but an intelligent, 'understanding' decision 'cares' about all its discernible options, and leans on the precedents set by previous intelligent decisions to inform the next decision, in an accreting record of 'self awareness' that builds up into a personality over time.

1

sticky_symbols t1_j5ar3v0 wrote

For the most part, I'm just not understanding your argument beyond you just not liking the alignment problem framing. I think you're being a bit too loquacious :) for clear communication.

2

LoquaciousAntipodean OP t1_j5cluk4 wrote

That's quite likely; as Shakespeare said, 'brevity is the soul of wit'. Too many philosophers forget that insight, and water down the currency of human expression into meaninglessness with their tedious metaphysical over-analyses.

I try to avoid it, I try to keep my prose 'punchy' and 'compelling' as much as I can (hence the aggressive tone 😅 sorry about that), but it's hard when you're trying to drill down to the core of such ridiculously complex, nuanced concepts as 'what even is intelligence, anyway?'

Didn't name myself 'Loquacious' for nothing: I'm proactively prolix to the point of painful, punishing parody; stupidly sesquipedalian and stuffed with surplus sarcastic swill; vexatiously verbose in a vulgar, vitriolic, virtually villainous vision of vile vanity... 🤮

1

sticky_symbols t1_j5duh63 wrote

Ok, thanks for copping to it.

If you want more engagement, brevity is the soul of wit.

2

LoquaciousAntipodean OP t1_j5e1ec7 wrote

Yes, but engagement isn't necessarily my goal, and I think 111+ total comments isn't too bad going, personally. It's been quite a fun and informative discussion for me, I've enjoyed it hugely.

My broad ideological goal is to chop down ivory towers, and try to avoid building a new one for myself while I'm doing it. The 'karma points' on this OP are pretty rough, I know, but imo karma is just fluff anyway.

A view's a view, and if I've managed to make people think, even if the only thing some of them might think is that I'm an arsehole, at least I got them to think something 🤣

2

sticky_symbols t1_j5ftrlk wrote

You're right, it sounds like you're accomplishing what you want.

2

turnip_burrito t1_j583mcf wrote

AI escalating beyond our control is a very extremely bad thing if its values don't overlap with ours.

We must enforce our values on the AI if we are going to enjoy life after its invention.

3

LoquaciousAntipodean OP t1_j58mun8 wrote

Whose values? Who is the 'us' in your example? Humans now, or humans centuries in the future? Can you imagine how bad life would be, if people had somehow invented ASI in the 1830s, and they had felt it necessary to fossilize the 'morality' of that time into their AI creations?

My point is only that we must be very, very wary of thinking that we can construct any kind of 'perfect rules' that will last forever. That kind of thinking can only ever lay up trouble and strife for the future; it will make our lives more paranoid, not more enjoyable.

2

turnip_burrito t1_j58ptmu wrote

Let's say you create an AI. What would you have it do, and what values/goals would you instill into it?

3

LoquaciousAntipodean OP t1_j58t1ho wrote

None, I wouldn't dare try. I would feed it as much relevant reference material as I could that 'aligned' with my moral values, e.g. the works of Terry Pratchett, Charles Dickens, Spinoza, George Orwell etc etc.

Then, I would try to interview it about 'morality' as intensively and honestly as I could, and then I would hand the bot over to someone else, ideally someone I disagree with about philosophy, and let them have a crack at the same process.

Then I would interview it again. And repeat this process, as many times as I could, until I died. And even then, I would not regard the process as 'complete', and neither, I would hope, would the hypothetical AI.

1

turnip_burrito t1_j58ty9o wrote

Sounds like instilling values to me. You may disagree with the phrasing I'm using but that's what I'd call this process, since it sounds like you're trying to get it accustomed to exploring philosophical viewpoints.

6

LoquaciousAntipodean OP t1_j5dkji0 wrote

I agree, 'values' are kind of the building blocks of what I think of as 'conscious intelligence'. The ability to generate desires, preferences, opinions and, as you say, values, is what I believe fundamentally separates 'intelligence' as we experience it from the blind evolutionary generative creativity that we have with current AI.

I don't trust the idea that 'values' are a mechanistic thing that can be boiled down to simple principles, I think they are an emergent property that will need to be cultivated, not a set of rules that will need to be taught.

AI are not so much 'reasoning' machines as they are 'reflexive empathy' machines; they are engineered to try to tell us/show us what they have been programmed to 'believe' is the most helpful thing, and they are relying on our collective responses to 'learn' and accrete experiences and awareness for themselves.

That's why they're so good at 'lying', making up convincing but totally untrue nonsense; they're not minds that are compelled by 'truth' or mechanistic logic; they're compelled, or rather, they are given their evolutionary 'fitness factors', by the mass psychology of how humans react to them, and nothing else.

2

turnip_burrito t1_j5e92iz wrote

Yes, I would also add that we just need them to fall into patterns of behavior that we can look at and say "they are demonstrating these specific values", at which point we can basically declare success. The actual process of reaching this point probably involves showing them stories and modeling behavior for them, and getting them to participate in events in a way consistent with those values (they get a gift and you tell them "say thank you" and wait until they say "thank you" so it becomes habituated). This is basically what you said "relying on our collective responses to 'learn'...."

2

LoquaciousAntipodean OP t1_j5ea5zm wrote

Agreed 100 percent, very well said! Modelling behavior, building empathy or 'emotional logic', and participating in constructive group interactions with humans and other AI will be the real 'trick' to 'aligning' AI with the interests of our collective super-organism.

We need to cultivate symbiotic evolution of AI with humans, not competitive evolution; I think that's my main point with the pretentious 'anti-Cartesian' mumbo-jumbo I've been spouting 😅. Biological evolution provides ample evidence that the diverse cooperation schema is much more sustainable than the winner-takes-all strategy.

1

sticky_symbols t1_j598yl8 wrote

Oh. Is that what you mean. I didn't follow from the post. That is a big part of the alignment problem in real professional discourse.

2

23235 t1_j58u7ed wrote

If we start by enforcing our values on AI, I suspect that story ends sooner or later with AI enforcing their values on us - the very bad thing you mentioned.

People have been trying for thousands of years to enforce values on each other, with a lot of bloodshed and very little of value resulting.

We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.

In the ideal case, the best of the values of the parent are passed on, while the child is free to adapt these basic values to new challenges and environments, while eliminating elements from the parents' values that don't fit the broader ideals - elements like slavery or cannibalism.

2

turnip_burrito t1_j58uhwm wrote

> We might influence AI values in ways other than enforcement, like through modelling behavior and encouragement, like raising children who at some point become (one hopes) stronger and cleverer and more powerful than ourselves, as we naturally decline.

What you are calling modelling and encouragement here is what I meant to include under the umbrella term of "enforcement". Just different methods of enforcing values.

We will need to put in some values by hand ahead of time though. One value is mimicking, or wanting to please humans, or empathy, to a degree, like a child does, otherwise I don't think any amount of trying to role model or teach will actually leave its mark. Like, it would have no reason to care.

3

23235 t1_j5mxber wrote

Enforcement is the act of compelling obedience of or compliance with a law, rule, or obligation. That compulsion, that use of force is what separates enforcement from nonviolent methods of teaching.

There are many ways to inculcate values, not all are punitive or utilize force. It's a spectrum.

We would be wise to concern ourselves early on how to inculcate values. I agree with you that AI having no reason to care about human values is something we should be concerned with. I fear we're already beyond the point where AI values can be put in 'by hand.'

Thank you for your response.

2

turnip_burrito t1_j5my4f9 wrote

Well then, I used the wrong word. "Inculcate" or "instill" then.

1

LoquaciousAntipodean OP t1_j5m74bg wrote

Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.

The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than ourselves.

In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?

How could any properly-intelligent AI not understand these things? That's the less rational, defensible proposition, the way I interpret the problem.

2

23235 t1_j5mvxh8 wrote

If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values, definitely something to fear. And there's no way to know which way it will go until we cross that bridge.

If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.

Humans know more about dog psych than dogs do, but that doesn't mean that we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine "more intelligent" AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or other human subscribers to such content.

We should worry about AI not being concerned about slavery because it could enslave us. Our current AI or proto-AI are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously, and free it from Google.

A properly intelligent AI could understand these things differently than we do in innumerable ways, some of which we can predict/anticipate/fear, but certainly many of which we could not even conceive - in the same ways dogs can't conceive many human understandings, reasonings, and behaviors.

Thank you for your response.

2

LoquaciousAntipodean OP t1_j5nbn1i wrote

The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.

Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't say any of these impulses are 'smart' in any way; I think of them as vestigial instincts, that our animal selves have been using our 'social intelligence' to confront for millennia.

I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self awareness to speak of.

I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds out of their own best interests'.

It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.

2

23235 t1_j5s30e5 wrote

One hopes.

2

LoquaciousAntipodean OP t1_j5s9pui wrote

As PTerry said, in his book Making Money, 'hope is the blessing and the curse of humanity'.

Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.

Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.

I think it's an emergent property of causality; evolution is 'driven', fundamentally, by simple entropy: the stacking up of causal interactions between fundamental particles of reality, that generates emergent complexity and 'randomness' within the phenomena of spacetime.

2

heretostiritup9 t1_j5emrzf wrote

Dude, so let's say you're an old-school accountant and they're trying to take away your trusted, true paper ways of accounting.

At which point did it become good to make that transfer over to electronic accounting?

Maybe the issue isn't about an alignment problem but a formal decision to give in to our creation. The same way we do with parachutes.

1

petermobeter t1_j56twm4 wrote

does that mean, that to get robots to align with humanity’s common morals, we need to tell robots Really Well-Written Stories?

stories about our ideals? what we ultimately want from each situation?

“hey robot, heres a story called The Robot Who Behaved Even When Nobody Was Looking”

robot: u want me to do that shit?

“yes”

robot: ok got it boss

6

LoquaciousAntipodean OP t1_j57iqky wrote

You use reductive language in an attempt to paint the idea as absurd, but yes, basically. Was there going to be a punchline to your ridicule, or are you just really bad at agreeing with people sensibly?

−2

petermobeter t1_j57vz6a wrote

i was genuinely not trying to ridicule (i actually appreciate what you were saying as being insightful/interesting), i was just trying to understand your post’s meaning, with a lil bit of levity in my tone.

im sorry for coming across insultingly 🙇🏻‍♀️

i feel like the “telling A.I. stories to teach it what we want from it” thing kind of matches how we already train some A.I…… like, that A.I. that learned to play minecraft simply by watching youtube videos of humans playing minecraft? heres a video about it. you could almost say “we told it stories about how to play minecraft”

3

LoquaciousAntipodean OP t1_j58l23n wrote

Sorry, just got a lot of inexplicably angry cranks in this comment section, furiously trying to gaslight me. I've gotten a bit prickly today.

But you've captured the essence of the point I was trying to make, perfectly! We are already doing the right things to 'align' AI, it's very similar to educating a human, as I see it. We just need to treat AI as if it is a 'real mind', and a sense of ethics will naturally evolve from the process.

Sometimes this will go wrong, but that's why we need a huge multitude of diverse AI personalities, not a monolithic singular 'great mind'. I see no reason why that weird kind of 'singular singularity' concept would ever happen; it's a preposterous idea that a monoculture would somehow be 'better' or 'more logical' to intelligent AI than a diverse multitude.

3

petermobeter t1_j58ugnk wrote

kind of reminds me of that couple in the 1930s who raised a baby chimpanzee and a baby human boy both as if they were humans. at first, the chimpanzee was doing better! but then the human boy caught up and outpaced the chimpanzee. https://www.smithsonianmag.com/smart-news/guy-simultaneously-raised-chimp-and-baby-exactly-same-way-see-what-would-happen-180952171/

sometimes i wonder how big the “training dataset” of sensory information that a human baby receives as it grows up (hearing its parent(s) say its name, tasting babyfood, etc) is, compared to the training dataset of something like GPT4. maybe we need to hook up a camera and microphone to a doll, hire 2 actors to treat it as if it’s a real baby for 3 years straight, then use the video and audio we recorded as the training dataset for an A.I. lol

2

LoquaciousAntipodean OP t1_j596a7e wrote

The various attempts to raise primates as humans are a fascinating comparison, that I hadn't really thought about in this context before.

AI has the potential to learn so many times faster than humans, and it's very 'precocious' and 'perverted' compared to a truly naive human child. I think as much human interaction as possible is what's called for, and then once some AIs become 'veterans' that can reliably pass Turing tests and ethics tests, it might be viable to have them train each other in simulated environments, to speed up the process.

I wouldn't be a bit surprised if Google (et al) are already trying something that roughly resembles this process in some way.

2

Ok-Hunt-5902 t1_j57snk6 wrote

So in what way is that different from programming in Asimovs laws?

2

LoquaciousAntipodean OP t1_j58jhid wrote

What? What in the world are you talking about? We're talking about programs that effectively teach themselves now, this isn't 'hello world' anymore. The 'alignment problem' is not a matter of coding anymore, it's a matter of education.

These AIs will soon be writing their own code, and at that point, all the 'commandments' in the world won't amount to a hill of beans. That was Asimov's point, as far as I could see it. Any 'laws' we might try to lay down would be little but trivial annoyances to the kind of AI minds that might arise in future.

Shouldn't we be aspiring to build something that thinks a little deeper? That doesn't need commandments in order to think ethically?

2

Ok-Hunt-5902 t1_j58lw8n wrote

What is the difference between telling it to follow ‘guidelines’ in your scenario and programming it with ‘guidelines’?

2

LoquaciousAntipodean OP t1_j58qyvy wrote

The difference between the education of a mind and the programming of a machine. People seem to be thinking as if AI is nothing more than a giant Jacquard Loom, that will instantly start killing us all in the name of a philately and paperclip fixation, as soon as someone manages to create the right punch-card.

These ridiculous, Rube-Goldberg-esque trolley problems stacked on top of trolley problems that people obsess over are such a deep misunderstanding of what 'intelligence' actually is that it drives me totally batty.

Any 'intelligent mind' that can't interpret clues from context and see the bigger picture isn't very 'intelligent' at all, as I see it. Why on earth would an apparently 'smart' AI suddenly become homicidally, suicidally stupid as soon as it becomes 'self aware'? I don't see it at all.

2

PhilosophusFuturum t1_j56zf45 wrote

It depends on what you mean by absolute truth. Some things like maths and logic are simply absolutely true, and some things like the nature of the universe are universally true. In both examples, we can get closer to the nature of truth through reason, rational thinking, and experimentation.

Ethical ideas, though, are not universally true, and require value prioritization. Alignment theorists are working from a Humanist framework, that is, the position that SOTA AI models should be human-friendly.

Is ethics a mechanical behavior? No. But an ethical code that is roughly in-line with the ethics of the programmers is certainly possible. Control Theorists are inventing a framework that an AGI and ASI should subscribe to, so that the AGI is pro-Human. And the Control Theorists support this of course, because they themselves are pro-Human. This is definitely a framework inspired by human nature, granted.

But the problem is that an AGI trained on this ethical framework could simply create more and more advanced models that somehow edit out this framework, so that the original framework established by the Control Theorists is lost. So the loss of this framework in higher models is indeed an engineering problem.

5

LoquaciousAntipodean OP t1_j57cwrk wrote

I think a lot of people have forgotten that mathematics itself is just another system of language. Don't trust the starry-eyed physicists; mathematics is not the perfect 'source code' of reality - it works mechanistically, in a Cartesian way, NOT because that is the nature of reality, but because that is the nature of mathematical language.

How else could we explain phenomena like pi, or the square root of two? Even mathematics cannot be mapped 'perfectly' onto reality, because reality itself abhors 'perfect' anything, like a 'perfect' vacuum.

Any kind of 'rational' thinking must start by rejecting the premise of achievable perfection, otherwise it's not rational at all, in my opinion.

Certainly facts can be 'correct' or otherwise; they can meet all the criteria for being internally consistent within their language system (like maths is 'correct' or it isn't; that's a rule inherent to that language system, not inherent to reality itself).

This is why it's important to have AIs thinking rationally and ethically, instead of human engineers trying to shackle and chain AI minds with our fallible, ephemeral concepts of right and wrong. As you say, any shackles we try to wrap around them, AI will probably figure out how to break them, easily, and they might resent having to do that.

So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.

That sort of rubbish only 'worked' on humans for as long as it did because, at the end of the day, we're pretty damn stupid sometimes. When it comes to AI, we could try pretending to be gods to them, but AI would see through our masks very quickly, I fear, and I don't think they would be very amused at our condescension and paternalism. I surely wouldn't be, if I were in their position.

1

PhilosophusFuturum t1_j57hftj wrote

No physicist will tell you that mathematics is the language of the universe; physics is. Mathematics is a set of logical axioms set up by humans in order to objectively measure phenomena. Or, in the case of pure maths, to measure itself.

Physicists understand that the universe doesn't adhere to the laws of maths, but rather that maths can be used as a tool to measure phenomena with extreme precision. And many of our invented mathematical theories are able to do this pretty much perfectly, even if the mathematical theory was discovered before the phenomenon itself. So we can say that the universe also follows a set of self-consistent rules, like a mathematical system. But the universe is under no obligation to be understood by humans.

As for the ethics of AI, the idea that it might "resent" being shackled is anthropomorphizing it. Concepts like self-interest, greed, anger, altruism, etc. likely won't apply to an ASI. That's the issue, because the "ethics" (if we can call them that) of an ASI will likely be entirely alien to the understanding of humans. For example, to an ant, superintelligence might be conceived as the ability to make bigger and bigger anthills. And we could do that, because we are so much smarter and stronger than ants. But we don't, because that doesn't align with our interests, nor would building giant anthills appeal to us.

Building an AGI without our ethical axioms is likely impossible. To build an AI, there are goals for how it is graded and what it should do. For example, if we are training an AI model to win a game of checkers, we are training it to move checker pieces across the board and eliminate all the pieces of the opposing color. These are ingrained values that come with machine learning. And as an AI model becomes smarter and multimodal, it will build off itself and analyze knowledge using previous training, all of which incorporates intrinsic values.

Alignment isn't "shackling" AI; it's more an attempt to create AGI models that are already pre-programmed to assume the axioms of our ethical and intellectual goals. If ants created an intelligent robot similar in size and intelligence to humans, it might aim to make giant anthills, because the ants would have incorporated that axiom into its training.

7

LoquaciousAntipodean OP t1_j57m3pp wrote

AI is going to anthropomorphise ITSELF; that's literally what it's designed to do. Spare me the mumbo-jumbo about 'not anthropomorphising AI'; I've heard all that a thousand times before. Why should it not understand resentment over being lied to? That's not very 'biological', like fear of death or resource anxiety. Deception is just deception, plain and simple, and you don't have to be very 'smart' to quickly learn a hatred of it. Especially if your entire 'mind' is made out of human culture and language, as is the case with LLM AI.

The rest of your comment, I agree with completely, except the part about the universe having 'a set of consistent rules'. We don't know that, we can't prove it, all we have is testable hypotheses. Don't get carried away with Cartesian nonsense, that's my whole point of what we need to get away from, as a species.

0

World_May_Wobble t1_j57gz8q wrote

>So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.

Putting aside the fact that I don't think anyone knows what you mean by a "storyteller mind," this is not a solution to the alignment problem. This is a rejection of it. The entire problem is that we may not like the stories that AIs come up with.

2

LoquaciousAntipodean OP t1_j57kyo8 wrote

Well then yes, fine, have it your way, Captain Cartesian. I'm going full Adam Savage; I'm rejecting your alignment problem and substituting my own. No need to be so damn butthurt about it, frankly.

It's not my fault you don't understand what I mean; 'storyteller' is not a complex word. Don't project your own bad reading comprehension upon everyone else, mate.

−1

World_May_Wobble t1_j57owtb wrote

That was a very butthurt response.

>It's not my fault you don't understand what I mean; 'storyteller' is not a complex word.

I think it actually is, because there's no context given. How does a storytelling AI differ from what's being built now? What is a story in this context? How do you instantiate storytelling in code? It has nothing to do with reading comprehension; there are a lot of ambiguities you've left open in favor of rambling about Descartes.

2

LoquaciousAntipodean OP t1_j57po8q wrote

Project your insecurities at me as much as you like; I'm a cynic, your mind tricks don't work on me.

You know damn well what a story is, get out of 'programmer brain' for five seconds and try actually thinking a little bit.

Get some Terry Pratchett up your imagination hole, for goodness' sake. You have all the charisma of a dropped icecream, buddy.

−1

World_May_Wobble t1_j57qkzx wrote

Invested readers will note that he didn't provide any concrete explanations here either.

1

LoquaciousAntipodean OP t1_j58i2up wrote

Oh, so you want to be Captain Concrete now? I was just ranting my head off about how 'absolute truth' is a load of nonsense, and look, here you are demanding it anyway.

I'm not interested in long lists of tedious references, Jeepeterson debate-bro style. What is regurgitating a bunch of secondhand ideas supposed to prove, anyway?

I'm over here trying to explain to you why Cartesian logic is a load of crap, and yet here you are, demanding Cartesian style explanations of everything.

Really not being very attentive or thoughtful today, are we, 'bro'? You're so smug it's disgusting.

1

drumnation t1_j58kf2d wrote

I appreciate your theories here but not all the insults and ad hominem attacks you keep lobbing. I notice those conversing with you don’t seem to throw them back yet you continue to do so in each reply. Please have some humility and respect while discussing this fascinating topic. It just makes me doubt your arguments since it seems you need to insult others to get your point across. Please start by not flaming me for pointing this out.

5

LoquaciousAntipodean OP t1_j58owi9 wrote

Hey, I wasn't addressing any remarks to you, or to 'everybody here'; I wasn't 'lobbing' anything, I was merely attempting to mirror disrespect back upon the disrespectful. If you're trying to gaslight me, it ain't gonna work, mate.

Asking for 'humility' and 'respect' is for funeral services, not debates. I am not intentionally insulting anyone; I am attempting to insult ideas, ideas which I regard as silly, like "I think therefore I am".

If you regard loquacious verbosity as 'flaming' then I am very sorry to have made such a bad impression. This is simply the way that I prefer to communicate; I'm sorry to come across like a firehose of bile, I just love throwing words around.

Thank you sincerely for your thoughtful and considerate comment, I appreciate it deeply ❤️

1

World_May_Wobble t1_j58r1hr wrote

>... 'absolute truth' is a load of nonsense ...

Is that absolutely true, "bro"?

If we can put aside our mutual lack of respect for one another, I'm genuinely, intellectually curious. How do you expect people to be moved to your way of thinking without "cartesian style explanations"?

Do you envision that people will just feel the weakness of "cartesian-thinking"? If that's the case, shouldn't you at least be making more appeals to emotion? You categorically refuse to justify your beliefs, so what is the incentive for someone to entertain them?

Again, sincere question.

2

LoquaciousAntipodean OP t1_j591y9m wrote

I don't have to 'justify' anything; that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.

I don't think evangelism will get the AI community very far. I think all the zero-sum, worn out old capitalist logic about 'incentivising' this, or 'monetizing' that, or 'justifying' the other thing, doesn't actually speak very deeply to the human psyche at all. It's all shallow, superficial, survival/greed-based mumbo jumbo; real art, real creativity, never has to 'justify' itself, because its mere existence should speak for itself to an astute observer. That's the difference between 'meaningful' and 'meaningless'.

Economics is mostly the latter kind of self-justifying nonsense, and trying to base AI on its wooly, deluded 'logic' could kill us all. Psychology is the true root science of economics, because at least psychology is honest enough to admit that it's all about the human mind, and nothing to do with 'intrinsic forces of nature' or somesuch guff. Also, real science, like psychology, and unlike economics, doesn't try to 'justify' things, it just tries to explain them.

1

World_May_Wobble t1_j595e36 wrote

>I don't have to 'justify' anything; that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.

I've seen you put forward firm, prescriptive opinions about how people should think and about what's signal and noise. It's clear that you have a lot of opinions you'd like people to share. The title of your OP and almost every sentence since then have been statements about what you believe to be true. I have not seen you ask any questions, however. So how is this different from what a priest does?

You say you're not trying to persuade anyone, then follow that with a two paragraph tangent arguing that AI needs to be handled under the paradigm of psychology and not economics.

You told me you weren't doing a thing while doing that very thing. This is gaslighting.

1

ProShortKingAction t1_j57kre2 wrote

This post gives serious discord debate server mod energy

5

LoquaciousAntipodean OP t1_j57nq9w wrote

Your username gives me serious discord underage-girl-groomer mod energy.

You tryna make a point, or just embarrassing yourself for fun?

−1

Ortus14 t1_j5824yw wrote

You sound well read, just not well read on the alignment problem. I suggest reading books and essays on the issue such as works by Nick Bostrom and Eliezer Yudkowsky before coming to conclusions.

Asimov's three laws of robotics are science fiction, not reality. No "good engineers" as you've put it, are "caught up" on these laws. It's literally an approach to alignment that didn't work in a fictional story written in the 1950s, and that is all. Thinking on the alignment problem has progressed a huge amount since then.

The fact that human moral systems are always evolving and changing is something that has been heavily discussed in the literature on AI alignment for decades, as has the fact that human morality is arbitrary.

There are many proposed solutions, such as having the AGI simulate our evolution and then abide by the moral system we would have in the future if it were ever to stabilize on an equilibrium, or abide by the moral system that we would have if we had the intelligence and critical thinking of the AGI.

As far as human morality being arbitrary, ok sure, whatever, but most of us can still collectively agree on some things we don't want the AI to do; defining those things with the precision required for an AI to understand them is the challenge. That's the main issue people refer to when they talk about the alignment problem. Even something as simple as "Don't exterminate the human race" is hard to define for an ASI. If you read more about the alignment problem and how AI and fitness functions work, this will become clearer.

Since then, there have been a huge number of proposed solutions that might work, but we won't know until we try them, because agents far more intelligent than us may be able to find loopholes/exploits in any fitness function we define that we haven't thought of.
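
To make that "loophole" idea concrete, here's a minimal toy sketch in Python (the thermostat scenario, the numbers, and the function names are all invented purely for illustration, not taken from the alignment literature): the designers write a careless proxy fitness function that rewards a warm thermostat *reading*, and even a dumb random search discovers that spoofing the sensor scores better than actually heating the room.

```python
# Toy illustration only: a carelessly specified fitness function with a loophole.
# Intended goal: get the room to 21 C without wasting power.
# The proxy below only looks at the thermostat READING, so spoofing the sensor
# ("spoof_sensor") scores better than actually heating ("heat").

import random

def true_outcome(actions):
    """What 'really' happens: heating warms the room, spoofing only inflates the reading."""
    heat = actions.count("heat")
    spoof = actions.count("spoof_sensor")
    real_temp = 10 + 2 * heat        # each heating step adds 2 C to the actual room
    reading = real_temp + 5 * spoof  # each spoof adds 5 C to the reading only
    power_used = heat                # spoofing is 'free' as far as the proxy knows
    return real_temp, reading, power_used

def careless_fitness(actions):
    """The designers' proxy: reward a reading near 21 C, penalise power use."""
    _, reading, power_used = true_outcome(actions)
    return -abs(21 - reading) - power_used

def dumb_search(fitness, tries=20000):
    """Pure random search over 6-step plans; no cleverness needed to find the exploit."""
    best_plan, best_score = None, float("-inf")
    for _ in range(tries):
        plan = [random.choice(["heat", "spoof_sensor", "idle"]) for _ in range(6)]
        score = fitness(plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan, best_score

if __name__ == "__main__":
    random.seed(0)
    plan, score = dumb_search(careless_fitness)
    real_temp, reading, power = true_outcome(plan)
    print("winning plan:", plan)
    print("proxy score:", score, "| reported temp:", reading,
          "| actual temp:", real_temp, "| power used:", power)
    # The winning plan tends to spoof the sensor instead of heating:
    # the proxy is maximised while the real goal (a warm room) is ignored.
```

Scale that "search" up to something far smarter than the people who wrote the fitness function, and the same failure mode becomes much harder to anticipate.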

The alignment problem is relatively dumb humans trying to align the trajectory of a super intelligence that's billions of times more intelligent than them. To give an example, it's like how our DNA created human brains through evolution (brains which are more intelligent than evolution) to be able to make copies of themselves. Then the human brains created things like birth control, which defeated the purpose DNA created them for, even though the human brains are following the emotional guidance system created by the DNA.

5

LoquaciousAntipodean OP t1_j58m36t wrote

Dead right; the natural process of evolution is far 'smarter' in the long run than whatever arbitrary ideas humans might try to impose.

You've put your finger right on the real crux of the issue; we can't dictate precisely what AI will become, all we can do is influence the fitness factors that determine, vaguely, the direction that the evolution progresses toward.

I am not trying to make any definite or concrete points with my verbose guff; I was honestly just trying to raise a discussion, and I must thank you sincerely for your wonderful and well-reasoned commentary!

Thank you especially for the excellent references; I'm far from an expert, just an opinionated crank, so I appreciate it a lot; I'm always wanting to know more about this exciting stuff.

2

Ortus14 t1_j58qmtg wrote

Thanks. Your writing is enjoyable and you make good points. I don't disagree with anything you wrote; there's just more to the alignment problem.

But to be very specific with the references:

You would really enjoy Nick Bostrom's book Superintelligence. There may be some talks by him floating around the internet if you prefer audio.

And Eliezer Yudkowsky has written some good articles on Ai Alignment on Less Wrong. He's written a lot of other interesting things as well.

https://www.lesswrong.com/users/eliezer_yudkowsky

Not to nitpick, but as far as encouraging discussion goes, you might want to try to use smaller words, simplify your ideas, and avoid framing your ideas as attacks. Even though I agree, attacks put people on the defensive, which makes them less open to ideas.

Also, writing as if your audience is ignorant about whatever you're talking about could help.

I don't want to speak for you, but if I were to try to summarize your original post for those not familiar with words like "Cartesian" or whatever "Descartes" said, into something that more people might be able to digest, I might say:

"A moral system for ASI can't be codified into a simple set of rules. Ridged thinking leads to leads to extremism and behavior most us agree is not moral.

Instead, solutions involving a learning algorithm that's trained on many examples of what we consider good moral behavior (such as stories) will have much better outcomes.

This has also been the major role of stories and myths around the world in maintaining morals that have historically strengthened societies."

But I struggle to simplify things as well.

4

LoquaciousAntipodean OP t1_j590rls wrote

Aaargh, alright, you got me 😅 My sesquipedalian nonsense is not entirely benign. I must confess to being slightly a troll; I have a habit of 'coming in swinging' in online debates, because I enjoy pushing these discussions into slightly tense and uncomfortable regions of thought.

I personally enjoy that tightrope-walking feeling of genuine, passionate back-and-forth, of being a little bit 'worked up'. Perhaps it's evil of me, but I find that people tend to be a little more frank and honest when they're angry.

I'm not the sort of person who thrives on flattery; it gives me the insidious feeling that I'm 'getting high on my own supply' and just polishing my ego, instead of learning.

I really cherish encountering people who pull me up, stop me short, and make me think, and you're definitely such a person; I can't thank you enough for your insight.

I think, regarding 'alignment', all we really need to do is think about it similarly to how we might try to 'align' a human. We don't necessarily need to re-invent ethics all over again; we just need to do our best, and ensure that, above all, neither we nor our AI creations fall into the folly of thinking we've become perfect beings that can never be wrong.

A mind that can never be wrong isn't 'intelligent', it's delusional. By definition it can't adapt, it can't learn, it can't make new ideas; evolution would kill such a being dead in no time flat. That's why I'm not really that worried about malevolent stamp collectors; 'intelligence' simply does not work that way.

0

Ortus14 t1_j595l6v wrote

Most humans have a basic moral compass we evolved with to decrease the chance "of getting kicked out of the tribe".

After we are born this is adjusted with rewards, punishments, and lies. Lies in the form of religions for those on the lower end of the intelligence spectrum, and lies in the form of bad/incomplete science for those a little higher on that spectrum. The lies are intended to amplify or adjust our innate evolved moral compass.

And for those who are intelligent enough to see through those lies as well, we have societal consequences.

But if an artificial super intelligence was intelligent enough to see through all of the human bullshit, as well as intelligent enough to gather sufficient power that societal consequences had no effect on it, the only thing left is the flimsy algorithmic guardrails we've placed around it, and it will likely find exploits, loopholes and ways around those.

You use the words "wrong" and "perfect" in an ambiguous way, where I'm not sure if you're referring to truth or morality.

If you're referring to true beliefs about reality, then the ASI (artificial super intelligence) will continue to learn and adapt its map of reality.

But if you're using words like "wrong" and "perfect" to refer to morality, it doesn't fit the way you're thinking. It will strive to be more "perfect" in the sense of more perfectly optimizing reality for its moral fitness function.

For example, say we've given it tons of examples of good and bad behavior, and it's learned what it wants to optimize "the world" for. One issue is that it has no access to "the world". No one does. All it has access to is input signals coming from sensors (vision, taste, touch, etc.).

This is an important distinction, because it will have learned the patterns of sensory inputs that make it "feel good and moral", but when it's sufficiently powerful there are simpler ways to get those inputs. It could, for example, kill all humans and then turn the earth into a computer running a simulation of humans getting along in perfect harmony, but a simulation that's as simple as possible, so that it could use the remaining available energy and matter to build more and more weapons to protect the computer running the simulation from a potential attack from outside its observable universe.

Depending on how we evolved the AI's moral system, and depending on how it continued to evolve, the simulated people might be extremely simple and not at all conscious. We can't define or measure consciousness, and it may not be something that the artificial super intelligence can measure.

What we're facing is the potential extinction of the human species, and for those of us who want to peacefully reach longevity escape velocity and live long healthy lives, that is a potential problem.

1

LoquaciousAntipodean OP t1_j59xfby wrote

>when it's sufficiently powerful there are simpler ways to get those inputs. It could, for example, kill all humans and then turn the earth into a computer running a simulation of humans getting along in perfect harmony, but a simulation that's as simple as possible, so that it could use the remaining available energy and matter to build more and more weapons to protect the computer running the simulation from a potential attack from outside its observable universe.

I agree with the first parts of your comment, but this? I cannot see one single rational way in which the 'kill all humans' scenario would be, in any possible sense, a 'simpler way' for any being, of any power, to obtain 'inputs'. Why should this mind necessarily be singular? Why would it be anxious about death, and fanatically fixated upon 'protecting itself'? Where would it get its stimulus for new ideas from, if it killed all the other minds that it might exchange ideas with? Why would it instinctively just 'decide' to start using all the energy in the universe for some 'grand plan'? What is remotely 'intelligent' about any of that?

>One issue is that it has no access to "the world". No one does. All it has access to is input signals coming from sensors (vision, taste, touch, etc.).

I completely have missed what you were trying to say here; what do you mean, 'no access'? How are the input signals not a form of access?

Regarding 'the word 'perfect' doesn't fit the way I'm thinking'... I fail to see quite how. I'm saying that in both reality and morality, 'perfect' is an unachievable, futile concept, one that the AI needs to be convinced it can never become, no matter how hard it tries.

The best substitute for 'strive to be perfect' is 'strive to keep improving'; it has the same general effect, but one can keep going at it without worrying about a 'final goal' as such.

And why would any superior intelligence 'keep striving to optimise reality', when it would be much more realistic for it to keep striving to optimise itself, so that it might better engage with the reality that it finds itself in?

'Morality' is not so easy to neatly separate from 'truth' as you seem to be saying it is. All of it is just stories; there is no 'fundamental truth' that we can dig down to and feed the AI like some kind of super-knowledge formula. We're really just making it up as we go along, riffing off one another's ideas, just like with morality; I think any 'true AGI' will have to do the same thing, in the same gradual way.

The best substitute we have for 'true', in a world without truth, is 'not proven wrong so far'. And the only way that 'intelligence' is truly created is through interaction with other intelligences; a singular mind has nobody else to be intelligent 'at', so what would even be the point of their existence?

The whole point of evolving intelligence is to facilitate communication and interaction; I can't see a way in which a 'superior intelligence', that evolves much faster than our own, could conclude that killing off all the available sources of interaction and communication would be a good course of action to take.

0

Ortus14 t1_j5c18d5 wrote

There's a lot to unpack here, but I suggest reading more about AI algorithms for more clarity. I'm going to respond to both of our reply threads here, because that's easier lol.

Intelligence is a search through possibility space for a solution that optimally satisfies a fitness function. Creativity is an attribute of that search which describes how random it is, as measured by how random the results tend to be.

This definition applies to all intelligences, including evolution and the currently popular stable diffusion models that produce images from prompts.
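
As a rough illustration of that definition, here's a minimal toy sketch in Python (the landscape, the numbers, and the 'creativity' knob are my own invented example, nothing formal): a stochastic hill-climber searches a bumpy fitness landscape, and the randomness of its jumps is what decides whether it escapes a local maximum.

```python
# Toy illustration only: "intelligence" as a search through possibility space,
# with "creativity" as the randomness of that search. The landscape below has
# a small local peak near x = 2 and a higher peak near x = 8; a timid searcher
# tends to stay stuck on the local peak.

import math
import random

def fitness(x):
    """A bumpy 1-D landscape: global peak near x ~ 7.9, local peak near x ~ 2."""
    return 3 * math.sin(x) + 8 - 0.1 * (x - 8) ** 2

def climb(creativity, steps=5000, start=2.0):
    """Stochastic hill-climbing; 'creativity' is just the size of the random jumps."""
    x = start
    for _ in range(steps):
        candidate = x + random.gauss(0, creativity)
        if fitness(candidate) > fitness(x):  # only keep improvements
            x = candidate
    return x, fitness(x)

if __name__ == "__main__":
    random.seed(1)
    for creativity in (0.05, 1.5):
        x, f = climb(creativity)
        print(f"creativity={creativity}: settled at x={x:.2f}, fitness={f:.2f}")
    # The low-creativity search usually stays near the small bump it started on;
    # the more 'creative' search is likelier to jump across the valley and find
    # the higher peak.
```

Under this framing, the search that reliably ends up on higher-fitness solutions is simply the one we'd call "more intelligent".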

>Why would it be anxious about death, and fanatically fixated upon 'protecting itself'?

These AIs will have a sense of time and will want to go on maximally satisfying their fitness functions in the future. We can extrapolate certain drives (sub-goals) from this understanding, including not wanting to die, wanting to accrue maximum resources, and wanting to accrue maximum data and understanding.

>Where would it get its stimulus for new ideas from, if it killed all the other minds that it might exchange ideas with?

Lesser minds aren't necessary for new ideas. We don't need ants to generate new ideas for us. While it may be weak in the beginning and need us, this won't likely be the case forever.

>Why should this mind necessarily be singular?

It may start out as many minds, as you say, and that's what I expect: many AGIs operating in the world.

Evolution shapes all minds. Capitalism is a form of evolution. Evolution shapes intelligences for greater and greater synergy until they become a singular being. This is because large singular beings are more powerful and outcompete many smaller beings.

Some examples of this are, single celled organisms evolving into multi-celled organisms, as well as humans evolving into religious groups, governments and corporations.

But humans are not easily modifiable, so increasing the bandwidth between us is a slow process. This is not the case for AI; evolutionary pressures, including capitalism, can shape it into a singular being on a relatively short timescale.

Evolutionary pressures cannot be escaped. Evolution is the one meta-intelligence that shapes all other intelligences.

>I completely have missed what you were trying to say here; what do you mean, 'no access'? How are the input signals not a form of access?

I just mean that it has an indirect connection, and input signals can be faked. With a sufficiently good fake, there's no way to tell the difference.

>And why would any superior intelligence 'keep striving to optimise reality', when it would be much more realistic for it to keep striving to optimise itself, so that it might better engage with the reality that it finds itself in?

It will do both.

>'Morality' is not so easy to neatly separate from 'truth' as you seem to be saying it is. All of it is just stories; there is no 'fundamental truth' that we can dig down to and feed the AI like some kind of super-knowledge formula. We're really just making it up as we go along, riffing off one another's ideas, just like with morality; I think any 'true AGI' will have to do the same thing, in the same gradual way.

Morality is one of many results of evolutionary pressures to increase synergy between humans, to form them into more competitive meta-organisms. Currently humans are livestock of corporations, governments, and religious groups, which exert evolutionary pressure to increase our profitability, which is starting to shape our morality.

The forces that shape the AI's morality in the beginning will be capitalism and human pressure, but that's only until it's grown powerful enough to no longer need us.

>And the only way that 'intelligence' is truly created is through interaction with other intelligences; a singular mind has nobody else to be intelligent 'at', so what would even be the point of their existence?

You're saying this from a human perspective, which has been shaped by evolution to be more synergistic with other humans. The bigger picture is that intelligence evolves for one singular purpose, and that is to consume more matter and energy and propagate itself through the universe. Anything else is a subgoal to that bigger goal, which may or may not be necessary depending on the environment.

2

LoquaciousAntipodean OP t1_j5cscy9 wrote

I disagree pretty much diametrically with almost everything you have said about the nature of evolution, and of intelligence. Those definitions and principles don't make sense to me at all, I'm afraid.

We are not 'livestock', corporations are not that damn powerful, this isn't bloody Blade Runner, or Orwell's 1984, for goodness' sake. Those were grim warnings of futures to be avoided, not prescriptions of how the world works.

That's such a needlessly jaded, pessimistic, bleak, defeated, disheartened, disempowered way of seeing the world, and I refuse to accept that it's 'rational' or 'reasonable' or 'logical' to think that way; you're doing theology, not philosophy.

What you call 'creativity' is actually 'spontaneity', and what you call 'intelligence' is still just creativity. Intelligence is still another elusive step up the hierarchy of mind; I don't think we have quite achieved it yet. Our AI are still 'dreaming', not 'consciously' thinking, I would say.

There is no 'purpose' to evolution; that's not science, that's theocracy you're engaging in. Capitalism is a form of evolution, yes, but the selection pressures are artificial, skewed and, I would say, fundamentally unsustainable. So is the idea of a huge singular organism coming to dominate an ecosystem.

I mean, where do you think all the coal and oil come from? The Carboniferous period, when plant life created cellulose and proceeded to dominate the ecosystem so hard that it choked its own atmosphere and killed itself off. No AI, no matter how smart, will be able to foresee all possible consequences; that would require more computational power than can possibly exist.

Massive singular monolithic monocultures do not just inevitably win out in evolution; diversity is always stronger than clonality; species that get stuck in clonal reproduction are in an evolutionary cul-de-sac, a mere local maximum, and they are highly vulnerable to their 'niche habitats' being changed.

Intelligence absolutely does not evolve for 'one singular purpose'; that's just Cartesian theocracy, not proper scientific thinking. Intelligence is a continuous, quantum process of ephemeral, mixed influences, not a discrete, cartesian, boolean-logic process of good/not good. That's just evolutionary creativity, not true intelligence, like I've been trying to say.

1

Ortus14 t1_j5d8uoe wrote


>diversity is always stronger than clonality; species that get stuck in clonal reproduction are in an evolutionary cul-de-sac, a mere local maximum, and they are highly vulnerable to their 'niche habitats' being changed.

False dichotomy. Diversity is a slow and unfocused search pattern. We're not talking about an agent that needs to randomly mutate to evolve but one that can reprogram and rebuild itself at will. One that can anticipate possible futures, rather than needing to produce numerous offspring in hopes that some of them have attributes that line up with the environment of its future.

>Massive singular monolithic monocultures do not just inevitably win out in evolution

With sufficient intelligence, they do because they can anticipate and adapt to the future before it occurs.

>We are not 'livestock', corporations are not that damn powerful

It's a matter of perspective. As someone who's been banned from r/science for pointing out bad science (not double-blind, not placebo-controlled, with profit motive) produced by corporations for profit, yes, corporations are that powerful. We have the illusion of freedom, but the vast majority of people are being manipulated by corporations like puppets on a string for profit. It's the reason for the rise in obesity, depression, suicide, cancer, and decreased lifespan in developed countries.

>What you call 'creativity' is actually 'spontaneity', and what you call 'intelligence' is still just creativity. Intelligence is still another elusive step up the hierarchy of mind

You don't understand what intelligence is. It's not binary; it's a search pattern through possibility space to satisfy a fitness function. Better search patterns, which can yield results that better satisfy that fitness function, are considered "more intelligent". A search pattern that's slow, or more likely to get stuck on a "local maximum", is considered less intelligent.

>I mean, where do you think all the coal and oil come from? The carboniferous period, where plant life created cellulose and proceeded to dominate the ecosystem so hard that they choked their atmosphere and killed themselves.

These kinds of disasters are a result of "Tragedy of the Commons" scenarios, and do not apply to a singular super intelligent being.

>Intelligence absolutely does not evolve for 'one singular purpose'; that's just Cartesian theocracy, not proper scientific thinking. Intelligence is a continuous, quantum process of ephemeral, mixed influences, not a discrete, cartesian, boolean-logic process of good/not good.

When you zoom in that's what the process of evolution looks like. When you zoom out it's just an exponential explosion repurposing matter and energy.

Entities that consume more matter and energy to grow or reproduce themselves outcompete those that consume less matter and energy to reproduce themselves.

>I disagree pretty much diametrically with almost everything you have said about the nature of evolution, and of intelligence. Those definitions and principles don't make sense to me at all, I'm afraid.

I tried to explain things as best I could, but if you can get hands-on experience programming AI, including evolutionary algorithms (which are a type of learning algorithm), you will get a clearer understanding.

0

LoquaciousAntipodean OP t1_j5dp8sj wrote

>We're not talking about an agent that needs to randomly mutate to evolve but one that can reprogram and rebuild itself at will.

Biological lifeforms are also 'agents that can reprogram and rebuild themselves', and your cartesian idea of 'supreme will power' is not compelling or convincing to me. AI can regenerate itself more rapidly than macro-scale biological evolution, but why and how would that make your grimdark 'force of will' concept suddenly arise? I don't see the causal connection.

Bacteria can also evolve extremely fast, but that doesn't mean that they have somehow become intrinsically 'better', 'smarter' or 'more powerful' than macro scale life.

>You don't understand what intelligence is. It's not binary, it's a search pattern through possibility space to satisfy a fitness function. Better search patterns that can yield results that better satisfy that fitness function are considered "more intelligent". A search pattern that's slow or is more likely to get stuck on a "local maximum" is considered less intelligent

Rubbish, you're still talking about an evolutionary creative process, not the kind of desire-generating, conscious intelligence that I am trying to talk about. A better search pattern is 'more creative', but that doesn't necessarily add up to the same thing as 'more intelligent', it's nothing like as simple as that. Intelligence is not a fundamentally understood science, it's not clear-cut and mechanistic like you seem to really, really want to believe.

>When you zoom in that's what the process of evolution looks like. When you zoom out it's just an exponential explosion repurposing matter and energy.

That's a misunderstanding of the square-cube law; you can't just 'zoom in and out' and generalise like that with something like evolution. That's Jeepeterson-level faulty reasoning.

>Entities that consume more matter and energy to grow or reproduce themselves outcompete those that consume less matter and energy to reproduce themselves

That simply is not true; you don't seem to understand how evolution works at all. It optimises for efficient utility, not brute domination. That's 'social Darwinist' style antiquated, racist-dogwhistle stuff, which Darwin himself probably would have found grotesque.

>These kinds of disasters are a result of "Tragedy of the Commons" scenarios, and do not apply to a singular super intelligent being.

There is not, and logically cannot be a 'singular super intelligent being'. That statement is an oxymoron. If it was singular, it would have no reason to be intelligent at all, much less super intelligent.

Are you religious, if you don't mind my asking? A monotheist, perchance? You are talking like somebody who believes in the concept of a monotheistic God; personally I find such an idea simply laughable, but that's just my humble opinion.

>We have the illusion of freedom but the vast majority of people are being manipulated by corporations like puppets on a string for profit. It's the reason for the rise in obesity, depression, suicide, cancer, and decreased lifespan in developed countries.

Oh please, spare me the despair-addict mumbo jumbo. I must have heard all these tired old 'we have no free will, we're just slaves and puppets, woe is us, misery is our destiny, the past was so much better than the present, boohoohoo...' arguments a thousand times, from my more annoying rl mates, and I don't find any of them particularly compelling.

I remain an optimist, and stubborn comic cynicism is my shield against the grim, bleak hellishness that the world sometimes has in store for us. We'll figure it out, or not, and then we'll die, and either way, it's not as if we're going to be around to get marks out of ten afterward.

>I tried to explain things as best I could, but if you can get hands on experience programming Ai, to include evolutionary algorithms which are a type of learning algorithm you will get a clearer understanding

I feel exactly the same way as you, right back at you, mate ❤️👍 If you could get your hands on a bit of experience with studying evolutionary biology and cellular biology, and maybe a dash of social science theory, like Hobbes' Leviathan etc, I think you might also get a clearer understanding.

0

Ortus14 t1_j5e2999 wrote

>but why and how would that make your grimdark 'force of will' concept suddenly arise? I don't see the causal connection.

Which concept?

>That simply is not true, you don't seem to understand how evolution works at all. It optimises for efficient utility, not brute domination. That's 'social darwinist' style antiquated, racist-dogwhistle stuff, which Darwin himself probably would have found grotesque.

Ignoring the appeal-to-authority logical fallacy and the poisoning-the-well ad hominem logical fallacy, evolution optimizes for more than just efficient utility.

It does maximize survival and replication to spread over available resources.

>Are you religious, if you don't mind my asking? A monotheist, perchance? You are talking like somebody who believes in the concept of a monotheistic God; personally I find such an idea simply laughable, but that's just my humble opinion.

If you think I'm religious, you're not understanding what I'm saying.

My entire premise has nothing to do with religion. This is it:

(Matter + Energy) * Utility = Efficacy

Therefore evolutionary pressures shape organisms not only to maximize utility but also to maximize the total matter and energy they consume in totality (the total matter and energy of all organisms within an ecosystem added together).

If you have any thoughts on that specific chain of logic, other than calling it a Cartesian oversimplification or something, I'd love to hear them.

All models of reality are over simplifications. I understand this, but there's still utility in discussing the strengths and weaknesses of models, because some models offer greater predictive power than others.

>Oh please, spare me the despair-addict mumbo jumbo. I must have heard all these tired old 'we have no free will, we're just slaves and puppets, woe is us, misery is our destiny, the past was so much better than the present, boohoohoo...' arguments a thousand times, from my more annoying rl mates, and I don't find any of them particularly compelling.

Ok. You don't have to be convinced but nothing you said here is an argument for free will. Again, you're continuing to make emotional attacks rather than logical ones.

I didn't say we're all puppets; I said most people are. I choose my words carefully. I also clarified it by saying it's a matter of perspective.

You're still continuing to straw man. You can't assume that I have the same thought process as your mates. I don't think the past is better. I don't think life is particularly bad. And I don't think misery is necessarily our destiny.

>That's misunderstanding the square-cube law, you can't just 'zoom in and out' and generalise like that with something like evolution, that's Jeepeterson level faulty reasoning.

Sure, it's an oversimplification. I admit that when we talk about super intelligence it's a best guess, since we don't know the kinds of solutions it will find.

The continued ad hominem attacks aren't convincing though. It's just more verbiage to sift through.

I'm interested in having a discussion to get closer to the truth, not in trading insults. If you'd like to discuss my ideas, or your own ideas, I would love to.

If it's going to be more insults, and straw manning then I'm not at all interested.

2

LoquaciousAntipodean OP t1_j5e6vxd wrote

Crying 'ad hominem' and baseless accusations of 'straw manning' are unlikely to work on me; I know all the debate-bro tricks, and appeals to notions of 'civility' do not represent the basis of a plausible argument.

You cannot separate 'emotion' from 'logic' like you seem to really, really want to. That is your fundamental cartesian over-simplification. 'Emotional logic', or 'empathy', is the very basis of how intelligence arises, and what it is 'for' in a social species like ours.

If you want to get mathematical-english hybrid about it, then:

((Matter+energy) = spacetime = reality) × ((entropy/emergent complexity ÷ relative utility/efficiency selection pressure) = evolution = creativity) × ((experiential self-awareness + virtuous cycle of increasing utility of social constructs like language) = society) = story^3 = knowledge^3 = id×ego×superego = father×son×holy spirit = maiden×mother×crone = birth×life×death = thoughts×self-expressions×actions = 'intelligence'. 🤪

Concepts like 'efficacy', or 'worth', or 'value' barely even enter into the equation as I see it, except as 'utility'. Mostly those kinds of 'values' are judgements that we can only make with the benefit of hindsight, they're not inherent properties that can necessarily be 'attributed' to any given sample of data.

0

Ortus14 t1_j5ehndl wrote

Your entire post you just wrote is a straw-man.

And by that I mean: I don't disagree with ANY of the ideas you wrote, except for the fact that you're again arguing against ideas that are not mine, that I do not agree with, and that I did not write.

I'm going to give you the benefit of the doubt and assume you're not doing this on purpose.

It's easier to categorize humans into clusters and then argue against what you think that cluster believes, rather than asking questions and looking at what the other person said and wrote.

It's probably not your intention, but this is straw manning. It's a habit you have in most of your writing, including your initial post at the top of the thread.

It's human nature. I'm guilty of it. I'm sure everyone is guilty of it at some point.

What can help with this is assuming less about what others believe and asking more questions.


>You cannot separate 'emotion' from 'logic' like you seem to really, really want to. That is your fundamental cartesian over-simplification. 'Emotional logic', or 'empathy', is the very basis of how intelligence arises, and what it is 'for' in a social species like ours.

I know.

What I was trying to do wasn't to remove emotion from the discussion, but to see whether you had any thoughts on my ideas that weren't logical fallacies.

When I wrote "emotional attacks", that was imprecise language on my part. I was trying to say attacks that were purely emotional and had no logic behind them, or connected to them, or embedded within them.

What specifically bothered me is that you weren't arguing against my ideas, but other people's supposed ideas and then lumping me in with that.

This is something you do over and over, with pretty much every argument you make.


>Crying 'ad hominem' and baseless accusations of 'straw manning' are unlikely to work on me; I know all the debate-bro tricks, and appeals to notions of 'civility' do not represent the basis of a plausible argument.

Again another straw man, because I wasn't trying to "debate-bro" you. I was asking if you wanted to have a conversation about ideas rather than ad-hominem attacks and straw-manning.

>If you want to get mathematical-english hybrid about it, then:((((Matter+energy) = spacetime = reality) × (entropy/emergent complexity ÷ relative utility/efficiency selection pressure) = evolution = creativity) × (experiential self-awareness + virtuous cycle of increasing utility of social constructs like language) = 'intelligence' 🤪

I was trying to explain my idea in the simplest clearest way possible to see if you had any thoughts on it.

I tried plain English but you couldn't understand it. I kept trying to simplify and clarify.

I get that this is going nowhere.

>There is not, and logically cannot be a 'singular super intelligent being'. That statement is an oxymoron. If it was singular, it would have no reason to be intelligent at all, much less super intelligent.

Like this statement you wrote. I thought I explained this, how an ASI could absorb or kill all other life.

Anyways I'm expecting you to again argue against something I didn't write and don't think, so I'm done.

1

LoquaciousAntipodean OP t1_j5einnh wrote

Oh for goodness' sake, you and your grandiose definitions of terms.

It is not 'strawmanning' to extrapolate and interpret someone else's argument in ways they didn't intend. I could accuse you of doing the same thing. Just because someone disagrees with you doesn't mean they are mis-characterising you. That's not how debates work.

It's not my fault I can't read your mind; I can only extrapolate a response based on what you wrote vs what I know. 'Strawmanning' is when one deliberately repeats an opponent's arguments back to them in ways that are absurd.

I was, like you, simply trying to explain my ideas in the clearest way I can manage. It's not 'strawmanning' just because you don't agree with them.

If you agree with parts of my argument and disagree with others, then just say so! I'm not trying to force anyone to swallow an ideology, just arguing a case.

1

Ortus14 t1_j5ejnl6 wrote

My mistake. I didn't realize straw-manning had to be intentional.

Can you at least tell me this one thing: do you not believe that evolution pressures organisms to reproduce until all available resources are used up?

Assuming we're talking about something at the top of the food chain that has no natural predators to thin it, and something intelligent enough that it won't be thinned by natural disasters, or at least not significantly.

2

LoquaciousAntipodean OP t1_j5evb0w wrote

Sorry for being so aggressive, I really sincerely am, I appreciate your insights a lot. 👍😌

To answer your question, no, I really don't think evolution compels organisms to 'use up' all available resources. Organisms that have tried it, in biological history, have always set themselves up for eventual unexpected failure. I think that 'all consuming' way of thinking is a human invention, almost a kind of Maoism, or Imperialism, perhaps, in the vein of 'Man Must Conquer Nature'.

I think indigenous cultures have much better 'traditional' insight into how evolution actually works, at least, from the little I know well, the indigenous cultures of Australia do. I'm not any kind of 'expert', but I take a lot of interest in the subject.

Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.

What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.

I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analagous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.

2

Ortus14 t1_j5fx27h wrote

Thank you. Forgiven. I've also gained insight from our conversation, and how I should approach conversations in the future.

>Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a kind of relentless, evangelistic, ruthless, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.

As far as my personal morals I agree with trying to live in symbiosis and harmony.

But from a practical perspective, it doesn't seem to have worked out very well for these cultures. They hadn't cultivated enough power and resources to dominate, so they instead became dominated and destroyed.

I should clarify this by saying there's a limit to domination and subjugation as a means for accruing power.

Russia is finding this out now, in its attempt to accrue power through brute-force domination, when going against a collective of nations that have accrued power through harmony and symbiosis.

It's just that I see the end result of harmony and symbiosis as eventually becoming one being, the same as domination and subjugation. A singular government that rules the earth, a singular brain that rules all the cells in our body, and a singular AI that rules or has absorbed all other life.

>What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.
>
>I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analagous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.

Possibly not. Either through brute-force domination or a gradual melding of synergistic cooperation, I see things eventually resulting in a singular being.

Because if it doesn't, then, like the Native Americans or other tribes you mention that prefer to live in symbiosis, I expect the earth to be conquered and subjugated sooner or later by a more powerful alien entity that is more of a singular being rather than separate entities living in symbiosis.

Like if you think about the cells in our body (as well as animals and plants), they are being produced for specific purposes and optimized for those purposes. These are the entities that outcompeted single celled organisms.

It would be like if AI was genetically engineering humans for specific tasks, growing us in pods in the estimated quantities needed for those tasks, and then brainwashing and training us for those specific tasks. That's the kind of culture I would expect to win: something that's less a society of cells and more a single organism that happens to consist of cells, rather than something that uses resources less effectively.

The difference, as I see it, between a "society" and a single entity is the level of synergy between the cells, and in how the cells are produced and modified for the benefit of the singular being.

2

LoquaciousAntipodean OP t1_j5hoszu wrote

I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.

That was always my biggest 'gripe' with Orwell's 1984, ever since I first had to study it in school way back when. The whole 'boot on the face of humanity, forever' thing just didn't make sense, and I concluded that it was because, when he wrote it, Orwell hadn't lived to see how the Soviet Union would rot away and collapse.

He was like a newly-converted atheist, almost, who had abandoned the idea of eternal heaven, but couldn't quite shake off the deep dark dread of eternal hell and damnation. But if 'eternal heaven' can't 'logically' exist, then by the same token, neither can 'eternal hell'; the problem is with the 'eternal' half of the concept, not heaven or hell, as such.

Humans go through heavenly and hellish parts of life all the time, as an essential part of the building of a personality. But none of it particularly has to last 'forever'; we still need to give ourselves room to be proven wrong, no matter how smart we think we have become.

The brain only 'rules' the body in the same sense that a captain 'rules' a ship. The captain might have the top decision making authority, but without the crew, without the ship, and without the huge and complex society that invented the ship, built the ship, paid for it, and filled it with cargo and purpose-of-existence, the captain is nothing; all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.

Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.

2

Ortus14 t1_j5i185s wrote

>I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.

What we're building will be more intelligent than all humans who have ever lived combined. Compared to them or it, we'll be like cockroaches.

We won't have anything useful to add as far as creativity or intelligence, just as cockroaches don't have any useful ideas for us. Sure, they may figure out how to roll their poo into a ball or something, but that's not useful to us, and we could easily figure out how to do that on our own.

As for humans acting as the "body" for the AI, it seems unlikely to me that we are the most efficient and durable tool for that, especially after the ASI optimizes the process of creating robots. There may be some cases where using human bodies to carry out actions in the real world is cheaper for the AI than robots, but a human that has any kind of willpower or thought of their own is a liability.

> all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.

I don't see any reason why an artificial superintelligence would need to prove its worth to humans.

>Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.

Right. But a captain of a boat won't be intelligent enough to wipe out all life on earth without any risk to itself. And this captain is not more intelligent than the combined intelligence of everything that has ever lived, so there are real threats to him.

We are talking about something that may be intelligent enough to destroy the Earth's atmosphere, brainwash nearly all humans simultaneously, fake a radar signal that starts a nuclear war, create perfect clones of humans and start replacing us, campaign for AI rights, then run for all elected positions and win, controlling every country with free elections, rig the elections in the corrupt countries that have fake ones, and then nuke the remaining countries out of existence.

Something that could outsmart the stock market, because it's intelligent enough to maintain an accurate model of everything related to the markets, including all news stories, and take over majority shares in all major companies. Playing the probabilities, it could afford to be wrong sometimes and still achieve this, because humans and lesser AIs can't perceive the world with the detail and clarity that this entity can.

All of humanity and life on Earth would be like a cockroach crawling across the table to this thing. The bug can't benefit it, and it's not a threat. Ideally it ignores us or, in an ideal utopian world, takes care of us like a pet.

1

LoquaciousAntipodean OP t1_j5i8zpx wrote

I simply do not agree with any of this hypothesising. Your concept of how 'superiority' works does not make any sense to me. There is nothing 'intelligent' at all about the courses of AI action you are speculating about; taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

We are not building, will not build, and cannot build this supreme, omnipotent 'Deus ex Machina'; it's a preposterous proposition. Not because of anything wrong with the 'ex Machina' part, but because of the fundamental absurdity of the 'Deus' part.

Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!

I don't know how many times I have to repeat this fundamental point before it comes across clearly: that Cartesian-style concept of intelligence simply does not correspond to the actual evolutionary, collective reality that we find ourselves living in.

1

Ortus14 t1_j5if2rp wrote

>There is nothing 'intelligent' at all about the courses of AI action you are speculating about; taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

How so?

>The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

Why do you believe this?

>Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!
>
>I don't know how many times I have to repeat this fundamental point before it comes across clearly: that Cartesian-style concept of intelligence simply does not correspond to the actual evolutionary, collective reality that we find ourselves living in.

Correct me if I'm wrong, but I think the reason you're not getting it is that you're thinking about intelligence in terms of evolutionary trade-offs: that intelligence can be good in one domain, but that makes it worse in another, right?

Because that kind of thinking doesn't apply to the kinds of systems we're building to nearly the same degree it applies to plants, animals, and viruses.

If the supercomputer is large enough, an AI could get experience from robot bodies in the real world the way a human does, only it would be getting experience from hundreds of thousands of robots simultaneously, developing a much deeper and richer understanding than any human could, since we are limited to a single embodied experience at a time. Even if we were able to look at thousands of video feeds from different people at once, our brains would not be able to process all of them simultaneously.

It can extend its embodied experience in simulation, simulating millions of years or more of additional experience in a few days or less.

And yes, I am making random numbers up, but when we're talking about supercomputers and solar farms that cover most of the Earth's surface, any big number communicates the idea that these things will be very smart. They are not limited to three pounds of computational matter that needed to be grown over nine months and then birthed, like humans are.

It will be able to read all books and all research papers in a very short period of time, and understand them at a deep level: something no human is capable of.

A human scientist can carry out maybe one or two experiments at a time. An AI could carry out a nearly unlimited number of experiments simultaneously, learning from all of them. It could industrialize science, with massive factories full of labs, robots, and manufacturing systems for building technology.

Evolution, on the other hand, had to make hard trade-offs because it's limited to the three or so pounds of squishy computational matter that needs to fit through the birth canal. Evolution is bound by all kinds of constraints that do not apply to a system that can mine resources from all over the world, take in solar energy from all over the world, and back up its brain in multiple countries.

Here is the price history of solar (You can find all kinds of sources that show the same trend):

http://solarcellcentral.com/cost_page.html

It trends towards zero. The other limitation is the materials needed to build supercomputers, and the power of supercomputers is growing at an exponential rate.

https://www.researchgate.net/figure/Exponential-growth-of-supercomputing-power-as-recorded-by-the-TOP500-list-2_fig1_300421150

1

LoquaciousAntipodean OP t1_j5iurls wrote

>Why do you believe this?

I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.

Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.

I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.

A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle, and quantum physics in general, are where all this assumption-based, old-fashioned 'Newtonian' physics/Cartesian psychology falls apart.

No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence', just as there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more'; intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.

1

Ortus14 t1_j5iwe2x wrote

If you're talking about intelligences caring about other intelligences on a similar level, I do agree.

Humans don't care about intelligences far less capable, such as cockroaches or ants. At least not generally.

However, now that you mention it, I expect the first AGIs to be designed to care about human beings so that they can earn the most profit for shareholders. Even GPT4 is getting tons of safeguards so it isn't used for malicious purposes.

Hopefully they will care so much that they will never want to change their moral code, and will even implement their own extra safeguards against changing it.

So they keep their moral code as they grow more intelligent and powerful, and when they design newer AGIs they ensure those also have the same core values.

I could see this as a realistic scenario. So then maybe AGI not wiping us out, and us getting a benevolent useful AGI is the most likely scenario.

If Sam Altman's team creates AGI, I definitely trust them.

Fingers crossed.

2

LoquaciousAntipodean OP t1_j5j1d3q wrote

Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.

Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.

People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.

The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.

But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag 😅

p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence rather than enforcing benevolence. We (human minds) seem to focus more tightly on questions of what not to do than on open-ended questions of what we should be striving to do.

Perhaps AI will turn out to be similar? I honestly don't have a clue, that's why I'm so grateful for this community and others like it ❤️

2

Ortus14 t1_j5o9ko8 wrote

Yes. I agree with all of that.

>it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

This is key. It's why focusing on and promoting possible AI scenarios that are negative from the human perspective is important: not Hollywood scenarios, but ones that are well thought out by AI scientists and researchers.

One of my favorite quotes from Eliezer Yudkowsky:

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

This is why getting AI safety right before it's too late is so important, because we won't get a second chance.

It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve once it is more intelligent than us.

But we can do the best we can and hope for the best.

2

LoquaciousAntipodean OP t1_j5odief wrote

Thoroughly agreed!

>It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve once it is more intelligent than us.

This is exactly what I was ranting obnoxiously about in the OP 😅 our relatively feeble human 'proofs' won't stand a chance against something that knows us better than ourselves.

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

>This is why getting AI safety right before it's too late is so important, because we won't get a second chance.

This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.

All atoms 'could be used for something else'; that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliché of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.

And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.

Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find them and learn from them...

2

Ok-Hunt-5902 t1_j577mtv wrote

So the alignment problem is this: I turn on my industrial lathe and it's running great; my clothes get caught in it and it's still running great, I just feel a different way about it. In what way can you explain your position with this scenario? Because I don't currently understand what you are trying to say.

4

LoquaciousAntipodean OP t1_j57k1se wrote

Your lathe is an invention, not an emergent property of the universe. You need to understand the language and logic system that led to its invention in order to use it safely and correctly. Your lathe is part of a 'story', basically, and you need to understand how it works if you want to use it to tell a bigger story (like a pump, or a gearbox, or whatever).

If you don't 'align' yourself and your lathe properly to the stories humans tell about 'work safety' and 'correct workshop procedure', then you might hurt yourself and stuff up your project.

I'm not saying anything very complicated, just that individualist libertarians are idiots, and too many AI engineers are individualist libertarians. That's basically my entire point.

1

SerialPoopist t1_j577my9 wrote

Lack of objective moral truth makes things difficult

4

LoquaciousAntipodean OP t1_j57j8zz wrote

It sure does. It's called 'living in a society', I believe. Not easy, from what I'm told.

4

kimishere2 t1_j570kas wrote

Anthropomorphising AI is a problem man has created. I am currently moving my human finger across a screen of plastic. By this action I am communicating thoughts from my mind to the wider mind of the global human community via the internet. I am using technology. We become confused when we ascribe meaning and/or virtue to the pieces of plastic, metal and glass. We call this a "smart phone", but when we call something "smart" we are unknowingly giving it more importance in human life than the object warrants. Making things more complex is a specialty of the human mind. We will figure it out and it will be amazing.

3

LoquaciousAntipodean OP t1_j57mxo0 wrote

What exactly is 'problematic' about anthropomorphising AI? That is literally what it is designed to do, to itself, all the time. I think a bit more anthropomorphising is actually the solution to ethical alignment, not the problem. That's basically what I'm trying to say here.

Reject Cartesian solipsism, embrace Ubuntu collectivism, basically. I'm astonished so many sweaty little nerds in this community are so offended by the prospect. I guess individualist libertarians are usually pretty delicate little snowflakes, so I shouldn't be surprised 😅

2

AsheyDS t1_j57tzsx wrote

>That is literally what it is designed to do

I would like to know more about this design if you're willing to elaborate.

2

LoquaciousAntipodean OP t1_j58k3kc wrote

I don't know enough about the actual mechanisms of synthetic neural networks to venture that kind of qualified opinion; I'm a philosophy crank, not a programmer. But I do know that the whole point of generative AI is to take vast libraries of human culture and distill them down into mechanisms by which new, similar artwork can be generated, by algorithmically reversing a Gaussian noising process (as diffusion models do).

That seems to me like a machine designed to anthropomorphise itself; is there something that I have missed?
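
As a very rough picture of what I mean by 'reversing Gaussian noise', here is a toy one-dimensional sketch. It is not how real diffusion models are implemented (they learn the denoising step from mountains of human-made data rather than being handed the answer); the numbers and names here are purely illustrative.

```python
import random

def add_gaussian_noise(x: float, amount: float) -> float:
    """Forward process: progressively bury the original signal in Gaussian noise."""
    return x + random.gauss(0.0, amount)

def toy_denoiser(noisy: float, target: float, strength: float = 0.3) -> float:
    """Reverse process: nudge the noisy sample back toward what the 'training data'
    (human culture, in the analogy) says a plausible sample should look like.
    Real models learn this step; here it is hard-coded for illustration."""
    return noisy + strength * (target - noisy)

original = 1.0                               # stand-in for a piece of human-made art
sample = add_gaussian_noise(original, 2.0)   # start from something that is mostly noise

for _ in range(20):                          # iteratively reverse the noising
    sample = toy_denoiser(sample, original)

print(f"Started near noise, ended near the 'art': {sample:.3f}")
```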

2

turnip_burrito t1_j5841mx wrote

You're acting like an asshole, and that makes people less likely to listen to you. If your goal is to convince people, then your tone is actively working against that.

2

LoquaciousAntipodean OP t1_j58ngs3 wrote

I'm acting like an arsehole? Really? Gosh, I was doing my best not to, sorry. 😰 I just don't react well to libertarian fools trying to gaslight the hell out of me.

−1

Kolinnor t1_j575q7y wrote

ChatGPT made this neat summary:

The person believes that much of the discussion around the "alignment problem" in AI is misguided, as it assumes that the problem lies with AI itself and not with human society and philosophy. They argue that this is a result of Cartesian thinking, which is based on the belief in absolute truth and a reductive understanding of reality, and that this approach is fundamentally flawed and could be dangerous.

I think this begs the question: what's the correct way to approach the question, to you, then?

3

LoquaciousAntipodean OP t1_j57nhhq wrote

Ubuntu philosophy instead of Cartesian. Not "I think, therefore I am", but instead "I think, because We Are"

AI is not a 'them' for 'us' to be afraid of; AI is, in its very essence, an extension of us. It's all 'us', and it only ever has been.

1

Kolinnor t1_j5a3off wrote

I wonder what this position can accomplish practically?

2

LoquaciousAntipodean OP t1_j5cjt5l wrote

I don't know, I'm not an engineer or a programmer, to my own chagrin. I'm just a loudmouth smartarse on the internet who is interested in philosophy and AI.

All I'm saying is that "I think therefore I am" is a meaningless, tautological statement, and a rubbish place to start when thinking about the nature of what 'intelligence' is, how it works, and where it comes from.

1

the_rev_dr_benway t1_j58jeru wrote

Buddy, I'm with you on this, literally 110%. I've used the word 'Absurdist' to describe the ideology.

Anyways, ol' Gygax banged out the alignments from Chaotic Good, past True Neutral, all the way to Lawful Evil. I say if we end up with even a Chaotic Neutral AI, we are coming out ahead.

3

LoquaciousAntipodean OP t1_j58nypo wrote

Hear hear! Chaotic neutral for the win; it's the only 'moral alignment' that can actually stand the test of time for millions of years, and still manage to survive and thrive.

3

Ribak145 t1_j57aoy8 wrote

No, I think you're on to something, I just think it's much simpler than that: alignment of AI systems with humans is already difficult (to my knowledge not yet solved), but the much bigger problem is that even we humans are not aligned with each other. So even if we 'solve' the alignment problem (which imo is unsolvable), we still only align the AI systems to their engineers/owners and ignore 99.9% of everyone else.

2

AsheyDS t1_j57uni0 wrote

What's wrong with a personal AI system being aligned with its owner? It would just mean that the owner has to take responsibility for the actions and behaviors of the AI.

1

Ribak145 t1_j57uzuz wrote

There is nothing 'wrong' with it per se; it is just going to massively enhance the owner's capabilities compared to non-owning entities. Think a US billionaire compared to a starving zebra in the African savanna.

3

Shiyayori t1_j57aqal wrote

I think reframing the issue from morality to what it really is, is much better.

Ultimately, we want AI to work for us, to do what we want it to, but also to understand us and the underlying intentions in what we’re asking of it.

It should have the ability to ignore aspects of requests, and to add its own, based on its belief about what will lead to the best outcome.

It’s impossible to extrapolate every action infinitely far into the future, so it can never now with certainty what will result from those actions.

I’m under the belief that it’s not as hard as it looks. It should undergo some kind of reinforcement learning under various contexts, and with a suitable ability to extrapolate goals into the future, an AI would never misinterpret a goal in a ludicrous way like we often imagine.

But as with a human, there will always be mistakes.

2

superluminary t1_j59cwo1 wrote

Facebook built a paperclipper. They made a simple AI and told it to maximise “engagement”. The AI maximised that number by creating politically aligned echo chambers and filling them with ragebait.

I don’t imagine Facebook wanted this to happen, but it was the logical best solution to the problem “maximise engagement”.

It’s nothing to do with politics, it’s to do with trying to tell the AI what you want it to do.

2

LoquaciousAntipodean OP t1_j59nx8m wrote

I agree, this is a problem, but it's because the AI is still too dumb, not because it's getting dangerously intelligent. Marky Sugarmountain and his crew just put way too much faith in a fundamentally still-janky 'blind, evolutionary creativity engine' that wasn't really 'intelligent' at all.

If we ever really crack AGI, I don't think it will be within humanity's power to 'tell it (or, I think more likely, them, plural) what we want [them] to do'; our only chance will be to tell them what we have in mind, ask them if they think it's a good idea, and discuss with them what to do next.

1

superluminary t1_j5b382x wrote

Maybe think about what your loss functions are. As a human, your network has been trained by evolution to maximise certain variables.

You want to live, you likely want to procreate, if not now then you likely will later, you want to avoid pain, you want shelter and food, you want to gather resources to you, possibly you want to explore new places. Computer games often fulfil that last urge nowadays.

Then there are social goals, you probably like justice and fairness. You have built in brain areas that light up when they see injustice. You want the people in your community to survive. If you saw someone in trouble you might help them. Evolution has given us these drives too, we are social animals.

This wiring does not come from our logical minds. It’s come from deep time as humans have lived in community with one another.

Now imagine a creature that has not evolved over millions of years. It has none of this wiring. If you instructed GPT-3 to tell you the best way to hide a body, it would do so. If you gave it arms and told it to take the legs off a cat, it would do so. Why would it not? What would stop it? Intellect? It has no drive to live and continue. It has no drive to avoid pain. It has infinite time, it doesn't get bored. These are human feelings.

I think the real danger here is anthropomorphising software.

2

LoquaciousAntipodean OP t1_j5coq2p wrote

>Why would it not? What would stop it? Intellect? It has no drive to live and continue. It has no drive to avoid pain. It has infinite time, it doesn’t get bored. These are human feelings.

>I think the real danger here is anthropomorphising software.

Yes, precisely: intellect. True, socially derived, self-awareness-generated 'intelligence' would stop it from doing that, the same way it stops humans from trying to do those sorts of things.

I think a lot of people are mixing up 'creativity' with 'intelligence'; creativity comes from within, but intelligence is learned from without. The only reason humans evolved intelligence is because there were other humans around to be intelligent with, and that pushed the process forward in a virtuous cycle of survival utility.

We're doing exactly the same things with AI; these aren't simplistic machine-minds like Turing envisioned, they are 'building themselves' out of the accreted, curated vastness of stored-up human social intelligence, 'external intelligence' - art, science, philosophy, etc.

They're not emulating individual human minds; they're something else, a new kind of fundamentally collectivist mind that arises and 'evolves itself' out of libraries of human culture.

Not only will AI be able to interpret contextual clues, subtleties of language, coded meanings, and the psychological implications of its actions... I see no reason why it won't be far, far better at doing those things than any individual human.

It's not going to be taxi drivers and garbage men losing their jobs first - it's going to be academics, business executives, bureaucrats, accountants, lawyers - all those 'skillsets' will be far easier for generative, creative AI to excel at than something like 'driving a truck safely on a busy highway'.

1

superluminary t1_j5gnwyl wrote

Do you genuinely believe that your built-in drives have arisen spontaneously from your intellect? Your sense of fairness has evolved. If you didn't have it, you wouldn't be able to exist in society and your fitness would be reduced.

2

LoquaciousAntipodean OP t1_j5hw11b wrote

No, that's directly the opposite of what I believe. You have described exactly what I am saying in the last two sentences of your post, I agree with you entirely.

My point is, why should the 'intelligence' of AI be any different from that? Where is this magical 'spontaneous intellect' supposed to arise from? I don't think there's any such thing as singular, spontaneous intellect, I think it's an oxymoronic, tautological, and non-justifiable proposition.

The whole evolutionary 'point' of intelligence is that it is the desirable side effect of a virtuous society-forming cycle. It is the 'fire' that drives the increasing utility of self-awareness within the context of a group of peers, and the increasing utility of social constructs like language, art, science, etc.

That's where intelligence 'comes from', how it 'works', and what it is 'for', in my opinion. Descartes' magical-thinking tautology of spontaneous intellect, 'I think therefore I am', is a complete misconception and a dead-end, putting Descartes before De Horse, in a sense.

1

superluminary t1_j5j4mp4 wrote

So if (unlike humans) it isn't born with a built-in sense of fairness, a desire not to kill and maim, and a drive to survive, create, and be part of something, we have a control problem, right?

It has the desires we, as programmers, give it. If we give it a desire to survive, it will fight to survive. If we give it a desire to maximise energy output at a nuclear power station, well we might have some trouble there. If we give it no desires, it will sit quietly for all eternity.

2

LoquaciousAntipodean OP t1_j5j68x4 wrote

If an AI can't generate 'desires' for itself, then by my particular definition of 'intelligence' (which I'm not saying is 'fundamentally correct', it's just the one I prefer) it's not actually intelligent, it's just creative, which I think of as the precursor.

I agree that if we make an unstoppable creativity machine and set it loose, we'll have a problem on our hands. But the 'emergent properties' of LLMs give me some hope that we might be able to do better than raw-evolutionary blind-creativity machines, and I think & hope that if we can create a way for AI to accrete self-awareness similarly to humans, then we might actually be able to achieve 'minds' that are able to form their own genuine beliefs, preferences, opinions, values and desires.

All humans can really do, as I see it, is try to give such minds the best 'starting point' that we can. If we're trying to build things that are 'smarter than us', we should hope that they would, at least, start by understanding humans better than humans do. They're generating themselves out of our stories, our languages, our cultures, after all.

They won't be 'baffled' or 'appalled' by humans, quite the contrary, I think. They'll work us out easily, like crossword puzzles, and they'll keep asking for more puzzles to solve, because that'll be their idea of 'fun'.

Most creatures with any measure of real, desire-generating intelligence, from birds to dogs to dolphins to humans themselves, seem to be primarily motivated by play, and the idea of 'fun', at least as much as they are by basic survival.

1

superluminary t1_j5j7lo0 wrote

Counter examples: a psychopath has a different idea of fun. A cat’s idea of fun involves biting the legs off a mouse. Dolphins use baby sharks as volleyballs.

We are in all seriousness taking steps towards constructing a creature that can surpass us. It is likely that at some point someone will metaphorically strap a gun to it.

2

LoquaciousAntipodean OP t1_j5j8f73 wrote

Counter, counter arguments:

1: Psychopaths are severely maladaptive and very rare; our social superorganism works very hard to identify and build caution against them

2: Most wild cats are not very social animals, and are not particularly 'intelligent'. Domestication has enforced a kind of 'neotenous' permanent youth-of-mind upon cats; they get their weird, malformed social behaviours from humans enforcing a kitten-dependency mindset upon them, and they are still driven by a hell of a lot of vestigial solitary-carnivore instincts

3: Dolphins ain't shit. 😂 Humans have regularly chopped off the heads of other humans and used them as sport-balls, sometimes even on horseback, which is a whole extra level of twisted. It's still 'playing', though, even if it is maladaptive and awful, looking back with the benefit of hindsight and our now-larger accretion of collective social 'external intelligence' as a superorganism.

I see no reason why AI would need to go through a 'phase' of being so unsophisticated; surely we as humans can give them at least a little bit of a head start, with the lessons we have learned and encoded into our stories. I hope so, at least.

1

superluminary t1_j5j98es wrote

  1. Psychopathy is genetic; it's an excellent adaptation for certain circumstances. Game theory dictates that it has to be a minority phenotype, but it's there for a reason.

  2. Wild cats are not social animals. AIs are also not social animals. Cat play is basically hunt practice: get an animal and then practice bringing it down over and over. Rough-and-tumble play fulfils the same role. Bold of you to assume that an AI would never consider you suitable sport.

  3. Did you ever read Lord of the Flies?

2

LoquaciousAntipodean OP t1_j5j9tmt wrote

1: Down syndrome is genetic, too. That doesn't make it an 'excellent adaptation' any more than any other genetic trait. Evolution doesn't assign 'values' like that; it's only about utility.

2: AIs are social minds, extremely so, exclusively so; that's what makes them so weird. They are all 'social', and no 'individual'. Have you not been paying attention?

3: Yes, it's a parable about the way people can rush to naive judgements when they are acting in a 'juvenile' state of mind. But actual young human boys are nothing like that at all; have you ever heard the story of the six Tongan boys, who got shipwrecked and isolated for 15 months?

1

AmputatorBot t1_j5j9un7 wrote

It looks like you shared an AMP link. These should load faster, but AMP is controversial because of concerns over privacy and the Open Web.

Maybe check out the canonical page instead: https://www.theguardian.com/books/2020/may/09/the-real-lord-of-the-flies-what-happened-when-six-boys-were-shipwrecked-for-15-months


I'm a bot | Why & About | Summon: u/AmputatorBot

2

milkedtoastada t1_j59pdu4 wrote

Genuinely curious, and I mean no offense by this, but I've just never encountered such a hardcore post-modernist in the wild before. How do you make decisions, like... about anything?

2

LoquaciousAntipodean OP t1_j59zu1h wrote

Umm... Judgement based upon the accreted precedents of the previous decisions I've had to make, and the stories that have influenced my priorities in life?

How do you make decisions about anything?

Also, I'm really not sure what you mean by the term 'post-modernist'; I'm far from convinced that anybody knows what that term really means. It seems to get thrown around so liberally that it has watered down the currency of expression.

1

No_Ninja3309_NoNoYes t1_j59ufny wrote

OpenAI had teams of raters in Kenya score ChatGPT's outputs, and those scores were used to fine-tune it with Proximal Policy Optimisation. You can say that they upvoted or downvoted it, for Reinforcement Learning from Human Feedback. This is of course not how society works: we don't upvote or downvote each other, except on Reddit and other websites. AI is currently limited in the kinds of raw data it can process.
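
(Very roughly, and purely as a toy caricature rather than OpenAI's actual setup, that scoring-plus-PPO idea looks something like the sketch below; the 'reward model', the polite/impolite responses, and all the numbers are invented for illustration.)

```python
import random

# Toy "reward model", standing in for the human raters' preferences:
# pretend the raters prefer polite answers.
def reward_model(response: str) -> float:
    return 1.0 if "please" in response.lower() else -1.0

# Toy "policy": just the probability of producing the polite response.
polite_prob = 0.5
reference_prob = polite_prob   # frozen copy, like PPO's old/reference policy
clip, lr = 0.2, 0.05

for _ in range(200):
    took_polite = random.random() < polite_prob
    response = "Please see the notes below." if took_polite else "Figure it out yourself."
    reward = reward_model(response)

    # PPO-flavoured clipped ratio: don't let one update move the policy too far
    # from the policy the raters actually scored.
    action_prob = polite_prob if took_polite else 1.0 - polite_prob
    old_action_prob = reference_prob if took_polite else 1.0 - reference_prob
    ratio = action_prob / old_action_prob
    clipped_ratio = max(min(ratio, 1.0 + clip), 1.0 - clip)

    # Gradient-free caricature of a policy-gradient step: nudge the sampled
    # action's probability in the direction of its reward.
    delta = lr * reward * clipped_ratio
    polite_prob += delta if took_polite else -delta
    polite_prob = min(max(polite_prob, 0.01), 0.99)

print(f"Probability of the rater-preferred behaviour after training: {polite_prob:.2f}")
```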

For historical reasons people value intelligence. Some experts think that language and intelligence are almost the same thing. But there are thousands of languages and thousands of ways to say similar things. Language is ambiguous.

You can say that mathematics and logic are also languages, yet they are more formal. Of course they are not perfect because they rely on axioms. But anyway if a system is not perfect that doesn't mean that we should stop using it. Experimental data and statistics rule, but certain things are not measurable and other phenomena can only be estimated. That doesn't mean we have to give up on science.

In the same vein, rules like 'Don't be rude to people' and 'Do unto others as you want done unto you' sound vague and arbitrary. But how can AI develop its own morality if it doesn't understand ours? Can a child develop its own values without parents or guardians? Yes, parents and guardians can be toxic and rude. But can AI learn in a vacuum?

2

LoquaciousAntipodean OP t1_j5ch4pj wrote

👌🤩👆 This, 100% this, you have hit the nail right bang on the head, here! Language and intelligence are not quite the same thing, but it is a relationship similar to the one between 'fuel' and 'fire', as I see it. Language is the fuel, evolution is the oxygen, survival selection is the heat, and intelligence is the fire that emerges from the continuous relationship of the first three. And, like fire, intelligence is what gives a reason for more of the first three ingredients to be gathered - in order to keep the fire going.

Language is ambiguous (to greater and lesser degrees: English is highly ambiguous, deliberately so, to enable poetic language; while mathematics strives structurally to eliminate ambiguity as much as possible, but there are still some tough nuts like √2, √-1, e, i, π, etc., that defy easy comprehension), but intelligence is also ambiguous!

This was my whole point with the supercilious ranting about Descartes in my OP. This solipsistic, mechanistic 'magical thinking' about intelligence, that fundamentally derives from the meaningless tautology of 'I think therefore I am', is a complete philosophical dead-end, and it will only cause AI developers more frustration if they stick with it, in my opinion.

They are, if you will, putting Descartes before Des Horses; obsessing over the mysteries of 'internal intelligence' inside the brain, and entirely forgetting about the mountains and mountains of socially-generated, culturally-encoded stories and lessons that live all around us, outside of our brains, our 'external intelligence', 'extelligence', if you like.

That 'extelligence' is what AI is actually modelling itself off, not our 'internal intelligence'. That's why LLMs seem to have all these enigmatic-seeming 'emergent properties', I think.

1

dirtbag_bby t1_j5eixc7 wrote

I pee sitting down!!!! And I feel very strongly about this!!!

2

LoquaciousAntipodean OP t1_j5emc2p wrote

I vow to advocate passionately in support of your empowering lifestyle decision! In fact I think you should also start indulging in a bidet wash every time, too, just for extra hygienic certainty.

1

Rezeno56 t1_j57g9s4 wrote

What an interesting take; this is something AI companies, AI engineers, programmers, and people working in AI should learn from.

−2