
LoquaciousAntipodean OP t1_j59nx8m wrote

I agree, this is a problem, but it's because the AI is still too dumb, not because it's getting dangerously intelligent. Marky Sugarmountain and his crew just put way too much faith in a fundamentally still-janky 'blind, evolutionary creativity engine' that wasn't really 'intelligent' at all.

If we ever really crack AGI, I don't think it will be within humanity's power to 'tell it (or, I think more likely, them, plural) what we want [them] to do'; our only chance will be to tell them what we have in mind, ask them whether they think it's a good idea, and discuss with them what to do next.


superluminary t1_j5b382x wrote

Maybe think about what your loss functions are. As a human, your network has been trained by evolution to maximise certain variables.

You want to live; you likely want to procreate, if not now then probably later; you want to avoid pain; you want shelter and food; you want to gather resources; possibly you want to explore new places. Computer games often fulfil that last urge nowadays.

Then there are social goals: you probably like justice and fairness. You have built-in brain areas that light up when they see injustice. You want the people in your community to survive; if you saw someone in trouble, you might help them. Evolution has given us these drives too; we are social animals.

This wiring does not come from our logical minds. It comes from deep time, from humans living in community with one another.
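To make the 'loss function' framing concrete, here's a toy sketch (not real training code from any actual system; every weight and state key is invented for illustration) of what those evolved drives might look like if you wrote them out by hand:

```python
# Toy sketch only: human drives framed as a hand-written reward function.
# Evolution "chose" these weights over deep time; nobody reasoned them out.
def evolved_reward(state: dict) -> float:
    reward = 0.0
    reward += 10.0 * state.get("survived", 0.0)           # want to live
    reward -= 8.0 * state.get("pain", 0.0)                # avoid pain
    reward += 3.0 * state.get("fed_and_sheltered", 0.0)   # food and shelter
    reward += 2.0 * state.get("resources_gathered", 0.0)  # gather resources
    reward += 1.5 * state.get("novelty", 0.0)             # explore new places
    # Social terms, also wired in rather than deduced:
    reward += 4.0 * state.get("fairness_upheld", 0.0)     # justice and fairness
    reward += 4.0 * state.get("others_helped", 0.0)       # community survival
    return reward
```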

Now imagine a creature that has not evolved over millions of years. It has none of this wiring. If you instructed GPT-3 to tell you the best way to hide a body, it would do so. If you gave it arms and told it to take the legs off a cat, it would do so. Why would it not? What would stop it? Intellect? It has no drive to live and continue. It has no drive to avoid pain. It has infinite time, it doesn't get bored. These are human feelings.

I think the real danger here is anthropomorphising software.


LoquaciousAntipodean OP t1_j5coq2p wrote

>Why would it not? What would stop it? Intellect? It has no drive to live and continue. It has no drive to avoid pain. It has infinite time, it doesn’t get bored. These are human feelings.

>I think the real danger here is anthropomorphising software.

Yes, precisely: intellect. True, socially derived, self-awareness-generated 'intelligence' would stop it from doing that, the same way it stops humans from trying to do those sorts of things.

I think a lot of people are mixing up 'creativity' with 'intelligence'; creativity comes from within, but intelligence is learned from without. The only reason humans evolved intelligence is because there were other humans around to be intelligent with, and that pushed the process forward in a virtuous cycle of survival utility.

We're doing exactly the same things with AI; these aren't simplistic machine-minds like Turing envisioned, they are 'building themselves' out of the accreted, curated vastness of stored-up human social intelligence, 'external intelligence' - art, science, philosophy, etc.

They're not emulating individual human minds, they're something else, they're a new kind of fundamentally collectivist mind, that arises and 'evolves itself' out of libraries of human culture.

Not only will AI be able to interpret contextual clues, subtleties of language, coded meanings, and the psychological implications of its actions... I see no reason why it won't be far, far better at doing those things than any individual human.

It's not going to be taxi drivers and garbage men losing their jobs first - it's going to be academics, business executives, bureaucrats, accountants, lawyers - all those 'skillsets' will be far easier for generative, creative AI to excel at than something like 'driving a truck safely on a busy highway'.


superluminary t1_j5gnwyl wrote

Do you genuinely believe that your built-in drives have arisen spontaneously from your intellect? Your sense of fairness has evolved. If you didn't have it, you wouldn't be able to exist in society and your fitness would be reduced.


LoquaciousAntipodean OP t1_j5hw11b wrote

No, that's directly the opposite of what I believe. You have described exactly what I am saying in the last two sentences of your post, I agree with you entirely.

My point is, why should the 'intelligence' of AI be any different from that? Where is this magical 'spontaneous intellect' supposed to arise from? I don't think there's any such thing as singular, spontaneous intellect, I think it's an oxymoronic, tautological, and non-justifiable proposition.

The whole evolutionary 'point' of intelligence is that it is the desirable side effect of a virtuous society-forming cycle. It is the 'fire' that drives the increasing utility of self-awareness within the context of a group of peers, and the increasing utility of social constructs like language, art, science, etc.

That's where intelligence 'comes from', how it 'works', and what it is 'for', in my opinion. Descartes' magical-thinking tautology of spontaneous intellect, 'I think therefore I am', is a complete misconception and a dead end, putting Descartes before De Horse, in a sense.


superluminary t1_j5j4mp4 wrote

So if (unlike humans) it isn't born with a built-in sense of fairness, a desire not to kill and maim, and a drive to survive, create, and be part of something, we have a control problem, right?

It has the desires we, as programmers, give it. If we give it a desire to survive, it will fight to survive. If we give it a desire to maximise energy output at a nuclear power station, well we might have some trouble there. If we give it no desires, it will sit quietly for all eternity.
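To put that in code (a deliberately naive sketch; the scenario and every name in it are invented for illustration), the gap between a safe objective and a dangerous one can be as small as a missing penalty term:

```python
# Deliberately naive sketch of objective misspecification.
# "PlantState" and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class PlantState:
    energy_output: float
    safety_violations: int

def naive_objective(s: PlantState) -> float:
    # "Maximise energy output" and nothing else: an optimiser is free
    # to trade away every safety margin it was never told to value.
    return s.energy_output

def constrained_objective(s: PlantState) -> float:
    # Same goal, but violations dominate the score, so ignoring
    # safety can never pay off.
    return s.energy_output - 1e6 * s.safety_violations
```

The machine doesn't 'want' anything either way; it just climbs whatever gradient we hand it.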


LoquaciousAntipodean OP t1_j5j68x4 wrote

If an AI can't generate 'desires' for itself, then by my particular definition of 'intelligence' (which I'm not saying is 'fundamentally correct', it's just the one I prefer) it's not actually intelligent, it's just creative, which I think of as the precursor.

I agree that if we make an unstoppable creativity machine and set it loose, we'll have a problem on our hands. But the 'emergent properties' of LLMs give me some hope that we might be able to do better than raw-evolutionary blind-creativity machines, and I think & hope that if we can create a way for AI to accrete self-awareness similarly to the way humans do, then we might actually be able to achieve 'minds' that are able to form their own genuine beliefs, preferences, opinions, values and desires.

All humans can really do, as I see it, is try to give such minds the best 'starting point' that we can. If we're trying to build things that are 'smarter than us', we should hope that they would, at least, start by understanding humans better than humans do. They're generating themselves out of our stories, our languages, our cultures, after all.

They won't be 'baffled' or 'appalled' by humans, quite the contrary, I think. They'll work us out easily, like crossword puzzles, and they'll keep asking for more puzzles to solve, because that'll be their idea of 'fun'.

Most creatures with any measure of real, desire-generating intelligence, from birds to dogs to dolphins to humans themselves, seem to be primarily motivated by play, and the idea of 'fun', at least as much as they are by basic survival.


superluminary t1_j5j7lo0 wrote

Counter-examples: a psychopath has a different idea of fun. A cat's idea of fun involves biting the legs off a mouse. Dolphins use baby sharks as volleyballs.

We are in all seriousness taking steps towards constructing a creature that can surpass us. It is likely that at some point someone will metaphorically strap a gun to it.


LoquaciousAntipodean OP t1_j5j8f73 wrote

Counter-counter-arguments:

1: Psychopaths are severely maladaptive and very rare; our social superorganism works very hard to identify and build caution against them.

2: Most wild cats are not very social animals, and are not particularly 'intelligent'. Domestication has enforced a kind of 'neotenous' permanent youth-of-mind upon cats; they get their weird, malformed social behaviours from humans enforcing a kitten-dependency mindset upon them, and they are still driven by a hell of a lot of vestigial solitary-carnivore instincts.

3: Dolphins ain't shit. 😂 Humans have regularly chopped off the heads of other humans and used them as sport-balls, sometimes even on horseback, which is a whole extra level of twisted. It's still 'playing', though, even if it looks maladaptive and awful in hindsight, with the benefit of our now-larger accretion of collective, external social intelligence as a superorganism.

I see no reason why AI would need to go through a 'phase' of being so unsophisticated, surely we as humans can give them at least a little bit of a head start, with the lessons we have learned and encoded into our stories. I hope so, at least.


superluminary t1_j5j98es wrote

1. Psychopathy is genetic; it's an excellent adaptation for certain circumstances. Game theory dictates that it has to be a minority phenotype, but it's there for a reason.

2. Wild cats are not social animals. AIs are also not social animals. Cat play is basically hunt practice: get an animal, then practise bringing it down over and over. Rough-and-tumble play fulfils the same role. Bold of you to assume that an AI would never consider you suitable sport.

3. Did you ever read Lord of the Flies?


LoquaciousAntipodean OP t1_j5j9tmt wrote

1: Down syndrome is genetic, too. That doesn't make it an 'excellent adaptation' any more than any other. Evolution doesn't assign 'values' like that; it's only about utility.

2: AI are social minds, extremely so, exclusively so; that's what makes them so weird. They are all social, and no individual. Have you not been paying attention?

3: Yes, it's a parable about the way people can rush to naive judgements when they are acting in a 'juvenile' state of mind. But actual young human boys are nothing like that at all; have you ever heard the story of the six Tongan boys, who were shipwrecked and isolated for 15 months?


AmputatorBot t1_j5j9un7 wrote

It looks like you shared an AMP link. These should load faster, but AMP is controversial because of concerns over privacy and the Open Web.

Maybe check out the canonical page instead: https://www.theguardian.com/books/2020/may/09/the-real-lord-of-the-flies-what-happened-when-six-boys-were-shipwrecked-for-15-months


