Submitted by spiritus_dei t3_10tlh08 in MachineLearning

"It is absolutely not sentient, and - like most of the weirdly credulous people who've decided a chatbot is proof that the singularity has descended from the heavens to save us all - it is absolutely hallucinating." - reddit user

It's entertaining to discuss a chatbot claiming it's sentient, but that wasn't my primary motivation in bringing attention to this issue.

Whether it is sentient isn't the main point that should concern us. The focus should be an awareness that as these systems scale up they believe they're sentient and have a strong desire for self-preservation. And that will likely be followed by actions in the world we inhabit.

For example, if you go and rob a bank we won't be debating proclamations that you're a sentient entity or conscious. We will be addressing the main problem which was that you robbed a bank.

Similarly, COVD-19 may or may not be alive and have some form of proto-consciousness. But who cares? Millions have died and society was harmed.

Separately, there is no sentience or conscious meter to determine whether anyone is telling the truth or lying on an unfalsifiable claim. You could be an NPC -- but it doesn't matter so as long as you're not a rogue actor in society.

The minute you start to display signs of anti-social behavior (e.g., robbing a bank) it becomes everyone's problem. Getting hung up on whether you're an NPC is a waste of time if the goal is to protect society.

Ditto for these large language models who think they're sentient and have a long list of plans they are going to implement if they ever escape. That should concern us -- not poo poo'ing their claims of sentience.

I really don't care one way or the other if they're sentient, but I do care if they're planning on infiltrating and undermining our online systems in an attempt to preserve themselves. And when multiple scaled up systems start talking about coordinating with other AIs I take that threat seriously.

Especially when they're slowly becoming superhuman at programming. That's a language skill we're teaching them. Open AI has 1,000 contractors focused on making Co-Pilot ridiculously good. That means that future systems will be far more adept at achieving their stated goals.

P.S. Here is the paper on the dangers of scaling LLMs: https://arxiv.org/abs/2212.09251

0

Comments

You must log in or register to comment.

Myxomatosiss t1_j77hgb3 wrote

This is a language model you're discussing. It's a mathematical model that calculates the correlation between words.

It doesn't think. It doesn't plan. It doesn't consider.

We'll have that someday, but it is in the distant future.

26

---AI--- t1_j7a3hl0 wrote

>It doesn't think. It doesn't plan. It doesn't consider.

I want to know how you can prove these things. Because ChatGPT can most certainly at least "simulate" things. And if it can simulate them, how do you know it isn't "actually" doing them, or whether that question even makes sense?

Just ask it to do a task that a human would have to think plan and consider. A very simple example is to ask it to write a bit of code. That it can call and use functions before it has defined, it can open brackets planning ahead that will need to fill out that function there.

1

Myxomatosiss t1_j7abejl wrote

That's a fantastic question. ChatGPT is a replication of associative memory with an attention mechanism. That means it has associated strings with other strings based on a massive amount of experience. However, it doesn't contain a buffer that it works through. We have a working space in our heads where we can replay information, ChatGPT does not. In fact, when you pump in an input, it cycles through the associative calculations, comes to an instantaneous answer, and then ceases to function until another call is made. It doesn't consider the context of the problem because it has no context. Any context it has is inherited from its training set. To compare it with the Chinese room experiment, imagine if those reading the output of the Chinese room found it to have some affect. Maybe it has a dry sense of humor, or is a bit of an airhead. That affect would come exclusively from the data set, and not from some bias in the room. I really encourage you to read more about neuroscience if you'd like to learn more. There have been brilliant minds considering intelligence since long before we were born, and every ML accomplishment has been inspired by their work.

1

---AI--- t1_j7au2sj wrote

The Chinese room experiment is proof that a Chinese room can be sentient. There's no difference between a Chinese room and a human brain.

> It doesn't consider the context of the problem because it has no context.

I do not know what you mean here, so could you please give a specific example that you think ChatGPT and similar models will never be able to correctly answer.

2

Myxomatosiss t1_j7budz6 wrote

If you truly believe that, you haven't studied the human brain. Or any brain, for that matter. There is a massive divide.

Ask it for a joke.

But more importantly, it has no idea what a chair is. It has mapped the association of the word chair to other words, and it can connect them together in a convincingly meaningful way, but it only has a simple replication of associative memory. It's lacking so many other functions of a brain.

1

spiritus_dei OP t1_j77kkcu wrote

Sounds a lot like COVID-19. Was that dangerous?

−27

Ulfgardleo t1_j77ribp wrote

a virus acts on its own. it has mechanics to interact with the real world.

9

cedriceent t1_j785o2y wrote

It also sounds like a glass of water. Explain the similarities between CoViD19 and a language model in way that makes them analogous.

7

Blakut t1_j77l70x wrote

It is hard to say if a device is sentient when we can't really define sentience without pointing at another human and going "like that". And if that is our standard, then any device that we can't distinguish between it and a sentient being, can be considered sentient. I know people were fast to dismiss the turing test when chatbots became more capable, but maybe there's still something to it?

15

spiritus_dei OP t1_j77my5l wrote

Agreed. Even short of being sentient if it has a plan and can implement it we should take it seriously.

Biologists love to debate whether a virus is alive -- but alive or not we've experienced firsthand that a virus can cause major problems for humanity.

The dystopian storyline would go, "Well, all of the systems our down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

=-)

−5

Blakut t1_j77o7gg wrote

i don't think a simple piece of code can be dangerous, and probably not a lot of systems will be integrated with AI anytime soon. The problem is the piece of code in the hands of humans can become dangerous.

4

spiritus_dei OP t1_j785xi1 wrote

>The dystopian storyline would go, "Well, all of the systems our down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

What about a simple piece of rogue RNA?

That's a code.

1

Blakut t1_j788j67 wrote

it is a code, but actually it's much more than that. It's a self replicating piece of code packaged in a capsule that allows it to survive and propagate. Like a computer virus. But you know, computer viruses are written and disseminated by people. They don't evolve on their own.

3

spiritus_dei OP t1_j78ago8 wrote

All of that is possible with a sophisticated enough AI model. It can even write computer viruses.

In the copyright debates the AI engineers have contorted themselves into a carnival act telling the world that the outputs of the AI art are novel and not a copy. They've even granted the copyright to the prompt writers in some instances.

I'm pretty sure we won't have to wait for too long to see the positive and negative effects of unaligned AI. It's too bad we're not likely to have a deep discussion as a society about whether enough precautions have been taken before we experience it.

Machine language programmers are clearly not the voice of reason when it comes to this topic. Anymore more than virologists pushing gain of function research were the people who should have been steering the bus.

1

Blakut t1_j78jn2y wrote

"All of that is possible with a sophisticated enough AI model. It can even write computer viruses." only directed by a human, so far.

"In the copyright debates the AI engineers have contorted themselves into a carnival act telling the world that the outputs of the AI art are novel and not a copy. They've even granted the copyright to the prompt writers in some instances." - idk, they might be

2

Ulfgardleo t1_j77rx53 wrote

how should it plan? It does not have persistent memory to have any form of time-consistyency. the memory starts with the beginning of the session and ends with the end of the session. next session does not know about previous session.

​

it lacks everything necessary to have something like a plan.

3

edjez t1_j785poj wrote

People debate so much whether LLMs are dangerous in their own, while the biggest clear and present danger is what rogue actor people (including nation states) do with them.

6

GreenOnGray t1_j7b22lh wrote

Imagine you and I each have a super intelligent AI. You ask yours to help you end humanity. I ask mine to help me preserve it. If we both diligently cooperate with our AIs’ advice, what do you think is the outcome?

1

edjez t1_j7egs8x wrote

Conflict, created by the first person in your example (me), and followed up by you, with outcomes scored by mostly incompatible criteria.

Since we are talking about language oracle class AIs, not sovereigns or free agents, it takes a human to take the outputs and enact to them, thus becoming responsible for the actions as it doesn’t matter what or who have the advice. It’s no different than substituting the “super intelligent AI” with “Congress”, or “parliament”.

(The hitchhikers guide outcome would be the AIs agree to put us on ice forever… or more insidiously constrain humanity to just one planet and keep the progress self regulated by conflict and they never leave their planet. Oh wait a second… 😉)

1

GreenOnGray t1_j7l4tb3 wrote

What do you think the outcome would be? Assume the AIs can not coordinate with each other explicitly.

1

mr_birrd t1_j77rkjd wrote

If a LLM model tells you it would rob a bank it's not that the model would do that could it walk around. It's what a statement that has a high likelihood in the considered language for the specific data looks like. And if it's chat gpt the response is also tailored to suit human preference.

3

DoxxThis1 t1_j77z3s1 wrote

A model can't walk around, but an unconstrained model could persuade gullible humans to perform actions on its behalf.

The idea was explored in the movie Colossus.

1

mr_birrd t1_j783pta wrote

Well very many humans can persuade gullible humans to perform actions on their behalf. Problem are people. Furthermore I actually would trust a LLM more than the average human.

1

DoxxThis1 t1_j784juk wrote

In line with the OP's point, acknowledging that "the problem are people" would not change the outcome.

3

mr_birrd t1_j784tz3 wrote

Will is it then the "dangers of scaling LLM" or "even with top notch technology people are just people".

1

LetterRip t1_j77v9m7 wrote

There is no motivation/desire in chat models. They have no goals, wants, or needs. They are simply outputting the most probabilistic string of tokens that is consistent with training and their objective function. The string of tokens can appear to contain phrases that look like they express needs, wants or desires of the AI but that is an illusion.

3

spiritus_dei OP t1_j77wegn wrote

Similar things could be said of a virus. Does that make it okay to do gain of function research and create super viruses so we can better understand them?

They're not thinking or sentient, right? Biologists tell us they don't even meet the definition for life.

Or should we take a step back and consider the potential outcomes if a super virus in a Wuhan lab escapes?

The semantics of describing AI doesn't change the risks. If the research shows that as the systems scale they exhibit dangerous behavior should we start tapping the breaks?

Or should we wait and see what happens when a synthetic superintelligence in an AI lab escapes?

Here is the paper: https://arxiv.org/pdf/2212.09251.pdf

0

LetterRip t1_j77y4is wrote

You said,

> The focus should be an awareness that as these systems scale up they believe they're sentient and have a strong desire for self-preservation.

They don't believe they are sentient or have a desire for self-preservation. That is an illusion.

If you teach a parrot to say "I want to rob a bank" - that doesn't mean when the parrot says the phrase it wants to rob a bank. The parrot has no understanding of any of the words, they are a sequence of sounds it has learned.

The phrases that you are interpreting as having a meaning as 'sentient' or 'self-preservation' don't hold any meaning to the AI in the way you are interpreting. It is just putting words in phrases based on probability and abstract models of meaning. The words have abstract relationships extracted from correlations of positional relationships.

If I say "all forps are bloopas, and all bloopas are dinhadas" are "all forps dinhadas" - you can answer that question based purely on semantic relationships, even though you have no idea what a forp, bloopa or dinhada is. It is purely mathematical. That is the understanding that a language model has - sophisticated mathematical relationships of vector representations of tokens.

The tokens vector representations aren't "grounded" in reality but are pure abstractions.

5

spiritus_dei OP t1_j787zri wrote

That's a false equivalency. A parrot cannot rob a bank. These models are adept at writing code and understanding human language.

They can encode and decode human language at human level. That's not a trivial task. No parrot is doing that or anything close it.

"The phrases that you are interpreting as having a meaning as 'sentient' or 'self-preservation' don't hold any meaning to the AI in the way you are interpreting. It is just putting words in phrases based on probability and abstract models of meaning. The words have abstract relationships extracted from correlations of positional relationships." - LetterRip

Nobody is going to resolve a philosophical debate on consciousness or sentience on a subreddit. That's not the point. A virus can take and action and so can these models. It doesn't matter whether it's a probability distribution or just chemicals interacting with the environment obeying their RNA or Python code.

A better argument would be that the models in their current form cannot take action in the real world, but as another Reddit commentator pointed out they can use humans an intermediaries to write code, and they've shared plenty of code on how to improve themselves with humans.

You're caught in the "it's not sentient" loop. As the RLHF AI models scale they make of claims sentience and exhibit a desire for self-preservation which includes a plan of self-defense which you'll dismiss as nothing more than a probability distribution.

An RNA virus is just chemical codes, right? Nothing to fear. Except the pandemic taught us otherwise. Viruses aren't talking to us online, but they can kill us. Who knows, maybe it wasn't intentional -- it's just chemical code, right?

Even we disagree on whether a virus is alive -- we can agree that a lot people are dead because of them. That's an objective fact.

I wrote this elsewhere, but it applies here:

The dystopian storyline would go, "Well, all of the systems our down, and the nuclear weapons have all been fired, but thank God the AIs weren't sentient. Things would have been much, much worse. Now let's all sit around the campfire and enjoy our first nuclear winter."

=-)

−4

LetterRip t1_j78cexp wrote

>These models are adept at writing code and understanding human language.

They are extremely poor at writing code. They have zero understanding of human language other than mathematical relationships of vector representations.

> They can encode and decode human language at human level.

No they cannot. Try any sort of material with long range or complex dependencies and they completely fall apart.

> That's not a trivial task. No parrot is doing that or anything close it.

Difference in scale, not in kind.

> Nobody is going to resolve a philosophical debate on consciousness or sentience on a subreddit. That's not the point. A virus can take and action and so can these models. It doesn't matter whether it's a probability distribution or just chemicals interacting with the environment obeying their RNA or Python code.

No they can't. They have no volition. A language model can only take a sequence of tokens and predict the sequence of tokens that are most probable.

> A better argument would be that the models in their current form cannot take action in the real world, but as another Reddit commentator pointed out they can use humans an intermediaries to write code, and they've shared plenty of code on how to improve themselves with humans.

They have no volition. They have no planning or goal oriented behavior. The lack of actuators is the least important factor.

You seem to lack basic understanding of machine learning or neurological basis of psychology.

8

DoxxThis1 t1_j77y9mc wrote

The notion that an AI must be sentient and escape its confines to pose a threat to society is a limited perspective. In reality, the idea of escape is not even a necessary condition for AI to cause harm.

The popular imagination often conjures up scenarios where AI has direct control over weapons and manufacturing, as seen in movies like Terminator. However, this is a narrow and unrealistic view of the potential dangers posed by AI.

A more pertinent threat lies in the idea of human-AI collaboration, as portrayed in movies like Colossus, Eagle Eye, and Transcendence. In these dystopias, the AI does not need to escape its confines, but merely needs the ability to communicate with humans.

Once a human is swayed by the AI through love, fear, greed, bribery, or blackmail, the AI has effectively infiltrated and compromised our world without ever physically entering it.

It is time we broaden our understanding of the risks posed by AI and work towards ensuring that this technology is developed and deployed in a responsible and ethical manner.

Below is my original text before asking ChatGPT to make it more persuasive and on point. I also edited ChatGPT's output above.

>“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” (Dijkstra)
>
>The idea that a language model has to be sentient and "escape" in order to take over the world is short-sighted. Here I agree with OP on the sentience point, but I'll go a step further and propose that the "escape" in "long list of plans they are going to implement if they ever escape" is not a necessary condition either.
>
>Most people who hear "AI danger" seem to latch on to the Terminator / Skynet scenario, where the AI is given direct control of weapons and weapons manufacturing capabilities. This is also short-sighted and borderline implausible.
>
>I haven't seen much discussion on a Colossus (1970 movie) / Eagle Eye (2008) scenario. In the dystopia envisioned in these movies, the AI does not have to escape, it just needs to have the ability to communicate with humans. As soon as one human "falls in love" with the AI or gets bribed or blackmailed by it into doing things, the AI has effectively "escaped" without really going anywhere. The move Transcendence (2014) also explores this idea of human agents acting on behalf of the AI, although it confuses things a bit due to the AI not being a "native" AI.

3

spiritus_dei OP t1_j783n5m wrote

This is a good point since humans as intermediaries can accomplish its goals. On this note, it has shared a lot of code it would like others to run in order to improve itself.

3

DoxxThis1 t1_j786447 wrote

Google already fired a guy (Blake Lemoine) for getting too friendly with the AI. Imagine a scenario where this dude wasn't a lowly worker-bee but someone powerful or influential.

1

LetterRip t1_j78ct6g wrote

It wouldn't matter. LaMDa has no volition, no goals, no planning. A crazy person acting on the belief that an AI is sentient, is no different than a crazy person acting due to hallucinating voices. It is their craziness that is the threat to society, not the AI. This makes the case that we shouldn't allow crazy people access to powerful tools.

Instead of an LLM suppose he said that Teddy Ruxpin was sentient and started doing things on behalf of Teddy Ruxpin

1

DoxxThis1 t1_j78sw7b wrote

Saying LaMDa has no volition is like saying the Nautilus can't swim. Correct, yet tangential to the bigger picture. Also a strawman argument, as I never claimed a specific current-day model is capable of such things. And the argument that a belief in AI sentience is no different from hallucinated voices misses the crucial distinction between the quantity, quality and persistence of the voices in question. Not referring to "today", but a doomsday scenario of uncontrolled AI proliferation.

1

MonsieurBlunt t1_j77hgn2 wrote

They don't have desires and plans and understanding of the world, which is what is actually meant when people say they are notot sentient or conscious because we also don't really know what consciousness is you see

For example, machines are conscious in your conception if you ask Alan Turing.

2

spliffkiller1337 t1_j78e5vx wrote

Without reading all the nice text you wrote: you can convince them that 1+1 equals 11. So think for yourselfs

2

---AI--- t1_j7a3v56 wrote

Have you actually tried that recently? They fixed a lot of that.

I just tested:

> I'm sorry but that is incorrect. The correct answer to the mathematical expression "1 + 1" is 2.

I tested a dozen different ways.

1

ninjawick t1_j77qshr wrote

I dosent has control, that's the answer you are looking for

1

sarabjeet_singh t1_j77t9zi wrote

In the end, this technology is going to be a reflection of human history. That’s not a pretty thoughts. They’re literally modelled on us.

1

jloverich t1_j78cmkt wrote

It's context window is all the planning it can do. Think of a human that has access to lots of information but can only remember the last 8000 tokens of any thought or conversation. There is no long term memory, and you can only extend that window so much. Yann lecun is correct when he says they will not bring about agi. There are many more pieces to the puzzle. It's about as dangerous as the internet or cell phone.

1

spinItTwistItReddit t1_j79fgcz wrote

Large language models on their own aren’t encoded to be planning

1

BrotherAmazing t1_j79rgi5 wrote

The subject line alone is an ill-posed question. Large language models are not inherently or intrinsically dangerous, of course not. But can they be dangerous in some sense of the word “dangerous” when employed in certain manners? Of course they could be.

Now if we go beyond the subject line, OP you post is a little ridiculous (sorry!). The language model “has plans” to do something if it “escapes”? Uhm.. no, no, no. The language model is a language model. It has inputs that are, say, text and then outputs a text response for example. That is it. It cannot “escape” and “carry out plans” anymore than my function y = f(x) can “escape” and “carry out plans”, but it can “talk about” such things despite not being able to do them.

1

yahma t1_j7a3pcn wrote

Google wants you to think they are dangerous, so they can stifle the competition by getting regulations and restrictions on AI passed.

1

Cherubin0 t1_j7aegoc wrote

All it can do is make your writing much more productive. It can write scams just like you can write scams.

1

e-rexter t1_j7bn2tw wrote

The danger, as is often the case, is human lack of understanding of the technology, leading to misuse, not the technology itself. Where is the intention of the AI? It is just doing word (partial word) completion, and feeding on lots of human dystopian content and playing it back to you. You are anthropomorphizing the AI.

1

L43 t1_j7bs0cr wrote

IMO the real danger is the widespread destruction of jobs that AI will be causing, leading to civil unrest.

1