
BaalKazar t1_izq958r wrote

It did not though.

The subject knew they were talking to a machine and was asked whether they believed the machine might be intelligent or even sentient.

The Turing test requires that the subject does not know they are talking to a machine. The subject then has to mistake the machine for a human for the machine to pass the test.

In the case of LaMDA, the human knew from the beginning that he was talking to a machine. Asking someone whether they believe a machine is intelligent is different from asking someone whether they believe they are talking to a human.

There is money in AI. Hence a lot of caution is advised when for-profit organizations declare themselves the first to pass the test. The first to pass it will get rich from the publicity alone. When it actually is passed, you, me, and everyone else will get blasted across all media channels with this breakthrough.

(OpenAI’s CEO is marketing GPT-4 as the first to pass the test. OpenAI is for-profit and said the same about GPT-3; other companies go the same publicity route without the meat needed to back it up. As long as no human says „yeah, this dialog partner is a human“, the test isn’t passed. A human saying „this machine might be intelligent“ isn’t enough.)

1

SeneInSPAAACE t1_izqad1w wrote

>In case of LaMDA the human knew from the beginning that he is talking to a machine.

So the well was poisoned from the beginning? Isn't that cheating? On the human side?

BTW, allegedly GPT-4 will have 100 TRILLION parameters. Now, again, we can't exactly tell what that means, but human brains have something like 150 trillion SYNAPSES, and that includes all the ones for our bodily functions and motor control, so.... Yeah, it's going to get interesting.
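As a rough sanity check on the numbers above: a minimal back-of-the-envelope sketch, where both figures are rumored or approximate (parameters and synapses are also not directly comparable units, as the comment notes).

```python
# Rough comparison of the rumored GPT-4 size vs. an estimate of human synapses.
# Both numbers are unconfirmed/approximate, taken from the comment above.
gpt4_params_rumored = 100e12   # 100 trillion parameters (rumor, unconfirmed)
human_synapses_est = 150e12    # ~150 trillion synapses (rough estimate)

ratio = gpt4_params_rumored / human_synapses_est
print(f"{ratio:.2f}")  # prints 0.67, i.e. roughly two thirds of the synapse count
```

The point is only scale, not equivalence: a parameter is a single weight, while a biological synapse is a far more complex unit, so the ratio is suggestive at best.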

1

BaalKazar t1_izqerqh wrote

To be honest, yeah, it is. But it’s not that easy or definitive. You’ve got a point, and I don’t want to deny that. The edge between us in this discussion is the fascinating thing about all of this, especially the fact that either of us might be correct, but at the current point in time there is no definitive way to prove it. The Turing test itself is not definitive either.

Currently it looks like GPT itself is going to cheat its way through the Turing test by using a language model that is naturally hard for humans to identify as a machine. When a neural network is trained to pass the test by all means necessary, is it passing the test due to its intelligence, or is its passing pre-determined? (It was trained to pass the test; can it do things beyond the scope of this training?)

There is no clear answer, which imo makes it fascinating. We cannot truly say it is intelligent, but it will very soon reach a point at which it will appear intelligent.

The master question is whether that in itself already is intelligence. It might be! I don’t want to deny that. But we lack the definitive understanding of „intelligence“ needed to truly conclude.

When a neural network passes the test, there will be fierce discussions. These discussions will help us understand what makes up intelligence, they will most likely help with understanding consciousness as well.

But it’s a step by step discovery process on both sides. Passing the Turing test doesn’t automatically mean we suddenly have a clear picture of intelligence or what it looks like. But it is a milestone in being able to understand it. Perhaps humans already created synthetic intelligence without even noticing.

Don’t get me wrong, GPT and co. are fascinating, modern-age magic. The new range of possible tools is breathtaking. Intelligence requires the ability to acquire knowledge and apply it in the form of skills. Digital AI is very close to doing that, but the way it acquires knowledge is very technical and bound to complex engineered models being fed in just the right way. It’s not able to do so on its own. (Just like the brain! But the brain does so with a certain intrinsic ease, which might be purely due to some special, not yet discovered feature unrelated to „intelligence“. Science can’t really tell yet, so we naturally have a hard time setting boundaries for different AI models. Perhaps this current language model isn’t intelligent, but some physics-model AI already was? The physics one can’t „talk“ to us, which makes it easy to miss.)

Currently we are talking to the AI, what we are looking for is the AI starting to talk to us. Perhaps it already did but nobody noticed because we didn’t yet know how to listen.

And yeah, I fully agree, GPT-4 sounds incredible! The steps the industry is marching forward with have gotten huge over the last years, truly fascinating.

1

SeneInSPAAACE t1_izroblm wrote

>The Turing test it self is not definitive either.

Very true. Without poisoning the well, would LaMDA have passed it already? And if I’ve understood correctly, it’s a bit of an idiot outside of putting words in a pleasing order.


>Currently it looks like GPT it self is going to try to cheat it’s way through the Turing test by using a language model which is naturally hard for humans to identify as a machine.

"Cheat" is relative. Can a HUMAN pass a turing test, especially if we restrict the format in which they are allowed to respond?
If it can pass every test a human can, and we still call it anything but intelligent, either we gotta admit our dishonesty, or question whether humans are intelligent.


> it will reach a point very soon at which it will appear intelligent.

Just like everyone else, then. Well, better than some of us.

1

BaalKazar t1_izsnhiv wrote

Now I fully agree with what you said.

Cheating is absolutely relative! How can we tell that something which appears to be intelligent is not? The parallels to how human infants acquire knowledge are striking: parents are the engineers, and the environment is the data set the infant is trained on.

We need to take a closer look at what the Turing test is doing to answer your question of „could a human pass it“. Turing’s approach is not really to measure intelligence (intelligence definitely is a spectrum), yet his test results in a binary yes/no conclusion for a reason. He predicted that by the year 2000, an average human would have no better than a 70% chance of correctly identifying the machine after a five-minute dialogue.

Passing the Turing test, or declaring a machine to be intelligent, is not a scientific milestone in itself; it yields no new knowledge. What passing the test marks is the point in time at which humans must accept that a majority of them can no longer tell whether they are remotely communicating with a human or a machine. (The latest point at which governments need to work on additional legislation, regulation, etc.)
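Turing's 70%/five-minute prediction can be read as a toy pass criterion: the machine "passes" if at least 30% of judges misidentify it as human. A minimal sketch, where the function names, the threshold, and the verdict data are all illustrative rather than any standard benchmark:

```python
# Toy sketch of Turing's 1950 criterion: the machine "passes" if the average
# interrogator's chance of a correct identification after five minutes is no
# better than 70%, i.e. at least 30% of judges are fooled.
# All names and numbers here are illustrative; the verdicts are made-up data.

def turing_pass_rate(verdicts):
    """Fraction of judges who mistook the machine for a human.

    verdicts: list of booleans, True = judge labeled the machine 'human'.
    """
    return sum(verdicts) / len(verdicts)

def passes_turing_criterion(verdicts, fooled_threshold=0.30):
    """True if enough judges were fooled under the 70%/30% reading."""
    return turing_pass_rate(verdicts) >= fooled_threshold

# Example: 10 judges, 4 of whom labeled the machine as human.
verdicts = [True, False, True, False, False, True, False, False, True, False]
print(turing_pass_rate(verdicts))         # 0.4
print(passes_turing_criterion(verdicts))  # True
```

Note how the binary yes/no verdict falls out of an underlying rate: the test compresses a spectrum of judge opinions into a single threshold decision, which is exactly the point made above.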

So, as you correctly pointed out, the test cannot really be cheated. But the test can be passed without intelligence, and intelligence alone is not enough to pass it: a dog is intelligent but could not pass. Passing definitively requires something to seem intelligent to a human.

Star Trek has many episodes that tackle this highly ethical topic of when humans accept something to be intelligent and when we accept that something is sentient. The android Commander Data is definitively intelligent: he acquires knowledge and applies it in the real world. The question about Data is, is he sentient? The show impressively demonstrates how difficult it is to identify intelligence, and even something as seemingly obvious as sentience. There is an episode that concludes a crystalline rock is intelligent based on it emitting energy patterns that can be considered an encoded attempt at communication.

Humans may look intelligence straight in the face and state it’s not intelligent. That’s because we do not yet understand our own intelligence well enough. My point of view is that AI will help us understand our own intelligence. But as long as we cannot grasp our own, how can we grasp something else’s? I believe that pushing back will at some point result in a technology that goes above and beyond to make the claim that it is not intelligent completely obsolete. Star Trek’s Data, for example: there is no denying his intelligence, and interestingly enough, this leads straight to the question of sentience. At least Star Trek is not able to draw a picture that clearly shows the boundary between intelligence and sentience; in its pictures, the two appear to correlate. Something that humans definitively consider intelligent always appears to be sentient at the same time. (Which imo shows that we need a better idea of „intelligence“ before we conclude something is; once we conclude it is intelligent, the scientific path „ends“ before we have truly understood.)

2