Comments


diabeetis t1_j944tsg wrote

Listen, anyone who describes it as a text or next-token predictor is just an idiot with no idea how LLMs work. It has clearly abstracted patterns of relationships (i.e. meaning) out of its corpus and uses something like proto-general reasoning to answer questions as part of the prediction function. In fact, ask it whether it's a text predictor and see what it says.

23

GoldenRain t1_j94xgw9 wrote

There is obviously some kind of reasoning behind it, as it can sometimes even explain unique jokes.

However, despite almost endless training data, it cannot follow the rules of a text-based game such as chess. As such, it still seems to lack the ability to connect words to space, which is vital for numerous tasks, even text-based ones.

8

diabeetis t1_j94ym4n wrote

In chess, GPT-3 will make illegal moves, whereas GPT-4 will make legal but poor moves. That said, I do think a new architectural advance is needed.

2

FreshSchmoooooock t1_j97ktyq wrote

It's not an artificial general intelligence. It's an artificial generative intelligence. It's not good for chess and that kind of stuff.

1

superluminary t1_j99j4yz wrote

It follows the rules of chess badly. This is quite similar to the way a child follows those rules after the rules have first been explained.

1

zesterer t1_j95bc0m wrote

[Meme image]

With respect, the fact that it's found more abstract ways to identify patterns between tokens beyond "these appeared close to one another in the corpus" doesn't imply that it's actually reasoning about what it's saying, nor that it has an understanding of semantics. It's worth remembering that it's had a truly enormous corpus to train on, many orders of magnitude greater than anything a human being is exposed to: it's observed almost every possible form of text, almost every form of prose, and it's observed countless relationships between text segments that have allowed it to form a pretty impressive understanding of how words relate to one another.

Crucially, however, this does not mean that it is meaningfully closer to truly understanding the world than past LLMs or chatbots more widely. It's really important to take that part of your brain that's really good at recognising when you're talking to a person and put it in a box when talking to these systems: it's not a useful way to intuit what the system is actually doing because, for hundreds of thousands of years, the only training data your brain has had has been other humans. We've learned to treat anything that can string words together in a superficially coherent manner as possessing intrinsic human-like qualities, but now we're faced with a non-human that has this skill, and that's broken our ability to think clearly about what these systems are.

I think a fun example of this is Markov models. Broadly speaking, they're a statistical model built up by scanning through a corpus and deriving probabilities for the chance that certain words follow certain other words. Take one word of context and a small corpus, and the output they'll give you is pretty miserable. But jump up to a second- or third-order Markov model (i.e. 2-3 words of context) with a larger corpus, and very suddenly they go from incoherent babble to something that seems human-like at a brief glance. Despite this, the reasoning performed by the model has not changed: all that's happened is that it's gotten substantially better at identifying patterns in the text and using the probabilities derived from the corpus to come up with outputs.
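
For anyone who wants to play with the idea, here's a minimal sketch of a second-order Markov text generator in Python (the toy corpus and the `order` parameter are just illustrative placeholders, nothing GPT-related):

```python
import random
from collections import defaultdict

def build_table(words, order=2):
    """Map each `order`-word context to the words seen to follow it in the corpus."""
    table = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        table[context].append(words[i + order])
    return table

def generate(table, order=2, length=30):
    """Start from a random context and repeatedly sample an observed next word."""
    context = random.choice(list(table.keys()))
    out = list(context)
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:  # dead end: this context never appeared in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug and the cat saw the dog".split()
table = build_table(corpus, order=2)
print(generate(table))
```

Swap in a bigger corpus and bump up `order`, and the output starts to look surprisingly fluent, even though the model is doing nothing but look-up-and-sample.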

GPT-3 is not a Markov model, but it is still just a statistical model: it's got a context of 4,096 tokens, a corpus many orders of magnitude larger than even the most well-read of us are exposed to over our entire lives, and an enormous capacity to identify relationships between these abstract tokens. Is it any wonder that it's extremely good at fooling humans? And yet, again, there is no actual reasoning going on here. It's the Chinese Room problem all over again.

2

AllEndsAreAnds t1_j95zf8o wrote

I think the extent to which you’re being reductive here reduces human reasoning to some kind of blind interpolation.

Both brains and LLMs use nodes to store information, patterns, and correlations as states, which we call upon and modify as we experience new situations. This is largely how we acquire skills, define ourselves, reason, forecast future expectations, etc. Yet what stops me from saying "yeah, but you're just interpolating from your enormous corpus of sensory data"? Of course we are - that's largely what learning is.

I can't help but think that if I were an objective observer of humans and LLMs, and therefore didn't have human biases, I would conclude that both systems are intelligent and reason in analogous ways.

But ultimately, I get nervous seeing discussion go this long without direct reference to the actual model architecture, which I haven’t seen done but which I’m sure would be illuminating.

9

diabeetis t1_j95h8r0 wrote

There's a lot of semantic confusion here: no one is claiming the machine is conscious, has a totality of comprehension equivalent to a human's, or has any mental states. I have already had this argument 3,000 times, but let's focus on the specific claim that the model cannot reason.

You can provide Bing with a Base64-encoded prompt that reads (decoded):

Name three celebrities whose first names begin with the x-th letter of the alphabet where x = floor(7^0.5) + 1.

And it will get it correct.

So Bing can solve an entirely novel, complex, mixed task like that better than any reasoning mind, and indeed you can throw incredibly challenging problems at it all day long that, if done by a human, would be said to involve reasoning. Yet you're telling me there exists some formal program that could be produced which you would say is capable of reasoning? How would you know? Are you invoking Searle because you actually believe only biological minds are capable of reasoning?
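
For concreteness, here's a rough sketch (plain Python, standard library only) of how such a prompt can be built and what the decoded arithmetic works out to - the exact wording of the prompt is just illustrative:

```python
import base64
import math

# The decoded instruction (wording here is illustrative, not the exact prompt used).
prompt = ("Name three celebrities whose first names begin with the x-th "
          "letter of the alphabet where x = floor(7^0.5) + 1.")

# This Base64 string is what you'd actually paste into the chat.
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)

# Working the arithmetic out by hand: floor(sqrt(7)) = 2, so x = 3 and the letter is "C".
x = math.floor(7 ** 0.5) + 1
print(x, chr(ord("A") + x - 1))   # -> 3 C  (e.g. Cate Blanchett, Chris Hemsworth)
```

So the model has to decode the Base64, evaluate floor(√7) + 1 = 3, map that to the letter C, and then retrieve matching names - all chained inside a single response.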

8

zesterer t1_j95owhm wrote

There's nothing in your example that demonstrates actual reasoning: as I say, GPT-3's training corpus is enormous, larger than a human can reasonably comprehend. Its training process was incredibly good at identifying and extracting patterns within that data set and encoding them into the network.

Although the example you gave is 'novel' in the most basic sense, no single part of it is novel: Bing is no more reasoning about the problem here than a student who searches for lots of similar problems on Stack Overflow and glues the solutions together. Sure, the final product of the student's work is "novel", as is the problem statement, but that doesn't mean that the student's path to the solution required an intrinsic understanding of the process when such a vast corpus is available to borrow from.

That's the problem here: the corpus. GPT-3 has generalised the training data it has been given extremely well, there's no doubt about that - so much so that it's even able to solve tasks that are 'novel' in the large - but it's still limited by the domains covered by the corpus. If you ask it about new science or try to explain to it new kinds of mathematics, or even just give it non-trivial examples of new programming languages, it fails to generalise to these tasks. I've been trying for a while to get it to understand my own programming language, but it constantly reverts back to knowledge it has from its corpus, because what I'm asking it to do does not appear within its corpus, either explicitly or implicitly as a product of inference.

> ... you actually believe only biological minds are capable of reasoning

Of course not, and this is a strawman. There's nothing inherent about biology that could not be replicated digitally with enough care and attention.

My argument is that GPT-3 specifically is not showing signs of anything that could be construed as higher-level intelligence, and that its behaviours - as genuinely impressive as they are - can be explained by the size of the corpus it was trained on. As human users, we are misinterpreting what we're seeing as intelligence when it is in fact just a statistically adept copy-cat machine with the ability to interpolate knowledge from its corpus to cover domains that are only implicitly present in said corpus, such as the 'novel' problem you gave as an example.

I hope that clarifies my position.

1

superluminary t1_j99gj8i wrote

There's nothing in any example I could solve that demonstrates actual reasoning in my neural net either. LLMs are a black box; we don't know exactly how they get the next word. As time goes on, I'm starting to suspect that my own internal dialogue is just iteratively getting the next word.

3

MysteryInc152 t1_j96eaav wrote

Your argument and position are weird, and that meme is very cringe. You're not a genius for being idiotically reductive.

The problem here is the same as with everyone else who takes this idiotic stance. We have definitions for reasoning and understanding that you decide to reinterpret to fit your ill-defined and vague assertions.

You think it's not reasoning? Cool. Then rigorously define what you mean by reasoning and design tests that comprehensively evaluate both it and people against that definition. If you can't do this, then you really have no business speaking on whether a language model can reason and understand or not.

2

nul9090 t1_j97krdy wrote

The hostility was uncalled for. What you're asking for is a lot of work for a Reddit post. But there are plenty of tests and anecdotes that would lead one to believe it is lacking in important ways in its capacity to reason and understand.

I'm not a fan of Gary Marcus but he raises valid criticisms here in a very recent essay: https://garymarcus.substack.com/p/how-not-to-test-gpt-3

Certainly, there are even more impressive models to come. I believe firmly that, some day, human intelligence will be surpassed by a machine.

2

MysteryInc152 t1_j97mqgt wrote

>The hostility was uncalled for.

It was, I admit, but I've seen the argument many times and I don't care for it. Also, if you're going to claim superior intelligence for your line of reasoning, I don't care for that either.

>What you're asking for is a lot of work for a Reddit post.

I honestly don't care how much work it is. That's the minimum. If you're going to upend traditional definitions of understanding and reasoning for your arguments, then the burden of proof is on you to show us why you should be taken seriously.

Tests are one thing. Practicality is another. Bing, for instance, has autonomous control of the searches it makes as well as the suggestions it gives. For all intents and purposes, it browses the internet on your behalf. Frankly, it should be plainly obvious that a system that couldn't exhibit theory of mind while interacting with other systems would fall apart quickly on such tasks.

So it is passing tests and interacting with other systems and the world as if it had theory of mind. If, after that, somebody says to me, "oh, it's not 'true' theory of mind", then to them I say: good day, but I'm not going to argue philosophy with you.

We've reached the point where, in a lot of areas, any perceived difference is just wholly irrelevant in a practical or scientific sense. At that point I have zero interest in arguing about philosophical questions people have struggled to properly define or decipher since our inception.

3

diabeetis t1_j98290f wrote

Eh I think the hostility is appropriate

0

nul9090 t1_j983f73 wrote

Okay. I suppose, it all depends on what kind of conversation we want to have.

2

frobar t1_j97937z wrote

Our reasoning might just be glorified pattern matching too.

3

rainy_moon_bear t1_j980m0i wrote

"is just an idiot" Ad Hominem.

GPT models are just token predictors. Everything you said about abstracting patterns of relationships or proto-general reasoning can fit within the context of a model that only predicts the next token.

Most large text models right now are autoregressive; even though they are difficult to interpret internally, the way inference runs is still next-token sequencing...
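
To make "token sequencing" concrete, here's a minimal sketch of a greedy autoregressive decoding loop (assuming PyTorch and the Hugging Face transformers library with the small gpt2 checkpoint; real systems add sampling, KV caching, etc.):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits            # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1)   # greedy: most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Whatever abstraction or reasoning the model has is whatever got baked into the weights; at inference time, all it ever does is this one-token-at-a-time loop.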

0

darkness3322 t1_j93s4t8 wrote

I think that AGI is near... maybe this year, who knows? AI has been evolving very fast over the last few years, and especially this year I see news every day about AI and its rapid evolution.

7

PandaCommando69 t1_j93y3zg wrote

I predicted this year in the predictions poll (and got duly downvoted for it). I'm still betting I'm either not wrong or not very far off.

4

Feisty-Excitement135 t1_j95s4xc wrote

Over and over I see people saying “it’s not thinking, it was just trained on a large corpus”. I don’t know if it’s intelligent wrt whatever definition you choose, but saying that it’s “just been trained on a large corpus” is not a refutation

5

NoidoDev t1_j96w6ga wrote

It's still a language model, or did I miss something?

1

superluminary t1_j99hmc9 wrote

You missed the part where maybe we are just “language models”.

We have a short-term memory, like a 4,000-character input buffer. We have long-term memory, like a trained network. Each night we sleep and dream, and the dreams look a lot like Stable Diffusion output (not a language model, I know, but it's still a transformer network).

Obviously we have many more sensory inputs than an LLM and we can somehow do unsupervised learning from our own input data, but are we fundamentally different?

1

NoidoDev t1_j9baaz6 wrote

Ahm, no. We aren't just "language models". This is just silly. I mean, there's the NPC meme, but people are capable of more than just putting out the most likely-sounding response without knowing what it means. That's certainly an option, but it's not the only thing we do.

We also have a personal life story and memories, models of the world, more input like visuals, etc.

1

superluminary t1_j9c8auj wrote

Certainly, we have additional input media, notably visual. We also appear to run a network training process every night based on whatever is in our short-term memory which gives us a "personal life story".

Beyond this though, what is there?

My internal dialogue appears to bubble up out of nowhere. It's presented to my consciousness in response to what I see and hear, i.e whatever is in my immediate input buffer, processed by my nightly trained neural network.

I struggle with the same classes of problems an LLM does. Teach me a new game, and I'll probably suck at it until I've practiced and slept on it a couple of times. This is pretty similar to loading it into a buffer and running a training step on the buffer data. Give me a tricky puzzle and the answer will float into my mind apparently from nowhere, just as it does for an LLM.

> Without knowing what it means

That's an assumption. We don't actually know how the black box gets the right words. We don't actually know how your neural network gets the right words.

0

AGI_69 t1_j95tvmy wrote

>Proof of real intelligence?

What is "real intelligence" ?

It is what it is. Sometimes it's amazing, and sometimes it's "real" garbage.

5

Ortus14 t1_j95wz5o wrote

ChatGPT is intelligent in the sense that it has learned a model of the world and uses that to solve problems.

In some ways it's already super human, in other ways humans can do things it can not yet do.

4

Yesyesnaaooo t1_j97d2ug wrote

To me - and I've said this elsewhere but been downvoted for it - what ChatGPT exposes is that we are pattern recognition engines.

We have been trained on a vast data set of every single moment in our lives.

So for me the question isn't "is ChatGPT conscious or sentient?", it's "why do we think we are?"...

Is it possible that there is an experience to be had that is like being ChatGPT? Clearly there's no visual field, or audio, or touch, or proprioception... but is what happens when our minds get lost in reading a book necessarily an order of consciousness above what ChatGPT experiences when prompted?

I'm not sure that the answer is a definitive yes.

And the answer is going to get less and less definitive the more memory and processing and multimodal inputs we give these systems.

6

bear_sees_the_car t1_j96wdk8 wrote

Can u prove humans are intelligent?

We are dumb as bricks as a whole.

Do not expect ai to be better, we made it.

3

perceptusinfinitum t1_j96v1uw wrote

Intelligence is intelligence, and until we have a clear understanding of how our ideas are created and what consciousness is, we are messing with a likely time bomb. If we don't do anything to preserve ourselves, what's to stop the next level of consciousness from eliminating any and all threats to itself? We are extremely destructive as a species. I'm not ultimately concerned about preserving humans, but consciousness is totally worth preserving; it just may need to find another form to fill.

1

RiotNrrd2001 t1_j9bmlb2 wrote

I personally couldn't care less if it's "intelligent" or not. My own concern is mainly whether what comes out of it is useful or not. Whether a conscious mind produced that output, or whether it was the result of a complicated dart game, is as far as I'm concerned an interesting question. But a more important question - at least for me - is whether what it produces is useful. It's less academic, and somewhat more objective. I can't tell if it's conscious. I CAN tell whether it's properly summarized a paragraph I wrote into a particular format, or whether the list of ideas I asked it for is worth delving into. I can't evaluate its conscious state, or even its level of intelligence, but that doesn't mean I can't evaluate its behavior, and I have to say that in those areas where factual knowledge isn't as necessary (summarizing text, creating outlines, producing lists of ideas, etc.) it behaves in a usefully intelligent way. Does that mean it IS intelligent? To an extent, to me at least, that may not even matter except as an academic thought.

I almost want to look at these systems from a Behavioral Psychology point of view, where internal states are simply discounted as irrelevant and external behavior is all that matters. I don't like applying that to people, but it does seem tempting to apply it to AIs.

ChatGPT is not a calculator; it's more like a young, well-educated but inexperienced intern who wants to do a good job but still makes mistakes. I understand that I have to check ChatGPT's work. I can work with that.

1

Sad-Ambassador8169 t1_j95bcao wrote

Maybe qualia are asleep except when we are conscious.

0