
calciumcitrate t1_j0asr1n wrote

> GPT-3 contains an understanding of the world, or at least the text world. So does Wikipedia, so does a dictionary. The contents of the dictionary are meaningful. But nobody would say that the dictionary understands the world.

What differentiates GPT-3 from a database of text is that GPT-3 seems to contain some representations of concepts that make sense outside of a text domain. It's that ability to create generalizable representations of concepts from sensory input that constitutes understanding.

> I think if you fed a human nonsense information since birth, the person would withdraw from everything and become catatonic. Bombarding them with random sensory experiences which didn't match their actions would result in them carrying out no actions at all.

Maybe my analogy wasn't clear. The point I was trying to make was that if your argument is:

GPT-3 holds no understanding because you can feed it data with patterns not representative of the world, and it'll learn those incorrect patterns.

Then my counter is:

People fed incorrect data (i.e. incorrect sensory input) would also learn incorrect patterns. E.g. someone who feels cold things as hot and hot things as cold is being given incorrect sensory patterns (ones that aren't representative of real-world temperature) and, as a result, forms an incorrect idea of what "hot" and "cold" things are, i.e. doesn't properly understand the world.

My point being that it's the learned representations that determine understanding, not the architecture itself. Of course, if you gave a model completely random data with no correlations at all, then the model would not train either.
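To make that last point concrete, here's a minimal toy sketch (mine, nothing to do with GPT-3's actual training setup; the feature dimensions and sample counts are arbitrary): the same model either picks up a pattern or learns nothing, depending entirely on whether the data contains one.

```python
# Toy illustration: identical architecture, structured vs. random labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))

# "Real" labels: a genuine pattern (sign of a linear combination of features).
y_real = (X @ rng.normal(size=20) > 0).astype(int)
# "Nonsense" labels: pure noise, no correlation with the inputs at all.
y_noise = rng.integers(0, 2, size=2000)

for name, y in [("structured data", y_real), ("random data", y_noise)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: held-out accuracy = {acc:.2f}")
```

On the structured labels the held-out accuracy is near 1.0; on the random labels it hovers around chance, which is the sense in which "the model would not train."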

5

calciumcitrate t1_j0amrhw wrote

But a model is just a model - it learns statistical correlations* within its training data. If you train it on nonsense, it will learn nonsense patterns. If you train it on real text, it will learn patterns within that, but patterns within real text also correspond to patterns in the real world, albeit in a way that's heavily biased toward text. If you fed a human nonsense sensory input since birth, they'd produce an "understanding" of that nonsense sensory data as well.

So, I don't think it makes sense to assign "understanding" based on the architecture alone, as a model is a combination of both its architecture and the data you train it on. Rather, if you have a trained model that captures representations that are generalizable and representative of the real world, then I think it'd be reasonable to say that those representations are meaningful and that the model holds an understanding of the real world. So the extent to which GPT-3 has an understanding of the real world is the extent to which the underlying representations learned from pure text data correspond to real-world patterns.
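A very crude way to probe that last sentence (my own sketch, using GPT-2 as a stand-in for GPT-3 since its weights are public; the specific sentences and the expected similarity ordering are just illustrative assumptions):

```python
# Crude probe: do representations learned from text alone place
# real-world-related concepts closer together than unrelated ones?
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def embed(text):
    # Mean-pool the final hidden states into a single vector for the text.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def cos(a, b):
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

query = embed("snow is freezing")
print(cos(query, embed("ice is cold")))       # roughly expect: highest
print(cos(query, embed("an oven is hot")))    # roughly expect: lower
print(cos(query, embed("a stock portfolio"))) # roughly expect: lower still
```

To the degree the similarities line up with real-world relatedness, the text-only representations are capturing something about the world; to the degree they don't, they aren't.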

* This isn't necessarily a direct reply to anything you said, but I feel like people use "correlations" as a way to discount the ability of statistical models to learn meaning. I think people used to say the same thing about models just being "function approximators." Correlations (and models) are just a mathematical lens with which to view the world: everything's a correlation -- it's the mechanism in the model that produces those correlations that's interesting.

5

calciumcitrate t1_izigomm wrote

/u/tetrisdaemon Any idea what part of the diffusion process might be causing the failure modes? (the latent representations, CLIP embeddings, cross-attention conditioning, etc.)

My initial guess was that maybe the CLIP embeddings aren't fine-grained enough to represent some relationships between entities in a sentence, but if I understand correctly, the cross-attention conditioning adds some additional text supervision (I'm assuming X in eqs. 4 and 5 is some transformer representation of the prompt) - and it does seem like some dependency relationships are being captured.
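For reference, here's roughly what I mean by cross-attention conditioning (my own toy PyTorch sketch, not the paper's code; the dimensions are arbitrary placeholders): the image latents act as the queries over per-token text representations, so the conditioning is finer-grained than a single pooled CLIP vector.

```python
# Toy cross-attention conditioning block: image latents attend to prompt tokens.
import torch
import torch.nn as nn

class CrossAttentionCondition(nn.Module):
    def __init__(self, latent_dim, text_dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(latent_dim, heads, kdim=text_dim,
                                          vdim=text_dim, batch_first=True)

    def forward(self, latents, text_tokens):
        # latents:     (batch, num_patches, latent_dim)  -- spatial image features
        # text_tokens: (batch, num_tokens, text_dim)     -- per-token prompt features
        attended, _ = self.attn(query=latents, key=text_tokens, value=text_tokens)
        return latents + attended  # residual conditioning on the prompt

block = CrossAttentionCondition(latent_dim=320, text_dim=768)
out = block(torch.randn(2, 64, 320), torch.randn(2, 77, 768))
print(out.shape)  # torch.Size([2, 64, 320])
```

If the failure modes survive even with per-token conditioning like this, that would point more toward the latent representations or the text encoder itself than toward the pooled-embedding bottleneck.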

1