
harharveryfunny t1_jcchnkp wrote

That's a bogus comparison, for a number of reasons such as:

  1. These models are learning vastly more than language alone

  2. These models are learning in an extraordinarily difficult way with *only* "predict next word" feedback and nothing else (see the sketch after this list)

  3. Humans learn in a much more efficient, targeted way, via curiosity-driven knowledge-gap filling

  4. Humans learn via all sorts of modalities in addition to language. Having already learnt a concept, we only need to be given a name for it once for it to stick
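To make point 2 concrete, here's a minimal sketch of what "predict next word" feedback looks like as a training objective. This is an illustrative toy in PyTorch, not the code behind any actual LLM; the tiny model, the sizes, and the random tokens are all made up for demonstration.

```python
import torch
import torch.nn as nn

# Toy "language model": embedding + linear layer. Vocabulary size,
# embedding width, and the random data below are purely illustrative.
vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 16))   # a random 16-token "sentence"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # target = input shifted by one

logits = model(inputs)                           # (1, 15, vocab_size) scores
loss = loss_fn(logits.reshape(-1, vocab_size),   # the *only* training signal:
               targets.reshape(-1))              # cross-entropy on the next token
loss.backward()
optimizer.step()
```

Everything the model "knows" has to arrive through that single next-token loss, which is the contrast with the curiosity-driven, multimodal human learning in points 3 and 4.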

6

Necessary-Meringue-1 t1_jcm5mye wrote

>These models are learning vastly more than language alone

A child growing up does too.

>These models are learning in an extraordinarily difficult way with *only* "predict next word" feedback and nothing else

Literally the point: LLMs do not learn language like humans at all. Unless you're trying to say that you and I are pure Skinner-type behaviorist learners.

1

Alimbiquated t1_jcd2z4g wrote

I agree that comparing these learning processes to brains is bogus.

There is a general tendency to assume that if something seems intelligent, it must be like a human brain. It's like assuming that because it's fast, a car must have legs like a horse and eat oats.

0

Necessary-Meringue-1 t1_jcm6j79 wrote

>There is a general tendency to assume that if something seems intelligent, it must be like a human brain. It's like assuming that because it's fast, a car must have legs like a horse and eat oats.

Ironic, because that is literally what that article is doing.

1

Alimbiquated t1_jcmi1fd wrote

Right, it makes no sense.

1

Necessary-Meringue-1 t1_jcmjqhm wrote

I don't understand why it's so hard for people to acknowledge that LLMs deliver extremely impressive results, but that this does not mean they have human-like intelligence or language understanding.

1