
inglandation t1_jdjvmqe wrote

> you can't model one bit with it, it has no predictive power and it kind of shuts down discussions.

For now, yes, my statement is not very helpful. But this is a phenomenon that happens in other fields. In physics, waves or snowflakes are emergent phenomena, yet you can still model them pretty well and make useful predictions about them. Life is another example. We understand life pretty well (yes, there are aspects we don't understand), but it's not clear how we get from organic compounds to living creatures. Put those molecules together in the right amounts and under the right conditions for long enough, and they start developing the structures of life. How? We don't know yet, but that doesn't stop us from understanding life and describing it pretty well.

Here we don't really know what we're looking at yet, so it's more difficult. We should figure out what structures emerge from the training.

I don't disagree that LLMs "just" predict the next token, but there is an internal structure that picks the right word, and it is not trivial. This structure is emergent. My hypothesis here is that understanding this structure will let us understand how the AI "thinks". It might also shed some light on how we think, since the human brain probably does something similar (though maybe not very similar). I'm not making any definitive statement; I don't think anyone can. But I don't think we can conclude that the model doesn't understand what it is doing just from the fact that it predicts the next token.
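
To make the "just predicts the next token" part concrete, here is a minimal sketch (assuming the Hugging Face transformers library, PyTorch, and GPT-2 as a stand-in model, none of which come from the comment above). It prints the model's probability distribution over the next token: whatever internal structure the network has learned, its output is funneled through this single step.

    # Minimal sketch: next-token prediction with GPT-2
    # (assumes transformers and torch are installed)
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "Snowflakes are an emergent"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

    next_token_logits = logits[0, -1]        # scores for the next token only
    probs = torch.softmax(next_token_logits, dim=-1)
    top_probs, top_ids = torch.topk(probs, 5)

    # Show the five most likely continuations and their probabilities
    for p, i in zip(top_probs, top_ids):
        print(f"{tokenizer.decode(i.item()):>12}  {p.item():.3f}")

The interesting question is what happens inside the forward pass that produces those probabilities, not the sampling step at the end.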

I think the next decades will be about precisely describing what cognition/intelligence is, and under exactly what conditions it can appear.
