Kylaran t1_ix6th4k wrote

Former developmental psychology student here — the reward function for humans is unbelievably complex, and RL draws a lot of its assumptions from classical behaviorist principles rather than cognitive or statistical ones. One reason cognitive science emerged was to tackle exactly this issue, the poverty of the stimulus argument à la Chomsky: human children learn language without much explicit feedback at all.

In RL and NLP, there’s a lot of research in areas like content recommendation systems and using RL as a feedback loop in chatbots. In these cases, the language model already exists, and the RL component is used to feed reward signals back into that pretrained model.
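For illustration only, here is a minimal sketch of that setup, not anything from a specific paper: the pretrained LM acts as the policy, a stand-in reward function scores its outputs, and a crude REINFORCE-style update nudges the model toward higher-reward replies. The model name "gpt2", the `toy_reward` function, the prompt, and the hyperparameters are all assumptions made for the example.

```python
# Sketch: RL feedback on top of an existing language model (assumes torch + transformers installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # pretrained LM = the policy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def toy_reward(text: str) -> float:
    # Placeholder reward: prefer replies that say "thanks". A real system would
    # use human feedback or a learned reward model here.
    return 1.0 if "thanks" in text.lower() else 0.0

prompt = "User: Could you help me with my homework?\nBot:"
inputs = tokenizer(prompt, return_tensors="pt")

for step in range(3):
    # Sample a continuation from the current policy.
    generated = model.generate(
        **inputs, do_sample=True, max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply_ids = generated[0, inputs["input_ids"].shape[1]:]
    reply = tokenizer.decode(reply_ids, skip_special_tokens=True)
    reward = toy_reward(reply)

    # Crude REINFORCE-style signal: outputs.loss is the mean negative
    # log-likelihood of the sampled sequence; scaling it by the reward
    # pushes the policy toward sequences that scored well.
    outputs = model(generated, labels=generated)
    loss = reward * outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: reward={reward:.1f} reply={reply!r}")
```

Real systems add a baseline, a KL penalty against the original model, and a learned reward model, but the basic shape is the same: the language model is fixed up front and RL only shapes it afterwards.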

Learning the language model itself using only reward would be a fundamentally different philosophical and empirical challenge for science.

2

blazejd OP t1_ix7kazf wrote

Glad to hear a non-ML perspective on it! Initializing with language models and then using RL for feedback makes a lot of sense. Could you share any particular papers that I could look into?

1