
anti-torque t1_j8vbhkb wrote

> to teach the models to do more complex tasks based on human preferences.

so... predictive

>Also, and this is more of a nitpick, but "next word" would be greedy search....

This is fair. "Word" is too simple a unit. It picks up phrases and maxims.


gurenkagurenda t1_j8vnao5 wrote

>so... predictive

No, not in any but the broadest sense of that word, which would apply to any model that outputs text. In particular, it is not "search out the most common next word", because "most common" is not the criterion it is trained on. Satisfying the reward model is not a matter of matching a corpus. Read the article I linked.
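To make the greedy-search distinction concrete, here's a minimal sketch (toy vocabulary and probabilities are made up for illustration): greedy decoding always takes the argmax of the next-token distribution, while deployed models typically *sample* from it, so lower-probability tokens are sometimes chosen.

```python
import numpy as np

# Hypothetical next-token scores for a tiny made-up vocabulary.
vocab = ["cat", "dog", "fish", "bird"]
logits = np.array([2.0, 1.5, 0.5, 0.1])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(logits)

# Greedy search: always pick the single most probable token.
greedy_token = vocab[int(np.argmax(probs))]  # "cat" (argmax of logits)

# Sampling: draw from the full distribution; less likely tokens
# can still be selected, which is why "most common next word"
# doesn't describe what the model actually does at inference.
rng = np.random.default_rng(0)
sampled_token = vocab[int(rng.choice(len(vocab), p=probs))]
```

Note that neither step involves the training criterion; the reward-model objective shapes the distribution itself, not the decoding rule.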
