Submitted by JClub t3_10fh79i in MachineLearning

Overview of RLHF training

You must have heard about ChatGPT. Maybe you heard that it was trained with RLHF and PPO, but perhaps you don't really understand how that process works. Then check out my gist on Reinforcement Learning from Human Feedback (RLHF): https://gist.github.com/JoaoLages/c6f2dfd13d2484aa8bb0b2d567fbf093

No hard maths, straight to the point and simplified. Hope that it helps!

71

Comments

dataslacker t1_j4xd5aj wrote

That’s a nice explanation, but I’m still unclear on the motivation for RL. You say the reward isn’t differentiable, but since it’s just a label that tells us which of the outputs is best, why not simply use that output for supervised training?

7

JClub OP t1_j4xgp2x wrote

You're not the first person to ask me that question! I need to add a more detailed explanation for that :)

The reward is non-differentiable because it was produced by a reward model, and this reward model takes text as input. That text was obtained by decoding the log probabilities output by your model. This decoding step is non-differentiable, so we lose the gradient link between the LM and the reward model.

Does this make sense? Also, if the reward is given directly by a human, instead of a reward model, it's clearer that this reward is non-differentiable.

RL helps transform this non-differentiable reward into a differentiable loss :)
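To make it concrete, here's a tiny PyTorch sketch (toy names, not code from the gist) of where the gradient chain gets cut:

```python
import torch
import torch.nn.functional as F

# Toy LM output: logits over a 5-token vocabulary for a single position.
logits = torch.randn(1, 5, requires_grad=True)
probs = F.softmax(logits, dim=-1)          # still differentiable w.r.t. logits

# "Decoding" = picking discrete token ids (greedy or sampled).
token_ids = torch.argmax(probs, dim=-1)    # or torch.multinomial(probs, 1)

print(probs.grad_fn)      # a SoftmaxBackward node: gradients can still flow
print(token_ids.grad_fn)  # None: integer ids carry no gradient, so a reward
                          # computed from the decoded text can't reach the LM
```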

5

dataslacker t1_j4yraoc wrote

Sorry, I don't think I did a great job asking the question. The reward model, as I understand it, will rank the N generated responses from the LLM. So why not take the top-ranked response as ground truth (or a weak label, if you'd like) and train in a supervised fashion, predicting the next token? This would avoid the RL training, which I understand is inefficient and unstable.
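Concretely, the proposal would be something like this hypothetical sketch (HuggingFace-style names; `reward_model` is assumed to be a callable that maps a decoded string to a score, which is just an assumption for illustration):

```python
import torch

def best_of_n_sft_step(lm, tokenizer, reward_model, prompt, optimizer, n=4):
    """Sample N responses, keep the highest-scoring one, train on it with cross-entropy."""
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample N candidate responses from the current LM.
    candidates = [lm.generate(**inputs, do_sample=True, max_new_tokens=64) for _ in range(n)]

    # Score each decoded candidate with the (frozen) reward model.
    scores = torch.tensor([reward_model(tokenizer.decode(c[0])) for c in candidates])
    best = candidates[scores.argmax().item()]

    # Ordinary supervised next-token prediction on the best candidate.
    loss = lm(input_ids=best, labels=best).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```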

2

JClub OP t1_j4z57kr wrote

Yes, the reward model can rank model outputs, but it does that by giving a score to each output. You want to train with this score, not with the "pseudo labeling" you're describing. But the reward score is non-differentiable, and RL helps to construct a differentiable loss. Does that make sense?
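For intuition, here's a minimal REINFORCE-style sketch (a toy example, not the exact PPO loss from the gist): the loss is the scalar reward times the log-probabilities the LM assigned to the tokens it generated, so the gradient flows through the log-probs even though the reward itself carries no gradient.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 3, 5, requires_grad=True)   # (batch, seq_len, vocab) from the LM
generated_ids = torch.tensor([[2, 4, 1]])            # decoded tokens (integers, no gradient)

log_probs = F.log_softmax(logits, dim=-1)
token_log_probs = log_probs.gather(-1, generated_ids.unsqueeze(-1)).squeeze(-1)

reward = 0.7   # scalar from the reward model (or a human); treated as a constant

# Policy-gradient surrogate loss: differentiable w.r.t. the LM parameters.
loss = -(reward * token_log_probs.sum())
loss.backward()
print(logits.grad.shape)   # torch.Size([1, 3, 5])
```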

1

dataslacker t1_j4z8zm4 wrote

Yes, your explanations are clear and match how I understood the paper, but I feel like some motivation for the RL training is missing. Why not "pseudo labeling"? Why is the RL approach better? Also, the reward score is non-differentiable because it was designed that way, but it could have been designed to be differentiable. For example, instead of decoding the log probs, why not train the reward model on them directly? You can still obtain the labels via decoding, but that doesn't mean the decoded text has to be the input to the reward model. There are a number of design choices the authors made that are not motivated in the paper. I haven't read the references, so maybe they are motivated elsewhere in the literature, but RL seems like a strange choice for this problem since there isn't a dynamic environment that the agent is interacting with.

3

JClub OP t1_j4zejga wrote

Yes, 100% agree with you. I believe that the researchers have also tried pseudo labeling or making the reward differentiable as you say, and maybe RL is the SOTA approach now. But these are just guesses!

1

mtocrat t1_j4zecpm wrote

What you're describing is a general approach to RL that is used in different forms in many methods: sample actions, weight or rank them in some way by the estimated return, and regress to the weighted actions. So you're not suggesting doing something other than RL; you're suggesting replacing one RL approach with a different RL approach.
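A toy sketch of that sample / weight / regress loop, for a discrete-action policy (purely illustrative, not from any specific paper):

```python
import torch
import torch.nn.functional as F

logits = torch.zeros(4, requires_grad=True)          # toy policy over 4 actions
actions = torch.tensor([0, 2, 2, 3])                 # sampled actions
returns = torch.tensor([0.1, 1.0, 0.8, 0.2])         # estimated return of each sample

# Regress toward the sampled actions, weighted by (softmax-normalised) return:
# it looks like weighted supervised learning, but it is still an RL update.
weights = torch.softmax(returns, dim=0)
ce = F.cross_entropy(logits.expand(4, 4), actions, reduction="none")
loss = (weights * ce).sum()
loss.backward()
```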

2

crazymonezyy t1_j4yjtuz wrote

Amongst other things, RL's major benefit is learning from a sequence of rewards rather than simply "a reward", which would be the assumption if you treat this as an SL problem. Do remember that IID observations are one of the fundamental premises of SL.

1

Ouitos t1_j50cm0i wrote

Hi, thanks for the explanation !

Two comments :

> 1. Make "New probs" equal to "Initial probs" to initialize.

Shouldn't it be the opposite? Make the initial probs equal to the first occurrence of new probs? I mean, equality is symmetric, but as written it reads as if you change new probs to be equal to initial probs, which contradicts the diagram saying that new probs is always the output of our LM.

> loss = min(ratio * R, clip(ratio, 0.8, 1.2) * R)

Isn't the min operation redundant with the clip? How is that different from min(ratio * R, 1.2 * R)? Does 0.8 have any influence at all?

2

JClub OP t1_j51h8up wrote

> Shouldn't it be the opposite?

Yes, that makes more sense. Will change!

> How is that different from min(ratio * R, 1.2 * R)? Does 0.8 have any influence at all?

Maybe I did not explain properly what the clip is doing. If you have ratio=0.6, then it becomes 0.8, and if it is > 1.2, it becomes 1.2.
Does that make more sense? Regarding the min operation, it's just a heuristic to choose the smaller update tbh.
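If it helps, here's the formula from the gist evaluated on a few values (plain Python, just showing which term the min picks):

```python
def ppo_term(ratio, R, low=0.8, high=1.2):
    clipped = min(max(ratio, low), high)
    return min(ratio * R, clipped * R)

print(ppo_term(0.6, R=1.0))    # 0.6  -> min(0.6, 0.8): the unclipped term wins
print(ppo_term(1.5, R=1.0))    # 1.2  -> capped by the 1.2 upper clip
print(ppo_term(0.6, R=-1.0))   # -0.8 -> with a negative R, the 0.8 side is the one that bites
```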

2

Ouitos t1_j54nh7v wrote

Yes, but if you have a ratio of 0.6, you then take the min of 0.6 * R and 0.8 * R, which is ratio * R. In the end, the clip is only effective one way, and the 0.8 lower limit is never used. Or maybe R has a particular property that makes this not as straightforward?

2

JoeHenzi t1_j4yowtu wrote

Taking a look - I want to implement this in my application to explore the parameter space and shoot for an optimum, but I'm finding ChatGPT gets very cagey on the topic lately. I explored the topic of genetic algorithms, which it suggested would be less computationally expensive, but then it decided not to really help me get to coding it.

EDIT: This is exactly my use case...

1

JClub OP t1_j4z5ciu wrote

This package is pretty simple to use! https://github.com/lvwerra/trl

It supports decoder-only models like GPT, and encoder-decoder models like T5 are in the process of being supported.
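A rough sketch of a single PPO step with it, roughly following the library's README at the time (the exact API, e.g. `respond_to_batch`, may differ between versions, so treat this as an approximation):

```python
import torch
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead
from trl.core import respond_to_batch

# Policy with a value head, plus a frozen reference copy for the KL penalty.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

ppo_trainer = PPOTrainer(PPOConfig(batch_size=1), model, ref_model, tokenizer)

query = tokenizer.encode("This morning I went to the ", return_tensors="pt")
response = respond_to_batch(model, query)

# In a real loop this score would come from a reward model on the decoded response.
reward = [torch.tensor(1.0)]
stats = ppo_trainer.step([query[0]], [response[0]], reward)
```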

1

JoeHenzi t1_j50pbv9 wrote

I'll take a look, thanks again. At the very least I'm building up a dataset that could be interesting to analyze or crunch. I'd love to implement a GA to explore the space, and I have the example code from ChatGPT, but I need to dive deeper. As I may have mentioned in my GitHub comment, when I try to do predictions around parameters I end up blocking/slowing the API call, so either my code is trash (likely!) or I'm trying to do too much at once.

On my short-term list is using a T5-like model to produce summaries, but I was trying to execute them at bad times and making too many changes at once.

Thanks again for sharing. I'm enjoying playing in this space and love finding people willing to share (unlike OpenAI, who are slowly closing the world out of their toys).

2