JClub OP t1_j4v057p wrote

Yeah, InstructGPT is like that. How do you calculate a reward score for each output in this ranking scenario?

1

koolaidman123 t1_j4v2uyq wrote

It's just a binary pairwise comparison of which of the two outputs is preferred. Read the InstructGPT paper or the wandb post: https://wandb.ai/carperai/summarize_RLHF/reports/Implementing-RLHF-Learning-to-Summarize-with-trlX--VmlldzozMzAwODM2#train-the-reward-model
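
The loss is just -log(sigmoid(r_chosen - r_rejected)), i.e. maximize the probability that the preferred output gets the higher score. Rough sketch in PyTorch (assuming your reward model already outputs a scalar per sequence):

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # InstructGPT-style pairwise loss: -log(sigmoid(r_chosen - r_rejected)),
    # averaged over the batch of comparison pairs
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# scalar rewards the model assigned to each output in a comparison pair
r_chosen = torch.tensor([1.2, 0.3])     # human-preferred outputs
r_rejected = torch.tensor([0.4, -0.1])  # rejected outputs
loss = pairwise_reward_loss(r_chosen, r_rejected)
```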

2

JClub OP t1_j4v5d0y wrote

Ah right, then you can just use the model's reward directly, or pass it through a sigmoid so that the reward is between 0 and 1!
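
Something like this at PPO time (a sketch; `rm` here is a stand-in for however you call your trained reward model, not a real API):

```python
import torch

reward = rm(query, response)       # raw scalar score from the reward model
reward_01 = torch.sigmoid(reward)  # optionally squash into (0, 1) for a bounded reward
```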

Do you think that the sigmoid is needed?

2