
BSartish t1_jciy4nt wrote

Reply to comment by liright in Those who know... by Destiny_Knight

This video explains it pretty well.

17

ThatInternetGuy t1_jcj2ew8 wrote

Why didn't they train once more on ChatGPT instruct data? It should cost them about $160 in total.
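The training pass suggested here amounts to supervised fine-tuning on (instruction, response) pairs. A minimal sketch of how such instruct data gets formatted into training examples, loosely in the style of Alpaca's prompt template (the template wording, field names, and sample data below are illustrative assumptions, not the actual pipeline):

```python
# Sketch: turning ChatGPT-style instruct data into supervised
# fine-tuning examples. Template and field names are illustrative.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(instruction: str, response: str) -> dict:
    """Pair a formatted prompt with its target completion."""
    return {
        "prompt": PROMPT_TEMPLATE.format(instruction=instruction),
        "completion": response,
    }

# Hypothetical pairs collected from the ChatGPT API.
raw = [
    ("Explain RLHF in one sentence.",
     "RLHF fine-tunes a model against a reward model learned "
     "from human preference rankings."),
]

dataset = [build_example(i, r) for i, r in raw]
```

Each `dataset` entry would then go into a standard causal-LM fine-tuning loop, with the loss computed on the completion tokens.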

11

CellWithoutCulture t1_jcjkwy1 wrote

Most likely they haven't had time.

They can also use SHP and HF-RLHF.... I think those will help a lot, since LLaMA didn't get the privilege of reading Reddit (unlike ChatGPT).

9

ThatInternetGuy t1_jckmq5s wrote

>HF-RLHF

Probably no need, since this model can piggyback on the responses generated by GPT-4, so it should carry over the traits of the RLHF-tuned GPT-4 model, shouldn't it?

3

CellWithoutCulture t1_jcmsxjq wrote

HF-RLHF is the name of the dataset. As for RLHF itself... what they did to LLaMA is called "knowledge distillation", and IIRC it usually isn't quite as good as RLHF. It's an approximation.
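A toy sketch of the distinction being drawn here, with a stubbed-out teacher standing in for an RLHF-tuned model's API (everything below is an illustrative assumption, not the actual Alpaca pipeline):

```python
# Sketch: sequence-level knowledge distillation. A "teacher" (e.g. an
# RLHF-tuned model behind an API, stubbed out here) answers prompts,
# and the student is then fine-tuned to imitate those answers. The
# student never sees the human preference signal that shaped the
# teacher, which is why distillation only approximates RLHF.

def teacher_generate(prompt: str) -> str:
    """Stand-in for querying the teacher model (hypothetical)."""
    canned = {"What is SHP?": "SHP is a dataset of Reddit preference pairs."}
    return canned.get(prompt, "I'm not sure.")

def distill(prompts: list[str]) -> list[tuple[str, str]]:
    """Collect (prompt, teacher response) pairs as the student's corpus."""
    return [(p, teacher_generate(p)) for p in prompts]

corpus = distill(["What is SHP?"])
```

`corpus` would then feed a plain supervised fine-tuning loop; no reward model or policy-gradient step is involved, unlike full RLHF.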

3