
MysteryInc152 t1_j7g83pw wrote

GLM-130B is really good; see the HELM core-scenario results: https://crfm.stanford.edu/helm/latest/?group=core_scenarios

I think some instruction tuning is all it needs to match the text-davinci models
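For a sense of what that would involve, here's a minimal instruction-tuning sketch with Hugging Face Transformers. It uses gpt2 as a small stand-in (fine-tuning GLM-130B itself needs multi-GPU infrastructure), and the `instructions.jsonl` file with `instruction`/`output` fields is a hypothetical dataset, not anything GLM or OPT was actually tuned on.

```python
# Minimal instruction-tuning sketch (supervised fine-tuning on instruction/response pairs).
# Assumptions: "gpt2" is only a stand-in checkpoint, and "instructions.jsonl" with
# "instruction"/"output" fields is a hypothetical dataset.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset

model_name = "gpt2"  # stand-in; swap in the real checkpoint you want to tune
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

def format_example(ex):
    # Concatenate instruction and response into a single training string.
    text = f"Instruction: {ex['instruction']}\nResponse: {ex['output']}"
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    # Standard causal-LM objective: labels are the input ids.
    # (A real setup would usually mask padding and the prompt in the labels.)
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

ds = load_dataset("json", data_files="instructions.jsonl")["train"].map(format_example)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./sft", per_device_train_batch_size=4, num_train_epochs=1),
    train_dataset=ds,
)
trainer.train()
```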


Cheap_Meeting t1_j7j70tj wrote

That's not my takeaway. GLM-130B is even behind OPT on mean win rate, and the instruction-tuned version of OPT is in turn worse than FLAN-T5, which is a 10x smaller model (Table 14 in https://arxiv.org/pdf/2212.12017.pdf).


MysteryInc152 t1_j7ja39c wrote

I believe the fine-tuning dataset matters as much as the model itself, but I guess we'll see. I think they plan on instruction-tuning it.

The dataset used to tune OPT doesn't contain any chain-of-thought examples.
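To illustrate the difference in data format (the field names here are hypothetical, not the actual OPT or FLAN schemas): a plain instruction example maps a prompt straight to the final answer, while a chain-of-thought example also spells out the intermediate reasoning in the target.

```python
# Hypothetical instruction-tuning records; field names are illustrative only.

# Plain instruction example: prompt -> final answer.
plain_example = {
    "instruction": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                   "How many balls does he have now?",
    "output": "11",
}

# Chain-of-thought example: the target includes the intermediate reasoning steps,
# which is the kind of data the OPT instruction-tuning set reportedly lacks.
cot_example = {
    "instruction": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                   "How many balls does he have now?",
    "output": "Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
              "5 + 6 = 11. The answer is 11.",
}
```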
