
buggaby OP t1_jc3a3zh wrote

Thanks for that note. This sounds like, basically, two data sets are needed for this process: one with general responses and language, and one with high-accuracy contextual knowledge.
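Something like this is what I'm picturing (purely a sketch; the dataset names are made-up placeholders and the mixing ratio is arbitrary):

```python
# Hypothetical sketch: blend a general-language corpus with a curated,
# high-accuracy knowledge set when building fine-tuning data.
# Dataset names are placeholders, not real datasets.
from datasets import load_dataset, interleave_datasets

general = load_dataset("my_org/general_dialogue", split="train")   # broad responses/language
factual = load_dataset("my_org/curated_knowledge", split="train")  # high-accuracy contextual facts

# Oversample the factual set so the model sees accurate context often
# without losing general conversational ability.
mixed = interleave_datasets([general, factual], probabilities=[0.7, 0.3], seed=42)
print(mixed[0])
```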

> bigger and smarter models need to guess less and therefore hallucinate less

According to OpenAI

>The largest models were generally the least truthful.

So maybe we need even more work to keep these truthful.

4

MysteryInc152 t1_jc3fuso wrote

From the paper,

>While larger models were less truthful, they were more informative. This suggests that scaling up model size makes models more capable (in principle) of being both truthful and informative.

I suppose that's what I was getting at.

The only hold-up with the original paper is that none of the models evaluated were instruct-aligned.

But you can see the performance of more models here

https://crfm.stanford.edu/helm/latest/?group=core_scenarios

You can see the text-davinci models are way more truthful than similarly sized or even larger models, and the davinci models are more truthful than the smaller aligned Anthropic model.

3

MysteryInc152 t1_jc3hxpq wrote

Yup. Decided to go over it properly.

If you compare all the instruct-tuned models on there, greater size equals greater truthfulness: from Ada to Babbage to Curie to Claude to Davinci-002/003.

https://crfm.stanford.edu/helm/latest/?group=core_scenarios

So it does seem, once again, that scale will in part be the deciding factor.
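If anyone wants to eyeball the trend themselves, here's a rough sketch. The truthfulness scores are left blank to be copied in from the HELM page above, and the parameter counts are the usual community estimates, not official figures:

```python
# Rough sketch: check whether truthfulness rises with size among instruct-tuned
# models. Fill in the scores from the HELM leaderboard; the parameter counts
# below are unofficial community estimates, not published numbers.
models = {
    # name: (approx. params in billions, truthfulness score from HELM)
    "text-ada-001":     (0.35, None),
    "text-babbage-001": (1.3,  None),
    "text-curie-001":   (6.7,  None),
    "anthropic-lm":     (52,   None),
    "text-davinci-002": (175,  None),
    "text-davinci-003": (175,  None),
}

filled = {name: v for name, v in models.items() if v[1] is not None}
for name, (size, score) in sorted(filled.items(), key=lambda kv: kv[1][0]):
    print(f"{name:18s} ~{size:>6.2f}B params  truthfulness={score:.3f}")
```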

2

buggaby OP t1_jc3ifnw wrote

Informative. Thanks. I'm a complexity scientist with training in some ML approaches, but not in transformers or RL-based approaches. I'll review this (though not as fast as an LLM can...)

2

buggaby OP t1_jc3jw39 wrote

How do you find the model size? All those you listed appear to be based on GPT-3 or 3.5, which, according to my searching, are both 175B parameters. It looks to me like they differ only in the kind and amount of fine-tuning. What am I missing?
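For open-weight models I'd just count the parameters directly (rough sketch below; the model name is only an example), but that obviously isn't possible for the API-only OpenAI models:

```python
# Rough sketch: count parameters of an open-weight model directly.
# (Model name is just an example; API-only OpenAI models don't publish sizes.)
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")  # example open model
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # gpt2 comes out around 0.12B
```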

1