Single_Blueberry t1_jcjvh6o wrote

Again, can't find a reliable source for that.

I personally doubt that GPT-4 is significantly larger than GPT-3.x, simply because that would further inflate inference cost, which you generally want to avoid in a product (as opposed to a research feat).

Better architecture, better RLHF, more and better training data, more training compute? All of that seems reasonable.

Orders of magnitude larger again? I don't think so.


Single_Blueberry t1_jcjsxa1 wrote

>the fact that GPT 4 may be two magnitude orders bigger than GPT 3

I'm not aware of any reliable sources that claim that.

Intuitively, I don't see why it would stop hallucinating. I imagine the corpus - as big as it may be - doesn't contain many examples of the concept of "not knowing the answer".

That's something people express a lot in private conversation, but not in written language on the public internet or in books, which afaik is where most of the training data comes from.