
mrpogiface t1_j7g03gj wrote

Do we actually know that ChatGPT is the full 175B? With Codex being 13B and still enormously powerful, and the previous instruction-tuned models (in the paper) being 6.7B, it seems likely that they have it running on a much smaller parameter count.

7