
blueSGL t1_j2z28ow wrote

> Wisdom of the Crowd

Something I recently saw mentioned by Ajeya Cotra is to query the LLM by feeding its previous output back in and asking whether it's correct, repeating this multiple times, and averaging the answers. That averaged result is more accurate than just taking the first answer (something that sounds weird to me).
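A rough sketch of what I understand that to mean, in Python. `query_llm` is a hypothetical stand-in for whatever API you're actually calling, and the exact prompt wording is my own guess, not Cotra's:

    # Hypothetical sketch of the "ask the model to check itself, repeatedly" idea.
    def query_llm(prompt: str) -> str:
        """Stand-in for a real LLM API call; swap in your client of choice."""
        raise NotImplementedError

    def self_checked_answer(question: str, n_checks: int = 5) -> tuple[str, float]:
        # First pass: get the model's answer the normal way.
        answer = query_llm(question)

        # Repeatedly feed that answer back and ask the model to judge it.
        verdicts = []
        for _ in range(n_checks):
            verdict = query_llm(
                f"Question: {question}\n"
                f"Proposed answer: {answer}\n"
                "Is the proposed answer correct? Reply only 'yes' or 'no'."
            )
            verdicts.append(verdict.strip().lower().startswith("yes"))

        # Average the repeated judgements into a confidence score
        # instead of trusting any single response.
        confidence = sum(verdicts) / len(verdicts)
        return answer, confidence

So instead of trusting the first response, you end up with the answer plus an averaged self-assessment of it.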

Well, ok, viewed from the vantage point that the models are already very good at certain things and people just haven't worked out how to prompt/fine-tune them correctly yet, it's not that weird. It's more that the base-level outputs are shockingly good, and then someone introduces more secret sauce and makes them even better. The problem with this is that there's no saying what the limits of the models that already exist are.
