Acceptable-Cress-374 t1_izigh23 wrote

Would this improve with some prompt engineering? Could you perhaps use the LLM to first provide itself some context and then answer the question (in what becomes a few-shot attempt)? In other words, is it worth training for zero-shot, or can we use the LLM to self-provide some context and answer the prompt in a self-generated few-shot setting? Does my question even make sense?
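For what it's worth, the idea in the question can be sketched as a two-stage prompt: first ask the model to write its own background context, then feed that context back in alongside the question. This is just a minimal sketch; `llm` here is a hypothetical completion function (prompt in, text out), standing in for whatever LLM API you actually use, and the prompt wording is illustrative, not from the paper.

```python
def self_contextualize(question, llm):
    """Two-stage 'self-provided context' prompting sketch.

    `llm` is a hypothetical callable: prompt string -> completion string.
    """
    # Stage 1: have the model generate its own supporting context.
    context_prompt = (
        "Provide relevant background knowledge for answering the "
        f"following question:\n{question}\n\nBackground:"
    )
    context = llm(context_prompt)

    # Stage 2: answer the question conditioned on the self-generated context.
    answer_prompt = (
        f"Background:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(answer_prompt)
```

Whether this actually helps is an empirical question (see the reply below re: prompt templates), but the mechanics are cheap to try.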

mrx-ai OP t1_izijamw wrote

You might want to look at p. 8 of the paper. The authors evaluate three different models (GPT-3-175B, InstructGPT-3-175B, and text-davinci-002) with different prompt templates, but none of them shows improved performance. The variance of the results for text-davinci-002 is particularly high, and the best prompt template reaches only 74.5% accuracy.
