
Flag_Red t1_izjo1xv wrote

Yeah, it's totally clear from "let's think step by step"-style prompt engineering that LLMs have the capability to understand this stuff. I'm confident that a few model generations down the line we'll have this handled zero-shot, with no prompt engineering.

The interesting part is why this kind of prompt engineering is necessary. Why is this sort of capability seemingly lagging behind others that are more difficult for humans? ELI5-style explanations, for example, are very hard for humans, but LLMs seem to excel at them. In what ways are these tasks different, and what does that tell us about the difference between LLMs and our own brains? Also, why does the ordering of the sentences in the prompt matter so much?

7

liquiddandruff t1_izkq9l5 wrote

One naive explanation is that, since ChatGPT is at its core a text predictor, prompting it in a way that minimizes leaps of logic (i.e., making each inference step build slowly so it can't jump to conclusions) makes it more likely to respond coherently and correctly.
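
As a toy sketch of that idea (the `ask_model` helper below is a placeholder for whatever LLM API you're calling, not ChatGPT's actual interface): the only difference between the two prompts is the step-by-step cue, which nudges the model to emit its intermediate reasoning as text before the final answer.

```python
# Hypothetical helper: stands in for whatever LLM client you're using.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

question = (
    "A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?"
)

# Zero-shot: the model has to jump straight to the answer.
direct_prompt = question + "\nAnswer:"

# Step-by-step: the cue makes the model generate intermediate steps first,
# so each predicted token only has to extend a small, local inference.
cot_prompt = question + "\nLet's think step by step."

# answer = ask_model(cot_prompt)
```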

2