
LetGoAndBeReal t1_jea1id9 wrote

I believe you are referring to this statement from the link: "Ability to train on more examples than can fit in a prompt." Correct?

If so, as I explained, the key word here is "examples." And if you understand why, you will see that there is no contradiction. I will try to clarify why.

There are two methods that we are discussing for extending the capability of an LLM:

  1. Prompt engineering
  2. Fine-tuning

There are also different types of capability that might be extended. We are discussing the following two:

  1. Adding new knowledge/facts to the model
  2. Improving downstream processing tasks, such as classification, sentiment analysis, etc.

Both of these capabilities can readily be extended through prompt engineering. Adding new knowledge with prompt engineering involves including that knowledge as context in the prompt. Improving tasks such as classification is done by including examples of the processing you want done in the prompt.
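To make the distinction concrete, here's a minimal sketch of the two patterns (the prompt wording, facts, and labels are purely illustrative; the actual model call is omitted):

```python
# Pattern 1: adding new knowledge -- put the facts in the prompt as context.
context = "Acme Corp's return window is 30 days from delivery."
knowledge_prompt = (
    f"Context: {context}\n\n"
    "Question: How long do I have to return an Acme Corp order?\n"
    "Answer:"
)

# Pattern 2: improving a downstream task (here, sentiment analysis) --
# put few-shot examples of the processing you want in the prompt.
examples = [
    ("The battery died in an hour.", "negative"),
    ("Shipping was fast and the fit is perfect.", "positive"),
]
fewshot_prompt = "".join(
    f"Review: {text}\nSentiment: {label}\n\n" for text, label in examples
) + "Review: Great value for the price.\nSentiment:"
```

In the first case the prompt carries facts the model never saw in training; in the second it carries demonstrations of a task format.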

What the article says is that for the case where you want to provide examples in the prompt to make the model perform better, you can alternatively use fine-tuning. The article does not say "Ability to add more knowledge than can fit in a prompt." Examples = downstream processing tasks. Examples != new knowledge.
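That swap is easy to see in code: each few-shot example that would have gone into the prompt becomes one training example instead. A minimal sketch (the JSONL prompt/completion format here is my assumption of a typical fine-tuning data layout, not something from the article):

```python
import json

# The same examples that Pattern 2 above would put in the prompt,
# recast as fine-tuning training data -- one JSON record per example.
examples = [
    ("The battery died in an hour.", "negative"),
    ("Shipping was fast and the fit is perfect.", "positive"),
]
training_jsonl = "\n".join(
    json.dumps({"prompt": f"Review: {text}\nSentiment:",
                "completion": f" {label}"})
    for text, label in examples
)
```

With fine-tuning you can use far more such examples than would ever fit in a context window, which is exactly the claim in the article. Nothing here adds new facts to the model; it only teaches the task.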
