
AuspiciousApple t1_jdyjclk wrote

It could be doing something semi-fancy, or it might simply be prepending the translation prompt with the previous input, translation, and your edits so that it can adjust to your specific preferences. This is called in-context learning: the model's weights don't change, so it doesn't learn in the standard sense, but it still picks things up from the current context.
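Very roughly, the prompt assembly could look like the sketch below. The format and the `call_model()` placeholder are just my guesses, not how the actual product works:

```python
# Hypothetical sketch: prepend past inputs, translations, and the
# user's edits so the model can pick up their preferences in-context.

def build_prompt(history, new_source):
    # history: list of (source, model_translation, user_edit) tuples
    parts = ["Translate to English, matching the user's preferred style.\n"]
    for source, translation, edit in history:
        parts.append(f"Source: {source}")
        parts.append(f"Model translation: {translation}")
        parts.append(f"User's corrected version: {edit}\n")
    parts.append(f"Source: {new_source}")
    parts.append("Translation:")
    return "\n".join(parts)

# call_model() stands in for whatever API the service actually uses:
# translation = call_model(build_prompt(history, "Das Wetter ist schön."))
```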

3

petrastales OP t1_jdzefzu wrote

I believe I understand what you said - so it learnt in the context of my input, but that wouldn’t translate to learning and applying that knowledge to the translations of other users?

1

AuspiciousApple t1_jdznf40 wrote

Sorry, that was not very clearly explained on my part.

Do you understand that these models have weights/parameters - numbers that define their behaviour? The standard sense of "learning" in ML is to update these weights to fit some training data better.
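For a toy picture of what that weight update means, here's a one-parameter model fit by gradient descent (purely an illustration, obviously nothing like a real LLM):

```python
# Minimal sketch of "learning" in the standard ML sense: nudge a
# weight so the model's prediction fits a training example better.

w = 0.5                 # a single model parameter
x, y_true = 2.0, 3.0    # one training example
lr = 0.1                # learning rate

for step in range(50):
    y_pred = w * x                     # the "model": y = w * x
    grad = 2 * (y_pred - y_true) * x   # d(squared error)/dw
    w -= lr * grad                     # the weight itself changes

print(w)  # ~1.5, i.e. the parameter was permanently updated
```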

And are you aware that large language models take a sequence of text (the "context") and predict the next bit of text from it? These models can use examples in the text they are given to do things they otherwise couldn't. This is called in-context learning. However, the parameters of the model don't change, and if the examples aren't in the context, the model doesn't remember anything about them.
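Here's a sketch of the difference, with `ask_model()` standing in for whatever LLM API you'd call (the prompts are mine, just for illustration):

```python
# Sketch of in-context learning: the same frozen model behaves
# differently depending on the examples placed in its context.
# ask_model() is a placeholder for a real LLM API call.

zero_shot = "Translate to English: 'Wie geht es dir?'"

few_shot = """Translate to English in an informal register.
'Guten Morgen.' -> 'Morning!'
'Bis bald.' -> 'See ya!'
'Wie geht es dir?' ->"""

# ask_model(zero_shot)  # no examples: default behaviour
# ask_model(few_shot)   # the examples steer the style, but the weights
#                       # never change; drop the examples from the next
#                       # prompt and nothing was "remembered"
```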

1