Submitted by petrastales t3_1248fka in MachineLearning

I made an amendment to a translation, and DeepL appeared to modify the next text I entered in precisely the same way.

For example,

Translation 1

Spanish text: Ley sobre el uso de sombreros rosas, 1986

DeepL’s initial translation: The Law on wearing Pink Hats, 1986

My edit: The Spanish Law on wearing Pink Hats, 1986

Translation 2

Spanish text: La Ley sobre el uso de pantalones cortos amarillos, 1987

DeepL’s initial translation: The Spanish Law on wearing yellow shorts, 1987

There was no need for me to make any edit.

😳

Did it learn my preferences from my edit… immediately?

Comments

AuspiciousApple t1_jdyjclk wrote

It could be doing something semi-fancy, or it might simply be prepending the previous input, translation, and your edits to the translation prompt so that it can adjust to your specific preferences. This is called in-context learning: the model's weights don't change, so it doesn't learn in the standard sense, but it can still pick up patterns from the current context.
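A minimal sketch of what such prompt-prepending could look like. This is purely hypothetical — `build_prompt` and the prompt wording are illustrative assumptions, not DeepL's actual implementation:

```python
# Hypothetical sketch: prepend earlier (source, translation, user edit)
# triples to the prompt so the model can imitate the user's corrections
# without any weight update (in-context learning).

def build_prompt(history, new_source):
    """history: list of (source, model_translation, user_edit) triples."""
    lines = ["Translate Spanish to English, following the user's edited style.\n"]
    for source, translation, edit in history:
        lines.append(f"Spanish: {source}")
        lines.append(f"Initial translation: {translation}")
        lines.append(f"User's preferred translation: {edit}\n")
    lines.append(f"Spanish: {new_source}")
    lines.append("Translation:")
    return "\n".join(lines)

history = [(
    "Ley sobre el uso de sombreros rosas, 1986",
    "The Law on wearing Pink Hats, 1986",
    "The Spanish Law on wearing Pink Hats, 1986",
)]
prompt = build_prompt(
    history,
    "La Ley sobre el uso de pantalones cortos amarillos, 1987",
)
```

A model given this prompt sees the "Spanish Law" edit as an example and can copy the pattern for the new sentence, which would explain the behaviour in the post.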

petrastales OP t1_jdzefzu wrote

I believe I understand what you said - so it learnt within the context of my input, but that learning wouldn't carry over to the translations of other users?

AuspiciousApple t1_jdznf40 wrote

Sorry, that was not very clearly explained on my part.

Do you understand that these models have weights/parameters - numbers that define their behaviour? The standard sense of "learning" in ML is to update these weights to fit some training data better.

And are you aware that large language models take a sequence of text (the "context") and predict the next bit of text from it? These models can use examples in the text they are given to do things they otherwise wouldn't be able to. This is called in-context learning. However, the parameters of the model don't change, and if the examples aren't in the context, the model doesn't remember anything about them.
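To make the contrast concrete, here is a toy illustration of the standard sense of "learning" — a weight update. The one-parameter model and the squared-error loss are my own assumptions for the sake of the example, not anything from DeepL:

```python
# Toy example: "learning" in the standard ML sense means updating weights.
# A one-parameter model w, trained by gradient descent on one example,
# is changed permanently. In-context learning, by contrast, leaves the
# weights untouched and only changes what text sits in the context window.

def loss(w, x, y):
    # Squared error of the prediction w * x against the target y.
    return (w * x - y) ** 2

def grad(w, x, y):
    # Derivative of the loss with respect to w.
    return 2 * (w * x - y) * x

w = 0.0                   # initial weight
x, y = 1.0, 2.0           # training example: we want w * 1.0 == 2.0
for _ in range(100):
    w -= 0.1 * grad(w, x, y)   # gradient descent step

# After training, w has converged toward 2.0 — the model itself changed,
# and the change persists for every future input.
```

In the DeepL scenario, nothing like this happens at edit time: the edit only lives in the context, so a fresh session (with an empty context) behaves as if the edit never existed.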

Exodia141 t1_je0nc53 wrote

I believe it remembers the context of the conversation. Try the second translation in a different chat context. It should fail.

petrastales OP t1_je3iv6k wrote

It did fail 😅

Exodia141 t1_je42qma wrote

For the model to permanently remember your changes, they would have to be approved by the team and then fed into the next edition of the training data. Only then would subsequent answers carry your changes.

petrastales OP t1_je48d9s wrote

Understood, but I guess it's their decision whether to accept it or not, and I imagine there is a huge backlog of edits to approve.
