
enjakuro t1_jb8thcg wrote

Yeah, but copying data in a corpus has yielded better results, at least in NLP translation tasks. It's always good to know what's in your data, though. Just saying that duplication might not be a bad thing.

1

graphicteadatasci t1_jb9afw5 wrote

Really? Copying all your data once is the same as running your dataset twice per epoch instead of once, which doesn't sound like it should help. Unless your test data is drawn from the same pool and the duplication happens before splitting, in which case you would certainly expect metric improvements, since copies of test items leak into training. Or was this a case of duplicating rare text, in which case it's the opposite of having duplicate images in LAION.
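
Rough sketch of the splitting issue I mean (toy data, purely for illustration):

```python
import random

def split(rows, test_frac=0.2, seed=0):
    """Shuffle and split rows into (train, test)."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    n_test = int(len(rows) * test_frac)
    return rows[n_test:], rows[:n_test]

corpus = [f"sentence {i}" for i in range(1000)]

# Duplicate first, then split: copies of many test sentences end up in train.
train, test = split(corpus + corpus)
leaked = len(set(train) & set(test))
print(f"duplicate-then-split: {leaked}/{len(set(test))} test sentences also in train")

# Split first, then duplicate only the training half: no leakage.
train, test = split(corpus)
train = train + train
leaked = len(set(train) & set(test))
print(f"split-then-duplicate: {leaked}/{len(set(test))} test sentences also in train")
```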

1

enjakuro t1_jb9l86l wrote

Ah, it was the rare-text thing, I believe. Now that I'm more awake I also realize that they copied the source to the target, i.e. the same sentence is used as both source and target (same language on both sides) while the rest of the corpus stays bilingual. If I recall correctly, you can have up to 50% copied data, which makes the training set much bigger. I guess if the images aren't exactly the same, this would have a similar effect. Basically training a language model.
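
Something like this, roughly (made-up sentences, and the 50% cap is just the figure I remember, so treat it as a sketch rather than the actual recipe):

```python
import random

bilingual = [
    ("der Hund schläft", "the dog sleeps"),
    ("die Katze isst", "the cat eats"),
    # ... the real source/target pairs go here
]
monolingual_target = [
    "the dog sleeps in the garden",
    "serendipity is a fairly rare word",
    # ... target-language-only sentences, e.g. ones containing rare words
]

def copy_augment(bilingual, monolingual, max_copy_frac=0.5, seed=0):
    """Add monolingual sentences as (sentence, sentence) pairs, keeping
    the copied pairs at no more than max_copy_frac of the final set."""
    max_copies = int(len(bilingual) * max_copy_frac / (1 - max_copy_frac))
    rng = random.Random(seed)
    copies = [(s, s) for s in rng.sample(monolingual, min(max_copies, len(monolingual)))]
    mixed = bilingual + copies
    rng.shuffle(mixed)
    return mixed

for src, tgt in copy_augment(bilingual, monolingual_target):
    print(src, "->", tgt)
```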

2

graphicteadatasci t1_jbdt33t wrote

Yeah, because there are some very nice results on classification models where they remove data that doesn't contribute to learning, and it made training both faster and more accurate. But of course I can't remember at all what the paper was called.
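
Not sure which paper either, but the flavor I remember is something like scoring examples with a cheap probe model and dropping the easy ones before the real training run (toy sklearn sketch, not the actual method from the paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Score each training example with a cheap probe model: how confidently
# does it already get the right answer?
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_correct = probe.predict_proba(X_train)[np.arange(len(y_train)), y_train]

# Drop the easiest 40% (highest confidence) and retrain on the rest.
keep = p_correct < np.quantile(p_correct, 0.6)
full = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pruned = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])

print("full data:  ", full.score(X_test, y_test))
print("pruned data:", pruned.score(X_test, y_test))
```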

1

enjakuro t1_jbf0yco wrote

Same hahaha, would've linked it otherwise xD

1