__Maximum__ OP t1_jbb5bzm wrote

Right, I just noticed that the LLaMA paper says they didn't fix their compute budget. Thanks. I wonder if there is a small architecture that has been trained until convergence.

4

_Arsenie_Boca_ t1_jbbh5ng wrote

"Until convergence" is something we often say and hear, but by definition it makes no sense: convergence never ends.

5

currentscurrents t1_jbbmmqs wrote

Eventually you can reach a point where any possible change to the model decreases performance. Then you've fully converged.

Nobody ever does this though because of diminishing returns.
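In code terms, "fully converged" means something like this toy early-stopping loop: keep going until the loss stops improving by more than some tiny threshold. (A minimal numpy sketch on a toy regression problem, not any particular LLM setup.)

```python
# Toy illustration of "train until convergence": keep taking gradient steps
# until the loss stops improving by more than a tiny threshold.
# (Hypothetical toy regression problem, not an actual LLM training setup.)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=256)

w = np.zeros(8)
lr, min_delta, patience = 0.01, 1e-8, 20
best_loss, stale, step = np.inf, 0, 0

while stale < patience:
    grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
    w -= lr * grad
    loss = np.mean((X @ w - y) ** 2)
    step += 1
    if loss < best_loss - min_delta:        # still improving meaningfully
        best_loss, stale = loss, 0
    else:                                   # diminishing returns kick in
        stale += 1

print(f"converged after {step} steps, loss={best_loss:.6f}")
```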

2

farmingvillein t1_jbk2uyw wrote

> Nobody ever does this though because of diminishing returns.

Extending the LLaMA concept, I would love to see someone like Meta run the experiment where they do take their 1.4T (or w/e) tokens, and run training to convergence...on the largest model that will converge (subject to reasonable LR decay policies) in a "reasonable" time frame.

Meaning, if they trained, say, a 1M param LLM...presumably it would hit convergence (get saturated) pretty quickly. And what about 10M, 100M, etc.?

I.e., how much more can we squeeze out of a relatively tiny model? Probably it doesn't end up super interesting from a purely generative POV, but it might end up looking like, e.g., RoBERTa+.

With a model that is so small, the cost to run this test probably(?) wouldn't be that high.
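For a rough sense of the cost, here's a back-of-envelope using the common approximation of training compute as roughly 6 · params · tokens (the token count and model sizes are just the ones floated above; this covers one pass over the data, so "to convergence" would multiply it by however many epochs that takes):

```python
# Back-of-envelope training compute for small models on ~1.4T tokens,
# using the common approximation: training FLOPs ≈ 6 * params * tokens.
# One epoch only; training to convergence would scale this by epoch count.
tokens = 1.4e12

for params in [1e6, 10e6, 100e6, 1e9, 65e9]:   # 65B ≈ LLaMA's largest, for comparison
    flops = 6 * params * tokens
    print(f"{params / 1e6:>8.0f}M params: ~{flops:.1e} FLOPs per epoch")
```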

2

cztomsik t1_jbgdoar wrote

but this is likely going to take forever because of LR decay, right?
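(The usual schedules like cosine decay are also defined against a fixed total step count, so the run length has to be committed to up front. Rough framework-free sketch below; the specific LR values and step counts are just illustrative.)

```python
import math

# Cosine LR decay with linear warmup: the schedule is parameterized by a
# *fixed* total_steps, so "train until converged" doesn't fit it naturally.
def cosine_lr(step, total_steps, peak_lr=3e-4, min_lr=3e-5, warmup=2000):
    if step < warmup:                                    # linear warmup
        return peak_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)  # 0 -> 1 over the run
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

total_steps = 100_000
for step in [0, 1_000, 2_000, 50_000, 100_000]:
    print(f"step {step:>7}: lr = {cosine_lr(step, total_steps):.2e}")
```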

1

adt t1_jbbzba8 wrote

There are a few that 'feel' that way. Try Megatron-11B (~200:1 tokens to parameters), which is based on RoBERTa (6,198:1). Wayyyyy ahead of its time, and it has matched much larger models in some of my testing.

https://app.inferkit.com/demo

Here's the full table of Chinchilla-alignment comparisons:

https://lifearchitect.ai/models-table/
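To put those ratios in context, a quick back-of-envelope (token counts are approximate; Chinchilla-optimal works out to roughly 20 tokens per parameter):

```python
# Rough tokens-seen-per-parameter ratios behind the figures above.
# Token counts are approximate; Chinchilla-optimal is ~20 tokens per param.
models = {
    # name: (parameters, approximate tokens seen during training)
    "RoBERTa-355M": (355e6, 2.2e12),     # ~500k steps * 8k batch * 512 seq len
    "Megatron-11B": (11e9, 2.2e12),      # RoBERTa-style training recipe
    "Chinchilla-70B": (70e9, 1.4e12),    # compute-optimal reference point
}
for name, (params, tokens) in models.items():
    print(f"{name:>15}: ~{tokens / params:,.0f} tokens per parameter")
```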

2

whata_wonderful_day t1_jbcxdwf wrote

Nice! How did you get access to Megatron-11B? I can't find it online anywhere

1

__Maximum__ OP t1_jbdqy5c wrote

Thanks for the links. Looks like RoBERTa did not gain a lot from the additional training, only minor improvements, but yeah, it was a tiny model. How was this not a good lesson? Why did people need Chinchilla? Maybe it's just that gathering a lot of data comes easy, so people collect as much as possible, even though they know they'll go over it for at most one epoch.

1