
lol-its-funny t1_j55ge3m wrote

From about 6 months back, but still very relevant to the future of scaling and to compute, time, and (traditional) data limits.

https://www.alignmentforum.org/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications

Basically, even PaLM, the largest model to date, is far from compute-optimal: it leans toward WAY more parameters than its training data size can justify. In fact, there might not be enough text data in the world today to train a model that size optimally. And even with infinite data and infinite model size, there are limits.
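As a rough back-of-the-envelope sketch of the post's point (numbers are approximate, and the ~20 tokens-per-parameter ratio is just the commonly cited Chinchilla rule of thumb, not an exact figure):

```python
# Rough Chinchilla-style estimate for PaLM.
# Assumptions: ~540B parameters, ~780B training tokens (published PaLM figures),
# and ~20 training tokens per parameter as the compute-optimal ratio.

palm_params = 540e9        # PaLM parameter count (approx.)
palm_tokens = 780e9        # tokens PaLM was actually trained on (approx.)
tokens_per_param = 20      # Chinchilla rule-of-thumb ratio (rough)

optimal_tokens = palm_params * tokens_per_param

print(f"Chinchilla-optimal tokens: ~{optimal_tokens / 1e12:.1f}T")
print(f"Actual training tokens:    ~{palm_tokens / 1e12:.2f}T")
print(f"Shortfall factor:          ~{optimal_tokens / palm_tokens:.0f}x")
```

By that crude estimate PaLM "should" have seen on the order of 10T+ tokens, roughly an order of magnitude more than it actually did, which is the gap the post is getting at.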

Check it out; it's a very interesting counterpoint to the recent "more is more" trend.
