
KhurramJaved t1_j47qiu0 wrote

Seems like a fairly contrived take. The bitter lesson is about a general principle: algorithms that scale well with more data and compute win. The foundation-model regime (pre-train a model on a large dataset, then either fine-tune it or use its features for downstream tasks) is one very specific way of leveraging data and compute. I see little reason why some other regime for using large amounts of data and compute couldn't turn out to be better.
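
For concreteness, here's a minimal sketch of the two usual downstream options in that regime (full fine-tuning vs. a frozen backbone with a linear probe), using a torchvision ResNet as a stand-in for a foundation model; the class count, dummy batch, and learning rates are all made up for illustration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Option 1: fine-tune -- load pretrained weights, swap the head,
# and keep every parameter trainable on the downstream task.
finetune_model = resnet18(weights=ResNet18_Weights.DEFAULT)
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, 10)  # 10 hypothetical classes
finetune_opt = torch.optim.Adam(finetune_model.parameters(), lr=1e-4)

# Option 2: feature extraction -- freeze the backbone and train
# only a small head on top of the fixed features.
probe_model = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in probe_model.parameters():
    p.requires_grad = False
probe_model.fc = nn.Linear(probe_model.fc.in_features, 10)  # new head is trainable
probe_opt = torch.optim.Adam(probe_model.fc.parameters(), lr=1e-3)

# One training step on a dummy batch (same pattern for either variant).
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
finetune_opt.zero_grad()
loss = nn.functional.cross_entropy(finetune_model(x), y)
loss.backward()
finetune_opt.step()
```

Either way, the expensive learning happens once, offline, and downstream use mostly consumes the frozen result.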

Based on my own research, my prediction is that foundation models will die out for robotics once we have scalable online continual learners. Extremely large models that are always learning in real time will replace the foundation-model paradigm.
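
By "online continual learner" I mean something like the toy loop below: one gradient step per incoming observation, with no separate pre-training or deployment phase. The stream, the drift, and the plain-SGD update are all made up for illustration, not a claim about the actual algorithms this would need:

```python
import torch
import torch.nn as nn

# A learner that updates on every observation from a nonstationary
# stream instead of freezing after an offline pre-training phase.
model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def fake_stream(steps):
    # Stand-in for a robot's sensor stream; the target drifts over time.
    for t in range(steps):
        x = torch.randn(32)
        y = (torch.sin(torch.tensor(t / 500.0)) * x.sum()).reshape(1)
        yield x, y

for x, y in fake_stream(10_000):
    pred = model(x)                          # act on the current observation
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()                          # one update per observation;
    opt.step()                               # no train/deploy split
```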
