
justheuristic t1_j02ohk6 wrote

The first link (Petals) is about fine-tuning.

The others (e.g. distributed diffusion) involve training from scratch -- but they deal with smaller models. Thing is, you need a lot of people to train a 100B model from scratch. Like, a few hundred online on average. There aren't many communities that can do that. With fine-tuning, on the other hand, you can see it work much sooner.

I heard a talk by Colin Raffel where he proposed an alternative view: instead of training from scratch, an open-source community could gradually improve a model over time. Like GitHub, but for large models. A contributor fine-tunes the model for a task, then opens a "pull request", and the maintainer runs a special procedure to merge the contribution without forgetting other tasks. That's how I remember it, anyways.
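
To make that concrete, here is a minimal sketch of what such a "merge" step could look like. The merge rule here (interpolating the contributor's weight diff into the main model) is just an assumption for illustration; the actual procedure from the talk would be more involved, e.g. something that explicitly protects against forgetting other tasks.

```python
# Hypothetical sketch of merging a fine-tuned "pull request" into a shared model.
# The scaled weight-diff interpolation below is an illustrative assumption,
# not the actual procedure described in the talk.
import torch


def merge_pull_request(base_state: dict, contrib_state: dict, alpha: float = 0.3) -> dict:
    """Blend a contributor's fine-tuned weights into the maintained model."""
    merged = {}
    for name, base_param in base_state.items():
        delta = contrib_state[name] - base_param   # what the fine-tune changed
        merged[name] = base_param + alpha * delta  # apply a scaled version of that "diff"
    return merged


# Usage (hypothetical): the maintainer folds a contributor's checkpoint into the main model
# main_model.load_state_dict(merge_pull_request(main_model.state_dict(),
#                                               contributor_model.state_dict()))
```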


justheuristic t1_j02g9m0 wrote

https://github.com/bigscience-workshop/petals - fine-tuning BLOOM-176B Folding@home style (rough usage sketch below, after the list)

https://github.com/learning-at-home/hivemind - a library for decentralized training with volunteers

https://github.com/epfml/disco - a library for collaborative training in JS (in a browser!)

https://github.com/chavinlo/distributed-diffusion - a project that tries to train diffusion this way

https://bittensor.com/ - a community that turns decentralized training into a cryptocurrency

There are also projects like Together that build networks from university computers for decentralized training.
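
For a sense of what the Petals link above looks like in practice, here is a rough client-side sketch based on how the project's README presented it around that time; treat the exact import path, model name, and arguments as assumptions that may have changed since.

```python
# Rough sketch of a Petals client call (details hedged; see the repo's README
# for the authoritative example). The "bigscience/bloom-petals" model name and
# the DistributedBloomForCausalLM import reflect the 2022-era README.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")
# Embeddings and prompts stay on your machine; the 176B transformer blocks are
# served by volunteers over the Internet.

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
# Fine-tuning works along the same lines: trainable prompts/adapters are kept
# locally while activations flow through the remote blocks.
```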
