
Scarlet_pot2 OP t1_jed67k5 wrote

True, Alpaca is competent, but we need more models, and better, larger ones. A distributed system where people donate compute could also be used to let people run larger models. Maybe not 175 billion parameters, but maybe 50-100B, as long as everyone donating compute isn't using it at the same time.
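Something like this already exists: the Petals project (github.com/bigscience-workshop/petals) pools volunteer GPUs to serve a large model's layers, and clients stream activations through the swarm. A rough sketch of the client side, based on my memory of their docs (the class name and model id have shifted between versions, so treat the specifics as approximate):

```python
# Sketch of a Petals-style client: the model's layers are hosted by
# volunteers' GPUs, and generate() streams activations through them.
# Class name and model id are from the Petals docs and may be out of date.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL = "bigscience/bloom-petals"  # large model sharded across volunteer GPUs

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL)

inputs = tokenizer("A distributed swarm of donated GPUs can", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

The usual caveat is latency: tokens have to hop across the internet between volunteers, so it's fine for chat-style generation but slow compared to a local GPU.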

That being said, more small models like Alpaca / LLaMA are needed too. If sufficient resources and training were made available to anyone, models like that could be created and released more often.


Akimbo333 t1_jefh3f9 wrote

LLaMA just needs a lot more training and fine-tuning and it'll be good.
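That's basically the Alpaca recipe: take the LLaMA base weights and fine-tune on instruction data. With LoRA it fits on a single consumer GPU. A minimal sketch using the Hugging Face peft library, assuming illustrative model id and hyperparameters (not the exact Alpaca setup):

```python
# Rough sketch of LoRA fine-tuning for LLaMA (the Alpaca-LoRA approach).
# Model id and hyperparameters here are illustrative, not an exact recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

# Train small low-rank adapters on the attention projections instead of
# updating all 7B weights; the base model stays frozen.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only a tiny fraction is trainable
# From here, train with a normal transformers.Trainer on instruction data.
```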
