
The_frozen_one t1_jd0sqd7 wrote

You can run llama-30B on a CPU using llama.cpp, it's just slow. The alpaca models I've seen are the same size as the llama models they are trained on, so I'd expect the alpaca-30B models to run on any system capable of running llama-30B.
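Since the alpaca weights are the same size as the llama weights, a rough back-of-the-envelope RAM estimate shows why 30B is CPU-feasible at all. This is a sketch only: the 4-bit figure assumes the quantized formats llama.cpp commonly uses, and real model files add some overhead for quantization scales and context.

```python
def approx_ram_gb(n_params_billion: float, bits_per_weight: float = 4.0) -> float:
    """Rough RAM needed to hold the weights of a quantized model.

    Assumes a uniform bits-per-weight figure (illustrative only);
    actual llama.cpp files are somewhat larger due to scale factors
    and the KV cache grows with context length.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # bytes -> GiB

print(round(approx_ram_gb(30), 1))  # llama/alpaca-30B at 4-bit: ~14 GiB
print(round(approx_ram_gb(7), 1))   # llama/alpaca-7B at 4-bit: ~3.3 GiB
```

So a machine with 16 GB of RAM is in the right ballpark for a 4-bit 30B model, regardless of whether the weights are llama or alpaca.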


mycall t1_jd0ytah wrote

alpaca-30B > llama-30B ?


The_frozen_one t1_jd125zf wrote

Not sure I understand. Is it better? That depends on what you're trying to do. I can say that alpaca-7B and alpaca-13B operate as better and more consistent chatbots than llama-7B and llama-13B. That's what standard alpaca has been fine-tuned to do.

Is it bigger? No, alpaca-7B and 13B are the same size as llama-7B and 13B.
