Submitted by imgonnarelph t3_11wqmga in MachineLearning
mycall t1_jd0ytah wrote
Reply to comment by The_frozen_one in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
alpaca-30B > llama-30B ?
The_frozen_one t1_jd125zf wrote
Not sure I understand. Is it better? Depends on what you're trying to do. I can say that alpaca-7B and alpaca-13B operate as better and more consistent chatbots than llama-7B and llama-13B. That's what standard alpaca has been fine-tuned to do.
Is it bigger? No, alpaca-7B and 13B are the same size as llama-7B and 13B.
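(A toy sketch, not the actual LLaMA/Alpaca training code: fine-tuning adjusts the values of a model's existing weights rather than adding new ones, so the parameter count stays the same. The model dict and `param_count` helper below are illustrative assumptions.)

```python
import random

def param_count(model):
    # Count every scalar weight in the toy "layer"
    return sum(len(row) for row in model["w"])

# A tiny 3x4 weight matrix standing in for a pretrained base model
base = {"w": [[random.random() for _ in range(4)] for _ in range(3)]}

# "Fine-tune": nudge each existing weight by a small step;
# no weights are added or removed
finetuned = {"w": [[v - 0.01 for v in row] for row in base["w"]]}

print(param_count(base), param_count(finetuned))  # same count for both
```

Both models report 12 parameters here; by the same logic, alpaca-7B has the same 7B parameters as llama-7B.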