Submitted by l33thaxman t3_11ryc3s in deeplearning

Recently, the LLaMA models by Meta were released. What makes these models so exciting is that, despite being over 10X smaller than GPT-3 and small enough to run on consumer hardware, popular benchmarks show they perform as well as or better than GPT-3!

This improved performance seems to come from training on a much larger number of tokens.

By following along with the video tutorial and the open-source code, you can fine-tune these powerful models on your own dataset to extend their abilities even further!

https://youtu.be/d4Cnv_g3GiI
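
For readers who prefer text, here is a minimal sketch of what LoRA-style fine-tuning of a LLaMA checkpoint typically looks like with the Hugging Face transformers, datasets, and peft libraries. The exact script in the video and repo may differ; the model path, dataset file, and hyperparameters below are placeholders, and real runs usually add 8-bit loading or mixed precision to fit consumer VRAM.

```python
# Hedged sketch, not the author's exact script: LoRA fine-tuning of a
# locally converted LLaMA checkpoint on a plain-text quotes dataset.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_path = "./llama-7b-hf"  # placeholder: converted LLaMA weights
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_path)

# Wrap the base model with small trainable LoRA adapters so only a tiny
# fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Placeholder dataset: one quote per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "quotes.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-lora-quotes",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # Causal LM objective: labels are the (shifted) input tokens themselves.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-lora-quotes")  # saves only the adapter weights
```

After training, the saved adapter can be loaded back on top of the base model for inference; watching the training loss decrease and sampling a few generations is the quickest sanity check that the model actually picked up the dataset's style.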

40

Comments


vini_2003 t1_jcb90zy wrote

You wrote that description with the model, didn't you?

3

l33thaxman OP t1_jcb9akr wrote

Actually no. I wrote that. Missed opportunity though.

5

vini_2003 t1_jcbc7j0 wrote

Aw, damn! It really seemed like a generated description, haha

Thanks for the guide, by the way! Will be setting it up locally and this is very helpful.

3

DingWrong t1_jcc3axk wrote

Is there a written version? I like reading.

2

l33thaxman OP t1_jcg842w wrote

No, sorry. You can read the GitHub README, though.

1

ShadowStormDrift t1_jcew7hj wrote

I need proof.

1

l33thaxman OP t1_jcg884o wrote

Not sure what you mean? I show the loss decreasing and then run inference, and the model has clearly learned how to generate quotes.

1