
ActuatorMaterial2846 t1_je8luak wrote

Interesting, curious what size this particular Llama model is, or is that not even relevant?

1

jetro30087 t1_je8mtjp wrote

This is an updated dataset for the 7B model, but you could train the other sizes with the same data. From anecdotal reports, the dataset seems to have a greater impact on the model's performance than the parameter count, up to a point. Fewer parameters mean a faster model; more parameters mean the model can produce longer responses.

https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced
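For context, datasets like this one follow the Alpaca instruction-tuning schema: each record is a JSON object with `instruction`, `input`, and `output` fields, which get assembled into a fixed prompt template before training. A minimal sketch of that template, assuming the field names and wording from the original Stanford Alpaca release (not specific to this particular repo):

```python
# Sketch: formatting one Alpaca-style dataset record into a training prompt.
# The {"instruction", "input", "output"} schema and template wording are
# assumed from the original Stanford Alpaca format.

def build_prompt(example: dict) -> str:
    """Turn one dataset record into the Alpaca instruction prompt."""
    if example.get("input"):
        # Records with extra context use the instruction + input template.
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    # Records without context use the shorter instruction-only template.
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n"
    )

record = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "Red, blue, and yellow.",
}
print(build_prompt(record))
```

During fine-tuning, the model is trained to continue each prompt with the record's `output` text, which is why a cleaner dataset can matter more than raw parameter count.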

2