No_Combination_6429
No_Combination_6429 t1_jd20q4w wrote
Could you please provide the source code for the fine-tuning? Also, did you use the LoRA approach?
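(For reference, a minimal sketch of what LoRA fine-tuning typically looks like with Hugging Face's `peft` library; the checkpoint name, rank, and target modules are illustrative assumptions, not the setup being asked about.)

```python
# Minimal LoRA fine-tuning sketch using Hugging Face's peft library.
# Model name and hyperparameters are illustrative assumptions,
# not the code the comment asks about.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the small LoRA adapter weights are trained.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```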
No_Combination_6429 t1_jcxqot2 wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Is it possible to do the same with other models as well? Like BLOOMZ etc…
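(In principle the same recipe carries over by swapping the checkpoint; a minimal sketch, assuming the `bigscience/bloomz-7b1` checkpoint. BLOOM-family attention uses a fused projection, so the LoRA target module name differs from LLaMA's.)

```python
# Sketch: the same instruction-tuning recipe pointed at BLOOMZ.
# Checkpoint name and rank are assumptions for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1")
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
```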
No_Combination_6429 t1_jd3ioav wrote
Reply to comment by juliensalinas in [D] An Instruct Version Of GPT-J Using Stanford Alpaca's Dataset by juliensalinas
Thanks for sharing! As far as I know, the LoRA approach increases efficiency; I'm not so sure it improves quality. Maybe the paper can help you further.
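(A quick back-of-the-envelope sketch of why LoRA helps efficiency: for a d×d weight matrix it trains two low-rank factors instead of the full matrix. The hidden size and rank below are illustrative assumptions.)

```python
# Back-of-the-envelope: LoRA parameter savings for one weight matrix.
# d = hidden size (4096, as in LLaMA-7B), r = LoRA rank (assumed 8).
d, r = 4096, 8
full = d * d      # parameters in the original projection
lora = 2 * d * r  # parameters in the A (d x r) and B (r x d) factors
print(f"LoRA trains {lora:,} vs {full:,} params ({lora / full:.2%})")
# -> LoRA trains 65,536 vs 16,777,216 params (0.39%)
```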