lxe t1_je9vkqx wrote
Reply to [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
In what way is this different from the existing low-rank adaptation method everyone is already using?
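For reference, the "existing low-rank adaptation method" is LoRA, which freezes the pretrained weights and learns a small low-rank update alongside them. A minimal sketch of that idea, assuming PyTorch, with illustrative names and hyperparameters (not code from either paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weight frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)  # down-projection
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))        # up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen base output + scaled low-rank correction (B @ A @ x)
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale
```

As I understand the paper, LLaMA-Adapter instead learns prompt tokens whose attention contribution starts gated at zero, so the new parameters live in the attention inputs rather than as weight deltas.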
lxe t1_jdx0adz wrote
Reply to [N] Predicting Finger Movement and Pressure with Machine Learning and Open Hardware Bracelet by turfptax
I've been using ChatGPT to help me with machine learning and data transformation as well. I knew very little of the field, and now with its help I feel like I have a superpower.
lxe t1_jcsqmdi wrote
Reply to [P] The next generation of Stanford Alpaca by [deleted]
You should try fine-tuning OpenChatKit — it's Apache 2 licensed afaik. Or GPT-NeoX-20B if you have the hardware.
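As a rough sketch of what fine-tuning one of those openly licensed models can look like with Hugging Face transformers (the Hub model name is real, but the dataset file, hyperparameters, and single-process setup are placeholder assumptions; a 20B model realistically needs model parallelism or offloading):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neox-20b"      # Apache-2.0 licensed weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # NeoX tokenizer ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Placeholder corpus: one instruction/response example per line in a plain-text file
dataset = load_dataset("text", data_files={"train": "instructions.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="neox-finetune",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```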
lxe t1_jcsqk7t wrote
Reply to comment by yaosio in [P] The next generation of Stanford Alpaca by [deleted]
Copyright and license terms are different things.
lxe t1_jc45m7r wrote
Reply to comment by Bulky_Highlight_3352 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
I thought LLaMA was GPL licensed? That isn't ideal either, but it's better than "research only".
lxe t1_jbb4kca wrote
Went there with my dad last year on our way to LA. There were what seemed like hundreds of sea otters. It’s such a gorgeous place.
lxe t1_j30v0mv wrote
Reply to [image] There'll be many people to pull you down as you move ahead, but never give up on your goals. by _Cautious_Memory
This main character needs to shave.
lxe t1_jeg2h5j wrote
Reply to comment by aliasaria in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
Thank you. I appreciate the explanation.