Submitted by kittenkrazy in r/MachineLearning
Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA!
​
Hey AI enthusiasts! We're excited to announce that you can now create custom personal assistants that run directly on your GPUs!
​
ChatLLaMA is a LoRA adapter for LLaMA, trained on Anthropic's HH (Helpful and Harmless) dataset to model natural conversations between an AI assistant and users.
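If you're wondering what actually running it looks like: roughly, you load your own converted LLaMA weights and attach the LoRA adapter with Hugging Face transformers + peft. The snippet below is just a minimal sketch, not our official loading script; the paths are placeholders, and the HH-style "Human:/Assistant:" prompt is an assumption about the format.

```python
# Minimal sketch: base LLaMA weights + the ChatLLaMA LoRA adapter.
# All paths are placeholders; the prompt format assumes HH-style turns.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-7b-hf"       # your converted LLaMA weights
lora_adapter_path = "path/to/chatllama-lora"  # the downloaded LoRA weights

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(
    base_model_path, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, lora_adapter_path)  # attach the adapter

prompt = "\n\nHuman: What's a good way to learn Python?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```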
​
Plus, an RLHF version of the LoRA is coming soon!
​
Get it here: https://cxn.to/@serpai/lora-weights
​
Know any high-quality dialogue-style datasets? Share them with us, and we'll train ChatLLaMA on them!
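(For context on what "dialogue-style" means here: the Anthropic HH data is plain text with alternating "\n\nHuman:" / "\n\nAssistant:" turns, so a dataset would get flattened into that shape before training. The snippet below is only a rough illustration; the input fields are made up, not a real schema.)

```python
# Rough illustration only: flattening a dialogue into an HH-style transcript.
# The input structure ("role"/"text" dicts) is hypothetical.
def to_hh_transcript(dialogue):
    """dialogue: list of {"role": "human" | "assistant", "text": str}"""
    parts = []
    for turn in dialogue:
        speaker = "Human" if turn["role"] == "human" else "Assistant"
        parts.append(f"\n\n{speaker}: {turn['text']}")
    return "".join(parts)

example = [
    {"role": "human", "text": "Any tips for making better coffee at home?"},
    {"role": "assistant", "text": "Start with freshly ground beans and weigh your dose."},
]
print(to_hh_transcript(example))
```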
​
ChatLLaMA LoRA weights are currently available for the 30B, 13B, and 7B LLaMA models.
​
Want to stay in the loop on new ChatLLaMA updates? Grab the FREE [gumroad link](https://cxn.to/@serpai/lora-weights) to sign up and get access to a collection of links, tutorials, and guides on running the model, merging weights, and more. (Guides on running and training the model are coming soon.)
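As a quick preview of the weight-merging part: with peft you can fold the LoRA deltas into the base model and save a standalone checkpoint that runs without peft at inference time. A minimal sketch, assuming a converted LLaMA base and the downloaded adapter (paths are placeholders):

```python
# Sketch of merging the LoRA deltas into the base model. Paths are placeholders.
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained("path/to/llama-7b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/chatllama-lora")
merged = model.merge_and_unload()  # folds the LoRA weights into the base layers
merged.save_pretrained("chatllama-7b-merged")
```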
​
Have questions or need help setting up ChatLLaMA? Drop a comment or DM us, and we'll be more than happy to help you out!
​
Let's revolutionize AI-assisted conversations together!
​
*Disclaimer: the LoRA was trained for research purposes only, no foundation model weights are included, and this post was run through GPT-4 to make it more coherent.
​
Get it here: https://cxn.to/@serpai/lora-weights
​
*Edit: https://github.com/serp-ai/LLaMA-8bit-LoRA <- training repo/instructions. (If anything is unclear, just let us know and we'll try to help or fix the issue!) (Sorry for spamming the link; we don't really know how else to remind people, lol)
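Until the full training guide is up, here's the rough shape of 8-bit LoRA fine-tuning with transformers + peft + bitsandbytes. This is only a sketch under our own assumptions (the hyperparameters, target modules, and tiny dummy dataset are placeholders), not the repo's actual script; see the repo above for the real pipeline.

```python
# Rough sketch of 8-bit LoRA fine-tuning with transformers + peft + bitsandbytes.
# NOT the serp-ai repo's script: hyperparameters, target modules, and the
# dummy dataset below are placeholders.
import torch
from datasets import Dataset
from transformers import (
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

base_model_path = "path/to/llama-7b-hf"  # placeholder

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = LlamaForCausalLM.from_pretrained(
    base_model_path, load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_int8_training(model)  # cast norms, enable input grads

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA attention
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tiny dummy dataset so the sketch runs end to end; swap in real HH-style data.
texts = ["\n\nHuman: Hello!\n\nAssistant: Hi there! How can I help you today?"]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="chatllama-lora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("chatllama-lora-out")  # saves only the LoRA adapter weights
```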