Submitted by kittenkrazy in r/MachineLearning

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! 🤖


Hey AI enthusiasts! 🌟 We're excited to announce that you can now create custom personal assistants that run directly on your GPUs!


ChatLLaMA is a LoRA adapter for LLaMA, trained on Anthropic's HH dataset, that models conversations between an AI assistant and users.
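
If you want a starting point before the guides land, here's a minimal sketch of loading the adapter on top of a LLaMA base model with Hugging Face `transformers` and `peft`. The paths are placeholders (you supply your own converted LLaMA weights and the downloaded adapter), and the prompt format is just an assumption, not necessarily the exact template the adapter was trained on:

```python
# Minimal sketch: apply a ChatLLaMA-style LoRA adapter to a LLaMA base model.
# Placeholder paths -- you must provide your own converted LLaMA weights
# (not distributed with ChatLLaMA) and the downloaded LoRA adapter folder.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "path/to/llama-13b-hf"           # your converted LLaMA weights
ADAPTER = "path/to/chatllama-13b-lora"  # downloaded LoRA weights

tokenizer = LlamaTokenizer.from_pretrained(BASE)
model = LlamaForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,
    device_map="auto",  # shard/offload across available GPUs
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA weights
model.eval()

# Dialogue-style prompt; the exact template the adapter expects may differ.
prompt = "### Human: Explain LoRA in one paragraph.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```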


Plus, the RLHF version of LoRA is coming soon! 🔥


👉 Get it here: https://cxn.to/@serpai/lora-weights


📚 Know any high-quality dialogue-style datasets? Share them with us, and we'll train ChatLLaMA on them!


🌐 ChatLLaMA is currently available for the 30B, 13B, and 7B models.


🔔 Want to stay in the loop on new ChatLLaMA updates? Grab the free [Gumroad link](https://cxn.to/@serpai/lora-weights) to sign up and get access to a collection of links, tutorials, and guides on running the model, merging weights, and more. (Guides on running and training the model are coming soon.)
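
Until the official guide is up, here's a rough sketch of the weight-merging step using peft's `merge_and_unload()`, which folds the low-rank updates into the base weights so you end up with a single standalone checkpoint. Paths are placeholders, and this is not the procedure from the Gumroad guides:

```python
# Minimal sketch: merge the LoRA updates into the base model and save a
# standalone checkpoint. Placeholder paths; run on a machine with enough RAM.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "path/to/llama-13b-hf"
ADAPTER = "path/to/chatllama-13b-lora"
OUT = "path/to/chatllama-13b-merged"

base = LlamaForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, ADAPTER).merge_and_unload()  # fold LoRA into base weights

merged.save_pretrained(OUT)
LlamaTokenizer.from_pretrained(BASE).save_pretrained(OUT)
```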


🤔 Have questions or need help setting up ChatLLaMA? Drop a comment or DM us, and we'll be more than happy to help you out! 💬


Let's revolutionize AI-assisted conversations together! 🌟


*Disclaimer: the adapter was trained for research purposes only, no foundation model weights are included, and this post was run through GPT-4 to make it more coherent.


👉 Get it here: https://cxn.to/@serpai/lora-weights


*Edit: https://github.com/serp-ai/LLaMA-8bit-LoRA <- training repo/instructions. (If anything is unclear, just let us know and we'll try to help or fix the issue!) (Sorry for spamming the link; we don't really know how else to remind people lol)
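
To make the training side a bit more concrete while the guides are being written: the block below is not the repo's actual script, just a minimal sketch of the standard 8-bit LoRA fine-tuning recipe (transformers + peft + bitsandbytes) on a dialogue dataset like Anthropic/hh-rlhf. Paths and hyperparameters are illustrative only; see the repo above for the real instructions.

```python
# Not the repo's training script -- a minimal sketch of the standard 8-bit
# LoRA fine-tuning recipe on a dialogue dataset. Illustrative settings only.
import torch
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, LlamaForCausalLM,
                          LlamaTokenizer, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "path/to/llama-7b-hf"  # your converted LLaMA weights

tokenizer = LlamaTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = LlamaForCausalLM.from_pretrained(BASE, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)  # fp32 norms, gradient-checkpointing prep

model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# Anthropic's HH dataset; here we simply train on the "chosen" conversations.
data = load_dataset("Anthropic/hh-rlhf", split="train")
data = data.map(
    lambda ex: tokenizer(ex["chosen"], truncation=True, max_length=512),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
    args=TrainingArguments(
        output_dir="chatllama-lora-out",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=50,
    ),
)
trainer.train()
model.save_pretrained("chatllama-lora-out")  # saves only the LoRA adapter weights
```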
