ahm_rimer t1_je8u2bi wrote on March 30, 2023 at 6:52 AM
Reply to [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
LoRA + PEFT + Zero-init attention adapter = 🤯
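A minimal numpy sketch of the idea the comment is reacting to: both LoRA and the LLaMA-Adapter's zero-init attention initialize the trainable part of the adapter to zero, so the adapted model starts out exactly equal to the frozen pretrained model. The dimensions and variable names below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical hidden size and low-rank adapter rank

W = rng.normal(size=(d, d))  # frozen pretrained weight
A = rng.normal(size=(r, d))  # adapter "down" projection, random init
B = np.zeros((d, r))         # adapter "up" projection, ZERO init

x = rng.normal(size=(d,))

base_out = W @ x
adapted_out = W @ x + B @ (A @ x)  # low-rank update B @ A starts at zero

# Zero-init makes the adapter a no-op at step 0, so fine-tuning begins
# from exactly the pretrained model's behavior.
assert np.allclose(base_out, adapted_out)
```

Training then updates only `A` and `B` (a tiny fraction of the parameters), which is the parameter-efficiency the PEFT-style methods share.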
ahm_rimer t1_j20z322 wrote on December 28, 2022 at 9:37 PM
Reply to [P] We finally got Text-to-PowerPoint working!! (Generative AI for Slides ✨) by Mastersulm
Your product is nice, but your demo video is killer. The BGM especially.