Indeed, there are many natural ways to interpolate between concepts, and I agree 100% that some are better than others at certain tasks.
Compared to the well-known Img2Img, I understood this as a "generalized" interpolation method: if you take \mu = 1.0, it reduces to plain Img2Img interpolation. You can read the paper to see the effect of \mu on interpolation; it's quite interesting. Since this is a more general approach, there are more things to tweak and figure out, I guess...?
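To give the flavor of what a mixing parameter like \mu does, here is a toy sketch (this is NOT the paper's actual algorithm, just a hypothetical linear blend of two latents where \mu = 1.0 collapses to using only the second latent, loosely analogous to how plain Img2Img discards the interpolation machinery):

```python
import numpy as np

def blend_latents(z_a: np.ndarray, z_b: np.ndarray, mu: float) -> np.ndarray:
    """Toy linear blend of two latent codes.

    mu = 0.0 returns z_a unchanged; mu = 1.0 returns z_b,
    i.e. the 'degenerate' case with no real interpolation.
    (Hypothetical illustration, not the method from the paper.)
    """
    return (1.0 - mu) * z_a + mu * z_b

# Example: halfway blend of two random latents
z_a = np.random.randn(4, 64, 64)
z_b = np.random.randn(4, 64, 64)
z_mid = blend_latents(z_a, z_b, 0.5)
```

The interesting regime is of course 0 < \mu < 1, where the paper shows qualitatively different interpolation behavior.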
cloneofsimo OP t1_izdlve0 wrote
Reply to comment by LetterRip in [P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
Glad it worked for you with such small memory constraints!