somethingclassy t1_j1tna1y wrote

Great concept. What version(s) of Stable Diffusion does it work with?

TrueBlueDreamin OP t1_j1vai7n wrote

You can fine-tune on all versions: 1.4, 1.5, and the newer SD 2.x models :)
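
If you want to see what that looks like outside our service, here's a rough sketch with Hugging Face diffusers (the checkpoint IDs are the public Hub repos; everything else is illustrative, not our backend):

```python
# Rough sketch with diffusers (illustrative, not our backend): the same
# fine-tuning flow targets any SD version by swapping the base checkpoint.
import torch
from diffusers import StableDiffusionPipeline

BASE_MODELS = {
    "1.4": "CompVis/stable-diffusion-v1-4",
    "1.5": "runwayml/stable-diffusion-v1-5",
    "2.1-base": "stabilityai/stable-diffusion-2-1-base",  # 512x512
    "2.1": "stabilityai/stable-diffusion-2-1",            # 768x768
}

# Load whichever base model you plan to fine-tune.
pipe = StableDiffusionPipeline.from_pretrained(
    BASE_MODELS["1.5"], torch_dtype=torch.float16
).to("cuda")
```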

JanssonsFrestelse t1_j1y2df0 wrote

Fine-tuning the SD 2.1 768x768 resolution model as well, or just the 2.1-base 512x512 model?

JanssonsFrestelse t1_j1y2npq wrote

Also, can you supply your own regularization images, or do you have a selection to choose from (with recommendations for, e.g., fine-tuning on a person)? You train the text encoder as well, I assume? And what about jointly learning different concepts when fine-tuning on an object/person?

TrueBlueDreamin OP t1_j1y4zjh wrote

We can support regularization with your own class images if you want; however, it's recommended to use model-generated regularization images for prior preservation. You don't want to introduce bias into the model with curated images.
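
Roughly what "model-generated" means here, sketched with diffusers (the prompt and image count are illustrative rules of thumb, not our exact settings):

```python
# Sketch (assumed diffusers workflow, not our exact pipeline): for prior
# preservation, class images are sampled from the *base* model itself with
# a generic class prompt, so no outside bias is introduced.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("class_images", exist_ok=True)
class_prompt = "a photo of a person"  # generic class, not your subject
for i in range(200):  # ~200 class images is a common rule of thumb
    image = pipe(class_prompt, num_inference_steps=30).images[0]
    image.save(f"class_images/person_{i:04d}.png")
```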

We train the text encoder as well, correct.
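
Concretely (a sketch following diffusers conventions rather than our internal code), that just means the text encoder's parameters join the optimizer alongside the UNet's:

```python
# Sketch (diffusers conventions, not our internal code): training the text
# encoder means optimizing its weights jointly with the UNet's.
import itertools
import torch
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

optimizer = torch.optim.AdamW(
    itertools.chain(unet.parameters(), text_encoder.parameters()),
    lr=5e-6,  # typical DreamBooth-range learning rate
)
```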

You should be able to train multiple concepts/subjects, although there's an unsolved problem with concept bleeding when they're used in the same prompt. Shoot me a DM and we can probably figure something out!

JanssonsFrestelse t1_j1ydl3c wrote

The curated images would be generated by the model being trained, using the same prompt for the reg images as for the subject training images (found via CLIP interrogation, swapping out e.g. "a woman" for my subject's token). Not a big deal though; if you can train the 768x768 model, I'll try it out. I can't run it locally, and the Colabs for the 768 model have been unreliable. I might write my own later on if the model you train shows good quality.

Edit: there's probably not much use in having the exact same prompt, but I'm thinking of something similar to the CLIP classification of the image(s) plus the general style/concept you want to learn. Or do you see some issue with the method I've described?
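
Something like this is what I have in mind (clip-interrogator usage as I remember it from that library's README, so treat it as a sketch; the token and filenames are placeholders):

```python
# Sketch of the idea: caption the training photo via CLIP interrogation,
# then swap the generic class phrase for the subject token to build the
# instance prompt, keeping the caption itself as the reg-image prompt.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
caption = ci.interrogate(Image.open("subject_photo.png").convert("RGB"))
# e.g. "a woman standing on a beach, golden hour, 35mm film"

instance_prompt = caption.replace("a woman", "sks woman")  # subject token
class_prompt = caption  # same style/context, generic class for reg images
```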
