
SaifKhayoon t1_jb54pnw wrote

Is this why some checkpoints / safetensors give better results than Stable Diffusion's 1.5 and 2.1 weights?

Was LAION-2B used to train the base model shared by all other "models"/weights?


True_Toe_8953 t1_jb72i4c wrote

> Is this why some checkpoints / safetensors give better results than Stable Diffusion's 1.5 and 2.1 weights?

I think this is because of a tradeoff between stylistic range and quality. Your model is only so big, so the more styles it covers, the fewer parameters are available for each.

The base SD model is capable of a very wide range of styles, including a lot of abstract styles that no one ever uses. Most fine-tuned models only support a handful of popular styles (usually anime, digital painting, or photography), and the remaining styles get merged into the dominant style and lost.
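The intuition above can be sketched as a toy back-of-envelope calculation. This is not how diffusion models actually allocate capacity (styles share parameters rather than partitioning them), and the 860M figure is just the commonly cited parameter count for SD 1.5's UNet, but an even split makes the "more styles, fewer parameters each" tradeoff concrete:

```python
# Toy illustration only: assume a fixed parameter budget split evenly
# across the styles a model is trained to cover.
def params_per_style(total_params: int, num_styles: int) -> int:
    """Rough capacity available to each style under an even split."""
    return total_params // num_styles

# ~860M is the commonly cited parameter count for SD 1.5's UNet.
BUDGET = 860_000_000

# A broad base model spreading over many styles has far less
# capacity per style than a narrow fine-tune covering only a few.
broad = params_per_style(BUDGET, 100)   # e.g. base SD, many styles
narrow = params_per_style(BUDGET, 3)    # e.g. anime-focused fine-tune
print(broad, narrow)
```

Under this (very rough) framing, a fine-tune that collapses the model onto a few styles frees up an order of magnitude more effective capacity per style, which is one way to think about why narrow checkpoints look sharper inside their niche.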

MidJourney has a wider stylistic range than most fine-tuned SD models, but it appears to be making the same tradeoff.
