True_Toe_8953 t1_jb72i4c wrote
Reply to comment by SaifKhayoon in [R] We found nearly half a billion duplicated images on LAION-2B-en. by von-hust
> Is this why some checkpoints / safetensors make for better results than stable diffusion's 1.5 and 2.1 weights?
I think this comes down to a tradeoff between stylistic range and quality. A model is only so big, so the more styles it covers, the fewer parameters are available for each.
The base SD model is capable of a very wide range of styles, including a lot of abstract styles that no one ever uses. Most fine-tuned models only support a handful of popular styles (usually anime, digital paintings, or photographs); the rest get merged into the dominant style and lost.
MidJourney has a wider range than most fine-tuned SD models but appears to be making the same tradeoff.