Comments


Taenk t1_j99bo8q wrote

Maybe ask over at /r/stablediffusion and check out aesthetic gradients there. You might be able to replicate your art style and scale it to the thousands of images you'll need to generate.

5

eternalvisions OP t1_j99crnr wrote

Thanks for the tip!

3

sam__izdat t1_j99j0iu wrote

You're not likely to get much help there, unfortunately. With SD, your best bet would probably be Dreambooth, which you can get with the Hugging Face diffusers library. If the site is representative of your training data, though, that might be overcomplicating matters. GANs can be notoriously difficult to train, but it's probably worth a shot here -- it's a pretty basic use case. You might look into data augmentation and try a U-Net with a single-channel output.

A slightly more advanced option might be ProGAN. Here's a good video tutorial if that's your thing.
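
If you go the GAN route, a rough sketch of a DCGAN-style generator with the single-channel output I mentioned might look something like this in PyTorch (all layer sizes and the 64x64 resolution are illustrative, not tuned for this dataset):

```python
# Rough sketch of a DCGAN-style generator with a single-channel 64x64 output
# (layer sizes are illustrative, not tuned for this task).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # -> 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                # -> 64x64, 1 channel
        )

    def forward(self, z):
        # Reshape the latent vector to (batch, z_dim, 1, 1) before upsampling.
        return self.net(z.view(z.size(0), -1, 1, 1))

fake = Generator()(torch.randn(8, 100))  # -> (8, 1, 64, 64)
```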

6

CurrentlyJoblessFML t1_j99mphb wrote

I definitely think diffusion-based generative AI models are a great idea, and I wholeheartedly agree that training GANs can be very painful. Head over to the Hugging Face diffusers library and you should be able to find a few models that can do unconditional image generation. They also have cookie-cutter scripts you can run to start training your model from the get-go, plus detailed instructions for setting up your own training data.

One caveat: I have been working with these models for a while, and training diffusion models can be very computationally intensive. Do you have access to a GPU cluster? If not, I'd recommend a U-Net-based approach, which you could train on GPUs/TPUs in Google Colab.
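
For a feel of the diffusers workflow, sampling from a pretrained unconditional DDPM is roughly this short (the checkpoint name here is just a public example; you'd swap in a model trained on your own images with the library's training scripts):

```python
# Rough sketch: unconditional sampling with Hugging Face diffusers.
# "google/ddpm-celebahq-256" is a public example checkpoint standing in
# for a model fine-tuned on your own data.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
image = pipe().images[0]  # full 1000-step DDPM sampling by default
image.save("sample.png")
```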

I have been using this class of models for my master's thesis and I would be happy to help if you have any questions. Good luck! :)

2

snowpixelapp t1_j99zc4b wrote

In my experiments, I have found the Dreambooth implementation in diffusers to be not very good. There are many alternatives to it, though.

1

violet_zamboni t1_j9ak3ba wrote

I don’t think machine learning is going to get you good results. Have you asked this on r/generative? It looks like something from there.

3

banmeyoucoward t1_j9apt54 wrote

What tool did you use to make the art on your website?

Your style relies heavily on recursion and similarity across scales, which conv nets are not good at, but which programmatic descriptions of images like LOGO capture very well. My strategy would be to manually write simple LOGO or Python (or whatever tool you initially used) programs that generate each of the images on your site, and then prompt ChatGPT with "write a program that generates an image combining ideas from <Program A> and <Program B>".
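
For instance, a toy Python program in that spirit -- recursive, with explicit self-similarity across scales -- can be as simple as (purely illustrative, obviously not your actual style):

```python
# Toy sketch of a programmatic, scale-similar image: circles that recurse
# at half the radius in four directions. Purely illustrative.
from PIL import Image, ImageDraw

def draw(d, x, y, r, depth):
    if depth == 0 or r < 2:
        return
    d.ellipse([x - r, y - r, x + r, y + r], outline="black")
    for dx, dy in ((-r, 0), (r, 0), (0, -r), (0, r)):
        draw(d, x + dx, y + dy, r // 2, depth - 1)  # recurse at half scale

img = Image.new("RGB", (512, 512), "white")
draw(ImageDraw.Draw(img), 256, 256, 128, depth=5)
img.save("recursive.png")
```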

2

currentscurrents t1_j99ivt8 wrote

You could definitely do this with Stable Diffusion embeddings.
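
Assuming "embeddings" here means textual inversion, loading a learned concept into a pipeline with diffusers looks roughly like this (the concept name is the stock example from the docs, standing in for an embedding trained on your own images):

```python
# Sketch: textual inversion with diffusers. "sd-concepts-library/cat-toy"
# is the documentation's example concept, not the OP's style.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("an abstract pattern in the style of <cat-toy>").images[0]
image.save("out.png")
```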

1

AtomicNixon t1_j9ar9hw wrote

There's not enough here for a network to latch onto. I've trained nets on a variety of geometric patterns of differing styles so I know the minimum needed. I think banme's suggestion is the way to go. Figure out what your personal algo is and go with that.

1