Submitted by mikonvergence t3_11g14sp in MachineLearning

Hi all!

I have recently put together a course on diffusion image generation that includes videos, a minimal PyTorch framework, and a set of notebooks (all notebooks can be run in Google Colab!)

https://github.com/mikonvergence/DiffusionFastForward

I am hoping it can help those interested in learning to train diffusion models from scratch, in TL;DR mode. What I think sets it apart from other tutorials is that it covers not only low-resolution generation (64x64) but also includes notebooks for training at high resolution (256x256) from scratch. It also includes an example of image-to-image translation that I think some people will find entertaining!


I'm looking forward to hearing some feedback or comments, and I hope you enjoy the course if you decide to check it out!

PS. You can also go directly to the videos on YT: https://youtube.com/playlist?list=PL5RHjmn-MVHDMcqx-SI53mB7sFOqPK6gN

77

Comments


pogsly t1_jarzom0 wrote

!remind me 1 day

3

blabboy t1_jam5ppi wrote

Looks great! Is the code under a specific licence?

2

SnooMarzipans1345 t1_jan3fnu wrote

I'm new to this "whatever" topic. Please explain the OP's topic to me as if I were a child, in TL;DR format.

PS. I did read, but what encryption is this, "hero"!?

1

mikonvergence OP t1_jan5fnj wrote

Hi! Sure, here it goes:

It's a course about making AI models that can create images. These models can do that by learning from a dataset of example images. "Diffusion" is a new type of AI model that works very well for this task.

The course will work best for those familiar with training deep neural networks for generative tasks, so I would advise catching up on topics like VAEs or GANs. However, the video course material is quite short (about 1.5 hours), so you can just play it and see if it works for you or not!

7

SnooMarzipans1345 t1_jaoz02v wrote

>so I would advise catching up on topics like VAEs or GANs.

What??? dig** dig** dig*** clunk** What is this? It's in a foreign language to me.

−2

SnooMarzipans1345 t1_jaoz34f wrote

> However, the video course material is quite short (about 1.5 hours), so you can just play it and see if it works for you or not!

What? Did I miss a sign or something? Please help.

−2

plocco-tocco t1_janz2xc wrote

Great work! Would the image translation work for (binary) image segmentation?

1

mikonvergence OP t1_jao1h1c wrote

Thank you! Yes, in principle, you can generate segmentation maps using the code from the course by treating the segmentation map as the output. I'm not sure how that would compare to non-diffusion segmentation with the same backbone network, but it would definitely be interesting to explore!

Please remember that the diffusion process generally expects data bounded to the [-1,+1] range, so in the framework the images are shifted automatically from the assumed [0,1] limits to that range (via input_T and output_T). So if you go beyond binary and use more classes within a single channel, make sure the ground-truth output values are still between [0,1] (alternatively, you can split each class confidence into a separate channel, but each channel should still be bounded).
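For illustration, here's a minimal sketch of what those transforms amount to (the lambda bodies are my paraphrase of the idea, not the framework's exact code):

```python
import torch

# Hypothetical stand-ins for the framework's input_T / output_T:
# shift data from the assumed [0, 1] range into the [-1, +1] range
# expected by the diffusion process, and back again.
input_T = lambda x: x * 2.0 - 1.0     # [0, 1] -> [-1, +1]
output_T = lambda x: (x + 1.0) / 2.0  # [-1, +1] -> [0, 1]

# e.g. a binary segmentation map with values in {0, 1}
mask = torch.randint(0, 2, (1, 1, 256, 256)).float()
assert input_T(mask).min() >= -1 and input_T(mask).max() <= 1
```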

But yeah, for binary, it should work with no special adjustment!

2

plocco-tocco t1_jao43p9 wrote

Thanks for the input. I have seen some papers claiming SOTA in image segmentation using diffusion, so I am also curious to see how they perform.

I have another question, if you don't mind. How difficult would it be to extend the code for image-to-image translation so that it works on 3D data (64x64x64 for example)?

2

mikonvergence OP t1_jao5zyg wrote

There could be a few simple solutions for extending this to 64x64x64, each with certain pros and cons. The two key decisions concern the data format (perhaps there is a way to compress/reformat the data so it's more digestible than a raw 64x64x64 volume) and the type of underlying architecture (most importantly, whether to use a 2D or 3D CNN, or a different type of topology altogether).

A trivial approach would be to use a 2D architecture with 64 channels instead of the usual 3, which could be very easily implemented with the existing framework. I suspect that would be quite hard to train, though you might still try.
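To make that concrete, here's a rough sketch of the channel trick in plain PyTorch (the conv layer is a hypothetical first layer of a 2D denoiser, not the course's actual network):

```python
import torch
import torch.nn as nn

# A 64x64x64 volume, batched: shape (B, D, H, W) = (B, 64, 64, 64).
# Treat the depth axis D as the channel axis of a 2D CNN, so the
# denoising network sees a 64-channel 2D "image" instead of 3 channels.
volume = torch.rand(4, 64, 64, 64)

# Hypothetical first layer of a 2D denoiser adapted for this data:
# in_channels=64 rather than the usual 3 (RGB).
first_conv = nn.Conv2d(in_channels=64, out_channels=128,
                       kernel_size=3, padding=1)
features = first_conv(volume)  # shape: (4, 128, 64, 64)
```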

This is an area of active research (beyond DreamFusion and a few other popular papers, I'm not very familiar with it), so different solutions still need to be explored, and if you discover something that works reasonably well, that will be really exciting!

3