Submitted by wellfriedbeans t3_10r6qn0 in MachineLearning

What is the state of research in normalizing flows in 2023? Have they been superseded by diffusion models for sample generation? If so, what are some other applications where normalizing flows are still SOTA (or even useful)?

23

Comments


jimmymvp t1_j6w4ezb wrote

Any application where you need exact likelihoods, flows are king. Such is the case, for example, if you're learning a sampling distribution for MCMC sampling, estimating normalizing constants (I believe physics has a lot of these problems), etc.
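As a rough sketch of the normalizing-constant case (the `flow`/`energy` API here is made up for illustration, not any particular library):

```python
import math
import torch

# Hypothetical API: flow.sample(n) -> (x, log_q), where log_q is the *exact*
# log-density of each sample; energy(x) is the unnormalized negative log target.
def estimate_log_Z(flow, energy, n_samples=10_000):
    """Importance-sampling estimate of log Z, where Z = integral of exp(-E(x)) dx."""
    x, log_q = flow.sample(n_samples)
    log_w = -energy(x) - log_q                  # exact log importance weights
    # log-mean-exp of the weights, for numerical stability
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```

The whole estimator hinges on `log_q` being exact, which is precisely what the bijectivity of a flow buys you.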

9

badabummbadabing t1_j6wfsok wrote

Exact likelihoods are what attracted me to normalizing flows once, too. But I soon found them too hard to train to yield any useful likelihoods. The bijectivity constraint (meaning that your 'latent' space is just as large as your data space) seems like too much of a restriction in practice. For my application, switching to variational models and just accepting that I'll only get lower bounds on the likelihood got me further in the end. Diffusion models would be a more 'modern' option in this regard as well.

Are you aware of any applications where people actually use NFs for likelihoods? I am aware of some research papers, but I'd say their experiments are too contrived to convince me that this will ever find its way into an actual application.

3

based_goats t1_j6xftq6 wrote

In science/physics, flows are the dominant tool for simulation-based inference. The alternative is lengthy rejection sampling. Diffusion-based models are making an entrance in this area as well, but they are not yet well-understood enough for practitioners to switch.
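The flow-based version of this (neural posterior estimation) boils down to fitting a conditional flow on simulator draws; a minimal sketch, with `cond_flow`, `prior`, and `simulator` as hypothetical stand-ins:

```python
import torch

def train_npe(cond_flow, prior, simulator, optimizer, n_rounds=1000, batch=128):
    """Fit q(theta | x) by maximum likelihood on (theta, x) pairs from the
    simulator, instead of accepting/rejecting samples as in ABC."""
    for _ in range(n_rounds):
        theta = prior.sample((batch,))                       # theta ~ p(theta)
        x = simulator(theta)                                 # x ~ p(x | theta)
        loss = -cond_flow.log_prob(theta, context=x).mean()  # maximize log q(theta | x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return cond_flow   # afterwards, evaluate/sample q(theta | x_observed)
```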

3

jimmymvp t1_j71bvhf wrote

The problem with diffusion from an SDE view is that you still don't have exact likelihoods: computing the exact Jacobian trace is intractable, and you also have ODE-solver errors. People mostly resort to the Hutchinson trace estimator, since the exact computation would be too expensive, so I don't think that diffusion in this form is going to enter the MCMC world anytime soon.
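For reference, the estimator in question replaces the exact divergence (the trace of the Jacobian of the ODE's vector field) with a randomized probe; a sketch, assuming `f` is that vector field:

```python
import torch

def hutchinson_divergence(f, x, n_probes=1):
    """Stochastic estimate of div f(x) = tr(df/dx), the term in the
    continuous-time change-of-variables formula. Hutchinson's trick uses
    E_eps[eps^T (df/dx) eps] with Rademacher eps instead of the exact trace,
    which would need one vector-Jacobian product per dimension."""
    x = x.requires_grad_(True)
    fx = f(x)
    div = torch.zeros(x.shape[:-1], dtype=x.dtype, device=x.device)
    for _ in range(n_probes):
        eps = torch.randint_like(x, low=0, high=2) * 2 - 1           # Rademacher noise
        vjp, = torch.autograd.grad(fx, x, grad_outputs=eps, create_graph=True)
        div = div + (vjp * eps).sum(dim=-1)
    return div / n_probes
```

The estimate is unbiased, but it is still a Monte Carlo estimate sitting on top of a numerical ODE solve, hence "not exact".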

1

based_goats t1_j72jd9z wrote

There are some papers showing diffusion working better for high-dimensional data in likelihood-free inference, even just using an ELBO bound. Can dig up later if wanted.

1

badabummbadabing t1_j76tfqt wrote

Fully agree with you from a technical perspective.

The difference is that at best, you only get the likelihood under your model of choice. If that happens to be a bad model of reality (which I'd argue is the case more often than not with NFs), you might be better off just using some approximate likelihood (or ELBO) of a more powerful model.

But I am not an expert in MCMC methods, so I might be talking out of my depth here. I was mainly using these models for MAP estimation.

1

jimmymvp t1_j7aend6 wrote

Indeed, if your model is bad at modeling the data, there's not much use in computing likelihoods. If you just want to sample images that look cool, you don't care much about likelihoods. However, there are certain use-cases where we do care about exact likelihoods: estimating normalizing constants and providing guarantees for MCMC. Granted, you can always run MCMC with an approximate proposal distribution. However, obtaining nice guarantees on convergence and mixing times (correctness??) is difficult then; I don't know how you are supposed to do this when using a proposal for which you can't evaluate the likelihood. Similarly with importance sampling: you only obtain correct weights if you have the correct likelihoods, otherwise the error is not just in the model but also in the estimator.

This is the way I see it at least, but I'll be sure to read the aforementioned paper. I'm also not sure how much having the lower bound hurts you in estimation.

2

jimmymvp t1_j71cgkw wrote

There is a trick for gradually expanding your latent dimension with normalising flows: if you assume the extra dimensions are independent up to a certain point, you can sample them from a base distribution and concatenate them in the middle of the flow.
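Concretely, it looks something like this (a sketch; the `flow1`/`flow2` objects, their forward/inverse signatures, and the split into k original plus m injected dimensions are all made up for illustration):

```python
import torch

# flow1 maps z (dim k) <-> h (dim k); flow2 maps the concatenation [h, eps]
# (dim k + m) <-> x. Both expose
#   forward(u) -> (v, log|det dv/du|)   and   inverse(v) -> (u, log|det du/dv|).

def sample(flow1, flow2, base_k, base_m, n):
    z = base_k.sample((n,))                     # low-dimensional latent
    h, _ = flow1.forward(z)
    eps = base_m.sample((n,))                   # extra dims, drawn independently
    x, _ = flow2.forward(torch.cat([h, eps], dim=-1))
    return x

def log_prob(flow1, flow2, base_k, base_m, x, k):
    u, logdet2 = flow2.inverse(x)               # undo the second half of the flow
    h, eps = u[..., :k], u[..., k:]             # split off the injected dims
    z, logdet1 = flow1.inverse(h)
    # independence assumption: the density factorizes over z and eps
    return base_k.log_prob(z) + base_m.log_prob(eps) + logdet1 + logdet2
```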

Again, MCMC sampling and simulation-based inference are examples. Imagine you have an energy function that describes the distribution (you don't have data): how do you sample from this distribution? You would do some MCMC. How would you arrive at a good proposal distribution to make the MCMC algorithm more efficient? You would fit the proposal based on some limited data that you have, or on inductive biases such as certain invariances, etc.
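With a flow as the proposal, its exact density plugs straight into the Metropolis-Hastings acceptance ratio; a sketch with a hypothetical `flow`/`energy` API:

```python
import torch

def imh_step(x, flow, energy):
    """One independence Metropolis-Hastings step with a trained flow as the
    proposal; energy(x) is the unnormalized negative log target."""
    x_new, log_q_new = flow.sample_with_log_prob()   # propose from the flow
    log_q_old = flow.log_prob(x)
    # the acceptance ratio needs the *exact* proposal density at both points
    log_alpha = (-energy(x_new) - log_q_new) - (-energy(x) - log_q_old)
    if torch.log(torch.rand(())) < log_alpha:
        return x_new
    return x
```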

3

PHEEEEELLLLLEEEEP t1_j72txfo wrote

Diffusion models can also give exact likelihoods, so maybe we'll see a shift to those in the future.

1

jimmymvp t1_j7aex0t wrote

In theory yes; in practice it's not exact, it's approximated via a trace estimator and an ODE solver.

1

vwvwvvwwvvvwvwwv t1_j6xgaqc wrote

I've had success with normalizing flows in problems where both directions of the transformation were important (although presumably an autoencoder might work just as well).
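For what it's worth, the reason both directions are cheap is the coupling structure; a generic RealNVP-style sketch (not any specific paper's exact architecture):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling layer: both directions are a single cheap pass,
    and the log-det-Jacobian is just a sum of the predicted scales."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t                 # transform second half
        return torch.cat([x1, y2], dim=-1), log_s.sum(dim=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.half], y[..., self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-log_s)              # undo the transform exactly
        return torch.cat([y1, x2], dim=-1), -log_s.sum(dim=-1)
```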

This was published yesterday: Flow Matching for Generative Modeling

TL;DR: We introduce a new simulation-free approach for training Continuous Normalizing Flows, generalizing the probability paths induced by simple diffusion processes. We obtain state-of-the-art on ImageNet in both NLL and FID among competing methods.

Abstract: We introduce a new paradigm for generative modeling built on Continuous Normalizing Flows (CNFs), allowing us to train CNFs at unprecedented scale. Specifically, we present the notion of Flow Matching (FM), a simulation-free approach for training CNFs based on regressing vector fields of fixed conditional probability paths. Flow Matching is compatible with a general family of Gaussian probability paths for transforming between noise and data samples---which subsumes existing diffusion paths as specific instances. Interestingly, we find that employing FM with diffusion paths results in a more robust and stable alternative for training diffusion models. Furthermore, Flow Matching opens the door to training CNFs with other, non-diffusion probability paths. An instance of particular interest is using Optimal Transport (OT) displacement interpolation to define the conditional probability paths. These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization. Training CNFs using Flow Matching on ImageNet leads to state-of-the-art performance in terms of both likelihood and sample quality, and allows fast and reliable sample generation using off-the-shelf numerical ODE solvers.

4

schwagggg t1_j6x7eh7 wrote

I recently found a paper from Blei's lab that uses NFs to learn KL(p||q) instead of KL(q||p) variational inference (might be what the other commenter is referring to), but I'm afraid that's not what you're interested in.
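If it helps, the two objectives differ only in which distribution you take expectations under; a rough sketch with a hypothetical flow API (`q.rsample`, `q.log_prob`):

```python
import torch

def reverse_kl_loss(q, log_p, n=256):
    """KL(q || p): sample from the flow and push its samples toward the
    (possibly unnormalized) target log-density log_p."""
    x = q.rsample((n,))                      # reparameterized samples from the flow
    return (q.log_prob(x) - log_p(x)).mean()

def forward_kl_loss(q, x_from_p):
    """KL(p || q): needs samples from p (data, or samples obtained some other
    way); reduces to maximum likelihood on those samples."""
    return -q.log_prob(x_from_p).mean()
```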

Apart from that, the last SOTA I can remember application-wise was GLOW.

2

Ulfgardleo t1_j6vzpgz wrote

There is very little research on them. They are a nice theoretical idea, but the concept is very constraining, and numerical difficulties make experimenting hell.

I am not aware of any active research and I think they never were really big to begin with.

−4

based_goats t1_j6xg1fv wrote

See my comment for flows used as an alternative to ABC.

2