happyhammy OP t1_j0ee3fz wrote

I was very pleasantly surprised to see the release of https://www.riffusion.com/ today. I'd say it's the best AI music generation to date, and it uses the 2D spectrogram approach.
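
For anyone unfamiliar with the technique: the model treats audio as an image by rendering it as a mel spectrogram, runs image diffusion on that, and converts the result back to audio. Here's a minimal sketch of just the encode/decode round trip, not Riffusion's actual code (standard librosa parameters; "clip.wav" is a placeholder path, and Riffusion's real resolution differs):

```python
# Sketch of the spectrogram-as-image round trip -- my reconstruction of the
# general idea, not Riffusion's pipeline. "clip.wav" is a placeholder.
import numpy as np
import librosa
import soundfile as sf

sr = 22050
y, _ = librosa.load("clip.wav", sr=sr, mono=True)

# Encode: audio -> mel power spectrogram -> 8-bit greyscale "image"
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)                # clipped to [-80, 0] dB
img = np.clip((S_db + 80.0) * (255.0 / 80.0), 0, 255).astype(np.uint8)

# ... an image diffusion model would generate or edit `img` here ...

# Decode: image -> power spectrogram -> audio, with Griffin-Lim estimating
# the phase information that the image representation throws away.
S_rec = librosa.db_to_power(img.astype(np.float32) * (80.0 / 255.0) - 80.0)
y_rec = librosa.feature.inverse.mel_to_audio(S_rec, sr=sr, n_fft=2048, hop_length=512)
sf.write("roundtrip.wav", y_rec, sr)
```

The lossy step is the phase: the image only stores magnitudes, so the decoder has to guess the phase, which is part of why outputs from this family of approaches sound smeared.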

What's also interesting is that they're not telling us what dataset the model was trained on.

1

Ronny_Jotten t1_j0hgi63 wrote

It depends on what you mean by "AI", but there are already generative music systems that produce far better music than that.

Spectral analysis/resynthesis is certainly important. There have long been tools like MetaSynth that let you do image processing of spectrograms. It's interesting that the "riffusion" project works at all, and it's a valuable piece of research. I can imagine the technique being useful for musicians as a way to generate novel sounds to be incorporated in larger compositions.
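
To make the "image processing of spectrograms" idea concrete, here's a toy example in the MetaSynth spirit, my own illustration rather than anything MetaSynth actually does (assumes librosa and scipy; "input.wav" is a placeholder):

```python
# Toy "image processing on a spectrogram": blur the magnitude spectrogram
# along time, then resynthesize. My illustration, not MetaSynth itself.
import numpy as np
import librosa
import soundfile as sf
from scipy.ndimage import gaussian_filter

sr = 22050
y, _ = librosa.load("input.wav", sr=sr, mono=True)

# Magnitude spectrogram: rows = frequency bins, columns = time frames.
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Any 2D image operation becomes a sound transform. A Gaussian blur along
# the time axis (sigma = 8 frames, ~0.19 s) smears transients into washes.
S_blur = gaussian_filter(S, sigma=(0.0, 8.0))

# Phase was discarded, so re-estimate it with Griffin-Lim and resynthesize.
y_out = librosa.griffinlim(S_blur, n_iter=32, hop_length=512)
sf.write("blurred.wav", y_out, sr)
```

Swap the blur for any other image filter (sharpening, rotation, painting pixels in by hand) and you get a different sound transform, which is exactly the kind of sound-design use I can see this being good for.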

But it's difficult to see how it can be applied successfully to entire, already-mixed-down pieces to generate a complete composition. Although Riffusion can produce some interesting and strange loops, it's hard to call its output "music" in the sense of an overall composition, and I'm skeptical that this basic technique can be tweaked to get there. I could be wrong, but I still think it's a naive approach, and any actually listenable music-generation system will be based on rather different principles.

3