
jazmaan t1_irgo3n5 wrote

Funny thing is, when I first got into AI Art and ML, it was through a question I asked on Reddit almost two years ago. And it's still my dream.

"Would it be possible to train an AI on high quality recordings of Jimi Hendrix live in concert, and then have the AI listen to a crappy audience bootleg and make it sound like a high quality recording?"

AI Art was still in its infancy back then, but the people who offered their opinions on my question were the same ones on the cutting edge of VQGAN+CLIP. The answer to my question still looks like "someday, but probably not within the next five years." But hope springs eternal! Someday that crappy recording of Jimi in Phoenix (one of the best sets he ever played) may be transformed into something that sounds as good as Jimi at Woodstock!
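For what it's worth, the usual framing of this problem is supervised audio enhancement: synthesize "bootleg-quality" versions of clean recordings, train a network to map degraded audio back to clean, then run it on the real bootleg. Here's a minimal sketch of that idea, assuming PyTorch; the model, loss, and degradation function are toy placeholders I made up for illustration, not anything that would actually restore a Hendrix tape.

```python
# Toy sketch of the degrade-then-restore training setup (hypothetical).
# Clean soundboard audio is artificially corrupted to make (input, target)
# pairs; the trained model is then applied to a real bootleg recording.
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Tiny 1-D convolutional denoiser operating on raw waveforms."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=15, padding=7),
        )

    def forward(self, x):
        return self.net(x)

def degrade(clean: torch.Tensor) -> torch.Tensor:
    """Crudely simulate a bootleg: add hiss and muffle the highs."""
    noisy = clean + 0.05 * torch.randn_like(clean)
    # A 4-tap moving average acts as a cheap low-pass "audience mic" filter.
    kernel = torch.full((1, 1, 4), 0.25)
    return nn.functional.conv1d(noisy, kernel, padding=2)[..., : clean.shape[-1]]

model = Enhancer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(8, 1, 16000)  # stand-in for batches of clean concert audio
for step in range(100):
    degraded = degrade(clean)
    loss = nn.functional.l1_loss(model(degraded), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The hard part in practice is that real audience tapes are degraded in ways a synthetic pipeline won't match (room reverb, crowd noise, wow and flutter), which is a big reason the honest answer is still "someday."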


PC-Bjorn t1_isnicrn wrote

Soon, we might be upscaling beyond higher bitrate, bit depth, and fidelity: into multichannel reproductions, or maybe even into individual streams for each instrument and performer on stage, plus a volumetric model of the stage layout itself, letting us render the experience as it would sound from any coordinate on or around the stage (a rough sketch of that rendering step below).
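The rendering half of that idea is actually the simple part once you have per-source stems and stage positions: apply inverse-distance attenuation and a speed-of-sound delay per source, then mix. A minimal NumPy sketch, where the stems, positions, and sample rate are all illustrative assumptions:

```python
# Hypothetical renderer: mix per-instrument mono stems as heard from an
# arbitrary listener coordinate, using distance gain and propagation delay.
import numpy as np

SR = 44100   # sample rate, Hz
C = 343.0    # speed of sound, m/s

def render_at(listener_xy, stems, positions):
    """Mix mono stems into what is heard at listener_xy (metres)."""
    n = max(len(s) for s in stems)
    out = np.zeros(n)
    for stem, pos in zip(stems, positions):
        d = np.linalg.norm(np.asarray(listener_xy) - np.asarray(pos))
        delay = int(round(d / C * SR))   # propagation delay in samples
        gain = 1.0 / max(d, 1.0)         # inverse-distance attenuation
        end = min(n, delay + len(stem))
        out[delay:end] += gain * stem[: end - delay]
    return out

# Two fake one-second stems at different stage positions
t = np.arange(SR) / SR
stems = [np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 220 * t)]
positions = [(0.0, 0.0), (5.0, 0.0)]
front_row = render_at((2.5, -3.0), stems, positions)
```

Getting clean stems out of a 1969 audience tape is the genuinely hard, unsolved part; the spatial mix above is textbook acoustics by comparison.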

Pair that with a realtime, hardware-accelerated reproduction of the visual experience of being there, based on a network trained on photos from the concert, and we'll all be able to go to Woodstock in 1969.
