
Cyclone4096 t1_j5owtmi wrote

I don’t have much background in ML. I want to build a fairly small neural network that takes a single input from time series data and gives a single output for each point in that series. My loss function aggregates the entire time series output into a single scalar value. I’m using PyTorch, and when I call “.backward()” on the loss it takes a long time (understandably). Is there an easier way to do this than running the backward gradient calculation on a loss that is itself the result of hundreds of millions of values? Note that the neural network itself is tiny, maybe fewer than 100 weights, but my issue is that I don’t have any golden target; instead I want to minimize a complex function computed from the entire time series output.
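Roughly what this looks like, as a minimal sketch (the network and the loss below are placeholders; the real aggregate loss is far more complex):

```python
import torch
import torch.nn as nn

# Tiny per-sample network, well under 100 weights.
net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))

x = torch.randn(1_000_000, 1)   # stand-in for the time series, shape (T, 1)
y = net(x)                      # one output per time step
loss = (y ** 2).mean()          # placeholder for the real aggregate loss
loss.backward()                 # one backward pass over all T terms at once
```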


zoontechnicon t1_j5q9atd wrote

Would you mind giving more details about the domain and the purpose of the loss function? Maybe people can give you hints based on that.


Cyclone4096 t1_j5qi39j wrote

Sure! This is for audio signal processing. There is an amplifier that takes an audio signal and a volume setting as inputs. Higher volume causes white noise, so I want the volume to stay low whenever possible and boost loudness by multiplying the input signal instead. Of course, the multiplication won’t work if the input to the amplifier is already high, and switching the amplifier volume too often is not good either, since that causes pop/click noise. So I’m designing a small neural network that takes the audio signal as input and outputs the amplifier volume.

The way I went about it: I modeled the amplifier and all the noise associated with it using tensor math, then took the amplifier output minus the original input and did MSE on that. The audio signals are pretty long, so the filter+MSE is a pretty massive expression. It seems to be working somewhat, but I’m not sure if there is an easier way to do this…
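For reference, a rough sketch of that pipeline with per-chunk backward() calls, which is one possible way to bound the graph size (`net` and `amp_model` are hypothetical stand-ins for the volume network and the differentiable amplifier/noise model; this assumes the loss decomposes over chunks, i.e. no terms cross chunk boundaries):

```python
import torch

def chunked_mse_step(net, amp_model, signal, optimizer, chunk=1_000_000):
    """One optimization step over the whole signal, chunk by chunk."""
    optimizer.zero_grad()
    total = 0.0
    n = signal.numel()
    for start in range(0, n, chunk):
        seg = signal[start:start + chunk].unsqueeze(1)  # (chunk, 1)
        vol = net(seg)                                  # predicted amplifier volume
        out = amp_model(seg, vol)                       # modeled amplifier output + noise
        part = ((out - seg) ** 2).sum() / n             # this chunk's share of the global MSE
        part.backward()                                 # grads accumulate; chunk's graph is freed
        total += part.item()
    optimizer.step()
    return total
```

If the pop/click penalty on volume changes spans chunk boundaries, the boundary state would have to be carried over (detached) between chunks, so this is only exact when the loss truly sums over independent chunks.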
