
VoyagerExpress t1_iszfebl wrote

I am currently working on a project involving an unbounded multi-task optimization problem. Essentially, let's say my model outputs a tensor that leads to an SNR-type loss (for people familiar with wireless communications jargon, the signal and interference vectors are columns of this tensor), and I would like to improve this SNR up to some required value. Do you guys have any suggestions for loss functions I could use? Right now I am trying (model_output_snr - required_snr)^2, basically an MSE loss with respect to the required minimum SNR. This doesn't change the fact that the problem itself is unbounded and unsupervised. I am new to this learning paradigm, since I am used to having data with inputs and labels.
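Concretely, here is a minimal PyTorch sketch of what I mean (the SNR computation here is just a placeholder; my real one comes from the signal/interference columns of the output tensor, and `req_snr` is whatever threshold the system needs):

```python
import torch
import torch.nn.functional as F

req_snr = 10.0  # required SNR in dB (placeholder value)

def snr_from_output(out):
    # Placeholder: compute SNR from the model output tensor.
    # In my setting the signal and interference vectors are columns
    # of `out`; here I just treat the first two columns as such.
    signal, interference = out[..., 0], out[..., 1]
    return 10 * torch.log10(signal.pow(2).sum(-1) /
                            (interference.pow(2).sum(-1) + 1e-9))

def loss_fn(out):
    snr = snr_from_output(out)
    # What I'm doing now: MSE against the required SNR.
    return F.mse_loss(snr, torch.full_like(snr, req_snr))
    # A one-sided variant that only penalizes falling short of the
    # target (since I only need SNR *up to* the required value) would be:
    # return F.relu(req_snr - snr).pow(2).mean()
```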

I have tried a bunch of architectures for this problem, but fundamentally the training losses look erratic and do not improve at all, even after thousands of epochs.

Are there any precursors to this kind of ML technique, or anything I should look out for? Really, any help would be great at this point, thanks! The problem itself resembles a convex optimization problem statement, but the maximization objective is non-convex due to the inherent non-linearities of the activation functions. Is there some theoretical limit on this kind of learning problem that makes the approach (using ML instead of convex optimization) pointless in the first place?


seiqooq t1_it3ixpk wrote

Correct me if I'm wrong, but since you say you'd like to improve your SNR up to some value, it sounds like you could formulate this as a 1D maximization problem rather than a 2D optimization problem. In that case, reinforcement learning and genetic algorithms are high on the list of candidate solutions.
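For instance, a toy (1+1) evolution-strategy sketch of the genetic/evolutionary route (assuming you can evaluate a black-box `snr(params)` for any candidate parameter vector; the quadratic objective below is just a stand-in):

```python
import numpy as np

def snr(params):
    # Stand-in for your black-box SNR evaluation; replace with the
    # SNR computed from your tensor's signal/interference columns.
    return -np.sum((params - 3.0) ** 2)  # toy objective, maximum at params = 3

rng = np.random.default_rng(0)
best = rng.normal(size=8)   # initial parameter vector
best_score = snr(best)
sigma = 0.5                 # mutation step size

for step in range(2000):
    candidate = best + sigma * rng.normal(size=best.shape)  # mutate
    score = snr(candidate)
    if score > best_score:  # keep the mutant only if it improves SNR
        best, best_score = candidate, score
        sigma *= 1.1        # widen the search on success
    else:
        sigma *= 0.98       # narrow it on failure (1/5th-rule flavor)

print(best_score)
```

No gradients needed, so the non-convexity and the erratic loss landscape matter much less than they do for backprop.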
