Submitted by Ricenaros t3_114yiwj in deeplearning
Hi all, I'm using neural networks to solve a multi-output regression problem. Now I want to improve my results, but it is unclear how to proceed. There are many (hyper)parameters that I could adjust: batch size, optimizer, learning rate, number of layers, number of hidden units per layer, type of activation, etc. Since training the network takes a decent amount of time, how can I approach (hyper)parameter selection in an intelligent way? Additionally, how can we decide when a model should be tuned versus scrapped? Is there some intuition that a model 'will not work', regardless of parameter settings?
Sim2955 t1_j8ytfmi wrote
Randomised search is usually a good starting point: https://scikit-learn.org/stable/modules/grid_search.html#tuning-the-hyper-parameters-of-an-estimator
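A minimal sketch of what that looks like with scikit-learn's `RandomizedSearchCV`, using `MLPRegressor` as a stand-in for the network (it handles multi-output regression natively). The dataset, parameter ranges, and `n_iter` budget here are illustrative assumptions, not anything from the thread:

```python
# Randomized hyperparameter search over a small neural net.
# Instead of exhaustively trying every combination (grid search),
# we sample a fixed number of random configurations, which scales
# much better when each training run is expensive.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

# Toy multi-output regression data: 3 target variables.
X, y = make_regression(n_samples=500, n_features=20, n_targets=3,
                       noise=0.1, random_state=0)

# Distributions/lists to sample hyperparameters from.
# loguniform is the usual choice for learning rates, since
# reasonable values span several orders of magnitude.
param_distributions = {
    "hidden_layer_sizes": [(64,), (128,), (64, 64), (128, 64)],
    "activation": ["relu", "tanh"],
    "learning_rate_init": loguniform(1e-4, 1e-1),
    "batch_size": [32, 64, 128],
}

search = RandomizedSearchCV(
    MLPRegressor(max_iter=300, random_state=0),
    param_distributions=param_distributions,
    n_iter=8,        # only 8 sampled configurations, not the full grid
    cv=3,            # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

The same idea carries over to PyTorch/Keras models: fix a sampling distribution per hyperparameter, draw a budgeted number of configurations, and keep the best by validation score, rather than sweeping a full grid.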