DrXaos t1_iw03k6k wrote
Reply to comment by elcric_krej in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
I’m not entirely convinced it eliminates every random choice. There is usually a permutation symmetry among tabular inputs and among hidden nodes.
If I’m reading it correctly, then for a single scalar output of a regressor or classifier coming from the hiddens (or from the inputs directly, as in logistic regression), it would set the coefficient of the first node to 1 and all the others to 0, i.e. a truncated identity.
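Roughly, as a sketch of my reading (the layer shapes and names here are mine, not the paper's), that head init would look like:

```python
import torch
import torch.nn as nn

# Minimal sketch: a final linear layer mapping Nh hidden units to one scalar
# output gets a "truncated identity" init, i.e. its weight row is the first
# row of the Nh x Nh identity -> [1, 0, 0, ..., 0].
Nh = 8
head = nn.Linear(Nh, 1, bias=False)
with torch.no_grad():
    head.weight.zero_()
    head.weight[0, 0] = 1.0  # only the first hidden unit feeds the output at init

print(head.weight)  # tensor([[1., 0., 0., ..., 0.]])
```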
But what’s so special about that first element? Nothing. The same applies to the Hadamard matrices: it’s making one choice from an arbitrary ordering.
In my opinion, there still could/should be a random permutation of columns on the interior weights, and I might init the final linear layer of the classifier to equal but nonzero values like 1/sqrt(Nh), with random signs if the hidden activations are nonnegative (like relu or sigmoid) rather than symmetric (like tanh).
Maybe also random +1/-1 signs times a random permutation times the identity?
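Something like this hypothetical sketch of those two tweaks (my own illustration, not the paper's method):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
Nh = 8

# 1) interior layer: random +/-1 signs times a random permutation times the identity
W = torch.eye(Nh)[torch.randperm(Nh)]          # permuted identity matrix
W = W * (torch.randint(0, 2, (Nh, 1)) * 2.0 - 1.0)  # random +/-1 sign per row
interior = nn.Linear(Nh, Nh, bias=False)
with torch.no_grad():
    interior.weight.copy_(W)

# 2) final layer: equal-magnitude nonzero weights 1/sqrt(Nh) with random signs
# (the random signs only matter when hidden activations are nonnegative, e.g. relu)
signs = torch.randint(0, 2, (1, Nh)) * 2.0 - 1.0
head = nn.Linear(Nh, 1, bias=False)
with torch.no_grad():
    head.weight.copy_(signs / Nh ** 0.5)
```

The signed permutation is still orthogonal, so it keeps the isometry property while breaking the arbitrary ordering.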
For that matter, any orthogonal rotation also preserves dynamical isometry, so a random orthogonal matrix before the truncated identity should also work as an init, and then we’re back to an already existing suggested init method (random orthogonal initialization).
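Concretely (again my own illustration), a truncated identity applied after a random rotation is just the first rows of a random orthogonal matrix, which is the orthogonal init torch already ships:

```python
import torch
import torch.nn as nn

N_in, N_out = 8, 4
Q, _ = torch.linalg.qr(torch.randn(N_in, N_in))  # random orthogonal N_in x N_in
I_trunc = torch.eye(N_out, N_in)                 # truncated identity, N_out x N_in
W = I_trunc @ Q                                  # = first N_out rows of Q

# Equivalent existing method: orthogonal initialization
layer = nn.Linear(N_in, N_out, bias=False)
nn.init.orthogonal_(layer.weight)
```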
Training for enhanced sparsity is interesting, though.