Submitted by cautioushedonist t3_yto34q in MachineLearning
I work exclusively in NLP, and since transformers (especially pretrained ones) took over, I haven't written a neural net (RNN, LSTM, etc.) myself in over 3 years and haven't had to worry about things like the number of layers, hidden size, etc.
Tabular data has XGBoost, etc. NLP has pretrained Transformers. Images have pretrained CNNs and Transformers.
But I've been through some ML system design books, and recommendation system solutions often feature custom neural nets, so that's interesting.
What was the problem and type of data at hand when you last wrote a neural net yourself, layer by layer?
Thanks y'all!
WigglyHypersurface t1_iw5c2u1 wrote
Thankfully I'm doing niche enough projects that I still get to. The last one was a multi-modal IWAE (importance-weighted autoencoder) for imputing missing data.
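For readers unfamiliar with the term, here is a minimal sketch of the IWAE objective (Burda et al., 2016) in PyTorch. This is not the commenter's multi-modal imputation model; the layer sizes, the Bernoulli decoder, and the toy data are illustrative assumptions, shown only to convey the importance-weighted bound.

```python
# Minimal IWAE sketch: a VAE trained with an importance-weighted bound.
# Architecture and likelihood choices here are assumptions, not the
# commenter's actual model.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class IWAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        # Encoder outputs mean and log-variance of q(z|x)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, 2 * z_dim))
        # Decoder outputs Bernoulli logits for p(x|z)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def loss(self, x, k=5):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        # Draw k importance samples per data point: shape (k, batch, z_dim)
        z = mu + std * torch.randn(k, *mu.shape, device=x.device)
        logits = self.dec(z)
        # log p(x|z): Bernoulli likelihood, summed over features
        log_px_z = -F.binary_cross_entropy_with_logits(
            logits, x.expand(k, *x.shape), reduction='none').sum(-1)
        # log p(z) and log q(z|x): diagonal Gaussians
        log_pz = torch.distributions.Normal(0., 1.).log_prob(z).sum(-1)
        log_qz = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
        # IWAE bound: log-mean-exp over the k importance weights
        log_w = log_px_z + log_pz - log_qz  # (k, batch)
        return -(torch.logsumexp(log_w, dim=0) - math.log(k)).mean()

x = torch.rand(32, 784)           # toy batch of values in [0, 1]
model = IWAE()
print(model.loss(x, k=5).item())  # scalar negative IWAE bound
```

With k=1 this reduces to the standard VAE ELBO; larger k gives a strictly tighter bound on the marginal likelihood, which is the point of the IWAE variant.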