Submitted by cautioushedonist t3_yto34q in MachineLearning
I work exclusively in NLP, and since transformers (especially the pretrained kind) took over, I haven't written a neural net (RNN, LSTM, etc.) myself in over 3 years and haven't had to worry about things like the number of layers, hidden size, etc.
Tabular data has XGBoost, etc. NLP has pretrained transformers. Images have pretrained CNNs and transformers.
But I've been through some ML system design books, and recommendation system solutions often feature neural nets built layer by layer, so that's interesting.
What was the problem and type of data at hand when you last wrote a neural net yourself, layer by layer?
Thanks y'all!
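For anyone who hasn't done it in a while, "writing a neural net yourself, layer by layer" might look something like this minimal NumPy sketch (sizes and names here are arbitrary choices for illustration, not from any particular system):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise ReLU activation
    return np.maximum(0, x)

def forward(x, params):
    # Layer 1: affine transform + ReLU
    h = relu(x @ params["W1"] + params["b1"])
    # Layer 2: affine output layer
    return h @ params["W2"] + params["b2"]

# Hand-picked sizes: 8 inputs, 16 hidden units, 2 outputs --
# exactly the kind of knobs pretrained models let you stop worrying about.
params = {
    "W1": rng.normal(0, 0.1, (8, 16)),
    "b1": np.zeros(16),
    "W2": rng.normal(0, 0.1, (16, 2)),
    "b2": np.zeros(2),
}

batch = rng.normal(size=(4, 8))
out = forward(batch, params)
print(out.shape)  # (4, 2)
```

Nothing fancy, but every choice (depth, width, activation) is yours to make and tune.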
entropyvsenergy t1_iw58dge wrote
It's all frameworks now, some better than others. I haven't written one outside of demos or interviews in years. That said, I've modified neural networks a whole bunch. Usually you can just tweak parameters in a config file, but sometimes you want additional outputs or to fundamentally change the model in some way; even then it's usually minor tweaks codewise.
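The split described above might be sketched like this (a hypothetical example, not the commenter's actual code; names like `hidden_size` and `extra_head` are made up): hyperparameters live in a config, while adding an extra output head is a small codewise change:

```python
import numpy as np

rng = np.random.default_rng(0)

# The kind of thing you'd change in a config file, no code edits needed.
config = {"in_dim": 8, "hidden_size": 32, "out_dim": 2}

def build(cfg, extra_head=False):
    params = {
        "W1": rng.normal(0, 0.1, (cfg["in_dim"], cfg["hidden_size"])),
        "W2": rng.normal(0, 0.1, (cfg["hidden_size"], cfg["out_dim"])),
    }
    if extra_head:
        # The "additional outputs" case: a minor codewise tweak
        # that bolts a second head onto the shared backbone.
        params["W_aux"] = rng.normal(0, 0.1, (cfg["hidden_size"], 1))
    return params

def forward(x, params):
    h = np.maximum(0, x @ params["W1"])  # shared hidden representation
    out = h @ params["W2"]
    if "W_aux" in params:
        return out, h @ params["W_aux"]  # main output plus auxiliary output
    return out

x = rng.normal(size=(4, config["in_dim"]))
main, aux = forward(x, build(config, extra_head=True))
print(main.shape, aux.shape)  # (4, 2) (4, 1)
```

Changing `hidden_size` is a one-line config tweak; the extra head is the rarer case where you actually touch the model code.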