
yunguta t1_j1rm1lp wrote

I agree with you; another benefit I might add is scalability to very large datasets.

To give an example, the team I work on processes point cloud data, which easily runs into the millions or billions of points for a single dataset. Random forests are popular for per-point classification of point clouds, but for large real-world datasets (think large geographic extents) they need distributed computing. With a simple MLP, by contrast, you can train the model in mini-batches on a GPU (see the sketch below), and multi-GPU is the natural next step for scaling. Inference is blazing fast too.
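Roughly the kind of loop I mean, as a minimal PyTorch sketch; the feature count, class count, and random tensors are placeholders for a real point cloud pipeline, which would stream batches from disk rather than hold everything in memory:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for real per-point features (e.g., xyz + intensity + return number).
n_points, n_features, n_classes = 1_000_000, 8, 5
X = torch.randn(n_points, n_features)
y = torch.randint(0, n_classes, (n_points,))

# Mini-batching is what lets this scale: only one batch lives on the GPU at a time.
loader = DataLoader(TensorDataset(X, y), batch_size=65536, shuffle=True)

model = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_classes),
).to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```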

I personally see DL models as a flexible, modular way to build models, with the benefit that you can improve a model through deeper networks, different activation functions, and new network modules. If you need to go simple, just use fewer layers :-) (see the builder sketch below).
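To illustrate that modularity, here is a toy builder (make_mlp is a hypothetical helper, not anything from a real library) where depth and activation are just parameters:

```python
from torch import nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 128,
             depth: int = 2, act: type[nn.Module] = nn.ReLU) -> nn.Sequential:
    """Build an MLP with a configurable number of hidden layers and activation."""
    layers: list[nn.Module] = []
    dim = in_dim
    for _ in range(depth):
        layers += [nn.Linear(dim, hidden), act()]
        dim = hidden
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

# "If you need to go simple, just use fewer layers":
shallow = make_mlp(8, 5, depth=1)
deep    = make_mlp(8, 5, depth=6, act=nn.GELU)
```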

As others have mentioned, use the tool that fits the problem. But a neural network does have the advantages you mentioned and should also be considered.
