
Internal-Diet-514 t1_j6oi9qu wrote

If we’re considering the dimensions to be the number of datapoints in an input, then I’ll stick to that definition and use the shape of the data instead of dimensions. I don’t think I was wrong to use dimensions to describe the shape of the data, but I get that it could be confusing because high-dimensional data is synonymous with a large number of features, whereas I meant high dimensions to be data whose shape has more than 2 axes.

Deep learning or CNNs are great because of their ability to extract meaningful features from data with more than 2 axes and then pass that representation to an MLP. But the feature extraction phase is a different task than what traditional ML is meant to do, which is to take a set of already derived features and learn a decision boundary. So I’m trying to say a traditional ML model is not super comparable to the convolutional portion (the feature extraction phase) of a CNN.
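
Roughly what I mean, as a sketch in PyTorch (layer sizes are made up, just to show the two phases):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # feature extraction phase: convolutions learn spatial features
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # classification phase: an MLP over the extracted representation,
        # which is the part most comparable to a traditional ML model
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 3, H, W)
        return self.classifier(self.features(x))

# e.g. a batch of 8 RGB images, 32x32
logits = SmallCNN()(torch.randn(8, 3, 32, 32))  # -> (8, 10)
```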

−3

JimmyTheCrossEyedDog t1_j6osqa5 wrote

Good call, shape is the much better term to avoid confusion.

> If we’re considering the dimensions to be the number of datapoints

To clarify - not the number of datapoints, but the number of input features. The number of datapoints has nothing to do with the dimensionality (only the shape).
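
For example (toy arrays, just to show the distinction):

```python
import numpy as np

# 1,000 samples, each with 20 features -> dimensionality is 20
X_tabular = np.random.rand(1000, 20)

# 50 samples, each a 64x64 RGB image -> the shape has 4 axes,
# and the dimensionality (features per sample) is 64 * 64 * 3
X_images = np.random.rand(50, 64, 64, 3)

print(X_tabular.shape)  # (1000, 20)
print(X_images.shape)   # (50, 64, 64, 3)
# adding more samples changes the first axis of the shape,
# not the dimensionality of each sample
```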

> Deep learning or CNNs are great because of their ability to extract meaningful features from data with more than 2 axes

This is where I'd disagree (but maybe you have a source that suggests otherwise). Even for time series tabular data, gradient boosted tree models typically outperform NNs.

Overall, shape rarely has anything to do with how a model performs. CNNs are built to take knowledge of the shape of the data into account (restricting kernels to convolutions over spatially close datapoints), but not all NNs do that. If we were using a network with only fully connected layers, for example, then there is no notion of spatial closeness - we might as well have transformed an NxN image into an N^2 x 1 vector, and your network would be the same.

So, neural networks handling inputs that have spatial (or temporal) relationships well has nothing to do with it being a neural network, but with the assumptions we've baked into the architecture (like convolutional layers).
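
To make the fully-connected case concrete, a rough PyTorch sketch (sizes arbitrary):

```python
import torch
import torch.nn as nn

N = 28
image = torch.randn(1, N, N)    # a single NxN "image"

mlp = nn.Sequential(
    nn.Flatten(),               # NxN -> N^2 vector
    nn.Linear(N * N, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# the MLP only ever sees a flat N^2-length vector; nothing in these
# layers encodes which inputs were spatially adjacent, so consistently
# permuting the pixels would give an equivalent model
out = mlp(image)                # -> (1, 10)
```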

3

Internal-Diet-514 t1_j6oujtg wrote

Time series tabular data would have a shape with 3 axes (number of series, number of time points, number of features). For gradient boosted tree models, isn't the general approach to flatten the space to (number of series, number of time points × number of features)? Whereas a CNN would be employed to extract time-dependent features before flattening the space.
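
Something like this, as a rough sketch (using sklearn's gradient boosting, array sizes made up):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

n_series, n_timesteps, n_features = 200, 50, 4
X = np.random.rand(n_series, n_timesteps, n_features)
y = np.random.randint(0, 2, size=n_series)

# gradient boosted trees need a 2-D table, so the time axis gets
# folded into the feature axis: (200, 50, 4) -> (200, 200)
X_flat = X.reshape(n_series, n_timesteps * n_features)
GradientBoostingClassifier().fit(X_flat, y)

# a CNN would instead keep the time axis and convolve along it, e.g.
# torch.nn.Conv1d(in_channels=n_features, out_channels=16, kernel_size=3)
# applied to X transposed to (n_series, n_features, n_timesteps)
```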

If there are examples where boosted tree models perform better in this space, and I think you’re right that there are, then I think that just goes to show that traditional machine learning isn’t dead, but rather that if we could find ways to combine it with the thing that makes deep learning work so well (feature extraction), it’d probably do even better.

−4