ThePhantomPhoton t1_iyj7b4a wrote

Depends on the problem. For physical phenomena, statistical techniques are very effective. For more abstract applications, like language and vision, I just don’t know how the purely statistical methods could compete.

22

TotallyNotGunnar t1_iyjpbzs wrote

Even then. I dabble in image processing at work and haven't found a need for deep learning yet. Every time, there's some trick I can pull with a rule-based classifier to address the business need. It's like Duck Hunt: why recognize ducks when you can scan for white vs. black pixels?

20
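The pixel-scanning trick above could be sketched roughly as follows. Everything here is invented for illustration (the thresholds, tile size, and toy frame are made-up parameters, not anything from the original game):

```python
import numpy as np

def find_dark_blobs(frame, dark_thresh=60, min_dark_pixels=50, cell=16):
    """Scan a grayscale frame in cell x cell tiles; return the (y, x)
    corner of each tile whose count of dark pixels exceeds the minimum,
    i.e. a purely rule-based 'duck against bright sky' detector."""
    h, w = frame.shape
    hits = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            tile = frame[y:y + cell, x:x + cell]
            if np.sum(tile < dark_thresh) >= min_dark_pixels:
                hits.append((y, x))
    return hits

# Toy frame: bright "sky" (value 200) with one dark 16x16 "duck" blob.
frame = np.full((64, 64), 200, dtype=np.uint8)
frame[16:32, 32:48] = 10
print(find_dark_blobs(frame))  # → [(16, 32)]
```

No training data, no model: just thresholds tuned to the problem, which is often enough when the imaging conditions are controlled.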

ThePhantomPhoton t1_iyk2c66 wrote

Upvoted because I agree with you-- for many simple image problems you can even just grayscale the images and use the Frobenius-norm distance from each class's mean image as input to a logistic regression, and nail many of the cases.

8
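A minimal sketch of that idea, with everything invented for illustration (the toy classes, image sizes, and intensity values are made-up; `np.linalg.norm` on a 2-D array defaults to the Frobenius norm). The per-class distances computed here are the features one would feed to a logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "classes" of 8x8 grayscale images: bright vs. dark patches.
class_a = rng.normal(loc=200, scale=10, size=(50, 8, 8))
class_b = rng.normal(loc=50, scale=10, size=(50, 8, 8))

# Mean image per class.
mean_a = class_a.mean(axis=0)
mean_b = class_b.mean(axis=0)

def frob_features(img):
    """Frobenius-norm distance from each class's mean image --
    a 2-dimensional feature vector for a downstream classifier."""
    return np.array([np.linalg.norm(img - mean_a),
                     np.linalg.norm(img - mean_b)])

test_img = rng.normal(loc=195, scale=10, size=(8, 8))  # resembles class A
d = frob_features(test_img)
print("predicted class:", "A" if d[0] < d[1] else "B")  # → A
```

Even the bare nearest-mean decision (smallest distance wins) handles well-separated classes; the logistic regression just learns a weighting over those distances.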

TrueBirch t1_iymfk2r wrote

When I first read your comment, I thought you were still talking about Duck Hunt. I'd read the heck out of that whitepaper.

2

ragamufin t1_iyl7qwv wrote

Amen. We’ve been doing satellite-image time-series analytics, and deep learning keeps getting pushed off in favor of classification models based on complex features.

4

bushrod t1_iyjxns1 wrote

The analysis relates to time series prediction problems. Isn't it fair to say vision and language do not fall under that umbrella?

17

mtocrat t1_iyk1n65 wrote

Consider spoken language, and you're back in the realm of time-series. Obviously simple statistical methods can't deal with those though.

13

bushrod t1_iyk33jc wrote

Right, even though language is a form of time series, in practice it doesn't use TSP methods. Transformers are, not surprisingly, being applied to TSP problems though.

6

Warhouse512 t1_iykw25k wrote

Eh, consider predicting where pedestrians are going, or predicting next frames in general. Even images have temporal forecasting use cases.

3

ThePhantomPhoton t1_iyk2wnq wrote

I think you have a good argument for images, but language is more challenging because we rely on positional encodings (a kind of "time") to provide contextual clues, which beat out statistical language models of the form Pr{x_{t+1} | x_0, x_1, ..., x_t} (Edit-- that is, predicting the next word given all preceding words in the sequence).

2
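For concreteness, here is a toy sketch of that kind of statistical language model, truncated to a bigram approximation Pr{x_{t+1} | x_t} (conditioning on only the previous word rather than the whole history). The corpus is invented for illustration:

```python
from collections import Counter, defaultdict

corpus = "the duck flew over the pond and the duck landed".split()

# Count word-to-word transitions to estimate Pr(next | current).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Argmax of the empirical conditional distribution over next words."""
    counts = transitions[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → duck
```

The limitation the comment points at shows up immediately: a count-based model like this cannot condition on arbitrarily long context without the counts becoming hopelessly sparse, which is where position-aware neural models pull ahead.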

eeaxoe t1_iyn2zwu wrote

Tabular data is another problem setting where DL has a tough time stacking up to simpler statistical or even ML methods.

2