GraciousReformer OP t1_j9lwe7i wrote

Still, why is it that DL has better inductive biases than other methods?

0

activatedgeek t1_j9lz6ib wrote

I literally gave an example of how (C)NNs have better inductive bias than random forests for images.

You need to ask more precise questions than just a "why".

3

GraciousReformer OP t1_j9m1o15 wrote

So it is like an ability to capture "correlations" that random forests cannot.

1

currentscurrents t1_j9n8in9 wrote

In theory, either structure can express any solution. But in practice, every structure is better suited to some kinds of data than others.

A decision tree is a bunch of nested if statements. Imagine the complexity required to write an if statement that decides whether an array of pixels is a horse or a dog. You can technically do it by building a tree with an optimizer, but it doesn't work very well.
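To make the "nested if statements" point concrete, here is a toy hand-written "tree" over raw pixels. The thresholds, pixel indices, and 2x2 "images" are all invented for illustration; a real learned tree would just be a much deeper version of the same thing.

```python
# Toy sketch: a decision tree over raw pixels is nested if-statements
# on individual pixel intensities. Thresholds and inputs are made up
# purely for illustration.

def tree_classify(pixels):
    """Classify a flattened 2x2 grayscale image (values 0-255)."""
    # Each split inspects one pixel in isolation; the tree has no
    # notion that pixels[0] and pixels[1] are spatial neighbours.
    if pixels[0] > 128:
        if pixels[3] > 64:
            return "dog"
        return "horse"
    if pixels[2] > 200:
        return "horse"
    return "dog"

bright = [200, 10, 10, 90]   # takes the pixels[0] > 128 branch
dark   = [50, 10, 210, 90]   # falls through to the pixels[2] > 200 test
print(tree_classify(bright))  # -> dog
print(tree_classify(dark))    # -> horse
```

Note that a shifted copy of the same image hits entirely different branches, which is part of why trees on raw pixels generalize poorly.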

On the other hand, a CNN runs a bunch of learned convolutional filters over the image. This means it doesn't have to learn the 2D structure of images, or the fact that pixels tend to be related to nearby pixels; it's already working on a 2D plane. A tree doesn't know that adjacent pixels are likely related, and would have to learn it.
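A minimal pure-Python convolution (really cross-correlation, as in most DL frameworks) shows how locality is built into the operation itself: each output value depends only on a small patch of adjacent pixels. The image and filter values below are invented for illustration.

```python
# Minimal 2D convolution in pure Python. The point is that each
# output cell only ever sees a small neighbourhood of adjacent
# pixels -- the 2D locality bias is baked into the operation.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Output (i, j) depends only on the kh x kw patch at (i, j).
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A vertical-edge filter responds where left and right neighbours differ.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [[-1, 1]]
print(conv2d(image, edge_filter))  # -> [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The filter fires exactly at the dark-to-bright boundary in every row; a tree would have to rediscover this neighbour relationship from scratch for every pixel position.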

A CNN also has a bias towards hierarchy. As the layers stack upwards, each layer builds higher-level representations, going from pixels > edges > features > objects. Objects tend to be made of smaller features, so this is a good bias for working with images.
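The pixels > edges > features hierarchy can be sketched in a couple of toy "layers": the first turns pixels into edge maps, and the second combines those edge maps, not raw pixels, into a corner detector. Everything here (shapes, the exact detectors, the example image) is invented for the sketch.

```python
# Toy sketch of hierarchical feature building: layer 1 maps pixels to
# edges, layer 2 maps edges to a higher-level feature (a corner).

def vertical_edges(img):
    # Layer 1a: fires where a pixel differs from its right neighbour.
    return [[1 if img[i][j] != img[i][j + 1] else 0
             for j in range(len(img[0]) - 1)]
            for i in range(len(img))]

def horizontal_edges(img):
    # Layer 1b: fires where a pixel differs from the pixel below it.
    return [[1 if img[i][j] != img[i + 1][j] else 0
             for j in range(len(img[0]))]
            for i in range(len(img) - 1)]

def corners(img):
    # Layer 2: a corner is where a vertical and a horizontal edge
    # meet -- computed from layer 1's outputs, not from raw pixels.
    v, h = vertical_edges(img), horizontal_edges(img)
    return [[1 if v[i][j] and h[i][j] else 0
             for j in range(len(img[0]) - 1)]
            for i in range(len(img) - 1)]

# A 3x3 image containing one bright square in the top-left.
img = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(corners(img))  # -> [[0, 0], [0, 1]]: fires at the square's corner
```

Each layer only reasons about the layer below it, which mirrors how stacked convolutional layers compose small local patterns into larger ones.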

1

GraciousReformer OP t1_j9oeeo3 wrote

What are the situations where the bias towards hierarchy is not helpful?

1