
ResourceResearch t1_iqunzdz wrote

Well, at least for ResNet there is a technical reason for its success: skip connections mitigate vanishing gradients, because the identity path contributes a direct term to the gradient under the chain rule of differentiation.
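Roughly, a residual block just adds its input back onto the output of a small stack of layers. A minimal sketch (assuming PyTorch, with made-up layer choices):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = x + F(x): dy/dx = I + dF/dx, so the identity term keeps gradients from vanishing."""
    def __init__(self, dim: int):
        super().__init__()
        # F(x): two linear layers with a nonlinearity in between (illustrative choice)
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: the input bypasses the body and is added back in
        return x + self.body(x)
```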

3

DeepNonseNse t1_iqvgzsk wrote

But then again, that just leads to another question: why are deep(er) architectures better in the first place?

0

Desperate-Whereas50 t1_iqwzlgc wrote

I am not a transformer expert, so maybe this is a stupid question, but is this also true for transformer-based architectures? For example, BERT uses 12/24 transformer blocks. That doesn't sound as deep as, for example, a ResNet-256.

1

ResourceResearch t1_iro8zof wrote

AFAIK it is not clear. In my personal experience, the number of parameters matters more than the number of layers, i.e. a small number of wide layers does the same job as a large number of narrower layers.
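As a rough illustration (with made-up layer sizes), you can compare the parameter budgets of a shallow-and-wide MLP against a deep-and-narrow one:

```python
def mlp_params(width: int, depth: int, d_in: int = 512, d_out: int = 512) -> int:
    """Parameter count of an MLP with `depth` hidden layers of size `width`."""
    dims = [d_in] + [width] * depth + [d_out]
    # Each linear layer contributes weights (a * b) plus biases (b)
    return sum(a * b + b for a, b in zip(dims[:-1], dims[1:]))

# Roughly comparable totals (~15M parameters each), very different depths:
print(mlp_params(width=2048, depth=4))   # few wide layers
print(mlp_params(width=690, depth=32))   # many narrower layers
```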

Consider this paper for empirical insights on large models: https://arxiv.org/pdf/2001.08361.pdf

1