
Oceanboi OP t1_ixlzu9h wrote

Maybe not always, but couldn't you argue that well-trained weights for one task may not carry over well to another?

1

asdfzzz2 t1_ixm6knh wrote

Is there any reason for them to be worse than random weights? Because if not, then you have no reason not to use pretrained models.

3

hadaev t1_ixmajnt wrote

To add to that, non-random weights might be worse for tiny/simple models.

But modern vision models should be fine with it.

For example, BERT's text weights are a good starting point for image classification.

2
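
The pattern the thread is describing (start from pretrained weights, swap in a randomly initialized head for the new task, and fine-tune) can be sketched in PyTorch. This is a toy illustration, not anyone's actual setup: the layer shapes are made up, the "pretrained" backbone is just a stand-in for weights trained on a source task, and freezing the backbone is one common choice, not a requirement:

```python
import torch
import torch.nn as nn

# Toy "pretrained" backbone: in practice these weights would come from a
# model trained on a large source task (shapes here are illustrative).
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

# New task-specific head: only this part starts from random weights.
head = nn.Linear(64, 10)
model = nn.Sequential(backbone, head)

# Freeze the pretrained backbone so early fine-tuning can't destroy it.
for p in backbone.parameters():
    p.requires_grad = False

# Only the head's parameters are given to the optimizer.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# One illustrative training step on dummy data.
x = torch.randn(8, 32)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()

# Frozen backbone parameters receive no gradients; the head's do.
print(all(p.grad is None for p in backbone.parameters()))      # True
print(all(p.grad is not None for p in head.parameters()))      # True
```

A common refinement is to later unfreeze the backbone and continue with a much smaller learning rate, which is usually how "worse than random" starting points are avoided on dissimilar tasks.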