
hgoel0974 t1_ivi38ul wrote

On top of what others have said, one additional aspect of pre-learned experience that we're only starting to explore in ML is that some architectures seem more predisposed to certain tasks than others.

For instance, "What's Hidden in a Randomly Weighted Neural Network?" shows that sufficiently large, untrained networks contain subnetworks that achieve decent accuracy on classification tasks, in some cases even when all the weights are set to a single constant value.
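A toy sketch of the idea (not the paper's actual method, which learns per-weight scores with an "edge-popup" algorithm): fix a small random network, never update its weights, and instead search over binary masks that keep or drop individual weights. The data, network size, and random mask search below are all my own illustrative choices.

```python
import random

random.seed(0)

# Toy binary classification: label 1 when x0 > x1 (linearly separable).
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1 if x0 > x1 else 0 for x0, x1 in X]

# A tiny fixed random network: one hidden ReLU layer, weights never trained.
H = 8
W1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(H)]
W2 = [random.gauss(0, 1) for _ in range(H)]

def predict(x, m1, m2):
    # Forward pass using only the weights the masks keep.
    h = [max(0.0, sum(W1[j][i] * m1[j][i] * x[i] for i in range(2)))
         for j in range(H)]
    s = sum(W2[j] * m2[j] * h[j] for j in range(H))
    return 1 if s > 0 else 0

def accuracy(m1, m2):
    return sum(predict(x, m1, m2) == t for x, t in zip(X, y)) / len(X)

# Random search over masks: no weight is ever updated, we only
# choose which of the frozen random weights to keep.
best_acc, best = 0.0, None
for _ in range(2000):
    m1 = [[random.randint(0, 1) for _ in range(2)] for _ in range(H)]
    m2 = [random.randint(0, 1) for _ in range(H)]
    acc = accuracy(m1, m2)
    if acc > best_acc:
        best_acc, best = acc, (m1, m2)

print(f"best subnetwork accuracy: {best_acc:.2f}")
```

Even this crude search usually finds a subnetwork well above chance, which is the core observation: the random weights already contain usable structure, and "training" can be recast as selecting it.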

Evolution has had much more time to refine such strategies than ML models have had.
