RezaRob t1_isdkb67 wrote

Speaking only in general terms here: in ML we often don't know exactly why things work theoretically. Even for something like convolutional neural networks, I'm not sure we have a complete understanding of "why" they work, or what happens internally. There have certainly been papers that called into question our assumptions about how these models work. Adversarial images are a good example of behavior we wouldn't have expected: perturbations too small for a human to notice can flip a network's prediction entirely. So in ML, sometimes the method/algorithm, and whether it works, matter more than an exact theoretical understanding of what's happening internally. You can't argue with superhuman AlphaGo performance.
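To make the adversarial-image point concrete, here's a minimal sketch of the fast gradient sign method in PyTorch. The `model`, `x`, `y`, and `epsilon` names are hypothetical placeholders, not anything from this thread; this is just one common way such perturbations are generated.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Return an adversarial copy of image tensor x for a trained classifier.

    Assumes `model` is a PyTorch classifier, `x` is an input image in [0, 1],
    `y` is its true label, and `epsilon` is the perturbation budget.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with an epsilon small enough that the perturbed image looks identical to the original, the model's prediction often changes, which is exactly the kind of behavior the theory didn't lead anyone to expect.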