clavalle t1_j35eqzp wrote

Makes sense.

An interesting question related to OP's: could there be an ML solution that humans /can't/ understand?

Not /don't/ understand...I mean a solution that, even given enough time and study, both outperforms humans and is relatively easy to verify, yet whose underlying model we cannot understand at all.

My current belief is that no model is truly beyond human reasoning. But I've seen some results that make me wonder.


clavalle t1_j2xk9dl wrote

Yes, ML can outperform humans in certain tasks.

  1. Quantity can sometimes make a very big difference. If you could sit down and train a human on the same amount of data, the human might be on par with the ML model...but that's often not possible.

  2. Training data is not always generated by humans.

  3. Given the same data, ML can find connections or perspectives that humans have not considered.