Submitted by billjames1685 t3_youplu in MachineLearning
Hey guys, I've been thinking about this question recently. There are tasks that ML-based models outperform humans at, such as some image classification benchmarks and a bunch of games including chess, while humans are better at tons of other things like abstract math.
But which of these tasks can ML models outperform us at given the same amount of data as we have? Take chess, for example: could AlphaZero outperform humans if it had only as many games of pretraining as, say, Magnus Carlsen has had? I'd imagine Stockfish might manage it without pretraining just by virtue of computing so many positions ahead, but I'm not sure AlphaZero could, because its tree search and its policy/value networks might not be that well optimized on so little data.
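To make the "computing so many positions ahead" contrast concrete, here's a minimal negamax lookahead on tic-tac-toe as a stand-in (a real chess engine is out of scope, and all the names below are illustrative, not from Stockfish or AlphaZero). The point is that its strength comes purely from exhaustive search, with no learned component at all, whereas an AlphaZero-style player replaces most of that lookahead with a learned policy/value network guiding a far more selective search.

```python
# Minimal negamax search on tic-tac-toe: strength comes from exhaustive
# lookahead, not from any learned evaluation. (Illustrative sketch only.)

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Return (score, move) from `player`'s perspective; searches the full game tree."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == "."]
    if not moves:                      # board full: draw
        return 0, None
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = negamax(child, opponent)
        score = -score                 # opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    board = "XOXOXO..."                # X to move, with a win available
    print(negamax(board, "X"))         # (1, 6): the winning square, found by pure search
```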
As another example, it's well known that humans are generally pretty great at few-shot learning in, say, image classification; we can distinguish dogs from cats given only a couple of input examples.
IntelArtiGen t1_ivfxox3 wrote
For many tasks you can't really compare, because we're continuously fed multiple types of raw data, while most models train on one specific type of data coming from a single clean dataset.
>we can distinguish, say, dogs from cats given only a couple input examples.
After we've seen billions of images over months and years of life. We had a very large and very long "pretraining" before being able to perform "complex" tasks. So it depends on what you compare: most models need less data, but they train on a cleaner dataset with architectures that are already optimized for that specific task.
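To illustrate the point, here's a minimal nearest-prototype sketch of few-shot classification (an assumed setup, not anyone's specific method). The `embed` function is a placeholder for the heavily pretrained encoder that does most of the work; with raw pixels and only a couple of labeled examples per class, this would rarely be enough.

```python
# Nearest-prototype few-shot classification: the "few shots" only work because
# `embed` stands in for a heavily pretrained feature extractor. Here it just
# flattens the image so the example is self-contained and runnable.
import numpy as np

rng = np.random.default_rng(0)

def embed(image):
    # Placeholder for a pretrained encoder (e.g. a CNN trained on millions of images).
    return image.reshape(-1)

def prototypes(support_images, support_labels):
    """Mean embedding per class, computed from a handful of labeled examples."""
    protos = {}
    for label in set(support_labels):
        feats = [embed(img) for img, l in zip(support_images, support_labels) if l == label]
        protos[label] = np.mean(feats, axis=0)
    return protos

def classify(image, protos):
    """Assign the class whose prototype is closest in embedding space."""
    feat = embed(image)
    return min(protos, key=lambda label: np.linalg.norm(feat - protos[label]))

# Toy usage: two "classes" of 8x8 noise images, two examples each.
cats = [rng.normal(0.0, 1.0, (8, 8)) for _ in range(2)]
dogs = [rng.normal(3.0, 1.0, (8, 8)) for _ in range(2)]
protos = prototypes(cats + dogs, ["cat", "cat", "dog", "dog"])
query = rng.normal(3.0, 1.0, (8, 8))
print(classify(query, protos))  # "dog": the query was drawn from the "dog" distribution
```

The few labeled examples only pin down class prototypes; all the heavy lifting has to come from whatever produced the embedding space in the first place, which is exactly the long "pretraining" humans also get for free.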