Submitted by billjames1685 t3_youplu in MachineLearning
OptimizedGarbage t1_ivh2jio wrote
Reply to comment by billjames1685 in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
There's a famous psych experiment where kittens raised in an environment containing only horizontal stripes never see vertical lines. After leaving that environment, their brains simply haven't learned to recognize vertical lines, so they'll walk face-first into vertical bars without realizing they're there. There's a massive amount of data that goes into learning the features needed to distinguish objects from each other and to learn the basics of how objects appear in 3D space.
Similarly, if you pretrain a neural net on pretty much any assortment of images, you can get very fast learning after that by fine-tuning on new classes. But the overwhelming majority of the data goes towards "how to interpret images in general", not "how to tell two novel object classes apart".
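To make that concrete, here's a minimal sketch of the pretrain-then-fine-tune idea, assuming PyTorch/torchvision; the model choice, class count, and hyperparameters are illustrative placeholders, not anything the commenter specified. The pretrained backbone carries the "how to interpret images in general" part, and only a tiny new head has to learn to separate the novel classes.

```python
# Minimal sketch: load a backbone pretrained on a large generic image dataset,
# freeze it, and fine-tune only a fresh classification head on new classes.
# Assumes PyTorch + torchvision; values below are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

num_new_classes = 5  # hypothetical number of novel classes

# Backbone pretrained on ImageNet -- this is where the bulk of the data went,
# i.e. learning generic visual features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features; only the new head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the novel classes -- this small
# part is all that "how to tell the new classes apart" has to fit.
backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a (hypothetical) batch from the new classes."""
    backbone.train()
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the small head is being fit, a handful of examples per new class is often enough, which is exactly the "very fast learning after pretraining" effect described above.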
billjames1685 OP t1_ivh2w9b wrote
Wow, that’s fascinating. I think the studies I saw weren’t quite saying what I thought they were, as I explained elsewhere: we get so much data and training just by being in the world and seeing stuff that we’re able to richly classify images even in new domains. But yeah, it seems that pretraining is necessary. Thanks!