Submitted by sigul77 t3_10gxx47 in Futurology
EverythingGoodWas t1_j57zs70 wrote
Reply to comment by Surur in How close are we to singularity? Data from MT says very close! by sigul77
No it doesn’t. We see this displayed all the time in computer vision. A YOLO model, or any other CV model, doesn’t understand what a dog is; it just knows what dogs look like based on the billions of images it has seen of them. If some new and different breed of dog suddenly appeared, people would recognize it as a dog; a CV model would not.
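To put that in code terms (a toy sketch with made-up labels and untrained weights, not a real system): a closed-set classifier can only map pixels onto the fixed set of labels it was trained with, so there is no path for it to say "this looks like a new kind of dog."

```python
# Toy sketch (made-up labels, untrained weights) of the point above: a closed-set
# classifier maps pixels to one of the labels it was trained on, nothing more.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

LABELS = ["dog", "cat", "bird"]  # the only concepts the model "knows"

model = models.resnet18(num_classes=len(LABELS))  # pretend this was trained on those labels
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def classify(path: str) -> str:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    # Whatever the picture shows, the answer is forced into one of LABELS.
    # A breed that looks nothing like the training dogs may well score higher
    # on "cat" than "dog", and the model has no way to say "new kind of dog".
    return LABELS[int(probs.argmax())]
```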
PublicFurryAccount t1_j58ye2i wrote
This is a pretty common conflation, honestly.
I think people assume that, because computers struggled with it once, there's some deeper difficulty to language. There isn't. We've known since the 1950s that language has a pretty low entropy. So it shouldn't surprise people that text prediction is actually really, really good, and that the real barriers are ingesting the data and traversing it efficiently.
ETA: arguing with people about this on Reddit does make me want to bring back my NPC Theory of AI. After all, it's possible that a Markov chain really does have a human-level understanding because the horrifying truth is that the people around you are mostly just text prediction algorithms with no real internal reality, too.
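For what it's worth, the "text prediction" here doesn't need to be anything fancy. A toy bigram Markov chain (sketch below, tiny made-up corpus) already captures the idea that low-entropy text is easy to predict:

```python
# Toy sketch of Markov-chain text prediction: count word bigrams in a corpus,
# then always predict the most likely next word.
import random
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table: dict, word: str) -> str:
    if word not in table:
        return random.choice(list(table))  # fall back to a random known word
    # Because natural language has fairly low entropy, this single lookup is
    # right surprisingly often on real text.
    return table[word].most_common(1)[0][0]

corpus = "the dog chased the cat and the dog chased the squirrel"
table = train_bigrams(corpus)
print(predict_next(table, "the"))  # -> "dog", the most frequent follower of "the"
```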
JoshuaZ1 t1_j5bryem wrote
I agree with your central point, but I'm not sure about this part:
> If some new and different breed of dog suddenly appeared, people would recognize it as a dog; a CV model would not.
I'd be interested in testing this. It might be interesting to train a model for dog recognition on some very large dataset, deliberately leave one or two breeds out, and then see how well it does on the held-out breeds.
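Something like this is what I have in mind (rough sketch only; the paths, folder layout, and dataset are hypothetical): train a dog/not-dog classifier with a couple of breeds removed from the training data, then measure how often images of those breeds still get labeled "dog".

```python
# Very rough sketch of the experiment (hypothetical paths and dataset): train a
# binary dog/not-dog classifier with one or two breeds entirely absent from
# data/train, then check how often those held-out breeds still get called "dog".
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# data/train/{dog,not_dog}/... -- the dog folder omits the held-out breeds
train_set = datasets.ImageFolder("data/train", transform=tfm)
# data/held_out/... -- images of only the breeds the model never saw
test_set = datasets.ImageFolder("data/held_out", transform=tfm)

DOG = train_set.class_to_idx["dog"]  # index ImageFolder assigned to the "dog" folder
model = models.resnet18(weights=None, num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in DataLoader(train_set, batch_size=32, shuffle=True):  # one pass, just a sketch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# The number that answers the question: are never-seen breeds still "dog"?
model.eval()
hits = total = 0
with torch.no_grad():
    for x, _ in DataLoader(test_set, batch_size=32):
        hits += (model(x).argmax(dim=1) == DOG).sum().item()
        total += x.size(0)
print(f"held-out breeds classified as dog: {hits}/{total}")
```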
Surur t1_j598x1z wrote
You are kind of ignoring the premise: that to get perfect results, the system needs a perfect understanding.
If the system failed in the way you describe, it would not have a perfect understanding.
You know, much like you failed to understand the argument because you assumed it was the same old one.