
BrotherAmazing t1_ir3dmwz wrote

Nearly every data-driven approach to regression and purely discriminative classification has this problem, and it’s a problem of trying to extrapolate far outside the domain you trained/fit the model on. It’s not about anything else.

Your generated images clearly look nothing like CIFAR-10 training images, so it’s not much different than if I fit two Gaussians to 2-D Gaussian data using samples that all lie within a circle of radius 1, then send my classifier a 2-D feature measurement that is a distance of 100 from the origin. Any discriminative classifier that doesn’t have a way to detect outliers/anomalies will likely be extremely confident in classifying this 2-D feature as one of the two classes. We wouldn’t say the classifier has a problem with not considering “feature quality”; we’d say it’s not very sophisticated.
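To make the toy example concrete, here’s a minimal sketch (my own illustration, not from the thread): fit one Gaussian per class to samples near the origin, classify by posterior under equal priors, and then query a point at distance 100. The specific means, scales, and seed are arbitrary choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Training data: two Gaussian classes, all samples well inside radius ~1.
class_a = rng.normal(loc=[-0.3, 0.0], scale=0.2, size=(500, 2))
class_b = rng.normal(loc=[0.3, 0.0], scale=0.2, size=(500, 2))

# Fit one Gaussian per class (sample mean and covariance).
dist_a = multivariate_normal(class_a.mean(axis=0), np.cov(class_a.T))
dist_b = multivariate_normal(class_b.mean(axis=0), np.cov(class_b.T))

def posterior_a(x):
    """P(class A | x) under equal priors -- a purely discriminative rule."""
    la, lb = dist_a.logpdf(x), dist_b.logpdf(x)
    m = max(la, lb)  # log-sum-exp trick so the pdfs don't underflow far away
    return np.exp(la - m) / (np.exp(la - m) + np.exp(lb - m))

# A point at distance 100 from the origin: nothing like the training data,
# yet the posterior is pushed (near-)maximally toward one class.
far_point = np.array([100.0, 0.0])
print(posterior_a(far_point))
```

The posterior saturates toward class B simply because the far point is marginally closer to B’s mean; confidence here says nothing about whether the input resembles the training data at all.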

In real-world critical applications, CNNs aren’t just fed images like this. Smart engineers have ways to detect when an image is likely not in the training distribution and throw a flag so the CNN’s output isn’t trusted.
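One simple version of such a flag (a sketch of my own, not a claim about any particular system): fit a Gaussian to the training features and reject any input whose log-likelihood falls below everything seen during training. Real systems use far more sophisticated out-of-distribution detectors; the threshold choice here is an arbitrary illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Stand-in for the training feature distribution (e.g. CNN embeddings).
train = rng.normal(loc=0.0, scale=0.2, size=(1000, 2))

dist = multivariate_normal(train.mean(axis=0), np.cov(train.T))
threshold = dist.logpdf(train).min()  # least likely training sample

def in_distribution(x):
    """True if x is at least as likely as the least likely training point."""
    return dist.logpdf(x) >= threshold

print(in_distribution([0.0, 0.1]))    # near the training data
print(in_distribution([100.0, 0.0]))  # far outside: flag it, don't trust the model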
