Submitted by Such_Share8197 t3_10wnex1 in deeplearning
levand t1_j7o5zeb wrote
This is inherently a super hard problem, because (to oversimplify) the loss function of any image-generating NN is to minimize the difference between human-generated and AI-generated images. So the state of the art for detection & generation is always going to be pretty close.
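To make that concrete, the classic GAN objective (Goodfellow et al., 2014) states this tension explicitly; diffusion models optimize a different loss, but the same detection-vs-generation tug of war applies:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

The generator $G$ is scored precisely on how indistinguishable its samples are to the discriminator $D$, so any external detector is up against a model that was optimized against exactly that test.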
Such_Share8197 OP t1_j7o6crv wrote
Oh I see, thanks for the reply!
thelibrarian101 t1_j7p27ns wrote
To add to this, OpenAI itself is pretty mediocre at detecting AI-generated text: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/
enterthesun t1_j7uqcx7 wrote
You’re correct that the state of the art will be close, but that doesn’t mean a detector can’t be trained and evaluated on generated data. It’s like using synthetic data.
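A minimal sketch of what that could look like, fine-tuning a small classifier on real vs. generated images (the directory layout, backbone, and hyperparameters here are illustrative assumptions, not a recipe):

```python
# Sketch: train a "was this image generated?" detector on synthetic data.
# Assumes an (illustrative) layout data/train/{real,generated}/*.png so
# ImageFolder assigns the two class labels automatically.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone as a binary real/generated classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The catch, per the parent comment: whatever generator produced your training set defines the artifacts your detector learns, so it can lag behind newer generators.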
nutpeabutter t1_j7os2xj wrote
Just because it can imitate doesn't mean it can do so perfectly.
DMLearn t1_j7pq8wc wrote
The model is trained by being rewarded for fooling a model that tries to distinguish between real and fake images. So no, it won’t be perfect, but it’s going to be good enough to trick a model the vast majority of the time, because that is literally part of the training. Not just a small part, either; it’s the central tenet of the training and optimization of generative models, generative ADVERSARIAL networks.
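A toy sketch of that loop, just to show where the "reward for fooling" lives (the tiny MLPs and the random stand-in for real data are placeholders):

```python
# Toy GAN training loop: the generator's loss goes down exactly when
# the discriminator mislabels its samples as real.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 784, 32  # e.g. flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))  # outputs a realness logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(10_000):
    real = torch.randn(batch, img_dim)  # placeholder for a real data batch
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: push real -> 1, fake -> 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: rewarded when D scores its fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```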
nutpeabutter t1_j7rxvb8 wrote
Your argument falls apart when you realize that generators leave training artifacts, characteristic fingerprints a detector can pick up even when the images look convincing to humans. Ever wonder why FID scales inversely with model size?
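(For anyone unfamiliar: FID, Fréchet Inception Distance, compares feature statistics of generated vs. real images; lower is better, and larger generators tend to reach lower FID, i.e. leave fewer artifacts.) A quick sketch of measuring it, assuming the torchmetrics package (`pip install torchmetrics[image]`); the random uint8 tensors stand in for actual image batches:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)  # small feature size for speed

# Two slightly different intensity distributions as stand-ins for
# real and generated image batches (uint8, NCHW).
real_imgs = torch.randint(0, 200, (100, 3, 299, 299), dtype=torch.uint8)
fake_imgs = torch.randint(100, 255, (100, 3, 299, 299), dtype=torch.uint8)

fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
print(fid.compute())  # nonzero FID = statistically detectable gap
```

A stable, nonzero FID between a generator's output and real data is exactly the kind of statistical residue a detector can exploit.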