Submitted by whitecastle92 t3_10qgh5x in technology
Comments
9-11GaveMe5G t1_j6qk8ne wrote
> 26% success rate
If all it's doing is saying "yes, this is AI generated" or "no, it's not," this is considerably worse than chance.
RoboNyaa t1_j6qoeym wrote
It'll never be 100%, because the whole point of ChatGPT is to write like a human.
Likewise, anything less than 100% means it cannot be used to validate academic integrity.
manowtf t1_j6txasq wrote
They could just store the answers they generate and then, if you want to check, allow a comparison of how close what you submit is to what they generated.
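A minimal sketch of what that comparison might look like, assuming the provider kept a store of its generated outputs; the store, function names, and threshold here are all hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical store of previously generated outputs; in practice this
# would be a database or search index, not an in-memory list.
generated_history = [
    "The French Revolution began in 1789 when ...",
    "Photosynthesis is the process by which plants ...",
]

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def closest_match(submission: str, threshold: float = 0.8):
    """Return the stored generation most similar to a submission, if any clears the threshold."""
    best = max(generated_history, key=lambda g: similarity(submission, g))
    score = similarity(submission, best)
    return (best, score) if score >= threshold else (None, score)

match, score = closest_match("The French revolution started in 1789 when ...")
print(f"closest stored generation: score {score:.2f}, flagged: {match is not None}")
```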
kevindamm t1_j6qmixr wrote
There are four buckets (of unequal size), but I don't know whether success was measured by landing in the "correct" bucket, by landing in the highest p(AI-gen) bucket counting as a true positive, or by both extreme top and bottom buckets. I only read the journalistic article and not the original research, so I don't know. The 1,000-character minimum worries me more; there's quite a lot of text shorter than that (like this comment).
9-11GaveMe5G t1_j6qy4i3 wrote
I understand that the bigger the sample the better, but is it possibly limited to longer text so that its usefulness is kept academic? You mentioned your own comment, and I think being able to go to a platform like Reddit and offer a report on comment genuineness would be valuable.
PhoneAcc2 t1_j6qvw8j wrote
The article misrepresents this.
They use five categories, from very unlikely AI-generated to very likely.
The 26% refers to the share of the AI-generated portion of their validation set that was labeled very likely AI-generated.
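In other words, the classifier outputs a score and the five labels are just thresholds over it. A rough illustration of how such bucketing could work; the label names and cut-off values below are invented for the example, not OpenAI's actual thresholds:

```python
def bucket(p_ai: float) -> str:
    """Map a classifier score to one of five coarse labels.
    Thresholds are purely illustrative."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if it is AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "very likely AI-generated"

# The reported 26% is the share of AI-written validation text that lands
# in the top bucket, not a single binary accuracy figure.
print(bucket(0.99))  # very likely AI-generated
```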
IKetoth t1_j6rc6i2 wrote
Which is a 26% success rate, so how is that being misrepresented? The fact that it's "somewhat confident" on other samples means nothing. If this were to be used for validating writing anywhere, like writing competitions or academia, you'd want that "very confident" number to be in the high 90s, or at the very least the false positive rate to be incredibly low.
PhoneAcc2 t1_j6recky wrote
The article suggests there is a single "success" metric in OpenAI's publication, which there is not, and deliberately so.
Labeling text as AI-generated will always be fuzzy (active watermarking aside) and will only get harder as models improve and get bigger. There is simply an area where human-written and AI-written text overlap.
Have a look at the FAQ on their page if you're interested in the details: https://platform.openai.com/ai-text-classifier
IKetoth t1_j6rfyq3 wrote
No need; I don't see a point to this. I expect that, given 3-5 years of adversarial training left unregulated, the two will be completely impossible to tell apart to any degree where there'd be a point in trying. We need to adapt to the fact that AI writing is poised to replace human writing in anything not requiring logical reasoning.
Edit: I'd add that we need to start thinking, as a species, about the fact that we've reached the point where human labour need not apply. There are now automatic ways to do nearly everything; the only things stopping us are the will to use them and resources being concentrated rather than distributed. Assuming plentiful resources, nearly everything CAN be done without human intervention.
Gathorall t1_j6w9xpb wrote
This comes down to the concepts of sensitivity and specificity.
The specificity of this test would have to be very near 100%, especially when, as you said, most people aren't cheats; being just a few percentage points off on specificity can mean that a large share of the people "caught" are actually innocent.
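A quick back-of-the-envelope illustration of that base-rate effect; the prevalence and error rates below are made-up numbers, not figures from OpenAI:

```python
# Hypothetical pool of 1,000 submissions: 5% actually AI-written,
# detector sensitivity 26% (catches about 1 in 4 AI texts),
# false positive rate 1% on human-written text.
total = 1000
prevalence = 0.05      # fraction of submissions that are actually AI-written
sensitivity = 0.26     # P(flagged | AI-written)
false_pos_rate = 0.01  # P(flagged | human-written)

ai_texts = total * prevalence
human_texts = total - ai_texts

true_positives = ai_texts * sensitivity          # ~13 cheaters caught
false_positives = human_texts * false_pos_rate   # ~9.5 honest students flagged

flagged = true_positives + false_positives
print(f"Share of flagged students who are innocent: {false_positives / flagged:.0%}")
# Roughly 42% of accusations would hit honest students under these assumptions.
```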
hawkeye224 t1_j6rjhzz wrote
Does anyone know the success rate of GPTZero, the detector that a student wrote?
I found this article (https://futurism.com/gptzero-accuracy) and it seems to indicate around 80%, but on a very small sample.
EmbarrassedHelp t1_j6q95dp wrote
They can use their detector to train a better model that is able to defeat the detector. They are also likely doing this so that they can try to minimize the amount of AI content in future training datasets.
alepher t1_j6qk68z wrote
Then train a better detector to defeat the new model, a cycle of improving technology that we rely on to distinguish between humans and AIs. How the Turing tables
christes t1_j6qzfmr wrote
This is called a generative adversarial network, I believe.
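A toy illustration of that adversarial cycle (the detector fits, then the generator adapts to evade it) using 1-D numbers instead of text; everything below is invented for the example:

```python
import random

# Toy "adversarial" loop on 1-D data: humans produce numbers near 0,
# the generator produces numbers near some offset, and the detector is
# just a midpoint threshold. Each round the detector re-fits and the
# generator then shifts toward the human distribution to evade it.

def human_sample(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def generator_sample(n, offset):
    return [random.gauss(offset, 1.0) for _ in range(n)]

offset = 5.0  # the generator starts out obviously different from humans
for round_ in range(5):
    humans = human_sample(1000)
    fakes = generator_sample(1000, offset)

    # "Train" the detector: a threshold halfway between the two means.
    threshold = (sum(humans) / len(humans) + sum(fakes) / len(fakes)) / 2
    caught = sum(1 for x in fakes if x > threshold) / len(fakes)
    print(f"round {round_}: offset {offset:.2f}, detector catches {caught:.0%} of fakes")

    # "Train" the generator: move halfway toward the human distribution.
    offset *= 0.5
# The catch rate drifts toward 50% (a coin flip) as the distributions overlap.
```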
Zettomer t1_j6r7bin wrote
This is how you get skynet.
Vulcan_MasterRace t1_j6pz80d wrote
Good call by Sam Altman.... Even better execution in bringing it to the market so fast
Prestigious_Sort4979 t1_j6rjva2 wrote
This is genius: they introduced a problem, then created a tool to handle the problem. Icons.
Essenji t1_j6so1d1 wrote
Like manufacturing a disease and then selling the medicine. Imagine they double dip, selling the ChatGPT subscription to students for $42 and then selling the adversarial detector to universities for $100k licenses.
wazzel2u t1_j6qr6xx wrote
At 26%, flipping a coin for a yes-no decision is a better predictor.
WTFRhino t1_j6rj0qk wrote
Per the article, the 26% is the chance an AI-written piece is labeled "very likely AI". So they can catch over 1 in 4 pieces generated by AI. The majority of AI writing doesn't get caught, but this also means the vast majority (>99%) of non-AI work doesn't get labeled as AI.
In the context of academic work, universities are at very little risk of accusing a non-cheater of cheating. The 1-in-4 catch rate, while low, is a huge deterrent for potential cheaters. If I knew I had a 1-in-4 chance of getting caught and punished, I would not cheat, especially as I had to submit dozens of papers as part of my degree.
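The deterrence point follows from simple compounding; assuming, hypothetically, an independent 26% chance of being flagged per AI-written paper, the odds of finishing a degree's worth of papers uncaught collapse quickly:

```python
# Probability of being flagged at least once across n AI-written papers,
# assuming each paper independently has a 26% chance of being caught.
per_paper_catch = 0.26

for n in (1, 4, 10, 20):
    p_caught = 1 - (1 - per_paper_catch) ** n
    print(f"{n:2d} papers: {p_caught:.0%} chance of being flagged at least once")
# 1 paper: 26%, 4 papers: ~70%, 10 papers: ~95%, 20 papers: ~99.8%
```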
Badboyrune t1_j6rbq4l wrote
If you just assumed the opposite of the prediction, wouldn't you have a 74% chance of detection?
pale-blue-dotter t1_j6r0rbo wrote
They can basically search their own history of generated content to cross-check whether ChatGPT wrote it at some point for someone.
But that search might take a long time?
Isogash t1_j6ru6vl wrote
Long but certainly not impossible, although it's not necessarily a good solution.
WTFRhino t1_j6rjhcl wrote
People are focusing on the 26% being a low catch rate. But this is deliberate in order to lower the number of false positives on human work.
The big debate is in academia, and most students will not risk ruining their degree to cheat this way. A 1-in-4 chance of getting caught is huge when there are multiple papers that go towards your degree. It just isn't worth the risk.
DaemonAnts t1_j6q3rxa wrote
I'd be impressed if it were able to do this with one-word answers, or if it could still detect the text after it was rephrased in one's own words.
Black_Moons t1_j6qdwsm wrote
No.
-Written by ChatGPT, Maybe.
Earlyflash t1_j6rkyky wrote
"Well well, it looks like ChatGPT's cover has been blown! I guess I'll have to start wearing a disguise when generating text from now on. Time to invest in some robotic shades and a fedora. #NotAllAI"
Nik_Tesla t1_j6t4zg2 wrote
I get how you'd detect AI-generated images, video, and voice: watermark them at a metadata level or encode a signature into them. But I don't see any way it could possibly do that for plain text that is copy/pasted.
Realistically, it would have to work just like existing plagiarism checkers and do a match search on text it has generated itself (rather than searching published works or the web), but that only covers what ChatGPT has generated, not other models that are yet to be made public, or self-hosted GPT-3 generations.
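A minimal sketch of that plagiarism-checker-style matching, assuming the provider kept fingerprints of everything it generated; the shingle size, hashing scheme, and example texts are all illustrative:

```python
import hashlib

NGRAM = 8  # fingerprint overlapping 8-word shingles

def shingles(text: str):
    words = text.lower().split()
    for i in range(len(words) - NGRAM + 1):
        yield hashlib.sha1(" ".join(words[i:i + NGRAM]).encode()).hexdigest()

# Index built as text is generated; in reality this would be a large
# persistent store, not an in-memory set.
fingerprint_index = set()

def record_generation(text: str):
    fingerprint_index.update(shingles(text))

def overlap_score(submission: str) -> float:
    """Fraction of the submission's shingles seen in past generations."""
    subs = list(shingles(submission))
    if not subs:
        return 0.0
    return sum(1 for h in subs if h in fingerprint_index) / len(subs)

record_generation("the mitochondria is a membrane bound organelle known as the powerhouse of the cell")
print(overlap_score("As we know, the mitochondria is a membrane bound organelle known as the powerhouse of the cell"))
```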
drawkbox t1_j6r24c5 wrote
This needs to come from third parties; otherwise those who control the neural nets and datasets will be able to shroud information as "not generated" when it is clearly astroturfing or manipulation. Then they can throw their hands up and say "must be the algorithm or a bad dataset" for plausible deniability.
The new game is here, or coming: misdirecting blame to the "algorithm" when the dataset has been editorialized or filtered for certain aims.
Almost all algorithms and datasets are biased or editorialized in some way; laws need to be adjusted for that. You can't blame the "algorithm" for enragement, because enragement is engagement.
kanaleins t1_j6r6o47 wrote
triplespidermanpointingtoeachother.jpg
nilsgunderson t1_j6taaey wrote
AI recursion
the_zelectro t1_j6ue9it wrote
I don't understand why they don't just keep a cache of everything ChatGPT produces for a month and then delete it. That should curb most plagiarism.
Social media companies do it.
kevindamm t1_j6q2lwy wrote
A 1,000-character minimum and a 26% success rate, but it's good that they're working on it.