Submitted by smswigart t3_10upngh in Futurology

I tested OpenAI's ChatGPT detector and found it to be... terrible. It couldn't tell with any certainty whether the Bible, or text copied straight from ChatGPT, was AI generated. Making this technology public with reliability this low, IMHO, is irresponsible. No one should be using it to, for example, decide that a student's work was AI generated. And I'm skeptical that this technology will ever be reliable.

For the details of the tests and results, check out: The ChatGPT Detector is Laughably Bad - by Scott Swigart (substack.com)

What are your thoughts on the merits and feasibility of AI detection?

41

Comments

the_zelectro t1_j7d7s8m wrote

It's a joke. They need to store a cache of everything that has been generated over the course of a week/month, then use that to detect plagiarism. It should catch the worst cases, and they could even make it a built-in feature of ChatGPT.

They can do this with ease. But, they just haven't for some reason.
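A minimal sketch of that cache idea, assuming OpenAI logged hashes of its own outputs (all names and the normalization step here are my own assumptions, and exact-match hashing only catches verbatim copies):

```python
import hashlib

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivial edits don't evade the lookup."""
    return " ".join(text.lower().split())

class GenerationCache:
    """Hypothetical store of hashes of everything the model has emitted."""

    def __init__(self):
        self._hashes = set()

    def record(self, output: str) -> None:
        """Log a generated output at generation time."""
        self._hashes.add(hashlib.sha256(normalize(output).encode()).hexdigest())

    def was_generated(self, text: str) -> bool:
        """Check a submitted text against the log."""
        return hashlib.sha256(normalize(text).encode()).hexdigest() in self._hashes

cache = GenerationCache()
cache.record("The mitochondria is the powerhouse of the cell.")
print(cache.was_generated("the mitochondria is  the powerhouse of the cell."))  # True
print(cache.was_generated("In the beginning God created the heavens."))         # False
```

Storage is the obvious cost, but hashing means they'd never need to retain the raw text.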

19

smswigart OP t1_j7daxj8 wrote

I think they're trying to make it general purpose so that it can detect ChatGPT output and output from other generative AIs, but yeah, detecting its own output should be a no-brainer.

Also, if something existed before 2021 and is in its training data (like Bible verses), it should also be certain that it wasn't AI generated.

5

Veleric t1_j7e5c77 wrote

I've said it before, but it makes zero sense for OpenAI to want these detectors to be 100% accurate, because accurate detection strongly disincentivizes anyone from using ChatGPT. Creators don't want to be called out for being lazy. Researchers don't want to be called out for plagiarism. Students don't want to be called out for cheating. Copywriters... and on and on. Accurate detection only hurts them, and introducing any tools that try to identify AI writing leads to witch hunts and false accusations that could ruin livelihoods. It's going to be a bit of a mess trying to validate that students, for instance, actually learn the material. But if they validate the information as factual and use some thought and revision in prompt engineering to generate quality output, that seems like a very valuable new skill heading into a world very different from the one just 6 months ago.

1

RaccoonProcedureCall t1_j7f71qa wrote

I disagree that learning to use ChatGPT should replace learning other skills for students, but your point about there being an incentive for OpenAI to make a bad detector is a good one that I hadn’t considered before.

I guess expecting OpenAI to make a good detector is a bit like expecting a site that allows students to pay for homework answers to include a service to help teachers identify answers taken from the site. Any site that would try such a thing would quickly become unpopular with students looking to cheat, and they’d take their business elsewhere.

5

Veleric t1_j7fih6c wrote

The key for learning in particular is not so much the material itself (with some exceptions) but rather the process of attaining information, processing it in a thoughtful and discerning manner, and disseminating it in a concise and digestible way. Going forward, the average person will need to retain less and less information, and while you can decide whether you think that is a good or bad thing, the ability to find what you need quickly is going to be what's most important.

1

NofksgivnabtLIFE t1_j7fmdkp wrote

This is what the internet was meant to be before corporate took over. It's all going to be wildly ok.

0

peter303_ t1_j7e5ci8 wrote

A smart detector would record ALL requests and outputs, then match submitted text against that.
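Matching against a log of everything generated would have to tolerate light edits, not just verbatim copies. One plausible sketch is word n-gram ("shingle") Jaccard overlap (the function names and n=5 here are my own assumptions, not any real detector's method):

```python
def shingles(text: str, n: int = 5) -> set:
    """All n-word sequences ('shingles') in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(candidate: str, logged: str, n: int = 5) -> float:
    """Jaccard similarity between shingle sets: near 1.0 means the
    candidate is mostly lifted from the logged output."""
    a, b = shingles(candidate, n), shingles(logged, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A submission scoring above some threshold against any logged output would be flagged; light paraphrasing lowers the score, which is why this only catches lazy copying.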

10

homezlice t1_j7fmhg5 wrote

If I were in college and a professor accused me of using AI to write a paper when I did not, you can bet your ass I am suing that school.

5

[deleted] t1_j7fb1z0 wrote

Unless the Matrix theory is correct and we live in a complex computer simulation. Then technically you could say the Bible was AI generated 🤔

4

leaky_wand t1_j7i42tz wrote

In the beginning, there was 0. And I said, let there be 1. And it was good.

3

msdlp t1_j7ijeln wrote

It seems to me that there is no way to distinguish between AI-generated text and human-generated text. AI is in its infancy and will get many, many times better in the future. Since humans will change very little, it can be concluded that AI detection is a target that will recede further and further from our ability to detect any difference. As a retired programmer, I am frankly surprised that the technical world has not already seen this, or at least admitted it if they have. Tell me what you think.

2

Stealthy_Snow_Elf t1_j7kr44y wrote

Well, to be fair, most of the Bible is plagiarism. I wouldn't be surprised if an AI detector flagged a book containing material that is more or less copied from other sources.

That said, yeah, all of this shit is experimental. That’s the nature of it

2

Nervous-Newt848 t1_j7ebqbu wrote

You have to be a fool to think that AI detection would work at all... Language models are specifically trained to imitate human writing... It's impossible.

1

FalseTebibyte t1_j7ew2uz wrote

Maybe you're going about this the wrong way.

Would an AI construct inside of an AI construct be capable of detecting such things?

Wouldn't it stand to reason based on current security flaws in virtualization that this kind of behavior is unwanted?

Oh well. Back to wondering why people scream at me and call me an asshole and not actually Autistic.

1

[deleted] t1_j7fmxli wrote

I think so far, detecting AI has proven trivial, and the only reason to think it won't continue that way is when you speculate about future technology and all its problems but forget that all the tools evolve too... To be quite honest, that's how most people think most of the time, so they always overlook the advances and focus on the negative consequences. I believe the human brain is wired to prioritize negative stimuli (and for that matter, so is most life), because prioritizing the things that might kill you is a more efficient mechanism for long-term survival than trying to remember all the things that benefit you.

1

smswigart OP t1_j7g1f4n wrote

Except the AI writing technology will improve (and be harder to detect) over time too.

1

naptastic t1_j7hr6lx wrote

An AI smart enough to detect the work of another AI is going to require three things developers are unwilling to give at the same time:

  1. mutable state
  2. self-reference
  3. self-modification

No one has realized that auditing these systems is very simple. (Though it will raise the questions of what intelligence counts as "artificial", where the line is between "programming" and "training", and it will require considerable human input.) Tedious, but simple.

We developers do our best to keep mutable state as small as possible. We have very strict rules about references. Self-modifying code... nobody does that. Maybe in a lab setting but never in production.

If you don't follow the rules with references, your program crashes (or leaks memory). If you carry a lot of mutable state, your program becomes brittle, even buggy. Code that rewrites itself is the definition of "unpredictable." Users don't like crashes, memory leaks, bugs, or unpredictable behavior, so we follow the rules...

But a smart AI is going to have to be able to refer to and modify itself with very few restrictions, and it's going to do so in ways we can't predict. However, I'm pretty sure that anyone clever enough to build a smart AI will also implement an effective safety harness.

Whether this program ever gets written or not is a more interesting question to me than whether life exists elsewhere in the Universe.

1

SkitzoRabbit t1_j7hxexm wrote

Don’t try to detect the generated product. Rather attack the generation and submission process.

Have the word processor develop a signature from the keystrokes made while writing, and append the result to the file's metadata.

Schools require certain software all the time. And it doesn't have to be an individually identifiable signature, just one that is decidedly human.

There's no excuse for writing an entire paper in one document and then copy-pasting it into a brand new file.
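A sketch of what such a signature could summarize; the event format here is entirely hypothetical, just the sort of telemetry a word processor could append to a file:

```python
import statistics

def keystroke_signature(events):
    """Summarize an editing session. `events` is a hypothetical list of
    (timestamp_ms, kind) tuples, kind being "key" or "paste". An essay
    that arrives as one big paste with almost no typing is the pattern
    that would fail a "decidedly human" check."""
    key_times = [t for t, kind in events if kind == "key"]
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    return {
        "keystrokes": len(key_times),
        "paste_events": sum(1 for _, kind in events if kind == "paste"),
        "median_interval_ms": statistics.median(intervals) if intervals else None,
    }

# A session typed over time vs. a single wholesale paste:
typed = keystroke_signature([(0, "key"), (180, "key"), (420, "key"), (600, "key")])
pasted = keystroke_signature([(0, "paste")])
```

The summary deliberately records behavior, not content, so it's closer to a provenance stamp than surveillance.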

1

riceandcashews t1_j7imlkv wrote

It wouldn't be long before an AI tool was developed to create a file with those attributes.

2

maretus t1_j7oqrjk wrote

Check out gptzero.me. It seems to work quite a bit better, and because of the way it works, it will continue to improve as its dataset grows.

We’re already using it at the educational company I work for to detect AI written assignments and essays.
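GPTZero's creator has described it as scoring text by perplexity (how predictable the text is to a language model) and burstiness (how much that predictability varies across sentences). The real tool uses a neural LM; the scoring idea can only be illustrated here with a toy add-one-smoothed unigram model of my own construction:

```python
import math
from collections import Counter

def unigram_perplexity(text: str, counts: Counter) -> float:
    """Perplexity of `text` under an add-one-smoothed unigram model built
    from `counts`. Real detectors use large neural LMs; this toy version
    only shows the scoring idea: predictable text scores low."""
    total, vocab = sum(counts.values()), len(counts)
    words = text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_prob / max(1, len(words)))

reference = Counter("the cat sat on the mat the cat ran".split())
familiar = unigram_perplexity("the cat sat", reference)        # low: predictable
unfamiliar = unigram_perplexity("quantum flux capacitor", reference)  # high: surprising
```

The catch, which the Genesis result below suggests, is that any very formulaic human text also scores as "predictable," so low perplexity alone can't prove machine authorship.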

1

smswigart OP t1_j7qm158 wrote

It thinks the first 4 verses of Genesis might have been written by an AI.

2