
ateqio OP t1_j8h5g16 wrote

You're right.

The problem is, people (especially professors) are going to look for it no matter what.

Just look at the stats: the RoBERTa OpenAI detector was downloaded a whopping 114k times in the last month alone. It clearly states it should not be used as a ChatGPT detector, but I see a lot of implementations of it doing exactly that.

Better to educate users with a big fat disclaimer and a tool


andreichiffa t1_j8hawd4 wrote

I reported to Huggingface what its detector was being used for, along with its failure modes (hint: false positives are the worse ones), in the first days of December. They decided to keep it up. It’s on their conscience.

Same thing with API providers. Those willing to sell you one are selling you snake oil. It’s on their conscience.

Same thing for you. You want to build an app that sells snake oil that can be harmful in a lot of scenarios? It’s on your conscience.

But at that point you don’t even need an API to build it.


ateqio OP t1_j8hcsz6 wrote

What's the false-positive rate? Honestly curious


andreichiffa t1_j8hf2th wrote

10% is what OpenAI considered "good enough" for theirs, but the problem is that the errors are not uniform. Most neurodivergent folks will be misclassified as generative models, as will people with social anxiety, who tend to be wordy. Non-native and non-fluent English speakers are the other big false-positive trigger.
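To put that 10% in perspective, here's a back-of-the-envelope sketch of what it means in a classroom setting. Only the 10% false-positive rate comes from the comment above; the class size, share of AI-written essays, and the 90% true-positive rate are hypothetical assumptions for illustration:

```python
# Back-of-the-envelope: what a 10% false-positive rate means in practice.
# Hypothetical assumptions: a class of 200 essays, 10 of them actually
# AI-written, and a detector with a 90% true-positive rate.

n_essays = 200
n_ai = 10
n_human = n_essays - n_ai  # 190 honest essays

fpr = 0.10  # false-positive rate (human text flagged as AI) -- from the thread
tpr = 0.90  # true-positive rate (AI text correctly flagged) -- assumed

false_positives = n_human * fpr  # honest students falsely accused
true_positives = n_ai * tpr      # AI essays actually caught

# Of all flagged essays, what fraction are innocent students?
flagged = false_positives + true_positives
innocent_share = false_positives / flagged

print(f"flagged: {flagged:.0f}, of which innocent: {false_positives:.0f} "
      f"({innocent_share:.0%})")
```

Under these assumptions, 28 essays get flagged and 19 of them (about two thirds) are from honest students, because the human essays vastly outnumber the AI ones. That base-rate effect is why a "good enough" 10% rate is so damaging when the accusation itself carries a heavy cost.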
