Comments

Main_Mathematician77 t1_j8h1by4 wrote

Imo you’re not going to be able to provide a reliable service with out-of-the-box solutions right now. The systems aren’t reliable enough to be certain, especially when a false positive can defame someone.

3

ateqio OP t1_j8h1po3 wrote

I'm totally aware of that, and I will be putting a disclaimer on the front page, not buried in a Terms and Conditions link somewhere.

The tools currently available can ruin a student's life precisely because they don't mention those limitations explicitly.

I want to address that by providing a solution that ranks at the top of search results and informs professors about the limitations as explicitly as possible.

3

Main_Mathematician77 t1_j8h3v8z wrote

The best thing I can think of that relates to this is based on LAION's style-attribution kNN index search over their 5B-image dataset. A similar approach could be done for text: search over the corpus for similar samples. Again, no guarantees, but it's fairly interpretable. The catch is that the dataset of ChatGPT generations from 100M users is growing fast, and searching over it is most likely impractical at the current pricing options.

Also, as you said, using GPT-2 to measure perplexity is good for catching GPT-generated text, but it's not a perfect solution imo.
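To make both ideas concrete, here's a minimal sketch of the similarity-search side, assuming a sentence-transformers embedding model and a FAISS flat index (the model choice, library choice, and toy corpus are all mine, purely for illustration):

```python
# Hypothetical sketch: nearest-neighbour search over a text corpus,
# in the spirit of LAION's kNN index, here via sentence-transformers + FAISS.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

corpus = [
    "The mitochondria is the powerhouse of the cell.",
    "In conclusion, the industrial revolution reshaped society.",
    # ...in practice, a large dataset of known generations would go here
]

# Embed the corpus and build an exact inner-product index; with normalised
# vectors, inner product equals cosine similarity.
embeddings = model.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

def most_similar(text: str, k: int = 2):
    """Return the k nearest corpus entries with their cosine similarities."""
    query = model.encode([text], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(query, dtype="float32"), k)
    return [(corpus[i], float(s)) for i, s in zip(ids[0], scores[0])]
```

And a rough sketch of the GPT-2 perplexity signal, again just illustrative:

```python
# Hypothetical sketch: perplexity under GPT-2. Low perplexity *suggests*
# model-generated text but proves nothing on its own.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token cross-entropy of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model return the mean LM loss directly.
        loss = lm(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))
```

Neither signal is a verdict: similarity search only tells you a text resembles known samples, and perplexity thresholds are easy to dodge with light paraphrasing.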

1

andreichiffa t1_j8h43hh wrote

You can’t. Anyone with enough technical knowledge won’t want to go anywhere near the legal ramifications and the responsibility it implies (in addition to looking like a clown within about 10 minutes of uptime once bypasses are found).

There are fundamental limitations on detectability as of now.

1

ateqio OP t1_j8h5g16 wrote

You're right.

The problem is, people (especially professors) are going to look for it no matter what.

Just look at the stats. The RoBERTa OpenAI detector was downloaded a whopping 114k times in the last month alone. It clearly states that it shouldn't be used as a ChatGPT detector, but I see a lot of implementations doing exactly that.
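For context on how trivially that misuse happens, here's a sketch of the typical wiring via the Hugging Face transformers pipeline (the example output is illustrative, not a real run):

```python
# Hypothetical sketch: how the RoBERTa detector usually gets dropped into
# "ChatGPT detector" apps, despite the warnings against exactly this use.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

result = detector("Paste a student's essay paragraph here...")[0]
# Something like {'label': 'Fake', 'score': 0.97}. The model was trained on
# GPT-2 outputs, so this score says little about ChatGPT text -- and it's
# exactly the kind of number that produces harmful false positives.
print(result)
```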

Better to educate users with a big fat disclaimer and a tool.

1

andreichiffa t1_j8hawd4 wrote

I reported to Hugging Face what its detector was being used for, and its failure modes (hint: false positives are the worse problem), back in the first days of December. They decided to keep it up. It’s on their conscience.

Same thing with API providers. Those willing to sell you one are selling you snake oil. It’s on their conscience.

Same thing for you. You want to build an app that sells snake oil that can be harmful in a lot of scenarios? It’s on your conscience.

But at that point you don’t even need an API to build it.

1

andreichiffa t1_j8hf2th wrote

10% is what OpenAI considered "good enough" for theirs, but the problem is that the detection is not uniform. Most neurodivergent folks will be misclassified as generative models, as will people with social anxiety, who tend to be wordy. Non-native and non-fluent English speakers are the other big false-positive trigger.

1