Comments


Ll42h t1_j8vssi3 wrote

Researchers and practitioners in the field discussed this topic in a workshop, and you can read their findings in this paper. It's very interesting.

8

zcwang0702 OP t1_j8vw3e9 wrote

Thanks for sharing! Very interesting paper!

1

Terminator857 t1_j8uxy4v wrote

It will happen. Balance that against the good it is doing and will do.

5

suflaj t1_j8vt849 wrote

Likely

The implementation path is simply to use it, lol; it is already capable of doing that.

At this moment, your two bets are:

  • OpenAI watermarks their generated text and you have models which can detect this watermark
  • a bigger, better model comes out which can detect synthetic text (although then THAT model becomes the problem)
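Neither bet is spelled out above, but the watermarking one can be sketched. The snippet below is a toy version of a published "green list" scheme: the generator prefers words whose hash (seeded by the previous word) falls in a "green" fraction gamma of outcomes, and the detector computes a z-score for how over-represented green words are. The hash rule, gamma, and word pool are all illustrative assumptions, not OpenAI's actual method.

```python
import hashlib

def is_green(prev_word, word, gamma=0.5):
    # A word is "green" if its hash, seeded by the previous word,
    # falls in the lower gamma fraction of hash values.
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] < int(256 * gamma)

def green_fraction(words, gamma=0.5):
    # Fraction of bigrams whose second word is green.
    pairs = list(zip(words, words[1:]))
    hits = sum(is_green(a, b, gamma) for a, b in pairs)
    return hits / max(len(pairs), 1)

def z_score(words, gamma=0.5):
    # Under the null (unwatermarked text), green words occur with
    # probability gamma; a large z-score flags a watermark.
    n = max(len(words) - 1, 1)
    frac = green_fraction(words, gamma)
    return (frac - gamma) * (n ** 0.5) / ((gamma * (1 - gamma)) ** 0.5)

def generate_watermarked(pool, length, seed_word="the", gamma=0.5):
    # Toy "generator": at each step, pick the first candidate word
    # that is green given the previous word (fall back to pool[0]).
    words = [seed_word]
    for _ in range(length - 1):
        prev = words[-1]
        nxt = next((w for w in pool if is_green(prev, w, gamma)), pool[0])
        words.append(nxt)
    return words
```

Watermarked output scores a high z (nearly all bigrams are green by construction), while ordinary text hovers near z = 0. Note this only detects text from a generator that cooperated; it says nothing about unwatermarked models.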

You could also counter misinformation with a fact checking model, but there are two big problems:

  • we are nowhere near developing useful AI that can reason
  • the truth is subjective and full of dogmas; e.g., look at how most countries enforce dogmas regarding the Holocaust. Without a severe transformation of society itself, your model would be biased and capable of spreading propaganda in a general sense, with misinformation as a subset of that propaganda

Therefore I believe your question should be: when can we expect models that only share the "truth of the victor"? And that's already happening with ChatGPT now, as it seems to be spreading western liberal views.

3

SleekEagle t1_j8xoiu6 wrote

Adversarial training will be a huge factor for detection models, imo

1

suflaj t1_j8xor46 wrote

It doesn't matter if it isn't of roughly the same size or larger. Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between it and human text.

1

zcwang0702 OP t1_j8xufwt wrote

>if it isn't of roughly the same size or larger. Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between it and human text.

Yeah, I have to say this is scary, because if we cannot build a robust detector now, it will only become more difficult in the future. LLMs will continuously blur information on the Internet.

1

suflaj t1_j8xv8md wrote

Maybe that's just a necessary step to cull those who cannot think critically and to force society to stop taking things for granted.

At the end of the day misinformation like this was already generated and shared even before ChatGPT was released, yet it seems that the governmental response was to allow domestic sources and try to hunt down foreign ones. So if the government doesn't care, why should you?

People generally don't seem to care even when the misinformation is human-generated, e.g. the Hunter Biden laptop story. I wouldn't lose sleep over this either way; it is still human-operated.

2

SleekEagle t1_j8y0vuc wrote

Agreed! I mean, even if the proper resources were dumped into creating such a large detector, it could quickly become obsolete because of adversarial training (AFAIK, not an expert on adv. training)
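The obsolescence dynamic being described can be shown with a deliberately tiny toy, not real adversarial training on text: a 1-D "feature" stands in for whatever statistic a detector uses, the detector is just a threshold between class means, and each round the generator shifts its feature distribution toward the human one. All the numbers (means, noise, shift rate) are made-up assumptions for illustration.

```python
import random

random.seed(0)

HUMAN_MEAN, NOISE = 0.50, 0.05  # human feature distribution (assumed)

def sample(mean, n=200):
    # Draw n feature values for one class.
    return [random.gauss(mean, NOISE) for _ in range(n)]

def fit_threshold(human, synth):
    # Trivial "detector": midpoint between the two class means.
    return (sum(human) / len(human) + sum(synth) / len(synth)) / 2

def accuracy(thr, human, synth):
    # Human is the low-feature class, synthetic the high-feature one.
    correct = sum(x < thr for x in human) + sum(x >= thr for x in synth)
    return correct / (len(human) + len(synth))

synth_mean = 0.80  # generator starts out easily distinguishable
accs = []
for step in range(5):
    human, synth = sample(HUMAN_MEAN), sample(synth_mean)
    thr = fit_threshold(human, synth)
    accs.append(accuracy(thr, human, synth))
    # Adversarial step: the generator halves its distance to the
    # human distribution, eroding whatever the detector relied on.
    synth_mean = HUMAN_MEAN + 0.5 * (synth_mean - HUMAN_MEAN)
```

Detector accuracy starts near perfect and decays toward chance as the generator adapts, which is the core worry: any fixed detector defines exactly the signal the next generator trains to remove.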

1

NeuroDS t1_j8v0vqb wrote

Language models will fill up the internet with too much inaccurate information.

2

HBRYU t1_j8vuecc wrote

SEO on the next level, I think

2