suflaj t1_j8xor46 wrote

It doesn't matter unless the detector is roughly the same size as the model it's detecting, or larger. Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between its output and human text.
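For concreteness, this is the flavor of detector being discussed: score text with a smaller language model's perplexity, on the intuition that LLM output is more "predictable" than human writing. A minimal sketch, assuming GPT-2 via the Hugging Face transformers library; the threshold is purely illustrative, not calibrated:

```python
# Minimal sketch: perplexity-based detection with GPT-2 as a stand-in
# detector. The threshold below is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text under the smaller model; machine-generated text
    # tends to be lower-perplexity than human writing.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_generated(text: str, threshold: float = 25.0) -> bool:
    # Hypothetical cutoff: below-threshold perplexity is treated as
    # "suspiciously predictable", i.e. possibly machine-generated.
    return perplexity(text) < threshold
```

In practice this heuristic is noisy, which is exactly the point: a GPT-2-sized scorer can't reliably separate ChatGPT output from human text.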

zcwang0702 OP t1_j8xufwt wrote

>Either ChatGPT is REALLY sparse, or such detection models won't be available to mortals. So far, it doesn't seem to be sparse, since similarly sized detectors can't reliably differentiate between its output and human text.

Yeah, I have to say this is scary, because if we cannot build a robust detector now, it will only get harder in the future. LLMs will keep blurring the line between human-written and machine-generated information on the Internet.

suflaj t1_j8xv8md wrote

Maybe that's just a necessary step to cull those who cannot think critically and to force society to stop taking things for granted.

At the end of the day, misinformation like this was already being generated and shared before ChatGPT was released, yet the governmental response seems to have been to allow domestic sources and hunt down foreign ones. So if the government doesn't care, why should you?

People generally don't seem to care even when the misinformation is human-generated, e.g. the Hunter Biden laptop story. I wouldn't lose sleep over this either way; it is still human-operated.

SleekEagle t1_j8y0vuc wrote

Agreed! I mean, even if the proper resources were dumped into creating such a large detector, it could quickly become obsolete because of adversarial training (AFAIK; not an expert on adversarial training).
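To illustrate the adversarial dynamic: a generator's output can be iteratively rewritten against a fixed detector until the detector no longer flags it. This is only a conceptual sketch; `detector_score` and `paraphrase` are hypothetical placeholders, not a real API:

```python
# Conceptual sketch of detector evasion via iterative paraphrasing.
# Placeholders (hypothetical, not a real API):
#   detector_score(text) -> probability the text is machine-generated
#   paraphrase(text)     -> a rewritten version of the text
def evade_detector(text, detector_score, paraphrase,
                   threshold=0.5, max_rounds=5):
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            break  # detector no longer flags the text as generated
        text = paraphrase(text)  # rewrite to shift detectable statistics
    return text
```

In a real adversarial-training setup the paraphraser would be optimized directly against the detector's signal, which is roughly why a fixed detector tends to fall behind.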
