
GuidotheGreater t1_j697y2k wrote

One of the limitations of ChatGPT is that it can't search the internet. It has limited knowledge of events after 2021.

One of the key requirements for spotting fake news would be gathering multiple eyewitness accounts via social media, etc.

Maybe in the future, although, as was previously mentioned, the people who believe fake news won't be swayed.

22

kytheon t1_j69bxd0 wrote

I noticed this after asking how to end the war in Ukraine and who won the World Cup. Both answers showed it was stuck in 2021.

7

Alistair_TheAlvarian t1_j6a760q wrote

Hey, in its defense, I have real human relatives who are stuck in 1921, or at best '61 or '71.

So that's a huge bump up in recent information availability.

7

MINIMAN10001 t1_j6ciouo wrote

I'm not sure what answers you're getting. Other than the fact that it always responds with "But violence is bad, mmmk," I can't really even get it to consider the possibility of conflict.

But it does more or less mirror what actually played out when asked things like "What would Russia do if they learned of resources?" and "What would the international community do in response?"

2

kytheon t1_j6ckqlr wrote

It straight up said the 2022 World Cup was still coming up (instead of finished), and that Russia invaded Ukraine in 2014 but said nothing about 2022.

1

chasonreddit t1_j6a3lkd wrote

> multiple eyewitness accounts via social media

I mean, that's a good idea, but it still isn't truth. I've seen lots of eyewitness accounts of events I've been at, and they weren't even close. Mostly, people with a slant or agenda post their observations on social media.

4

Substantial_Space478 t1_j6fibov wrote

While eyewitness accounts are a good starting baseline, that requirement is funny, because the "real" news doesn't gather eyewitness accounts either. In fact, they make a concerted effort to only offer second- and third-hand sources for entertainment value. IMHO this is why fake news has proliferated so quickly. Relative to the world's largest propaganda machine, which also offers limited factual information, the fake news looks a lot like the "real" thing.

1

schrod t1_j69gp5e wrote

I thought it did search the internet? How else could it work?

−3

BoxOfDemons t1_j6a6ucp wrote

It doesn't search the internet. The information it uses comes from internet resources, but it was all downloaded in the past. It doesn't access the internet itself.
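
To make that concrete, here's a minimal sketch with an open model (GPT-2 via the Hugging Face `transformers` library, assuming it and PyTorch are installed): the weights are fetched once, and after that every answer comes purely from that frozen local copy, with no lookups at question time.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Weights are downloaded once and cached locally, much like ChatGPT's training
# data was gathered once before training. Nothing below touches the network
# after this point.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Any "knowledge" in the completion comes from the frozen weights,
# not from a live search.
inputs = tokenizer("The most recent FIFA World Cup was held in", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=15, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```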

9

schrod t1_j6a8cv2 wrote

Surely they will figure out how to give it internet access and bring it up to date? This could be an amazing tool to help fight disinformation, hopefully?

−1

BoxOfDemons t1_j6a8qyz wrote

Maybe one day, but that seems rather difficult. The AI isn't meant to determine what's true or false; it's just a language model. Having the data downloaded means they can fix any issues with their dataset. Put it on the internet and it will probably start spreading misinformation as well.
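
As a rough illustration of "it's just a language model" (a sketch using the small public GPT-2 checkpoint via `transformers`, not the actual ChatGPT system): it continues a false premise just as fluently as a true one, because there is no fact-checking step anywhere in the generation loop.

```python
from transformers import pipeline

# Small public stand-in for the much larger models behind ChatGPT.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Paris is the capital of France, and its most famous landmark is",   # true premise
    "Paris is the capital of Brazil, and its most famous landmark is",   # false premise
]
for prompt in prompts:
    # Both prompts get an equally confident continuation; the model only
    # predicts likely next words, it never checks the claim against reality.
    out = generator(prompt, max_new_tokens=15, do_sample=False)
    print(out[0]["generated_text"])
```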

3

rishabmeh3 t1_j6biukd wrote

GPT-3 / ChatGPT may seem smart from the outside, but it is simply a large language model learning patterns in data. Using something like this for figuring out disinformation would be a rather bad idea because of all the biases it would introduce. You instead want a model built specifically for the task of misinformation detection, with the biases reduced as much as possible, and there are already thousands of research papers on this out there -- it's a really hard machine learning problem.
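
For contrast, a purpose-built misinformation detector is usually framed as a supervised classifier over individual claims rather than as a chatbot. A rough sketch of that shape using the `transformers` pipeline API; the checkpoint name below is a made-up placeholder for whatever model a team had actually fine-tuned on labelled claims.

```python
from transformers import pipeline

# "example-org/claim-checker" is a hypothetical placeholder, not a real
# published model; a real system would be fine-tuned on labelled claims.
classifier = pipeline("text-classification", model="example-org/claim-checker")

claims = [
    "The 2022 World Cup final went to a penalty shootout.",
    "Drinking seawater is a safe way to stay hydrated.",
]
for claim in claims:
    # Each claim gets a label (e.g. SUPPORTED / REFUTED) plus a confidence score.
    print(claim, "->", classifier(claim))
```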

2

Thibaut_HoreI t1_j6c7utt wrote

Even then, it 'hallucinates' a reality all its own, very much like an image-generating AI may create a picture that does not resemble reality. If you ask it to write your bio, it may, for instance, confidently tell you that you died in 2017.

If anything, it will teach us that eloquence does not equate to veracity.

2

swiss023 t1_j6bweyx wrote

It’s not so much an issue of figuring out how to connect it to the internet, in fact this ChatGPT network available to the public was intentionally designed this way. The model itself and its NLP abilities still need to be improved more before it could accomplish what you’re imagining

1

Writerro t1_j69yh58 wrote

I think it has data from the internet, but only from before 2021. The data was loaded into ChatGPT once (or scanned by it once), and since then it doesn't get any new data (that's my understanding).

4

schrod t1_j6a8m8t wrote

Hopefully they will figure out how to make it current and allow real-time searches.

2

Sentsuizan t1_j6bmeqo wrote

It specifically says on the splash page, any time you try to use it, that it does not access the internet.

3

Irate_Librarian1503 OP t1_j698tz9 wrote

OK. At least some detector of obviously false information would still work, if the model were trained correctly.

−11

IOnlySayMeanThings t1_j6a2zu0 wrote

Can you explain how this would work? There are some compelling arguments in here about why it just can't do that.

6