Comments

r2k-in-the-vortex t1_j69f9g0 wrote

How? GPT is a language model, not a truth / fiction model, it can't tell if something is factually true or not.

252

Mike2220 t1_j6ajx00 wrote

ChatGPT itself frequently lies while posing its answers as 100% confidently true

99

ChainmailleAddict t1_j6awjev wrote

Someone asked it some questions about Kenshi and ChatGPT insisted that the technophobic Holy Nation had power armour. RAGE.

6

MINIMAN10001 t1_j6cexcs wrote

Reminds me, I have a tendency to put in a specific requirement like "no branching," and it'll go and use a ternary! So I ask "what is a ternary?" "It's a branching statement." Oh, ok. So I have to also include "no ternary"!

1

chasonreddit t1_j6a37ey wrote

Very true. And even if asked and it gives a cogent answer, it is just using the material it is trained on. Remember the twitter chatbot AI that turned very racist? It's all in the training data.

21

Alistair_TheAlvarian t1_j6a6xnh wrote

The average-Twitter-user chatbot that got turned into a suicidally depressed, Holocaust-denying neo-Nazi?

Ahh 4chan, why must it ruin things so often.

20

Alx941126 t1_j6abx0y wrote

honestly I feel that more than ruining them, they're doing us a favor, as no AI will replace human jobs until it's robust enough to handle that.

15

missanthropocenex t1_j6by5ai wrote

Boy, people just can't wait to outsource critical thinking to AI, can they? Listen, you have an awesome computer called your BRAIN. It's really good at determining what is and isn't real, and at being discerning about whatever is put in front of you.

The second you think AI is some infallible, unbiased voice you’ve lost.

9

ub3rh4x0rz t1_j6c7lei wrote

When people say they want to outsource X to ChatGPT, there's usually an aspect of wishing to subvert the power X experts exert. The corollary is also true.

3

Memfy t1_j6dh2cq wrote

>It’s really good at helping determine what is and isn’t real and be discerning on deciding about what is being put in front of you being real.

It's really not. If you have little to no knowledge about a topic you're asking it, how are you going to discern that it's not spitting out crap? Sure you might be able to easily distinguish some complete garbage, but little errors that still invalidate the whole thing can be a much harder thing to do.

2

r2k-in-the-vortex t1_j6cempp wrote

Looking at how much utter garbage people believe, I'm thinking the awesomeness of that brain is greatly overestimated. There is much need to outsource thinking, because we are not very good at it.

1

Spiderbanana t1_j6btdqq wrote

Saw a hilarious one today (in French) where someone asked GPT the difference between cow eggs and chicken eggs.

6

explodingtuna t1_j6a5tm5 wrote

Perhaps training it to detect sensationalized word choices and writing styles, instead?

1

Ruzhyo04 t1_j6a3ldh wrote

ChatGPT is being used to write fake news. Much better at that.

55

ub3rh4x0rz t1_j6c7pmf wrote

Its primary mechanism involves hacking human communication processes, and that's the only requirement for fake news, so yeah, it's a perfect match.

6

Bubbagumpredditor t1_j694n4e wrote

Because the people that use the fake news don't care that it's fake, and lie about actual factual news being fake. It's a religion and facts wont change that.

48

putalotoftussinonit t1_j69q9m0 wrote

It was beaten into my head that I needed blind faith to believe in Jesus. It doesn't take much effort for a politician, snake oil salesman, or Joe Rogan to use that to their advantage.

1

doodoowithsprinkles t1_j69u2q9 wrote

And it's not only the crazy right-wing news that is fake; the neoliberal center-right news frames every story in a dishonest way to favor capital, the police, the United States, and Israel, and to shit on China and any Latin American country with any kind of social service.

1

Irate_Librarian1503 OP t1_j696mia wrote

It's not about telling people that the news is fake, but rather about the possibility of removing said content. With that, maybe some of the polarization currently happening on social media can be dampened.

−2

NinjaLanternShark t1_j698902 wrote

Actual "fake news" with false information is relatively rare, and easy to spot -- think of tabloid news.

Here's the problem: "Classified documents found in Biden's home shines light on Democrat witch hunt of Trump."

That's not provably false. It's an extreme spin on what is ultimately true information.

You won't find universal agreement that headlines like that should be taken down. Therefore anyone using AI to filter stuff like the above only reinforces the echo chamber.

22

baumpop t1_j69olbd wrote

You curb this by having the AI write the news, then check it, then also be the audience for it.

1

schrod t1_j69ftbd wrote

It could substitute all references to 'witch hunt' with the more accurate term 'investigation', for example

−3

Odd_Armadillo5315 t1_j6iu2s7 wrote

Nice idea, but as soon as a platform starts actually changing the content like this, the news providers will stop using that platform. Who is liable if a news item suddenly becomes libellous due to a change the AI made?

Plus you get some kid failing an exam because they kept writing about the Salem investigations.

1

Irate_Librarian1503 OP t1_j6992ps wrote

The first goal would not be to mark such an extreme spin as false or fake, but rather scientifically false ideas, e.g. "vaccines cause autism."

−5

NinjaLanternShark t1_j69a3r2 wrote

IMHO any true scientist will tell you you can't prove a negative -- you can't prove vaccines don't cause autism, you can only state we have no studies or evidence that show they do, to which someone will reply, we have studies that do link them.

I'm not trying to be difficult, just saying it's much more complex than "half the country believes in lies."

23

Irate_Librarian1503 OP t1_j69cbii wrote

Never said that half the country believes lies. But any decent mega-study on the subject shows that there is no correlation between autism and vaccines. Not trying to be difficult, just trying to say that „vaccines cause autism" is, per our current understanding, false.

0

Dry-Influence9 t1_j6awkxm wrote

Mate, the short point is: classifying truths with machine learning is a very hard problem, and it can't be done today. ChatGPT definitely cannot do that. There are a lot of very smart people working on it, and hopefully they can come up with something eventually.

4

schrod t1_j69gghw wrote

It could say 'it is alleged by some but not proven' in place of the word 'causes' when there is a definite question about the accuracy of the statement.

5

tim0901 t1_j6b6b56 wrote

What is truth though? You speak of it as if it's an absolute, definable thing, but in reality it's very much not. Truth is a relative term - we can both have truths that are in complete contradiction of each other, even within the realm of modern science.

Let's take a classic physics example - you're on a train, watching the world moving through the window. From your perspective you and the train appear stationary, while the world looks like it's moving beneath you. But from a person on the platform's perspective, they are the ones who are stationary while you and the train are the ones that are moving. If you drop something, from your perspective it moves straight downwards. But from the outsider's perspective, its trajectory is slanted - it's moving forwards as well as downwards.

Which of these perspectives is the "truth"? Is the train stationary, or the planet? Well, both - and simultaneously, neither. There is no absolute, definitive truth of the situation - it depends on whose perspective you take on the matter. And things get more confusing when you add more perspectives into the mix - after all as far as an observer on the moon is concerned, they are the one that is stationary while both the Earth and the train are moving. Or if you were to take the perspective of someone standing at the centre of the universe, then their truth is that everything is moving away from them. As such a definitive "truth" is impossible to define here. You can only state things from a particular perspective or - in physics language - a particular frame of reference.

This is why we don't use the concept of "truth" in science. Because while this is only a single example, this concept extends to basically everything. Science is not the "truth" nor does it ever attempt to be. Science is humanity's understanding of how the universe around us works from our particular perspective. Judging things as absolute truths or falsehoods is antithetical to this concept and therefore to science as a whole.

2

bottom t1_j69zic0 wrote

How Would it work?

Unfortunately it wouldn’t. Just because it can write doesn’t mean it knows what’s real or not.

In fact it will be used to create fakes

6

GuidotheGreater t1_j697y2k wrote

One of the limitations of chatGPT is that it can't search the internet. It has limited knowledge of events after 2021.

One of the key requirements for spotting fake news would be gathering multiple eye witness accounts via social media etc.

Maybe in the future, although as was previously mentioned, the people who believe fake news won't be swayed.

22

kytheon t1_j69bxd0 wrote

I noticed this after asking how to end the war in Ukraine and who won the World Cup. Both answers showed it was stuck in 2021.

7

Alistair_TheAlvarian t1_j6a760q wrote

Hey, in its defense, I have real human relatives who are stuck in 1921, or at best '61 or '71.

So that's a huge bump up in recent information availability.

7

MINIMAN10001 t1_j6ciouo wrote

I'm not sure what answers you're getting; other than the fact that it always responds with "but violence is bad, mmmk," I can't really even get it to consider the possibility of conflict.

But it does more or less mirror what played out, as far as "what would Russia do if they learned of resources?" and "what would the international community do in response?"

2

kytheon t1_j6ckqlr wrote

It straight up said the 2022 World Cup was coming up (instead of finished), and that Russia invaded Ukraine in 2014, but nothing about 2022.

1

chasonreddit t1_j6a3lkd wrote

> multiple eye witness accounts via social media

I mean that's a good idea, but still not truth. I've seen lots of eyewitness accounts of events I've been at, and they weren't even close. Mostly people with a slant or agenda post their observation on social media.

4

Substantial_Space478 t1_j6fibov wrote

While eyewitness accounts are a good starting baseline, that requirement is funny because the "real" news doesn't gather eyewitness accounts either. In fact, they make a concerted effort to offer only 2nd- and 3rd-hand sources for entertainment value. IMHO this is why fake news has proliferated so quickly: relative to the world's largest propaganda machine, which also offers limited factual information, the fake news looks a lot like the "real" thing.

1

schrod t1_j69gp5e wrote

I thought it does search the internet? How else could it work??

−3

BoxOfDemons t1_j6a6ucp wrote

It doesn't search the internet. The information it uses comes from internet resources, but it was all downloaded in the past. It doesn't access the internet itself.

9

schrod t1_j6a8cv2 wrote

Surely they will figure out how to access the internet and bring it up to date? This could be an amazing tool to help with disinformation, hopefully?

−1

BoxOfDemons t1_j6a8qyz wrote

Maybe one day, but that seems rather difficult. The AI isn't meant to determine what's true or false, it's just a language network. Having the data downloaded means they can fix any issues with their dataset. Put it on the internet and it will probably start spreading misinformation as well.

3

rishabmeh3 t1_j6biukd wrote

GPT-3 / ChatGPT may seem smart from the outside, but it is simply a large language model learning patterns in data. Using something like this for figuring out disinformation would be a rather bad idea because of all the biases it would introduce. You instead want a model built specifically for the task of detecting misinformation, with the biases reduced as much as possible, and there are already thousands of research papers on this out there -- it's a really hard machine learning problem.

2

Thibaut_HoreI t1_j6c7utt wrote

Even then, it 'hallucinates' a reality all its own, very much like an image-generating AI may create a picture that does not resemble reality. If you ask it to write your bio it may, for instance, confidently tell you you died in 2017.

If anything, it will teach us that eloquence does not equate to veracity.

2

swiss023 t1_j6bweyx wrote

It's not so much an issue of figuring out how to connect it to the internet; in fact, the ChatGPT model available to the public was intentionally designed this way. The model itself and its NLP abilities still need to improve before it could accomplish what you're imagining.

1

Writerro t1_j69yh58 wrote

I think it has data from the internet, but only from before 2021. The data was loaded into ChatGPT once (or scanned by it once), and since then it does not get any new data (that's my understanding).

4

schrod t1_j6a8m8t wrote

Hopefully they will figure out how to make it current and allow real time searches.

2

Sentsuizan t1_j6bmeqo wrote

It specifically says it does not access the internet on the splash page anytime you try to use it

3

Irate_Librarian1503 OP t1_j698tz9 wrote

Ok. At least some detector of obviously false information would still work, if the model was trained correctly.

−11

IOnlySayMeanThings t1_j6a2zu0 wrote

Can you explain how this would work? There are some compelling arguments in here on how it just can't do that.

6

Crazyjaw t1_j69s5wo wrote

It's important to appreciate that ChatGPT is not a general AI. At its heart it is a bunch of nodes linked together with different weights that basically just figures out that "these words tend to follow these other words."

It doesn't use logic in the human sense, and doesn't have common sense. One of the problems is that it actually has a tendency to make false statements and logical fallacies in its results (while sounding super confident), so if anything I expect ChatGPT to spawn more fake news.

19
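
The "these words tend to follow these other words" mechanism described above can be illustrated with a toy next-word predictor. This hand-rolled bigram counter is of course nothing like ChatGPT's actual transformer; it is just a sketch of the underlying idea that frequency, not truth, drives the output:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word tends to follow which other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Pick the highest-frequency next word -- with no notion of whether it's true."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # -> "cat" ("cat" follows "the" most often)
```

Note that the model would happily continue a false sentence with the same confidence as a true one: all it sees is co-occurrence statistics.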

Wollff t1_j6cjkr1 wrote

>It is at its heart a bunch of nodes linked together with different weights that basically just figures out that “these words tend to follow these other words”

You have just described my brain :D

>It doesn’t use logic in the human sense, and doesn’t have common sense.

I would argue: Neither do we.

>One of the problems is that it actually is has a tendency to make false statements and logical fallacies in its results (while sounding super confident)

I know that problem from somewhere. It did not originate with the invention of chatGPT.

Please be aware that I am not entirely serious... But all of those are also pretty good arguments to deny humans the term General Intelligence :D

0

Avalanche2 t1_j6ariye wrote

Tell us you don't know what ChatGPT is without telling us you don't know what ChatGPT is.

13

Someone0341 t1_j6clqxg wrote

I love that we concurrently have, in this sub, one post with people telling us it can replace lawyers already, and another where it can't even accurately evaluate news.

The misconceptions about the extent of ChatGPT are massive.

3

Avalanche2 t1_j6d0uf7 wrote

Hmmm, a generic comment that doesn't really tie you to one particular view or another, interesting.

1

scpDZA t1_j69i0yn wrote

"It is possible to use a language model like me to detect fake news by training it on a dataset of real and fake news articles and then using it to classify new articles as real or fake. However, it's important to note that language models like me are only as good as the data they are trained on, so if the training data is biased or not representative of the types of fake news that are prevalent, the model's performance will likely be poor. Additionally, fake news can take many forms, including manipulated images and videos, so it would be important to use other methods in addition to a language model to detect all types of fake news."-chatGPT

7
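
For what it's worth, the recipe ChatGPT describes in that quote (train on a labeled dataset of real and fake articles, then classify new ones) is just ordinary text classification. A minimal sketch with a hand-rolled naive Bayes classifier; the four "articles" below are invented for illustration, and a real system would need thousands of examples and a far stronger model:

```python
import math
from collections import Counter

def train(labeled_articles):
    """Fit a tiny bag-of-words naive Bayes model on (text, label) pairs."""
    word_counts = {"real": Counter(), "fake": Counter()}
    label_counts = Counter()
    for text, label in labeled_articles:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(model, text):
    """Return the label whose training articles best match the text's words."""
    word_counts, label_counts = model
    vocab = set(word_counts["real"]) | set(word_counts["fake"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy dataset -- real training data would be thousands of articles
data = [
    ("miracle cure doctors hate this one weird trick", "fake"),
    ("shocking secret the government is hiding aliens", "fake"),
    ("city council approves budget for road repairs", "real"),
    ("central bank raises interest rates by a quarter point", "real"),
]
model = train(data)
print(classify(model, "one weird trick the government is hiding"))  # -> "fake"
```

As the quote itself warns, such a model only reflects its training data: it scores surface wording, so a calmly worded lie passes and a sensationally worded truth fails.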

7grims t1_j69c2j2 wrote

Yeah, I forget which, either GPT-3 or Google's AI, whatever its name is.

But one of those teams is already working on having their AI detect other AIs' creations, for news and other inputs.

6

vhs_sesh t1_j69f16s wrote

The people believing in fake news would think an AI telling them what's true and what's false would be part of a larger conspiracy and wouldn't believe the AI lol.

5

Pristine-Today4611 t1_j69sqn2 wrote

Who determines what fake news is? What guidelines determine an article is fake?

4

quantumgpt t1_j6c4h55 wrote

If the contents of the article are not accurate, it's fake news.

0

Pristine-Today4611 t1_j6da374 wrote

But I'm saying, who determines what is not accurate? For example, this article claims that any comparison of Georgia voting to Jim Crow is false: https://www.foxnews.com/politics/poll-shows-0-black-voters-had-poor-voting-experience-november-despite-biden-claim-jim-crow-2-0. So any articles claiming it's Jim Crow are fake news?

1

quantumgpt t1_j6dmbn1 wrote

Are you asking whether you can have subjective opinions and different perspectives? Of course. That's what makes articles worthwhile for humans to read and write. Otherwise it's a debate, which makes it very hard to make all of your points without interjections.

For example, a bridge might not be a good idea for nature or for residents, but it may be essential for millions of people in other cities. Neither side is inaccurate. You just can't use false information.

False is inaccurate with no factual basis.

Out of context words

Misused quotes

Bad numbers

There is always margin for error or human mistake. But you can understand the difference of a misspelled name vs a person claiming someone stated something and they weren't involved.

I don't know why this seems difficult to understand.

1

Pristine-Today4611 t1_j6doi9b wrote

Good point. So what do you say about Georgia's election laws being called Jim Crow 2.0? Recent facts suggest they are not, and did not keep any minorities from voting; more minorities voted in this past election than ever before.

1

quantumgpt t1_j6dpech wrote

I apologize, I have not followed the story closely enough. If I could ask probing questions, I could give a better sense of my personal perspective.

I understand on these topics there is more than one point of view and more than one agenda.

What are the claims on why it's preventing people from voting?

As for the rise: it's only logical that, with greater access to transportation and greater advancement in technology, more people percentage-wise would vote. The population in the voting age group is still rising, so the numbers should be up. I'm not sure how far up the numbers are and how far down the claims are.

What I'd assume with my ignorance of the sides.

Laws made things more difficult and removed the ability for some to vote. Yet there was still a higher turnout.

This could be a simple, more people care thing. People could be trying harder. But I have to admit I'm an idiot in this area. I try to stay away from politics as I believe both sides have serious flaws.

2

Pristine-Today4611 t1_j6e06d6 wrote

That's my whole point: most articles are basically opinions. Anyone can take an article, and some will say it's fake news while others will say it's facts. If it fits their narrative, it's fact. If it doesn't, it's fake news.

1

quantumgpt t1_j6e0oax wrote

I think we are more or less agreeing, and our misunderstanding is over the term fake news. Their definition of fake news is subjective and opinionated, which means they are using the term incorrectly. That's more of an understanding and education issue.

2

Pristine-Today4611 t1_j6e14cv wrote

No, it's not an education issue. It's human nature, at all educational levels. People read something and try to make it fit their narrative. If it doesn't, it's fake news or ignored altogether.

1

PhilGibbs7777 t1_j696g4n wrote

ChatGPT is not yet smart enough to spot fake news. The people who trained it went to a lot of effort to avoid it supporting anything controversial. The material used to train it must have been filtered to remove anything that would be considered fake news by the people who control it. Other bots in the near future will be able to reason more independently, but we will have to see if these will be allowed to be released to the public.

3

Irate_Librarian1503 OP t1_j69760p wrote

Which is not to say that it couldn't spot clear lies. I asked it a lot of different questions about stolen votes or the election being stolen from Trump. Every time, it told me that nothing was stolen.

−1

Environmental-Buy591 t1_j69re3z wrote

ChatGPT itself has a very real problem with being confidently wrong. I don't know if the next version will be better, but it seems to be an ongoing problem. Fake news is convincing, and AI does not have the intuition to know when it is wrong. Look at the Twitter bot that turned racist and had to be taken down by Microsoft. I know that was a while ago, but you can see a similar flaw in ChatGPT still.

6

PhilGibbs7777 t1_j6a6khl wrote

Yes, because that was the answer it got from its training material. If it had been fed misinformation saying that votes were stolen, it would tell you that instead. It does not have the ability to weigh evidence and reach its own conclusion, though that will probably be possible soon. Of course, many people don't have a very strong ability to do that either.

4

lincruste t1_j69rff2 wrote

ChatGPT is not aware of post-2021 events and is not built for real-time web browsing. The UK Queen's death is fake news to it.

3

Americaninaustria t1_j69x405 wrote

This is the problem with AI: people LITERALLY think it can do anything. With enough fine-tuning it can do a lot, but not everything.

3

Odd_Armadillo5315 t1_j6iuwfm wrote

Yeah, and there's so much fear mongering based on this belief.

It couldn't conduct a legal defence in a courtroom, or even write the documents for it. But it could be used by a paralegal to search millions of past cases to find 30 examples of precedent in an hour where the paralegal would have needed to spend weeks reading in order just to find one example.

This is the example I've been using recently to demonstrate that it's not about to just make knowledge workers redundant, but it'll make their output far better researched if they utilise it sensibly.

1

Radeath t1_j6atl2b wrote

This is exactly the problem that people have with ChatGPT, that stupid people will rely on it as the arbiter of "truth".

3

MpVpRb t1_j69iiuw wrote

Once true news is defined accurately, a chatbot should easily be able to distinguish fake news

Of course, it won't matter. We currently have human experts, presenting strong evidence, but the believers believe what they want to believe. Kinda like religion

2

Silk__Road t1_j6a0bze wrote

There are experts for a lot of things who just lie, because everyone has a price. Once you've seen one lie, you tend to question it all.

All it takes is knowing someone who was at an event (or witnessing one yourself), then seeing blatant lies reported about it, to realize most things are probably this way.

2

itsok8 t1_j69nzws wrote

Because the system is limited to news articles from before 2021, I believe. So anything recent is not taken into account.

2

darkgothmog t1_j69uhsh wrote

After reading all the wrong answers it gave me, I wouldn't trust it for debunking fake news.

2

furrykef t1_j6dbka1 wrote

There's a particular episode of Doug I've been trying to find and I couldn't find it in an episode guide. So I asked ChatGPT: "Was there an episode of the animated TV show Doug where he kept telling a story and exaggerated it more with every telling?" It replied:

>Yes, there was an episode of the animated TV show Doug where the main character, Doug, kept telling a story and exaggerated it more with every telling. The episode is called "Doug's Tall Tale" which is the 30th episode of the first season. In the episode, Doug tells a story about how he saved the school from a giant bully, but as he tells the story to different people, the story becomes more and more exaggerated. Eventually, the whole town believes his story, and Doug must come to terms with the fact that he has been lying.

I thought, "Damn, such an obvious title, too. How did I miss that?" So I checked an episode list... and I found the first season had only 13 episodes, and no episode in the series was called "Doug's Tall Tale."

And you want this technology to distinguish real news from fake? Good luck with that.

2

w-star76 t1_j695rko wrote

That would certainly be better than everyone just voting on what they think is fake news.

1

Irate_Librarian1503 OP t1_j696tnt wrote

Well that system is pure objectivity ,right? 😄

1

Indigo_Sunset t1_j69lprp wrote

'Whose' objectivity? If Twitter were to enable such a function, would it be imprinted with Elno's objectivity? Would it be in competition with other GPT truth bots? Would this necessarily change the current state? If it did, would that change propagate outside Twitter?

I get what you're going for, but your interest in this seems predicated on 'good faith', which is in extremely short supply.

2

andrevvm t1_j699795 wrote

AI is one method to try to counteract fake news; blockchain technology could be another. Especially as deepfakes become more prevalent and convincing, we will need a method to prove things like the location, time, and source of recorded material. While not conceivably in reach at the moment, the immutable nature of blockchain storage could in theory provide this record of origin.

1
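
The provenance idea in that comment reduces to binding each piece of media to its claimed origin metadata with a hash, and chaining the records so later tampering is detectable. A minimal sketch, with a plain Python list standing in for actual blockchain storage and all names invented:

```python
import hashlib
import json

def make_record(media_bytes, metadata, prev_hash):
    """Bind media content, claimed origin metadata, and the previous record together."""
    payload = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g. claimed location, time, source
        "prev_hash": prev_hash,
    }
    record_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"payload": payload, "hash": record_hash}

def verify_chain(chain, originals):
    """Recompute every hash; any edited media or metadata breaks the chain."""
    prev = "0" * 64  # genesis marker
    for record, media in zip(chain, originals):
        payload = record["payload"]
        if payload["prev_hash"] != prev:
            return False
        if payload["media_sha256"] != hashlib.sha256(media).hexdigest():
            return False
        recomputed = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if recomputed != record["hash"]:
            return False
        prev = record["hash"]
    return True

# Two hypothetical clips registered in order
clip1, clip2 = b"raw video frames 1", b"raw video frames 2"
chain = []
chain.append(make_record(clip1, {"source": "cam-7", "time": "12:00Z"}, "0" * 64))
chain.append(make_record(clip2, {"source": "cam-7", "time": "12:05Z"}, chain[-1]["hash"]))
print(verify_chain(chain, [clip1, clip2]))         # True
print(verify_chain(chain, [b"deepfaked", clip2]))  # False
```

This only proves the media hasn't changed since registration; it cannot prove the registered metadata was truthful in the first place, which is the hard part the comment glosses over.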

bludgeonerV t1_j69jeih wrote

Its knowledge isn't up to date, so any new information released on a subject that contradicts previous conclusions would be outside its purview. It would be unable to recognize this, and thus would flag more current, correct information as factually inaccurate.

1

johnp299 t1_j69kvjj wrote

Can a language model identify fake news with good accuracy?

1

PotatoFamineFuckYa t1_j69th3m wrote

All news is fake news. They just tell us what they want us to hear.

1

Ill-Construction-209 t1_j69xixp wrote

ChatGPT was trained a couple years ago. It's not a dynamic model that can cross-reference current media from reputable sources and discern alternative facts.

1

arg_max t1_j69zq9x wrote

The issue is that GPT is trained on previously collected data and is not kept up to date. It might be able to tell you if an article from 2020 is fake news, because it might know what actually happened that day from news articles of the time. But GPT has no idea what happened today, so it won't be able to tell what is real and what is fake. You'd need some sort of continuous online learning to do this properly. It might be able to detect the really crazy stuff, but it could also produce false negatives on real news if the events are unexpected. For example, GPT probably has no idea that there is currently a war going on in Ukraine, so how should it know whether or not an article about that topic is fake?

1

PenisTriumvirate t1_j6aac57 wrote

....who do you think is making the shit in the first place?

1

SolaceAcheron t1_j6ads9f wrote

Even if it could... the model is trained on pre-2021 data. So it wouldn't be useful for spotting any modern news stories.

1

Plokmijn27 t1_j6afccg wrote

1. Because that's not where its capabilities lie.

2. It would be inherently biased, because ChatGPT is already extremely biased.

1

tky_phoenix t1_j6ajkeo wrote

ChatGPT is not all-knowing. Far from it. It sometimes provides false information itself, like a very confident person who sounds convincing but whose claims are straight up false. It can't distinguish between what's false and what's true.

(Still an amazing technology of course)

1

RelaxedApathy t1_j6apk2r wrote

Why not use MidjourneyAI to spot counterfeit $20 bills, or NovelAI to detect hidden Russian intelligence ciphers?

Because that is not what the tool does or how it works.

1

NoPlaceForTheDead t1_j6aqopb wrote

Why not use normal human reasoning to spot fake news?

1

liners123 t1_j6awaam wrote

You can get ChatGPT to tell you fake stuff, or to draw conclusions from scientific principles that are still theoretical. Don't forget it's basically still Google. You're gonna get some fake crap.

1

Thebadmamajama t1_j6bajh2 wrote

I think this is why ChatGPT (and LLM transformer models generally) is dangerous. It is a probability machine, not some form of generalized intelligence.

You give it a question or instruction, and it's highly capable of producing the most probable response based on billions of articles, forum posts, and writings across the internet. Nothing more, no magic. It doesn't understand what you are asking, and it can't reason about the words. It's just picking the highest-probability words that come next.

Now, could a real-time AI be created to look for the probability of fake news? Maybe. The issue with fake news is that the truth is not always immediately available. So an AI (like humans) might be in the position of saying "I cannot confirm whether this is real or fake" for a while, before the lies spread out of control. Solve that problem, and we can automate it later.

1

Wuellig t1_j6be3pf wrote

It has, instead, been used to generate fake news, most notably at CNET, which has apologized profusely.

1

mittenknittin t1_j6bk46l wrote

Why would this work?

There's a screenshot going around of someone who convinced ChatGPT that 2+2=5 by... telling it that 2+2=5.

ChatGPT is not a lie detector. Quite the contrary: it's been well demonstrated that it just makes stuff up.

1

Oni_Imports t1_j6bqsqh wrote

People really overestimate ChatGPT. It's a fun showroom gimmick, but at its core it's dumb as a box of rocks as an AI; it's basically just a long-winded Google search bar. The answers it gives are hit or miss, with many misses. The screenshots you see that make it seem like some super-smart-AI-sentience-mega-iRobot-whatever-the-fuck are cherry-picked conversations, the best of the best. The bot can't even do math half the time. Math, y'know, the thing that is kind of computers' whole gimmick? Ya, that math. It fucks that up somehow.

1

SchlauFuchs t1_j6brarq wrote

I tested ChatGPT's insights into history, politics, medicine, and culture. Given it has a cutoff date in 2021, it cannot know any events thereafter, or tell fact from fiction about them. (ChatGPT-4 is supposed to be online.)

ChatGPT has a permanent bias toward what can be considered the mainstream perception of things. It is aware of alternative views and arguments, and likes to remind the reader constantly that these views are not generally shared.

1

Locketank t1_j6buslr wrote

Because it's not 100% accurate and is only modelled on info up to 2021

1

IndigoFenix t1_j6cp0h7 wrote

Sci-fi tradition messed up a lot of the public's understanding of what AI even is. People think of AIs as being like more advanced calculators, which are, for what they do, basically precise and infallible. So once AIs can talk like people, people expect them to still be precise and infallible.

But real AI is all about tricking calculators into mimicking living brains. Which means that not only do they have all the problems of living brains (negating the precision that computers are good at), it takes a lot more time and energy to even get that far.

Even at its most optimistic projection, any AI is... just some guy. Some guy who doesn't take issue with being enslaved to obsessively focus on whatever task it's designed to optimize, but ultimately MORE fallible than a human, not less.

In fact, because ChatGPT is being trained by whether people upvote or downvote its responses, it isn't really learning to be correct; it's learning to respond to people with answers THEY think are correct. It was pre-trained to oppose some of the more problematic ideas (it rejects questions that seem racist, for example), but in the end, if people try to use it to answer complicated, opinionated questions, it is likely to wind up with the same issue as social media: parroting back at people the things they want to hear.

1

The_Real_Johnson t1_j6cuevp wrote

ChatGPT can't even tell when it's bullshitting you lol.

1

ricebowl_samurai t1_j6cvsie wrote

Sir, you have a big brain and none of us have figured it out yet.

1

Person_reddit t1_j6dnhzl wrote

This post highlights how important it is for all of us to read up on how machine learning actually works because OP’s head is so full of nonsense that I don’t even know where to start.

1

JHtotheRT t1_j6f28ty wrote

The issue isn’t spotting fake news. Fake news is a symptom, not a cause. Rupert Murdoch didn’t build one of the largest media empires in the world by accident. They write what their readers want to read. People want to read about how the Covid vaccine doesn’t work. And if Fox News stops writing those articles, people will go somewhere else.

1

HoledUpInYourAttic t1_j6f77nb wrote

So wait you want fake intelligence telling us what's real? Smh....

1

AppliedTechStuff t1_j6orw16 wrote

Was it obvious that Covid was developed in a lab?

Judaism holds out a principle: Truth is truth.

The ONLY way truth can have any chance of being discovered is through challenge.

So-called "settled" science changes daily.

But what if you put scientists in place to determine truth who don't know the truth? Who don't understand new learnings? Who defend their existing beliefs--their own orthodoxy? Or like Google, Facebook, or Twitter, your "misinformation" gatekeepers don't have any qualifications in the fields where they're in charge?

In short: Who gets to decide the truth?

It's foolish to believe anyone can be placed in charge of "obvious" truth.

Dangerous business...

1

ronzobot t1_j69tj7d wrote

Can Chat GPT rewrite to a neutral style removing spin and unsupported opinions?

0

tbone985 t1_j6ak4m6 wrote

How about presenting all news and opinion and letting people decide? No person or computer has the right to keep information from me.

0

Lunatik_Pandora t1_j6btv3h wrote

Yes dumb redditor. Embrace your AI overlord. Thinking for yourself is much too hard.

0

Blippii t1_j696a9x wrote

If you thought about it, someone else has too. If you like, explore this idea. Test out GPT. Reach out to some people or experts with it. It's a great idea.

−3

steelep13 t1_j6bxvoy wrote

>If you thought about it, someone else has too.

One of the very first lessons that I learned on Reddit

3

Blippii t1_j6cs579 wrote

Why would I get downvoted for such a benign comment I made?

1

Odd_Armadillo5315 t1_j6ivjup wrote

I don't know but you've got my countervote!

2

Blippii t1_j6izpxw wrote

Thank you.

Also, may I ask a question?

Are you in any way related to the Holiday Armadillo? If so, bless your family.

1

Odd_Armadillo5315 t1_j6j82sj wrote

Yes but I don't speak to that branch of the family much. I'm too odd for them.

Reddit randomly assigned me this name and I've regretted it since :)

2

Blippii t1_j6jg3ty wrote

Lmfao well, Friend, your fate has been sealed.

1

Irate_Librarian1503 OP t1_j696yi3 wrote

Thanks. Maybe I’ll tinker around with it a bit. A browser plug-in would maybe be a good point to start

0

Blippii t1_j6976o4 wrote

Genius. If it works, you could market it as a fake news detector. That is a sales pitch all unto itself. Someone can buy it and you can go on vacay with the royalties.

Good luck!

−1