JoieDe_Vivre_ t1_jda6318 wrote

Really gotta love how “being wrong because it’s in beta” is twisted into “spreading misinformation” as long as whatever dogshit community decides they don’t like the companies involved. Regressive morons drag us all down.

10

drawkbox t1_jdafxt0 wrote

On the flip side, AI's black boxes and swappable datasets, which take massive wealth to build, will be used for misinformation even more than social media has been.

Even OpenAI CEO Sam Altman admits this.

> But a consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: The program can give users factually inaccurate information.

8

Gabelschlecker t1_jdcmlkf wrote

Yes, because they were never developed to give factual information. Even a glance at how these models actually work makes it obvious that they don't have an internal knowledge base. They have no clue whatsoever what is factually correct and what is not.

Their job is producing realistic language. That's what their architecture is supposed to achieve, and they do it quite well when trained on large datasets. That they sometimes produce real facts is a mere side effect.

The problem is that people ignore this, because they project human-like intelligence onto anything that can produce human-like language.

ChatGPT is a great tool because it can help you produce new text (e.g., by editing your own writing) or give you ideas and suggestions. It cannot replace a search engine, and it can't cite you any sources.
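
If it helps to see what "producing realistic language" (and nothing more) actually looks like, here's a minimal sketch using a small open model, GPT-2 via the Hugging Face transformers library, not ChatGPT itself. The only thing the model emits is a probability distribution over next tokens; there is no step anywhere that checks facts:

```python
# Minimal sketch, assuming GPT-2 via Hugging Face transformers (a small open
# model, not ChatGPT itself). A causal language model's only output is a
# probability distribution over the next token -- nothing here checks truth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over possible *next* tokens after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob:.3f}")
# The top candidates are simply the continuations that were statistically
# likely in the training data; nothing prefers the factually correct answer
# over a merely plausible-sounding one.
```

Everything ChatGPT does is built on sampling from distributions like this, just at a much larger scale and with extra fine-tuning on top.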

2

skywalkerze t1_jdbo396 wrote

It's in beta because it's wrong too often; it's not wrong because it's in beta. It's not like declaring it done would make it wrong less often.

It's not finished, and at its current stage it's spreading misinformation. Sure, if they fix it, we should use it. But as it is now... maybe not.

5

cas13f t1_jdcclbf wrote

It's a language model. It's not meant to be a source of truth, or even to be right about whatever it is prompted to write about. It's meant to generate text blurbs in response to a prompt, using word association and snippets from its ma-hoo-sive training set.

That's why it just makes up citations on many occasions: all the model cares about is that a citation is put together a certain way and that its contents are associated with the prompt.

Also why it can't do math. It's a language model.

What people need to do is stop fucking using it as Google, because it is not a search engine and it does not give a single fuck about the veracity of the generated text, only that the generated text looks the way it was taught to make it look.
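
On the math point specifically, part of the problem is visible just from how the input is tokenized. A rough illustration with GPT-2's BPE tokenizer from Hugging Face transformers (ChatGPT uses a different tokenizer, so treat this as an analogy):

```python
# Rough illustration, assuming GPT-2's BPE tokenizer (ChatGPT's differs).
# The model never sees "numbers" -- only subword tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

print(tokenizer.tokenize("What is 48293 * 1907?"))
# Multi-digit numbers typically get chopped into arbitrary chunks, and the
# model's job is only to predict a plausible next chunk -- pattern matching,
# not calculation.
```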

4

cmfarsight t1_jdb7ef9 wrote

Those two things are not mutually exclusive. If the last decade has taught us anything, it's that vast sections of the population will believe anything they are told if it suits them.

1

Human-Concept t1_jdbuh79 wrote

Well, being in beta doesn't mean it can't spread misinformation. If I say something deliberately wrong, I will also be spreading misinformation.

1