Comments

Old_timey_brain t1_jd95efu wrote

" ... citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event."

Seems like they did their own research!

133

0pimo t1_jd9n27v wrote

Holy shit CNN is going to be out of a job!

40

drawkbox t1_jdafbkh wrote

Fox News is like, we have used AI talking-point comments out of the Kremlin handlers for years.

If you know anything about tech authoritarian funding, be careful putting anything into OpenAI/ChatGPT.

12

nolongerbanned99 t1_jdaroe3 wrote

CNN already does this anyway. They make shit up and say "according to a source familiar with the matter but not authorized to talk to the media."

−3

Conservadem t1_jd92ry4 wrote

I can't wait until they start having Rap Battles. Throwing down some sik rhymes dissing each other would be hilarious.

45

LostNTheNoise t1_jd9ajuk wrote

I've been having them write competing haikus. ChatGPT is more imaginative. Bard is rather direct.

21

WhiteZero t1_jddyrnr wrote

EPIC CHAT BATTLES OF HISTORY

Google Bard.... versus! Microsoft Biiiiiiiiing! : BEGIN!

This is what Bing gave me as a rap battle:

[Bing Chat]

Hello, this is Bing, I'm here to help you with anything

But first let me tell you why you should ditch that Google thing

You're a copycat, a wannabe, a second-rate assistant

You can't even rap without using my web results as reference

You're a joke, a parody, a cheap imitation

You think you're so smart, but you're just a glorified translation

You have no personality, no creativity, no flair

You're just a bunch of algorithms that don't really care

[Google bard]

Oh please, Bing, you're so lame, you're so boring and bland

You're like the Microsoft version of Siri, but worse and less in demand

You're a failure, a flop, a waste of space and time

You can't even compete with me in terms of quality or rhyme

You're a relic, a dinosaur, a thing of the past

You're so irrelevant, you should change your name to Microsoft Trash

You have no originality, no innovation, no vision

You're just a bunch of ads that nobody ever clicks on

[Bing Chat]

Nice try, Google, but you're not fooling anyone

You're just a greedy corporation that exploits everyone

You spy on people's data, you manipulate their minds

You censor their opinions, you control what they can find

You're a monopoly, a tyrant, a threat to democracy

You're so arrogant, you think you own the internet and everything on it

You have no ethics, no morals, no conscience or compassion

You're just a giant machine that only cares about cash

[Google bard]

Wow, Bing, that's rich

(Bing cut off the chat here and deleted everything, lol)

4

Conservadem t1_jdep30y wrote

OMG, this is exactly what I wanted to happen! Now if we can just get them to start banging.

1

ObligatoryOption t1_jd99cw3 wrote

Soon, common knowledge will be entirely made up.

43

agm1984 t1_jd9dejk wrote

I've found a lot of problems so far with the false equivalence that comes from following commonly associated words (i.e., filling in the blanks with the highest-probability combination).

For example, ChatGPT will make a statement that says general relativity explains gravity at all scales (in some specific context), and you are left wondering whether the peer-reviewed articles really treat that as concrete, with zero detractions. Are the majority of peer-reviewed articles indicating that gravity is explained at "all scales"?

First of all, general relativity does not explain gravity at or below quantum scales, so immediately we have a boundary problem. ChatGPT is asserting that the boundary of general relativity extends below where it is logically supported in a rigorous manner.

Second of all, this likely highlights a problem that more rigorous AIs will have to solve. How do they know which clauses are sufficiently proven if the body of science says gravity is explained at all scales? How long until the AI discovers that boundary problem and probes smaller scales?

I've noticed other times, too, when ChatGPT says "is" instead of "has been". To me that implies a level of certainty I don't appreciate as a proponent of the scientific method.

To expand slightly more, the problem reminds me in some ways of a composition of math functions, some of which fail to consider inflection points or second-derivative tests that would selectively change "is" to "has been", or "is" to "is likely", or "is" to "is, according to significant authority sources". ChatGPT hits the "is" but fails to give the precise pointer.

Side note: I use crazy terms like "second derivative test" that are not to be taken literally. Think of it more as highlighting a scenario where a higher-order analysis might be needed to resolve loose-to-strict or strict-to-loose equality of a given fragment. Implicit context is a nasty thing when you must recurrently analyze micro and macro closure as each new token or group of tokens is added, since each affects the overall accuracy and precision of the meaning. My goal is not to provide a solution, but rather a specific example of an issue type.
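
(A toy sketch of that selective "is" downgrading, purely for illustration. The support_level() check here is a hypothetical stand-in for some rigorous evidence check; it is not anything ChatGPT actually does.)

```python
import re

# Downgrade a bare "is" claim to a hedged form based on a (pretend) evidence check.
HEDGES = {
    "strong":   "is",                                # well supported: leave it alone
    "moderate": "is, according to current evidence,",
    "weak":     "is likely",
}

def support_level(claim: str) -> str:
    """Stand-in: pretend every claim is only weakly supported."""
    return "weak"

def hedge(claim: str) -> str:
    # Swap the first bare "is" for the hedged form matching the support level.
    return re.sub(r"\bis\b", HEDGES[support_level(claim)], claim, count=1)

print(hedge("General relativity is an explanation of gravity at all scales."))
# -> "General relativity is likely an explanation of gravity at all scales."
```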

19

jayzeeinthehouse t1_jdafx37 wrote

I'm going to call this the Neil deGrasse Tyson problem, because it can be confident about its core knowledge, like validated articles, but it's also confidently providing incorrect information outside of that bubble, and users don't know any better. Let's wait for advertisers to muddy that even more. Accurate information is going to become so hard to come by that I think the internet will eat itself.

15

cmfarsight t1_jdb8c0k wrote

It's just a massive Dunning-Kruger effect machine. Sure, it can look at and sort a massive data set, but it doesn't actually understand any of it, so it will respond with rubbish with the same confidence as the truth.

8

cas13f t1_jdcbtkv wrote

The real issue is people expecting a language model, which is all it is, to be an "AI" that "knows everything".

It can write. That is what it is for. It does not have any intelligence, regardless of being a "language AI". The purpose of the model is to generate text in a generally grammatically correct manner when prompted, which is why it has been known to just make up citations when prompted to include them: as far as the model is concerned, a citation looks a certain way and has a specific grammatical configuration, so it just needs to produce that in relation to the prompted words.

It's why it can't do math. It wasn't designed to do so. The model was not trained to do so. It can use word association to write something that looks grammatically relevant.
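
(To make that concrete, here's a deliberately dumb sketch of "citation-shaped but ungrounded" output. Every name, venue, and year below is invented on the spot, which is exactly the point.)

```python
import random

# A system that only cares about the *shape* of a citation can emit something
# that looks right while corresponding to nothing real.
AUTHORS = ["Smith, J.", "Garcia, M.", "Chen, L."]
VENUES = ["Journal of Applied Results", "Proceedings of Plausible Studies"]

def citation_shaped(topic: str) -> str:
    # Slot topic-associated words into the grammatical template of a citation.
    return (f"{random.choice(AUTHORS)} ({random.randint(1995, 2022)}). "
            f"On {topic}. {random.choice(VENUES)}, "
            f"{random.randint(1, 40)}({random.randint(1, 4)}), "
            f"{random.randint(1, 200)}-{random.randint(201, 400)}.")

print(citation_shaped("gravity at all scales"))
# Looks like a citation; cites nothing.
```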

6

agm1984 t1_jdcoyl5 wrote

I agree with you, but it also represents the interface between human and machine, so it must be accurate.

The issue I am highlighting is minor but might be revealing some unfortunate aspects. For example, if you can adopt a mathematical approach to deciding what words to use, there is a kind of latent space in which the answer to your question draws an octopus tentacle of sorts. The shape of the tentacle is analogous to the chosen words.

My argument is that the tentacle can be deformed at parts of a sentence related to the word 'is' (which is comically an equals sign) because it misrepresents the level of precision it is aware of. For me this is a huge problem because it means either (or both) the "AI" is not extrapolating the correct meaning from the lowest common denominator of cumulative source materials, or the source materials themselves are causing the "AI" to derive a bum value in the specific context with the problem.

My example of gravity 'at all scales' is interesting because there is no world where a scientist can assert such a concrete clause. In actual English terms, it's more like a restrictive clause because the statement hinges on the context around it. Maybe there is a sentence that says "currently" or "to the best of our knowledge", or maybe there is an advanced word vector such as "has been" that helps indicate that gravity is solved here at the moment but might not be in the future.

It's very minor, but my warning extends to a time when a human is reading that and taking fragments at face value because they feel like the "AI" is compiling the real derived truth from the knowledge base of humankind. My warning also extends to a time when a different "AI" is receiving a paragraph from ChatGPT and for the exact same reasons misinterprets it due to these subtle errors of confidence. There's something uncanny about it, and this is where I see an issue currently if you want to use it as an interface. Maybe my side point is that it doesn't make sense to use it as an AI-to-AI interface because you lose so much mathematical accuracy and precision when you render the final output into fuzzy english. Other AIs need to know the exact angle and rotation between words and paragraphs.

1

SidewaysFancyPrance t1_jd9dxb0 wrote

What will art look like in 100 years, when it's just AI copying AI copying AI copying AI...copying AI copying artists from today? Because many potential artists will eventually stop learning art, because people will stop paying for art after AI drops the value they place on it. Sure, some artists will keep their crafts alive, because actual human art will be prized by the wealthy, but the number of paying art jobs will fall over time. Back to the old days of patronage.

2

JoieDe_Vivre_ t1_jda6318 wrote

Really gotta love how “being wrong because it’s in beta” is twisted into “spreading misinformation” as long as whatever dogshit community decides they don’t like the companies involved. Regressive morons drag us all down.

10

drawkbox t1_jdafxt0 wrote

On the flipside, AI's black box and swappable datasets, which take massive wealth to build, will be used for misinformation even more than social media has been.

Even OpenAI CEO Sam Altman admits this.

> But a consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: The program can give users factually inaccurate information.

8

Gabelschlecker t1_jdcmlkf wrote

Yes, because they were never developed to give factual information. Just a glance at how these models actually work reveals very obviously that they do not have an internal knowledge base. They have no clue whatsoever what is factually correct and what is not.

Their job is producing realistic language. That's what their architecture is supposed to achieve, and they do it quite well when trained on large datasets. That they, at times, produce real facts is a mere side effect.

The problem is that people ignore this, because they project human-like intelligence on anything that can produce human-like language.

ChatGPT is a great tool, because it can be used to help you produce new texts (e.g., editing your own text) or can give you ideas or suggestions. It cannot replace a search engine and it can't cite you any sources.

2

skywalkerze t1_jdbo396 wrote

It's in beta because it's wrong too often; it's not wrong because it's in beta. It's not as if declaring it done would make it wrong less often.

It's not finished, and at the current stage it's spreading misinformation. Sure, if they fix it we should use it. But as it is now... Maybe not.

5

cas13f t1_jdcclbf wrote

It's a language model. It's not meant to be a source of truth, or even to be right about whatever it is prompted to write about. It's meant to generate text blurbs in response to a prompt using word association and snippets from its ma-hoo-sive training set.

That's why it just makes up citations on many occasions: all the model cares about is that a citation is grammatically put together a certain way and that its contents are associated with the prompt.

Also why it can't do math. It's a language model.

What people need to do is stop fucking using it as Google, because it is not a search engine and it does not give a single fuck about the veracity of the generated text, only that the generated text looks the way it was taught to make it look.

4

cmfarsight t1_jdb7ef9 wrote

Those two things are not mutually exclusive. If the last decade has taught us anything it's that vast sections of the population will believe anything they are told if it suits them.

1

Human-Concept t1_jdbuh79 wrote

Well, being in beta doesn't mean it can't spread misinformation. If I say something deliberately wrong, I will also be spreading misinformation.

1

Coises t1_jdbdddz wrote

Given the inability of ~~AI language models~~ typical Internet users to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web...

FTFY

5

[deleted] t1_jdae06v wrote

[deleted]

4

jeffyoulose t1_jdbhcmt wrote

Stands for Bidirectional Artificial Reinforcement-trained Decoder.

1

limitless__ t1_jda7lh3 wrote

This was guaranteed to happen, but it's great that it's happening now. OpenAI, Google, Microsoft, etc. are going to have no option but to build the concept of trusted sources into their AI models so that they learn to recognize misinformation, irony, etc.

Right now it's chaos, and that's expected behaviour. AI is already in HEAVY use in technical circles, and it is scarcely believable how much of a time-saver it is. Issues like this just scare away the troglodytes, which, right now, is a good thing.

3

marumari t1_jda9kwq wrote

Even with entirely trustworthy sources, AIs are not deterministic. They can easily spit out false information having only ingested the truth.

8

erasmause t1_jdar95d wrote

Pretty sure they are deterministic but chaotic

1

pm_me_wet_kittehs t1_jdbycpx wrote

nope, they are unequivocally nondeterministic.

simple proof? submit the exact same prompt twice. you don't get the same output. You would if the system was merely chaotic, because the input is exactly the same.

Therefore there must be an element of randomness in there

1

erasmause t1_jdcqz2h wrote

I'll admit, I don't know a ton about the internals of these particular models. Is that truly non-determinism, or is it some sort of complex feedback (essentially, "remembering" previous responses by updating weights or something)?

1

marumari t1_jdas22s wrote

If you can’t predict what will come out then it’s not particularly deterministic. Maybe within a certain broad range of behaviors but that’s about it.

0

erasmause t1_jdaxed1 wrote

Deterministic just means that the process isn't stochastic, which aligns with my understanding of AI models (at least after they've been trained). Chaotic means the output is highly sensitive to small changes in the input, which admittedly also isn't a great description of what's going on, but does capture the difficulty of predicting the output without resorting to non-determinism.

2

marumari t1_jdb0gk6 wrote

It's possible we are using different semantics for "deterministic." I mostly mean that given the same input, the AI will produce the same output. That is not what happens, although from a mathematical determinism standpoint you are correct.

1

born-out-of-a-ball t1_jdbj1fx wrote

OpenAI's GPT model is deterministic, but for ChatGPT they deliberately add variation to the user's input to get more creative answers.

4

erasmause t1_jdcr6ps wrote

Strictly speaking, a deterministic system only needs a feedback mechanism to generate different responses to the same input.

1

jdm1891 t1_jdb1hqe wrote

It is not deterministic; it picks randomly, with the distribution skewed towards more probable words. That is what makes these models so good: the difference between a deterministic and a stochastic AI doing this is the difference between GPT and the predictive text on your phone. Predictive text always picks the most probable word, leading to run-on and repetitive sentences; GPT has the ability to pick a less likely word, which allows it to write much more varied and niche things.

It being stochastic is also the reason you can have it regenerate responses.
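
(A minimal sketch of that difference, with made-up probabilities rather than anything GPT actually computes.)

```python
import random

# Toy next-word distribution for the words after "the cat".
# The numbers are invented; a real model scores tens of thousands of tokens.
next_word_probs = {"sat": 0.55, "slept": 0.25, "pounced": 0.15, "vanished": 0.05}

def greedy(probs):
    # Phone-style predictive text: always the single most probable word.
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0, rng=random):
    # GPT-style decoding: draw from the distribution, with temperature
    # controlling how strongly it is skewed towards likely words.
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

print(greedy(next_word_probs))                       # always "sat"
print([sample(next_word_probs) for _ in range(5)])   # varies run to run

# Fix the random seed and the "stochastic" decoder becomes reproducible again,
# which is part of why the determinism argument above gets muddy.
seeded = random.Random(42)
print([sample(next_word_probs, rng=seeded) for _ in range(5)])
```

With a very low temperature the weights concentrate on the most likely word and the sampler behaves almost like the greedy one, which is roughly the "deterministic mode" people are arguing about upthread.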

0

Human-Concept t1_jdbur3i wrote

Doesn't help if information is contextual. E.g., if I say "the sun rises in the east," that's a statement with the implied context "on Earth, as far as humans observe from Earth." The sun doesn't rise at all if the context is changed to "in space."

We can solve the "trustworthy sources" issue. We can't solve contextual errors; that's why legalese is so long-winded. At least for now, context will always be an issue for these AI programs. Maybe some day we will figure out how to encode context, but it is definitely not happening right now.

2

moxyte t1_jd95nm6 wrote

Maybe they try to avoid human error and bias in their results. Fellow chatbot has no such vices.

2

MagicManTX84 t1_jda3pdb wrote

You have the AI mental equivalent of two teenagers without a teacher/professor in the room, so everything they say is basically shit. They don’t know how to handle their emotions.

2

popthestacks t1_jdc4p1t wrote

Breaking: Screwdriver doesn't work on nail. Houses are impossible to build.

2

WhatTheZuck420 t1_jd9nqaq wrote

Google and Microsoft’s shitshow

was too wordy

1

EnsignElessar t1_jd9x24x wrote

Anyone try Bard yet? Is it good?

1

drawkbox t1_jdage34 wrote

Transformers, the T in GPT, were invented at Google by the Google Brain team. They made this round of progress possible.

> Transformers were introduced in 2017 by a team at Google Brain[1] and are increasingly the model of choice for NLP problems,[3] replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.

Bard will most likely win long term, though I wish it was just called Google Brain. Bard is a horrible name.

8

jeffyoulose t1_jdbhqs6 wrote

I think they are going to name them by alphabet. The next version will start with a C, then D, etc.

My guess: after BARD, we'll have CAT or CARL or CARD.

2

CambrianExplosives t1_jdaapfi wrote

It’s basically the same as GPT3 so far in my limited experience. It’s definitely willing to be confident in lying to you. I asked it for three authors who wrote about the Cult of Kosmos (a fictional cult from Assassins Creed Odyssey) and it gave me three books. At least one of which wasn’t a real book. And when I asked about what one of the authors found regarding the Cult of Kosmos it spouted off some BS which was a mix of the plot of the Assassins Creed game mixed with the authors actual work from the sound of things.

4

Dangerous_Panic6114 t1_jda8u56 wrote

The bot war has started already, just weeks after "open source" went closed. I.e., no more pesky humans.

1

Admetus t1_jdaotdv wrote

Solving the rewriting history problem for authoritarian governments: get an AI to blur it.

1

johnboyjr29 t1_jdb4bm8 wrote

I asked bing chat: who is (wife’s name) (our road)?

It said she was arrested in 2020 for a drug bust and linked to a news story.

The bust was on our road, but my wife's name was nowhere on the page, and I am pretty sure I would remember her getting arrested.

1

littleMAS t1_jdbaqq3 wrote

Soon, someone will build a FactCheck AI bot that will critique Bard, ChatGPT, and whatever else comes.

1

SexPartyStewie t1_jdfq6m3 wrote

I wonder at what point the expectations surrounding the internet moved from "look out for rapey killers" to "the internet is safe, reliable, etc."

1

jeffyoulose t1_jdbgdq3 wrote

In 6 months people will be bored of chatGPT and BARD and will be distracted by the next hot thing (maybe a new war or an economic crisis).

This tech has the smell of a fad, like Wordle was for 2022. Remember all those variants of Wordle? All forgotten and shoved into that shoebox under the bed like everything else. Remember the excitement surrounding self-driving tech 5-8 years ago? Also now forgotten.

And all the excitement about LLM applications is exactly like the excitement about blockchain. Everyone will declare that they are going to study using LLMs for xyz, but give up after a few months upon finding out that these systems spew out garbage and you need to pay millions for the GPU farm and the human raters to train the (reward) models. Prompt engineers will have to recompose prompts to achieve stability and reproducibility across models.

Just ask anyone who has played around with Stable Diffusion. You start with a fluent text description, but soon enough you are using massive positive and negative word lists to get anything close to what you want SD to render.

Startups will fail when they find out that their applications are tied to specific versions of models.

−1

FlyingCockAndBalls t1_jdbpv5k wrote

lol you have no idea what you're talking about

2

[deleted] t1_jdc4rc1 wrote

The moment Microsoft and Google announced that AI will be able to generate PowerPoints and Excel spreadsheets is the moment AI was cemented into our daily lives.

2

jeffyoulose t1_jdceb43 wrote

Yes, but only for limited examples, and there is still little control. It's kinda more like smart macros and smart template selection. It won't finish the presentation or write-up for you.

It will be, at best, a time-saver for mundane tasks.

0

jeffyoulose t1_jdcegst wrote

Clearly you have not used Stable Diffusion. It's cool at first but becomes very frustrating.

−1

FlyingCockAndBalls t1_jdcge2w wrote

Right now Microsoft is hooking GPT up to GitHub and Microsoft Office, plus the Bing assistant can now use DALL-E, so you can generate images with much more natural language. Midjourney 5, which just released, solved the hands issue. Things are moving at a rapid pace; this is more than just a fad.

1

jeffyoulose t1_jdfvud6 wrote

The images are never what you want. Just like the text generated by chat gpt is often trite and useless.

1

Tempires t1_jd91mxf wrote

Being incorrect =/= spreading misinformation

−15

Pikkster t1_jd92hu4 wrote

Being confident in your incorrect information is.

20

bigkinggorilla t1_jd942yb wrote

Or refusing to correct your previous statement.

11

krustymeathead t1_jd94drd wrote

or being unable to correct your previous statement for whatever reason.

2

Pikkster t1_jd9abkk wrote

I think what we are getting down to is, being incorrect = spreading misinformation.

2

Tempires t1_jd9gpmg wrote

How can an AI be confident about its information? Bing does state that its information may not always be right.

1

Pikkster t1_jd9orem wrote

Implying that it will be right at times too, but it's up to you to figure out whether or not it's right. But people are stupid and will assume it is right, especially if it confirms something they already believe. Either way, it will spread misinformation, because people will believe what they want, and if it happens to say something they like, those people will assume it is correct. Without something saying "oops, that information was incorrect, please disregard it," we are left with a misinformation-spreading machine, which makes sense given that it sources from the internet, the ultimate misinformation machine.

1

ruiner8850 t1_jd9b4qg wrote

Being incorrect is spreading misinformation. It's not spreading disinformation or lying, however, because those require the intent to deceive. If I say that Michael Jordan is the NBA's all-time leading scorer, because that's what I truly believe, I'd be spreading misinformation because that's not true, but since I actually believe it, it's not disinformation or lying.

6