
anti-torque t1_j8svc2a wrote

ChatGPT is simply a predictive algorithm.

It can't discern between truth and falsity. It can only search out the most common next word for the context asked.
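To make that "most common next word" idea concrete, here's a toy sketch built from raw counts. The mini-corpus is made up, and the real model is vastly more sophisticated than this:

```python
from collections import Counter, defaultdict

# Toy "most common next word" predictor built from a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs "mat"/"fish" once each)
```

A real language model conditions on far more than the single previous word, but the "predict the likely continuation" framing is the same.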

183

Suolucidir t1_j8sxl9j wrote

Exactly. This is absolutely the truth of the situation for ChatGPT, which is undoubtedly a lifeless machine that we real humans know for certain is not at all alive nor "thinking" the way we ourselves do.

Using my natural cognitive methods, I am pretty sure it commonly follows next that your statement is also absolutely the truth for real fleshy hoomans who breathe colorless atmospheric gas at room temperature like you and me, my fellow meaty hooman friend person.

Do you agree that breathing is so satisfying at the appropriate temperature for respiration, which I prefer to be room temperature or a comfortable ambient temperature, generally taken as about 70°F?

Fahrenheit is named after Richard Francis Fahrenheit, an Italian scientist born in the British Commonwealth in 1686. He was the first person, and I am also a real hooman person, to create a reliable way to consistently tell the temperature.

41

YouAreOnRedditNow t1_j8vkn8i wrote

Finally, someone talking some sense in this thread! Incidentally, a thread is a thin bit of string typically used in textiles.

19

nalninek t1_j8trl4x wrote

So it’s just a better version of the chat bots that’ve been around for a decade?

36

-_1_2_3_- t1_j8vfx9k wrote

Conceptually, sure. Algorithmically, and in the scale of training, it’s different.

28

hrc70 t1_j8wz2lv wrote

That might be putting it mildly.

Watch Tom Scott's latest video for an example of what this can actually mean; it's far more significant than that simple description suggests: https://www.youtube.com/watch?v=jPhJbKBuNnA&feature=youtu.be

The tl;dr is that it is doing useful work.

5

SevereAtmosphere8605 t1_j8xirn2 wrote

That video link is the golden nugget from Reddit for me today. Thank you so much for sharing it. Wow! Just wow!

3

Avoidlol t1_j8vw1k2 wrote

How is this description any different from humans? We associate ideas, words, and concepts through experience and learning, and we also try to predict, with less and less accuracy compared to AI, and this is just the start.

If anything, I'd argue that humans are a worse ChatGPT, because at least with an AI we know what it has been trained on, whereas humans are all individuals with individual experiences, so we are much more inconsistent. So to me it seems we also cannot discern between truth and falsity; we just use our own judgment based on what information is out there and which sources seem reputable. And our information is not only inconsistent but also biased.

Everything you know is something someone else has told you in one way or another, or something you experienced yourself. The problem with fabricating understanding from anecdotal experience is that it's worse than learning the entire English language and then being trained on factual data, which creates a fuller understanding.

There's a reason why AI is already surpassing the average person in a wide range of subjects. Assuming this trend continues, at what point will people stop finding excuses to hate on it?

Perhaps in the future, most will realize that humans are biased, uninformed, ignorant, and factually incorrect compared to an AI.

I hope I made sense 😀

17

donniedenier t1_j8w45sr wrote

that was very well put.

it’s like when someone points out a mistake chatGPT made, you’ve got a ton of people going “hah! knew it. it’s just a dumb chat bot” then immediately going on twitter to argue that hillary clinton was killed in guantanamo bay and replaced with a body double controlled by george soros.

chat gpt, at its current early beta stage, is already more clever and intelligent than arguably any one human on the planet.

9

Captain_Clark t1_j8w5rrr wrote

ChatGPT is so intelligent it could be running on a server that’s literally on fire and ChatGPT wouldn’t know it.

It’s a pretty narrow definition of “intelligence” to suggest it includes no awareness of oneself or the world at all.

If I was on fire and about to be run over by a train while I strung together text I’d found on the internet and babbled it at you, you’d likely not think: “Wow, that guy sure is intelligent”.

3

donniedenier t1_j8w71dw wrote

no, it’s not sentient.

it’s intelligent. i can ask it to code a website layout, write me a script for an episode of always sunny, and write a research paper on salvador dali, and have all three before i finish my coffee.

5

Captain_Clark t1_j8w8dcq wrote

Correct, it is not sentient.

Now consider: Every organic intelligence is sentient. That’s because intelligence evolved for a reason: the sentience which enables the organism to survive.

Sentience is the foundation upon which intelligence has evolved. It is the necessary prerequisite for intelligence to exist. That holds true for every level of intelligence in every living creature. Without sentience, there’s no reason for intelligence.

So it’s quite a stretch to consider intelligence with no foundation nor reason to be intelligence at all. It’s something. But it’s not intelligence. And ChatGPT has no reason, other than our own.

You can create a website for me. But unless you have a reason to, you won’t. That is intelligence.

2

donniedenier t1_j8whhzc wrote

and we evolved to develop an intelligence that is smarter and more efficient than any one person on the planet can be.

so now instead of hiring a team of people to build me a website, i have the ai build it for me.

i’m not saying we don’t need humans, i’m saying we’re making 50%+ of our labor entirely obsolete, and we need a plan on what to do next.

3

Captain_Clark t1_j8x3v5r wrote

Which is fine. I merely wish to suggest to you, that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.

Because by your own description, you’ve already implied that ChatGPT is more intelligent than you.

So I’d ask: Do you really want to believe that a stack of code is more intelligent than you are? It’s just a tool, friend. It only exists as human-created code, and it only does one thing: Analyze and construct human language.

Whereas, you can be intelligent without using language at all. You can be intelligent by simply and silently looking at another person’s face.

And the reason I’m telling you this is because I consider it dangerous to mistake ChatGPT for intelligence. That’s the same fear you describe: The devaluing of humanity, via the devaluing of human labor. But human labor is not humanity. If it were so, we could say that humans who do not work are not intelligent - even though most of us would be perfectly happy if we didn’t have to work. Which is why we created ChatGPT in the first place.

It once required a great deal of intelligence to start a fire. Now, you may start a fire by easily flicking a lighter. That didn’t make you less intelligent than a lighter.

3

anti-torque t1_j8xj33k wrote

I think the concern is its adaptability to collate data for business. It can essentially do middle-management tasks, given controlled inputs.

I think people forget that being a manager of people is hard enough. Shedding or reducing the paperwork might give business the time to allow managers to actually interact with their teams more efficiently.

3

HanaBothWays t1_j943tav wrote

> Which is fine. I merely wish to suggest to you, that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.

Nah, this person is devaluing other human beings. There’s a sizeable contingent of people on this website (well, everywhere, but it’s a particular thing on this website) who will seize on any excuse to say most other people aren’t really people/don’t really matter.

This kind of talk about humans not really being all that different from large language models like ChatGPT is just the latest permutation of that.

3

Intensityintensifies t1_j8wuwql wrote

Nothing evolves for a reason. It’s all chance. You can evolve negative traits totally by accident.

1

anti-torque t1_j8xgtu1 wrote

lol... we got book smarts, it's got interwebs smarts

1

redvitalijs t1_j8wxxk5 wrote

In the words of that Demi Lovato interview:

- What's your favourite dish?

- I like mugs, they are great for holding hot things and have a handle.

2

Redararis t1_j8w78oh wrote

Another amazing thing about ChatGPT is that it shows that when you scale up neural networks, new mental properties emerge, like logical thinking and creativity. Will consciousness emerge like that in the future? Let’s find out!

1

A_Random_Lantern t1_j8wil01 wrote

We humans have critical thinking, we can question if what we know is fact or false.

GPT doesn't, it doesn't think, it only writes whatever sounds correct.

1

EmbarrassedHelp t1_j8wrvix wrote

> We Some humans have critical thinking, and we can question if what they we know is fact or false.

I fixed that for you

2

MoogProg t1_j8wj4db wrote

The infant brain in humans is extremely complex and not something we can easily compare to the training of ChatGPT. I think a lot of these comparisons come from looking at human learning through schooling, books, and language experience, and they ignore the amazing feat of discernment going on as our senses feed raw information to an infant brain.

As the meme goes, we are not the same.

1

corp_code_slinger t1_j8tlss8 wrote

That's the mark of a good con-man. They'll tell you exactly what you want to hear and sincerely believe their own BS.

11

Yung-Split t1_j8uzu57 wrote

Not much of a con when it makes me 50% more productive in coding. What a scam saving a shitload of time is, right? (And yes that time saving is even with the mistakes it makes included)

21

Sudden-Fecal-Outage t1_j8vhama wrote

Right on, so many uses for it that speed up workflow. I’ve zero complaints about it

7

OccasionUnfair8094 t1_j8tud6r wrote

I don’t think this is true. You’re describing a Markov chain, I believe, and this is more sophisticated and far more capable than that. Though you’re right that it cannot discern between true and false.
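For comparison, a first-order Markov chain generator looks like this: the next word depends only on the current word, never on the wider context. The transition table is purely illustrative:

```python
import random

# Tiny first-order Markov chain text generator: the next word depends
# only on the current word (a made-up transition table for illustration).
transitions = {
    "the": ["cat", "dog", "mat"],
    "cat": ["sat", "ran"],
    "dog": ["sat"],
    "sat": ["on"],
    "on":  ["the"],
    "ran": ["away"],
}

def generate(start, length=6, seed=0):
    random.seed(seed)  # seeded for reproducibility
    words = [start]
    while len(words) < length and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

print(generate("the"))
```

A transformer model like GPT instead attends over the entire preceding context when choosing each word, which is where the extra capability comes from.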

10

gurenkagurenda t1_j8v4i1n wrote

It is in fact not true.

3

anti-torque t1_j8vamcu wrote

2

gurenkagurenda t1_j8vnlyo wrote

I think you must be getting confused because of the "reward predictor". The reward predictor is a separate model which is used in training to reduce the amount of human effort needed to train the main model. Think of it as an amplifier for human feedback. Prediction is not what the model being trained does.
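A rough sketch of those roles, with all functions as made-up stand-ins rather than OpenAI's actual pipeline:

```python
# Toy sketch of the "reward predictor" idea in RLHF-style training.
# Everything here is a hypothetical stand-in, not a real implementation.

def human_rates(output):
    # Expensive step: a real person scores an output.
    # This toy rater simply prefers short answers.
    return 1.0 if len(output) < 10 else 0.0

# 1. Collect a small, costly set of human judgments.
labeled = {out: human_rates(out) for out in ["hi", "hello there friend", "ok"]}

def reward_predictor(output):
    # Cheap learned stand-in for the human rater. A real one would be
    # trained on `labeled`; here it just mimics the same preference.
    return 1.0 if len(output) < 10 else 0.0

# 2. During RL training, the reward predictor can then score unlimited
#    candidate outputs, amplifying the limited human feedback.
candidates = ["sure!", "certainly, as a large language model I can confirm..."]
scores = {c: reward_predictor(c) for c in candidates}
print(scores)  # the short reply gets the higher score
```

The point is the division of labor: a few expensive human judgments train a cheap proxy, and the proxy supplies the reward signal at scale.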

1

anti-torque t1_j8xd81i wrote

Yes, I see the meanings as different, because I was thinking the context of the question would bias the result.

1

Deepspacesquid t1_j8vq0ro wrote

Business Insider is a con artist and we are falling for it.

7

anti-torque t1_j8xe1kf wrote

It's whatever.

They get a high credibility rating for factual reporting.

But they tell nothing of any real depth. That could be said of many news outlets.

3

UrbanGhost114 t1_j8v21ks wrote

Yet on another sub I'm being down voted for saying this.

1

anti-torque t1_j8vb6nn wrote

I don't think people fully understand the mandate. I also think too much trust is put in some safeguards built into it.

It can only be what is allowed to be input, which makes everything predictive.

Someone mentioned a Markov chain, but it's more elaborate than that. It predicts the next word based on the whole context of what was asked, not just on what comes immediately before.

1

gurenkagurenda t1_j8v4fqg wrote

> It can only search out the most common next word for the context asked.

This is not actually true. That was an accurate description of earlier versions of GPT, and is part of how ChatGPT and InstructGPT were trained, but ChatGPT and InstructGPT use reinforcement learning to teach the models to do more complex tasks based on human preferences.

Also, and this is more of a nitpick, but "next word" would be greedy search, and I'm pretty sure ChatGPT uses beam search, which looks multiple words ahead.
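Whatever decoding ChatGPT actually uses, the greedy-vs-lookahead distinction can be shown with toy numbers (entirely made up):

```python
# Toy next-token distributions (hypothetical numbers, just to show the idea).
# After "A", token "B" is most likely locally, but the "C" branch wins overall.
probs = {
    "A": {"B": 0.6, "C": 0.4},
    "B": {"x": 0.1, "y": 0.1},
    "C": {"x": 0.9, "y": 0.1},
}

def greedy(start):
    # Always take the single most likely next token.
    seq, p, node = [start], 1.0, start
    while node in probs:
        node = max(probs[node], key=probs[node].get)
        p *= probs[seq[-1]][node]
        seq.append(node)
    return seq, p

def beam(start, width=2):
    # Keep the `width` best partial sequences, then return the best finished one.
    beams = [([start], 1.0)]
    while any(seq[-1] in probs for seq, _ in beams):
        candidates = []
        for seq, p in beams:
            if seq[-1] not in probs:
                candidates.append((seq, p))
                continue
            for nxt, q in probs[seq[-1]].items():
                candidates.append((seq + [nxt], p * q))
        beams = sorted(candidates, key=lambda sp: sp[1], reverse=True)[:width]
    return beams[0]

print(greedy("A"))  # commits to the "B" branch, total probability ~0.06
print(beam("A"))    # finds ['A', 'C', 'x'], total probability ~0.36
```

Greedy search grabs the locally best token and gets stuck on the worse branch; beam search's lookahead finds the higher-probability sequence.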

1

anti-torque t1_j8vbhkb wrote

> to teach the models to do more complex tasks based on human preferences.

so... predictive

>Also, and this is more of a nitpick, but "next word" would be greedy search....

This is fair. "Word" is too simple a unit. It picks up phrases and maxims.

1

gurenkagurenda t1_j8vnao5 wrote

>so... predictive

No, not in any but the absolute broadest sense of that word, which would apply to any model which outputs text. In particular, it is not "search out the most common next word", because "most common" is not the criterion it is being trained on. Satisfying the reward model is not a matter of matching a corpus. Read the article I linked.

1

romansamurai t1_j8yc3x5 wrote

Yup. I just use it to help me find better words for writing, which is sometimes difficult because I’m a foreigner 😬

1