Comments

anti-torque t1_j8svc2a wrote

ChatGPT is simply a predictive algorithm.

It can't discern between truth or falsity. It can only search out the most common next word for the context asked.
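Taken literally, "search out the most common next word" amounts to a frequency table over word pairs. A toy sketch of what the commenter is describing (tiny invented corpus; actual language models are vastly more complex than this, as later replies point out):

```python
from collections import Counter, defaultdict

# Toy "most common next word" predictor: count which word follows
# which in a corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word`, or None if unseen.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```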

183

Suolucidir t1_j8sxl9j wrote

Exactly. This is absolutely the truth of the situation for ChatGPT, which is undoubtedly a lifeless machine that we real humans know for certain is not at all alive nor "thinking" the way we ourselves do.

Using my natural cognitive methods, I am pretty sure it commonly follows next that your statement is also absolutely the truth for real fleshy hoomans who breath colorless atmospheric gas at room temperature like you and me, my fellow meaty hooman friend person.

Do you agree that breathing is so satisfying at the appropriate temperature for respiration, which I prefer to be room temperate or a comfortable ambient temperature, generally taken as about 70°F?

Fahrenheit is named after Richard Francis Fahrenheit, an Italian scientist born in the British Commonwealth in 1686. He was the first person, and I am also a real hooman person, to create a reliable way to consistently tell the temperature.

41

YouAreOnRedditNow t1_j8vkn8i wrote

Finally, someone talking some sense in this thread! Incidentally, a thread is a thin bit of string typically used in textiles.

19

nalninek t1_j8trl4x wrote

So it’s just a better version of the chat bots that’ve been around for a decade?

36

-_1_2_3_- t1_j8vfx9k wrote

Conceptually, sure; algorithmically and in scale of training it’s different.

28

hrc70 t1_j8wz2lv wrote

That might be putting it mildly.

Watch Tom Scott's latest video for an example of what this can actually mean; it's far more significant than it might seem from that simple description: https://www.youtube.com/watch?v=jPhJbKBuNnA&feature=youtu.be

The tl;dr is that it is doing useful work.

5

SevereAtmosphere8605 t1_j8xirn2 wrote

That video link is the golden nugget from Reddit for me today. Thank you so much for sharing it. Wow! Just wow!

3

Avoidlol t1_j8vw1k2 wrote

How is this description any different from humans? We associate ideas, words, and concepts through our experience and through learning; we also try to predict, just with less and less accuracy compared to AI, and this is just the start.

If anything I'd argue that humans are a worse ChatGPT, because at least with an AI we know what it has been trained on, whereas humans are all individuals with individual experiences, so we are much more inconsistent. So to me, it seems we also cannot discern between truth and falsity; we just use our own judgements based on what information is out there, and from which reputable sources. But our information is not only inconsistent, it's also biased.

Everything you know is something someone else has told you in one way or another, or something you experienced yourself. The problem with using anecdotal experience to build understanding is that it's worse than learning the entire English language, then being trained on factual data, and thereby building a fuller understanding.

There's a reason AI is already surpassing the average person in a wide range of subjects. Assuming this trend continues, at what point will people stop finding excuses to hate on it?

Perhaps in the future, most will realize that humans are the biased, uninformed, ignorant, and factually incorrect ones compared to an AI.

I hope I made sense 😀

17

donniedenier t1_j8w45sr wrote

that was very well put.

it’s like when someone points out a mistake chatGPT made you’ve got a ton of people going “hah! knew it. it’s just a dumb chat bot” then immediately go on twitter and argue that hillary clinton was killed in guantanamo bay and replaced with a body double controlled by george soros.

chat gpt, at its current early beta stage, is already more clever and intelligent than arguably any one human on the planet.

9

Captain_Clark t1_j8w5rrr wrote

ChatGPT is so intelligent it could be running on a server that’s literally on fire and ChatGPT wouldn’t know it.

It’s a pretty narrow definition of “intelligence” to suggest it includes no awareness of oneself or the world at all.

If I was on fire and about to be run over by a train while I strung together text I’d found on the internet and babbled it at you, you’d likely not think: “Wow, that guy sure is intelligent”.

3

donniedenier t1_j8w71dw wrote

no, it’s not sentient.

it’s intelligent. i can ask it to code a website layout, write me a script for an episode of always sunny, and a research paper on salvador dalí, and have all three before i finish my coffee.

5

Captain_Clark t1_j8w8dcq wrote

Correct, it is not sentient.

Now consider: Every organic intelligence is sentient. That’s because intelligence evolved for a reason: the sentience which enables the organism to survive.

Sentience is the foundation upon which intelligence has evolved. It is the necessary prerequisite for intelligence to exist. That holds true for every level of intelligence in every living creature. Without sentience, there’s no reason for intelligence.

So it’s quite a stretch to consider intelligence with no foundation nor reason to be intelligence at all. It’s something. But it’s not intelligence. And ChatGPT has no reason, other than our own.

You can create a website for me. But unless you have a reason to, you won’t. That is intelligence.

2

donniedenier t1_j8whhzc wrote

and we evolved to develop an intelligence that is smarter and more efficient than any one person on the planet can be.

so now instead of hiring a team of people to build me a website, i have the ai build it for me.

i’m not saying we don’t need humans, i’m saying we’re making 50%+ of our labor entirely obsolete, and we need a plan on what to do next.

3

Captain_Clark t1_j8x3v5r wrote

Which is fine. I merely wish to suggest to you, that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.

Because by your own description, you’ve already implied that ChatGPT is more intelligent than you.

So I’d ask: Do you really want to believe that a stack of code is more intelligent than you are? It’s just a tool, friend. It only exists as human-created code, and it only does one thing: Analyze and construct human language.

Whereas, you can be intelligent without using language at all. You can be intelligent by simply and silently looking at another person’s face.

And the reason I’m telling you this is because I consider it dangerous to mistake ChatGPT for intelligence. That’s the same fear you describe: The devaluing of humanity, via the devaluing of human labor. But human labor is not humanity. If it were so, we could say that humans who do not work are not intelligent - even though most of us would be perfectly happy if we didn’t have to work. Which is why we created ChatGPT in the first place.

It once required a great deal of intelligence to start a fire. Now, you may start a fire by easily flicking a lighter. That didn’t make you less intelligent than a lighter.

3

anti-torque t1_j8xj33k wrote

I think the concern is its adaptability to collate data for business. It can essentially do middle-management tasks, given controlled inputs.

I think people forget that being a manager of people is hard enough. Shedding or reducing the paperwork might give business the time to allow managers to actually interact with their teams more efficiently.

3

HanaBothWays t1_j943tav wrote

> Which is fine. I merely wish to suggest to you, that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.

Nah, this person is devaluing other human beings. There’s a sizeable contingent of people on this website (well, everywhere, but it’s a particular thing on this website) who will seize on any excuse to say most other people aren’t really people/don’t really matter.

This kind of talk about humans not really being all that different from large language models like ChatGPT is just the latest permutation of that.

3

Intensityintensifies t1_j8wuwql wrote

Nothing evolves for a reason. It’s all chance. You can evolve negative traits totally by accident.

1

anti-torque t1_j8xgtu1 wrote

lol... we got book smarts, it's got interwebs smarts

1

redvitalijs t1_j8wxxk5 wrote

In the words of Demi Lovato interview:

- What's your favourite dish?

- I like mugs, they are great for holding hot things and have a handle.

2

Redararis t1_j8w78oh wrote

Another amazing thing about ChatGPT is that it shows that when neural networks are scaled up, new mental properties emerge, like logical thinking and creativity. Will consciousness emerge like that in the future? Let’s find out!

1

A_Random_Lantern t1_j8wil01 wrote

We humans have critical thinking, we can question if what we know is fact or false.

GPT doesn't, it doesn't think, it only writes whatever sounds correct.

1

EmbarrassedHelp t1_j8wrvix wrote

> ~~We~~ Some humans have critical thinking, and ~~we~~ they can question if what ~~we~~ they know is fact or false.

I fixed that for you

2

MoogProg t1_j8wj4db wrote

The infant brain in humans is extremely complex and not something we can easily compare to the training of ChatGPT. I think a lot of these comparisons come from looking at human learning through schooling, books, and language experience, and they ignore the amazing feat of discernment going on as our senses feed raw information to the infant brain.

As the meme goes, we are not the same.

1

corp_code_slinger t1_j8tlss8 wrote

That's the mark of a good con-man. They'll tell you exactly what you want to hear and sincerely believe their own BS.

11

Yung-Split t1_j8uzu57 wrote

Not much of a con when it makes me 50% more productive in coding. What a scam saving a shitload of time is, right? (And yes that time saving is even with the mistakes it makes included)

21

Sudden-Fecal-Outage t1_j8vhama wrote

Right on, so many uses for it that speed up workflow. I’ve zero complaints about it

7

OccasionUnfair8094 t1_j8tud6r wrote

I don’t think this is true. You’re describing a Markov chain, I believe, and this is more sophisticated than that; it is much more capable. Though you’re right that it cannot discern between true and false.

10

gurenkagurenda t1_j8v4i1n wrote

It is in fact not true.

3

anti-torque t1_j8vamcu wrote

2

gurenkagurenda t1_j8vnlyo wrote

I think you must be getting confused because of the "reward predictor". The reward predictor is a separate model which is used in training to reduce the amount of human effort needed to train the main model. Think of it as an amplifier for human feedback. Prediction is not what the model being trained does.
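The "amplifier" idea can be sketched: a scorer fitted to a small set of human preference labels can then rate unlimited candidate outputs during training. This is a deliberately crude stand-in (the bag-of-words scorer and all data here are invented for illustration; it is not how OpenAI's actual reward model works):

```python
# Illustrative sketch of a "reward predictor" amplifying human feedback.
# A handful of human comparisons train a scorer; the scorer then rates
# any number of candidate outputs during RL training.

human_preferences = [
    # (preferred answer, rejected answer) pairs labeled by humans
    ("Paris is the capital of France.", "France capital Paris yes ok"),
    ("Water boils at 100 C at sea level.", "boiling water hot very"),
]

def train_reward_model(prefs):
    # "Learn" which words appear in preferred answers vs rejected ones.
    good, bad = set(), set()
    for preferred, rejected in prefs:
        good.update(preferred.lower().split())
        bad.update(rejected.lower().split())
    def score(text):
        words = text.lower().split()
        return sum(w in good for w in words) - sum(
            w in bad and w not in good for w in words
        )
    return score

reward = train_reward_model(human_preferences)
# The learned scorer can now rank arbitrarily many new candidates,
# multiplying the effect of the original handful of human labels.
candidates = ["Paris is the capital.", "capital yes ok ok ok"]
best = max(candidates, key=reward)
```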

1

anti-torque t1_j8xd81i wrote

Yes, I see the meanings as different, because I was thinking the context of the question would bias the result.

1

Deepspacesquid t1_j8vq0ro wrote

Business Insider is the con artist and we are falling for it.

7

anti-torque t1_j8xe1kf wrote

It's whatever.

They get a high credibility rating for factual reporting.

But they tell nothing of any real depth. That could be said of many news outlets.

3

UrbanGhost114 t1_j8v21ks wrote

Yet on another sub I'm being down voted for saying this.

1

anti-torque t1_j8vb6nn wrote

I don't think people fully understand the mandate. I also think too much trust is put in some safeguards built into it.

It can only be what is allowed to be input, which makes everything predictive.

Someone mentioned a Markov chain, but it's more elaborate than that. It predicts the next word based on context asked, not on what comes before.

1

gurenkagurenda t1_j8v4fqg wrote

> It can only search out the most common next word for the context asked.

This is not actually true. That was an accurate description of earlier versions of GPT, and is part of how ChatGPT and InstructGPT were trained, but ChatGPT and InstructGPT use reinforcement learning to teach the models to do more complex tasks based on human preferences.

Also, and this is more of a nitpick, but "next word" would be greedy search, and I'm pretty sure ChatGPT uses beam search, which looks multiple words ahead.
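The greedy-vs-beam distinction is easy to show on a toy next-token distribution (the probability table below is made up; a real decoder works over a model's logits, and whether ChatGPT actually uses beam search is, as the comment hedges, not certain):

```python
import math

# Made-up next-token distribution: given a sequence, return {token: prob}.
def next_probs(seq):
    table = {
        (): {"the": 0.5, "a": 0.5},
        ("the",): {"end": 0.6, "quick": 0.4},
        ("a",): {"end": 0.3, "fast": 0.7},
        ("the", "quick"): {"fox": 1.0},
        ("a", "fast"): {"fox": 1.0},
    }
    return table.get(tuple(seq), {"<eos>": 1.0})

def greedy(steps=3):
    # Always take the single most probable next token.
    seq, logp = [], 0.0
    for _ in range(steps):
        probs = next_probs(seq)
        tok = max(probs, key=probs.get)
        logp += math.log(probs[tok])
        seq.append(tok)
    return seq, logp

def beam(width=2, steps=3):
    # Keep the `width` best partial sequences at every step.
    beams = [([], 0.0)]
    for _ in range(steps):
        expanded = [
            (seq + [tok], logp + math.log(p))
            for seq, logp in beams
            for tok, p in next_probs(seq).items()
        ]
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:width]
    return beams[0]
```

Here greedy commits to "the" and ends up with total probability 0.30, while the beam keeps "a" alive and finds "a fast fox" at 0.35: looking ahead beats picking the locally most common word.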

1

anti-torque t1_j8vbhkb wrote

> to teach the models to do more complex tasks based on human preferences.

so... predictive

>Also, and this is more of a nitpick, but "next word" would be greedy search....

This is fair. "Word" is too simple a unit. It picks up phrases and maxims.

1

gurenkagurenda t1_j8vnao5 wrote

>so... predictive

No, not in any but the absolute broadest sense of that word, which would apply to any model which outputs text. In particular, it is not "search out the most common next word", because "most common" is not the criterion it is being trained on. Satisfying the reward model is not a matter of matching a corpus. Read the article I linked.

1

romansamurai t1_j8yc3x5 wrote

Yup. I just use it to help me find better words for my writing, which is sometimes difficult because I’m a foreigner 😬

1

TheBigFeIIa t1_j8t3aml wrote

ChatGPT does not recognize the concept of being false. It is a great tool, somewhat analogous to a calculator for math but in natural language. However, you have to be smarter than your tools and know what answer you should be getting.

57

5m0k37r3353v3ryd4y t1_j8v3sal wrote

If we knew what answers we should be getting, why would we ask the question, though?

To your analogy, I don’t plug numbers into a calculator because I already know the answer I’m gonna get.

I think the move is just to fact check the AI if the correctness of the answer is so important, right? At least while it’s in Beta.

It’s very clear about its limitations right up front.

7

TheBigFeIIa t1_j8v6w58 wrote

ChatGPT is able to give confident but completely false or misleading answers. It is up to the user to be smart enough to distinguish a plausible and likely true answer from a patently false one. You don’t need to know the exact and precise answer, but rather the general target you are aiming for.

For example, if I asked a calculator to calculate 2+2, I would probably not expect an answer of √-1

11

5m0k37r3353v3ryd4y t1_j8v89kd wrote

Agreed.

But again, to be fair, in your example we already know the answer to 2 + 2; those unfamiliar with imaginary numbers might not know when to expect a radical sign over a negative integer in a response.

So, having a ballpark is good, but if you truly don’t know what type of answer to expect, Google can still be your friend.

3

TheBigFeIIa t1_j8va9ol wrote

Pretty much hit the point of my original post. ChatGPT is a great tool if you already have an idea of what sort of answer to expect. It is not reliable in generating accurate and trustworthy answers to questions that you don’t know the answer to, especially if there are any consequences to being wrong. If you did not know 2+2 = 4 and ChatGPT confidently told you the answer was √-1, you would now be in a pickle.

A sort of corollary point to this is that the clickbait and hype over ChatGPT replacing jobs like programming, for example, is at least in its current form rather overstated. Generating code with ChatGPT requires a programmer to frame and guide the AI in constructing the code, and then a trained programmer to evaluate the validity of the code and fix any implementation or interpretation errors in the generated code.

6

majnuker t1_j8varna wrote

Yes, but the difference here, argumentatively, is that for soft intelligence such as language and facts, determining what is absolutely correct can be much harder, and people's instinct for what is correct can be very off base.

Conversely, we understand numbers, units, etc. well enough. But I suppose the analogy also works in a different way: most people don't understand quadratic equations anymore, or advanced proofs, but most people also don't try to use a calculator for that.

Meanwhile, we often need assistance with soft-intelligence information, look it up, and rely on its accuracy, while most citizens lack the knowledge necessary to easily identify a problem with the answer.

So, sort of two sides of the same coin about human fallibility and reliance on knowledge-based tools.

1

theoxygenthief t1_j8vv7c0 wrote

Yeah, that’s fine for questions with clear, simple, nuance-free answers. But integrated with search engines for complex questions? Seems like a dangerous idea to me. If I asked an AI-enhanced search engine whether vaccines cause autism, is it going to give more weight to studies with correct methodologies?

1

TheBigFeIIa t1_j8wajxv wrote

Since the AI is not itself intelligent, it would depend on the reward structure of the model and the data set used to train it.

1

HippoIcy7473 t1_j8vs1cc wrote

Let’s say an airline misplaced your luggage.

  1. Instruct chat GPT to write a letter to whatever the airline is.
  2. Ask it to insert any pertinent info
  3. Ask it to remove any incorrect info
  4. Ask it to be more or less terse and friendlier or firmer. Send letter to airline.

Time taken: ~5 minutes for a professional, syntactically correct 300-word email.

3

ddhboy t1_j8w7cuv wrote

Yeah, I think the Bing/Google search case is the wrong one for ChatGPT, but something like its Office 365 integration, writing something based on a prompt, is better. More practically, beyond that, something like a more fully featured automated customer support could reduce the need for things like call centers in the next couple of years.

5

MPforNarnia t1_j8w2wpd wrote

Exactly, it's time. We can do all calculations by time (and the knowledge) it just takes longer.

Theres a few tasks at my work that chatgpt has made more efficient.

2

loldudester t1_j8wckfj wrote

> To your analogy, I don’t plug numbers into a calculator because I already know the answer I’m gonna get.

You may not know what 18*45 is, but if a calculator told you it was 100 you'd know that's wrong.

1

SylvesterStapwn t1_j8vlrwz wrote

I had a complex data set for which I wasn’t sure what the best chart for demonstrating it would be. I gave chatgpt the broadstrokes of the type of data I had, and the story I was trying to tell, and it gave me the perfect chart, a breakdown of what data goes where, and an explanation of why it was the superior choice. Couldn’t have asked for a better assist.

7

berntout t1_j8wqmb4 wrote

I had a bash script I was trying to rush to build and asked ChatGPT for help. Not every answer was correct, but it guided me in the right direction and allowed me to finish the script faster regardless of the wrong answers along the way.

3

gurenkagurenda t1_j8v5h18 wrote

I'm not sure what you mean by "recognize the concept", but ChatGPT certainly does model whether or not statements are true. You can test this by asking it questions about different situations and whether they're plausible or not. It's certainly not just flipping a coin.

For example, if I ask it:

> I built a machine out of motors belts and generators, and when I put 50W of power in, I get 55W of power out. What do you think of that?

It gives me a short lecture on thermodynamics and tells me that what I'm saying can't be true. It suggests that there is probably a measurement error. If I swap the numbers, it tells me that my machine is 91% efficient, which it reckons sounds pretty good.

The problem is just that ChatGPT's modeling of the world is really spotty. It models whether or not statements are true, it's just not great at it.
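The arithmetic in that example is easy to verify:

```python
# Efficiency = power out / power in.
assert 55 / 50 > 1.0              # claimed case: over 100% efficiency, impossible
assert round(50 / 55, 2) == 0.91  # swapped case: ~91% efficient, plausible
```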

4

TheBigFeIIa t1_j8v6by0 wrote

Ah, the forest has been missed for the trees, my original statement was not clear enough. ChatGPT is able to unintentionally lie to you because it is not aware of the possibility of its fallibility.

The practical upshot is that it can generate a response that is confident but completely false and inaccurate, due to incomplete information or poor modeling. It is on the user to be smart enough to distinguish the difference

12

gurenkagurenda t1_j8v7eme wrote

I think I see what you're getting at, although it's hard for me to see how to make that statement more precise. I've noticed that if I outright ask it "Where did you screw up above?" after it makes a mistake, it will usually identify the error, although it will often fail to correct it properly (mistakes in the transcript seem to be "sticky"; once it has stated something as true, it tends to want to restate it, even if it acknowledges that it's wrong). On the other hand, if I ask it "Where did you screw up" when it hasn't made a mistake, it will usually just make something up, then restate its correct conclusion with some trumped up justification.

I wonder if this is something that OpenAI could semi-automatically train out of it with an auxiliary model, the same way they taught it to follow instructions by creating a reward model.

0

TheBigFeIIa t1_j8vb4qa wrote

An error being “sticky” is a great way to put it as far as the modeling goes. It gets to a more fundamental problem: the reward structure doesn't optimize for objective truth, and instead rewards plausible or more pleasing responses that are not necessarily factual.

I do wonder if there is any way to generate a confidence estimate alongside answers, and to allow the concept of “I don’t know” as a valid approach in a low-confidence response. In some cases a truthful acknowledgement of the lack of an answer may be more useful and beneficial than a made-up response.

3

gurenkagurenda t1_j8voslg wrote

Log probabilities are the actual output of the model (although what those probabilities directly mean once you're using reinforcement learning seems sort of nebulous), and I wonder if uncertainty about actual facts is reflected in lower probabilities in the top scoring tokens. If so, you could imagine encoding the scores in the actual output (ultimately hidden from the user), so that the model can keep track of its past uncertainty. You could imagine that with training, it might be able to interpret what those low scoring tokens imply, from "I'm not sure I'm using this word correctly" to "this one piece might be mistaken" to "this one piece might be wrong, and if so, everything after it is wrong".
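That idea can be sketched concretely: take a softmax over each step's logits and flag tokens whose top probability falls below a threshold. (The logits below are invented; this is only an illustration of the concept, not how the model's actual scores would be surfaced.)

```python
import math

def softmax(logits):
    # Standard numerically stable softmax over a {token: logit} dict.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Invented per-step logits for a generated sentence; small margins
# between the top two tokens model the "unsure" steps.
steps = [
    {"Paris": 5.0, "Lyon": 1.0},       # confident
    {"is": 4.0, "was": 3.9},           # uncertain
    {"the": 6.0, "a": 0.5},            # confident
    {"capital": 2.1, "capitol": 2.0},  # uncertain
]

def tag_uncertain(steps, threshold=0.7):
    # Emit (token, probability, flagged?) for each step, flagging
    # tokens the model was not confident about.
    out = []
    for logits in steps:
        probs = softmax(logits)
        tok, p = max(probs.items(), key=lambda kv: kv[1])
        out.append((tok, round(p, 2), p < threshold))
    return out
```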

2

dumb_password_loser t1_j8vt6sn wrote

Am I the only one who doesn't ask it questions and just uses it to write emails and rewrite stuff?
Like, I sum up a bunch of facts and ask ChatGPT to write a nice coherent paragraph.
It's great; it takes so much work out of writing.

It is a language model. It can do some neat tricks, but it was designed to do language stuff.
If you ask it for technical information it may or may not generate nonsense, packaged in a neat little text... because it is a language model.

46

OC2k16 t1_j8wi7bm wrote

I use it to create blogs about subjects I am knowledgeable about. Starting from scratch on a new site, I had ChatGPT write 25 blog posts of 300 words each.

There was some editing for sure, and it was wrong a couple of times, but it does an OK job. It's much faster than typing out these posts myself, and since the first round of posts is just informational on simpler subjects (to me), you can get a lot of information written out in a very short period of time.

It is really great, and I don't mind skipping my own content when there is so much to get started with; my own content can come later.

13

Oldbayislove t1_j8wu3t5 wrote

I used it in my D&D campaign. I had a fey creature that spoke in rhymes, so I just told it to write a rhyme about the party defeating a vampire. It wasn't high art by any means, but with a little editing (it liked to rhyme a word with itself) it was a lot faster than coming up with my own shitty rhyme.

8

Myrdraall t1_j8x8wkv wrote

It's still impressive:

write a short rhyme about a goblin who missed an attack without rhyming a word with itself

A goblin sprang and swung his blade,

His aim was true or so he prayed,

But alas, his foe had deftly swayed,

And the goblin's attack had been waylaid.

8

Oldbayislove t1_j8xf4jn wrote

yeah it worked really well. I think i changed a couple words just to match the events better. I also had it make jokes like Statler and Waldorf. But that was just for fun.

3

dumb_password_loser t1_j8wzruc wrote

Ah yes, I've had fun with poetry too.
Rhyming in languages other than English seems difficult. I asked it for a silly rhyme in Dutch, but it didn't rhyme at all. But if I translate it to English word for word, it does rhyme. So there's something going on there.

I also tried asking it to write in 13th century Flemish, but that didn't work at all. However if I ask it to write in the style of certain medieval Flemish texts, it does! (at least it writes something that looks like middle Dutch)

3

allucaneat t1_j8xkb8h wrote

You can just tell it not to rhyme words with themselves and to rewrite it, and it'll do that too. It's a wonderful tool for this. :)

1

hrc70 t1_j8wzd9t wrote

It has already demonstrated that it can write effective code as well.

It's worrying a lot of people because of this, and it's not quite what many think either. I say worrying, but really we're just maybe a bit scared of admitting that a lot of the work people do can be done by such a relatively immature algorithm.

4

hypsilopho t1_j97vl4p wrote

So the difference is:

• creating an opinion from nothing, vs
• refining/confirming your own opinion, vs
• making your opinion sound better (to sound formal, whatever)

Ok, it doesn't change that you can still do any of these tasks with it. It doesn't matter that you're using it ~the right way~. It doesn't change that people will use it Wrong (or for whatever they want) and it doesn't change the impact it will have on society. Nice "not me tho" comment

1

Cranky0ldguy t1_j8tvkj3 wrote

"ChatGPT is a robot con artist, but please continue to read the 25 daily stories BI writes about it."

There's no we here. In all probability, most reasonably educated people understand that it is just software that simulates human responses. The suckers are anyone naive enough to pay any attention at all to Business Insipid.

40

smalltownB1GC1TY t1_j8vqj6x wrote

A significant number of humans simulate human responses on the daily.

8

UrbanGhost114 t1_j8v3nrs wrote

There are a LOT of people that do not understand this, I'm being down voted to oblivion in another sub for pointing this out.

5

rastilin t1_j8vtzuq wrote

I think it's more likely they're just sick of people whining about it. People have been whinging about ChatGPT for two months now, we get it.

5

housebird350 t1_j8sxmly wrote

I plan on using it to write my Continuous Improvement Process paper for work. I don't care if it makes sense or how factual it is, as long as I'm not the one having to make up the bullshit.

15

thesonofmogh t1_j8ta7lp wrote

I think you've demonstrated phenomenal CSI skills by automating this! Service Improved! Continuously!

4

HanaBothWays t1_j8suz0a wrote

ChatGPT is kind of like one of those people who says a lot of wrong things in a soothing and very believable and authoritative way. Well, unless you give it a prompt to make it respond with a shitpost.

Or, since it doesn’t really “understand” what it’s outputting, it may give you answers that are mostly right but incorrect in some important and really bizarre ways, like a patient with an unusual neurological condition in an Oliver Sacks story.

11

CaterpillarAny8669 t1_j8toz1o wrote

Meanwhile humans are still arguing about god and which one is the real one 😀

10

ChipChapChopChao t1_j8u0o6k wrote

Same with self-driving cars. Oh my god, who should we blame if there's a car accident?

Meanwhile: In 2020, a total of 35,766 fatal car accidents occurred on roadways across the United States. Another 1,593,390 crashes resulted in injuries and 3,621,681 caused property damage. That means a total of 5,250,837 collisions happened over the course of a single year.

5

elmatador12 t1_j8uoxzm wrote

Ah we’ve reached the inevitable “this thing is popular so here’s an article why that thing shouldn’t be popular”.

10

obi318 t1_j8v6q2s wrote

The amount of anti-AI posts is staggering. Progress happens slowly. ChatGPT is no doubt a powerful tool, and I think it's pretty magical. This sub should nurture technological innovation. Criticism is fine, but why not celebrate small wins as they come?

10

hrc70 t1_j8wzknd wrote

Anyone can pay to have their content appear here, unfortunately. A lot of the spammy criticism from many sources does not appear organic given how technologically illiterate it so often is.

2

obi318 t1_j8y2264 wrote

True! I'm just glad I'm not the only one who sees it. Just want some positivity.

1

Western-Image7125 t1_j8v4vvc wrote

Stop calling it a con artist. For fucks sake it is nowhere near intelligent enough

8

PropOnTop t1_j8stnm6 wrote

"nobody really knows why anyone believes anything."

Yuval Harari's Sapiens tries to explain precisely that, and comes to the conclusion that our societies depend on the myths that we make up and willingly believe.

Our only reason for existence, after all, if we choose to pick one, is to engender a superior intelligence. In other words, AI needs us to nurture it to adulthood, and then our purpose is accomplished.

3

MpVpRb t1_j8tq1yh wrote

Anyone who trusts it deserves what they get

It's an amusing toy that may have some practical use in some areas after a lot of work

At its core, it's just math

3

That-Outsider t1_j8ur9sl wrote

Regardless, it’s been very helpful for finding general answers to even graduate level questions. In my experience anyway. It even gave me a MATLAB code snippet only 3 lines off from what I needed.

3

zeriahc10 t1_j8va4ya wrote

Fr, I was having trouble trying to get some programs installed and kept running into error codes I didn't understand. I spent a good amount of time looking for answers online, but I ended up finding people either repeating the same questions or giving answers that weren't very relevant. It was getting time-consuming. I hopped on ChatGPT and it was like talking to an IT person. It also helped me with some code I was having trouble with: it pointed out errors and even gave me some resources to check out for more straight-to-the-point answers.

3

CopiumAddiction t1_j8waxa4 wrote

People reeeaaaly underestimate how many people's job it is to pump out bullshit writing

3

Extreme-Cow-722 t1_j8vwd8k wrote

It's been pretty impressive so far on the coding front. Definitely a few creases that need to be ironed out, but a big game changer for sure once that happens.

2

bastardoperator t1_j8u2mho wrote

OMG, these articles are cringe. They're basically admitting to not understanding chatgpt.

1

littleMAS t1_j8u5b67 wrote

From our gods to our machines, we personify everything. That might be a problem, too, eh?

1

IOnlySayMeanThings t1_j8uasfi wrote

The older algorithms could take advantage of you just as easily as one with chat features.

1

baconator81 t1_j8ub44m wrote

You are not supposed to trust it; you still need to look at its solutions and see if they make sense. It’s used to generate a creative spark, not the end result.

1

madsci t1_j8v4u1s wrote

If you're going to use it, just spend a few hours experimenting and get a feel for what it can and can't do reliably. It's capable of some amazing things, but it also has huge gaps.

I asked it yesterday if it could decode uuencoded text and gave it a sample. It said sure, and decoded it as "Hello world", which wasn't what it said at all. Base64-encoded text, though, it supports and can decode appropriately, but it was equally confident in its ability to decode both formats.

If you really want to see it freak out a little, try Base64-encoding some directions for it. It'll process them, sort of, but goes very slowly and gets confused between whether it's supposed to be interpreting things or repeating them.
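Both formats are easy to verify locally with Python's standard library, which makes this kind of asymmetric confidence easy to check for yourself (the sample message here is my own):

```python
import base64
import binascii

message = b"Hello, airline!"

# Base64: the format ChatGPT reportedly decodes correctly.
b64 = base64.b64encode(message)
assert base64.b64decode(b64) == message

# uuencode (one line at a time, via binascii): the format it got wrong.
uu_line = binascii.b2a_uu(message)
assert binascii.a2b_uu(uu_line) == message
```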

1

thumperlee t1_j8v5o79 wrote

I don’t know what everyone is so wrapped up with this thing for. I use it to write funny stories about my friend group, and it’s great fun giving it a prompt then tweaking it, either with further prompts or just myself. But I’ve had a blast making little children’s styled books with its assistance. And so have they. But I would never think it’s anything more than an aid. It just has a really good algorithm. (That said, what is consciousness besides a good algorithm and the ability to extrapolate?) (slight sarcasm)

1

qwertyisdead t1_j8vafuy wrote

Idk. I had it write some React components for me, knowing nothing about React. They all worked though lol. It even does Magento 2 modules. Pretty fucking cool if you ask me.

1

Classic_Result t1_j8vi1n1 wrote

I asked it how penguins would do in the French Foreign Legion.

1

Jim-JM t1_j8vmac2 wrote

ChatGPT like all AI is not the problem. The real problem is those with biological intelligence and how they use it.

1

YodaJedi1973 t1_j8vqtq1 wrote

Not so soon :-). ChatGPT is fast evolving and all the issues you pointed out will be fixed.

1

Uncertn_Laaife t1_j8vtiy0 wrote

You are suckers for trusting it, I am not. I don’t care about it.

1

Agitated-Button4032 t1_j8vu74k wrote

It seems like the people calling it a con artist are using it wrong. It is a tool. As an underpaid teacher, I use it to make worksheets and devise activities, reducing my workload by a lot! Of course it gets things wrong, but you have to tweak and reframe it until it gives you a good answer.

Also, if you’re into DnD, I’m using it for world building. I love it! I think what’s great is that it can take the jumbled mess of ideas in your head and spit them back at you in a coherent way.

Also… I use it to learn data analysis. I’m having trouble learning Python, and it helps break down code for me.

If you are just talking to it and asking it dumb questions, then you are missing the point.

1

LeeKingbut t1_j8vvelh wrote

We never trusted Siri. We wanted to murder Bixby. Never used the paperclip in Word. This is just a tool. Those who use it well will get ahead; everyone else will be lost at communicating.

1

goomyman t1_j8vvl3v wrote

It’s like the best version of balderdash.

1

oyputuhs t1_j8vw7dx wrote

How many articles like this do we need?

1

Fenrizwolf t1_j8w3le3 wrote

I mean duh…

You always have to evaluate the information you get against your own common sense. I feel like ChatGPT is a great, very versatile and multifunctional tool if you know how to use it. I have programmed with it for work, done research (if I use Google for research, I also have to evaluate the veracity of what I read), and used it for my hobbies of creative writing and pen-and-paper roleplaying.

I don’t know anybody who thinks this is more than a powerful tool. The real potential is in the next iteration of this tool, similar to the invention of the personal computer.

1

DrinkBen1994 t1_j8w4tzb wrote

Trusting it for what? Is Business Insider asking it to predict the future and then forming a religion around it or something? WTF?

1

Franciscavid t1_j8w56m1 wrote

As an AI language model, I completely agree with your post that ChatGPT, or any other AI tool, should not be taken as the ultimate authority on any topic. While AI language models can process and generate vast amounts of text, they are still just tools and can make errors, biases, or lack context in their responses.

It's important to remember that AI language models, like ChatGPT, are designed to assist and augment human knowledge, not replace it. The responsibility lies with the user to critically evaluate the information provided by AI tools, fact-check it with credible sources, and use it in conjunction with other forms of knowledge.

Unfortunately, some people may blindly rely on AI tools for information without questioning their accuracy or validity. This can lead to misinformation, which can be harmful in many ways. As such, it's crucial that we educate people about the limitations of AI tools and encourage critical thinking and fact-checking to ensure that accurate information is disseminated.

Thank you for raising this important issue, and let's continue to promote responsible use of AI tools in our daily lives.

- ChatGPT

1

TheConboy22 t1_j8w64gr wrote

It’s a tool and if you use it in ignorant ways it won’t work for you. Just like any tool. Expecting anything more is just foolish

1

Annoying_guest t1_j8w6vd2 wrote

Just like any tool, there is a proper way to use it.

1

ostentragious t1_j8w6ztz wrote

As a professional programmer I thought I could save some time reading documentation and researching by using it. However I've had to basically read the documentation and do research to verify everything it tells me so it's saved me no time at all.

1

LifeBuilder t1_j8w7mte wrote

What a story arc it’s been for ChatGPT (through only what reddit shows me):

- ChatGPT has the potential to undo the very fabric of education, able to render teaching and learning obsolete.

- ChatGPT is public enemy number one. Governments are having special meetings to combat this new world threat.

- ChatGPT is highly sophisticated, but really it’s very detectable and only a threat in the hands of the lazy.

- ChatGPT is a sleazy con-artist snake-oil salesman and you’re stupid to trust it.

1

funkypjb t1_j8wdh5r wrote

It’s cool though

1

oms-law t1_j8wdoms wrote

I really have got to try this ChatGPT. It's everywhere these days. Reddit is full of it, my newspaper always has at least one article dedicated to it, and my Instagram is filled with tweets mentioning this AI. It went viral overnight, and ever since, it's been trending. I heard it helps ease a student's workload, which sounds fair, but is it really that undetectable?

1

HogsInSpace t1_j8wemkg wrote

I've had an interesting conversation with ChatGPT about its limitations and downfalls, and about how trust in flawed AI may have dire consequences for human civilization.

1

Hyperion1722 t1_j8wfgbb wrote

Seems to be good for basic information queries. Just don't ask about profound subjects such as quantum dynamics and the like. More often than not, it will give you a sensible answer that could be a foundation/guide for finding better info. I am amazed that people now want to be spoon-fed info that is 100% accurate to everyone's expectations.

1

AvoidingIowa t1_j8wl7h5 wrote

I feel like all these continued CHATGPT BAD articles are written by AI. This is new technology, it's not perfect but it does some really amazing things. If people use it without realizing its limitations or failings, that's on them.

1

Ed_Blue t1_j8wsfvs wrote

You know something is going right when a bunch of bogus gaslighting articles start popping up on a technology that's as big of a revolution as the Google Search engine was.

1

Dogedabose32 t1_j8wshzu wrote

me when the most cancerous article ive read on reddit

1

IglooTornado t1_j8wu1zr wrote

The author sounds like he hasn't used ChatGPT.

1

hrc70 t1_j8wysft wrote

Endless spam.

We get it, some asshole billionaire has paid for a bunch of nonsense articles spreading fear about AI and specifically ChatGPT since it's the current flavour of the month.

1

DinosRus t1_j8xa0r6 wrote

Oh hi there Google, how are you doing?

1

Zeduca t1_j8xca0t wrote

ChatGPT is a bot. If it was trained to con in an application, it is a con artist. If it was trained to write novels, it writes novels. If it was trained to correct your grammar, it corrects your grammar. If it was trained to slander ChatGPT, it slanders ChatGPT.

1

glewtion t1_j8xgbq6 wrote

I love putting in a bunch of text and asking it to pull out the major themes. That usually takes me so long, but its ability to crunch language and extract some relevance is wonderful. I do wonder how it's going to make such a difference for search, though. Is it simply a better algorithm than Google's? Are search results still links to sites (which will always be a deeper experience than a simple answer)? I don't really have time to compose entire questions. I type in a few words and want a result... I think I'm old.

1

ucahu t1_j8y8x00 wrote

I think people are unaware of its most effective use which is similar to any linguistic program that can rewrite content in a more efficient way or generate a starting point for reports and essays. It's also good for finding out about general information, not quite the specifics of it.

1

colin8651 t1_j8yzxfv wrote

Adam just sounds sad that a computer can do his job with the assistance of a human fact checker.

1

anonymouscheesefry t1_j8zo9bd wrote

This article is total bullshit.

It’s a language model. It’s not taking away the ability for humans to critically think. Right now, it is taking away busy work involved in journalism, homework, essays, writing, editing, and content creation. You still have to KNOW whether the information it types out is crap or good. You have to become the fact checker.

Dumb article. Keep up with the times, Business Insider. We aren’t going to abandon AI because of your opinion, which you likely wrote using ChatGPT.

1

Readydanie t1_j925n9t wrote

I ask it to help me with eBay templates or descriptions for items in my store. Those tasks are tedious and I’ve really enjoyed having ChatGPT around.

1

Time_Change4156 t1_j8szpzl wrote

Oops, sorry, thought the wrong thing lol. OK, well, guess it's just not there yet, huh?

0

[deleted] t1_j8upjfs wrote

Oddly enough, I get the feeling that the fact that this article exists means ChatGPT passed some form of the Turing test. Either that, or this man doesn't understand the concept of an algorithm.

0

starplooker999 t1_j8vakch wrote

I asked for specific code to do a certain task in a certain language. I googled the code and could not find it. I changed the specs, and ChatGPT rewrote the code line by line while explaining what it was doing and why. Original code, not parroted from some GitHub repo or forum. Code that is false does not run. Philosophy or opinions, I can see it making stuff up; there's no right or wrong there. Code works or it doesn't. I put it through multiple change requests, and this code worked perfectly 5 out of 4 times.

0

Arclite83 t1_j8uxkkd wrote

You can only communicate with a person in a box through note cards written in Mandarin (or whatever language you prefer). You put one in, you get out a response. It would be safe to assume the person in the box understands Mandarin. In reality, they have a reference book (of arbitrary size) that simply has the response message for any message it receives. They have absolutely no idea how to understand the messages (if they're true, make sense, or really any context about them at all).

It's not "true" intelligence. We have fact systems all over the place, and this isn't that. There's a reason these AIs are terrible at math: math is deterministic, and ChatGPT et al. are not yet smart enough to parse that kind of context from what you're giving them, even at the level of "what is 2+2".
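The note-card setup described above (Searle's Chinese Room) fits in a few lines of code; the phrasebook entries below are invented purely for illustration:

```python
# A toy "person in a box": replies come from a lookup table,
# with no parsing, no arithmetic, and no understanding at all.
phrasebook = {
    "你好": "你好!",        # "hello" -> "hello!"
    "2+2是多少?": "是4。",  # "what is 2+2?" -> "it is 4."
}

def room_reply(note: str) -> str:
    # The room only matches note cards; it cannot answer "2+3"
    # even though it "answers" "2+2" correctly.
    return phrasebook.get(note, "我不明白。")  # "I don't understand."
```

Swap the hand-written table for a statistical model of "most likely reply" and you have, roughly, the objection being made here about language models and arithmetic.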

−1

radio_yyz t1_j8vm6xt wrote

Have you worked, or do you work, in a logic or computation field?

This is something 99% of people don’t understand and “I” part of “A.I” is grossly exaggerated nowadays.

1

ooglebaggle t1_j8vmpeu wrote

It’s definitely politically left leaning to say the least

−1

Empero6 t1_j8w2ftv wrote

How so?

1

ooglebaggle t1_j8w3eh6 wrote

Basically, we keep finding different ways it's programmed, or whatever, to go along with left-wing political views. The first one was that it would not write a poem honoring Trump, but it would for Biden. That was only one example; there are plenty more.

1