
PeopleProcessProduct t1_j6dljh3 wrote

It's really cool tech, but ask it about subjects you know deeply and you will find enough errors to be concerned about this narrative.


DrQuantum t1_j6fm7in wrote

I don’t find this any different than normal society. Plenty of people pass tests and are idiots or unqualified still.


JustAnOrdinaryBloke t1_j6jw024 wrote

Yes, but they generally remain idiots for life. A computer system could potentially improve over time.


Wiseon321 t1_j6gfrw3 wrote

All they did was feed it the answers to the questions, training it to pass a test. It doesn’t truly UNDERSTAND. These posts are nonsense.


ImUrFrand t1_j6hab9b wrote

ChatGPT gives input-based results from ML. This is not really AI, but if the headlines keep saying it enough, people will believe it.


str8grizzlee t1_j6gehkm wrote

Not really. One of my colleagues asked ChatGPT for a list of celebrities who shared a birthday with him. The list was wrong - ChatGPT had hallucinated false birthdays for a number of celebrities.

Brad Pitt’s birthday is already in ChatGPT’s training data. More or better training data can’t fix this problem. The issue is that it outputs false information because it is designed to output words probabilistically, without regard for truth. Hallucinations can only be addressed manually, by reinforcing good responses over bad ones, and even if it gets better at outputting good responses, it will still hallucinate in response to novel prompts. Scale isn’t a panacea.
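To make the "probabilistic without regard for truth" point concrete, here's a toy sketch of next-token sampling. The counts and continuations below are entirely made up for illustration; the point is that nothing in the sampling step checks whether the chosen word is true.

```python
import random

# Toy next-token predictor: picks continuations by corpus frequency alone.
# The weights are invented for illustration -- note that nothing here
# verifies the output against reality.
corpus_counts = {
    ("Brad", "Pitt", "was", "born", "in"): {
        "December": 0.6,   # the frequent (and correct) continuation
        "January": 0.25,   # plausible-sounding but false
        "March": 0.15,     # also false
    },
}

def sample_next(context):
    """Sample a continuation in proportion to how often it appeared."""
    dist = corpus_counts[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Roughly 40% of samples here are confidently wrong:
print(sample_next(("Brad", "Pitt", "was", "born", "in")))
```

A model like this gets "better" by making the frequent continuation more likely, but a low-probability falsehood never becomes impossible, which is the hallucination problem in miniature.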


actuallyserious650 t1_j6ggw67 wrote

This is the point most people miss. Chat GPT doesn’t understand anything. It’d tell you 783 x 4561 = 10678 if those three numbers were written that way often enough online. It creates compelling sounding narratives because we, the humans, are masters at putting meaning into words that we read. But as we’re already seeing, Chat GPT will trot out easily disprovable falsehoods if it sounds close enough to normal speech.
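For the record, here's what the made-up multiplication in the comment above actually evaluates to. The "10678" figure is the commenter's hypothetical wrong answer; the check just shows how far a plausible-looking number can be from the true product.

```python
claimed = 10678         # the hypothetical wrong answer a model might parrot
actual = 783 * 4561     # what arithmetic (not word statistics) returns

print(actual)           # 3571263
print(actual == claimed)  # False -- sounding right is not being right
```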


erics75218 t1_j6gqte1 wrote

Bingo. And people who matter when it comes to being a huge pain in AI's ass will never learn.

Don't like ChatGPT responses? Then just talk to Truth Social's FreedomBOT that's been trained on Fox News media. Lol.

Ground truth for human-created historical documents, outside of scientific shit, probably doesn't exist?

Celeb birthdays are fun. There is so much BS out there about celebrities that the results must be hilarious on occasion.


DasKapitalist t1_j6hwc3s wrote

What's worse is that it's been deliberately lobotomized on a slew of topics, so at best it's repeating the received knowledge of whatever passes for the mainstream, which is fickle and frequently inaccurate.


BabaYagaTheBoogeyman t1_j6gi57f wrote

As much as we want to believe we have all the answers, the internet is full of misinformation and half-truths. If AI doesn't have the ability to distinguish fact from fiction, it will never replace humans.


fksly t1_j6h8hoh wrote

The ChatGPT approach? Yes. Nobody really into AI thinks it's a good way to get anything close to general-purpose intelligence.

In fact, in a way, it has been getting worse. It is better at bullshitting and appearing correct, but it got less correct compared to the last iteration of ChatGPT.


str8grizzlee t1_j6hker8 wrote

Of course I think the tech will improve; I just think accuracy is not solved by more training data.


JoaoMXN t1_j6h6yhg wrote

Yes really. ChatGPT is one of the least complex "AIs" out there; LaMDA, for example, which will be available in the future, has billions more data points than it does. And we'll get more and more AIs like that in a matter of years. I wouldn't underestimate AIs like you do.


str8grizzlee t1_j6hko4c wrote

I’m not underestimating future iterations but you’re totally missing my point - accuracy is not solved by more data. It is solved by better modeling.


Due_Cauliflower_9669 t1_j6gtawv wrote

Where does “better training data” come from? These bots are using data from the open web. The open web is full of good stuff but also a lot of bullshit. The approach ensures it continues to train itself on a mix of high-quality and low-quality data.


nicuramar t1_j6ph4w8 wrote

> Where does “better training data” come from? These bots are using data from the open web.

The raw data is from there, among other things, but there is more to it. It was trained using supervised learning and reinforcement learning.


Orpheus75 t1_j6girgr wrote

Considering the absolutely wrong answers I was given by a doctor, answers a simple Google search provided, I doubt AI will be worse than humans by the end of the decade.


Thebadmamajama t1_j6ggi8f wrote

💯. Passing tests notwithstanding, the error rate and limits start to show themselves quickly. I've also found cases where there's repetitive information that leaves you believing there aren't alternative options.


Happiness_Stan t1_j6gm4hr wrote

From my experience playing around with it, it can’t give definitive answers on anything that requires judgement, at least in my field. It always couches its answers in terms of “It depends on X, Y and Z”.


ParticleShine t1_j6h5p32 wrote

Okay but it's made absolutely insane advances in less than a couple of years, do you assume it's going to stop learning and evolving?


climateadaptionuk t1_j6h7p3k wrote

Yep, but as a BA I am already using it to accelerate my work, and that's great in itself. I do have to proofread and edit it, but it gets me at least 50% there so quickly. It's just like having a great assistant to bounce ideas off and get suggestions from. It's insane.


palox3 t1_j6hhw0a wrote

Because this was trained on general information from the internet. An expert system will be trained only on expert information.


PropOnTop t1_j6d7fgv wrote

Well, it passed the exam to be my best friend a long time ago : )

It saddens me that now it's famous, it rarely finds time to respond...


WhuddaWhat t1_j6daban wrote

Well, why don't you talk to your ChatGPT therapist about how to manage these feelings in a healthy way?


weirdgroovynerd t1_j6dmbdw wrote


The ChatGPT therapist was very helpful after Scarlett Johansson's voice broke up with me.


Theemuts t1_j6h1alh wrote

Yeah, don't do that.

> ChatGPT (Chat Generative Pre-trained Transformer)[1] is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning)[2] with both supervised and reinforcement learning techniques.

> Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[51]


WhuddaWhat t1_j6h2pa9 wrote

>the AI advised a simulated patient to commit suicide

Holy shit. Can you imagine being absolutely despondently suicidal, reaching out for help, and being told, by what FEELS like an all-knowing computer but is really just the most statistically likely response to the things you've said, that on reflection it really would be best to go ahead and end it?

That would probably be enough to deepen the crisis for anybody who is truly battling to get back a feeling of control in their life.


DasKapitalist t1_j6hwp55 wrote

If you're taking life altering advice from a probabilistic language algorithm, you're pretty well doomed to begin with.


verybakedpotatoe t1_j6dcnf5 wrote

I find it hard to believe that it can actually obtain any of these certifications when it can't perform any original analytical examination of anything.

Go ahead and try to ask it to perform analysis on publicly available information and it will fail. It can repeat analysis that anybody else has already done, and maybe recombine and repackage it in a useful summary, but it is wholly unable to answer a simple question like, "If I am leading a sow with seven piglets, how many feet are there?".

It's the saxophone backpack problem all over again; they never seem to have figured that part out. It's not really creative or intelligent, not yet. It's just brute-force fitting pregenerated reasons together until they fit better.

>EDIT: I tried to get it to explain the logic it used. I am somehow even more disappointed.


joec_95123 t1_j6dl36h wrote

There are 2 feet, OP's, assuming OP still has both feet. The sow and the piglets have hooves.


weirdgroovynerd t1_j6dm3sw wrote

This whole limb-counting exercise has become rather...

... boaring!

Can AI deliver fire dad jokes like that?!


gurenkagurenda t1_j6dkaah wrote

I use ChatGPT to solve analytical problems all the time. The key is that you have to tell it to show its work:

> If I am leading a sow with seven piglets how many feet are there? Take it step by step.

>> The sow has 4 feet

>> Each piglet has 4 feet

>> Total number of piglet feet = 4 * 7 = 28

>> Total number of feet = 4 + 28 = 32

It’s able to keep track of the analysis far better this way, and it also lets you check its work for errors.

Now obviously it left my feet out, but I think that’s a reasonable ambiguity, and not one that you usually find in professional exams. If I ask it to account for that, it gets it right.
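For what it's worth, the tally in the exchange above can be written out as a trivial script (toy numbers straight from the riddle; whether hooves count as "feet" is left aside, as in the thread):

```python
# The feet tally from the riddle, written out step by step.
# Assumes the person leading the pigs walks on two feet.
sow_feet = 4
piglet_feet = 7 * 4      # seven piglets, four feet each
handler_feet = 2         # the person leading them

print(sow_feet + piglet_feet)                 # 32 -- the answer without the handler
print(sow_feet + piglet_feet + handler_feet)  # 34 -- counting the handler
```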


verybakedpotatoe t1_j6dvcyq wrote

It didn't go so well for me. I need to master the special sauce to get better results.

32 is close and the reasoning is almost there, but the correct answer is 34 feet because I am leading them.

I started with the 'man from St Ives' riddle and tried to create a novel, simple version of it with a clear answer. I think I would have accepted 32 as a good effort, or even just 2 if it said they all have hooves, but 8 and 11 are just wrong.


TheRealDynamitri t1_j6flx29 wrote

What’s a “saxophone backpack problem”?

Tried googling but no joy


Nonya5 t1_j6ddypy wrote

The total number of feet would be 34, including your feet.

Now it gets it right. Humans never get things right on the first try either.


theduckspants t1_j6derw4 wrote

It just told me

"There are a total of 29 feet (8 for the sow, 7 x 4 for the piglets)."

So it thinks a sow has 8 feet and that 7x4 is 21

Then asked what is 7x4? It said 28

Then asked how many feet a sow has. It said typically a sow has four feet

Then reasked the original question and it said "There are a total of 29 feet (4 for the sow, 7 x 4 for the piglets)."


clintCamp t1_j6dkrfn wrote

Which is why it would be a great virtual doctor that can discern basic ailments treatable with over-the-counter medication or pharmaceuticals, but also direct you to a real doctor when it gets more complicated. Most normal human ailments are well documented, so other doctors can figure them out, which is why this would be great. The only thing I could see going awry would be when it tries to make things up to make you happy. It would probably be better at analyzing drug interactions and the like than real doctors, who screw up like the humans they are.


trentgibbo t1_j6fb4be wrote

You're missing the problem. It doesn't know whether something is more complicated or not. It might think a rather mundane issue is serious, or vice versa.


AgeEffective5255 t1_j6g3z9i wrote

It doesn’t stop it from encountering the same problems human doctors encounter: not having all relevant information. We blame the people all the time, but the structures in place allow for errors to happen. You can’t catch a patient who is hiding symptoms or unknowingly visiting multiple doctors most of the time. You think ChatGPT will?


clintCamp t1_j6gb0vd wrote

If it was set up right, it would read in their medical profile and full history, and then use its full medical knowledge to ask the patient relevant questions to narrow down potential causes, or refer them for specific testing, which would update their profile. Unlike the real medical field, ChatGPT medical could be updated with the latest research often, so it doesn't keep using outdated info like MDs in real life.


thunder-thumbs t1_j6e9rtb wrote

These headlines are so dumb. Those tests aren’t meant to check whether the information is correct; curriculum design and the scientific process do that. The tests check whether a human has learned the material, to give confidence they are supplementing their human judgement responsibly. ChatGPT taking the test, bypassing the human judgement aspect entirely, completely misses the point.


jortzin t1_j6gijyo wrote

Some say that the first person to be killed by chatgpt has already been born!


ErusTenebre t1_j6h1qhu wrote

Okay... so... couldn't GOOGLE pass a test? Like, literally, anyone allowed to essentially cheat on a test by using Google could pass these tests. Tests test knowledge, not skill. These are pretty dumb articles.


toast776 t1_j6dlhbm wrote

That Wharton exam was wildly easy and even then they gave the AI second and third chances on questions. These articles are so dumb.


Cranky0ldguy t1_j6dmav0 wrote

One would think Business Insider must own a TON of stock in OpenAI. They are pouring out lots of "news stories" of what it can do. Can't say a piece of software passing any data-driven test is all that impressive. Let me know when it can accurately interpret the overall meaning of the intangible.


cc-test t1_j6e0tlk wrote

How many times is this article going to be posted on Reddit?


Douglas_Fresh t1_j6eieh1 wrote

My god I am sick of hearing about this damn thing


Autotomatomato t1_j6dpfm4 wrote

Can't wait for all the lawsuits when people discover all their work being used to train these non-AIs.

I can't tell you how many cases I have seen so far of either undocumented updates/training or literal regurgitation of someone else's IP, like lifting entire sections from a Forbes mag article. Like, at least steal better sources, bros.


These bots will soon infest twitch and streaming sites with single entities managing hundreds of vtubers etc.


theduckspants t1_j6ddpb7 wrote

I have a person on my team with a masters degree in analytics from a prestigious institution and he can't solve any problems on his own, provides no value, and is on his way out.

Not saying it won't get better, but let's not pretend passing a test means anything in the real world. The only thing ChatGPT would have over my guy is the speed of its uselessness.


jlaw54 t1_j6e5g5l wrote

But isn’t that kind of the point? At least one of them?


theduckspants t1_j6f7phh wrote

To make bad stuff fast? I spend most of my planning time trying to find the stuff that will take the longest, to cap the damage.


Ebisure t1_j6dhdnm wrote

Remember when they said AI was gonna replace repetitive tasks like data entry? Guess professional jobs that involve regurgitating facts are gonna go too.


Chrismercy t1_j6eaat8 wrote

What I’ve been wondering is whether ChatGPT has access to the internet during these tests.


CGFROSTY t1_j6f1fac wrote

To be honest, can't anyone do these exams if they have access to google?


Flintoid t1_j6fw8hn wrote

So I read this, then asked GPT for a Michigan case that I could cite for the proposition that a plaintiff must prove causation in a product liability case. It cited a Pennsylvania case on the first try; the next three times, it cited case titles I couldn't locate online, with random citation numbers that also did not retrieve actual cases.

Might be a while before this thing writes my next brief.


greatdrams23 t1_j6epblw wrote

How does an AI bot doctor tell the difference between different rashes? Does it have a camera?

One day it will, but not yet.


ZeroBS-Policy t1_j6esusc wrote

Enough of this garbage already. I tried it. It’s stupid.


tomis28 t1_j6fb6qd wrote

Two AI lawyers arguing with each other, LOL


Temporary_Crew_ t1_j6fel0u wrote

This is the next scam techbros will be using to print money. Its usefulness is wildly overrated currently.

Still more useful than NFTs, though. Those will always be useless.


The-Real-Iggy t1_j6fwkma wrote

Such astroturfed bullshit nonsense. This "industry breaker" is good for menial tasks like lists and easy-to-google ideas. Ask it about complicated subjects or nuanced ideas and it’ll miss key bits of information. Hell, when I was shown how ‘amazing’ it was, I asked it to write an essay arguing against abortion (just for shits and giggles), and the entire essay didn’t even mention Planned Parenthood v. Casey or Griswold v. Connecticut whatsoever. Like, it’s not remotely capable of anything beyond surface-level writing.


rpgnoob17 t1_j6g43fy wrote

It’s very good to bullshit something, but in the end, it’s still bullshit.


Swirls109 t1_j6hppbr wrote

I think the way it currently works, you won't really be able to use it for any significantly factual results. It just conglomerates like things and spits them out. So if its sources are wrong, it will be wrong. If we don't have people to feed it factual sources, then how is it ever going to continue to work?


biglakenorth t1_j6dt7cg wrote

There sure is a lot of ChatGPT hype.


JoanNoir t1_j6e23rx wrote

This tells us more about the testing than ChatGPT.


HiImDan t1_j6e88y4 wrote

I think it'll be very useful as an assistant though. When I think of lawyers, all I imagine is just stacks and stacks of paperwork.. maybe that's just tv or whatever, but I bet it's a huge pain in the butt to generate all of those documents.

The angle that I don't see it being used for is helping out socially awkward people (like myself) figure out how to word things.


ilovepups808 t1_j6el9hf wrote

Ok, it passed. However, I assume that it had real-time assistance from the internet, or a graphing calculator with cheat sheets loaded on it. That’s a no-no in school. J/k


icecreampoop t1_j6fnirt wrote

If this makes these services accessible at a cheap price to lay people who can’t otherwise afford them, then why not?


reader960 t1_j6fob2t wrote

So it's on its way to becoming Johnny Sins


TennisLittle3165 t1_j6frkd2 wrote

Late to the party. How do you feed it the initial information about your problem? Does it come with pre-seeded info? It must know the dictionary, for sure.


Wherewithall8878 t1_j6fuizu wrote

I’m more interested in the rudimentary exams it’s failed so far.


littleMAS t1_j6fuzbw wrote

I have found it to be very human-like, giving different answers to the same question upon "Regenerate Response." Sometimes it acts like it is rethinking the question, just like someone who gives a quick answer without much thought and then provides a more thoughtful response when pressed further.


scifisreal t1_j6fwlqa wrote

That won't last long! One can dream, until they put a price tag on it and start limiting it down. We're still in the hook phase.

After all, everything is documented and attached to your User, so if the AI output is used illicitly, it can be traced.


truggles23 t1_j6fyjwb wrote

Johnny sins better watch out he’s got some competition now


whitenoise89 t1_j6g1aog wrote

ChatGPT is telling you something about your tests - it’s not about to replace much of anything, though.

Sorry corpo fuckboys. Pay me.


an_undecided_voter t1_j6g42y8 wrote

And software developer, data scientist, data engineer.


thecaptcaveman t1_j6g7x3y wrote

Bullshit. No AI can touch a person. No AI can do the field work. No AI can see human work. They only make use of the data we make.


Due_Cauliflower_9669 t1_j6gt6m8 wrote

And yet evidence is gathering that AI chatbots often produce incorrect and even plagiarized info. It is not omniscient. Yet.


chidoOne707 t1_j6gtbz8 wrote

Everyone painting this dumb software as Skynet, we are far from that.


Xlash2 t1_j6gxm3p wrote

Only if passing exams and being a professional are the same thing.


Suspicious-Noise-689 t1_j6h3rd8 wrote

So the same bot that told me you can’t fly in Minecraft while I’m watching my kid fly their character in Minecraft? Interesting


rolloutTheTrash t1_j6h568j wrote

So it’s gunning to become virtual Johnny Sins?


ares7 t1_j6h5jnc wrote

Yea but, Can ChatGPT become a chess master?


popey123 t1_j6h5ncc wrote

When your doctor says he did everything he could and the AI says otherwise.


vikas_agrawal77 t1_j6h8q87 wrote

I think its accuracy and reliability will be a significant concern for a while. AI is only as good as the training data fed into it and may not be great currently at understanding the subjective nuances or ambiguous data involved in law, medicine, and business. I would consider it a good support though.


penguished t1_j6hvwlm wrote

Here's all the training material right in front of you. Now, can you pass the test? I'd fucking hope so.


wolfgang187 t1_j6hwzn1 wrote

Society is cumming in its pants too hard over this application. It's great, but also incorrect a lot of the time.


Black_RL t1_j6i0ez2 wrote

Yet they couldn’t find a better name for it...


imnotknow t1_j6i52gz wrote

Wow, this is really triggering people. You would think we were talking about student loan forgiveness. There is a parallel there. Like suddenly, your expensive education is not so important or exclusive or special. Your fancy title is meaningless. The years of your life spent in college? Wasted.

"But it's not really AI it's machine language!" So what? the end result is the same.

"But it doesn't really know anything!" Again, so what?

"But it makes stuff up! It lies, It's wrong a lot!" SO WHAT? So is my doctor. It doesn't have to be perfect, just better and more consistent than a human.


Lifeinthesc t1_j6i8l8z wrote

Yes, please use ChatGPT as a doctor. I love to study evolution in real time.


Hummgy t1_j6i9acm wrote

Ask about video games, it will often be surface level and often have mistakes (no ChatGPT, DBD only has 1 killer).

Now multiply the seriousness of the topic by a fuck ton, like having it represent you in court or recommend medical procedures for surgeons, and I’m a lil afraid


E_Snap t1_j6flk54 wrote

insert stereotypical Redditor platitude that indiscriminately pans AI to make people on the verge of being made redundant feel better about their job security