Comments


cdtoad t1_j8hses9 wrote

The med student at the bottom of the class is called... Doctor

136

ALurkerForcedToLogin t1_j8hwdbe wrote

Yeah, but at least a med student is a sentient human being, not a statistical algorithm trying to guess the next word in the sentence blindly.

29

Ergok t1_j8ia83i wrote

All professionals are highly trained algorithms in a way. If the next word in the sentence is correct, does it matter where it came from?

36

venustrapsflies t1_j8jp5jv wrote

If I had a nickel for every time I saw someone say this on this sub I could retire early. It’s how you can tell this sub isn’t populated by people who actually work in AI or neuroscience.

It’s complete nonsense. Human beings don’t work by fitting a statistical model to large datasets; we learn by heuristics and explanations. An LLM is fundamentally incapable of logic, reasoning, error correction, confidence calibration, and innovation. No, a human expert isn’t just an algorithm, and it’s absurd that this idea even gets off the ground.

15

JoieDe_Vivre_ t1_j8jt9l7 wrote

The point they’re making is their second sentence.

If it’s correct, it doesn’t matter where it came from.

ChatGPT is just our first good stab at this kind of thing. As the models get better, they will outperform humans.

It’s hilarious to me that you spent all those words just talking shit, while entirely missing the point lol.

9

xxxnxxxxxxx t1_j8jzb3z wrote

If it’s ever correct, it’s by accident. The limitations listed above negate that point.

−4

JoieDe_Vivre_ t1_j8k184o wrote

It’s literally designed to get the answer right. How is that ever “by accident”?

8

venustrapsflies t1_j8kck2g wrote

No, it's not at all designed to be logically correct; it's designed to appear correct based on replications of the training dataset.

On the one hand, it's pretty impressive that it can do what it does using nothing but a statistical model of language. On the other hand, it's a quite unimpressive example of artificial intelligence, because it is just a statistical language model. That's why it's abysmal at even simple math and logic questions, things that computers have historically been quite good at.

Human intelligence is nothing like a statistical language model. THAT is the real point, the one that both you and the OC, and frankly much of this sub at large, aren't getting.

7

xxxnxxxxxxx t1_j8k2m48 wrote

No, you’re missing how language models work. They are designed to guess the next word, and they can’t do any more than that. This works because language is a subjective interface, far removed from logical correctness.
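Here’s a toy sketch of what “guess the next word” means in practice (made-up vocabulary and probabilities, nowhere near the real model’s scale):

```python
import random

# Toy "language model": all it knows is, given a context, a probability
# distribution over the next token. These numbers are invented for
# illustration; a real LLM derives them from billions of learned weights.
next_token_probs = {
    ("the", "patient", "was"): {
        "diagnosed": 0.4, "treated": 0.3, "stable": 0.2, "purple": 0.1,
    },
}

def sample_next(context):
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next(("the", "patient", "was")))
# Prints a plausible-sounding word, chosen by probability, not by reasoning.
```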

4

MaesterPycell t1_j8ky26c wrote

https://en.m.wikipedia.org/wiki/Chinese_room

This thought experiment addresses the issue at greater length.

Additionally, I’d recommend that anyone interested in AI read The Fourth Age, a philosophy book about AI. It gives an accessible explanation of what it would mean to be truly AGI, and of the steps we’ve made so far and will still need to make.

Quick edit: I also don’t think you’re wrong. This AI couldn’t explain what it’s saying; it has just learned to take the code behind it and spit out something akin to human language. However garbled or incoherent that is, the machine behind it doesn’t care, as long as it suits its training.

2

jesusrambo t1_j8kk7i2 wrote

Lmao, this is how you can tell this sub is populated by javascript devs and not scientists

You can’t claim it’s fundamentally incapable of that, because we don’t know what makes something capable of that or not.

We can’t prove it one way or another. So, we say “we don’t know if it is or isn’t” until we can.

0

venustrapsflies t1_j8kkovy wrote

I am literally a scientist who works on ML algs for a living. Stop trying to philosophize your way into believing what you want to. Just because YOU don’t understand it doesn’t mean you can wave your hands and act like two different things are the same.

1

jesusrambo t1_j8knlv9 wrote

You’re either not a scientist, or a bad one. You’re just describing bad science.

3

venustrapsflies t1_j8l4ftf wrote

No, bad science would be pretending that just because you don’t understand two different things, they are likely the same thing. Despite what you may believe, these algorithms are not some mystery that we know nothing about. We have a good understanding of why they work, and we know more than enough about them to know that they have nothing to do with biological intelligence.

0

jesusrambo t1_j8l57sr wrote

Can you please define exactly what biological intelligence is, and how it’s uniquely linked to logic and innovation?

1

venustrapsflies t1_j8l5t2n wrote

Are you actually interested in learning something, or are you just trying to play stupid semantic games?

0

jesusrambo t1_j8l6y32 wrote

If you can justify your perspective, and you’re interested in discussing it, I would love to hear it. I find this topic really interesting, and I’ve formally studied both philosophy and ML. However, so far nobody’s been able to provide an intelligent response without it devolving into, “it’s just obvious.”

Can you define what you mean by intelligence, how we can recognize and quantify it, and therefore describe how we can identify and measure its absence in a language model?

0

venustrapsflies t1_j8l8o54 wrote

How would you quantify the lack of intelligence in a cup of water? Prove to me that the flow patterns don’t represent a type of intelligence.

This is a nonsensical line of inquiry. You need to give a good reason why a statistical model would be intelligent, for some reasonable definition. Is a linear regression intelligent? The answer to that question should be the same as the answer to whether a LLM is.

What people like you do is conflate multiple very different definitions of a relatively vague concept like “intelligence”. You need to start with why on earth you would think a statistical model has anything to do with human intelligence. That’s an extraordinary claim; the burden of proof is on you.

1

jesusrambo t1_j8laua9 wrote

I can’t, and I’m not making any claims about the presence or lack of intelligence.

You are making a claim: “Language models do not have intelligence.” I am asking you to make that claim concrete, and provide substantive evidence.

You are unable to do that, so you refuse to answer the question.

I could claim “this cup of water does not contain rocks.” I could then measure the presence or absence of rocks in the cup, maybe by looking at the elemental composition of its contents and looking for iron or silica.

As a scientist, you would understand that to make a claim, either negative or positive, you must provide evidence for it. Otherwise, you would say “we cannot make a claim about this without further information,” which is OK.

Is a linear regression intelligent? I don’t know, that’s an ill-posed question because you refuse to define how we can quantify intelligence.

2

HappierShibe t1_j8ju99x wrote

This is an asinine, 'tell me you don't work with neural networks without telling me you don't work with neural networks' answer to a really complex problem.

0

Deep_Stick8786 t1_j95ygd2 wrote

I am a physician. That’s just the likely order in which we get replaced. Much of good medical decision making is already algorithmic, just done by humans rather than AI for now. Surgical robots are quite advanced in their movement capabilities; it’s only a matter of time before an AI can replace the decision-making side of operating.

1

Deep_Stick8786 t1_j8ida3u wrote

Radiologists are going to go first. Then anyone not performing surgery. Then the robots come. We will all need to become engineers

−3

JackSpyder t1_j8iwxcr wrote

Don't worry, us engineers will be replaced long before then.

7

thebardingreen t1_j8k8r8x wrote

Truth.

ChatGPT cannot write an application. But it CAN write blocks of code that I, with my knowledge, can assemble into an application, and it can do that much faster than I can. And those code blocks are often better thought out than what I would have written.

Working with it has made my coding speed go up by about 30% AND made my code cleaner. I still have to fact check and debug everything it does (it gets things hilariously wrong sometimes). As I get more used to using it, I imagine my output will go up even more.

This thing is very early. I could imagine a model that uses ChatGPT, in its current state, as a sort of sub processor... Like this: a human being defines an application, the model (trained on a bunch of open source software) looks for applications similar to what was defined, then starts asking ChatGPT (as a separate layer) for blocks of code it can assemble into the final app. When it runs into emergent bugs where these blocks conflict, it asks ChatGPT to solve the underlying problem. Then it runs the final output through a bunch of benchmarks and optimization layers. It could even ask something like Stable Diffusion for graphical components.
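A rough sketch of that loop (ask_llm() here is a hypothetical stand-in for whichever completion API you’d actually wire up, and the prompts are purely illustrative, so treat this as a napkin diagram rather than a working pipeline):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT or any other code-generating model."""
    return f"# code the model returned for: {prompt[:50]}..."

def build_app(spec: str, components: list[str]) -> str:
    # 1. Ask the model for one block of code per component of the spec.
    blocks = {name: ask_llm(f"Write the {name} module for this app: {spec}")
              for name in components}

    # 2. Ask it to glue the blocks together into one application.
    assembled = ask_llm("Combine these modules into a single application:\n\n"
                        + "\n\n".join(blocks.values()))

    # 3. In practice this is where benchmarks, a pentest, and a human
    #    fact-checking/debugging pass would go before shipping anything.
    return assembled

print(build_app("a note-taking web app", ["storage", "api", "frontend"]))
```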

I actually don't think that kind of capability is that far off. I can imagine it and I think it could be assembled from parts we have right now, given time and effort. And yeah, the final result might need some human input to clean it up (and pentest it!), but the effort saved would be phenomenal.

The long term effects of this tech on the economy are going to be... Weird. Probably painful at first. The capitalist mindset is not well equipped to deal with the disruption these kinds of tools can cause. But it's also going to cause an explosion of human expression and creativity, and give people the ability to do things they couldn't before (thanks to Stable Diffusion, I can make art in a way I never could before, I haven't even begun to scratch the surface of what I might want to do with that). What an exciting, fun and scary time to be alive.

4

HappierShibe t1_j8ju12k wrote

Last time I went to the doctor's office, I got exactly 45 seconds with an actual doctor, and about three rushed sentences of actual conversation. Our healthcare system is so FUBAR'd at this point, I'd be willing to try an AI doctor if it means I actually get some healthcare.

2

AutomaticOrange4417 t1_j8lb87k wrote

You think doctors don't use a statistical algorithm to pick their words and medicine and treatments?

2

newtonkooky t1_j8iatpb wrote

Low-level doctors are, imo, the most likely to be replaced by AI. A general practitioner has told me fuck all that I didn’t already know from just googling in the last 10 years. These days I already go to a lab on my own to check my blood work.

−9

ALurkerForcedToLogin t1_j8iitwl wrote

Anyone can use Google, but I pay my doctor for their decade of med school and years of experience. They have the experience to know that it is actually not cancer, no matter what Web MD is telling me.

8

_Roark t1_j8itnhk wrote

and how many shitty doctors did you have to go through before you got that one?

1

ALurkerForcedToLogin t1_j8iutus wrote

None. Your first appointment with a doctor is a chance to get to know each other a little bit. You provide info about your health history, current status, and future goals, and you ask questions about how they will be able to help you meet your health goals. If they don't sound like they're going to be able to do what you need, then no hard feelings, they're just not the doctor for you, so you find another. Doctors have different specialties and focuses, and they have different approaches and proficiencies. Your job as a patient is to find a doctor with skills that align to your needs.

3

Call_Me_Thom t1_j8j7z45 wrote

No single doctor has knowledge about everything but an AI does. You do not need to find a doctor that works for you or a doctor whose schedule fits yours when AI takes care of you. It’s available for everyone anytime and also has the whole human knowledge base.

3

wam654 t1_j8k562e wrote

Available for everyone any time? Fat chance. Computation time on a supercomputer isn’t free. The dataset isn’t free. Liability isn’t resolved. The team of PhDs who built it didn’t work for free. And it only has the portion of human knowledge it has a license to access or that its dataset was trained on. Most of the data it would need is not public domain and would likely be heavily guarded and monetized.

That’s just the dataset. The AI doctor still needs data about you to draw conclusions. That means lab tests, scheduling, cost considerations, etc.

2

_Roark t1_j8izvye wrote

I don't know why you're talking to me like I'm 5 and have never dealt with doctors before.

I could say more, but I doubt there would be any point.

0

ALurkerForcedToLogin t1_j8j2ncm wrote

I'm not talking to you like you're five. I answered your question. If you don't like it, then don't read it.

1

Ghune t1_j8idpp9 wrote

Replace?

No touching? I don't want this doctor.

2

Cybiu5 t1_j8jxbkr wrote

GPs are genuinely useless unless you're after paracetamol or a doctor's note

1

Littlegator t1_j8iq113 wrote

Ironically probably farthest from the truth. Generalists have to know the most. Specialists are far more likely to get bored with their career because they "learn it all" pretty quickly, and practice-changing updates happen like once a year if even.

−2

Littlegator t1_j8iprqo wrote

The bottom of the class often doesn't pass STEP 1 first go-around. Like 5% to 10% of students fail their first attempt.

However, the STEPs are almost entirely a complex word-association game. Basically an LLM's wet dream. If it was trained on content-specific data, it could probably get 100%.

11

Uristqwerty t1_j8j9tl9 wrote

The worst doctor leaving school will continue to learn throughout the rest of their career, shaping what they review to cover their known weaknesses. This, meanwhile, is current peak AI, and it has already finished learning everything it can from its dataset.

2

Deep_Stick8786 t1_j8id3yj wrote

Technically true, but meaningfully, if they can't get licensed, no one at Deloitte will be calling them "doctor" at work.

1

Cat_stacker t1_j8hqxc4 wrote

So it just needs the bad handwriting then.

77

Deep_Stick8786 t1_j8icjli wrote

Nope! EMR has largely removed the demand for shitty doctor handwriting

15

Lionfyst t1_j8i1477 wrote

A recent paper (around Reddit somewhere) demonstrated that LLMs can do all these novel things, like tell stories, make poems, do math, or make charts, despite not being explicitly designed for them, because the massive training organically creates all kinds of sub-models in their network that can handle those types of patterns.

ChatGPT is bad at math because its training was insufficient to give it a model that is reliable.

It's not going to be too long before someone feeds an LLM better math training, and/or creates a hybrid that uses some other kind of technique for the math part and hands math questions off to the other engine.

41

__ingeniare__ t1_j8i7pmf wrote

That has already happened; there's a hybrid ChatGPT/Wolfram Alpha program, but it's not available to the public. It can understand which parts of the user request should be handed off to Wolfram Alpha and combine the results into the final output.

37

mizmoxiev t1_j8j3u9v wrote

Dang that's neat as heck, I can't wait for that

3

ixid t1_j8i2a6l wrote

We can't be that far away from AIs where you can feed them maths textbooks and then papers just as you would a human.

8

endless_sea_of_stars t1_j8ik6a6 wrote

Meta released a paper about Toolformer (yeah, they probably need to workshop that name), which lets LLMs call out to APIs like a calculator. So instead of learning how to calculate a square root, the model simply calls a calculator.
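A stripped-down sketch of the idea (the [Calculator(...)] syntax mirrors the paper's examples, but this is a toy, not the actual Toolformer implementation):

```python
import math
import re

def run_calculator_calls(model_output: str) -> str:
    """Replace [Calculator(...)] markers emitted by the model with computed results.

    Simplified version of the Toolformer idea: the LLM learns *when* to emit a
    tool call; ordinary code does the arithmetic it can't do reliably itself.
    """
    def evaluate(match):
        expr = match.group(1)
        # Only allow a tiny whitelist of names so eval() isn't wide open.
        result = eval(expr, {"__builtins__": {}}, {"sqrt": math.sqrt})
        return str(result)

    return re.sub(r"\[Calculator\((.*?)\)\]", evaluate, model_output)

# Example: text a model might produce, with the call embedded mid-sentence.
print(run_calculator_calls("The square root of 1764 is [Calculator(sqrt(1764))]."))
# -> "The square root of 1764 is 42.0."
```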

This is a pretty big deal but hasn't got a lot of attention yet.

7

semitope t1_j8ihbzl wrote

It's a little weird for AI to be bad at math.

2

mattsowa t1_j8iqq5u wrote

Why?

5

semitope t1_j8jzisc wrote

Because it's basic functionality for computers. And it's the easiest thing, because the solution is not ambiguous. But it sounds like it just isn't able to put the question into mathematical form as given.

−3

theucm t1_j8mobi7 wrote

"Its a little weird for a person to be bad at chemistry."

"Why?"

"Because it's a basic function of living things. "

3

semitope t1_j8n9mlm wrote

That's a completely wrong comparison. A person isn't born automatically able to do math. Computer processors all have arithmetic logic units.

−1

theucm t1_j8obg7s wrote

You missed my point, I think.

I'm saying that expecting a language model to be intrinsically good at math because it runs on a processor with arithmetic logic is like expecting a living thing to be good at chemistry because our brains "run" on chemical and electrical impulses. The language model only has access to the knowledge it has been trained on, which apparently didn't include math.

2

mattsowa t1_j8kwdrj wrote

I mean, if you know how it works, it isn't surprising at all really.

I find the fact that it's basic functionality for a computer irrelevant.

2

BirdLawyerPerson t1_j8ii7wd wrote

They're bad at word problems, which require recognizing that they're being presented with a math problem at all, before determining the right formula to apply and calculating what the answer should be.

2

CovertMonkey t1_j8izcxa wrote

The math model was probably trained on those Facebook math problems involving order of operations that everyone argues about

0

humptydumpty369 t1_j8hwavp wrote

Not every doctor is at the top of the class. Some doctors barely passed. It's still incredibly impressive how quickly AI is advancing.

30

Ghune t1_j8ie2p5 wrote

But the best doctors are not necessarily the ones who got the highest mark.

Like the best teacher isn't the one who knows maths at the highest level. There is more than one competency.

13

__ingeniare__ t1_j8i84uj wrote

Given the current pace, there will likely be some model that scores in the top 50% (or even top of class) of medical students within one year from now.

5

level_17_paladin t1_j8ihd3a wrote

Some are even morons.

Ben Carson stands by statement that Egyptian pyramids built to store grain

>Republican presidential candidate Ben Carson on Thursday stood by his belief that Egypt’s great pyramids were built by the biblical figure Joseph to store grain — not as tombs for pharaohs.

Carson became the director of pediatric neurosurgery at the Johns Hopkins Children's Center in 1984 at age 33, then the youngest chief of pediatric neurosurgery in the United States.

4

downonthesecond t1_j8j84zk wrote

Believe it or not, Congress is full of educated people, many are even Ivy League alumni.

1

plartoo t1_j8iitkc wrote

Makes sense, because most medical knowledge requires memorization (of a bunch of mostly useless, never-used-again-after-the-exam stuff), and AI should have no problem absorbing and forming a decent “memory” of it. Better, AI should be able to adapt easily and quickly to the latest evidence, as opposed to human doctors, some of whom never update their knowledge after med school/residency.

12

AlverezYari t1_j8i1386 wrote

People need to realize that the version of this model we publicly have access to is the shitty one, and let that sink in: think about what that actually means when you see it pulling off this and other similar feats.

11

JackSpyder t1_j8ix28a wrote

If this is what it can do with a few years of development, add another decade... damn, son.

1

pinkfootthegoose t1_j8i6k26 wrote

This AI stuff is like Windows 3.1. When you first saw it used, it was an "aw crap, this is gonna be big" moment. Newer versions will be iteratively better.

8

SerenityViolet t1_j8jr7ya wrote

I agree. The GUI turned clunky, code-heavy machines into user-friendly devices. I think this will change our tools in a similar way.

It still needs large accurate datasets to work though, so I don't think it will replace as many jobs as some people think.

2

semitope t1_j8idcga wrote

Medicine is fact-heavy, and this is a fancy search engine. Of course it would pass.

8

SuperSimpleSam t1_j8k29iq wrote

We were discussing at work today how something like ChatGPT would be great for looking through software documentation and giving us easy-to-follow instructions for how to access a feature.

2

Graega t1_j8i50tw wrote

Call me when it has to study ahead of time using a single text instead of being fed huge numbers of sources; has to identify what to store in a limited-size database; and has to take the test without any internet access or ability to look things up beyond what it decided to store. I'll be impressed if it passes then.

7

LordKeeper t1_j8i9glc wrote

But it didn't have to study at all for this exam, and that's kind of the point. A human doctor, even one that scored in the 95th percentile on the USMLEs, couldn't scrape by with a passing grade on a Law or MBA exam. ChatGPT, in its basic form, can do passably in any one of these areas, without needing to acquire additional material from the internet or elsewhere. When models like these become able to "study" on their own, and even identify what they need to study to advance in a field, they're going to take over multiple professions at once.

11

semitope t1_j8idqk3 wrote

>without needing to acquire additional material from the internet or elsewhere

It doesn't constantly search the internet to come up with its answers? It needs data. All software needs data. I'm not sure how it works, but either it has access to the internet to look through, using indexing like Google, or their servers have stored massive amounts of data for it to be relevant in different areas.

I doubt AI can do well in fact heavy fields like law and medicine with no way of knowing the facts.

−4

GondolaSnaps t1_j8in58c wrote

It was trained on massive amounts of internet data, but it isn’t online.

If you ask it, it’ll even tell you that all of its information is from 2021 and that it has no knowledge of anything after that.

For example, if you ask it about Queen Elizabeth it’ll describe her as the current monarch as it has no idea she’s already dead.

9

MilesGates t1_j8jehab wrote

>It was trained on massive amounts of internet data, but it isn’t online.

Sounds kind of like doing an open-book test where you can read the textbook to find the answers but you can't Google for them.

1

jagedlion t1_j8jxe3s wrote

Common misconception. It doesn't keep the data around; it forms connections in its model. It's sort of like memorization in that way, except it doesn't even store any of the raw information it was trained on. It only stores the predictive model.

This is also why you can implement AI vision algorithms on primitive microcontrollers. They don't have the computational power to solve for the AI model, but once the powerful computer calculates the model, a much simpler one can use it.
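A toy illustration of that training/inference split (the "weights" here are two hand-picked numbers standing in for what a big machine would actually learn):

```python
# Training is the expensive part; it happens once, on big hardware, and its
# only output is numbers. Pretend these two were learned elsewhere:
weight, bias = 1.8, 32.0   # a made-up Celsius-to-Fahrenheit "model"

# Inference is just arithmetic with those stored numbers, which is why it
# fits on a microcontroller that never sees the training data at all.
def predict(celsius: float) -> float:
    return weight * celsius + bias

print(predict(100.0))  # 212.0
```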

2

semitope t1_j8k09qi wrote

It sounds like about the same thing: given the data beforehand vs. looking for it now. Fact is, it cannot produce useful responses when it comes to facts without exposure to the data. It would be like someone talking about something they know absolutely nothing about, which might be why it's sometimes accused of confidently making things up.

0

jagedlion t1_j8k0uqa wrote

I mean, humans can't give you information they haven't been exposed to either. We just acquire more data during our normal day-to-day lives. People also do their best to infer from what they know. They are more willing to encode their certainty in their language, sure, but humans also can only work off of the knowledge they have and the connections they can find within it.

4

semitope t1_j8k5n5n wrote

Humans aside, saying it doesn't need to acquire additional information from the internet or elsewhere isn't saying much if it already acquired that information from the internet and elsewhere. It already studied for the exam.

0

jagedlion t1_j8kbruo wrote

Part of model building is that it compresses well and doesn't need to store the original data. It consumed 45TB of internet, and stores it in its 700GB working memory (the inference engine can be stored in less space, but I can't pin down a specific minimal number).

It has to figure out what's worth remembering (and how to remember it) without access to the test. It studied the general knowledge, but it didn't study for this particular exam.

2

jagedlion t1_j8jy2ud wrote

So it does many of the things you listed.

It greatly compresses the training database into a tiny (by comparison) model. It runs without access to either the internet or the original training data. The ability for it to run 'cheaply' is directly related to how complex the built model is. Keeping the system efficient is important, and that's a major limit on the size of what it can store.

It was trained on 45TB of internet data, compressed and filtered down to around 500GB, already a very limited-size database. Then it actually goes further and 'learns' the meaning, so this is stored as 175 billion 'weights', which is about 700GB (each weight is 4 bytes). Still, that's a pretty 'limited' inference size. Not run-it-on-your-own-computer size, but not terrible. They say it costs a few cents per question, so it's pretty cheap compared to the cost of actually hiring even a poor-quality professional.

It does therefore have to 'study' ahead of time.

The only thing you listed that it doesn't do is study from a single text; it reads many sources, not just one. But the rest? It already does it.

2

WaitingForNormal t1_j8i8rdu wrote

Maybe the computer needs a calculator…shut up uncle greg, you’re drunk again.

5

Innox14 t1_j8hy19e wrote

Can confirm about the last part

3

Trojann2 t1_j8ic343 wrote

OpenAI's ChatGPT is a natural language model, not a mathematical model.

2

Ghune t1_j8idj7w wrote

I don't think that passing an exam is close to being a doctor. That's a remarkable achievement, but that's only passing the exam.

Concretely, it won't replace doctors anytime soon. I'm not going to tell a machine how I feel and what my problems are, only to have it tell me what I have without examining me. It could help narrow down the problem, which is the main advantage, I guess.

2

vagabond_ t1_j8irw0n wrote

As I understand it they're less concerned with making an AI doctor and more concerned with the increasingly likely scenario of someone using an AI to cheat on a licensing exam.

1

[deleted] t1_j8ht2oi wrote

[deleted]

1

Representative_Pop_8 t1_j8hv15n wrote

The article is about ChatGPT. Anyone that's used it even a little knows it is great at writing but very bad at math.

1

GuyDanger t1_j8i5833 wrote

That sounds like my wife, she's a nurse :)

1

Feeling_Glonky69 t1_j8ikjcq wrote

You know what you call the med student to graduate at the very bottom of their class?

Doctor.

1

-SPM- t1_j8ikrr3 wrote

A lot of the basic bio questions I have asked it have been wrong. I just assumed it was bad at bio

1

UnrequitedRespect t1_j8imc5g wrote

How is a calculator bad at math?!? That should have been the first thing it was good at!

1

vagabond_ t1_j8irc8n wrote

Because AI still has a hard time turning natural language descriptions of quantities into numbers. For instance, it has a very hard time with the difference between 'two' and 'two more'.

5

accountonbase t1_j8j3arm wrote

To be fair, most people struggle with that.

I can't tell you the number of times I've seen "50% increase" used correctly, only for somebody to use "200% increase" to mean 2x and not 3x the original number (a 200% increase on 100 is 300; doubling it is a 100% increase).

2

HappierShibe t1_j8jwotf wrote

Because it's not a calculator, and math is actually very very hard.

3

UnrequitedRespect t1_j8jx3jr wrote

You're telling me! I still can't even figure out taxes without an accountant.

2

spacegh0stX t1_j8iutpz wrote

Yeah I tried to do some basic linear algebra using it and it did not do well.

1

Severedghost t1_j8ivwj9 wrote

I'd probably pass too if my brain was connected to the internet.

1

designer_of_drugs t1_j8iw9v7 wrote

Insurance companies are going to make you talk to these things before approving a trip to the doctor. I guarantee it will happen sooner than you think.

1

my5cent t1_j8j5dcv wrote

Maybe ChatGPT can open its own insurance company and lower costs for people. 😀

1

designer_of_drugs t1_j8j5u2v wrote

That’s not how insurance in the US works. Fundamentally, it makes money by denying service, and that will be its focus. Meanwhile, costs will rise as the cost of healthcare reliably outpaces both inflation and real wage growth. So you’ll pay more for even fewer services. It will not err to the benefit of people.

Maybe the evil world-ending AI gets its start as a pharmacy benefit manager.

2

DaVisionary t1_j8j8bri wrote

Why don’t they ask ChatGPT to identify math problems and then rephrase them for a dedicated tool like Wolfram Alpha to solve?

1

Suq_Madiq_Qik t1_j8j9cvb wrote

Self-driving cars still crash. It's still a developing technology, and I have no doubt that in the years to come it will be capable of doing some extraordinary things.

1

FalseTebibyte t1_j8jfmph wrote

For the listeners: The Floating Point Math is still on the tilt scale. Move the deciquad back four qubits and recommute. Yoda said something about there being a cross feedback loop with a monster tin can.

1

lurker512879 t1_j8k8wbs wrote

It's alright at explaining math. I was asking it earlier to explain Riemann and Hilbert spaces, as well as the relationship between eigenvalues and eigenvectors, and it was spot on at making the explanation simple.

1

mtrash t1_j8kw473 wrote

Passing is passing

1

renome t1_j8lggaw wrote

ChatGPT is not an AI, as per its own creators.

1

huurb69 t1_j8m2mho wrote

Definitely agree with the math part

1

JimAsia t1_j8m9rzi wrote

I would have thought that math would be the easiest thing for AI.

1

echohole5 t1_j8md0wr wrote

And it wasn't even specifically trained for this. It's just knowledge/skill it happened to have picked up from the training data.

I can't wait to see what an LLM can do when it's specifically trained on all medical text.

A lot of professionals are going to have some amazing tools to make their jobs easier within a few years. Those tools could definitely mean fewer of those professionals are needed.

Could many doctors be replaced with AI and some medical techs? Could this make healthcare finally affordable? We'll see.

1

mrcake123 t1_j8n4nza wrote

Weird. Would have thought math to be the easiest for it

0

JodaMythed t1_j8i1s6m wrote

They say this like doctors don't use Google to look up stuff.

−2