Submitted by balancetheuniverse t3_11rc0wa in dataisbeautiful
Jackdaw99 t1_jc9244f wrote
Reply to comment by Empty_Insight in Exam results for recently released GPT 4 compared to GPT 3.5 by balancetheuniverse
But surely it must rate the sources it uses. Besides, it seems to be very good at SAT math, which is obviously easier, but would rely on the same mimicry.
thedabking123 t1_jc93iop wrote
That's not the way the system works.
You're using symbolic logic; its thinking is more like an intuition, a vastly more accurate intuition than ours, but limited nonetheless.
And the kicker? It's an intuition for which words, characters, etc. you are expecting to see. It doesn't really logic things out; it doesn't hold concepts of objects, numbers, mathematical operators, etc.
It intuits an answer, having seen a billion similar equations in the past, and guesses at what characters you're expecting to see based on pattern matching.
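The pattern-matching idea above can be illustrated with a deliberately tiny toy: a bigram frequency table that "predicts" the next token purely by counting what followed it in training text. (Real models like GPT use neural networks over billions of parameters, not frequency tables; this sketch only shows the principle that prediction can come from seen examples rather than from doing arithmetic.)

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which token follows which in the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently seen continuation -- pure pattern
    matching, no concept of numbers or operators."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = [
    "2 + 2 = 4",
    "2 + 3 = 5",
    "2 + 2 = 4",
]
model = train_bigrams(corpus)
print(predict_next(model, "="))  # "4": seen twice after "=", vs. "5" once
```

The model answers "4" after "=" not because it computed anything, but because "4" was the most common continuation in its training data, which is exactly why more training examples make it look smarter without it ever "doing math."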
Jackdaw99 t1_jcawuaf wrote
I can tell your reply wasn't written by GPT. The possessive "its" doesn't take an apostrophe....
jk
thedabking123 t1_jcbbado wrote
lol, it may make the same mistake if enough people on the internet make it... OpenAI trains the model on web-scale data.
Empty_Insight t1_jc94vo9 wrote
Even if the source is 'right,' it might not pick up the context necessary to answer the question appropriately. The fact that different prompts produced different answers to what is effectively the same question might support that idea.
Maybe ChatGPT could actually give someone the answer of how to make meth correctly if given the right prompt, but in order to even know how to phrase it you'd need to know quite a bit of chemistry- and at that point, you could just as easily figure it out yourself with a pen and paper. That has the added upside of the DEA not kicking in your door for "just asking questions" too.
As far as calculus goes, I can imagine some of the inputs might be confusing to an AI that is not specifically trained for them, since the minutiae of formatting are essential. There might be something inherent to calculus that the AI has difficulty understanding, or it might just be user error. It's hard to say.
Edit: the other commenter's explanation is more correct; listen to them. My background in CS is woefully lacking, but their answer seems right based on my limited understanding of how this AI works.