Viewing a single comment thread. View all comments

Lionfyst t1_j8i1477 wrote

A recent paper (linked around Reddit somewhere) demonstrated that LLMs can do all these novel things like tell stories, make poems, do math, or make charts despite not being explicitly designed for them, because the massive training organically creates all kinds of sub-models in their network that can handle those types of patterns.

ChatGPT is bad at math because its training was insufficient to give it a model that is reliable.

It's not going to be too long before someone feeds an LLM better math training, and/or creates a hybrid that uses some other kind of technique for the math part and hands math questions off to the other engine.
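The hand-off idea can be sketched in a few lines. This is a minimal toy router, not any real product's design: the function names and the regex-based "is this arithmetic?" check are hypothetical, and the language model is stubbed out.

```python
import re

def solve_math(expr: str) -> str:
    # Hypothetical hand-off target: evaluate plain arithmetic directly
    # instead of asking the language model to guess at it.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not a plain arithmetic expression")
    return str(eval(expr))  # the whitelist above restricts eval to arithmetic

def answer(question: str) -> str:
    # Toy router: trailing arithmetic goes to the math engine,
    # everything else to a (stubbed) language model.
    m = re.search(r"([\d\s+\-*/().]{3,})\?*$", question)
    if m and any(op in m.group(1) for op in "+-*/"):
        return solve_math(m.group(1).strip())
    return "LLM response for: " + question
```

A real hybrid would let the model itself decide what to delegate rather than pattern-matching, but the division of labor is the same.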

41

__ingeniare__ t1_j8i7pmf wrote

That has already happened; there's a hybrid ChatGPT/Wolfram Alpha program, but it's not available to the public. It can figure out which parts of the user's request should be handed off to Wolfram Alpha and combine the results into the final output.

37

mizmoxiev t1_j8j3u9v wrote

Dang that's neat as heck, I can't wait for that

3

ixid t1_j8i2a6l wrote

We can't be that far away from AIs that you can feed maths textbooks and then papers, just as you would a human.

8

endless_sea_of_stars t1_j8ik6a6 wrote

Meta released a paper about Toolformer (yeah, they probably need to workshop that name), which allows LLMs to call out to external APIs like a calculator. So instead of learning how to calculate a square root, the model would simply call a calculator.

This is a pretty big deal but hasn't gotten a lot of attention yet.
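The mechanism is simple to sketch: the model emits an inline marker like `[Calculator(...)]` in its output, and a post-processor executes the call and splices the result back in. The marker syntax below is illustrative, and the calculator is a whitelisted arithmetic evaluator standing in for a real API.

```python
import re

def run_calculator(expr: str) -> str:
    # Stand-in for a real calculator API behind a Toolformer-style call.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        return expr  # leave anything unrecognized untouched
    return str(eval(expr))  # the whitelist above restricts eval to arithmetic

def expand_tool_calls(text: str) -> str:
    # Replace each "[Calculator(...)]" marker the model emitted with the
    # result of actually running the call at inference time.
    return re.sub(r"\[Calculator\((.*?)\)\]",
                  lambda m: run_calculator(m.group(1)),
                  text)
```

So model output like `"4 plus 4 is [Calculator(4 + 4)]."` becomes `"4 plus 4 is 8."` after post-processing.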

7

semitope t1_j8ihbzl wrote

it's a little weird for AI to be bad at math

2

mattsowa t1_j8iqq5u wrote

Why?

5

semitope t1_j8jzisc wrote

because it's basic functionality for computers. And it's the easiest thing, because the solution is not ambiguous. But it sounds like it just isn't able to put the question into mathematical form as given.

−3

theucm t1_j8mobi7 wrote

"Its a little weird for a person to be bad at chemistry."

"Why?"

"Because it's a basic function of living things. "

3

semitope t1_j8n9mlm wrote

that's a completely wrong comparison. A person isn't born automatically able to do math. Computer processors all have arithmetic logic units.

−1

theucm t1_j8obg7s wrote

You missed my point, I think.

I'm saying that expecting a language model to be intrinsically good at math because it runs on a processor with arithmetic logic is like expecting a living thing to be good at chemistry because its brain "runs" on chemical and electrical impulses. The language model only has access to the knowledge it has been trained on, which apparently didn't include much math.

2

mattsowa t1_j8kwdrj wrote

I mean, if you know how it works, it isn't surprising at all, really.

I find the fact that it's a basic functionality for a computer irrelevant.

2

BirdLawyerPerson t1_j8ii7wd wrote

They're bad at word problems, which require recognizing that a math problem is being presented at all, then determining the right formula to apply and calculating what the answer should be.

2

CovertMonkey t1_j8izcxa wrote

The math model was probably trained on those Facebook math problems involving order of operations that everyone argues about

0