Submitted by RadioFreeAmerika t3_122ilav in singularity
As stated in the title, I can't understand why math seems so hard for LLMs.
In many senses, math is a language. Large LANGUAGE Models are tailored to languages.
Even if LLMs don't "understand" math, when they are trained on enough data stating that 2+2=4, they should be able to predict that "4" follows "2+2=" with overwhelming probability.
Furthermore, all math problems can be expressed in language and vice versa, so if "2+2=4" is hard, "two plus two equals four" shouldn't be. LLMs should even be able to pick up mathematical logic from stories: the SEVEN Dwarfs, "TWENTY-EIGHT Days Later", "Tom and Ida are going to the market to buy apples. Tom buys two green apples and Ida buys three red apples. How many apples do they have? What do you think, kids? Let me tell you, the answer is five; they have five apples.", and so on.
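The prediction-from-frequency idea above can be sketched with a toy model. This is not how a real LLM works (those learn distributions over subword tokens with a neural network); it is just a minimal character-level counting predictor over a made-up corpus, to illustrate what "after '2+2=' comes '4' with overwhelming probability" means:

```python
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus"; the fact 2+2=4 appears repeatedly.
corpus = [
    "2+2=4", "2+2=4", "2+2=4",
    "1+1=2", "3+3=6",
]

# Count which character follows each prefix (context) in the corpus.
counts = defaultdict(Counter)
for text in corpus:
    for i in range(1, len(text)):
        context, nxt = text[:i], text[i]
        counts[context][nxt] += 1

def predict(context):
    """Return the most frequent next character and its probability."""
    c = counts[context]
    char, n = c.most_common(1)[0]
    return char, n / sum(c.values())

print(predict("2+2="))  # → ('4', 1.0)
```

On this corpus the predictor does output "4" with probability 1.0, which is exactly the memorization the post describes. The catch is that it only covers contexts seen verbatim in training, so it says nothing about prompts like "2+3=".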
I am no expert on the issue, but from a lay perspective, I just don't get it.
21_MushroomCupcakes t1_jdqdh4k wrote
We're kinda language models and we're often bad with math, and they didn't grow up having to spear a gazelle.