
Borrowedshorts t1_jds6tbd wrote

Math is hard for people too, and I don't think GPT-4 is worse than the average person when it comes to math. In many cases, math requires abstract multi-step processing, which is something LLMs typically aren't trained on. If these models were trained on processes rather than just content, they'd likely be able to go through the steps required to perform mathematical operations. Even without specific training, LLMs are starting to pick up the ability to perform multi-step calculations, but we're obviously not all the way there yet.

2

RadioFreeAmerika OP t1_jduh0w6 wrote

Hmm, is it valid to draw the inverse conclusion from this in the following way: LLMs have problems with maths that requires multi-step processing. Some humans are also bad at maths. Therefore, can those humans be assumed to also have problems with, or lack, multi-step processing?

1