royalsail321 t1_jdqq7yo wrote

If these LLMs become properly trained in mathematical logic, it may make them more capable of other kinds of reasoning as well

2

FoniksMunkee t1_jdqtjhv wrote

This opinion is not shared by Microsoft. In their paper discussing the performance of GPT-4, they referred to its inability to solve some simple maths problems. They commented:

"We believe that the issue constitutes a more profound limitation."

They say: "...it seems that the autoregressive nature of the model which forces it to solve problems in a sequential fashion sometimes poses a more profound difficulty that cannot be remedied simply by instructing the model to find a step by step solution" and "In short, the problem ... can be summarized as the model’s “lack of ability to plan ahead”."
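As a rough illustration of the sequential generation the paper is describing (this sketch is not from the paper, and the toy bigram "model" is purely hypothetical): an autoregressive decoder emits one token at a time, conditioned only on the prefix produced so far, so once a token is committed there is no mechanism to revise it or to plan several steps ahead.

```python
# Minimal sketch of greedy autoregressive decoding, assuming a toy
# bigram lookup table in place of a real language model.
toy_bigram_model = {
    "2+2=": "4",
    "4": "<eos>",
}

def generate(prompt: str, max_steps: int = 10) -> str:
    output = prompt
    last = prompt
    for _ in range(max_steps):
        # Each next token depends only on what has already been emitted;
        # there is no lookahead and no way to revisit earlier tokens.
        next_token = toy_bigram_model.get(last, "<eos>")
        if next_token == "<eos>":
            break
        output += next_token
        last = next_token
    return output

print(generate("2+2="))  # "2+2=4" -- every step is committed to immediately
```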

They went on to say that more training data will help, but will likely not solve the problem, and noted in passing that a different architecture has been proposed that could address it. That architecture, however, is not an LLM.

So yes, if you solve the problem, the model will be better at reasoning in all cases. But the problem is that LLMs work in a way that makes that pretty difficult.

4