Submitted by [deleted] t3_11tmu9u in MachineLearning
NotARedditUser3 t1_jcjsqta wrote
Reply to comment by Available_Lion_652 in [D] GPT-4 is really dumb by [deleted]
All language models are currently trash at math. It's not an issue of training material; it's a core flaw in how they function.
People have found some success in getting reasonable outputs from language models using input-output chains, breaking the task up into smaller increments. It's still possible to hallucinate, though. I saw one really good article explaining that even tool-assisted chains (where the model prints a special token in one output that calls a function in a PowerShell or Python script, whose result then appears in the next input, so the correct answer can be generated later on) can fail: when the 'trusted' tool returns a funny, unexpected number in the input, the language model sometimes still disregards it, if it's drastically farther off from what the model's own training would lead it to expect the answer to look like.
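To make that concrete, here's a minimal sketch of that kind of tool loop. The CALC() convention, the calculator_tool, and the generate stub are all made up for illustration, not any particular framework's API:

```python
import re

def calculator_tool(expression: str) -> str:
    """The 'trusted' tool: evaluates plain arithmetic exactly."""
    # Only allow digits and basic operators before eval'ing.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))

def run_chain(prompt: str, generate) -> str:
    """Loop: run the model, execute any tool call it emits, feed the result back."""
    transcript = prompt
    while True:
        output = generate(transcript)             # one model step (generate is a stub)
        match = re.search(r"CALC\((.+?)\)", output)
        if not match:
            return output                         # no tool call left -> final answer
        result = calculator_tool(match.group(1))
        # The tool result just becomes more input text; nothing forces the
        # model to trust it over its own priors, which is the failure mode above.
        transcript += f"\n{output}\nTOOL RESULT: {result}\n"
```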
Which also makes sense: the way a language model works, as we all know, it's just calculating which words look appropriate next to each other. Or tokens, to be more exact. The model very likely doesn't see much of a difference between 123,456,789 and 123,684,849; both probably evaluate to roughly the same score when it's looking for answers to a math question, in that both score far higher than some wildly different answer such as... 4.
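You can actually see this with OpenAI's tiktoken library (cl100k_base is the encoding GPT-4 uses); a quick sketch:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4

for text in ["123,456,789", "123,684,849", "4"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")

# Big numbers come out as a handful of multi-digit chunks, so two very
# different values can look similar token-wise. The model only ever scores
# sequences of chunks; it never compares numeric magnitudes.
```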
yumiko14 t1_jcju8tw wrote
link to that article please
Available_Lion_652 t1_jcjuxp5 wrote
It's not an article. Someone on Twitter estimated the total compute based on a report that Microsoft had 25k A100 GPUs. That was all.
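For what it's worth, the arithmetic behind that kind of estimate is simple. A sketch, where only the GPU count comes from the comment above; the throughput is the public A100 spec and the utilization figure is my own assumption:

```python
# Back-of-envelope compute estimate. Only the GPU count comes from the
# comment above; the rest are public specs / assumed values.
num_gpus = 25_000            # "25k A100" figure quoted above
peak_tflops_bf16 = 312       # NVIDIA A100 peak BF16 throughput per GPU
utilization = 0.40           # assumed fraction of peak actually sustained

sustained_flops = num_gpus * peak_tflops_bf16 * 1e12 * utilization
print(f"~{sustained_flops:.1e} FLOP/s sustained")  # ~3.1e18, a few exaFLOP/s
```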
NotARedditUser3 t1_jckne7y wrote
He wasn't talking to you, dingus
Available_Lion_652 t1_jckrfwd wrote
I don't understand why you insulted me. I really tried to write a post about a case where GPT-4 hallucinates, with all good intentions, but I guess you have to be a smartass.
Available_Lion_652 t1_jcjt3yi wrote
The tokenizer of LLaMA, from Facebook, splits numbers into individual digits so that the model is better at arithmetic. The question I asked the model involves more than adding or subtracting numbers: the model must understand what a perfect cube is, which it does, but it must also not hallucinate when reasoning, which is where it fails.
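For comparison, the perfect-cube check itself is trivial for a trusted tool. A minimal sketch of what a tool-assisted setup could delegate instead of asking the model to reason it out (illustrative code, not from the post):

```python
def is_perfect_cube(n: int) -> bool:
    """Exact integer check: is n equal to k**3 for some integer k?"""
    if n < 0:
        return is_perfect_cube(-n)
    root = round(n ** (1 / 3))
    # Float cube roots can land one off, so verify the neighbors exactly.
    return any((root + d) ** 3 == n for d in (-1, 0, 1))

print(is_perfect_cube(1_860_867))  # True: 123**3
print(is_perfect_cube(1_860_868))  # False
```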
kaoD t1_jcjvsmo wrote
Looks like you don't understand the comment you're replying to.
Available_Lion_652 t1_jcjw5cf wrote
I understood the post really well; my comment was meant to add to it. I think you're the one who didn't understand what I said.