RadioFreeAmerika OP t1_jduhe5e wrote
Reply to comment by KGL-DIRECT in Why is maths so hard for LLMs? by RadioFreeAmerika
Thanks for your reply! And what an interesting use case you present. I hadn't thought about generating example data for courses yet, but it makes total sense. You'd just have to check it for inconsistencies with the maths, I guess. And after playing around with it some more yesterday evening, the model seems to have improved in that regard over the last few days.
RadioFreeAmerika OP t1_jduh0w6 wrote
Reply to comment by Borrowedshorts in Why is maths so hard for LLMs? by RadioFreeAmerika
Hmm, is it valid to draw the inverse conclusion from this, along the following lines: LLMs have problems with maths that requires multistep processes. Some humans are also bad at maths. Therefore, can these humans be assumed to also have problems with, or to be lacking, multistep processes?
RadioFreeAmerika OP t1_jdufzz4 wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
But even at $60/h, this might already be profitable if it replaces a job with a higher hourly wage, lawyers for example. At $14.40/h, you beat minimum wage. For toying around, yeah, that's a bit expensive.
RadioFreeAmerika OP t1_jdrfevx wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
They named it Bard. What did you expect? ;-)
Do you have access to GPT-4? I have only played around with the public version on OpenAI, and when prompted, it didn't even know about GPT-4 specifically.
RadioFreeAmerika t1_jdrezc7 wrote
Reply to comment by often_says_nice in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Could be. I asked in another post about LLMs and maths capabilities, and it seems that LLMs would profit greatly from the capability to do internal simulations. They can't do this currently, and people commented that in the Microsoft paper, the authors state that (current?) LLMs are conceptually unable to do more than linear processing of a single sequence. Possible workarounds are plug-ins or neuro-symbolic AI models.
Nevertheless, maybe our reality is just the internal simulation of an ASI's prompt response. Who knows? Wouldn't that be ironic?
Your second question is an eons-long discussion and greatly depends on how you define god.
RadioFreeAmerika OP t1_jdrcxqw wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
Which LLM is this from? Maybe it has improved in the last few days. A few days ago, similar queries didn't work for me with ChatGPT or Bing.
RadioFreeAmerika OP t1_jdrayae wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
I am always friendly to it. But your results would support the theory that it is better at "two+two" than "2+2".
RadioFreeAmerika OP t1_jdr9di9 wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
Thanks, I guess.
RadioFreeAmerika OP t1_jdr81hd wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
Why LLMs poor maths?
RadioFreeAmerika OP t1_jdr6zub wrote
Reply to comment by No_Ninja3309_NoNoYes in Why is maths so hard for LLMs? by RadioFreeAmerika
Looking forward to neurosymbolic AI then.
RadioFreeAmerika OP t1_jdr46f0 wrote
Reply to comment by throwawaydthrowawayd in Why is maths so hard for LLMs? by RadioFreeAmerika
Very insightful! Seems like even without groundbreaking stuff, more efficient hardware will likely make the solutions you mentioned more feasible in the future.
RadioFreeAmerika OP t1_jdr3b6j wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
Thank you very much for the clarification! Do you know if it is possible to make an LLM with more working space and greater complexity than O(1), or how that could be added to GPT-4, with or without plug-ins?
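If I understand the usual workaround correctly, you can let the model spend more tokens instead: a chain-of-thought or scratchpad prompt turns the fixed per-token computation into many serial reasoning steps before the answer. A minimal sketch of the two prompt styles (the multiplication example is just my own illustration):

```python
question = "What is 487 * 36?"

# Direct prompt: the model must emit the result with only a fixed
# amount of computation per output token.
direct_prompt = question

# Scratchpad prompt: requesting intermediate steps lets the model
# spend many extra tokens (and thus extra forward passes) before
# committing to the final answer.
scratchpad_prompt = (
    question
    + "\nWork step by step, writing each partial product on its own "
      "line, then give the final answer on the last line."
)

print(scratchpad_prompt)
```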
RadioFreeAmerika OP t1_jdr2woq wrote
Reply to comment by inigid in Why is maths so hard for LLMs? by RadioFreeAmerika
On the one hand, while we read one Wikipedia page, the AI could train on all information on multiplication. On the other hand, yes, we might need a dataset for maths.
RadioFreeAmerika OP t1_jdr25uz wrote
Reply to comment by FoniksMunkee in Why is maths so hard for LLMs? by RadioFreeAmerika
So plugins I guess? Or completely integrating another model?
RadioFreeAmerika OP t1_jdr091l wrote
Reply to comment by Personal_Problems_99 in Why is maths so hard for LLMs? by RadioFreeAmerika
Why LLMs not do two plus two?
RadioFreeAmerika OP t1_jdqnm1k wrote
Reply to comment by ecnecn in Why is maths so hard for LLMs? by RadioFreeAmerika
Hmm, now I'm interested in what would happen if you integrate the training sets before training, have some kind of parallel or two-step training process, or somehow merge two differently trained or constructed AIs.
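The crudest version of the merging idea I can imagine is straight weight averaging of two checkpoints with identical architecture (so-called "model soups"). A minimal PyTorch sketch; the checkpoint file names are hypothetical, and all entries are assumed to be float tensors of matching shape:

```python
import torch

def average_state_dicts(sd_a, sd_b, alpha=0.5):
    """Interpolate two state dicts with identical keys and shapes."""
    return {key: alpha * sd_a[key] + (1 - alpha) * sd_b[key]
            for key in sd_a}

# Hypothetical checkpoint files; both models must share one architecture.
sd_text = torch.load("model_trained_on_text.pt")
sd_math = torch.load("model_trained_on_maths.pt")

merged = average_state_dicts(sd_text, sd_math, alpha=0.5)
torch.save(merged, "merged_model.pt")
```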
RadioFreeAmerika OP t1_jdqlcsd wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
I also don't think it is a weakness of the model, just a current limitation that I didn't expect, given my quite limited knowledge of LLMs. I'm trying to gain some more insight.
RadioFreeAmerika OP t1_jdqky02 wrote
Reply to comment by throwawaydthrowawayd in Why is maths so hard for LLMs? by RadioFreeAmerika
Ah, okay, thanks. I have to look more into this vector-number representation.
For the chatbot thing, why can't the LLM generate a non-displayed output, "test" it, and try again until it is confident it's right, and only then display it? Ideally with a time counter that at some point makes it display whatever it has, with a qualifier. Or, if the confidence is still very low, it could just state that it doesn't know.
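Roughly what I have in mind, as a minimal sketch; `generate` and `confidence` are hypothetical stand-ins for an LLM call and a self-evaluation call, not any real API:

```python
import time

def answer_with_hidden_drafts(prompt, generate, confidence,
                              threshold=0.9, floor=0.2,
                              budget_seconds=10.0):
    """Draft answers internally; only display one once it looks good."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, 0.0

    while time.monotonic() < deadline:
        draft = generate(prompt)           # non-displayed attempt
        score = confidence(prompt, draft)  # "test it"
        if score > best_score:
            best, best_score = draft, score
        if best_score >= threshold:
            return best                    # confident enough: display it

    if best is None or best_score < floor:
        return "I don't know."             # confidence still very low
    # Time is up: display the best draft, but with a qualifier.
    return f"(I'm not fully sure, but) {best}"
```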
RadioFreeAmerika OP t1_jdqix38 wrote
Reply to comment by Surur in Why is maths so hard for LLMs? by RadioFreeAmerika
Thanks! I will play around with maths questions expressed solely in language. What puzzles me, however, is not the complex questions but the simple ones, for which incorrect replies are quite common too.
From the response, it seems that while some problems are inherent to LLMs, most can and most probably will be addressed in future releases.
Number 1 just needs more mathematical data in the training data.
Number 2 could be addressed by processing the output a second time before it is displayed, or alternatively by running it through another plugin. Ideally, the processed sequence length would be increased. Non-linear sequence processing might also be an option, but I have no insights into that.
Number 3 shouldn't be a problem for most everyday maths problems, depending on the definition of "precise"; just cut off after two decimal places, for example. For maths used in professional settings, it will be a problem, though.
Number 4 gets into the hard stuff. I have nothing to offer here besides using more specialized plugins.
Number 5 can easily be addressed. Even without plugins, the model can identify and fix code errors (at least sometimes, in my experience), and this seems quite similar to fixing errors in "mathematical code".
Number 6 is a bit strange to me. Just translate the symbolic notation into the internal working language of the LLM, "solve" it in natural-language space, and retranslate the result into symbolic notation (a rough sketch follows below). Otherwise, use image recognition. If GPT-4 could recognize that a VGA plug doesn't fit into a smartphone and regard this as a joke, it should be able to identify meaning in symbolic notation.
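Something like this round trip, with a hypothetical `llm` completion function (prompt in, text out) standing in for the model:

```python
def solve_symbolic(notation, llm):
    """Translate symbolic maths to words, solve, translate back.

    `llm` is a hypothetical single-turn completion function;
    it is not a real library call.
    """
    words = llm(f"Restate this without symbols, in plain English: {notation}")
    solved = llm(f"Solve this, reasoning step by step in plain English: {words}")
    return llm(f"Rewrite only the final result in standard "
               f"mathematical notation: {solved}")
```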
Besides all that, now I want a "childlike" AI that I can train until it has "grown up" and the student becomes the master, able to help me better understand things.
RadioFreeAmerika OP t1_jdqgvof wrote
Reply to comment by turnip_burrito in Why is maths so hard for LLMs? by RadioFreeAmerika
There's something to it, but then they currently still fail at the simplest maths questions from time to time. So far, I haven't gotten a single LLM to correctly write me a sentence with exactly eight words in it on the first try. Most get it right on the second try, though.
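Since "exactly eight words" is trivially checkable in code, a thin wrapper could do the second try automatically. A minimal sketch, again with a hypothetical `generate` function (prompt in, text out):

```python
def sentence_with_n_words(generate, n=8, max_tries=5):
    """Retry until the model produces a sentence of exactly n words.

    Unlike a fuzzy confidence score, this constraint is deterministic,
    so the wrapper can verify it without asking the model.
    """
    prompt = f"Write a single sentence containing exactly {n} words."
    for _ in range(max_tries):
        sentence = generate(prompt)
        if len(sentence.split()) == n:
            return sentence
    return None  # give up after max_tries
```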
RadioFreeAmerika OP t1_jdqecil wrote
Reply to comment by 21_MushroomCupcakes in Why is maths so hard for LLMs? by RadioFreeAmerika
Yeah, but we can't be trained on all the maths books and all the texts involving mathematical logic, and from there develop a model that lets us do maths by predicting the next word or sign.
RadioFreeAmerika t1_jdqd3cw wrote
Reply to Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
Let's assume that our reality is an ancestor simulation. Maybe conducted by an artificial superintelligence. What would be the most interesting parts of history to simulate? Many would argue this to be the time up to the inception of the ASI.
RadioFreeAmerika t1_jdqbplc wrote
Reply to Taxes in A.I dominated labour market by Newhereeeeee
There are easy solutions: Tax automation or significantly increased corporate taxes.
Use the money to pay for a UBI and work-substitution offers (a Star-Trek-like research and exploration agency, "playgrounds" for adults (e.g. tech garages), community meet-up areas, voluntary work opportunities (e.g. taking animals for a walk, preparing and offering food, tutoring, ...), etc.).
With all the improvements in AI, the only thing that stands between us and utopia is society.
RadioFreeAmerika OP t1_jduhkmz wrote
Reply to comment by Independent-Ant-4678 in Why is maths so hard for LLMs? by RadioFreeAmerika
Interesting, just voiced the same thought in a reply to another comment. I can totally see this being the case in one way or another.