
RadioFreeAmerika OP t1_jduhe5e wrote

Thanks for your reply! And what an interesting use case you present. I hadn't thought about generating example data for courses yet, but it makes total sense. I'd just have to check for inconsistencies in the maths, I guess. And after playing around with it some more yesterday evening, the model seems to have improved in that regard over the last few days.


RadioFreeAmerika OP t1_jduh0w6 wrote

Hmm, is it valid to draw the inverse conclusion from this in the following way: LLMs have problems with maths that requires multistep processing. Some humans are also bad at maths. In conclusion, can these humans be assumed to also struggle with, or lack, multistep processing?


RadioFreeAmerika t1_jdrezc7 wrote

Could be. I asked in another post about LLMs and maths capabilities, and it seems that LLMs would profit greatly from the capability to do internal simulations. LLMs can't do this currently, and people commented that in the Microsoft paper, the authors state that (current?) LLMs are conceptually unable to do more than linear processing of a single sequence. Possible workarounds are plug-ins or neuro-symbolic AI models.

Nevertheless, maybe our reality is just the internal simulation of an ASI's prompt response. Who knows? Wouldn't that be ironic?

Your second question is an eons-long discussion and greatly depends on how you define god.


RadioFreeAmerika OP t1_jdqky02 wrote

Ah, okay, thanks. I have to look more into this vector-number representation.

For the chatbot thing, why can't the LLM generate a non-displayed output, "test" it, and try again until it is confident it is right, and only then display it? Ideally with a time counter that at some point just makes it display what it has, with a qualifier. Or, if confidence is still very low, just state that it doesn't know.
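Something like this rough sketch is what I have in mind (`generate` and `estimate_confidence` are made-up stand-ins for the model call and whatever hidden checking step would actually be used; the threshold and time budget are arbitrary):

```python
import time

CONFIDENCE_THRESHOLD = 0.9  # arbitrary cutoff for "confident it is right"
TIME_BUDGET_SECONDS = 10.0  # the "time counter" from above

def answer_with_self_check(prompt, generate, estimate_confidence):
    """Draft an answer, test it, and retry until confident or out of time.

    `generate` and `estimate_confidence` are hypothetical stand-ins for
    the model call and the hidden verification step.
    """
    deadline = time.monotonic() + TIME_BUDGET_SECONDS
    best_draft, best_conf = None, 0.0

    while time.monotonic() < deadline:
        draft = generate(prompt)                   # non-displayed output
        conf = estimate_confidence(prompt, draft)  # "test" it
        if conf > best_conf:
            best_draft, best_conf = draft, conf
        if conf >= CONFIDENCE_THRESHOLD:
            return draft                           # confident: display it

    if best_conf < 0.5:
        return "I don't know."                     # confidence still very low
    return f"(low confidence) {best_draft}"        # display with a qualifier
```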


RadioFreeAmerika OP t1_jdqix38 wrote

Thanks! I will play around with maths questions solely expressed in language. What I wonder about, however, are not the complex questions but the simple ones, for which incorrect replies are quite common, too.

From the response, it seems that, while some problems are inherent to LLMs, most can and most probably will be addressed in future releases.

Number 1 just needs more mathematical data in the training data.

Number 2 could be addressed by processing the output a second time before displaying it, or alternatively by running it through another plugin (rough sketch below). Ideally, the processable sequence length would be increased. Non-linear sequence processing might also be an option, but I have no insights into that.
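As a sketch of that second pass (with `llm` standing in for any text-in/text-out model call, nothing from a real API):

```python
def two_pass_answer(prompt, llm):
    """Run the model's own draft through it a second time before display.

    `llm` is a hypothetical text-in/text-out model call.
    """
    draft = llm(prompt)
    review = (
        f"Question: {prompt}\n"
        f"Proposed answer: {draft}\n"
        "Check the working step by step and return a corrected answer."
    )
    return llm(review)
```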

Number 3 shouldn't be a problem for most everyday maths, depending on the definition of "precise". Just cut off after two decimal places, for example (see below). For maths that is useful in professional settings, it will be a problem, though.
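In Python, for instance, that cut-off is a one-liner (just to illustrate; note that rounding and truncating differ slightly):

```python
x = 2 / 3                  # 0.6666666666666666

print(round(x, 2))         # 0.67 -- rounds to two decimal places
print(int(x * 100) / 100)  # 0.66 -- truly "cuts off" after two places
```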

Number 4 gets into the hard stuff. I have nothing to offer here besides using more specialized plugins.

Number 5 can easily be addressed. Even without plugins, the model can identify and fix code errors (at least sometimes, in my experience). This seems kinda similar to fixing errors in "mathematical code".

Number 6 is a bit strange to me. Just translate the symbolic notation into the internal working language of the LLM, "solve" it in natural-language space, and retranslate it into symbolic notation (rough sketch below). Otherwise, use image recognition. If GPT-4 could recognize that a VGA plug doesn't fit into a smartphone and regarded this as a joke, it should be able to identify meaning in symbolic notation.
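A rough sketch of that round-trip, with `llm` again a made-up stand-in for any model call:

```python
def solve_symbolic(expression, llm):
    """Translate notation to words, solve in natural-language space,
    and translate back. `llm` is a hypothetical text-in/text-out call.
    """
    verbal = llm(f"Restate this formula in plain English: {expression}")
    solution = llm(f"Solve this step by step: {verbal}")
    return llm(f"Write only the final result in standard notation: {solution}")
```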

Besides all that, I now want a "childlike" AI that I can train until it has "grown up", the student becomes the master, and it can help me better understand things.


RadioFreeAmerika t1_jdqbplc wrote

There are easy solutions: tax automation, or significantly increase corporate taxes.

Use the money to pay for a UBI and work-substitution offers (a Star-Trek-like research and exploration agency, "playgrounds" for adults (e.g. tech garages), community meet-up areas, voluntary work opportunities (e.g. taking animals for a walk, preparing and offering food, tutoring, ...), etc.).

With all the improvements in AI, the only thing that stands between us and utopia is society.
