
liqui_date_me t1_jdr7pnr wrote

Tough to say; probably in 10-20 years at the very least. Modern LLMs are transformers, which are architected to spend a fixed amount of compute predicting each next token, no matter how hard the question is. Unless we get a radically different neural network architecture, it's unlikely we'll ever get GPT to perform math calculations exactly.
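A toy sketch of the "fixed compute per token" point (the hidden size and update rule here are made up for illustration, not an actual transformer): each generated token costs one forward pass of constant size, so emitting the digits of a huge product gets no more compute than emitting small talk.

```python
# Toy illustration: every generated token costs the same fixed-size pass.
# d and the update rule are arbitrary stand-ins, not a real model.
import numpy as np

d = 16  # hypothetical hidden size
W = np.random.default_rng(0).normal(size=(d, d))

def next_token_pass(hidden_state):
    # One fixed-size matrix multiply per token: the work is constant in
    # the difficulty of the arithmetic being asked about.
    return np.tanh(W @ hidden_state)

h = np.ones(d)
for _ in range(5):  # generating 5 tokens = exactly 5 equal-cost passes
    h = next_token_pass(h)
```

Exact n-digit multiplication, by contrast, needs work that grows with n, which is the mismatch the comment is pointing at.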

2

sdmat t1_jdut7jg wrote

Or just go with a workable hack for calculation like the Wolfram plugin.

Does it matter if the model isn't doing it natively if it understands how and when to use the tool? How often do we multiply large numbers unaided?
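The tool-use pattern being described can be sketched roughly like this (the trigger heuristic and `calculator` are hypothetical stand-ins; the real Wolfram plugin is an external API, and a real model decides for itself when to call out):

```python
# Hedged sketch of tool delegation: the "model" recognizes a calculation
# and hands it to an exact tool instead of doing arithmetic natively.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str):
    """Exactly evaluate a plain arithmetic expression via its AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Crude stand-in for "understands when to use the tool":
    # route anything with digits to the calculator.
    if any(ch.isdigit() for ch in question):
        expr = question.rstrip("?").split("is", 1)[-1].strip()
        return str(calculator(expr))
    return "(model answers directly)"

print(answer("What is 123456789 * 987654321?"))
```

The point of the comment survives the hack: the answer is exact because the tool is exact, even though the model never multiplies anything itself.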

1