
ArcticWinterZzZ t1_jdt0plo wrote

GPT-4 always spends the same, fixed amount of computation on each token it outputs. Multiplying arbitrarily long numbers, however, takes an amount of work that grows with the length of the operands, more than any fixed per-token budget can cover. So an LLM like GPT-4 cannot "grow" the internal structures it would need to compute multiplication "instantly", in a single step. There are probably quite a few more problems like this, which is why chain-of-thought prompting can be so powerful: writing out the intermediate steps spreads the work across many tokens.
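
To make that concrete, here's a rough Python sketch (purely illustrative, nothing to do with GPT-4's internals): schoolbook long multiplication of two n-digit numbers takes on the order of n² single-digit steps, so the work grows with input size, while a single forward pass is a fixed computation. Chain-of-thought effectively lets the model emit those intermediate steps one token at a time.

```python
def schoolbook_multiply(a: int, b: int) -> tuple[int, int]:
    """Multiply a and b by summing digit-by-digit partial products,
    counting how many single-digit steps the schoolbook method needs."""
    steps = 0
    product = 0
    for i, da in enumerate(reversed(str(a))):      # least-significant digit first
        for j, db in enumerate(reversed(str(b))):
            steps += 1                             # one elementary digit multiply
            product += int(da) * int(db) * 10 ** (i + j)
    return product, steps

# The step count grows quadratically with operand length,
# while a single forward pass is a fixed amount of compute.
for n in (2, 4, 8, 16):
    x = int("9" * n)
    p, steps = schoolbook_multiply(x, x)
    assert p == x * x
    print(f"{n}-digit operands -> {steps} digit-level steps")
```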

3