Baron_Samedi_ t1_jdzjakg wrote

>LLM, wired like this... has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task.

However, memory-augmented LLMs may be able to do all of the above.
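
For what it's worth, the basic idea is simple enough to sketch in a few lines of Python: keep an external store of past exchanges and prepend the most relevant ones to each new prompt, so the model's behavior depends on accumulated history rather than the current input alone. Everything here is a hypothetical placeholder (`call_llm` stands in for any text-completion API, and the keyword-overlap retrieval is a toy substitute for embedding search), not any particular library's interface:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an API request).
    return f"(model response to {len(prompt)} chars of prompt)"

class MemoryAugmentedLLM:
    def __init__(self):
        self.memory = []  # past (input, output) pairs

    def retrieve(self, query, k=2):
        # Naive relevance: count words shared with the query.
        # A real system would use embedding similarity instead.
        def score(item):
            return len(set(query.lower().split()) & set(item[0].lower().split()))
        return sorted(self.memory, key=score, reverse=True)[:k]

    def ask(self, user_input):
        recalled = self.retrieve(user_input)
        context = "\n".join(f"Q: {q}\nA: {a}" for q, a in recalled)
        prompt = f"{context}\nQ: {user_input}\nA:"
        answer = call_llm(prompt)
        # Storing the exchange means identical inputs can yield
        # different prompts (and outputs) later on.
        self.memory.append((user_input, answer))
        return answer
```

Because the prompt now includes retrieved history, the same input no longer maps to the same output distribution once the memory has grown, which is exactly the limitation the quoted comment points at.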