
harharveryfunny t1_je50vw9 wrote

There's no indication that I've seen that it maintains any internal state from one generated word to the next. Therefore the only way it can build upon its own "thoughts" is by generating "step-by-step" output that is fed back into it. Its own output seems to be its only working memory, at least for now (GPT-4), although that's an obvious area for improvement.
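
To make that concrete, here's a minimal sketch of greedy autoregressive decoding in Python, assuming a hypothetical `model` callable that maps a token sequence to next-token logits (the names are illustrative, not GPT-4's actual API). The only thing carried from one step to the next is the growing context, i.e. the model's own output:

```python
import numpy as np

def generate(model, prompt_tokens, max_new_tokens=50, eos_id=0):
    context = list(prompt_tokens)          # the full visible context
    for _ in range(max_new_tokens):
        logits = model(context)            # no hidden state survives between
        next_id = int(np.argmax(logits))   # steps; the model only sees `context`
        if next_id == eos_id:
            break
        context.append(next_id)            # generated token is fed back in:
                                           # the output becomes working memory
    return context
```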

7

visarga t1_je6a6w8 wrote

> its own output is its only working memory

All the impressive feats LLMs can pull off come down to context conditioning: everything in the prompt, including the model's own prior output, shapes what it generates next.
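
As a rough illustration (reusing the hypothetical `generate` and `model` from the sketch above, plus an assumed `tokenize` helper), the same model can be steered toward step-by-step reasoning purely by what's placed in its context:

```python
base_question = "What is 17 * 24?"

# Plain prompt: the model answers directly, with no intermediate steps.
plain = generate(model, tokenize(base_question))

# Chain-of-thought conditioning: the added instruction sits in the context,
# so every later token is generated conditioned on it.
cot = generate(model, tokenize("Let's work this out step by step.\n" + base_question))
```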

1