
Emory_C t1_j4nfrmc wrote

Quite a long time, maybe. One of the key limitations of GPT-4, as well as other language models in its class, is the context window.

In the case of GPT-3, the context window is approximately 2,048 tokens. This means that when generating text, GPT-3 can only consider the 2,048 tokens immediately preceding the point of generation — anything earlier is effectively invisible to the model. This makes it difficult to maintain coherence and consistency across longer texts, such as a book.
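To make the limitation concrete, here's a minimal sketch of how a fixed context window discards earlier text. It uses whitespace-split stand-in "tokens" rather than a real subword tokenizer, and the `truncate_to_context` helper is purely illustrative, not an actual API:

```python
def truncate_to_context(tokens, window=2048):
    """Keep only the most recent `window` tokens.

    Everything before this slice is invisible to the model
    when it predicts the next token.
    """
    return tokens[-window:]

# A "book" of 10,000 tokens: only the last 2,048 survive.
book = [f"tok{i}" for i in range(10_000)]
visible = truncate_to_context(book)

print(len(visible))  # 2048
print(visible[0])    # tok7952 -- the first 7,952 tokens are dropped
```

So by the time the model is writing chapter ten, chapter one has long since scrolled out of the window.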

When GPT-4 is developed, it'll likely still be limited by the context window. Researchers would need to develop new architectures that can take a much larger context into account. This is complex as hell, and it's not clear whether it'll even be possible given current hardware limitations.
