
Daos-Lies t1_j4zpwjr wrote

This is just a suspicion, but I think it's a matter of embedding the conversation so far and using that embedding as an additional input alongside your most recent question (which is classic recurrence, really).
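
Concretely, something like this. It's only a toy sketch of my suspicion: embed() and generate() are made-up stand-ins, not anything OpenAI has described.

```python
def embed(text: str) -> list[float]:
    # Hypothetical stand-in for an embedding call; returns a dummy
    # fixed-size vector so the sketch actually runs.
    return [float(len(text) % 7)] * 8

def generate(question: str, context_embedding: list[float]) -> str:
    # Hypothetical stand-in for the model call, conditioned on the latest
    # question plus a vector summary of the conversation so far.
    return f"(reply to {question!r} given a {len(context_embedding)}-dim context)"

class EmbeddingChat:
    def __init__(self):
        self.history = []
        self.context_embedding = embed("")  # start from an empty conversation

    def turn(self, question: str) -> str:
        # Answer using the latest question *and* the embedded history.
        reply = generate(question, self.context_embedding)
        self.history += [("user", question), ("assistant", reply)]
        # Recurrence: fold the updated transcript back into one vector
        # that conditions the next turn.
        transcript = "\n".join(f"{role}: {text}" for role, text in self.history)
        self.context_embedding = embed(transcript)
        return reply
```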

I'm fairly confident the mechanism is something along those lines, because they made a relatively big fuss about their new embedding service around the same time ChatGPT was released (though obviously that didn't get as much attention as ChatGPT itself).

(And in response to u/DaLameLama asking if ChatGPT goes past the token limit: yes, it definitely can go past 8000 tokens; I have had some very, very long conversations with it.)

21

IntelArtiGen t1_j4zr3iq wrote

Yeah, that's also what I would say; I doubt it's anything revolutionary, since it's likely not necessary. It might be an innovative use of embeddings of a conversation, but I wouldn't call that "revolutionary".

They probably don't use only one embedding for the whole conversation; perhaps they use one embedding per prompt and/or keep some recent tokens in memory.
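
Sketching that variant (still just a guess at the mechanism; embed() and generate() below are hypothetical helpers): compress each exchange into its own embedding once it leaves a small verbatim window, and keep the last few turns as raw text.

```python
def embed(text: str) -> list[float]:
    # Hypothetical embedding call; dummy vector so the sketch runs.
    return [float(len(text) % 7)] * 8

def generate(question: str, past_embeddings: list[list[float]], recent: list[tuple[str, str]]) -> str:
    # Hypothetical model call: conditioned on one embedding per older exchange
    # plus the most recent exchanges verbatim.
    return f"(reply to {question!r}; {len(past_embeddings)} embedded turns, {len(recent)} verbatim)"

class HybridMemoryChat:
    def __init__(self, verbatim_turns: int = 4):
        self.turn_embeddings = []   # one vector per older exchange
        self.recent = []            # last few exchanges kept as raw text
        self.verbatim_turns = verbatim_turns

    def turn(self, question: str) -> str:
        reply = generate(question, self.turn_embeddings, self.recent)
        self.recent.append((question, reply))
        # Once an exchange falls out of the verbatim window, compress it.
        while len(self.recent) > self.verbatim_turns:
            old_q, old_r = self.recent.pop(0)
            self.turn_embeddings.append(embed(f"user: {old_q}\nassistant: {old_r}"))
        return reply
```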

1

MysteryInc152 t1_j50pw6e wrote

With embeddings, it should theoretically not have a hard limit at all. But the experiments here suggest a sliding context window of 8096 tokens:

https://mobile.twitter.com/goodside/status/1598874674204618753?t=70_OKsoGYAx8MY38ydXMAA&s=19
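
That is, something like the toy sketch below: only the most recent ~8096 tokens of the transcript go into the next prompt, and anything older silently drops out of view. (The 8096 figure comes from the linked experiments, not from OpenAI, and the whitespace split is just a stand-in for a real tokenizer.)

```python
WINDOW = 8096  # token budget suggested by the linked experiments (unconfirmed)

def build_prompt(history: list[str], new_message: str) -> str:
    # Crude "tokenization" by whitespace, purely for illustration.
    tokens: list[str] = []
    for message in history + [new_message]:
        tokens.extend(message.split())
    kept = tokens[-WINDOW:]  # everything earlier falls out of the window
    return " ".join(kept)
```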

6

Daos-Lies t1_j50vdq9 wrote

That is indeed fair enough.

Big fan of the concept of screaming at it until it forgets ;)

And I suppose it's very possible that, in my 'v long conversations with it', the topic of the conversation repeated at some stage (which I'm sure it did at points), and that could have fooled me into thinking it was remembering things from right at the start.

2