
psdwizzard t1_jde850w wrote

A memory plugin would be amazing. It would allow it to learn.

1

ghostfaceschiller t1_jdekpke wrote

Trivially easy to build using the embeddings API; there are already a bunch of third-party tools that give you this. I’d be surprised if it doesn’t exist as one of the default tools within a week of the initial rollout.

EDIT: OK yeah, it does already exist as part of the initial rollout - https://github.com/openai/chatgpt-retrieval-plugin#memory-feature

14

willer t1_jdgps4b wrote

I read through the docs, and in this release ChatGPT only calls the /query API. So you can't implement long-term memory of your chats yourself, since it won't send your messages and the responses to this service. Your retrieval API acts, in effect, as a read-only memory store of external memories, like a document library.
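If you wanted writes anyway, you'd have to mirror your conversation turns into the plugin yourself. A rough sketch (the /upsert route and document payload follow the chatgpt-retrieval-plugin README; the URL and bearer token are placeholders for your own deployment):

```python
# Sketch: manually persisting chat turns via the retrieval plugin's /upsert
# endpoint, since ChatGPT itself will only ever call /query.
import requests

PLUGIN_URL = "http://localhost:8000"   # wherever you host the retrieval plugin
BEARER = "your-plugin-bearer-token"    # placeholder token

def save_turn(user_msg: str, assistant_msg: str) -> None:
    """Store one conversation turn as a document the plugin can later query."""
    resp = requests.post(
        f"{PLUGIN_URL}/upsert",
        headers={"Authorization": f"Bearer {BEARER}"},
        json={"documents": [
            {"text": f"user: {user_msg}\nassistant: {assistant_msg}"}
        ]},
    )
    resp.raise_for_status()
```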

3

ghostfaceschiller t1_jdgrba9 wrote

Fr??? Wow, what an insane oversight.

Or I guess maybe they don’t wanna rack up all the extra embeddings calls, bc I assume like 100% of users would turn that feature on

1

BigDoooer t1_jdele0p wrote

I’m not familiar with these. Can you give the name/location of one to check out?

2

ghostfaceschiller t1_jdenoo2 wrote

Here's a standalone product that is a chatbot with a memory. But look at LangChain for several ways to implement the same thing.

The basic idea is: periodically feed your conversation history to the embeddings API and save the embeddings to a local vectorstore, which is the "long-term memory". Then, any time you send a message or question to the bot, first send that message to the embeddings API (super cheap and fast), run a local similarity comparison, and prepend any relevant contextual info ("memories") to your prompt as it gets sent to the bot.
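A minimal sketch of that loop (assumptions on my part: the pre-1.0 `openai` Python interface, `text-embedding-ada-002` and `gpt-3.5-turbo` as the models, and a plain in-memory list standing in for a real vectorstore):

```python
import numpy as np
import openai  # assumes OPENAI_API_KEY is set in the environment

EMBED_MODEL = "text-embedding-ada-002"

# Toy "vectorstore": parallel lists of memory texts and their embeddings.
memory_texts: list[str] = []
memory_vectors: list[np.ndarray] = []

def embed(text: str) -> np.ndarray:
    """Embed a string via the embeddings API."""
    resp = openai.Embedding.create(model=EMBED_MODEL, input=text)
    return np.array(resp["data"][0]["embedding"])

def remember(text: str) -> None:
    """Call periodically with chunks of conversation history."""
    memory_texts.append(text)
    memory_vectors.append(embed(text))

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored memories most similar to the query (cosine)."""
    if not memory_texts:
        return []
    q = embed(query)
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in memory_vectors]
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    return [memory_texts[i] for i in top]

def ask(user_message: str) -> str:
    """Prepend relevant memories to the prompt, then call the chat model."""
    context = "\n".join(recall(user_message))
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Relevant prior context:\n{context}"},
            {"role": "user", "content": user_message},
        ],
    )
    answer = resp.choices[0].message.content
    remember(f"user: {user_message}\nassistant: {answer}")
    return answer
```

In practice you'd swap the lists for a real vectorstore and chunk/batch the history, but the shape of the loop is the same.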

14

xt-89 t1_jdessgl wrote

This also opens the door to a lot of complex algorithms for retrieving the correct memories
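For instance (my own toy example, not anything the plugin ships): weight matches by recency as well as similarity, so fresher memories win ties. The weights and half-life here are made up:

```python
import math
import time

def score(similarity: float, stored_at: float,
          half_life_hours: float = 24.0) -> float:
    """Score a memory by cosine similarity plus an exponential recency decay."""
    age_hours = (time.time() - stored_at) / 3600.0
    recency = math.exp(-math.log(2) * age_hours / half_life_hours)
    return 0.7 * similarity + 0.3 * recency
```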

7