tensor_searcher t1_j8mbbb9 wrote

A scalable vector database like Marqo, used as external memory injected into GPT-like LLMs, seems to be the way to go. The Bing use case shows that setting these systems up haphazardly can lead to blatantly false answers. How do these systems prevent that kind of failure? This would be an important problem to solve for these new tools, like LangChain.
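The "vector DB as memory" pattern being described is basically retrieval-augmented prompting: embed the query, find the nearest stored passages, and inject them into the prompt as grounding context. Here's a minimal toy sketch of that loop in plain Python; the bag-of-words embedding and `ToyVectorStore` are stand-ins I made up for illustration, not how Marqo or LangChain actually implement it (they use learned embeddings and ANN indexes):

```python
# Toy retrieval-augmented prompting sketch. Everything here is
# illustrative: real systems use learned embedding models and
# approximate nearest-neighbor indexes instead of this.
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for an embedding model: lowercase bag-of-words counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self):
        self.docs = []  # list of (embedding, original text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def search(self, query, k=2):
        # Rank stored passages by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query, store):
    # Inject retrieved passages as grounding context, so the LLM is
    # asked to answer from them rather than free-associate.
    context = "\n".join(f"- {p}" for p in store.search(query))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

store = ToyVectorStore()
store.add("Marqo is a tensor search engine for text and images.")
store.add("Bing chat combines web search results with a GPT model.")
store.add("Bananas are rich in potassium.")
print(build_prompt("What is Marqo?", store))
```

The guardrail against blatantly false answers lives in that last step: the prompt instructs the model to stick to retrieved context, which helps but doesn't fully prevent hallucination on its own.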

4