Comments


skeltzyboiii OP t1_j8m63ka wrote

TL;DR We show how an updateable, domain-specific memory (via an external knowledge base) can be added to GPT to perform question answering for products and chat agents. Some very humorous interactions arise when GPT is connected to an external knowledge base and forced to use irrelevant context in answering questions.
Article: https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmenting-gpt-with-marqo-for-fast-editable-memory-to-enable-context-aware-question-answering
Code: https://github.com/marqo-ai/marqo/tree/mainline/examples/GPT-examples
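
For anyone who wants the gist without reading the article: a minimal sketch of the retrieve-then-prompt pattern described above, assuming the Marqo Python client on its default local endpoint and the pre-1.0 OpenAI SDK; the index name, the "text" field, and the model choice are illustrative assumptions, not taken from the repo.

```python
import marqo
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Marqo running locally acts as the external, editable knowledge base.
mq = marqo.Client(url="http://localhost:8882")

def answer_with_context(question: str) -> str:
    # 1. Retrieve the most relevant documents for the question.
    #    "product-manuals" and the "text" field are hypothetical names.
    results = mq.index("product-manuals").search(question, limit=3)
    context = "\n".join(hit["text"] for hit in results["hits"])

    # 2. Inject the retrieved context into the prompt so GPT answers from it.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200
    )
    return response["choices"][0]["text"].strip()
```

Updating the memory is then just adding or deleting documents in the Marqo index; no fine-tuning of GPT is needed.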

3

tensor_searcher t1_j8mbbb9 wrote

Using a scalable vector database like Marqo as memory injected into GPT-like LLMs seems to be the way to go. The Bing use case shows that setting these systems up haphazardly can lead to blatantly false results. How do these systems prevent that kind of failure? This would be an important problem to solve for these new tools, like LangChain.
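
One common mitigation (not claimed to be what the article does) is to constrain the prompt so the model may only answer from the retrieved context and otherwise refuses. A rough sketch:

```python
# A guarded prompt template: the model is told to refuse rather than guess
# when the retrieved context does not contain the answer.
GUARDED_PROMPT = (
    "Answer the question using only the context below. "
    "If the context does not contain the answer, reply exactly with "
    "\"I don't know\".\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)

def build_prompt(context: str, question: str) -> str:
    return GUARDED_PROMPT.format(context=context, question=question)
```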

4

TheLoneKid t1_j8mhy7z wrote

Is the frontend Streamlit? It looks like Streamlit.

1

HiPattern t1_j8naxdx wrote

Very nice! What runs in the Docker service?

1

extracoffeeplease t1_j8nqdl8 wrote

So IIUC, this searches text first, then adds that to the prompt as input to the LLM. For the text search, why use vector search and not Elasticsearch, or both? The reason I'm asking is that I've seen vector search issues pop up when your data is uncommon and hence badly embedded, for example when searching for a unique name or a weird token (for example, P5.22.a.03), whereas classic text search can find that exact token.
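
A rough sketch of that hybrid idea, assuming Marqo's LEXICAL search method alongside its default tensor (vector) search; the index name and the naive merge are illustrative only:

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

def hybrid_search(query: str, limit: int = 5):
    # Tensor (vector) search: good for semantic matches.
    semantic = mq.index("product-manuals").search(query, limit=limit)
    # Lexical search: good for rare exact tokens like "P5.22.a.03".
    lexical = mq.index("product-manuals").search(
        query, search_method="LEXICAL", limit=limit
    )

    # Naive merge: de-duplicate by document id, keeping lexical hits first
    # so exact-token matches are not drowned out by semantic neighbours.
    seen, merged = set(), []
    for hit in lexical["hits"] + semantic["hits"]:
        if hit["_id"] not in seen:
            seen.add(hit["_id"])
            merged.append(hit)
    return merged[:limit]
```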

1

buttertoastey t1_j8x1gij wrote

Is the UI included in the source code? I can't seem to find it, just product_q_n_a.py, which runs in the command line.

1