
abriec t1_jc34zx3 wrote

Given the constant evolution of information over time, combining LLMs with retrieval and reasoning modules is the way forward imo

14

currentscurrents t1_jc4ev00 wrote

This is (somewhat) how the brain works; language and knowledge/reasoning are in separate structures and you can lose one without the other.

2

visarga t1_jc3wlib wrote

I'll give you a simple solution: run GPT-3 and LLaMA in parallel; if they concur, then you can be sure they have not hallucinated the response. Two completely different LLMs would not hallucinate in the same way.
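A minimal sketch of that two-model agreement check, assuming hypothetical wrapper functions `ask_gpt3` and `ask_llama` around whatever inference endpoints you actually have, and using naive normalized string matching as the agreement test (which, as the reply below notes, is too crude in practice):

```python
# Sketch of a two-model agreement check (hypothetical wrappers, naive matching).

def ask_gpt3(prompt: str) -> str:
    """Placeholder: call your GPT-3 endpoint here."""
    raise NotImplementedError

def ask_llama(prompt: str) -> str:
    """Placeholder: call your LLaMA inference server here."""
    raise NotImplementedError

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so trivial formatting differences don't count.
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def concurrent_answer(prompt: str) -> str | None:
    """Return the answer only if both models agree, else None (possible hallucination)."""
    a, b = ask_gpt3(prompt), ask_llama(prompt)
    return a if normalize(a) == normalize(b) else None
```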

−7

LessPoliticalAccount t1_jc4umir wrote

  1. Sure they could
  2. I imagine you'd have lots of situations where the probability of concurring, even between truthful responses, would be close to zero, so this wouldn't be a useful metric. Questions like "name some exotic birds that are edible, but not commonly eaten" could have thousands of valid answers, so we wouldn't expect truthful responses to concur. Even for simpler questions, concurrence likely won't be verbatim, so how do you calculate whether or not responses have concurred? Presumably you need to train another model for that, and that model will have some nonzero error rate, and so on (see the sketch below).
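To make the non-verbatim point concrete, here is a small sketch contrasting exact string matching with an embedding-based similarity check; the sentence-transformers model name and the 0.8 cutoff are illustrative assumptions, not anything from this thread:

```python
# Two truthful answers to "What is the capital of Australia?" that never match verbatim.
from sentence_transformers import SentenceTransformer, util

answer_a = "The capital of Australia is Canberra."
answer_b = "Canberra is Australia's capital city."

print(answer_a == answer_b)  # False: exact matching says the models "disagree"

# An embedding model judges them as concurring, but it is itself a learned model
# with its own error rate, which is exactly the objection above.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice of encoder
emb_a, emb_b = encoder.encode([answer_a, answer_b], convert_to_tensor=True)
print(util.cos_sim(emb_a, emb_b).item() > 0.8)  # typically True for close paraphrases
```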
5

visarga t1_jc5teq6 wrote

Then we only need to use the second model for strict fact-checking, not creative responses. Since entailment is a common NLP task, I'm sure any LLM can solve it out of the box, with its own error rate of course.
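A rough sketch of that kind of entailment check, using an off-the-shelf NLI model rather than a full LLM; the `roberta-large-mnli` choice and the 0.8 threshold are my own illustrative assumptions:

```python
# Sketch: decide whether two answers "concur" via mutual entailment with an NLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entails(premise: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """Return True if the NLI model thinks `premise` entails `hypothesis`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Look up the "entailment" class index from the model config instead of hardcoding it.
    entail_idx = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
    return probs[entail_idx].item() >= threshold

def concur(answer_a: str, answer_b: str) -> bool:
    # For factual questions, treat the answers as concurring only if each entails the other.
    return entails(answer_a, answer_b) and entails(answer_b, answer_a)
```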

1