
adventuringraw t1_jdw6enx wrote

You're right that there isn't a system yet that has the power of an LLM without the risk of hallucinated 'facts' woven in, but I don't think it's fair to say 'we're a long ways from that'. There's a ton of research going into different ways to approach this problem; approaches involving a tool-using LLM seem likely to work even in the relatively short term (production models in the next few years, say), and that's only one approach.

I certainly don't think it's a /given/ that this problem will be solved soon, so I wouldn't bet money that you're wrong about it taking a long time to get right. But I also wouldn't bet money that you're right, given all the progress being made on multiple fronts towards solving this, given the increasingly intense focus by so many researchers and companies on this problem, and especially given that solutions like this are both promising and seemingly realistic. After all, if there's a sub-system that detects when an arXiv search should be used to verify a reference before presenting it, you could at least eliminate hallucinated citations in this narrow area. The downside then might just be an incomplete overview of available papers, but it would keep any false papers out of what the user sees.
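To make that concrete, here's a rough sketch of the kind of check I mean, using the public arXiv API. To be clear, this isn't any particular product's implementation; the function names and the crude title-matching heuristic are just mine for illustration, and a real system would want fuzzier matching plus author/year/ID checks:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API feed


def arxiv_titles(query: str, max_results: int = 5) -> list[str]:
    """Return titles of the top arXiv hits for a free-text query."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
        {"search_query": f'all:"{query}"', "max_results": max_results}
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        feed = ET.fromstring(resp.read())
    # Collapse whitespace in each <entry><title> from the Atom feed
    return [
        " ".join(entry.findtext(ATOM + "title", "").split())
        for entry in feed.iter(ATOM + "entry")
    ]


def filter_hallucinated(cited_titles: list[str]) -> list[str]:
    """Keep only citations whose title matches a real arXiv record."""
    verified = []
    for title in cited_titles:
        hits = arxiv_titles(title)
        # Naive substring match either way; good enough to show the idea
        if any(title.lower() in hit.lower() or hit.lower() in title.lower()
               for hit in hits):
            verified.append(title)
    return verified
```

So a paper the model made up simply never comes back from the search and gets dropped before the user sees it; the worst case is a real paper getting filtered out because the title match was too strict.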

All that said, this only fixes formal citations with a somewhat bespoke system. Fixing ALL inaccurate facts probably won't be possible with even dozens of 'tools'... that'll take more like what you're thinking, I imagine: something more like a truly general learned knowledge graph embedded as a system component. I know there's work on that too, but when THAT's fully solved (like, TRULY solved, where modular elements of the world can be inferred from raw sensory data, and facts accumulated about their nature from interaction and written content) we'll be a lot closer to something that's arguably AGI, so... yeah. I think you're right about that being a fair ways away at least (hopefully).
