melodyze t1_j7j6h6t wrote

The LaMDA paper has some interesting side notes at the end about training the model to dynamically query a knowledge source for context at inference time and stitch the result back in, retrieving ground truth, which may also allow its state to change at runtime without requiring constant retraining.
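The shape of that loop is roughly: generate a draft, detect a factual claim, query the external knowledge source, and splice the retrieved fact back into the response. Here's a minimal toy sketch of that idea; the dict-backed "knowledge graph", the `[FACT]` placeholder convention, and the function names are all hypothetical stand-ins, not anything from the actual paper or system.

```python
from typing import Optional

# Toy stand-in for a real knowledge graph service.
KNOWLEDGE_GRAPH = {
    "eiffel tower height": "330 m",
    "everest height": "8,849 m",
}

def query_kg(query: str) -> Optional[str]:
    """Look up a fact; a real system would call out to a KG backend."""
    return KNOWLEDGE_GRAPH.get(query.lower())

def generate_with_grounding(draft: str, query: str) -> str:
    """Stitch the retrieved ground truth back into the model's draft.

    `draft` plays the role of the model's initial generation, with a
    placeholder where a factual claim belongs.
    """
    fact = query_kg(query)
    if fact is not None:
        return draft.replace("[FACT]", fact)
    return draft  # fall back to the ungrounded draft

answer = generate_with_grounding(
    draft="The Eiffel Tower is [FACT] tall.",
    query="Eiffel Tower height",
)
print(answer)  # The Eiffel Tower is 330 m tall.
```

The appeal is exactly what the comment describes: the facts live in the external store, so updating the store changes what the model says without retraining its weights.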

They are better positioned to deal with that problem than ChatGPT, as they already maintain what is almost certainly the world's most complete and well-maintained knowledge graph.

But yeah, while I doubt they have the confidence in it they would really want, I would be pretty shocked if their tool wasn't considerably better at not being wrong on factual claims.

1

melodyze t1_j1etmmu wrote

I'm pretty sure haus25, a 57-story skyscraper that has now been operational for a while, was a parking lot when I first saw the sign saying that Whole Foods was coming soon.

Or maybe it was a grass field. It was so long ago I don't remember exactly.

2