
ftc1234 t1_iwwxrcd wrote

Isn’t this like, Duh?!

All of deep learning, including LLMs, is about coming up with a nonlinear model that best fits the input data. Does that guarantee that: a) any output it generates is consistent with the actual input data (I don't mean the input distribution here), and b) it understands what's not said in the input data (e.g., that it doesn't have enough knowledge or training to answer the prompt accurately)?

At a high level, all LLMs do is model an input distribution, and you can sample it for interesting images and text. There are no guarantees that the output makes sense, and the AI community is not even close to developing techniques that limit generated output to only sensible answers (or throw an error when there is no good answer).
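To make the "sample the distribution" point concrete, here's a minimal sketch of what a single decoding step does (the vocabulary and logits are made up for illustration): turn the model's scores into a probability distribution and draw from it. Nothing in the loop checks whether the sampled token leads to a true statement.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from a softmax over the model's logits.

    This is all a decoding step does: convert scores to probabilities
    and draw. No step here verifies the result against reality.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token candidates after a prompt like "The capital of X is"
vocab = ["Paris", "London", "Berlin", "Banana"]
logits = [2.0, 1.0, 0.5, -1.0]            # made-up model scores

counts = {w: 0 for w in vocab}
for _ in range(1000):
    counts[vocab[sample_next_token(logits)]] += 1
print(counts)  # "Banana" still gets sampled sometimes; nothing filters it out
```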

And more importantly, given how easy it is to generate output, the real challenge is not to get lost in a world of simulation and to keep it real.

3