[D] Will prompting the LLM to review its own answer help reduce the chance of hallucinations? I tested a couple of tricky questions and it seems it might work. Submitted by tamilupk t3_123b4f0 on March 27, 2023 at 4:19 AM in MachineLearning 30 comments 47
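The self-review idea in the post can be sketched as a simple two-pass loop: get a draft answer, then feed the draft back with an instruction to critique and correct it. A minimal sketch, assuming `ask` is any callable that maps a prompt string to a reply string (e.g. a thin wrapper around a chat-completion API); the function name and prompt wording here are illustrative, not from the post.

```python
def self_review(ask, question):
    """Two-pass prompting: draft, then ask the model to review its own draft.

    `ask` is a placeholder for an LLM call: str prompt -> str reply.
    """
    # Pass 1: get an initial answer.
    draft = ask(question)

    # Pass 2: ask the model to check its draft and emit a corrected answer.
    review_prompt = (
        "Question: " + question + "\n"
        "Draft answer: " + draft + "\n"
        "Review the draft above for factual errors or unsupported claims, "
        "then output a corrected final answer."
    )
    return ask(review_prompt)
```

Whether the second pass actually catches hallucinations depends on the model; it adds one extra API call per question.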
oimrqs t1_jduemcp wrote on March 27, 2023 at 7:33 AM In my mind this + plugins + modules (vision) is the next step. Am I crazy? 1