
LifeScientist123 t1_jdvgzkx wrote

This doesn't even work on humans. Most people, when told they are wrong, will just double down on their mistaken beliefs.

1

tamilupk OP t1_jdvk3xs wrote

Yeah, humans tend to do that, but LLMs seem to be a bit better than humans at this. As someone replied to this post, even OpenAI used this kind of technique to reduce toxicity and hallucinations (a rough sketch of that self-check loop is below).

1
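For illustration, here is a minimal sketch of the kind of self-critique loop being discussed: the model answers once, then is asked to review and correct its own draft. It assumes the OpenAI Python SDK; the prompts and model name are illustrative assumptions, not the exact technique OpenAI used.

```python
# Minimal self-critique loop: get an answer, then ask the model to review
# and revise its own draft. Prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content


def answer_with_self_check(question: str) -> str:
    # First pass: produce an initial answer.
    draft = ask([{"role": "user", "content": question}])

    # Second pass: ask the model to critique and correct its own draft.
    review_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check the draft for factual errors or unsupported claims. "
        "If it is correct, repeat it; otherwise, return a corrected answer."
    )
    return ask([{"role": "user", "content": review_prompt}])


if __name__ == "__main__":
    print(answer_with_self_check("Who wrote 'On the Origin of Species'?"))
```

Whether the second pass actually fixes mistakes or just restates them is exactly the point being debated in this thread.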