bitRAKE t1_j6mj7s2 wrote
Reply to [Discussion] ChatGPT and language understanding benchmarks by mettle
Ask ChatGPT for an explanation of anything without a known correct answer, then tell it "that answer is incorrect". It will proceed to dream up a new answer: non-existent syntax for a programming language, for example. The sequential nature of the model means it can paint itself into a corner quite easily.
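To make the "sequential" point concrete: a decoder-only model picks each token conditioned on everything it has already emitted, and never goes back to revise an earlier token, so an early wrong commitment steers all later steps. A minimal greedy-decoding sketch with the Hugging Face transformers library ("gpt2" is just a stand-in model; real systems sample rather than always taking the argmax):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits          # scores for the next token, given the prefix
        next_id = logits[0, -1].argmax()    # commit to one token; it is never revised
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # the mistake joins the context
print(tokenizer.decode(ids[0]))
```

Once a bad token lands in `ids`, every later prediction is conditioned on it; the model can only continue in a way that is consistent with the mistake, not undo it.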
Isn't knowledge accuracy, to some degree, a by-product of modeling correct language use rather than the design goal of the system? A fantasy story is just as valid a use of language as a research paper. Accuracy seems to correlate with how the system is primed for the desired context.