
TheBigFeIIa t1_j8vb4qa wrote

An error being “sticky” is a great way to put it as far as the modeling goes. It gets at a more fundamental problem: the reward structure does not optimize for objective truth, but instead rewards plausible or pleasing responses that are not necessarily factual.
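To make that concrete: the reward model behind RLHF is typically trained on pairwise human preferences, roughly like the toy loss sketched below. Note that truth never appears in the objective, only which answer a rater preferred, so a fluent falsehood can score just as well as a correct answer.

```python
# Toy sketch of the pairwise (Bradley-Terry style) preference loss commonly
# used to train RLHF reward models. The only signal is which answer a human
# *preferred*; whether that answer is *true* never enters the objective.
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Maximize P(chosen beats rejected) = sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```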

I do wonder if there is any way to generate a confidence estimate alongside each answer, and to allow “I don’t know” as a valid response when confidence is low. In some cases a truthful acknowledgment of the lack of an answer may be more useful than a made-up one.
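One rough way to approximate this with an open model is to threshold the mean token log-probability of the generated answer and abstain below it. This is only a sketch: the model choice and the threshold of -2.5 are illustrative assumptions, and token likelihood is a crude proxy for factual confidence, since a fluent falsehood can still be high-probability.

```python
# A minimal sketch of confidence-gated answering, assuming a Hugging Face
# causal LM (GPT-2 as a stand-in) and an arbitrary log-prob threshold.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def answer_or_abstain(prompt: str, threshold: float = -2.5) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Per-token log-probabilities of the generated continuation.
    token_logprobs = model.compute_transition_scores(
        outputs.sequences, outputs.scores, normalize_logits=True
    )
    mean_logprob = token_logprobs[0].mean().item()
    if mean_logprob < threshold:
        return "I don't know."  # abstain when average confidence is low
    new_tokens = outputs.sequences[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```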

3

TheBigFeIIa t1_j8va9ol wrote

That pretty much hits the point of my original post. ChatGPT is a great tool if you already have an idea of what sort of answer to expect. It is not reliable at generating accurate and trustworthy answers to questions you don’t already know the answer to, especially if there are any consequences to being wrong. If you did not know that 2 + 2 = 4 and ChatGPT confidently told you the answer was √-1, you would now be in a pickle.

A corollary point: the clickbait and hype about ChatGPT replacing jobs such as programming is, at least in its current form, rather overstated. Generating code with ChatGPT requires a programmer to frame the problem and guide the AI in constructing the code, and then a trained programmer to evaluate the result and fix any implementation or interpretation errors in it.
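As a hypothetical illustration, `merge_intervals` below stands in for model-generated code; the programmer still has to supply the specification and the edge-case tests that confident-but-wrong generations tend to fail.

```python
# Hypothetical example: treat `merge_intervals` as if a model generated it.
# The human's job is the spec and the tests, not just accepting the output.
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# Edge cases a plausible-looking generation often misses: empty input,
# touching endpoints, and fully nested intervals.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [3, 5]]) == [[1, 5]]
assert merge_intervals([[1, 10], [2, 3]]) == [[1, 10]]
```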

6

TheBigFeIIa t1_j8v6w58 wrote

ChatGPT is able to give confident but completely false or misleading answers. It is up to the user to be smart enough to distinguish a plausible, likely true answer from a patently false one. You don’t need to know the exact answer, just the general target you are aiming for.

For example, if I asked a calculator to compute 2 + 2, I would probably not expect an answer of √-1.

11

TheBigFeIIa t1_j8v6by0 wrote

Ah, the forest has been missed for the trees; my original statement was not clear enough. ChatGPT is able to unintentionally lie to you because it is not aware of the possibility of its own fallibility.

The practical upshot is that it can generate a response that is confident but completely false, due to incomplete information or poor modeling. It is on the user to be smart enough to tell the difference.

12