TheBigFeIIa t1_j8vb4qa wrote
Reply to comment by gurenkagurenda in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
An error being “sticky” is a great way to put it as far as the modeling goes. It gets at a more fundamental problem: the reward structure does not optimize for objective truth, but instead rewards plausible or pleasing responses that are not necessarily factual.
I do wonder if there is any way to generate a confidence estimate alongside each answer, and to allow “I don’t know” as a valid response when confidence is low. In some cases a truthful acknowledgement that it has no answer would be more useful than a made-up response.
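To make the idea concrete, here is a minimal sketch of that kind of abstention, assuming you can read the per-token logits out of the model (the function name, the geometric-mean confidence measure, and the 0.7 threshold are all my own illustrative choices, not anything ChatGPT actually exposes):

```python
import math

def softmax(logits):
    """Convert a vector of raw logits to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confident_answer(answer_text, token_logits, chosen_ids, threshold=0.7):
    """Return answer_text only if the geometric mean of the chosen
    tokens' probabilities clears the threshold; otherwise abstain.

    token_logits: one logit vector per generated token
    chosen_ids:   index of the token actually sampled at each step
    """
    log_conf = 0.0
    for logits, chosen in zip(token_logits, chosen_ids):
        probs = softmax(logits)
        log_conf += math.log(probs[chosen])
    confidence = math.exp(log_conf / len(chosen_ids))
    if confidence < threshold:
        return "I don't know.", confidence
    return answer_text, confidence
```

A sharply peaked distribution at every step yields an answer; a flat (uncertain) distribution triggers the abstention. The hard part in practice is that a model can be confidently wrong, so token-level probability is only a proxy for factual accuracy, which is exactly the problem discussed above.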
TheBigFeIIa t1_j8va9ol wrote
Reply to comment by 5m0k37r3353v3ryd4y in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Pretty much hits the point of my original post. ChatGPT is a great tool if you already have an idea of what sort of answer to expect. It is not reliable for generating accurate and trustworthy answers to questions you don’t know the answer to, especially if there are consequences to being wrong. If you did not know 2+2 = 4 and ChatGPT confidently told you the answer was √-1, you would now be in a pickle.
A corollary point: the clickbait and hype over ChatGPT replacing jobs like programmers is, at least in its current form, rather overstated. Generating code with ChatGPT requires a programmer to frame and guide the AI in constructing the code, and then a trained programmer to evaluate the validity of the output and fix any implementation or interpretation errors in it.
TheBigFeIIa t1_j8v6w58 wrote
Reply to comment by 5m0k37r3353v3ryd4y in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
ChatGPT is able to give confident but completely false or misleading answers. It is up to the user to be smart enough to distinguish a plausible, likely true answer from a patently false one. You don’t need to know the exact answer, but rather the general target you are aiming for.
For example, if I asked a calculator for 2+2, I would not expect an answer of √-1.
TheBigFeIIa t1_j8v6by0 wrote
Reply to comment by gurenkagurenda in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Ah, the forest has been missed for the trees; my original statement was not clear enough. ChatGPT can unintentionally lie to you because it is not aware that it can be wrong.
The practical upshot is that it can generate a response that is confident but completely false, due to incomplete information or poor modeling. It is on the user to be smart enough to tell the difference.
TheBigFeIIa t1_j8t3aml wrote
ChatGPT does not recognize the concept of being false. It is a great tool, somewhat analogous to a calculator for math but in natural language. However, you have to be smarter than your tools and know what answer you should be getting.
TheBigFeIIa t1_it9vf3l wrote
Reply to comment by danielv123 in 8K Industry Faces Challenge with New EU Regulatory Ruling by SalmonellaTizz
Hardly surprising given decades of fuzzy analog TV
TheBigFeIIa t1_j8wajxv wrote
Reply to comment by theoxygenthief in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Since the AI is not itself intelligent, it would depend on the reward structure of the model and the data set used to train it.