
SilentRunning OP t1_je2navi wrote

It is programmed to detect when some data is incorrect; it doesn't realize anything. Yet it can't correct the method that produced the incorrect data until a human fixes the program. Until that happens, it keeps returning incorrect results for the same prompts. This gives the impression that it is learning on its own, but that is far from the truth. Each version of GPT was updated by human coders; it hasn't learned anything on its own and is far from being able to.
