gurenkagurenda t1_j8voslg wrote
Reply to comment by TheBigFeIIa in ChatGPT is a robot con artist, and we’re suckers for trusting it by altmorty
Log probabilities are the actual output of the model (although what those probabilities directly mean once reinforcement learning is involved seems somewhat nebulous), and I wonder whether uncertainty about actual facts shows up as lower probabilities on the top-scoring tokens. If so, you could imagine encoding those scores in the output itself (hidden from the user), so that the model can keep track of its past uncertainty. With training, it might then learn to interpret what those low scores imply, anywhere from "I'm not sure I'm using this word correctly" to "this one piece might be mistaken" to "this one piece might be wrong, and if so, everything after it is wrong".
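To make that concrete, here's a toy sketch of what "encoding the scores in the output" could look like. Everything here is made up for illustration (the tokens, the log probabilities, and the confidence thresholds); a real implementation would pull per-token logprobs from whatever API the model exposes and would need the model to be trained on this annotated format:

```python
import math
from dataclasses import dataclass

# Hypothetical per-token output from a language model that exposes
# log probabilities for the tokens it sampled.
@dataclass
class TokenOut:
    token: str
    logprob: float  # natural-log probability assigned to the sampled token

def confidence_bucket(logprob: float) -> str:
    """Bucket a token's probability into a coarse confidence label.
    The 0.9 / 0.5 cutoffs are arbitrary, just for illustration."""
    p = math.exp(logprob)
    if p > 0.9:
        return "high"
    if p > 0.5:
        return "med"
    return "low"

def annotate(tokens: list[TokenOut]) -> str:
    """Interleave hidden confidence markers with the generated text, so a
    later pass (or the model itself, if trained on this format) can see
    where the generation was uncertain."""
    return "".join(f"{t.token}<{confidence_bucket(t.logprob)}>" for t in tokens)

# Made-up sample generation: note the low-probability year token.
sample = [
    TokenOut("The", -0.02),
    TokenOut(" Eiffel", -0.10),
    TokenOut(" Tower", -0.05),
    TokenOut(" was", -0.30),
    TokenOut(" completed", -0.50),
    TokenOut(" in", -0.15),
    TokenOut(" 1887", -2.40),  # low probability: a plausible sign of factual uncertainty
    TokenOut(".", -0.01),
]

print(annotate(sample))
# The<high> Eiffel<high> Tower<high> was<med> completed<med> in<med> 1887<low>.<high>
```

In this toy version the markers would be stripped before showing text to the user, but kept in the context window so that later tokens can condition on them, which is the "keep track of its past uncertainty" part.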