Submitted by ChipsAhoiMcCoy t3_119t9fn in singularity
Significant_Pea_9726 t1_j9oyrvf wrote
Reply to comment by Coderules in How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy
Right. There is no “100% accuracy”. And for every 1 question that has an easy “correct” answer, there are 1000 that are subject to at least a modicum of context and assumptions.
E.g., seemingly obvious geometry questions would have different answers depending on whether we are talking about Euclidean vs. non-Euclidean geometry.
And "Is Taiwan its own country?" cannot, in principle, have a 100% "correct" answer.
[deleted] t1_j9pkso7 wrote
It is absurd to say there will be a model with 100% accuracy.
The secret sauce is exactly that, right now, it will always give an answer no matter what, just like a human, instead of giving a probabilistic response.
It would have to give answers like there is:
60% probability of A
30% probability of B
10% probability of C
That is most likely what it is already doing internally, but then it just says the answer is A. When the answer is actually B, we say it is "hallucinating".
If you add an adjustable confidence threshold, then a model that is only, say, 61% sure could instead say it doesn't know the answer at all.
This is not going to be "solved" without ruining the main part of the magic trick. We want to believe it is superhuman when it says A, and we happen to be within the 60% of the time that the answer actually is A.
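The thresholding idea above can be sketched in a few lines. This is a hypothetical illustration, not anything a real chatbot exposes: `answer_or_abstain`, the candidate answers, and the threshold value are all invented for the example.

```python
def answer_or_abstain(scores: dict[str, float], threshold: float = 0.7) -> str:
    """Return the top-scoring answer, or abstain if its probability
    falls below the confidence threshold."""
    total = sum(scores.values())
    # Normalize raw scores into a probability distribution.
    probs = {answer: s / total for answer, s in scores.items()}
    best, p = max(probs.items(), key=lambda kv: kv[1])
    return best if p >= threshold else "I don't know"

# Using the 60/30/10 distribution from the comment above:
print(answer_or_abstain({"A": 0.6, "B": 0.3, "C": 0.1}, threshold=0.5))  # A
print(answer_or_abstain({"A": 0.6, "B": 0.3, "C": 0.1}, threshold=0.7))  # I don't know
```

With the threshold at 0.5 the model commits to A; raise it to 0.7 and the same 60%-confident model abstains instead, which is the trade-off being described.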
AsuhoChinami t1_j9rk7o7 wrote
It's not "absurd." Or at least, it isn't unless you misconstrue what's actually being said. Eliminating hallucinations and being perfect and omniscient aren't the same thing. It's not about being 100 percent perfect, but simply that the points for which it's docked won't be the results of hallucinations. Maybe it won't know an answer, and will say "I don't know." Maybe it will have an opinion on something subjective and debatable, but that opinion isn't a hallucination; it's simply a bad take, like a human being might have.
Coderules t1_j9ozb48 wrote
Right!
In the case of a better AI model, I'd really like to have AI ask me the prompts instead of the current design where I have to ask the "right" questions to trigger the desired response.
For example, I tell the AI, "I'm bored and want to read a book but not sure which one. Help me." Then it asks me a series of questions to narrow down to a selection of books I own or can acquire.
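That question-driven flow could be sketched as a simple filtering loop. Everything here is invented for illustration, including the tiny book list, the questions, and the `recommend` helper; a real assistant would generate the questions itself.

```python
# A toy "shelf" of available books, tagged with attributes the
# assistant can ask about.
BOOKS = [
    {"title": "Dune", "genre": "sci-fi", "length": "long"},
    {"title": "The Hobbit", "genre": "fantasy", "length": "medium"},
    {"title": "Project Hail Mary", "genre": "sci-fi", "length": "medium"},
]

def recommend(answers: dict[str, str]) -> list[str]:
    """Narrow the shelf using the user's answers to the assistant's questions."""
    matches = [b for b in BOOKS
               if all(b.get(key) == value for key, value in answers.items())]
    return [b["title"] for b in matches]

# The assistant asks "What genre?" and "How long a read?", then filters:
print(recommend({"genre": "sci-fi", "length": "medium"}))  # ['Project Hail Mary']
```

Each answered question shrinks the candidate set, which is the inverted interaction the comment is asking for: the AI drives the dialogue instead of the user hunting for the "right" prompt.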