
[deleted] t1_j9pkso7 wrote

It is absurd to say there will be a model with 100% accuracy.

The secret sauce is exactly that, right now, it will always give an answer no matter what, just like a human, rather than giving a probabilistic response.

It would have to give answers like there is:

60% probability of A

30% probability of B

10% probability of C

That is most likely what it is already doing internally; it just reports the answer as A. When the answer is actually B, we say it is "hallucinating".

If you add an adjustable confidence threshold, then with the threshold set at 61% it would say it doesn't know the answer at all.
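The abstention idea above can be sketched in a few lines. This is a hypothetical illustration, not how any real model is implemented: the `answer` function, the candidate names, and the probabilities are all invented for the example.

```python
def answer(probs, threshold=0.61):
    """Pick the most likely candidate, but abstain when its
    probability falls below an adjustable threshold.

    probs: dict mapping candidate answers to probabilities.
    """
    best = max(probs, key=probs.get)  # argmax over candidates
    if probs[best] < threshold:
        return "I don't know"
    return best

# The distribution from the comment above: 60% A, 30% B, 10% C.
candidates = {"A": 0.60, "B": 0.30, "C": 0.10}
print(answer(candidates, threshold=0.50))  # 0.60 clears 0.50 -> "A"
print(answer(candidates, threshold=0.61))  # 0.60 misses 0.61 -> "I don't know"
```

With the threshold below 60% the model confidently answers A; raise it past 60% and the same distribution produces an honest abstention instead.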

This is not going to be "solved" without ruining the main part of the magic trick. We want to believe it is superhuman when it says A, and we happen to fall within the 60% of cases where the answer actually is A.

4

AsuhoChinami t1_j9rk7o7 wrote

It's not "absurd." Or at least, it isn't unless you misconstrue what's actually being said. Eliminating hallucinations and being perfect and omniscient aren't the same thing. It's not about being 100 percent perfect; it's simply that the points for which it's docked won't be the result of hallucinations. Maybe it won't know an answer and will say "I don't know." Maybe it will have an opinion on something subjective and debatable, but that opinion doesn't include hallucination and is simply a shit take like human beings might have.

2