Submitted by ChipsAhoiMcCoy t3_119t9fn in singularity

I'm imagining a use case for video games that have loads and loads of wiki entries. Something like Terraria, for example. You almost permanently have to have the wiki open for that particular game, and it can be very annoying. I'm imagining a future where we can simply say something like "Hey GPT, I just killed this boss, what do I do next?" and immediately get an answer that's correct.


As it stands right now, while some AI can answer questions like this (I just tried Perplexity's Chrome extension, and it's pretty cool), they aren't factually accurate. I just asked Perplexity which quests in Act 1 of Path of Exile grant a Book of Skill, and it missed one of the quests, for example.


What do you guys think?

9

Comments


atchijov t1_j9o6zgn wrote

Never. You should not “blindly trust” anybody or anything. Everything has an agenda.

38

ChipsAhoiMcCoy OP t1_j9rbui5 wrote

I don’t really care if the AI has an agenda if it finds me a really nice Terraria build, though. Or tells me how to craft an iron pickaxe in Minecraft. I’m not exactly going to be forming political opinions through the use of a chatbot anytime soon, trust me.

6

rixtil41 t1_j9qdl5z wrote

Not in a probabilistic sense. But then you should "blindly trust" it, as it does more good than harm.

5

AsuhoChinami t1_j9o0i91 wrote

https://youtu.be/peHkL_MaxTU?t=1456

According to this, there's an algorithm or something that's been developed which improves accuracy to 100 percent, completely gets rid of hallucinations, and is going to start being tested by others soon.

7

Coderules t1_j9ot8sd wrote

I can see this for calculations and things with specific "correct" answers. But how will it handle the more "grey" area questions, where the answers are pretty subjective and, as we've already seen, dependent on the LLM's data at the time?

5

Significant_Pea_9726 t1_j9oyrvf wrote

Right. There is no “100% accuracy”. And for every 1 question that has an easy “correct” answer, there are 1000 that are subject to at least a modicum of context and assumptions.

E.g., seemingly obvious geometry questions would have different answers depending on whether we are talking about Euclidean vs. non-Euclidean geometry.

And “is Taiwan its own country?” cannot, in principle, have a 100% “correct” answer.

6

[deleted] t1_j9pkso7 wrote

It is absurd to say there will be a model with 100% accuracy.

The secret sauce is exactly that, right now, it will always give an answer no matter what, just like a human would, rather than giving a probabilistic response.

It would have to give answers like there is:

60% probability of A

30% probability of B

10% probability of C

That is most likely what it is already doing, but then just saying the answer is A. When the answer is actually B, we say it is "hallucinating".

If you add an adjustable threshold, then with the threshold set at just 61% it would say it doesn't know the answer at all.

This is not going to be "solved" without ruining the main part of the magic trick. We want to believe it is superhuman when it says A, and we happen to be within that 60% of the time that the answer is A.
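
A minimal sketch of that thresholding idea (the 60/30/10 split and the 61% cutoff are just the hypothetical numbers from this comment, not anything a real model exposes):

```python
# Minimal sketch of the abstention idea: commit to the top answer only
# when its probability clears an adjustable threshold, otherwise abstain.

def answer_or_abstain(answer_probs: dict[str, float], threshold: float = 0.61) -> str:
    """Return the most probable answer, or abstain below the threshold."""
    best_answer, best_prob = max(answer_probs.items(), key=lambda kv: kv[1])
    if best_prob >= threshold:
        return best_answer
    return "I don't know"

probs = {"A": 0.60, "B": 0.30, "C": 0.10}
print(answer_or_abstain(probs, threshold=0.50))  # prints "A"
print(answer_or_abstain(probs, threshold=0.61))  # prints "I don't know"
```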

4

AsuhoChinami t1_j9rk7o7 wrote

It's not "absurd." Or at least, it isn't unless you misconstrue what's actually being said. Eliminating hallucinations and being perfect and omniscient aren't the same thing. It's not about being 100 percent perfect, but simply that the points for which it's docked won't be the result of hallucinations. Maybe it won't know an answer and will say "I don't know." Maybe it will have an opinion on something subjective and debatable, but that opinion won't include hallucination; it will simply be a shit take like human beings might have.

2

Coderules t1_j9ozb48 wrote

Right!

In the case of a better AI model, I'd really like to have the AI ask me the prompts, instead of the current design where I have to ask the "right" questions to trigger the desired response.

For example, I post to the AI, "I'm bored and want to read a book but not sure which one. Help me." Then it asks me a series of questions to narrow down to some selection of available books I own or can acquire.
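
A toy sketch of that flipped interaction (the book list, attributes, and questions are all made up for illustration):

```python
# Toy sketch of the flipped interaction: the assistant asks the questions
# and filters a candidate list until one book remains.

BOOKS = [
    {"title": "Dune", "genre": "sci-fi", "length": "long"},
    {"title": "The Hobbit", "genre": "fantasy", "length": "medium"},
    {"title": "Project Hail Mary", "genre": "sci-fi", "length": "medium"},
]

def recommend(books: list[dict]) -> str:
    """Ask narrowing questions until one candidate remains."""
    for field, prompt in [("genre", "What genre are you in the mood for? "),
                          ("length", "Short, medium, or long? ")]:
        if len(books) <= 1:
            break
        choice = input(prompt).strip().lower()
        # Keep the filtered list only if the answer matched something.
        books = [b for b in books if b[field] == choice] or books
    return books[0]["title"]

print("You should read:", recommend(BOOKS))
```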

2

RiotNrrd2001 t1_j9ovc22 wrote

They will never be 100% accurate. They are like people: even the smartest person doesn't know everything, has blind spots, has been trained incorrectly, etc. AIs are no different. We can trust them the way we can trust people: perhaps eventually with a very high degree of confidence, but never with 100% blind trust.

6

Lawjarp2 t1_j9og5au wrote

You mean equivalent to being as trustworthy as a search engine? Probably 1-3 years.

5

Ortus14 t1_j9ogy9j wrote

People already do. I was talking to someone a few weeks ago online, and they cited ChatGPT in their argument. Of course, ChatGPT hallucinated half the facts.

People generally don't care about truth; they go with whatever sources are most convenient or entertaining and then trust those.

4

Professional-Song216 t1_j9okiu5 wrote

If we start blindly trusting anything, it better basically be God. We have a long way to go.

4

dayaz36 t1_j9opl12 wrote

I already do that

4

ShowerGrapes t1_j9oravs wrote

About as long as it will take us to blindly trust any human being.

2

Saeker- t1_j9pem0s wrote

Perhaps a secondary layer of checkbots might scour the results of the chatbots and offer some kind of feedback on accuracy.
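
A hedged sketch of what that second layer could look like; chatbot(), checkbot_score(), and the 0.8 cutoff are all hypothetical stubs, not a real API:

```python
# Hedged sketch of a "checkbot" layer: a second pass scores the chatbot's
# draft answer before it is shown. Both model functions below are stubs.

def chatbot(question: str) -> str:
    """Stub first-pass model."""
    return "The Eye of Cthulhu drops Demonite Ore."

def checkbot_score(question: str, answer: str) -> float:
    """Stub verifier: a real one would check the answer against trusted sources."""
    return 0.9

def answer_with_check(question: str, min_score: float = 0.8) -> str:
    draft = chatbot(question)
    score = checkbot_score(question, draft)
    if score >= min_score:
        return draft
    return f"Low confidence ({score:.2f}), please verify: {draft}"

print(answer_with_check("What does the Eye of Cthulhu drop in Terraria?"))
```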

2

MrLampwick t1_j9qyl1l wrote

Rn, that bitch is writing me all the code. I almost have everything to make my video game.

2

ChipsAhoiMcCoy OP t1_j9qzctg wrote

Even asking it simple questions nets some incorrect answers, though.

1

No_Ninja3309_NoNoYes t1_j9psywn wrote

It depends. If you have a fact with a certain probability p, it could have been confirmed n times. The standard error, if I remember correctly, is the square root of p(1-p)/n. So let's say p = 0.9; the margin of error is then sqrt(0.9 * 0.1 / n). You want n to be high to have a low standard error.
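
Plugging numbers into that formula (assuming it is the standard error of a proportion, sqrt(p(1-p)/n)) shows how quickly extra confirmations help:

```python
# Standard error of a proportion: sqrt(p * (1 - p) / n).
# With p = 0.9, more independent confirmations (larger n) shrink the error.
import math

p = 0.9
for n in (1, 10, 100, 1000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>4}: standard error ≈ {se:.4f}")
# n = 1: 0.3000, n = 10: 0.0949, n = 100: 0.0300, n = 1000: 0.0095
```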

1

PaperCruncher t1_j9q0wqt wrote

There are many requirements for factual question answering. As a start, it would need to find sources known to be reliable for the specific topic, or, if the question is more complex, for all the topics it references. Then it would need to retrieve the correct information from the possibly many pages of answers. It would need to pick which source to listen to when answers conflict, and if an answer is biased or highly subjective while the question is fact-reliant, it would need to either find another source or present all the biased answers side by side. Finally, it would have to rephrase the answer to a user-selected level of complexity (a doctor wouldn't hand you a paragraph from a technically worded research paper; they would make it understandable but still accurate enough).

How long all of this will take to be created, I don't know. Maybe it already has been, but not all put together. Anyway, I'm probably missing some steps.
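
As a loose illustration of those steps, here is a hedged sketch; the reliability table, the stub retriever, and the final rephrasing step are all invented for illustration:

```python
# Hedged sketch of the pipeline above: retrieve candidate answers, prefer
# the most reliable source for the topic, then adapt the phrasing.
# The scores, sources, and stub functions are all illustrative.

SOURCE_RELIABILITY = {
    ("terraria.wiki.gg", "terraria"): 0.95,
    ("random-blog.example", "terraria"): 0.40,
}

def retrieve(question: str, topic: str) -> list[dict]:
    """Stub retriever: a real one would search and extract candidate answers."""
    return [
        {"source": "terraria.wiki.gg", "answer": "Fight the Wall of Flesh next."},
        {"source": "random-blog.example", "answer": "Fight Plantera next."},
    ]

def answer(question: str, topic: str, reading_level: str = "casual") -> str:
    candidates = retrieve(question, topic)
    # Conflict resolution: prefer the most reliable source for this topic.
    best = max(candidates,
               key=lambda c: SOURCE_RELIABILITY.get((c["source"], topic), 0.0))
    # A real system would rephrase to the requested complexity level here.
    return f"({reading_level}) {best['answer']} [source: {best['source']}]"

print(answer("I just killed this boss, what do I do next?", "terraria"))
```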

1

psycmike t1_j9st5u1 wrote

The first thing is understanding the difference between objective truth and personal truth. Things like the length of a mile or a kilometer are objective truths. These types of truths cannot be tainted by political or other agendas.

1

celticlo t1_jaejsy8 wrote

If they can show that errors are completely eliminated and the chatbot provides accurate answers every time, people will accept it without question. That doesn't mean there won't be biased answers about things people disagree on.

1