
TheBigFeIIa t1_j8v6w58 wrote

ChatGPT is able to give confident but completely false or misleading answers. It is up to the user to be smart enough to distinguish a plausible, likely true answer from a patently false one. You don’t need to know the exact answer, just the general target you are aiming for.

For example, if I asked a calculator to calculate 2+2, I would probably not expect an answer of √-1
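A minimal sketch of that kind of ballpark check in Python (the `plausible` helper and its bounds are invented purely for illustration):

```python
# Hypothetical sanity check: you don't need the exact answer,
# only enough sense of the target to reject nonsense outright.

def plausible(value, low=3, high=5):
    # Reject anything that is not a real number in the expected range.
    return isinstance(value, (int, float)) and low <= value <= high

print(plausible(4))    # True:  2 + 2 = 4 is within the ballpark
print(plausible(1j))   # False: √-1 fails the sanity check immediately
```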

11

5m0k37r3353v3ryd4y t1_j8v89kd wrote

Agreed.

But again, to be fair, in your example we already know the answer to 2 + 2. Those unfamiliar with imaginary numbers might not know whether to expect a radical sign over a negative integer in a response.

So, having a ballpark is good, but if you truly don’t know what type of answer to expect, Google can still be your friend.

3

TheBigFeIIa t1_j8va9ol wrote

Pretty much hit the point of my original post. ChatGPT is a great tool if you already have an idea of what sort of answer to expect. It is not reliable in generating accurate and trustworthy answers to questions that you don’t know the answer to, especially if there are any consequences to being wrong. If you did not know 2+2 = 4 and ChatGPT confidently told you the answer was √-1, you would now be in a pickle.

A sort of corollary point to this: the clickbait and hype over ChatGPT replacing jobs like programming is, at least in its current form, rather overstated. Generating code with ChatGPT requires a programmer to frame and guide the AI in constructing the code, and then a trained programmer to evaluate the validity of the generated code and fix any implementation or interpretation errors in it.
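As a minimal illustration (a made-up example, not one from this thread), here is the kind of subtle slip a reviewer has to catch in generated code, along with the fix and test a programmer would add:

```python
# A generated draft of an averaging function might plausibly read:
#
#     def average(numbers):
#         return sum(numbers) / len(numbers)
#
# which looks correct but crashes with ZeroDivisionError on an empty
# list. The reviewing programmer adds the guard and a test:

def average(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not numbers:
        raise ValueError("average() requires a non-empty list")
    return sum(numbers) / len(numbers)

assert average([1, 2, 3]) == 2.0
```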

6

majnuker t1_j8varna wrote

Yes, but the difference here, argumentatively, is that for soft intelligence such as language and facts, determining what is absolutely correct can be much harder, and people's instinct for what is correct can be very off base.

Conversely, we understand numbers, units, etc. well enough. But I suppose the analogy also works in a different way: most people no longer understand quadratic equations or advanced proofs, but most people also don't normally reach for a calculator to do those.

Meanwhile, we often need to look up soft-intelligence information and rely on its accuracy, while most people lack the knowledge necessary to easily spot a problem with the answer.

So it's sort of two sides of the same coin: human fallibility and our reliance on knowledge-based tools.

1

theoxygenthief t1_j8vv7c0 wrote

Yeah, that’s fine for questions with clear, simple, nuance-free answers. But integrated with search engines for complex questions? That seems like a dangerous idea to me. If I asked an AI-enhanced search engine whether vaccines cause autism, is it going to give more weight to studies with sound methodologies?

1

TheBigFeIIa t1_j8wajxv wrote

Since the AI is not itself intelligent, it would depend on the reward structure of the model and the data set used to train it.
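A toy sketch of that dependence (the feature names and weights below are entirely made up): whether the system favors methodologically sound sources is decided by what its reward signal actually scores, not by any understanding of the subject.

```python
# Illustrative only: a "reward" that scores candidate answers by
# weighted features. Nothing here knows the truth; the ranking just
# mirrors whatever the training signal was built to reward.

def reward(answer, weights):
    return sum(weights[feature] * answer[feature] for feature in weights)

candidates = [
    {"fluency": 0.9, "methodology": 0.1},  # confident, weakly sourced
    {"fluency": 0.6, "methodology": 0.9},  # well sourced, less polished
]

plausibility_only = {"fluency": 1.0, "methodology": 0.0}
quality_aware = {"fluency": 0.3, "methodology": 0.7}

for weights in (plausibility_only, quality_aware):
    best = max(candidates, key=lambda a: reward(a, weights))
    print(weights, "->", best)
```

With the plausibility-only weights the fluent, weakly sourced answer wins; with the quality-aware weights the well-sourced one does. Same candidates, different reward structure, opposite ranking.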

1