
5m0k37r3353v3ryd4y t1_j8v3sal wrote

If we knew what answers we should be getting, why would we ask the question, though?

To your analogy, I don’t plug numbers into a calculator because I already know the answer I’m gonna get.

I think the move is just to fact check the AI if the correctness of the answer is so important, right? At least while it’s in Beta.

It’s very clear about it’s limitations right up front.

7

TheBigFeIIa t1_j8v6w58 wrote

ChatGPT is able to give confident but completely false or misleading answers. It is up to the user to be smart enough to distinguish a plausible and likely true answer from a patently false one. You don’t need to know the exact and precise answer, but rather the general target you are aiming for.

For example, if I asked a calculator to calculate 2+2, I would probably not expect an answer of √-1

11

5m0k37r3353v3ryd4y t1_j8v89kd wrote

Agreed.

But again, to be fair, in your example we already know the answer to 2 + 2. Those unfamiliar with imaginary numbers might not know whether to expect a radical sign over a negative integer in a response.

So, having a ballpark is good, but if you truly don’t know what type of answer to expect, Google can still be your friend.

3

TheBigFeIIa t1_j8va9ol wrote

Pretty much hit the point of my original post. ChatGPT is a great tool if you already have an idea of what sort of answer to expect. It is not reliable in generating accurate and trustworthy answers to questions that you don’t know the answer to, especially if there are any consequences to being wrong. If you did not know 2+2 = 4 and ChatGPT confidently told you the answer was √-1, you would now be in a pickle.

A sort of corollary point to this is that the clickbait and hype over ChatGPT replacing jobs like programmers, for example, is at least in its current form rather overstated. Generating code with ChatGPT requires a programmer to frame and guide the AI in constructing the code, and then a trained programmer to evaluate the validity of the code and fix any implementation or interpretation errors in said code.
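As a toy illustration of that review step (invented for this comment, not actual ChatGPT output), here is the kind of plausible-looking function an AI might confidently generate, next to the corrected version a reviewer would have to produce:

```python
# Hypothetical AI-generated attempt: reads plausibly, but is wrong.
# It effectively claims every odd number greater than 1 is prime,
# because it only ever checks divisibility by 2.
def is_prime_ai(n):
    return n > 1 and n % 2 != 0

# Reviewed, corrected version: trial division up to sqrt(n).
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The AI version confidently misclassifies 9 (= 3 * 3) as prime.
print(is_prime_ai(9), is_prime(9))  # True False
```

Without a programmer who already knows what a primality test should look like, the first version would sail through, which is exactly the point about needing to know the general target you're aiming for.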

6

majnuker t1_j8varna wrote

Yes, but the difference here, argumentatively, is that for soft intelligence such as language and facts, determining what is absolutely correct can be much harder, and people's instinct for what is correct can be very off base.

Conversely, we understand numbers, units, etc. well enough. But I suppose the analogy also works in a different way: most people no longer understand quadratic equations or advanced proofs, but most people also don't normally try to use a calculator for that.

Meanwhile, we often need assistance with soft-intelligence information, look it up, and rely on its accuracy, while most citizens lack the knowledge necessary to easily identify a problem with the answer.

So, sort of two sides to the same coin about human fallibility and reliance on knowledge-based tools.

1

theoxygenthief t1_j8vv7c0 wrote

Yeah, that's fine for questions with clear, simple, nuance-free answers. But integrated with search engines for complex questions? Seems like a dangerous idea to me. If I asked an AI-enhanced search engine whether vaccines cause autism, is it going to give more weight to studies with correct methodologies?

1

TheBigFeIIa t1_j8wajxv wrote

Since the AI is not itself intelligent, it would depend on the reward structure of the model and the data set used to train it.

1

HippoIcy7473 t1_j8vs1cc wrote

Let’s say an airline misplaced your luggage.

  1. Instruct ChatGPT to write a letter to whatever the airline is.
  2. Ask it to insert any pertinent info.
  3. Ask it to remove any incorrect info.
  4. Ask it to be more or less terse, and friendlier or firmer.
  5. Send the letter to the airline.

Time taken: ~5 minutes for a professional, syntactically correct 300-word email.
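That workflow can be sketched as a conversation sent to a chat-style API. Everything here is a placeholder (the airline, the details, and the commented-out client call are assumptions modeled on the OpenAI Python library, not a specific integration):

```python
# Sketch of the iterative drafting loop above. The actual API call is
# commented out; all names and details are invented placeholders.
def build_conversation(airline, details):
    messages = [
        # Step 1: ask for the initial letter.
        {"role": "user",
         "content": f"Write a complaint letter to {airline} about misplaced luggage."},
        # Step 2: feed in the pertinent facts.
        {"role": "user",
         "content": f"Include these details: {details}"},
        # Step 3: strip anything the model invented.
        {"role": "user",
         "content": "Remove any information I did not provide."},
        # Step 4: adjust the tone.
        {"role": "user",
         "content": "Make it firmer and slightly less terse."},
    ]
    return messages

conv = build_conversation("Acme Air", "flight AA123, bag tag 456789, Feb 14")
# In practice each step would be sent in turn and the reply reviewed, e.g.:
# response = client.chat.completions.create(model="gpt-3.5-turbo", messages=conv)
```

The human stays in the loop at steps 2-4, which is what keeps the final letter factually yours rather than the model's.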

3

ddhboy t1_j8w7cuv wrote

Yeah, I think that the Bing/Google Search case is wrong for ChatGPT, but something like its Office 365 integration, writing something based on a prompt, is better. More practically outside of that, something like a more fully featured automated customer support could reduce the need for things like call centers in the next couple of years.

5

MPforNarnia t1_j8w2wpd wrote

Exactly, it's time. We can do all calculations by time (and the knowledge) it just takes longer.

There are a few tasks at my work that ChatGPT has made more efficient.

2

loldudester t1_j8wckfj wrote

> To your analogy, I don’t plug numbers into a calculator because I already know the answer I’m gonna get.

You may not know what 18*45 is, but if a calculator told you it was 100 you'd know that's wrong.

1