torchma t1_jccsuru wrote

>I'm asking why they don't recognize that a portion of the answers they provide will be useless without being able to solve simple math problems.

?? They absolutely recognize that it's not good at math. It's not meant to be good at math. If you're still using it for math despite being told that, and despite the obvious results, then that's your problem, not theirs. That it's weak at math hardly negates its core competencies, which are remarkable and highly valuable.

>They could certainly build in a calculator now

What? That would be absolutely pointless. They might as well just tell you to use your own calculator. In fact, that's what Bing Chat would tell people if you asked it to do a math problem back before they neutered it.

2

torchma t1_jccg5ej wrote

Because basic calculation is already a solved problem. OpenAI is concerned with pushing the frontiers of AI, not with trivial integration of existing systems. No one is going to use GPT for basic calculation anyway. People already have ubiquitous access to basic calculators. It's not part of GPT's core competency, so why waste time on it? What is part of GPT's core competency is an advanced ability to process language.

That's not to say that they are necessarily ignoring the math problem. But the approach you are suggesting is not an AI-based approach. You are suggesting a programmatic approach (i.e. "if this, then do this..."). If they were only concerned with turning ChatGPT into a basic calculator, that might work. But it's a dead end. If OpenAI is addressing the math problem, they would take an AI approach (developing a model that learns math on its own). That's a much harder problem, but one with much greater returns.
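
To make that concrete, here's a minimal sketch of the programmatic route (purely illustrative; the routing rule and the `language_model` stand-in are my assumptions, not anything OpenAI actually does):

```python
import ast
import operator
import re

def language_model(prompt: str) -> str:
    # Hypothetical stand-in for the actual LLM call.
    return "..."

# Allowed arithmetic operators for the safe evaluator.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def _eval(node):
    # Walk the parsed expression tree, permitting only plain arithmetic.
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("not plain arithmetic")

def answer(prompt: str) -> str:
    # The "if this, then do this" rule: anything that looks like bare
    # arithmetic goes to the calculator; everything else goes to the model.
    if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
        try:
            return str(_eval(ast.parse(prompt.strip(), mode="eval")))
        except (ValueError, SyntaxError):
            pass
    return language_model(prompt)

print(answer("17 * 23"))               # -> 391
print(answer("Why is the sky blue?"))  # falls through to the model
```

And that's exactly where the dead end shows: the dispatch rule has to anticipate every possible phrasing, which is why a model that learns the math itself is the more interesting problem.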

1

torchma t1_jcbubdj wrote

I don't get your comment. You know it's a language model and not a calculator and yet are surprised that it got a calculation wrong? And no, it doesn't send anything to anything else. It's a language model. It's just predicting that the sequence of words "I'm sending this calculation to python" is the most likely sequence of words that should follow.
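
A toy sketch of that prediction loop (greedy decoding; `score_next` is a hypothetical stand-in for the real network, which would compute the distribution from learned weights):

```python
def score_next(tokens):
    # Hypothetical stand-in for the neural net: returns a probability
    # for each candidate next token given the tokens so far.
    canned = {
        ("what", "is", "17", "*", "23"): {"I'm": 0.4, "The": 0.3},
        ("what", "is", "17", "*", "23", "I'm"): {"sending": 0.6},
        ("what", "is", "17", "*", "23", "I'm", "sending"): {"this": 0.5},
        # ...nothing in here ever *computes* 17 * 23.
    }
    return canned.get(tuple(tokens), {"<eos>": 1.0})

def generate(tokens, max_new=20):
    tokens = list(tokens)
    for _ in range(max_new):
        probs = score_next(tokens)
        best = max(probs, key=probs.get)  # greedy: pick the likeliest token
        if best == "<eos>":
            break
        tokens.append(best)
    return " ".join(tokens)

print(generate(["what", "is", "17", "*", "23"]))
# -> "what is 17 * 23 I'm sending this"
```

"I'm sending this calculation to python" is just another high-probability continuation, not a description of anything actually happening.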

2

torchma t1_j65cls7 wrote

If you're going by OSM classification of roads, you should know that it's highly inconsistent. Two roads that are in fact very similar (surface, width, frequency of use) are often classified differently, and two very different roads identically. That's what you get with crowd-sourced data.
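
To illustrate with made-up data (these aren't real OSM ways), two physically similar roads can carry different highway=* values, so any filter keyed on the tag will split them:

```python
# Two hypothetical OSM ways describing physically similar roads;
# different mappers often pick different highway=* values.
way_a = {"highway": "track", "surface": "gravel", "width": "4"}
way_b = {"highway": "unclassified", "surface": "gravel", "width": "4"}

for name, way in [("way_a", way_a), ("way_b", way_b)]:
    # A naive filter keyed on the tag treats identical roads differently.
    keep = way["highway"] in {"unclassified", "residential"}
    print(name, "kept by filter:", keep)
```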

1

torchma t1_j3x0ssc wrote

That's hardly a more detailed explanation. You still don't answer the questions. If you are reporting survey data, you need to fully describe your sampling frame, what defines the sample population, how you obtained contact information, how you contacted potential respondents, and how many potential respondents you contacted versus how many actually participated (so that you can report the all-important response rate), etc.
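
The response rate is just respondents over contacts; a trivial sketch with made-up numbers:

```python
contacted = 500     # hypothetical: potential respondents contacted
participated = 62   # hypothetical: respondents who completed the survey

# Response rate: the share of contacted people who actually took part.
# Without both numbers, readers can't judge nonresponse bias.
print(f"Response rate: {participated / contacted:.1%}")  # -> 12.4%
```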

This reads like a high school project.

12

torchma t1_iwtiph5 wrote

> Google’s advertising structure has made it so there’s plenty of shit sites out there that only exist to trick us into clicks to serve us ads

That has nothing to do with Google's ad model. You can't pay Google to improve your page's rank in a search. Ads are completely separate from organic search results. The internet has simply gotten much bigger and much shittier, diluting useful information.

2

torchma t1_ivzchnc wrote

> Ultimately FTX got into such a pickle that Binance (#1) bought them for likely pennies on the dollar.

How is this being upvoted? Binance PULLED OUT of the deal. That's been the biggest story of the past couple days. How did you get that wrong?

10