venustrapsflies

venustrapsflies t1_jbkdce8 wrote

I’m sure they’re not using it to make any sort of meaningful decisions. People are talking a big game about ChatGPT, but it’s not going into anything critical at any place serious about making money. It’ll be used for, like, internal utilities to save their people time.

5

venustrapsflies t1_j8l8o54 wrote

How would you quantify the lack of intelligence in a cup of water? Prove to me that the flow patterns don’t represent a type of intelligence.

This is a nonsensical line of inquiry. You need to give a good reason why a statistical model would be intelligent, for some reasonable definition. Is a linear regression intelligent? The answer to that question should be the same as the answer to whether an LLM is.
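To make that concrete, here's a toy sketch (made-up data, nothing more) of what I mean: a linear regression and an LLM are both just parameterized functions fit to a dataset by minimizing a loss. The procedure is the same in kind; only the scale differs.

```python
import numpy as np

# Toy sketch, not anyone's real pipeline: both of these are "pick parameters
# that minimize a loss over a dataset". Nothing about the procedure itself
# says anything about intelligence.

# 1) Linear regression: minimize squared error on made-up data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.standard_normal(100)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # recovers something close to true_w

# 2) A language model is the same kind of object, just vastly bigger:
# minimize cross-entropy of the next token given the previous ones,
#   theta* = argmin_theta  E_data[ -log p_theta(token_t | tokens_<t) ]
# Different architecture and scale, same "fit a statistical model" recipe.
```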

What people like you do is conflate multiple very different definitions of a relatively vague concept like “intelligence”. You need to start with why on earth you would think a statistical model has anything to do with human intelligence. That’s an extraordinary claim, and the burden of proof is on you.

1

venustrapsflies t1_j8l4ftf wrote

No, bad science would be pretending that just because you don’t understand two different things, they are likely the same thing. Despite what you may believe, these algorithms are not some mystery that we know nothing about. We have a good understanding of why they work, and we know more than enough about them to know that they have nothing to do with biological intelligence.

0

venustrapsflies t1_j8kkovy wrote

I am literally a scientist who works on ML algs for a living. Stop trying to philosophize your way into believing what you want to. Just because YOU don’t understand it doesn’t mean you can wave your hands and act like two different things are the same.

1

venustrapsflies t1_j8kck2g wrote

No, it's not at all designed to be logically correct; it's designed to appear correct by reproducing patterns from its training dataset.

On the one hand, it's pretty impressive that it can do what it does using nothing but a statistical model of language. On the other hand, it's a quite unimpressive example of artificial intelligence because it is just a statistical language model. That's why it's abysmal at even simple math and logic questions, things that computers have historically been quite good at.
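Here's a rough toy sketch (made-up vocabulary and probabilities, not how any real model is implemented) of what "a statistical model of language" actually does: sample the next token from a learned distribution. Nothing in that loop checks the arithmetic.

```python
import numpy as np

# Toy sketch with invented numbers: a language model samples the next token
# from a learned conditional distribution. Nothing here verifies that the
# continuation is arithmetically or logically correct -- it only has to be
# probable given the training text.

# Pretend these are the model's learned probabilities after the prompt "2 + 2 =":
vocab = ["4", "5", "22", "four"]
p_next = [0.90, 0.04, 0.03, 0.03]  # learned from co-occurrence, not from doing arithmetic

rng = np.random.default_rng(0)
for _ in range(5):
    print(rng.choice(vocab, p=p_next))  # usually "4", but wrong answers are always on the table
```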

Human intelligence is nothing like a statistical language model. THAT is the real point, the one that both you and the OC, and frankly much of this sub at large, aren't getting.

7

venustrapsflies t1_j8jp5jv wrote

If I had a nickel for every time I saw someone say this on this sub I could retire early. It’s how you can tell this sub isn’t populated by people who actually work in AI or neuroscience.

It’s complete nonsense. Human beings don’t work by fitting a statistical model to large datasets; we learn by heuristics and explanations. An LLM is fundamentally incapable of logic, reasoning, error correction, confidence calibration, and innovation. No, a human expert isn’t just an algorithm, and it’s absurd that this idea even gets off the ground.

15

venustrapsflies t1_j4v6tdk wrote

I mean I hated homework, especially writing, but I have no clue how I would’ve learned to write an essay otherwise.

And with few exceptions, students can’t learn complex and abstract concepts without practicing on their own. Get rid of homework and you make math and writing illiteracy worse.

6

venustrapsflies t1_j25hll1 wrote

For many/most jobs it’s also basically impossible to find out how well someone can actually do a job, or even know if they’re not going to be a total disaster, without actually hiring them. That’s because you really have to work with and build a relationship with someone to know how well they’ll work with you. If you know some guy and you know he doesn’t completely suck, it’s a safer bet.

14

venustrapsflies t1_ix0hk1d wrote

Yeah, the biggest hurdle in figuring out what an article of clothing will actually look like on you is the fact that the vast majority of us aren't shaped like clothing models. The question is how it lies on your body, not how it looks dynamically stretched out over a 2D image.

38