sticky_symbols t1_j9rf56v wrote

Many thousands of human hours are cheap to buy, and cycles get cheaper every year. So those things aren't really constraints right now, except for small businesses.

4

MinaKovacs t1_j9rfnej wrote

True, but it doesn't matter - it is still just algorithmic. There is no "intelligence" of any kind yet. We are not even remotely close to anything like actual brain functions.

−2

gettheflyoffmycock t1_j9rqd5w wrote

Lol, downvotes. This subreddit has been completely overrun by non-engineers. I guarantee no one here has ever custom-trained a model and run inference with it outside of API calls. Crazy. Since ChatGPT, open-enrollment ML communities are so cringe.

2

Langdon_St_Ives t1_j9rsn1f wrote

Or maybe downvotes because they’re stating the obvious. I didn’t downvote for that or any other reason. Just stating it as another possibility. I haven’t seen anyone here claim language models are actual AI, let alone AGI.

3

royalemate357 t1_j9rsqd3 wrote

>We are not even remotely close to anything like actual brain functions.

Intelligence need not look anything like actual brain functions though, right? A plane's wings don't work anything like a bird's wings, yet it can still fly. In the same sense, why must intelligence not be algorithmic?

At any rate, I feel like saying that probabilistic machine learning approaches like GPT-3 are nowhere near intelligence is a bit of a stretch. If you keep scaling these approaches up, you get closer and closer to the entropy of natural language (or whatever other domain), and if you've learned the exact distribution of language, imo that would be "understanding".
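Rough toy illustration of what I mean (the "language", vocabulary, and all the probabilities below are made up purely for the sketch): a model's cross-entropy loss is bounded below by the entropy of the true distribution, and hitting that bound exactly means the model has learned the distribution.

```python
import math

# Toy "language": a made-up true next-token distribution over a tiny vocabulary.
true_dist = {"the": 0.5, "cat": 0.3, "sat": 0.2}

# Two hypothetical models' predictions: a small one and a better-scaled one.
small_model = {"the": 0.40, "cat": 0.40, "sat": 0.20}
large_model = {"the": 0.49, "cat": 0.31, "sat": 0.20}

def cross_entropy(p, q):
    """Expected negative log-likelihood (bits) of predictions q under truth p."""
    return -sum(p[t] * math.log2(q[t]) for t in p)

entropy = cross_entropy(true_dist, true_dist)  # the irreducible lower bound
print(f"entropy of the 'language': {entropy:.4f} bits")
print(f"small model loss:          {cross_entropy(true_dist, small_model):.4f} bits")
print(f"large model loss:          {cross_entropy(true_dist, large_model):.4f} bits")
# Loss can approach but never dip below the entropy; matching it exactly
# means the model predicts the true distribution perfectly.
```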

2

wind_dude t1_j9rvmbb wrote

When they scale, they hallucinate more and produce more wrong information, arguably getting further from intelligence.

3

royalemate357 t1_j9rzbbc wrote

>When they scale, they hallucinate more and produce more wrong information

Any papers/literature on this? AFAIK they do better and better on fact/trivia benchmarks and whatnot as you scale them up. It's not like smaller (GPT-like) language models are factually more correct ...

1

wind_dude t1_j9s1cr4 wrote

I'll see if I can find the benchmarks; I believe there are a few papers from IBM and DeepMind talking about it, and a benchmark study in relation to Flan.

1

MinaKovacs t1_j9s04eh wrote

It's just matrix multiplication and derivatives. The only real advance in machine learning over the last 20 years is scale. Nvidia was very clever and made a math processor that can do matrix multiplication 100x faster than general-purpose CPUs. As a result, the $1bil data center required to make something like GPT-3 now only costs $100mil. It's still just a text bot.
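To be concrete about "matrix multiplication and derivatives", here's a minimal sketch of one dense layer's forward and backward pass (NumPy, toy sizes chosen arbitrarily); at its core, training something like GPT-3 is this at enormous scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense layer: 4 inputs -> 3 outputs (sizes are arbitrary).
x = rng.normal(size=(1, 4))   # input activations
W = rng.normal(size=(4, 3))   # weights
y = x @ W                     # forward pass: one matrix multiplication

# Backward pass: derivatives via the chain rule.
grad_y = np.ones_like(y)      # stand-in for the upstream gradient
grad_W = x.T @ grad_y         # gradient w.r.t. weights: another matmul
grad_x = grad_y @ W.T         # gradient w.r.t. inputs: another matmul

# GPUs win here precisely because these matmuls dominate the work.
```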

1

sticky_symbols t1_j9w9b6e wrote

There's obviously intelligence under some definitions. It meets a weak definition of AGI since it reasons about a lot of things almost as well as the average human.

And yes, I know how it works and what its limitations are. It's not that useful yet, but discounting it entirely is as silly as thinking it's the AGI we're looking for.

2