
Really_McNamington t1_ja22jqp wrote

First sentence - "The rise of transformer-based architectures, such as ChatGPT and Stable Diffusion, has brought us one step closer to the possibility of creating an Artificial General Intelligence (AGI) system

Total bollocks. Bullshit generators is all they are.

Try this

−1

jamesj OP t1_ja23wyd wrote

Are you saying the transformer has brought us no closer to AGI?

2

Really_McNamington t1_ja2k482 wrote

No, the rapture of the nerds is as remote as it ever was. From the article I linked:

>"How are we drawing these conclusions? I'm right here doing this work, and we have no clue how to build systems that solve the problems that they say are imminent, that are right around the corner." – Erik Larson

I probably spend too much time at r/SneerClub to buy into the hype.

−2

phillythompson t1_ja36t1g wrote

This dude references Netflix's recommendation system, Amazon's recommendations, and Facebook as examples of "what we think true AI is".

That is so far removed from what many are discussing right now. He doesn't touch on LLMs at all in that interview. He talks about inference and thinking, and dismisses AI's capabilities because "all it is is inference".

It's a common pushback: "the AI doesn't actually understand anything." And my response is, "...so?"

If it gives the illusion of thinking, if it can pass the Turing test for most of the population, and if it can eventually be integrated with real-time data, images, video, and sound, does it honestly matter whether it's "truly thinking as a human does"? Hell, do we even know how HUMANS think?

2

Really_McNamington t1_ja4s8u1 wrote

>Hell, do we even know how HUMANS think?

Hell no. So why the massive overconfidence that we're on the right track with these bullshit generators?

1

phillythompson t1_ja4sny6 wrote

It's not confidence that they are similar at all. There is potential, that's what I'm saying, and folks like yourself are the ones being overconfident in claiming that "the current AI / LLMs are definitely not smart or thinking."

I've yet to see a reason why we'd dismiss the idea that these LLMs are similar to our own thinking, or even intelligent. That's my point.

1

Really_McNamington t1_ja4vs33 wrote

Look, I'm reasonably confident that there will eventually be some sort of thinking machines. I definitely don't believe it's substrate dependent. That said, nothing we're currently doing suggests we're on the right path. Fairly simple algorithms output bullshit from a large dataset. No intentional stance, to borrow from Dennett, means no path to strong AI.

I'm as materialist as they come, but we're nowhere remotely close and LLMs are not the bridge.

1

phillythompson t1_ja4xclz wrote

I'm struggling to see how you're so confident that we aren't on the right path, or even close to it.

First, LLMs are neural nets, as are our brains. Second, one could make the argument that humans take in data and output "bullshit" too.
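(To be concrete about what I mean by "neural net", here's a toy next-token predictor; the sizes and weights are made up and it's nothing like a real transformer in scale, but it's the same family of machinery.)

```python
# Toy sketch only: a tiny "neural net" next-token predictor.
# All sizes and weights are made up; a real LLM stacks many transformer
# layers and billions of parameters, but the ingredients are the same kind.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4                       # hypothetical tiny vocabulary
W_embed = rng.normal(size=(vocab, dim))  # token embeddings
W_out = rng.normal(size=(dim, vocab))    # projection back to the vocabulary

def next_token_logits(token_id: int) -> np.ndarray:
    """One 'layer': embed a token, project to scores over the vocabulary."""
    hidden = np.tanh(W_embed[token_id])  # nonlinearity, as in any neural net
    return hidden @ W_out                # unnormalized next-token scores

probs = np.exp(next_token_logits(3))
probs /= probs.sum()                     # softmax: a distribution over tokens
print(probs)
```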

So I guess I'm trying to see how we are different, given what we've seen thus far. Again, I'm not claiming we're the same; I just haven't found anything showing why we'd be different.

Does that make sense? It seems like you're making a concrete claim that "these LLMs aren't thinking, and that's certain," and I'm saying, "how can we know they aren't similar to us? What evidence is there to show that?"

1

Really_McNamington t1_ja6vq1o wrote

Bold claim that we actually know how our brains work. Neurologists will be excited to hear that we've cracked it. The ongoing work at OpenWorm suggests there may still be some hurdles.

To my broader claim: ChatGPT is just a massively complex version of ELIZA. It has no self-generated semantic content. There's no mechanism at all by which it can know what it's doing. Even though I don't know how I'm thinking, I know that I'm doing it. LLMs just can't do that, and I don't see a route to it emerging from this approach.
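(For anyone who hasn't seen how ELIZA-style substitution works, here's a toy sketch; the patterns are made up and Weizenbaum's actual 1966 script was richer, but the mechanism is the point: the program rearranges your words by rule, with zero grasp of what any of them mean.)

```python
# Toy ELIZA-style responder: pure pattern substitution, no understanding.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # canned fallback when nothing matches

print(respond("I feel like LLMs understand me"))
# -> Why do you feel like LLMs understand me?
```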

1