phillythompson t1_jar7p83 wrote

We don’t know how other minds work, either. Animals and all that you listed, I mean.

And complexity doesn’t imply… anything, really. You also have a misunderstanding of what LLMs do: they aren’t necessarily “memorizing.” They are predicting the next token for a given input, based on patterns learned from a massive amount of training data.
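
To make “predicting the next text” concrete, here is a minimal sketch, purely my own illustration, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (the prompt is an arbitrary example). All a causal LLM does at each step is score every token in its vocabulary as the possible next one:

```python
# Minimal sketch of next-token prediction (illustrative assumption:
# Hugging Face "transformers" with the small GPT-2 checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The distribution over the *last* position is the model's guess at
# the next token; "generation" just repeats this step in a loop.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")
```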

I’d argue that it’s not clear we are any different than that. Note I’m not claiming we are the same! I am simply saying I don’t see evidence to say with certainty that we are different / special.


phillythompson t1_jaqpaew wrote

Isn’t the octopus example completely wrong because it was only “trained” on a small sample of text / language?

The point is: what if the octopus had seen or heard all about the situations of stranded island dwellers, all about boats, survival, and so on?

With more context, it could interpret the call for help better.

And while this author might claim “it’s just parroting a reply, it doesn’t actually think,” I’ll ask how the hell she knows what human thinking actually is.

People are so confident in claiming humans are special, yet we have zero idea how our own minds work.


phillythompson t1_ja4xclz wrote

I’m struggling to see how you’re so confident that we aren’t on that path, or at least close to it.

First, LLMs are neural nets, as are our brains. Second, one could make the argument that humans also just take in data and output “bullshit.”

So I guess I’m trying to see how we are different given what we’ve seen thus far. I’m again not claiming we are the same, but I am not finding anything showing why we’d be different.

Does that make sense? I guess it seems like you’re making a concrete claim of “these LLMs aren’t thinking, and it’s certain,” and I’m saying, “how can we know that they aren’t similar to us? What evidence is there to show that?”


phillythompson t1_ja4sny6 wrote

It’s not confidence that they are similar at all. There is potential, that’s what I’m saying, and folks like yourself are the ones being overconfident in claiming that “the current AI / LLMs are definitely not smart or thinking.”

I’ve yet to see a reason to dismiss the idea that these LLMs are similar to our own thinking, or even intelligent. That’s my point.


phillythompson t1_ja36t1g wrote

This dude references the Netflix recommendation system, Amazon recommendations, and Facebook as examples of “what we think true AI is.”

That is so far removed from what many are discussing right now. He doesn’t touch on LLMs at all in that interview. He talks about inference and thinking, and dismisses AI’s capabilities because “all it is is inference.”

It’s a common pushback: “the AI doesn’t actually understand anything.” And my response is, “..so?”

If it gives the illusion of thinking, if it can pass the Turing test with most of the population, and if it can eventually be integrated with real-time data, images, video, and sound, does it honestly matter whether it’s “truly thinking as a human does”? Hell, do we even know how HUMANS think?


phillythompson t1_j9yhzwx wrote

Are you me? I could’ve written this exact post.

People continue to say, “psh, it doesn’t actually know or think.” And I say, “tell me how humans know something or think.” And there’s never an answer!

Yet they think we are somehow special and protected from AI simply because we are made of meat.

I am as concerned as I am excited (potentially more the former), yet I feel crazy talking about it in real life.
