i0i0i t1_jdmj6q5 wrote

We don’t have a rigorous definition of intelligence. How sure are you that you’re ever being truly creative? Next time you’re talking to someone, as you’re speaking, pay close attention to the next word that comes out of your mouth. Where did it come from? When did you choose that specific word to follow the previous one? What algorithm did your brain follow to arrive at that word? The fact is that we don’t know, and not having a real understanding of human intelligence should leave us at least somewhat open to the possibility that an artificial system that is quickly becoming indistinguishable from an intelligent agent may in fact be, or become, an intelligent agent.
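
For comparison, here’s roughly what a language model does at each step: sample the next word from a probability distribution conditioned on everything said so far. A toy sketch (the candidate words and their weights are made up for illustration):

```python
import random

# Toy next-word distribution. A real model computes these
# probabilities from the full context; these numbers are invented.
candidates = {"word": 0.45, "token": 0.30, "thing": 0.15, "sound": 0.10}

# Sample one continuation, weighted by probability, the way a
# model samples its next token.
next_word = random.choices(
    list(candidates), weights=candidates.values(), k=1
)[0]
print(next_word)
```

The point of the analogy: we can write down the model’s selection procedure in a few lines, but nobody can write down yours.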

ErikTheAngry t1_jdn193d wrote

We don't really need a rigorous definition when we already have a general definition that it fails.

Intelligence is the ability to gain and apply knowledge and skills.

You're very right that human behaviour involves a lot of mimicry. When I'm getting to know someone, I've noticed more than just my word choice being influenced; my behaviour shifts too. Part of that is an evolved behaviour intended to aid socialization (as humans are social creatures).

I write code every now and then while I'm working. That code is from scratch; I'm applying knowledge to solve a task. I choose coding specifically because ChatGPT is remarkably good at developing code.

Until it isn't. It makes mistakes because it's just regurgitating code that seems to fit. It can get me 80% of the way there, and it's a wonderful tool for that, but the other 20% has to be corrected, because it doesn't understand what the code does; it's just "copying and pasting" (an oversimplification, but only slightly).

The difference between my coding and ChatGPT's coding is that when I read code, I know what I'm trying to do. I can apply my knowledge to say "this will work", "this won't work", or "what the fuck is this?" before I even try to compile.
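
To illustrate the kind of mistake I mean, here's a made-up example in the style of generated code: it runs without complaint, but a subtle off-by-one makes it wrong, and you only catch it if you know what the result should be (the function and values are hypothetical):

```python
def moving_average(values, window):
    """Return the moving average of `values` over `window` samples."""
    averages = []
    # Looks plausible, but the range stops one window early and
    # silently drops the final average. It should be
    # range(len(values) - window + 1).
    for i in range(len(values) - window):
        averages.append(sum(values[i:i + window]) / window)
    return averages

# moving_average([1, 2, 3, 4], 2) returns [1.5, 2.5] instead of
# [1.5, 2.5, 3.5]. It runs fine; it's just wrong.
```

Nothing about that code "looks" broken. You have to understand what a moving average is supposed to produce to catch it.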

i0i0i t1_jdnfsy0 wrote

I think we do need a rigorous definition. Otherwise we’re stuck in a loop where the meaning of intelligence is forever updated to mean whatever it is that humans can do and software can’t: the God of the gaps, applied to intelligence.

What test could we perform that would convince everyone this thing is truly intelligent? Throw a coding challenge at most people and they’ll fail, so that can’t be the metric. We could ask it if it’s afraid of dying. Well, that’s already been done: the larger the model, the more likely it is to report a preference not to be shut down (when tested without the guardrails put on after the fact).

All that to say: I disagree with the idea that it’s “just” doing anything. We don’t know precisely what it’s doing (at the neural-network level), and we don’t know precisely what the human brain is doing, so we shouldn’t be quick to dismiss the possibility that what often looks like evidence of true intelligence actually is a form of true intelligence.

ErikTheAngry t1_jdnzzie wrote

I mean... if you want a rigorous definition of intelligence to compare it against, then I guess you'll have to start there; once that definition is broadly accepted, we can compare it to that.

For now, with the definitions we do have, it's not intelligent. It's just a retrieval system, with no more intelligence than my filing cabinet.
