
i0i0i t1_jdnfsy0 wrote

I think we do need a rigorous definition. Otherwise we’re stuck in a loop where the meaning of intelligence is forever updated to mean whatever humans can do that software can’t. It’s the God of the gaps, applied to intelligence.

What test can we perform on it that would convince everyone that this thing is truly intelligent? Throw a coding challenge at most people and they’ll fail, so that can’t be the metric. We could ask it if it’s afraid of dying. Well, that’s already been done - the larger the model, the more likely it is to report a preference not to be shut down (absent the guardrails added after the fact).

All that to say: I disagree with the idea that it’s “just” doing anything. We don’t know precisely what it’s doing (from the neural network perspective), and we don’t know precisely what the human brain is doing, so we shouldn’t be quick to dismiss the possibility that what often seems to be evidence of true intelligence actually is a form of true intelligence.


i0i0i t1_jdmj6q5 wrote

We don’t have a rigorous definition of intelligence. How sure are you that you’re ever being truly creative? Next time you’re talking to someone, as you’re speaking, pay close attention to the next word that comes out of your mouth. Where did it come from? When did you choose that specific word to follow the previous one? What algorithm is your brain following that resulted in the choice of that word? The fact is that we don’t know, and without a real understanding of human intelligence we should be at least somewhat open to the possibility that an artificial system that is quickly becoming indistinguishable from an intelligent agent may in fact be, or become, an intelligent agent.
