Viewing a single comment thread. View all comments

i0i0i t1_jdnfsy0 wrote

I think we do need a rigorous definition. Otherwise we’re stuck in a loop where the meaning of intelligence is forever updated to mean whatever it is that humans can do that software can’t. The God of the gaps applied to intelligence.

What test can we perform on it that would convince everyone that this thing is truly intelligent? Throw a coding challenge at most people and they’ll fail, so that can’t be the metric. We could ask it if it’s afraid of dying. Well that’s already been done - the larger the model size the more likely it is to report that it has a preference to not be shut down (without the guardrails put on after the fact).

All that to say that I disagree with the idea that it’s “just” doing anything. We don’t know precisely what it’s doing (from the neural network perspective) and we don’t know precisely what the human brain is doing, so we shouldn’t be quick to dismiss the possibility that what often seems to be evidence of true intelligence actually is a form of true intelligence.

1

ErikTheAngry t1_jdnzzie wrote

I mean... if you want a rigorous definition of intelligence to compare it to, then I guess you'll have to start there, and then when it's broadly accepted as a thing, we can compare it to that.

For now, with the definitions we do have, it's not intelligent. It's just a retrieval system, with no more intelligence than my filing cabinet.

1