
gravitas_shortage t1_jedxxtx wrote

You're only arguing that at some arbitrary level of complexity, consciousness may emerge. Which, well, yes. But that's not enough to posit that we are there already, or might be soon, or even that we might conceivably get there by going in the GPT/transformer direction. You need to provide a plausible definition of consciousness and show that at the very minimum the basic infrastructure is there in GPT, otherwise there's no reason to think that it's any less of a Chinese room than Clippy.

5

Galactus_Jones762 OP t1_jef6fps wrote

We mostly agree. It may be a wise default position to say it’s like Clippy until proven otherwise. But my actual claim, fwiw, is this: at some as-yet-unpinpointed level of complexity, consciousness may emerge. That level isn’t arbitrary; it’s unknown, but we know it exists, at least in brains. We don’t know whether it has been reached in the most advanced AI systems, and we also don’t know that it hasn’t; by “it” I mean a level at which something we might usefully label “consciousness” could potentially emerge.

I’m also saying that even if we can define consciousness, we can’t prove it’s in our own brains merely by looking at them. What we can plausibly say is that there is “likely” a basic infrastructure in our brains that yields subjectivity, qualia, all that good stuff, and that infrastructure is more or less alluded to in my article. Reductionist, sure, but I talked about ion channels, firing rates, action potentials, numbers of synaptic connections, networks, and brain regions.

You are telling me that in order to say it is “likely” there’s a basic infrastructure in AI yielding something beyond mere automaton behavior, something like consciousness, I need more evidence. I agree.

Which is why I’m not saying it’s likely. I am merely offering that it is possible. And that’s plenty to divorce the Chinese Room from this conversation.

Searle devised his thought experiment when he had “eyes” on all the pieces of the model’s functionality. It’s a brain-dead obvious observation, but also an important one.

His original conception of sequential token manipulation does not account for unexplained emergence in systems where we don’t have a model for how the data is structured at these massive scales.

Bottom line: one can’t argue that just because humans are made up of subatomic particles, we can extrapolate that they can never be conscious or are entirely mechanical automatons.

We are entirely mechanical. Yet we don’t know how consciousness arises in us or any creature. We know it’s mechanical and have tracked the culprit to the brain but still don’t know precisely how it creates this subjective experience we have.

What’s new and different about current AI, as opposed to older systems, is that we are starting to see edge-case activity where we really and truly no longer know why or how these things do what they do.

When emergence takes place in complex systems you have to put Searle’s experiment on notice—it’s in danger of being increasingly irrelevant as the behaviors become increasingly unexplainable by sequential token manipulation alone.

We should not blindly invoke history or rely too heavily on these prosaic “rules” handed down by great thinkers. It’s important to understand their role, but also to know when to come in out of the rain.

If a scholar invokes the Chinese Room as a one-and-done answer to smart laypeople who are noticing that something weird is happening in the AGI research field, that’s lazy.

It’s fine for a child who mistakes a chatbot for a true friend. It’s not adequate for those who are fully aware of Searle and what his dictum meant.

We have to be open to the possibility that AI is showing early signs of graduating from its Chinese room shackles. Maybe we need a new thought experiment.

0

gravitas_shortage t1_jefhclx wrote

Sure, just one thing: consider a probabilistic argument. There are a quadrillion things out there that most definitely do not have consciousness, from rocks to stars to Clippy. There is only one thing we know does: a brain, and even then it's disputable that all types do. An argument that anything has consciousness must provide at least the beginning of a reason that it does; saying "it might" is not enough, because the huge, huge majority of things don't.
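To make that base-rate point concrete, here's a minimal Bayesian sketch in Python (the numbers are purely illustrative, not measurements of anything): if the prior probability that an arbitrary system is conscious is vanishingly small, then evidence that is no likelier under "conscious" than under "not conscious" leaves the posterior vanishingly small too.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: update a prior with
    P(evidence | conscious) / P(evidence | not conscious)."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 1e-15           # illustrative: one conscious system among ~a quadrillion things
weak_evidence = 1.0     # "it might be conscious" -- no likelier if it is than if it isn't
print(posterior(prior, weak_evidence))    # ~1e-15: the prior barely moves

strong_evidence = 1e6   # evidence a million times likelier if the system is conscious
print(posterior(prior, strong_evidence))  # ~1e-9: still tiny; "might" alone isn't enough
```

The toy numbers only exist to show that "it might" carries no likelihood ratio, so the tiny prior dominates until you give an actual reason.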

1

Galactus_Jones762 OP t1_jefvb0v wrote

Saying it “might” may seem irrelevant, but it’s not technically wrong to claim something might have consciousness. It’s sort of a “definitely maybe” thing, as opposed to a “definitely not, a priori impossible” type thing. That’s the tension I’m working between.

There’s probably a continuum of things between a rock and a person, along which the possibility of consciousness increases. I think AI is climbing this continuum as it begins to exhibit emergent properties and agentic behaviors, which it has.

Also consider panpsychism. There is no good reason to be certain that something isn’t conscious. We just don’t know enough about consciousness to talk in certainties. I think doing so is a trap.

There is also no good reason to be certain something IS conscious, except perhaps ourselves, as we experience our subjective reality directly. We should live as if others are conscious but we have to fall short of certainty.

1