
ErisWheel t1_ja99svb wrote

Yeah, sorry if it seemed nit-picky, but I think these are important distinctions when we're talking about where consciousness comes from, or which disparate elements might or might not be necessary conditions for it. Missing the entire limbic system and still having consciousness is almost certainly impossible without some sort of supernatural explanation of the latter.

Similarly, with locked-in syndrome, I think there's a real question about whether we would ever know those patients were conscious in the absence of some sort of external indicator. What does "consciousness" entail, and is it the same as "response to stimuli"? If they really can't "feel, speak or interact with the world" in any way, what exactly serves as independent confirmation that they are actually conscious?

It's an interesting quandary when it comes to AI. I think this professor's argument falls pretty flat, at least in the short summary of it that's being offered. He's saying things like "all information is equally valuable to AI" and that "dopamine-driven energy leads to intention", which is somehow synonymous with "feeling" and therefore consciousness. But those points aren't well-supported, so unless there's more that we're not seeing, the dismissal of consciousness in AI is pretty thin as presented.

In my opinion, it doesn't seem likely that what we currently know as AI would have something that could reasonably be called "consciousness", but a different reply above brought up an interesting point: when a series of increasingly nuanced pass/fail logical operations gets you to complex formulations that appear indistinguishable from thought, what is that exactly? It's hard to know how we would separate that sort of "instantaneous operational output" from consciousness if it became sophisticated enough. And given how fast an AI can learn, it almost certainly would become that sophisticated, and incredibly quickly at that.

In a lot of ways, it doesn't seem all that different from arguments about strong determinism with regard to free will. We really don't know how "rigid" our own conscious processes are, or how beholden they might be to small-scale neurochemical interactions that we're unable to observe or influence directly. If our consciousness turns out to be something like "macro-level" awareness emerging from strongly-determined neurochemical interactions, it's difficult to see how that scenario differs much from an AI running billions of logical operations around a problem to arrive at an "answer" that appears as nuanced and emotional as our conscious thoughts ever did. The definition of consciousness might have to be expanded, but wondering about that is hardly "breathless panic". I think we agree that the article isn't all that great.
