Submitted by owlthatissuperb t3_y938ux in philosophy
wow_button t1_it8sogx wrote
Reply to comment by bread93096 in Artificial Suffering and the Hard Problem of Consciousness by owlthatissuperb
Right - but you're missing my point. That super-fast computer would be doing exactly the same thing the XKCD comic does with rocks, just faster. It's Turing complete, so it can do anything any conventional computer can do. But it's obvious that there is no consciousness or feeling in the pattern of rocks.
What I'm saying is that if we ever build a conscious AI, it will be because we've created a certain configuration of matter that registers feelings, not because we've written code. Code can pretend to feel, but not feel.
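(For what it's worth, the rock-layer in that comic is executing something like this: a cellular automaton. Rule 110 is a one-line update rule, and it's provably Turing complete. A few lines of Python, purely as a sketch:)

```python
# Rule 110 cellular automaton: the kind of computation the XKCD
# character performs by hand with rows of rocks. Each cell's next
# state depends only on its three-cell neighborhood.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1]  # one "rock" at the right edge
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Run it and you get rows of '#'s, i.e. the rocks. Nothing in that loop could feel anything.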
bread93096 t1_it8uqtj wrote
Ah, I see. Basically you’re referring to the Chinese Room problem. I’d argue that’s more a problem with our perception than with consciousness itself: it is impossible for us to determine from the outside whether any system is conscious. That is true even of other human beings, as the p-zombie problem illustrates. But it would certainly be possible for an artificial system to be conscious in fact. We just wouldn’t know about it.
wow_button t1_it98z33 wrote
Yeah, it's analogous to the Chinese Room, that's a good point. But what I'm saying is that a computer is a black box we can actually open: it is demonstrably mechanistic. I get that maybe that's controversial? But mechanistic, deterministic symbol-shuffling is literally all computers do. I've read arguments like Tononi's IIT, but the whole 'when it's complex and integrated, consciousness happens' move does not convince me (though my understanding is admittedly shallow).
I can write a computer program that capitalizes every letter you type in a few lines of code. Does any part of the computer understand what it's doing? No, no more than a see-saw understands what it's doing when you push down on the high end and the other side goes up. The computer is a mechanistic, deterministic machine that happens to be able to do some really cool and complicated stuff.
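To be concrete, the whole program is something like this (Python, just as a sketch):

```python
# Echo back whatever the user types, in all caps.
# Every step is a blind, mechanical transformation of symbols;
# nothing in here "understands" the text.
try:
    while True:
        line = input("> ")   # read a string of characters
        print(line.upper())  # map each one to its uppercase form
except EOFError:
    pass                     # stop when the input ends
```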
All other computer programs, including the most sophisticated current AI, are just more complicated versions of my simple program.
bread93096 t1_it9a9r9 wrote
The counterargument would be that the human brain is also an amalgam of relatively simple sub-processors, and consciousness is the result of these many sub-processors interacting. This is supported by the fact that the parts of the brain associated with consciousness and sentience develop relatively late in the evolutionary timeline of most intelligent species. However, until we can say conclusively how consciousness works in the human brain, we can’t say whether it is possible in an artificial system, and we are not at all close to solving that problem.
wow_button t1_it9joo8 wrote
Well said - my reasoning above is why I'm so drawn to Analytic Idealism. I can't get past my own experience with programming to make the leap that some magic number of logic gates, memory, and complex processing emerges into consciousness, yet materialism more or less dictates that that must be the case. Panpsychism also appealed to me (consciousness is fundamental to the material world), but Analytic Idealism scratches that itch in a much more satisfying way. Ultimately I guess I'm skeptical that a pure materialist perspective will grant us the insights into consciousness necessary to create a compelling AI. Thanks for the article and the convo!