
wow_button t1_it7y573 wrote

If I write something that feels, it's not in the code. What you described - a 'configuration of matter' - yeah, maybe we could build that. But it's not just information. That's what I mean about it being 'artificial life'.

2

bread93096 t1_it88av1 wrote

Code itself may not be capable of feeling, but code in combination with a network of physical processors that mimic organic brain structures could conceivably feel.

1

wow_button t1_it8bnrn wrote

Agree - but that's my point: now you're not creating artificial intelligence in the form of a program that can run on any computer, you're building artificial life - the 'network of physical processors' is where the feelings would reside.

To grok my point about what a computer is, watch this: a computer made of people. Or this: https://xkcd.com/505/

How would you write a program that ran on that computer and had feelings? And before you object that it's too simple - all computers are just faster, more complicated versions of this.
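To make 'more complicated versions of this' concrete, here's a rough sketch (Python, purely illustrative - not from the video or the comic) of how arithmetic can be stacked out of one dumb operation, the same kind of step the people in that video or the rocks in the comic carry out by hand:

```python
# One primitive operation - something a row of people (or rocks) could act out mechanically.
def nand(a, b):
    return 0 if (a and b) else 1

# Every other gate falls out of NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# A full adder: add three bits, return (sum, carry). Chain these and you have arithmetic.
def full_adder(a, b, carry_in):
    s = xor_(xor_(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor_(a, b)))
    return s, carry_out

print(full_adder(1, 1, 0))  # (0, 1), i.e. 1 + 1 = binary 10
```

Chain enough of these together and you get a CPU; at no point in the chain does a NAND gate start to feel anything.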

1

bread93096 t1_it8h09l wrote

Computers in the future could become sufficiently advanced that the processors in an average home computer would be capable of running an AI program that is conscious. The only barrier is what hardware is widely available.

1

wow_button t1_it8sogx wrote

Right - but you're missing my point. That super-fast computer would be doing exactly the same thing the xkcd comic does with rocks, just faster. It's Turing complete, so it can do everything that is possible to do with any conventional computer. But it's obvious that there is no consciousness or feeling in the pattern of rocks.
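To see how low the bar for 'Turing complete' really is, here's a toy sketch (Python, purely illustrative) of Rule 110 - a one-dimensional cellular automaton that has been proven Turing complete, even though each step is nothing but a table lookup on three neighboring cells. A row of rocks updated by hand would run it just as well:

```python
# Rule 110: each cell's next state depends only on (left, center, right).
RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start with a single live cell and watch the pattern unfold.
row = [0] * 31 + [1]
for _ in range(16):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Anything a conventional computer can compute can in principle be encoded in updates like that - and it's just as obviously a pattern of marks being shuffled around.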

What I'm saying is that if we ever build an AI that feels, it will be because we've created a certain configuration of matter that registers feelings, not because we've written code. Code can pretend to feel, but it can't feel.

1

bread93096 t1_it8uqtj wrote

Ah, I see. Basically you're referring to the Chinese room / black box problem. I'd argue that's more a problem with our perception than with consciousness itself. It is impossible for us to determine from the outside whether any system is conscious or not - this is true even of other human beings, as the p-zombie problem illustrates. But it would certainly be possible for an artificial system to be conscious in fact; we just wouldn't know about it.

2

wow_button t1_it98z33 wrote

Yeah, it's analogous to the black box problem, that's a good point. But what I'm saying is that with computers we can open the black box, and what's inside is demonstrably just mechanism. I get that maybe that's controversial? But that is literally all a computer does. I've read arguments like Tononi's IIT (Integrated Information Theory), but the whole 'when it's complex and integrated enough, consciousness happens' move does not convince me (though my understanding is admittedly shallow).

I can write a computer program that capitalizes everything you type in a few lines of code. Does any part of the computer understand what it's doing? No - the same way a see-saw doesn't understand what it's doing when you push down on the high end and the other side goes up. The computer is a mechanistic, deterministic machine that happens to be able to do some really cool and complicated stuff.
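Literally something like this (Python, just as an illustration):

```python
# The whole 'capitalize whatever you type' program: a few lines, and no
# understanding anywhere in them - just mechanical symbol shuffling.
while True:
    line = input('> ')
    if not line:          # an empty line ends the program
        break
    print(line.upper())
```

That's the entire program; there's no component left over that could 'know' it is shouting.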

All other computer programs, including the most sophisticated current AI, are just more complicated versions of my simple program.

1

bread93096 t1_it9a9r9 wrote

The counterargument would be that the human brain is also an amalgam of relatively simple sub-processors, and consciousness is the result of these many sub-processors interacting. This is supported by the fact that the parts of the brain associated with consciousness and sentience develop relatively late in the evolutionary timeline of most intelligent species. However, until we can say conclusively how consciousness works in the human brain, we can't say whether it is possible in an artificial system - and we are not at all close to solving that problem.

3

wow_button t1_it9joo8 wrote

Well said - my reasoning above is why I'm so drawn to Analytic Idealism. I can't get past my own experience with programming to make the leap that some magic number of logic gates, memory, and complex processing emerges into consciousness, yet materialism kind of dictates that that must be the case. Panpsychism also appealed to me (consciousness is fundamental to the material world), but Analytic Idealism scratches that itch in a much more satisfying way. Ultimately I guess I'm skeptical that a pure materialist perspective will grant us the insights into consciousness necessary to create a compelling AI. Thanks for the article and the convo!

1