reconditedreams t1_j2831tc wrote

It's a fallacy to say that we need to fully understand the processes underlying human consciousness in order to emulate its function well enough to be practically indistinguishable.

Obviously computers are nothing like a human brain; they're two completely different kinds of physical systems. One is made of silicon and logic gates, the other of carbon and neurons.

Computers are also nothing like the weather, but that doesn't mean we can't use them to emulate the weather closely enough to be practically useful for predicting storm fronts and temperatures.
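
To make that analogy concrete, here's a toy sketch (my own, with made-up numbers, not anything from a real forecasting system): a 1-D heat equation stepped forward numerically. The silicon is nothing like an atmosphere, yet the numbers track the physical system closely enough to be useful.

```python
import numpy as np

n, dt, dx, alpha = 50, 0.1, 1.0, 0.4
temp = np.zeros(n)
temp[20:30] = 30.0                      # a warm "front" in the middle

for _ in range(100):
    lap = np.roll(temp, 1) - 2 * temp + np.roll(temp, -1)   # discrete Laplacian
    temp = temp + alpha * dt / dx**2 * lap                   # explicit Euler step

print(np.round(temp[::5], 2))           # the front has diffused outward
```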

We don't need to fully understand how human consciousness works in order to have AGI. We only need to quantify the function of human consciousness closely enough to practically mimic a human, that is, to develop a decent statistical understanding of the input-output relationship of human consciousness.
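
Something like this toy sketch (the black box and the numbers are invented for illustration): fit an input-output mapping purely from observed data, without ever opening the box that produced it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the system we can't open (brain, weather, whatever).
def black_box(stimulus):
    return np.sin(stimulus) + 0.5 * stimulus

# All we get to see are input/output pairs ("sensory intake" -> "behavior").
x = rng.uniform(-3, 3, size=200)
y = black_box(x) + rng.normal(scale=0.05, size=x.shape)

# Fit a generic function approximator (here: polynomial least squares).
coeffs = np.polyfit(x, y, deg=7)

# Predictions are useful even though the fit says nothing about *why*
# the black box behaves the way it does.
test = np.array([-2.0, 0.0, 1.5])
print(np.polyval(coeffs, test))
print(black_box(test))
```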

It is reasonable to predict that modern digital computers will never be able to truly simulate the full depth of human consciousness, because doing so will require hardware more similar to the brain.

It is not reasonable to say that they will never come close to accurately predicting and recreating the output of human consciousness. This is frankly a ludicrous claim. The brain is a deterministic physical system and there is nothing magical about its output. There is no inherent reason why human behavior cannot be modelled algorithmically using computers.

The hard problem of consciousness, the philosophical zombie, the Chinese room, etc. are all totally irrelevant to the practical/engineering problem of AGI. You shouldn't mistake the philosophical problem for the engineering problem. Whether an AGI running on a digital computer is truly capable of possessing qualia and subjective mental states is a problem for philosophers to deal with. Whether an AGI running on a digital computer can accurately emulate the output of the human brain to a precise degree is an altogether different question.

27

Mental-Swordfish7129 t1_j2847gt wrote

>The hard problem of consciousness, the philosophical zombie, the Chinese room, etc. are all totally irrelevant to the practical/engineering problem of AGI.

This is such an important point!

11

reconditedreams t1_j285a5h wrote

Yeah, this is my entire point. I often see people mistake the metaphysics question for the engineering question. It doesn't really matter whether we understand the metaphysics of human qualia, only that we understand the statistical relationship between human input data (sensory intake) and human output data (behavior/abilities).

It's no more necessary for ML engineers to understand the ontology of subjective experience than it is for a dog catching a ball in midair to have a formal mathematical understanding of Newton's laws of motion. The dog only needs to know how to jump toward the ball and get it in its mouth. How the calculus gets done isn't really important.
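
A deliberately dumb illustration of that (the setup is entirely made up): predict where a thrown ball lands purely from remembered throws, never touching the equations of motion at prediction time.

```python
import numpy as np

rng = np.random.default_rng(1)
G = 9.81

def true_landing(speed, angle):
    # The *world* obeys Newton; this function only generates the experiences.
    return speed**2 * np.sin(2 * angle) / G

# Remembered experiences: (launch speed, launch angle) -> where the ball landed.
speeds = rng.uniform(5, 20, size=2000)
angles = rng.uniform(0.2, 1.2, size=2000)
landings = true_landing(speeds, angles)

def predict_landing(speed, angle, k=15):
    """Average the landing spots of the k most similar remembered throws."""
    dist = ((speeds - speed) / 15)**2 + ((angles - angle) / 1.0)**2
    nearest = np.argsort(dist)[:k]
    return landings[nearest].mean()

print(predict_landing(12.0, 0.8))   # "jump about here"
print(true_landing(12.0, 0.8))      # where it actually lands
```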

Midjourney probably isn't capable of feeling sad, but it certainly seems to understand how the concept of "sadness" corresponds to pixels on a screen. Computers may or may not be capable of sentience in the same way humans are, but there's no reason they can't understand human creativity on a functional level.

11

Mental-Swordfish7129 t1_j28826y wrote

It's no wonder the ill-informed see creating AGI as such an unachievable task. They're unwittingly building far more sophistication into their expectations than is necessary. The mechanisms producing general intelligence simply can't be all that sophisticated relative to other evolved mechanisms, and the substrate of GI will carry as much dead weight as is typically found in other evolved structures. It likely won't require anywhere near 80 billion parallel processing units. I may have an inkling of it running on my computer with around 1800 units right now.

6

Mental-Swordfish7129 t1_j28g001 wrote

>There is no inherent reason why human behavior cannot be modelled algorithmically using computers.

I think we can make an even stronger claim... If we examine a "behavior", we see that it only counts as behavior because the relevant axons happen to terminate at an end effector like muscle tissue. If those same axons were transposed to terminate at other dendrites instead, we might label their causal influence an attentional change or a "shifting thought". So, extending your argument, there is no good reason to suspect we cannot model ANY neural process whatsoever. This is how causal influence proceeds in the model I have created. It's a stunning thing to observe.
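
Roughly, in code (a toy architecture invented for illustration, not the actual model mentioned above): the same unit activity counts as "behavior" when routed to an effector, and as an internal state change when routed back into the network.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
W = rng.normal(scale=0.3, size=(N, N))   # recurrent weights
state = rng.normal(size=N)

def unit_output(state):
    return np.tanh(W @ state)            # identical computation either way

def muscle(signal):
    return float(signal.sum() > 0)       # crude "end effector": contract or not

for t in range(5):
    out = unit_output(state)
    # Route A: axons terminate on muscle tissue -> we call it behavior.
    action = muscle(out)
    # Route B: the same axons terminate on other dendrites -> we call it
    # a shift in internal state ("attention", "a thought").
    state = out
    print(t, action, np.round(state[:3], 2))
```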

2

shmoculus t1_j290fym wrote

I think it's easier for people to understand AGI as a reasoning machine. Reasoning is not necessarily tied to consciousness or self-awareness (though some self-awareness helps an agent act in the world, so it will likely be learned implicitly).

1