
Mental-Swordfish7129 t1_j2847gt wrote

>The hard problem of consciousness, the philosophical zombie, the Chinese room, etc. are all totally irrelevant to the practical/engineering problem of AGI.

This is such an important point!


reconditedreams t1_j285a5h wrote

Yeah, this is my entire point. I often see people mistake the metaphysics question for the engineering question. It doesn't really matter whether we understand the metaphysics of human qualia, only that we understand the statistical relationship between human input data (sensory intake) and human output data (behavior/abilities).

It's no more necessary for ML engineers to understand the ontology of subjective experience than it is for a dog catching a ball in midair to have a formal mathematical understanding of Newton's laws of motion. The dog only needs to know how to jump toward the ball and get it into its mouth. How the calculus gets done isn't really important.
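To make that concrete with a toy example (not from anyone's actual system): below, a small scikit-learn regressor learns where a simulated ball will land from two early observations of its flight. Newton's laws appear only in the data generator, never in the model itself, which is exactly the dog's situation.

```python
# Toy version of the "dog catching a ball" point: fit input -> output
# statistically, with the physics used only to *simulate* training data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
g = 9.81

def simulate_throw(v, angle):
    """Return two early (x, y) observations of the ball and its landing x."""
    vx, vy = v * np.cos(angle), v * np.sin(angle)
    t_land = 2 * vy / g
    t1, t2 = 0.1, 0.2  # two early observation times
    obs = [vx * t1, vy * t1 - 0.5 * g * t1**2,
           vx * t2, vy * t2 - 0.5 * g * t2**2]
    return obs, vx * t_land

# Build a dataset of throws; the learner never sees g or the equations.
X, y = [], []
for _ in range(5000):
    obs, landing = simulate_throw(rng.uniform(5, 20), rng.uniform(0.3, 1.2))
    X.append(obs)
    y.append(landing)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(np.array(X), np.array(y))

obs, landing = simulate_throw(12.0, 0.8)
print(f"true landing: {landing:.2f} m, predicted: {model.predict([obs])[0]:.2f} m")
```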

Midjourney probably isn't capable of feeling sad, but it certainly seems to understand how the concept of "sadness" corresponds to pixels on a screen. Computers may or may not be capable of sentience in the same way humans are, but there's no reason they can't understand human creativity on a functional level.
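Midjourney's internals aren't public, but the general mechanism, aligning text concepts with image statistics, can be sketched with an open model like CLIP. In this sketch the checkpoint is the standard openai/clip-vit-base-patch32 release, and "generated.png" is just a placeholder for any local image:

```python
# Sketch: scoring how well the concept "sadness" corresponds to pixels,
# using an open text-image model (CLIP via Hugging Face transformers).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated.png")  # placeholder path
texts = ["a sad scene", "a joyful scene"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores

probs = logits.softmax(dim=-1).squeeze().tolist()
for text, p in zip(texts, probs):
    print(f"{text}: {p:.3f}")
```

A high score for "a sad scene" is exactly the kind of functional grasp at issue: no feeling required, just a learned correspondence between words and pixels.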


Mental-Swordfish7129 t1_j28826y wrote

It's no wonder the ill-informed see creating AGI as such an unachievable task: they're unwittingly loading their expectations with a great deal of unnecessary sophistication. The mechanisms producing general intelligence simply can't be all that sophisticated relative to other evolved mechanisms, and the substrate of general intelligence will carry as much dead weight as is typically found in other evolved structures. It likely won't require anywhere near 86 billion parallel processing units (the rough neuron count of a human brain). I may have an inkling of it running on my computer with around 1800 units right now.
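For a sense of what ~1800 units means computationally, and nothing more: the comment gives no architectural details, so the following is a generic toy, a sparse recurrent layer of 1800 binary units in NumPy, not a reconstruction of the commenter's system.

```python
# Purely illustrative: what "~1800 parallel units" looks like as raw compute.
# NOT the commenter's system (no details were given), just a toy recurrent
# layer of 1800 binary units with sparse random connectivity.
import numpy as np

rng = np.random.default_rng(0)
N = 1800

# Sparse random recurrent weights: each unit listens to ~2% of the others.
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.02)
state = (rng.random(N) < 0.05).astype(float)  # sparse initial activity

for step in range(10):
    drive = W @ state
    # Keep the ~5% most-driven units active (crude winner-take-all sparsity).
    threshold = np.partition(drive, -N // 20)[-N // 20]
    state = (drive >= threshold).astype(float)
    print(f"step {step}: {int(state.sum())} active units")
```

On a modern laptop this runs in milliseconds, which is the scale point: 1800 units is a trivially small amount of parallel compute next to a brain.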
