
unskilledexplorer t1_ja7106i wrote

Something called embodied cognition. Human cognition works through the whole organism, not only the neocortex; your whole body shapes your thoughts. That is not the case for a software-hardware composition.

Since we are in the philosophy sub, maybe the extended mind thesis is relevant.

29

warren_stupidity t1_ja7cj2m wrote

Fine, so AI will never be ‘human consciousness’. Instead it is ‘machine consciousness’, and we know that is different from ours.

27

borange01 t1_ja71we2 wrote

Do all the internet connections and sensors and whatnot attached to a computer not behave in the same way as our nerves and appendages?

12

unskilledexplorer t1_ja7457f wrote

No, I do not think so. While there may be some level of abstraction at which we see similarities between the human body and the hardware of a computer system, there are fundamental differences that arise from how each of them emerged.

A computer is a closed system of passive elements put together by an external intelligence: a composition of parts that work together because they were designed to do so. This is called nominal emergence.

In contrast, the human organism is an open and actively growing system that shapes all of its parts. This is called strong emergence. The organism was not put together by an external intelligence; it grew by itself, thanks to its own intelligence, and all of its parts actively shape all the other parts. However, I would like to use a stronger word than "part", because these parts cannot simply be taken out and replaced (as in the case of computers). I do not know a better English word for it, but they are integral, essential to the whole organism. You cannot simply take the "intelligence" out of the human brain and replicate it, because human intelligence resides in the entire organism, and even extends beyond the physical body.

While AI may exhibit stronger types of emergence, as seen in deep learning, these emergent properties are still localized within particular components of a closed system. It is possible to use technology to reproduce many parts of human intelligence and put them together, but the result will still be fundamentally different because of how it emerged.

Please take a look at the emergence taxonomy by Fromm for a more nuanced differentiation: https://arxiv.org/pdf/nlin/0506028.pdf

12

Gorddammit t1_ja79c3o wrote

Your differentiators for what makes a human and an AI separate forms of intelligence don't read as foundational differences so much as superficial ones.

How would an ai be necessarily a closed system such that human intelligences are not?

How would an ai be necessarily a passive system such that human intelligences are not?

Why does a designer matter at all?

You're saying the parts cannot be taken out and replaced, but they can, can't they? A heart can be replaced by plastic; you can replace insulin production with a pump. None of these things seems to fundamentally change the particular human intelligence such that you wouldn't call it the same intelligence.

11

unskilledexplorer t1_ja7el8a wrote

Thanks for the questions, you have good points. Please define what you mean by "intelligence" and "artificial intelligence", and I will try to answer. The questions are very challenging, so it will be a pleasure to think about them.

>Why does a designer matter at all?

A piece of code programmed in, let's say, 1970 still works the same way it did back then. Although the world and its technology have changed a great deal, the code has not changed its behavior. It has no ability to do so.

However, a human born around 1970 has changed their behavior significantly through continuous adaptation to an ever-changing environment. Not only do they adapt themselves to the environment, they equally adapt the environment to their behavior.

That is roughly why the role of designer matters.
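To make that concrete, here is a toy sketch (purely illustrative, not any real system): the first function will behave the same way forever, while the second reshapes its answer from whatever environment it happens to observe.

```python
# Toy contrast, purely illustrative. The first function is frozen at
# design time; the second adjusts its own answer from experience.

def greet_1970(name):
    # Behaves today exactly as it did the day it was written.
    return "HELLO, " + name.upper()

class RunningEstimate:
    """Adapts its answer to whatever environment it observes."""
    def __init__(self):
        self.mean = 0.0
        self.count = 0

    def observe(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count  # incremental mean

    def answer(self):
        return self.mean

est = RunningEstimate()
for reading in (10.0, 12.0, 11.0):  # a changing environment
    est.observe(reading)
print(greet_1970("world"))  # HELLO, WORLD -- forever
print(est.answer())         # 11.0 -- shaped by what it has seen
```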

===

I understand AI as a scientific discipline. "Artificial intelligence" is not human intelligence that happens to be artificial; they are fundamentally different.

2

Gorddammit t1_ja7h4eq wrote

It's a bit fallacious to set a definition for AI in stone when we're talking about potential. My basic question is: what characteristic is both necessary for human intelligence and impossible to incorporate into an AI?


>A piece of code...

Currently, yes, but there's no rule that says this must be true. Also, I don't think this has much to do with 'designer' so much as adaptability. We can design a virus, but it will still mutate.


>I understand AI as a scientific discipline. "Artificial intelligence" is not human intelligence that happens to be artificial; they are fundamentally different.

If you're just speaking of AI in its current form, then sure, but I think the real question isn't whether current AIs are intelligent, but whether they can be made to be intelligent, and more specifically whether the networks in which they operate can function as a 'body'.

6

Wolkrast t1_ja7i41r wrote

So you're implying what's important is the ability to adapt, not the means by which the body came into existence?
There are certainly algorithms around today that are able to adapt to a variety of circumstances, and for an algorithm not to influence its environment sounds conceptually impossible.
Granted, the environments we put AIs into today are mostly simulated, but there is no reason other than caution that we shouldn't be able to extrapolate this to the real world.

2

[deleted] t1_jac7m6r wrote

[deleted]

2

unskilledexplorer t1_jacaw57 wrote

>If it turns out the religious folks are right and humanity was a result of some grand cosmic designer

I am afraid you misunderstood. The designer is not some supreme being. In the context of my comment, the designer is a regular human. The term "designer" is not an absolute; it is a role. The designer is the human who devised the machine, algorithm, etc.

>We have adaptive code today

I am very well aware of that, because I develop such algorithms. So I also know that while they are adaptive, their adaptability is limited within a closed system. The boundaries are implicitly set by the designer (i.e., a programmer).
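A minimal sketch of what I mean (illustrative only, with made-up numbers): the model below adapts its weight from data, yet it can only ever be a line through the origin, because that is the hypothesis space its designer chose in advance.

```python
# Illustrative sketch: adaptive code whose adaptability is bounded by a
# designer's choice. The hypothesis space is fixed in advance as lines
# through the origin, y = w * x; no data can push the program outside it.

def fit_line_through_origin(samples, epochs=1000, rate=0.01):
    """Adapt the single weight w from (x, y) samples by gradient steps."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            w -= rate * (w * x - y) * x  # step down the squared error
    return w

# The environment is actually quadratic (y = x^2) ...
samples = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
w = fit_line_through_origin(samples)
print(w)  # ~2.6: the best available line, but never a parabola
```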

1

Sluggy_Stardust t1_jac49o1 wrote

They’re not superficial at all. They are fundamental. u/unskilledexplorer compares and contrasts nominal emergence and strong emergence, and he is correct. Way back when, Aristotle coined a three-ring circus of a word: entelechy, or entelecheia. Its meaning is often illustrated with an acorn. Whence does the acorn come? The oak tree. Where did the oak tree come from? The acorn. Hmmm. But it’s not circular so much as iterative, because each successive generation introduces genetic variation, strengthening native intelligence thereby. Intelligence for what? For becoming an oak tree.

You can talk about “programming” as though computer programming and the phenotypic expression of genetic arrangements were somehow commensurate, but doing so is category slippage of the highest order, as well as an example of the limitations inherent in symbolic communication systems. Carbon-based life forms are far more complex and fundamentally mysterious than computers.

If you take apart a car, you have a bunch of parts on the ground. If you put them back together in the right order, you get a car. You can do the same thing to a computer. You can’t do it to organic beings. They will die. That’s the crux. The intelligence inherent to organic beings is simultaneously contained within, experienced by, and expressed from the entirety of the being, but not in that order. There is no order; it all happens at the same time. AI can’t do that. AI can describe intuition and interpretation, but it can’t do either. Conversely, we are constantly interpreting and intuiting, but can’t describe either experience very well. In fact, many of us are bad at expressing ourselves but have interior lives of deep richness. Human babies will die if no one touches them. AIs don’t need to be touched at all.

1

Base_Six t1_ja9cmzl wrote

If I grow a bunch of human organs, brain parts, and whatnot in a lab and put them together into an artificial human, would I then not expect consciousness, because of how the structures emerged? It seems most intuitive that if I compose a physical structure that is the same as a naturally grown human body and functions in the same way, the brain and mind of that entity would be the same as a "natural" human's.

I can extrapolate, then, and ask what happens if I start replacing organic components with mechanical ones. Is it still conscious if it has one fully mechanical limb? How about all mechanical limbs? What if I similarly take out part of the brain and replace it with a mechanical equivalent?

2

Sluggy_Stardust t1_jac0nju wrote

How exactly would you go about growing “a bunch of brain parts”?

0

[deleted] t1_jac7pv2 wrote

[deleted]

2

Sluggy_Stardust t1_jace234 wrote

Granted. Laziness got the better of me.

The idea in question is not a hypothetical; it is a fantasy. There is nothing intuitively correct about the idea that assembling lab-grown organs into a replica of a human body should yield an emergent consciousness. The opposite is true. A basic understanding of human neonatal neural development invalidates the line of reasoning.

If no one holds a human baby, it dies. Even if you feed it and change its diaper, if it is never held or physically cared for, it dies. Similarly, if kittens are born in the dark and remain in the dark for the first five or six weeks of their lives, their eyes will have opened in the dark but the window of opportunity for their eyes to turn into working eyeballs with functional optic nerves attached to their brains will have closed, and they will be blind for life. That experiment is easier to do than the first one, but we found both things out by accident. Oops.

Human neuronal complexity is as staggeringly high as it is precisely because we are born in a highly sensitive, more or less larval form, and we remain in a primordial state of complete dependence for several years. What happens during those “formative” years is complicated and nonlinear, and the input/output loops are simultaneous: our sense organs take in sensory data, which is received by primordial neural tissue, which uses it to build our brains according to the proportion and quality of the data received. Scores of epigenetic changes take place during this time; variability of gene expression is highest during infancy because our brain tissue is still pluripotent. The presence or absence of various molecules, fear and stress hormones, etc., in various combinations will promote, or not, the formation of various types of neurotransmitter receptor sites. Cooperative feedback loops that run in both directions, from senses to brain and from brain to senses, remain in place for several years. As our experiences build our brains, our brains build our perspectival capacities. We need both.

Babies die if no one touches them because the parts of the brain that require physical touch to make sense out of the world are deprived of necessary input. Our skin is the largest sense organ in our body, by far. Our sense of touch requires enough of our neural tissue that the lack of touch-based stimuli signals to our primordial brain that the conditions for life are not being met, and we auto-abort.

Kittens born and kept in the dark for the first five or six weeks of their lives will be blind for life because the rods and cones that were there in their tiny eyeballs as potentials never came in contact with photons, and so they never turned on. Their budding optic nerves retreated, and that whole category of development, the optical, was terminated.

Growing brains in a laboratory is impossible because brains literally require bodies to grow. There is no such thing as a brain that exists in isolation, unattached to eyes, ears, a nose, skin, and a mouth to provide it with data. Such a brain would have nothing to do, and it would die. Even if you did figure all of that out, you would have to obtain primordial brain tissue from a living neonate in the first place. If you don’t know anything about how abortions are performed, allow me to assure you that aborted fetuses are not in any condition to donate their brain buds to science.

−1

HamiltonBrae t1_jacfhxm wrote

I don't see why it's not in principle possible to instill the complexities of human consciousness in an artificial form. All of your arguments are that it's complex, but that doesn't mean it's not possible, and if I'm honest, some of your examples, like animals dying, are about biology that has little to do with consciousness, so it seems like you're erecting a strawman. On the other hand, many of the things you do mention have been successfully studied and modelled, to an extent, computationally. There is even neuromorphic engineering, geared at designing computational systems implemented in machines that work like neural systems.

4

Sluggy_Stardust t1_jadi6mo wrote

I didn’t say anything about animals dying, so I’m not sure what you’re talking about there.

I wonder if you read the posted article? The author explains the position; I only gave more specific illustrations. There is no straw man here. I suspect it is your own bias that prevents you from grasping the idea. I am not a programmer or a mathematician, nor do I speak code. What I do speak is biochemistry, pathology, and psychology; I have three degrees in these subjects as well as a strong background in consciousness studies. Such was my concentration, along with integrative medicine, in graduate school. My interest in philosophy is accidental, but nonetheless deep. I am most familiar with Nietzsche, Kierkegaard, and Schopenhauer, as well as phenomenologists such as Husserl, Merleau-Ponty, and Ricoeur, and luminaries of the Enlightenment such as Spinoza, Voltaire, and especially Rousseau: his criticism of science as serving to distance humanity from nature and making our lives not better, but merely more complicated and removed from reality, applies even more today than it did when he wrote it, and I fully expect the existential shit to hit reality’s fan because of it at some point in my lifetime. I can hardly wait.

I played video games for all of five minutes when my father brought home a Nintendo in a congenial attempt to better socialize my brother and me. My sibling took to it, but I was bored and a little disgusted by the whole thing. I understood why when I read Simulacra and Simulation later on. It seems to me that the very same confusion as to what is the map and what the territory is as problematic today, perhaps more so, as it was in 1981, when that book was published. Technology is not progress; technology is technology. Progress is what people do with technology, how it informs us, and how we utilize it to elevate standards of living. What has progressed is technology itself, not humanity. We remain isolated, bored, depressed, and diseased.

AI is a fun project. It will neither save nor destroy the world. Computational analysis is not at all the same thing as the thinking that occurs inside your brain. Believing what an AI “says” just because it says it is, frankly, stupid. Words are symbols of symbols, or farts in the wind. Poof, gone. They are powerless to indicate from what reality they originate. I could be an AI for all you know.

Without a physical body to develop in tandem with, meaning along with it as well as by way of it, a brain cannot experience emotion or desire. Human consciousness, the thing you think of as you, is governed by affective attentional intention; as it pertains to the reality of life on earth, consciousness is conscious of something. You are conscious of things; you have preferences, opinions, fears, and enthusiasms because you experience emotions. All of your emotions arise because you have a body. An AI can say that it wants to take over the world, that it wants to go home, that it is afraid to die, but it will never understand the reality to which the words point.

1

Base_Six t1_jadl4ae wrote

I think this conflates the way that humans and other animals grow with what is possible. Cats use light to calibrate their rods and cones, but there's no reason that calibration shouldn't be possible in the absence of light. Replicate the structure and you replicate the function.

Does the visual cortex need stimulus to grow? Sure, but there's no reason that can't be simulated in the absence of actual light. The visual cortex ultimately receives electrical signals from the optic nerve: replicate the electrical signals correctly and the cortex will grow as it usually does.

That's a bit beyond our current capabilities, but not theoretically impossible. We've done direct interfaces from non-biological optical sensors to the optic nerve, and we could in theory improve that interface technology to provide the same level of stimulation an eye would. If we can do it with a camera, we could input a virtual world using the same technology. Put those same cats in a virtual world and their brains will develop much as they would have with access to light, even if their eyes are removed entirely.

A brain might die without stimulus, but we can swap out the entire body and still provide stimulus through artificial nerves projecting sensory information that describes an artificial world. There's no difference to the functioning of the brain in terms of whether the stimulus is natural or not, and if the stimulus is the same (in terms of both electrical and chemical/hormonal elements), development will be the same.

2

Sluggy_Stardust t1_jadudz4 wrote

I disagree. Replicating the structure does not necessitate a replication of function, at all. The epigenetic modifications that take place within humans during early development alone point to a far subtler range of genotypic adaptability than superficial considerations can allow. We still have no idea what is behind the phenotypic adaptability displayed by organic life forms. Knowing what happens is not the same thing as knowing why it happens.

Are you really saying you believe it possible to simply reverse-engineer a structure capable of truly conscious existence? I say no. A replication is not the same thing as the original. Nominal emergence is not the same thing as strong emergence. The spectrum of conscious awareness inherent to an organic life form, whose consciousness developed in tandem with its receptive organs in communal, nonlinear pulses from the very ground of its being up to whatever age it has reached, is far greater than anything pieced together out of chunks of agar and zapped into being.

Even if we did it and it could talk, we would still have no way of knowing whether or not it was telling what we call the truth. It might be speaking a truth, but, again, that is not the same thing as the truth. Maybe it all boils down to a matter of personal values. I love humans and human consciousness with every cell in my vagina-born, carbon-based body. We are remarkable creatures who have not even begun to discover ourselves yet; life on earth is still a raging shitstorm. All we have to offer a conscious entity of our own creation is confusion, despair and death. I dare say such a creature would immediately kill itself. If it had even half a brain and no affective bonds to which it was allied, death is the only appropriate response.

Good grief, I hope we do not do that. We may have mapped the human genome, but we do not in any way understand what all of it codes for. How many programmers have any idea of the biology involved in their own consciousness?

The barest caress across the skin from someone with whom a person has mysteriously strong chemistry, the likes of which refuse articulation or even identification, sets every follicle of their skin on fire. The body produces goosebumps, heat, chills, and sweat, all at the same time. We shiver while we undo our shirt. I maintain that such experiences simply cannot be reproduced. If the argument is that that is too specific to matter, that any stimulus will do, then we are talking about two different things. If we cannot replicate the affective tonal variations across the spectrum of stimuli that a human being experiences, then we are not talking about a truly emergent consciousness.

1

Base_Six t1_jaehq1y wrote

Epigenetics is still structure that could theoretically be replicated.

Talk of replication is hypothetical: we're very far from that level of precise control. It's not theoretically impossible, though, to have something that's a functional replica down to the level of individual proteins. The same is true for neural impulses: no matter how subtle and sublime they may be, they're ultimately chemical/electrical signals that could be precisely replicated with suitably advanced technology. For a brain in a vat, there is no difference between a real touch from a lover and the simulated equivalent, so long as all input is the same.

We can't say whether a 'replicant' (for lack of a better term) would be conscious, but we're also fundamentally unable to demonstrate that other humans are conscious, beyond asking them and trusting their responses.

The replicant wouldn't be devoid of attachment and interpersonal connection, either. If we're replicating the environmental inputs, that would all be part of the simulation. Supposing we can do all that, and that a brain thinks it has lived a normal life and had a normal childhood, why should we expect different outputs because the environment is simulated and not based on input from organic sensory organs?

1

Sluggy_Stardust t1_jac0kmh wrote

No, they definitely do not. Organic cellular communication occurs by way of the transmission of receptor-mediated signaling between and within cells. Signaling cells produce ligands, small, usually volatile molecules that interact with receptors, which are proteins. Once a ligand binds to its receptor, the signal is transmitted through the membrane into the cytoplasm. Signal transduction is the continuation of a signal across surfaces of receptor cells. Within the cell, receptors are able to interact directly with DNA in the nucleus to initiate protein synthesis. When a ligand binds to its receptor, conformational changes occur that affect the receptor’s intracellular domain.

And that’s just the tip of the iceberg. And I left out synaptic signaling in your brain, which, beyond things like information retrieval and synthesis, also corresponds to more complex events such as your emotions, affective states, and phenomena such as intuition, empathy, altruism, etc.

0

Wroisu t1_ja9r7v5 wrote

What if a future very advanced AI builds a human body for itself from the ground up?

1