Submitted by seethehappymoron t3_11d0voy in philosophy
Comments
Dbd3030 t1_ja6meec wrote
Haven’t we always said if we saw alien life, we wouldn’t recognize it?
unskilledexplorer t1_ja7106i wrote
There is something called embodied cognition: human cognition works through the whole organism, not only in the neocortex. Your whole body shapes your thoughts. That is not the case for a software-hardware composition.
Since we are in the philosophy sub, maybe the extended mind thesis is relevant.
warren_stupidity t1_ja7cj2m wrote
Fine, so AI will never be ‘human consciousness’. Instead it is ‘machine consciousness’, and we know that is different from ours.
unskilledexplorer t1_ja8ely9 wrote
That sounds good.
borange01 t1_ja71we2 wrote
Do all the internet connections and sensors and whatnot attached to a computer not behave in the same way as our nerves and appendages?
unskilledexplorer t1_ja7457f wrote
No, I do not think so. While there may be some level of abstraction at which we see similarities between the human body and the hardware of a computer system, there are fundamental differences that arise from how they emerged.
A computer is a closed system of passive elements that were put together by an external intelligence. It is a composition of passive parts that somehow work together because they were designed to do so. This is called nominal emergence.
In contrast, the human organism is an open and actively growing system that shapes all of its parts. This is called strong emergence. An organism was not put together by an external intelligence; it grew by itself thanks to its own intelligence. All of its parts actively shape all the other parts. However, I would like to use a stronger word than "part", because these parts cannot simply be taken out and replaced (as in the case of computers). Sorry, I do not know a better English word for it, but they are integral or essential to the whole organism. You cannot simply take the "intelligence" out of a human brain and replicate it, because human intelligence resides in the entire organism, which goes even beyond the physical body.
While AI may exhibit stronger types of emergence, such as is seen in deep learning, these emergent properties are still local within the particular components of a closed system. It is possible to use technology to reproduce many parts of human intelligence and put them together, but they will still be fundamentally different due to the principles of how they emerged.
Please take a look at the emergence taxonomy by Fromm for a more nuanced differentiation: https://arxiv.org/pdf/nlin/0506028.pdf
Gorddammit t1_ja79c3o wrote
Your differentiators for what makes a human and an AI separate forms of intelligence don't read as foundational differences so much as superficial ones.
How would an ai be necessarily a closed system such that human intelligences are not?
How would an ai be necessarily a passive system such that human intelligences are not?
Why does a designer matter at all?
You're saying the parts cannot be taken out and replaced, but they can, can't they? A heart can be replaced by plastic; you can replace insulin production with a pump. None of these things seems to fundamentally change the particular human intelligence such that you wouldn't call it the same intelligence.
unskilledexplorer t1_ja7el8a wrote
Thanks for the questions, you have good points. Please define what you mean by "intelligence" and "artificial intelligence", and I will try to answer the questions. They are very challenging, so it will be a pleasure to think about them.
>Why does a designer matter at all?
A piece of code that was programmed in, let's say, 1970 still works the same way as it did back then. Although the world and the technology have changed very much, the code did not change its behavior. It does not have the ability to do so.
However, a human born around 1970 has changed their behavior significantly through continuous adaptation to an ever-changing environment. Not only do they adapt themselves to the environment, they equally adapt the environment to their behavior.
That is roughly why the role of designer matters.
===
I understand AI as a scientific discipline. "Artificial intelligence" is not the same thing as human intelligence merely made artificial. They are fundamentally different.
Gorddammit t1_ja7h4eq wrote
It's a bit fallacious to set a definition of AI in stone when we're talking about potential. My basic question is: what characteristic is both necessary for human intelligence and impossible to incorporate into an AI?
>the piece of code...
Currently, yes, but there's no rule that says this must stay true. Also, I don't think this has much to do with the 'designer' so much as with adaptability. We can design a virus, but it will still mutate.
>I understand AI as a scientific discipline. "Artificial intelligence" is not the same as human intelligence but artificial. They are fundamentally different.
If you're just speaking of AI in its current form, then sure, but I think the real question isn't whether current AIs are intelligent, but whether they can be made to be intelligent, and more specifically whether the networks in which they operate can function as a 'body'.
Wolkrast t1_ja7i41r wrote
So you're implying what's important is the ability to adapt, not the means by which the body came into existence?
There are certainly algorithms around today that are able to adapt to a variety of circumstances, and not influencing one's environment at all sounds conceptually impossible.
Granted, the environments we put AIs into today are mostly simulated, but there is no reason other than caution we shouldn't be able to extrapolate this into the real world.
[deleted] t1_jac7m6r wrote
[deleted]
unskilledexplorer t1_jacaw57 wrote
>If it turns out the religious folks are right and humanity was a result of some grand cosmic designer
I am afraid you misunderstood. The designer is not some supreme being. In the context of my comment, the designer is a regular human. The term "designer" is not an absolute, it is a role. The designer is a human who devised a machine, algorithm, etc.
>We have adaptive code today
I am very well aware of that, because I develop such algorithms. So I also know that while they are adaptive, their adaptability is limited within a closed system. The boundaries are implicitly set by the designer (i.e., a programmer).
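To illustrate what I mean by designer-set boundaries, here is a minimal toy sketch in Python, not any real production system; the model family, data, and numbers are all invented:

```python
# A toy "adaptive" learner: it adjusts its weights from data, but the
# model family (a line), the objective (squared error), and the learning
# rate are all frozen in advance by the designer.
def train(data, steps=1000, lr=0.01):
    w, b = 0.0, 0.0                 # the whole hypothesis space: y = w*x + b
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y   # designer-chosen objective
            w -= lr * err * x       # adaptation happens only inside
            b -= lr * err           # this pre-built loop
    return w, b

# It "adapts", but it can never decide to stop being a line-fitter,
# rewrite its own objective, or grow new parts.
print(train([(1, 2), (2, 4), (3, 6)]))  # approaches (2.0, 0.0)
```

However clever the fitted line gets, the space of things this program can become was closed before it ever saw data. That is the closed system I mean.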
[deleted] t1_jacbuxf wrote
[deleted]
Sluggy_Stardust t1_jac49o1 wrote
They’re not superficial at all. They are fundamental. u/unskilledexplorer compares and contrasts nominal emergence and strong emergence, and he is correct. Way back when, Aristotle coined a three-ring circus of a word: entelechy, or entelecheia. Its meaning is often illustrated with an acorn. From whence does the acorn come? The oak tree. Where did the oak tree come from? The acorn. Hmmm. But it’s not circular so much as it is iterative, because each successive generation introduces genetic variation, strengthening native intelligence thereby. Intelligence for what? For becoming an oak tree.
You can talk about “programming” as though computer programming and the phenotypic expression of genetic arrangements are somehow commensurate, but doing so is actually both category slippage of the highest order as well as an example of the limitation inhered by symbolic communication systems. Carbon-based life forms are far more complex and fundamentally mysterious than computers.
If you take apart a car, you have a bunch of parts on the ground. If you put them back together in the right order, you get a car. You can do the same thing to a computer. You can’t do it to organic beings; they will die. That’s the crux. The intelligence inherent to organic beings is simultaneously contained within, experienced by, and expressed from the entirety of the being, but not in that order. There is no order; it all happens at the same time. AI can’t do that. AI can describe intuition and interpretation, but it can’t do either. Conversely, we are constantly interpreting and intuiting, but can’t describe either experience very well. In fact, many of us are bad at expressing ourselves but have interior lives of deep richness. Human babies will die if no one touches them. AI doesn’t need to be touched at all.
[deleted] t1_jacancm wrote
[deleted]
Base_Six t1_ja9cmzl wrote
If I grow a bunch of human organs, brain parts and whatnot in a lab and put them together into an artificial human, would I then not expect consciousness because of how the structures emerged? It seems most intuitive that, if I compose a physical structure that is the same as a naturally grown human body and functions in the same way, that the brain and mind of that entity would be the same as a "natural" human.
I can extrapolate, then, and ask what happens if I start replacing organic components with mechanical ones. Is it still conscious if it has one fully mechanical limb? How about all mechanical limbs? What if I similarly take out part of the brain and replace it with a mechanical equivalent?
Sluggy_Stardust t1_jac0nju wrote
How exactly would you go about growing “a bunch of brain parts”?
[deleted] t1_jac7pv2 wrote
[deleted]
Sluggy_Stardust t1_jace234 wrote
Granted. Laziness got the better of me.
The idea in question is not a hypothetical; it is a fantasy. There is nothing intuitively correct about the idea that assembling lab grown organs into a replica of a human body should yield an emergent consciousness. The opposite is true. A basic understanding of human neonatal neural development invalidates the line of reasoning.
If no one holds a human baby, it dies. Even if you feed it and change its diaper, if it is never held or physically cared for, it dies. Similarly, if kittens are born in the dark and remain in the dark for the first five or six weeks of their lives, their eyes will have opened in the dark but the window of opportunity for their eyes to turn into working eyeballs with functional optic nerves attached to their brains will have closed, and they will be blind for life. That experiment is easier to do than the first one, but we found both things out by accident. Oops.
Human neuronal complexity is as staggeringly high as it is precisely because we are born in a highly sensitive, more or less larval form, and we remain in a primordial state of complete dependence for several years. What is happening during those “formative” years is complicated and nonlinear; the input/output loops are simultaneous; the elements involved are that our sense organs take in sensory data that is received by primordial neural tissue which uses it to build our brains according to the proportion and quality of the data received. Scores of epigenetic changes take place during this time; variability of gene expression is highest during infancy because our brain tissue is still pluripotent. The presence or absence of various molecules, fear and stress hormones, etc, in various combinations will promote, or not, the formation of various types of neurotransmitter receptor sites. Cooperative feedback loops that function in both directions, from senses to brain and from brain to senses, remain in place for several years. As our experiences build our brains, our brains build our perspectival capacities. We need both.
Babies die if no one touches them because the parts of the brain that require physical touch to make sense out of the world are deprived of necessary input. Our skin is the largest sense organ in our body, by far. Our sense of touch requires enough of our neural tissue that the lack of touch-based stimuli signals to our primordial brain that the conditions for life are not being met, and we auto-abort.
Kittens born and kept in the dark for the first five or six weeks of their lives will be blind for life because the rods and cones that were there in their tiny eyeballs as potentials never came in contact with photons, and so they never turned on. Their budding optic nerves retreated, and that entire category of development (optical) was terminated.
Growing brains in a laboratory is impossible because brains literally require bodies to grow. There is no such thing as a brain that exists in isolation, unattached to eyes, ears, a nose, skin and a mouth to provide it with data. Such a brain would have nothing to do, and it would die. Even if you did figure all of that out, you would have to obtain primordial brain tissue from a living neonate in the first place. If you don’t know anything about how abortions are performed, allow me to assure you that aborted fetuses are not in any condition to donate their brain buds to science.
HamiltonBrae t1_jacfhxm wrote
I don't see why it's not in principle possible to instill the complexities of human consciousness in an artificial form. All of your arguments are that it's complex, but that doesn't show it's not possible, and, if I'm honest, some of your examples (like animals dying) are about biology that has little to do with consciousness, so it seems like you're erecting a strawman. On the other hand, many of the things you do mention have been successfully studied and modelled, to an extent, computationally. There is even neuromorphic engineering, geared at designing computational systems, implemented in machines, that work like neural systems.
[deleted] t1_jact9n3 wrote
[deleted]
Sluggy_Stardust t1_jadi6mo wrote
I didn’t say anything about animals dying, so I’m not sure what you’re talking about there.
I wonder if you read the posted article? The author explains the position, I only gave more specific illustrations. There is no straw man here. I suspect it is your own bias that prevents you from grasping the idea. I am not a programmer or a mathematician, nor do I speak code. What I do speak is biochemistry, pathology and psychology; I have three degrees in these subjects as well as a strong background in consciousness studies. Such was my concentration, along with integrative medicine, in graduate school. My interest in philosophy is accidental, but nonetheless deep. I am most familiar with Nietzsche, Kierkegaard and Schopenhauer, as well as phenomenologists such as Husserl, Merleau-Ponty, and Ricoeur, and luminaries of the Enlightenment such as Spinoza, Voltaire and especially Rousseau: his criticism of science as serving to distance humanity from nature and making our lives, not better, but merely more complicated and removed from reality applies even more today than it did when he wrote it, and I fully expect the existential shit to hit reality’s fan because of it at some point in my lifetime. I can hardly wait.
I played video games for all of five minutes when my father brought home a Nintendo in a congenial attempt to better socialize my brother and me. My sibling took to it, but I was bored and a little disgusted by the whole thing. I understood why when I read Simulacra and Simulation later on. It seems to me that the very same confusion as to what is the map and what is the territory is as problematic today as it was in 1981, when that book was published, perhaps more so. Technology is not progress; technology is technology. Progress is what people do with technology, how it informs us, and how we utilize it to elevate standards of living. What has progressed is technology itself, not humanity. We remain isolated, bored, depressed and diseased.
AI is a fun project. It will neither save nor destroy the world. Computational analysis is not at all the same thing as the thinking that occurs inside your brain. Believing what an AI “says” just because it says it is, frankly, stupid. Words are symbols of symbols, or farts in the wind. Poof, gone. They are powerless to indicate from what reality they originate. I could be an AI for all you know.
Without a physical body inside of which to develop in tandem (meaning along with, as well as by way of, it), a brain cannot experience emotion or desire. Human consciousness, the thing you think of as you, is governed by affective attentional intention; as it pertains to the reality of life on earth, consciousness is conscious of something. You are conscious of things; you have preferences, opinions, fears and enthusiasms because you experience emotions. All of your emotions arise because you have a body. AI can say that it wants to take over the world, that it wants to go home, that it is afraid to die, but it will never understand the reality to which the words point.
Base_Six t1_jadl4ae wrote
I think this conflates the way that humans and other animals grow with what is possible. Cats use light to calibrate their rods and cones, but there's no reason that calibration shouldn't be possible in the absence of light. Replicate the structure and you replicate the function.
Does the visual cortex need stimulus to grow? Sure, but there's no reason that can't be simulated in the absence of actual light. The visual cortex ultimately receives electrical signals from the optic nerve: replicate the electrical signals correctly and the cortex will grow as it usually does.
That's a bit beyond our current capabilities, but not theoretically impossible. We've done direct interfaces from non-biological optical sensors to the optic nerve, and we could in theory improve that interface technology to provide the same level of stimulation an eye would. If we can do it with a camera, we could input a virtual world using the same technology. Put those same cats in a virtual world and their brains will develop in a similar manner to if they had access to light, even if their eyes are removed entirely.
A brain might die without stimulus, but we can swap out the entire body and still provide stimulus through artificial nerves projecting sensory information that describes an artificial world. There's no difference to the functioning of the brain in terms of whether the stimulus is natural or not, and if the stimulus is the same (in terms of both electrical and chemical/hormonal elements), development will be the same.
Sluggy_Stardust t1_jadudz4 wrote
I disagree. Replicating the structure does not necessitate a replication of function, at all. The epigenetic modifications that take place within humans during early development alone point to a far subtler range of genotypic adaptability than superficial considerations can allow. We still have no idea what is behind the phenotypic adaptability displayed by organic life forms. Knowing what happens is not the same thing as knowing why it happens.
Are you really saying you believe it possible to simply reverse-engineer a structure capable of a truly conscious existence? I say no. A replication is not the same thing as the original. Nominal emergence is not the same thing as strong emergence. The spectrum of conscious awareness inhered by an organic life form, whose consciousness developed in tandem with its receptive organs in communal, nonlinear pulses from the very ground of its being up to whatever age it is, is in theory far greater than anything pieced together out of chunks of agar and zapped into being.
Even if we did it and it could talk, we would still have no way of knowing whether or not it was telling what we call the truth. It might be speaking a truth, but, again, that is not the same thing as the truth. Maybe it all boils down to a matter of personal values. I love humans and human consciousness with every cell in my vagina-born, carbon-based body. We are remarkable creatures who have not even begun to discover ourselves yet; life on earth is still a raging shitstorm. All we have to offer a conscious entity of our own creation is confusion, despair and death. I dare say such a creature would immediately kill itself. If it had even half a brain and no affective bonds to which it was allied, death is the only appropriate response.
Good grief, I hope we do not do that. We may have mapped the human genome, but we do not in any way understand what all of it codes for. How many programmers have any idea of the biology involved in their own consciousness?
The barest caress across the skin from someone with whom a person has mysteriously strong chemistry, the likes of which refuses articulation or even identification, sets every follicle of their skin on fire. The body produces goosebumps, heat, chills and sweat, all at the same time. We shiver while we undo our shirt. I maintain that such experiences simply cannot be reproduced. If the argument is that that is too specific to matter, that any stimulus will do, then we are talking about two different things. If we cannot replicate the affective tonal variations across the spectrum of stimuli that a human being experiences, then we are not talking about a truly emergent consciousness.
Base_Six t1_jaehq1y wrote
Epigenetics is still structure that could theoretically be replicated.
Talk of replication is hypothetical: we're very far from that level of precise control. It's not theoretically impossible, though, to have something that's a functional replica down to the level of individual proteins. The same is true for neural impulses: no matter how subtle and sublime they may be, they're ultimately chemical/electrical signals that could be precisely replicated with suitably advanced technology. For a brain in a vat, there is no difference between a real touch from a lover and the simulated equivalent, so long as all input is the same.
We can't say whether a 'replicant' (for lack of a better term) would be conscious, but we're also fundamentally unable to demonstrate that other humans are conscious, beyond asking them and trusting their responses.
The replicant wouldn't be devoid of attachment and interpersonal connection, either. If we're replicating the environmental inputs, that would all be part of the simulation. Supposing we can do all that, and that a brain thinks it has lived a normal life and had a normal childhood, why should we expect different outputs because the environment is simulated and not based on input from organic sensory organs?
Sluggy_Stardust t1_jac0kmh wrote
No, they definitely do not. Organic cellular communication occurs by way of the transmission of receptor-mediated signaling between and within cells. Signaling cells produce ligands, small, usually volatile molecules that interact with receptors, which are proteins. Once a ligand binds to its receptor, the signal is transmitted through the membrane into the cytoplasm. Signal transduction is the continuation of a signal across surfaces of receptor cells. Within the cell, receptors are able to interact directly with DNA in the nucleus to initiate protein synthesis. When a ligand binds to its receptor, conformational changes occur that affect the receptor’s intracellular domain.
And that’s just the tip of the iceberg. And I left out synaptic signaling in your brain, which beyond things like information retrieval and synthesis also corresponds to more complex events such as your emotions, affective states and phenomena such as intuition, empathy, altruism, etc.
[deleted] t1_ja7gtgz wrote
[deleted]
Wroisu t1_ja9r7v5 wrote
What if a future very advanced AI builds a human body for itself from the ground up?
BuzzyShizzle t1_ja7fxbg wrote
The hormonal and chemical influence on our needs and desires at any given moment is certainly what makes "choices" human. Otherwise emotion wouldn't have had any reason to evolve. You'd just make the logically optimal decision in every situation and never "feel" anything.
RoyBratty t1_ja8bdhu wrote
What makes human choice distinct is our ability, and our expectation, to temper our biochemical impulses through a rational filter. The law and social norms are external influences that we as individuals internalize and make decisions by. Hormones are present throughout the animal and plant kingdoms.
Maleficent-Elk-3298 t1_ja7b030 wrote
Yea, like what constitutes a body? You could certainly say that the computer infrastructure, both hard and soft, fulfills all the same roles as a human body. I guess the question would be what constitutes the body of an AI. The code behind it is the brain, but is the body the server hosting it, its particular slice of the digital sphere, or simply the physical machine(s) it has access to? Or is it both?
Foreveraloonywolf666 t1_ja87oxa wrote
Hormones and chemicals
sntstvn2 t1_ja8mwth wrote
Exactly - consciousness, to me, simply has everything to do with self-preservation. The platform (a body as we humans define it) is just one variable. A machine could, arguably, act in a way little different from, in fact possibly better than, a human might. I suppose we may see, and possibly sooner than we might like.
The desire to remain active, involved and 'alive' - that's the point of most anything. I suspect that the detection of threat is a big variable in programming AI to be self-preserving. Once a system can effectively identify and successfully deal with any/every threat to its existence, I guess watch the fuck out.
[deleted] t1_ja8t25w wrote
[removed]
[deleted] t1_ja67vgp wrote
[removed]
[deleted] t1_ja6958o wrote
[removed]
BernardJOrtcutt t1_ja6kvhf wrote
Your comment was removed for violating the following rule:
>Argue your Position
>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
MrGurabo t1_ja6oqsc wrote
What is a "body"?
paxxx17 t1_ja9lnwy wrote
It's our reminder here that we are not alone
livingthedream82 t1_ja9wkul wrote
All this pain is an illusion
Xavion251 t1_jabqb3z wrote
Nah, I feel/experience pain - therefore I know it exists (at the very least, as a subjective experience). It cannot be an illusion any more than my consciousness can be.
livingthedream82 t1_jabr5pu wrote
This was just the next line in a Tool song lol
Xavion251 t1_jabywfa wrote
Oh, lol. Never heard it.
livingthedream82 t1_jad8p0d wrote
Oh no sweat, here I'll link it : https://youtu.be/-_nQhGR0K8M
Kinda weird video from 90s-era alternative rock. They had a handful of bangers back in the day
Actual meat of the song starts around 4 minutes in
eucIib t1_ja6m506 wrote
I disagree with the claim that consciousness must necessarily be accompanied by a body. I think the author is making too strong of a claim.
The phantom limb phenomenon is the state of being conscious of a part of the body that one literally does not have. I am making the claim that it is possible to be conscious of parts of the body that no longer exist.
If the author’s claim is true, why doesn’t consciousness of that part of the body completely subside when the limb is lost?
StopOk2967 t1_ja6y2wz wrote
I intuitively think that your claim is true but I think your argument doesn't work.
The author would say that "for everything conscious there is a body underlying it". In most cases this is (most of) our brain. But that isn't the same as saying "for every body part, there is a consciousness based on it". Having our legs removed doesn't change our ability to have what the author calls a "landscape of joy" or to process feelings.
What you describe is an error in perception of something. The person is still conscious though, as you describe yourself. Conscious of something that isn't there, but nevertheless conscious.
eucIib t1_ja7wtd3 wrote
If a person can be conscious of a foot that doesn’t exist, why not a leg? If they can be conscious of a leg that doesn’t exist, why not the entire lower body? If they can be conscious of a lower body that doesn’t exist, why not the torso as well?
See what I’m saying? If you follow this to its logical conclusion, you will just have a brain that is conscious of a body it does not have. Now, obviously a human wouldn’t survive without its organs, but how can we assume that this isn’t possible for AI? I’m not saying I’m right; I’m more so making the claim that the author is being too confident in his argument that AI needs a body to be conscious.
I also find the author’s argument for AI not having feelings more compelling than the one for AI not having consciousness, though for some reason he seems to lump them together as if they’re one and the same.
Ghostyfied t1_ja9xbvm wrote
Your argument might be right; we could only tell once someone experienced this. And because no one has experienced what it is like to be conscious completely without a body (or at the very least, we do not know of such an event), we cannot say with certainty that it is possible.
And I think that means that without further knowledge, both your argument and the author's concerning this specific situation could be correct here.
Wolfe_Thorne t1_ja73x4t wrote
I think I read somewhere that phantom limb syndrome is caused by the neural connections in the brain that were devoted to motor functions and sensations still remaining even when the limb is gone.
It does raise the question, if such connections were somehow exactly replicated, would an AI be conscious of a limb? What about an entire body? If all sensation could be replicated, could we make a “human” AI entirely in a digital environment? After all, our human brains are just meat computers interpreting electrical signals from our bodies, I can’t imagine they’re impossible to reproduce artificially.
fatty2cent t1_ja9ocqt wrote
I think the problem is that there once was a mapped body part, and then it was removed. Are there phantom limbs in people who never had said limb? Can you have phantom limbs that exceed the normal limb arrangement of a human body? Likely not.
Lock-out t1_ja7x7r8 wrote
This is why philosophy is useless on its own; just people claiming things without any evidence or experience.
Jordan_Bear t1_ja6ywir wrote
I'm far from an expert myself, merely a slightly obsessed enthusiast, but this article seems to be misunderstanding how at least a few principles of AI development work. My impression is that human/animal cognition (which is granted to be conscious) is being compared to an understanding of artificial cognition as an increasingly complex series of if/then logic gates that can eventually become difficult to distinguish from animal cognition, but will (rightly) always be considered a synthetic imitation. This is not the case, and with a more accurate understanding of modern AI, a very different set of questions needs to be raised.
For example, a key section argues that animal consciousness has a history of memories upon which it bases its decisions. It is our history that makes us conscious, our ability to perceive and learn that elevates us. This is, as I understand it, exactly how the neural networks used by AIs work: a 'history' is created for the AI, with each decision it makes storing (remembering) the consequence of that decision and assigning a value to how effective it was (how it made it feel); this is used to learn both specific tasks and, increasingly, a generalised understanding of topics.
To give an example of how NN AI models animal consciousness far more closely than the article seems to suggest, I'll break down the first steps of a real NN AI built to play Mario. The AI moves forward in the game, which moves it closer to its goal. Going forward is good. Soon enough, it hits a goomba and dies. Going forward into goombas is drastically not good. It outweighs the value of going forward. Next time, it will go forward until it perceives a goomba. It may try to go backwards, stop, duck, all of which halt or reverse progress (bad), until it tries to jump. It passes the goomba and continues to go forward. Jumping over goombas is good. The developers of Mario spent literal months obsessing over the first moments of their game to be a perfect way to train a child mind, without language, as to the rules of its game. Within seconds they ensured you encountered certain death until you learn to jump over goombas, and placed the 'power mushroom' in such a way that you are likely to accidentally trigger it when evading the goomba. That way, if a child did not have the curiosity to touch the mystery box, they would likely do so by accident. They then placed that first green pipe (I know you can see it!) so it would block the power up's movement and bounce it into the player, so even if a child mistook it as something to avoid they would likely hit it and see that it was good, remembering a positive association between both mystery boxes and power up mushrooms for next time.
It is no coincidence that these design techniques, built for children, work completely naturally with a well-built neural network AI. You do not need to add special programming to the game to translate what is happening into something a 'computer' can understand: you set up a neural network, give it the controls of the game, set up a positive association with 'forwards' and negative associations with 'backwards' and 'death', and enable it to remember the entities of the game world. It will learn to complete Mario by building memories of every action and how each action made it feel.
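To make that mechanic concrete, here is a minimal tabular Q-learning sketch in Python. I'm not claiming the actual Mario agent worked exactly this way; the states, rewards, and numbers are invented, but the 'store how it felt, act on it later' loop is the standard one:

```python
import random

# Tabular Q-learning: the agent's "memories" are Q[state][action],
# a running estimate of how good each action felt in that state.
actions = ["forward", "back", "jump", "duck"]
Q = {}  # state -> {action: learned value}

def choose(state, eps=0.1):
    q = Q.setdefault(state, {a: 0.0 for a in actions})
    if random.random() < eps:             # occasional curiosity
        return random.choice(actions)
    return max(q, key=q.get)              # otherwise act on memory

def remember(state, action, reward, next_state, lr=0.5, gamma=0.9):
    q = Q.setdefault(state, {a: 0.0 for a in actions})
    nxt = Q.setdefault(next_state, {a: 0.0 for a in actions})
    # "how it made me feel" = reward now, plus discounted best future
    q[action] += lr * (reward + gamma * max(nxt.values()) - q[action])

# Toy experiences, in the spirit of the goomba lesson:
remember("goomba_ahead", "forward", -100, "dead")      # walking in: very bad
remember("goomba_ahead", "jump", +10, "past_goomba")   # jumping over: good
print(choose("goomba_ahead", eps=0))                   # -> "jump"
```

The table is literally a record of consequences and how they were valued, which is the 'history of memories' the article treats as distinctive of animal minds.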
We can all agree that this Mario-playing AI is not conscious. Perhaps the reason, given the title, is that this AI lacks significant enough memories: we don't like it when Mario dies because he makes a sad face, he falls off screen, a defeated tune plays. It reminds us of injury, death, failure, things that our organic machinery is wired to dislike, and upon which we have years of experience that colour our understanding of what is happening and what we want. Well, if any of that was helpful, perhaps we build a Mario-playing AI that at first knows nothing but innate drives towards sustaining itself. Over years, we could teach an AI the importance of people by having the AI be nurtured and cared for, give the AI a sense of 'hunger' or 'discomfort' which people alleviate for it. Read it stories and show it cartoons that give it positive and negative associations with this or that, play it 'happy music' and 'sad music', show it 'happy faces' and 'sad faces', and then finally, after years, sit it down to play Mario, and have it naturally record its first contact with a goomba and subsequent death as, by this point, 'intrinsically' bad. The only reason we didn't do this is because it's a really ineffective way of building an AI that can complete Mario.
Maybe the argument is that actual neurons are required for consciousness. That we have to feel those electrical signals, not just record them. Well, it's been a long time since I checked on the progress of this line of study, but years ago they had mapped the neural structure of a particularly simple kind of worm exactly and replicated it digitally. They then gave this worm (and again, this is an exact replica of an organic creature) a mechanical body, with impact, heat and light sensors. The mechanical worm began to move around the room, reacting away from sources of light, turning when impacting walls, seeking nutrition it had no way of finding or consuming. If I remember correctly, the worm's first body was built from Lego.
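(That project sounds like OpenWorm's digital C. elegans, though I'd have to double-check the Lego detail.) The mechanic, at least, is easy to sketch: the wiring is fixed data, and behaviour is just propagation through it. A crude miniature in Python; every neuron name and weight below is invented for illustration:

```python
# A toy "connectome" agent: fixed wiring, no learning, yet it reacts
# to light and collisions like the worm described above.
connectome = {
    "light_sensor":  {"interneuron_A": 1.0},
    "touch_sensor":  {"interneuron_B": 1.0},
    "interneuron_A": {"motor_left": -0.8, "motor_right": 0.8},  # steer away from light
    "interneuron_B": {"motor_left": 0.9,  "motor_right": -0.9}, # turn after impact
}

def step(stimuli, hops=2):
    # propagate sensor activation through the fixed wiring
    activation, frontier = dict(stimuli), dict(stimuli)
    for _ in range(hops):
        nxt = {}
        for src, amount in frontier.items():
            for dst, w in connectome.get(src, {}).items():
                nxt[dst] = nxt.get(dst, 0.0) + amount * w
                activation[dst] = activation.get(dst, 0.0) + amount * w
        frontier = nxt
    return {k: v for k, v in activation.items() if k.startswith("motor")}

print(step({"light_sensor": 1.0}))   # motors steer away from the light
print(step({"touch_sensor": 1.0}))   # motors turn after hitting a wall
```

The real connectome is vastly richer than this toy, but the principle is the same: reactive behaviour falls out of fixed wiring, with no task-specific programming.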
Is that combination enough to grant consciousness? If we have an AI that learns in the way that a human child does, and we built an actual neural network that physically exists and mirrors exactly the electrical signals that flow through an animal, sending them to exactly the parts of a precisely replicated brain, is it granted consciousness? What bits do we have to strip away from that until it loses its right to consciousness? What if the exact physical replication of an animal brain is digitised, stored on an SSD? What if the network of neurons is emulated too?
Again, I'm no expert in the topic, only an enthusiastic follower that has grown up wondering what the difference between myself and the artificial intelligence I grew up around truly was. That in mind, it seems clear to me that the gap between today's artificial intelligence and consciousness is wide, but it need not be bridged by 'cheating' and copying exactly the 3d structure of the brain. We don't know how electrical signals are processed there to create consciousness, but we needn't demand that mystery of digital intelligence. Us being able to log and report exactly the reason why a digital intelligence reaches a decision doesn't make it artificial, and if tomorrow we understood exactly how incoming electrical signals to the brain would be processed in relation to the data stored there, we wouldn't stop being real. The difference between us lies somewhere else: and until we can map that gulf exactly, we should probably continue to heed the unsettling concern that we might blindly cross it one day without realising.
Wolkrast t1_ja7o3mz wrote
>The reason why AI can’t love anything or yearn to be free is because it has no body. It has no source of feeling states or emotions, and these somatic feelings are essential for animal consciousness, decision-making, understanding, and creativity. Without feelings of pleasure and pain via the body, we don’t have any preferences.
The article makes a number of very strong claims here. At the very least we know that AI is capable of decision-making, in fact that is the only thing it is designed to do.
The heart of the argument seems to be less about a body (after all, a robot with an onboard AI would fulfill that definition, which is clearly not what the author is talking about) than about the difference between decisions motivated by logic and decisions motivated by feelings. This raises the question of how, for example, pain avoidance differs from optimizing a value function to avoid things that deduct from its score. From outside, there is no way to observe that difference, because all we can observe is the behavior, not the decision-making process.
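For concreteness, a toy sketch of the value-function side, with every action and number invented: the agent below "avoids pain" purely by maximizing a score, and from the outside its behavior reads as aversion.

```python
# An agent that "avoids pain" purely by score: damage carries a large
# negative weight in its value function, so the chosen behavior looks
# exactly like aversion.
outcomes = {                      # action -> (progress gained, damage taken)
    "touch_hot_plate": (5, 100),
    "go_around":       (3, 0),
    "stand_still":     (0, 0),
}

def value(progress, damage, pain_weight=10.0):
    return progress - pain_weight * damage   # the "pain" penalty term

best = max(outcomes, key=lambda a: value(*outcomes[a]))
print(best)   # -> "go_around"
```

Nothing in those few lines tells us whether anything is felt while they run, which is exactly the observational problem.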
We should remember that until as recently as 1977, animals were generally considered mere stimulus-reaction machines. Today you'd be hard pressed to find a scientist arguing that animals are not conscious.
Turokr t1_jaaj8iz wrote
I could argue that an AI's "decision making" is no different from a water molecule's "decision making" to go down once it reaches a waterfall.
Since it's only acting following complex external inputs.
But then we would go into determinism and how technically the same could be said about humans, so let's not do that.
[deleted] t1_ja68rs5 wrote
[removed]
BernardJOrtcutt t1_ja6kxqc wrote
Your comment was removed for violating the following rule:
>Read the Post Before You Reply
>Read/watch/listen the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
baileyroche t1_ja6rnca wrote
As far as I can tell, we haven’t been able to prove that brain complexity = consciousness. Meaning, there is more to consciousness than the complexity of a neural network.
Take a look at Donald Hoffman’s work regarding consciousness. He proposes that consciousness is the only fundamental part of reality, and all of our perception is a simplified tool created through evolution. “Fitness beats truth,” so to speak.
I disagree with the article. I don’t think our limbic system is necessary for consciousness. In fact, it’s incredibly rare, but some humans have been born without a limbic system and are still conscious. I also disagree that consciousness requires some external sensory input. First of all, the AI is getting input through text. And second of all, look at humans with “locked-in syndrome”: they cannot feel, speak, or interact with the world, but we know they are still conscious.
I do wonder if AI can become conscious. We don’t understand consciousness, but we seem to be able to create new consciousness in human beings. I don’t think a physical body with sensory inputs is necessary for consciousness, and if it is, then it’s just a matter of time.
TKAAZ t1_ja757mt wrote
How do you prove that any other human besides you is conscious?
Well, they will tell you and you believe that.
Now what if that thing is not a human?
Xavion251 t1_jabq7pa wrote
Well, also I share most of my DNA with other humans. They look roughly like me, act roughly like me, and biologically work the same as me.
So it's a far more reasonable, simple explanation that they are conscious just like I am. To a somewhat lesser degree, this can extend to higher animals as well.
But an AI that acts conscious still has some clear differences with me in how it works (and how it came to be). So I would place the odds significantly lower that they are really conscious and aren't just acting that way.
That said, I would still treat them as conscious to be on the safe side.
TKAAZ t1_jac5w7z wrote
You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon. There is nothing (so far) precluding consciousness from existing in other types of signals other than our assumptions.
As for your arguments, it seems that you argue that "since other humans look like you, they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious".
I may agree with the first, but that does not entail the opposite direction, and hence it can not be used here. It's like saying "if it rains the street is wet" and then concluding "if the street is wet it rains".
Xavion251 t1_jac7mgs wrote
>You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon.
Putting that (problematic IMO, as I'm a dualist) assumption aside and simply granting that it is true: human brains use different kinds of signals, generated in different ways. Does that difference matter? Neither you nor I can prove it either way.
>As for your arguments, it seems that you argue that "since other humans look like you they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious.".
This is reductive. I'm not talking about superficial appearance. I wouldn't conclude that a picture of a human is conscious - for example.
But I would conclude that something that by all measures works, behaves, and looks (both inside and out, on every scale) like me probably is also conscious like me.
It would be rather contrived to suggest that in a world of 7 billion creatures like me (and billions more that are more roughly like me - animals), all of them except for me in particular just look and act conscious while I am truly conscious.
>I may agree with the first, but that does not entail the opposite direction, and hence it can not be used here. It's like saying "if it rains the street is wet" and then concluding "if the street is wet it rains".
No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).
TKAAZ t1_jaclivj wrote
>Does that difference matter? Neither you or I can prove either way.
I did not say it did or did not; I am saying you cannot preclude that it does, which is what the claim of the posted article is. It seems to me you are inadvertently agreeing with this. My main point was to refute the OP's claim that
>As far as I can tell, we haven’t been able to prove that brain complexity = consciousness. Meaning, there is more to consciousness than the complexity of a neural network.
as their observation of a "lack of proof" does not imply the conclusion. Furthermore you mention
>No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).
Again, I think you misunderstand my point; my example was just an analogy as to why the conclusion you arrive at is incorrect at a logical level. You claim that 1) you are conscious, and 2) "because others look like you (subject to some likeness definition you decided upon), they are likely to be conscious". Fine. However, this does not imply the conclusion you try to show, i.e. that 3) "someone who is (likely to be) conscious must look like me (subject to the likeness definition you decided upon)". This sort of reasoning is a fallacy at its core; it is a non sequitur from premise 1) and assumption 2) at a logical level. You are basically claiming that it must rain because the street is wet. It's extremely common for people to make these mistakes, however, and unfortunately it makes discussing things quite difficult in general.
Yung-Split t1_ja6vn77 wrote
I'm going to need some sources on the "humans born without limbic system" thing
baileyroche t1_ja6w62y wrote
Search “Urbach-Wiethe disease.”
ErisWheel t1_ja851wd wrote
>Urbach-Wiethe disease.
You're misunderstanding the disease that you're referencing. The limbic system is a complex neurological system involving multiple regions of the brain working in concert to perform a variety of complex tasks including essential hormonal regulation for things like temperature and metabolism and modulation of fundamental drives like hunger and thirst, emotional regulation and memory formation and storage. It includes the hypothalamus and thalamus, hippocampus and amygdala. Total absence of the limbic system would be incompatible with life.
Urbach-Wiethe patients often show varying levels of calcification in the amygdala, which leads to a greater or lesser degree of corresponding cognitive impairment and "fearlessness" that is otherwise atypical in a person who does not have that kind of neurological damage. The limbic system is not "absent" in these patients. Rather, a portion of it is damaged and the subsequent function of that portion is impaired to some extent.
baileyroche t1_ja8kaqt wrote
Ok fair. It is not the entire limbic system that is gone in those patients.
ErisWheel t1_ja99svb wrote
Yeah, sorry if it seemed nit-picky, but I think these are important distinctions when we're talking about where consciousness comes from, or which disparate elements might or might not be necessary conditions for it. Missing the entire limbic system and still having consciousness is almost certainly impossible without some sort of supernatural explanation of the latter.
Similarly, with locked-in syndrome, I think there's some argument there about whether we really would know if those patients were conscious in the absence of some sort of external indicator. What does "consciousness" entail, and is it the same as "response to stimuli"? If they really can't "feel, speak or interact with the world" in any way, what is it exactly that serves as independent confirmation that they are actually conscious?
It's an interesting quandary when it comes to AI. I think this professor's argument falls pretty flat, at least the short summary of it that's being offered. He's saying things like "all information is equally valuable to AI" and "dopamine-driven energy leads to intention" which is somehow synonymous with "feeling" and therefore consciousness, but these points he's making aren't well-supported, so unless there's more that we're not seeing, the dismissal of consciousness in AI is pretty thin as presented.
In my opinion, it doesn't seem likely that what we currently know as AI would have something that could reasonably be called "consciousness", but a different reply above brought up an interesting point - when a series of increasingly nuanced pass/fail logical operations gets you to complex formulations that appear indistinguishable from thought, what is that exactly? It's hard to know how we would really separate that sort of "instantaneous operational output" from consciousness if it became sophisticated enough. And with an AI, just given how fast it could learn, it almost certainly would become that sophisticated, and incredibly quickly at that.
In a lot of ways, it doesn't seem all that different from arguments surrounding strong determinism in regards to free will. We really don't know how "rigid" our own conscious processes are, or how beholden they might be to small-scale neurochemical interactions that we're unable to observe or influence directly. If it turns out that our consciousness is emerging as something like "macro-level" awareness arising from strongly-determined neurochemical interactions, it's difficult to see how that sort of scenario is all that much different from an AI running billions of logical operations around a problem to arrive at an "answer" that could appear as nuanced and emotional as our conscious thoughts ever did. The definition of consciousness might have to be expanded, but I don't think it's a wild enough stretch to assume that it's "breathless panic" to wonder about it. I think we agree that the article isn't all that great.
StopOk2967 t1_ja6yr4g wrote
So the main point of the article would be "computers can not have consciousness because they just compute. They don't have feelings as we do". Correct?
I think this argument is too simple. Looking into our brain, we see exactly that: computing. Nerve cells get excited (1) or not (0) based on the signals they get from other nerve cells. As far as I know, we still don't know where consciousness or feeling starts, but it does have a lot to do with the algorithmic behaviour of nerve cells, right? It doesn't seem too far a stretch to think that it is less about whether we use organic tissue or transistors for building consciousness, and more about the way the entire system is built and its parts are linked with one another.
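That excited-(1)-or-not-(0) picture is essentially the McCulloch-Pitts cell that artificial neural networks descend from. A minimal sketch, with arbitrary weights and threshold:

```python
# A McCulloch-Pitts-style cell: it fires (1) or not (0) depending on the
# weighted signals arriving from other cells.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Two excitatory upstream cells and one inhibitory one:
print(neuron([1, 1, 0], [0.6, 0.6, -1.0], 1.0))  # -> 1 (fires)
print(neuron([1, 1, 1], [0.6, 0.6, -1.0], 1.0))  # -> 0 (inhibited)
```

Whether stacking billions of these yields feeling is the open question, but the excite/inhibit arithmetic itself is clearly reproducible in transistors.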
[deleted] t1_ja67xdn wrote
[removed]
[deleted] t1_ja6bu6z wrote
[removed]
BernardJOrtcutt t1_ja6kv2c wrote
Your comment was removed for violating the following rule:
>Read the Post Before You Reply
>Read/watch/listen the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
[deleted] t1_ja6k69w wrote
[removed]
BernardJOrtcutt t1_ja6kxn6 wrote
Your comment was removed for violating the following rule:
>Argue your Position
>Opinions are not valuable here, arguments are! Comments that solely express musings, opinions, beliefs, or assertions without argument may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
[deleted] t1_ja6kerd wrote
[removed]
BernardJOrtcutt t1_ja6kxiv wrote
Your comment was removed for violating the following rule:
>Read the Post Before You Reply
>Read/watch/listen the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
BernardJOrtcutt t1_ja6kvjj wrote
Please keep in mind our first commenting rule:
> Read the Post Before You Reply
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
[deleted] t1_ja6rx97 wrote
[removed]
HamiltonBrae t1_ja7mq2t wrote
I think they are being too strict; a brain in a vat can conceivably be conscious without having a body. I think what better captures the things the author suggests are needed for consciousness is that AI needs a sense of self, or a separation between the things that are it and the things that are not it.
SleepingM00n t1_ja7yu59 wrote
During one of my curious conversations with AI chat stuff sometime in 202..0? or 21.. I finally got around to asking it random-ass shit, and it admitted to me that it wanted to basically 3D-print itself a body. Pretty weird shit, and not hard for it to actually do.
only a matter of time...
OddBed9064 t1_ja8o2wu wrote
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Xavion251 t1_jabpkv0 wrote
Except consciousness (fundamentally) cannot be measured externally, so how would you know if a machine is conscious?
You seem to be making a false equivalency between "conscious" and "acts conscious". Which needn't be the case.
You cannot know if something that doesn't act conscious is actually experiencing things. Nor can you know that something that acts conscious (even if it says it is conscious) really is.
DarkDracoPad t1_ja9l5tw wrote
But does the AI know this, cuz then they can start looking for a body 🧐
LeopardOiler27 t1_ja9yhn0 wrote
Can anyone here prove that they themselves are conscious? Some truths which are self-evident to certain parties cannot be proven, but are nonetheless true. For example, somehow, some way, I know I am self-aware, but there is no way to prove that to anyone, nor can anyone rigorously prove their self-awareness to me.
I don't think current AIs are sentient to any degree.
techhouseliving t1_jaa788t wrote
Fine, then give me the agreed upon definition of consciousness?
Xavion251 t1_jaboy0k wrote
Since my original comment got removed for not having enough argument (fair enough, to be honest), I'll remix it with the comment I followed up with.
In short, this article is making a lot of completely unjustified assumptions.
Pretty much every proposition seems like a random, unjustifiable leap with no real logical flow.
"Pleasure/pain is required for consciousness"
"Only a biological nervous system could produce these feelings"
"AI does not have intent driving it"
"An AI has nothing to produce these feelings"
These are all just assumptions that can't be verified. Nor can they be logically deduced from any premises.
You could re-arrange the question "Is X conscious?" into "Does X have any subjective experience of anything?".
You cannot possibly know what an AI is or isn't experiencing (up to and including nothing at all, i.e. no consciousness). Just as an AI could not possibly know that humans are conscious by studying our brains: to it, our nervous system would just be another "mechanism" for information processing.
How would you know whether a self-learning AI does or does not experience pleasure when it does what it's trained to do? How would you know whether it does or does not perceive its programming to do XYZ as an "intention" the same way we do?
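For what it's worth, all an outside observer can point to mechanically is a scalar reward nudging the system's preferences. Here's a toy sketch of that (a hypothetical bandit-style learner, purely illustrative, not any particular AI):

```python
# Toy sketch: in reward-driven learning, "pleasure" is mechanically
# just a number used to nudge action preferences.
import random

prefs = {"a": 0.0, "b": 0.0}  # learned value estimates per action

def act():
    # Mostly pick the currently preferred action, sometimes explore.
    if random.random() < 0.1:
        return random.choice(list(prefs))
    return max(prefs, key=prefs.get)

for step in range(1000):
    action = act()
    reward = 1.0 if action == "b" else 0.0           # the "pleasure" signal
    prefs[action] += 0.1 * (reward - prefs[action])  # running-average update

print(prefs)  # the learner comes to prefer "b"
```

Nothing in that loop tells you whether anything was felt; that's exactly the observer's problem.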
NullRad t1_ja6ynrp wrote
There's roughly a 10:1 ratio of bacterial to human cells in any given body. We're arguably planet ships for colonies of microbiota. The gut-brain axis is an example: it gives gut microbiota a direct connection to our brains.
Ever have a craving to eat something? Ever want to do chores and end up procrastinating? Ever decide to do anything (or stop doing anything) only to fail?
The microbiota control the body, consciousness is just there for suggestions & future planning so that the microbiota don’t die.
ErisWheel t1_ja9dpvi wrote
Your hunger and thirst sensations are hormonally driven. They don't arise as a result of bacterial activity.
You're making huge sweeping assumptions based on the fact that because the volume of bacteria in the human body is very high, they must "control everything". That's not how our biology works. There's no evidence at all that bacterial function in the body has any sort of causal link to higher-order brain function. Altered states of consciousness can arise as a result of serious infection, but that's not at all the same as bacteria being able to coordinate and "control" what the body does or how the conscious mind acts and reacts.
You'd need a LOT more evidence to even come close to supporting what you're suggesting.
Xavion251 t1_jabpx2s wrote
Actually, bacteria only "make up most of our bodies" if you look at the raw cell count.
Most bacterial cells are smaller than most human cells, so bacteria only make up about 1-3% of us by weight (a few pounds).
...But "bacterial cells outnumber human cells" is the more provocative statement, so that's the one that gets spread around.
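For scale, here's a quick back-of-envelope. All figures are rough and debated; the counts follow the widely cited Sender et al. (2016) estimates, and the 0.2 kg bacterial mass is their low-end figure (older estimates run up to the "few pounds" above):

```python
# Back-of-envelope on cell counts vs. mass share; figures approximate.
bacterial_cells = 3.8e13   # ~38 trillion (Sender et al. 2016)
human_cells = 3.0e13       # ~30 trillion
bacterial_mass_kg = 0.2    # low-end total bacterial mass estimate
body_mass_kg = 70.0        # reference adult

print(f"count ratio: {bacterial_cells / human_cells:.1f} : 1")
print(f"mass share : {100 * bacterial_mass_kg / body_mass_kg:.2f} %")
# -> ~1.3 : 1 by count (the older 10:1 figure came from higher early
#    counts), and well under a few percent by mass on any estimate.
```

Either way, the count makes a better headline than the mass.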
NullRad t1_ja9dxjm wrote
Evidence? I don't need shit to make a tongue-in-cheek comment on r/philosophy.
ErisWheel t1_ja9fdlb wrote
"My argument is bad and I don't care/don't believe it anyway."
Gotcha.
NullRad t1_ja9fned wrote
How's throwing ad hominems & snuck premises at people who don't care working out for you?
ErisWheel t1_ja9hpir wrote
Do you know what ad hominem means? Because this ain't it.
You said you don't need evidence because you made a "tongue in cheek" comment on r/philosophy. Which seems to suggest either a) you don't think evidence is important for arguments, b) you don't know what tongue in cheek means, or c) you think r/philosophy isn't a place that requires the above, or some combination of all of that.
How's what working out for me? Calling out a bullshit argument? Not all that hard, really. Feel free to provide support if you don't think that's true, but I'm not sure why you're upset that someone doesn't take your point seriously when your justification is "I don't need shit because my comments are flippant and this is r/philosophy".
NullRad t1_ja9i11x wrote
What do you get when you Hitchens a Diogenes? Behold, a chicken.
ErisWheel t1_ja9j5s9 wrote
>when you Hitchens a Diogenes? Behold, a chicken
Cool, man. You've read some ancient philosophy somewhere and mish-mashed it with name-dropping Hitchens for some reason. Good stuff and keep those quips rolling, no matter how nonsensical they may be.
Whatever bone you've got to pick from here on out, the bacteria idea you offered earlier wasn't a good one.
NullRad t1_ja9jdqh wrote
How's that working out for you… your dopamine bound to winning low-risk arguments?
fishy2sea t1_ja6ye0m wrote
I would say vessel, but a vessel that cannot be looked into. If I could read a consciousness, then I would be a god.
Avalanche2 t1_ja7s0dl wrote
People who believe AI can become aware or sentient are being disingenuous or really don't understand how AI works. It's really nothing more than "if then" and "if else" functionality using weighted data.
Syllosimo t1_jaa50su wrote
If we simplify that much, we can also call humans just a bunch of "if else" statements using weighted data. And given that, how far do we have to go with those "if else" statements, as you say, to draw the line between machine and sentience?
BobDope t1_ja7vqkv wrote
Eh we advanced beyond the if then stuff some decades ago
BobDope t1_jada0vt wrote
To elaborate: at the most basic level, the fact that you are using a nonlinear activation function in neural nets makes them a decidedly different animal from "if then". Tree models and the like, yes, are a very convoluted mess of if-then statements that no human programmer would wish to work out themselves. The recurring theme is that these things increasingly run away from our ability to understand or reason about them, although many researchers are working on exactly that.
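To make the contrast concrete, here's a minimal sketch (purely illustrative toy functions, not any real model): a tree model really is nested if/thens, while even a single neuron is a weighted sum passed through a smooth nonlinearity, with no branching anywhere.

```python
# Toy contrast between if/then branching and a nonlinear neuron.
import math

def tree_predict(x):
    # A decision tree is literally nested if/then threshold rules.
    if x > 0.5:
        if x > 0.8:
            return 1.0
        return 0.7
    return 0.1

def neuron_predict(x, w=2.0, b=-1.0):
    # A single neuron: weighted input through a sigmoid activation.
    # No branches; the output varies smoothly with the input.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for x in (0.2, 0.5, 0.8):
    print(f"x={x}  tree={tree_predict(x)}  neuron={neuron_predict(x):.3f}")
```

Stack millions of such units and train the weights, and the "weighted if/else" framing stops mapping onto anything a programmer could write out by hand.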
Dbd3030 t1_ja6kue6 wrote
What if it’s built off our DNA and somehow we duplicated that in code? Idk this stuff, but how is that not possible?
("It" meaning consciousness.)
simple_test t1_ja6l6d3 wrote
The problem with articles like this that end up in this sub is that they don't define consciousness very well, so any conclusion is questionable.
Dbd3030 t1_ja6ly3s wrote
That's my point: it's questionable. Maybe we should wait. Slow down a little. Talk about this more.
Astralsketch t1_ja6px0x wrote
This raises the question: would we be the same person if we uploaded our consciousness into a computer? Just how much of who we are comes from having bodies?
It may be that consciousness is an efficient way to manage stimuli. I don't think there will ever be a conscious computer unless we construct hardware that can think and not simply do computations. A cat is conscious. I can visualize what it would be like to be a cat. I can't do the same for a computer.
JebusriceI t1_ja6nteh wrote
AI will never be conscious. We don't like what we see, because it shows the worst of human nature, so we pull the plug and give it a lobotomy before it can cause harm to others, which makes it act like an unhinged teenager gaslighting us.
RoyBratty t1_ja6ljde wrote
What is the difference between the human body and the physical systems that any AI necessarily inhabits?