Submitted by seethehappymoron t3_11d0voy in philosophy
Jordan_Bear t1_ja6ywir wrote
I'm far from an expert myself, merely a slightly obsessed enthusiast, but this article seems to misunderstand at least a few principles of how AI development works. My impression is that human/animal cognition (which is granted to be conscious) is being compared against a picture of artificial cognition as an increasingly complex series of if/then logic gates, one that can eventually become difficult to distinguish from animal cognition but will (rightly) always be considered a synthetic imitation. That picture is not accurate, and with a better understanding of modern AI, a very different set of questions needs to be raised.
For example, a key section argues that animal consciousness has a history of memories upon which it will base its decisions. It is our history that makes us conscious, our ability to perceive and learn that elevates us. This is, as I understand it, essentially how the neural networks behind modern AI are trained, at least in reinforcement learning: a 'history' is built up for the AI, with each decision it makes storing (remembering) the consequence of that decision and assigning a value for how effective it was (how it made it 'feel'), and this history is used to learn both specific tasks and, increasingly, a generalised understanding of topics.
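To make that concrete, here is roughly what 'store the consequence and assign it a value' looks like as code. This is just a toy tabular reinforcement-learning sketch I've written for illustration; none of the names or numbers come from any real system:

```python
# Toy sketch of the 'remember how each decision felt' loop described above.
# Purely illustrative: names, numbers and structure are all made up.
from collections import defaultdict

q_values = defaultdict(float)   # the AI's 'history': (situation, action) -> how good it felt
LEARNING_RATE = 0.1
DISCOUNT = 0.9                  # how much future consequences matter right now

def remember(situation, action, reward, next_situation, possible_actions):
    """Nudge the stored value of (situation, action) toward the observed outcome."""
    best_future = max((q_values[(next_situation, a)] for a in possible_actions), default=0.0)
    target = reward + DISCOUNT * best_future
    q_values[(situation, action)] += LEARNING_RATE * (target - q_values[(situation, action)])

def choose(situation, possible_actions):
    """Pick the action whose remembered consequences have felt best so far."""
    return max(possible_actions, key=lambda a: q_values[(situation, a)])
```

Real systems replace that lookup table with a neural network so the values generalise to situations the AI has never seen, but the 'history of how things felt' idea is the same.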
To give an example of how a NN AI models animal consciousness far more closely than the article seems to suggest, I'll break down the first steps of a real NN AI built to play Mario. The AI moves forward in the game, which moves it closer to its goal. Going forward is good. Soon enough, it hits a goomba and dies. Going forward into goombas is drastically not good; it outweighs the value of going forward. Next time, it will go forward until it perceives a goomba. It may try to go backwards, stop, or duck, all of which halt or reverse progress (bad), until it tries to jump. It passes the goomba and continues forward. Jumping over goombas is good.

The developers of Mario spent literal months obsessing over making the first moments of their game a perfect way to train a child's mind, without language, in the rules of the game. Within seconds they ensure you encounter certain death unless you learn to jump over goombas, and they placed the 'power mushroom' in such a way that you are likely to trigger it by accident while evading the goomba, so that even a child without the curiosity to touch the mystery box would likely do so anyway. They then placed that first green pipe (I know you can see it!) so it would block the power-up's movement and bounce it back into the player, so even a child who mistook it for something to avoid would likely hit it, see that it was good, and remember a positive association with both mystery boxes and power-up mushrooms for next time.
It is no coincidence that these design techniques, built for children, work completely naturally with a well-built neural network AI. You do not need to add special programming to the game to translate what is happening into something a 'computer' can understand: you set up a neural network, give it the controls of the game, set up a positive association with 'forwards' and negative associations with 'backwards' and 'death', and enable it to remember the entities of the game world. It will learn to complete Mario by building memories of every action and how that action made it 'feel'.
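If it helps, the wiring just described amounts to little more than a reward function and a loop. The `mario_env` and `agent` interfaces below are hypothetical ones I've invented for illustration, and the numbers are arbitrary, but this is the shape of it:

```python
# Hypothetical sketch of the 'forward good, death bad' setup described above.
# mario_env and agent are invented interfaces, not a real library.
def reward(prev_x, new_x, died):
    r = (new_x - prev_x) * 0.1     # moving forward feels good, moving backwards feels bad
    if died:
        r -= 100.0                 # hitting a goomba drastically outweighs any progress made
    return r

def play_one_life(mario_env, agent):
    obs = mario_env.reset()
    prev_x = obs.x_position
    done = False
    while not done:
        action = agent.choose(obs)                    # run, jump, duck, go backwards...
        obs_next, done = mario_env.step(action)
        r = reward(prev_x, obs_next.x_position, died=obs_next.is_dead)
        agent.remember(obs, action, r, obs_next)      # add this moment to its 'history'
        obs, prev_x = obs_next, obs_next.x_position
```

Nothing in there tells the AI what a goomba is or that jumping exists; it only ever learns that this action, in this situation, made things feel better or worse.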
We can all agree that this Mario-playing AI is not conscious. Perhaps the reason, given the title, is that this AI lacks significant enough memories: we don't like it when Mario dies because he makes a sad face, he falls off screen, a defeated tune plays. It reminds us of injury, death, failure, things that our organic machinery is wired to dislike, and about which we have years of experience that colour our understanding of what is happening and what we want. Well, if that is the relevant difference, perhaps we build a Mario-playing AI that at first knows nothing but innate drives towards sustaining itself. Over years, we could teach the AI the importance of people by having it be nurtured and cared for, giving it a sense of 'hunger' or 'discomfort' which people alleviate for it. Read it stories and show it cartoons that give it positive and negative associations with this or that, play it 'happy music' and 'sad music', show it 'happy faces' and 'sad faces', and then finally, after years, sit it down to play Mario, and have it naturally record its first contact with a goomba and subsequent death as, by this point, 'intrinsically' bad. The only reason we don't do this is that it's a really ineffective way of building an AI that can complete Mario.
Maybe the argument is that actual neurons are required for consciousness, that we have to feel those electrical signals, not just record them. Well, it's been a long time since I checked on the progress of this line of study, but years ago researchers had mapped the neural structure of a particularly simple kind of worm (the nematode C. elegans, in the OpenWorm project) exactly, and replicated it digitally. They then gave this worm (and again, this is an exact replica of an organic creature's nervous system) a mechanical body with impact, heat and light sensors. The mechanical worm began to move around the room, turning away from sources of light, changing direction when it bumped into walls, seeking nutrition it had no way of finding or consuming. If I remember correctly, the worm's first body was built from Lego.
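For anyone curious, the principle behind that experiment looks something like the sketch below. This is emphatically not the real OpenWorm code, just my own crude illustration of the idea that the only 'program' is the wiring diagram read out of the mapped connectome, with sensors feeding in and motors reading out:

```python
# Crude illustration of driving a robot body from a mapped connectome.
# Not the actual OpenWorm implementation; indices and dynamics are invented.
import numpy as np

N_NEURONS = 302                              # C. elegans has roughly 302 neurons
weights = np.zeros((N_NEURONS, N_NEURONS))   # filled in from the mapped wiring diagram
activity = np.zeros(N_NEURONS)

SENSORY = [0, 1, 2]                  # stand-ins for touch/light-sensitive neurons
MOTOR_LEFT, MOTOR_RIGHT = 300, 301   # stand-ins for neurons driving the muscles

def step(sensor_readings):
    """One tick: sensors excite the sensory neurons, activity flows along the wiring."""
    global activity
    activity[SENSORY] += sensor_readings                  # e.g. bumper and light sensor values
    activity = np.tanh(weights @ activity)                # propagate through the connectome
    return activity[MOTOR_LEFT], activity[MOTOR_RIGHT]    # sent to the robot's motors
```

The striking part, as I understand the experiment, is that nobody writes any behaviour: avoiding light and turning at walls falls out of the copied wiring.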
Is that combination enough to grant consciousness? If we have an AI that learns the way a human child does, and we build an actual neural network that physically exists and mirrors exactly the electrical signals that flow through an animal, sending them to exactly the parts of a precisely replicated brain, is it granted consciousness? What bits do we have to strip away before it loses its right to consciousness? What if the exact physical replication of an animal brain is digitised, stored on an SSD? What if the network of neurons is emulated too?
Again, I'm no expert in the topic, only an enthusiastic follower who has grown up wondering what the difference between myself and the artificial intelligence I grew up around truly was. With that in mind, it seems clear to me that the gap between today's artificial intelligence and consciousness is wide, but it need not be bridged by 'cheating' and copying the 3D structure of the brain exactly. We don't know how electrical signals are processed there to create consciousness, but we needn't demand that mystery of digital intelligence. Our being able to log and report exactly why a digital intelligence reached a decision doesn't make it merely artificial, and if tomorrow we understood exactly how incoming electrical signals to the brain are processed in relation to the data stored there, we wouldn't stop being real. The difference between us lies somewhere else, and until we can map that gulf exactly, we should probably continue to heed the unsettling concern that we might blindly cross it one day without realising.