AndromedaAnimated t1_j0tb3wt wrote

I meant both: a hypothetical newly created artificial mind, or a human mind that used to have a body. The sensory and motor cortical areas are well mapped, as is the cerebellum. We are also already able to simulate spatial perception. Simulating a body that can "move" in virtual space and provide biofeedback to the brain shouldn't be so difficult. The Synchron Stentrode interface, for example, already allows people with upper-body paralysis to move a cursor and access the internet with their motor cortex, no real hands or arms necessary. And the motor cortex itself would not be difficult to simulate.
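As a rough illustration of the principle (this is a generic linear-decoder sketch, not Synchron's actual pipeline, and every number in it is made up): motor-cortex cursor control usually comes down to mapping a vector of neural features to a 2D cursor velocity with a decoder fit during calibration. In Python:

```python
import numpy as np

# Toy linear decoder: neural features -> 2D cursor velocity.
# In a real BCI, W is fit from calibration data (e.g., by regression
# while the user imagines moving toward known targets). Here it is
# random, purely to show the shape of the control loop.
rng = np.random.default_rng(0)
n_channels = 16                             # hypothetical electrode count
W = rng.normal(size=(2, n_channels)) * 0.1  # stand-in decoder weights

cursor = np.zeros(2)
dt = 0.02                                   # assumed 50 Hz update rate
for _ in range(500):
    features = rng.normal(size=n_channels)  # stand-in band-power features
    velocity = W @ features                 # decoded movement intent
    cursor += velocity * dt                 # integrate into cursor position
print(cursor)
```

The point is that nothing in this loop cares whether the features come from real electrodes or from a simulated motor cortex.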

So yeah, I think simulating human minds won't be as difficult as we assume. It's all a question of processing power.

1

Superschlenz t1_j0te7zz wrote

And how do you test whether these simulations really do what the corresponding part of the brain does?

By an argument of the form: "Brains have oscillations in the alpha, beta, and theta range. My model has oscillations in the alpha, beta, and theta range, too! So I have built a brain. Where is my Nobel Prize?" (Or the equivalent with the firing patterns of pieces of dead rat cortex and one billion euros.)

> The Synchron Stentrode interface

An interface to the real thing is not a replacement.

1

AndromedaAnimated t1_j0tf89h wrote

You are probably joking about the EEG waves, aren't you? Because it is pretty strange to assume that you could measure EEG correlates of sentience in an AI by placing electrodes on its imagined head, or in its imagined brain. We won't need to recreate a three-dimensional physical model of the brain to simulate it.

I don't want to assume that you don't know much about the brain, but your reasoning is starting to confuse me. Of course an interface to the brain is not a replacement for the brain; that's just logical 🫤 But that was not why I mentioned it.

I mentioned the Synchron interface to show that the body's motor activity can be replaced by simulated motor activity, meaning the physical body can be simulated if it is needed for the development of a human brain. That was, after all, what you were talking about: a simulated "human-like" mind not being able to exist without a physical human body.

1

Superschlenz t1_j0x7n5v wrote

>You are probably joking about the EEG waves, aren’t you?

Of course I was joking, because https://www.izhikevich.org/human_brain_simulation/Blue_Brain.htm#Simulation%20of%20Large-Scale%20Brain%20Models mentions only alpha and gamma rhythms, but not beta and theta.

>I mentioned the Synchron interface to show that the body's motor activity can be replaced by simulated motor activity, meaning the physical body can be simulated if it is needed for the development of a human brain. That was, after all, what you were talking about.

The human body is not just the output of ~200 motors and the input of their corresponding joint angles and forces (proprioception). It is also the input of ~1M touch sensors in the skin, and this input would have to be simulated as well. Since much of the touch input in childhood comes from social interaction with the mother, you would have to simulate her, too. This may be possible in theory, but at the moment neither a simulated mother for a simulated baby nor a real robot baby with full-body touch-sensitive skin for a real mother is possible.

My personal experience with the MuJoCo simulator in 2016 showed me that it is buggy enough that it couldn't even simulate some nuts and bolts correctly. If it fails at such a simple rigid-body physics task, how could it simulate deformable skin or a virtual mother?
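For what it's worth, setting up the easy rigid-body case is trivial; whether the contact behavior is trustworthy is the hard part. A minimal sketch using MuJoCo's current Python bindings, which didn't exist back in 2016; the geometry and step count here are arbitrary:

```python
import mujoco  # pip install mujoco

# A box dropped onto a plane: about the simplest contact problem there is.
XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 0.5">
      <freejoint/>
      <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

for _ in range(1000):            # 1000 steps at the default 2 ms timestep
    mujoco.mj_step(model, data)

# Free-joint state: x, y, z position followed by an orientation quaternion.
# The box should have settled on the plane at z of roughly 0.05.
print(data.qpos)
```

Getting a threaded bolt to behave, with tight tolerances and many simultaneous contacts, is a different story entirely.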

2

AndromedaAnimated t1_j0y5h90 wrote

I am still pretty sure that we don't need to simulate a three-dimensional brain to simulate a mind, but okay, I get now that you were joking (the model you linked is still a cool thing, and I see lots of further research and application possibilities).

Touch sensors would not necessarily be needed. The brain doesn't get touched; it gets neural signals, modulated by oxytocin and other chemicals. So simulating a holding, touching mother would not be that difficult, if you wanted to do that in the first place instead of simulating a mind that automatically gets its "touch needs" fulfilled by other types of communication, or a mind that starts out with simulated memories of having been touched.

But that is actually a very interesting idea you mentioned: simulating a mother with deformable, touchable skin, or a robot baby with feeling skin. That would amount to simulating touch in the virtual world generally.

I agree that we are not yet there. But the engine is already gaining steam, so to speak. I would say we only need around two to three more years at most to simulate a functioning human mind. I can imagine that your timeline would be different here.

By the way, thank you for the very civil discussion. I have had very different experiences with others. You're cool.

2