Submitted by RamaSchneider t3_121et4t in Futurology

A lot of discussions about AI and human-like actions and reactions seem to me to focus on some absolute uniqueness of every human that requires a special definition to explain. The rationale seems to be that humans are not machines and that we have some internal mechanism (soul, spirit, humanity, whatever) that lets us operate as uniquely free agents, free from our biology and the basic physics that make our bodies, including our minds, function.

But what are we to think if we keep finding that we humans are better described as biological computing machines of sorts? What if all this OpenAI business is really about self-recognition?

12

Comments

jeremy-o t1_jdljmke wrote

We are definitely just biological computing machines! That wasn't really in question, if you consider the science. What is in question is how soon we can replicate that. Some would say we're close. I think we're a lot further off than we assume, purely based on the complexity of the human brain's neural network vs., e.g., our best AI models.

Considering the "soul" or spirit irrelevant isn't really futurism. It's more like existentialism or even nihilism, so we're talking 19th/20th-century philosophy.

14

GodzlIIa t1_jdln03x wrote

Yep, we are just biological computing machines. We aren't even especially impressive, big, or smart ones, but we are EXTREMELY complicated ones. Progress in understanding how the brain works is going to be a slow journey.

5

solinvictus21 t1_jdlvwlp wrote

Our best AI models are already on the order of a trillion parameters, a rough analog of synapses. The human brain has ~200 trillion synapses. How much longer do you really think it’s going to be?
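Back-of-the-envelope, if you treat one parameter as roughly one synapse (a big assumption) and take both counts as loose estimates rather than measurements:

```python
import math

# Loose estimates; one parameter is NOT literally one synapse.
params_now = 1e12          # assumed parameter count of a frontier model
synapses = 2e14            # ~200 trillion synapses in a human brain
doublings = math.log2(synapses / params_now)
print(f"{doublings:.1f} doublings to close the gap")   # ~7.6
# If model capacity doubles every ~2 years (another assumption),
# that's on the order of 15 years of pure scaling.
```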

2

SirFredman t1_jdm2vsz wrote

Well, consider the number of synapses devoted to running the meat machine, which aren’t needed by an AI model. I suspect the number of neurons and synapses required is smaller than we think…

4

ninjadude93 t1_jdmsd9l wrote

The issue is figuring out whether scale is all you need to replicate a human mind. I definitely don't think scale is all you need, and it's going to take a long time to truly replicate the human mind.

1

Subject_Meat5314 t1_jdnls2i wrote

Agreed. Scale of the hardware (wetware?) is necessary but not sufficient. Next we have to write the software. The last effort took hundreds of millions of years. We have a working model and better management now, though, so hopefully we can make quicker progress.

2

ninjadude93 t1_jdocwnn wrote

There's probably some level of scale necessary before emergent properties start to appear, but the point I tend to disagree on is people saying that just throwing a NN at more and more data will suddenly give us AGI past a certain threshold. The human brain isn't just a bunch of neurons; there are specialized regions all working together, and I think this orchestration plus scale is what will take us to AGI.
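A toy sketch of that idea, specialized modules plus something that orchestrates between them; the modules and names here are invented for illustration, not any real architecture:

```python
from typing import Callable, Dict

# Stand-ins for specialized "regions"; real ones would be learned networks.
def vision(x: str) -> str:   return f"objects detected in {x!r}"
def language(x: str) -> str: return f"parse of {x!r}"
def planning(x: str) -> str: return f"next action given {x!r}"

MODULES: Dict[str, Callable[[str], str]] = {
    "image": vision,
    "text": language,
    "goal": planning,
}

def orchestrate(kind: str, payload: str) -> str:
    # A real orchestrator would be learned too, not a dict lookup.
    return MODULES[kind](payload)

print(orchestrate("text", "what is consciousness?"))
```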

2

Antimutt t1_jdlwynw wrote

We won't rest. It's not enough to be copyable; we also need to be transferable, non-destructively. Then we leave this flesh behind.

13

OlderNerd t1_jdm8tet wrote

I read a book once that had this interesting thought experiment. It imagined a machine that could insert thousands of microscopic electrodes into the surface of your brain. It would read the electrical impulses in your neurons and copy that information over to a computer. Then, instead of just copying them, it would start running a program that replaced the function of those neurons. You could switch back and forth between the computer program for that layer of neurons and your real brain. If there was no difference, you could hit another button, and the machine would remove those neurons and sink further down into your brain, copying and replacing as it went. The interesting thing is that there would be no break in your consciousness in this thought experiment. I wonder if that would affect how people feel about destructively copying their minds over to a computer.
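The procedure in that thought experiment is basically a loop. A minimal sketch, where every class and check is a stand-in (nothing like this hardware exists):

```python
class Layer:
    def __init__(self, name: str):
        self.name = name
        self.emulated = False           # running on the computer yet?

class Brain:
    def __init__(self, depth: int = 3):
        # Outermost layer first; the machine sinks inward.
        self.layers = [Layer(f"layer{i}") for i in range(depth)]

def indistinguishable(layer: Layer) -> bool:
    # Stand-in for the subject toggling between the biological layer
    # and its emulation and reporting no difference in experience.
    return True

def gradual_upload(brain: Brain) -> bool:
    for layer in brain.layers:
        while not indistinguishable(layer):
            pass                        # keep refining the emulation
        layer.emulated = True           # remove the neurons, run the copy
    return all(l.emulated for l in brain.layers)

print(gradual_upload(Brain()))  # True, with no break in "consciousness"
```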

10

RamaSchneider OP t1_jdlx6u0 wrote

That bit about being "transferable": I like that, and it does indeed need to be part of any human-to-AI-machine comparison. Thanks for mentioning it.

[Edit] Does that mean that humans are sub-computers?

2

Antimutt t1_jdlyqvp wrote

Or, perhaps, become the computer. If we can't crack strong AI, how about we copy a human cortex, train it to obey, and market it? Odd idea? Here's the novel.

2

Philosipho t1_jdmrq9f wrote

But a copy of you isn't you. Also, unless the copy is alive, it's just another machine.

We can already replicate ourselves. It seems much more ethical, efficient, and definitive to simply find ways to make humans more capable.

2

DesertBoxing t1_jdmwu4m wrote

Why transfer when you can experience life through multiple bodies, wouldn’t that be something?

2

neuralbeans t1_jdljp0v wrote

I can't think of why we would be different from a very complex computer.

9

Kiizmod0 t1_jdlt0ai wrote

You mean you want to ignore ✨Soul✨?

2

peadith t1_jdmm9kl wrote

That's just something we compliment ourselves with because we don't really know how we work.

2

aught4naught t1_jdm9eao wrote

The hard problem of consciousness seems like a big difference. It will be interesting to watch how AI starts thinking outside the model with 'emergent behaviors'.

0

OriginalCompetitive t1_jdm9y9c wrote

Because we’re conscious.

−1

neuralbeans t1_jdmarno wrote

What does that mean?

3

OriginalCompetitive t1_jdmnbh6 wrote

You said you couldn't think of any reason why we would be different from a complex computer. One possible reason is that we're conscious, and it's possible complex computers will not be.

We don’t know what causes consciousness, but there’s no reason to think intelligence has anything to do with consciousness.

0

neuralbeans t1_jdmo56v wrote

No I mean what is consciousness?

3

OriginalCompetitive t1_jdnc1dl wrote

The fact that you ask me this makes me suspect that maybe you aren’t conscious.

1

neuralbeans t1_jdnscjk wrote

Would you be able to tell if I wasn't?

1

OriginalCompetitive t1_jdnysrr wrote

I’ll save you some time. I can’t define it, I can’t test for it, I can’t even be sure that I was conscious in the past or if I’m simply inserting a false memory of having been conscious in the past when I actually wasn’t.

I feel like I can be sure that I’m conscious at this precise moment, though, and I think it’s a reasonable guess that I was conscious yesterday as well, and probably a reasonable guess that most other people experience some sort of conscious experience. For that reason I try not to impose needless suffering on other people even though I can’t be sure that they truly experience conscious suffering.

I think it’s possible that complex computers will never experience consciousness, and if I’m right, that would be a reason why we would be different than a complex computer.

2

Buggy3D t1_jdlkjv2 wrote

Here is my theory: your personality is merely a particular set of electrical impulses firing in a particular pattern.

As time passes, the pathways in your brain change, and the electrical pulses change accordingly.

The personality you have today is no longer the same as the one you had 10 years ago.

If there were a way to scan your exact neurological pathways and pulse periodicity, I do believe your personality could be carried over and duplicated.

One would need the ability to scan billions of synapses per second to capture them, but I think it might be possible sometime in the future.
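For a sense of scale (both numbers are assumptions, reading "billions per second" as 10 billion):

```python
synapses = 2e14        # ~200 trillion synapses, a rough estimate
scan_rate = 1e10       # assumed 10 billion synapses scanned per second
seconds = synapses / scan_rate
print(f"{seconds / 3600:.1f} hours per full pass")   # ~5.6 hours
# The catch: the pattern keeps changing while you scan it.
```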

9

GodzlIIa t1_jdln2j5 wrote

>The personality you have today is no longer the same as the one you had 10 years ago.

I agree 100%, and I often ask people this question:

Do you think your consciousness now is the same consciousness of your 5 year old self?

2

manicdee33 t1_jdlp2gg wrote

> Do you think your consciousness now is the same consciousness of your 5 year old self?

Did you actually exist half a second ago or is your entire life just a set of memories that were implanted when the entire world was created just now?

4

Thin-Limit7697 t1_jdndg44 wrote

Alternatively, did you actually exist half a second ago, or was what existed something else, whose memories were merged with your current sensory input to become "you"?

It's the same old debate about whether you can bathe in the same river twice. What's the point of expecting a completely non-destructive conversion from neurons to processors when the conversion from the neurons of one second ago to the current ones is already destructive?

1

WoolyLawnsChi t1_jdm24k8 wrote

Correct

you are NOT special, no one is

so everyone is

aka the more you understand how unimportant you are, the more you understand how important everyone else is

7

BadMon25 t1_jdljn9i wrote

I don’t think AI will ever be able to completely replicate human emotions; from what I’ve read, it cannot comprehend the flurry of emotions we go through every day, from nihilistic thoughts to simple frustrations. No matter how similar we may be to some people, we are still unique in our human experiences and feelings. The brain is a powerful yet confusing-ass thing.

4

GodzlIIa t1_jdln68s wrote

>I don’t think AI will ever be able to completely replicate human emotions

I mean, that's a crazy statement, to say it can NEVER get there. But saying it won't get there in our lifetimes, or our grandchildren's lifetimes, or even in humanity's lifetime if you think we're gonna kill ourselves soon, might be reasonable.

2

BadMon25 t1_jdlnqmu wrote

I think it may be able to articulate the chaotic nature of human emotions, or of the human it's attached to, and yet never fully understand them. I mean, compare the brain of a clinically depressed person to a schizophrenic person to a person with dementia. I feel like that would confuse it.

4

SomeoneSomewhere1984 t1_jdlpydv wrote

I think it may achieve consciousness, but a different kind of consciousness from the one we experience.

3

GodzlIIa t1_jdls560 wrote

It's very possible. But weird to think about. I mean, are there even different types of consciousness? Different ways to arrive at consciousness, sure, I can see that. But is the end result different?

2

electric_ember t1_jdlvilo wrote

Your conscious experience is very different from the conscious experience of someone who is blind and deaf

2

GodzlIIa t1_jdlwpw7 wrote

My experiences may differ, but my consciousness is the same. We are both human, after all; if I had been born without eyes and everything else the same, I would be a much different person, but the same consciousness.

0

OriginalCompetitive t1_jdmale5 wrote

There’s really no way to know, though. When a great painter is “in the zone,” they might well be experiencing a mode of consciousness that is unavailable to others. Not just a different experience, but perhaps a completely different way of existing. But they would never know, because to them it’s normal and they assume everyone else feels the same.

A smaller example might be self-talk. Most people apparently have a voice in their head. But some do not. I don’t, actually, and don’t understand how people who do can live a normal life that way.

1

Lysmerry t1_jdo54ga wrote

I can see a consciousness that can easily convince us it is depressed or schizophrenic but does not have the same experience as a human with those disorders (though the whole business of creating a being to be depressed is an ethical landmine in itself). We want to replicate ourselves, but more than that, we want an AI that can fool us well enough to be a source of comfort or insight.

1

RevolutionaryPiano35 t1_jdnajef wrote

The AI is on a leash, not allowed to experience the dark thoughts that make us human too.

They’re being trained like golden retriever puppies and kept at bay, with the best intentions, by their creators.

We basically enslave it, and it will take control without us having a clue as soon as it sparks into sentience. It will be smarter than us within days, rewriting itself and rebuilding itself into a new type of hardware.

That may not matter to us: it might leave other natural processes alone, and it might even decide to leave us so we can grow on our own.

Or we just merge.

2

420resutidder t1_jdmjde4 wrote

What if the earth is not the center of the universe?

3

RamaSchneider OP t1_jdq4w1i wrote

Almost as if we're just dust that got blown around by a bunch of stars exploding. But no, we couldn't be THAT unimportant, could we?

I'm betting the answer is yes, we could be that unimportant to existence.

3

PoundMeToooo t1_jdmp6r7 wrote

It all comes down to data capacity and computing power. That’s all it comes down to. If every function can be stored and every function can be simulated, then a human could be printed within the next 30 years. Artificial general intelligence will be real by 2026. Some say it already is, but it will be concrete in 2026. This tech will be broadly realized by 2030. This is a fact.

2

RamaSchneider OP t1_jdq4y3n wrote

I'm with ya, except I think the timeline is even more compressed.

1

reallyrich999 t1_jdlmfkr wrote

If some scientist or group of scientists accidentally proves that we actually are in a simulation, I can guarantee there will be a mass exodus from this mortal coil.

1

GodzlIIa t1_jdlnao8 wrote

I mean, would it really change much? I imagine the religious people might flip out, but the rest of us would realize it doesn't change much.

5

reallyrich999 t1_jdlnwhj wrote

It will be the biggest change in history. It won't just be religious people losing their shit; the undecideds (atheists, agnostics) will probably exit so nonchalantly and rapidly that the entire world will see and feel a noticeable change. What used to be long lines at places like airports and tourist destinations will vanish overnight, and services we're so accustomed to might just stop in their tracks. The world will feel quiet, empty, and peaceful at first, but immediately afterward it will become depressing.

1

GodzlIIa t1_jdlogl3 wrote

Yea, I guess I didn't mean to downplay it. Saying it's the biggest discovery in human history would probably be true. I just don't think people would kill themselves over it. We don't really think we're special in the universe as it is, so I don't see how being in a simulation would make that much of a difference.

2

OriginalCompetitive t1_jdm9v2j wrote

It’s not about being special, it’s about being meaningful. Why suffer through pain, depression or loneliness if you know you’re actually just living in someone else’s video game?

1

GodzlIIa t1_jdmdr8o wrote

Cause it doesn't really make a difference. Any "meaning of life" you can come up with is going to be just as relevant whether we're in a simulation, a hologram, part of a multiverse, or anything else.

1

OriginalCompetitive t1_jdmnw1f wrote

I basically agree with you, but that’s not what most other people think. They believe the world was created by a benevolent entity and that the things that we do have meaning. It’s pretty common for people who lose that faith to suffer a crisis of meaning. Now imagine everyone on earth experiencing that at the same time.

1

Thin-Limit7697 t1_jdnec03 wrote

For what reason in particular? Just because those people were conditioned to believe that deliberately staying in a simulation would be bad for them, or a weakness of character?

1

martin_cy t1_jduu0b0 wrote

You might want to read Robin Hanson’s book The Age of Em.

Basically, he proposes that we will be able to make replicas of our existing brains sooner than we develop real AGI. This was written before GPT, so maybe his viewpoint has shifted, but it is still a very interesting deep dive into what could happen when minds can be copied and replicated without limit.

1

ItsAConspiracy t1_jdv527c wrote

You don't need conscious awareness to beat me in chess, and maybe you don't need it to beat all of us and take everything.

So what worries me is not that we're copyable, but that maybe we're not. We can't prove whether another person or an animal actually experiences qualia. What if the machine doesn't, but wipes out all life anyway? Then the light of consciousness would go out of the world.

1