
muzukashidesuyo t1_j6nmt40 wrote

Or perhaps we overestimate what exactly consciousness is? Are we more than the electric and chemical signals in our brains?


dmarchall491 t1_j6p00sx wrote

> Or perhaps we overestimate what exactly consciousness is?

Certainly, however that's not the issue here. The problem with a language model is simply that it completely lacks many fundamental aspects of consciousness, like being aware of its environment, having memory, and the like.

The language model is a static bit of code that gets some text as input and produces some output. That's all it does. It can't remember past conversations. It can't learn. It will produce the same output for the same input all the time.
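That statelessness can be sketched with a toy illustration (not an actual LLM, just a hypothetical stand-in): a pure function mapping input text to output text, with fixed "weights" and no memory between calls, so identical inputs always yield identical outputs.

```python
def toy_language_model(prompt: str) -> str:
    # Stand-in for a frozen model: fixed "weights", no internal state.
    weights = {"hello": "world", "ping": "pong"}
    # Deterministic lookup: the same prompt always yields the same reply.
    return weights.get(prompt.lower(), "...")

# No memory: the second call has no record that the first ever happened.
print(toy_language_model("hello"))  # world
print(toy_language_model("hello"))  # world
```

(Real deployed models often add randomized sampling on top, but with sampling disabled the underlying network is exactly this kind of fixed input-to-output mapping.)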

That doesn't mean that it couldn't be extended to have something we might call consciousness, but as is, there are just way too many important bits missing.


AUFunmacy OP t1_j6nqi4t wrote

As I am studying neuroscience in medical school, I feel I am semi-qualified to answer this.

I don't think we are any more than the electric and chemical signals in our brains, simply because there isn't anything else that we can point at yet. The fundamental fact is that all human processes, what you could call the entirety of human physiology, act via communication between neurons in the nervous system, which is pretty well understood.

You would be dead the very moment (one Planck time) after your neurons stopped conducting - because at that point everything stops, literally everything.


littlebitsofspider t1_j6nukkl wrote

The roboticist Pentti Haikonen has put forth the idea that natural (and, by extension, artificial) consciousness hinges on qualia, and that we won't develop said artificial consciousness until we can implement qualia-centric hardware of sufficient complexity. Considering that human wetware functions on a similar premise, i.e. that our conscious existence depends on inter-neural communication that is independent of objectivity, would you think this theory holds water?


JustAPerspective t1_j6paw7w wrote

>I don't think we are any more than the electric and chemical signals in our brains, simply because there isn't anything else that we can point at yet.


The limitation of the practice is that it presumes anything humans haven't discovered yet isn't relevant... while simultaneously refusing to allow for what people haven't learned.

Yet science is merely observation of what is - any incomplete observation will be suspect in its conclusions due to the variables not yet grasped.

That the atoms comprising your system shift by 98% annually indicates that - at some level - what makes up "you" is not physical.

Which leaves a lot of room for learning.


AUFunmacy OP t1_j6peiqq wrote

I’m so confused, do you know what “pragmatic” means? Because it just seems like you compliment my way of thinking and then say that I am ignorant, as are the rest of the people who study neuroscience and, god forbid, choose to believe it.

No idea what you mean by atoms shifting by 98%; that’s just complete nonsense you wrote to make yourself seem more credible. At least give context to the things you say, or provide some evidence? Either would be great.


SkipX t1_j6npg5t wrote

It's an interesting misunderstanding, isn't it, but natural in a way. For oneself to know, or rather experience, that there is consciousness, and then to make the connection that creatures similar to oneself must have that same property, feels just right, even logical. But the fact that there is no scientific way to quantify that observation makes consciousness quite naturally a rather mythical property.


tkuiper t1_j6o3tsj wrote

It's why I think panpsychism is right. There's no clear delineation for when a subjective experience emerged, and I definitely am conscious, therefore so is everything else. I think the part everyone gets hung up on is human-like consciousness; the scope of experience for inanimate objects is smaller to the point of being nigh unrecognizable to a conscious human. But you do know what it's like to be inanimate: the timeless, thoughtless void of undreaming sleep or death. We experience a wide variety of forms of consciousness with drugs, sleep deprivation, etc., and that's likely a small sliver of possible forms.


Schopenschluter t1_j6oqfwf wrote

> timeless, thoughtless void

I would argue that time is absolutely essential to anything we call experience and consciousness—these only take place in time. Dreamless sleep is neither experience nor consciousness, but really the absence thereof. We don’t really know what it’s like to be in this “inanimate” state because we always reconstruct it after the fact through metaphors and negations (timeless, thoughtless, dreamless).

In other words, I don’t think this is evidence for panpsychism, but rather demonstrates that human consciousness shuts down completely at times. So saying that it is akin to the consciousness of, say, a stone would be to say that a stone doesn’t have consciousness at all.


tkuiper t1_j6otjpd wrote

But I would also say we experience middling states between dreamlessness and full consciousness. Dreams, partial lucidity, and heavy inebriation all involve fragmented/shortened/discontinuous senses of time. In those states my consciousness is definitely less complete, but still present. Unconsciousness represents the lower limit of the scale, but is not conceptually separate from the scale.

What I derive from this is that anything can be considered conscious, so the magnitude is what we really need to consider. AI is already conscious, but so are ants. We don't give much weight to the consciousness of ants because it's a very dim level. A consciousness like a computer's, for example, has no sense of displeasure at all. It's conscious, but not in a way that invites moral concern, which I think is what we're really getting at: when do we need to extend moral considerations to AI? If we keep AI emotionally inert, we don't need to, regardless of how intelligent it becomes. We also will have a hard time grasping its values, which is an entirely different type of hazard.


Schopenschluter t1_j6ozacy wrote

I totally agree about middling and “dim” states of consciousness but I don’t agree that experience or consciousness takes place at the lowest limit of the scale, where there would be zero temporality or awareness thereof.

In this sense, I think of the “scale” of consciousness more like a dimmable light switch: you can bring it very very close to the bottom and still have some light, but when you finally push it all the way down, the light goes out.

Are computers aware (however dimly) of their processing happening in time, or does it just happen? That, to me, is the fundamental question.


thegooddoctorben t1_j6pd651 wrote

>Are we more than the electric and chemical signals in our brains?

Yes: speaking loosely, we have organic bodies with highly sensitive nerves and hormonal pathways. Those are the basis of emotion and sensation. That's the foundation of consciousness or awareness.

An AI without our organic pathways is categorically different. That's what makes it artificial.

At some point, if we combine an AI with organic sensitivity, we will be creating intelligence itself, not artificial intelligence. So we can't ever create AI with consciousness, but we could artificially create consciousness.