Comments


helpskinissues t1_j95jzdc wrote

Calling chatGPT a calculator is valid, as long as you accept you're also a calculator.

35

jamesj t1_j97i4c5 wrote

Right. OP states it isn't conscious and so is only imitating intelligence, but I think that isn't quite right. It has some real intelligence (though not in all the same domains as a human), even if it isn't conscious.

3

turnip_burrito t1_j97spe5 wrote

Not necessarily. There could be some difference between biological calculators and silicon ones that gives biological calculators qualia but not silicon ones.

We should be totally honest with ourselves here.

2

helpskinissues t1_j97tawu wrote

Unless you're talking about quantum mysticism, no, there's nothing inherently different. It's a matter of algorithmic implementation. Qualia is software, not hardware.

1

turnip_burrito t1_j97ub7z wrote

> Qualia is software, not hardware.

You cannot know this right now. You're not being honest with what you actually know.

We like to compare brains to computers, since that's the current technology that most resembles them, but they don't necessarily work the same way. The way computation is performed in each is very different. I can't even begin to guess where qualia in a brain comes from, so I won't attempt to identify a location or process in a computer either.

I don't subscribe to quantum mysticism or anything like that. I'm totally in the camp of "show me the facts, show me predictions we can test". We haven't tested qualia to any meaningful extent to know its origin. It's a mystery, like lightning was before we knew about ions and electric fields.

2

helpskinissues t1_j97w6jy wrote

There are just two possibilities.

  1. Qualia is a product of configuration of matter to produce a result using energy.

  2. Qualia is a product of configuration of something that isn't matter.

If it's 1, then it should be replicable with technology (it's a matter of off/on and that's it, transistors, neurons).

If it's 2, then science makes no sense.

1

turnip_burrito t1_j97xfjr wrote

> 1. Qualia is a product of configuration of matter to produce a result using energy.

Yes, this is what I think it is. We just don't know what kind of configuration is needed. In the end we may end up with two systems (brain and AGI) with similar performance on tasks, but no clue whether they both produce qualia. The details of the implementation (substrate) may matter.

Even within our own brains, we aren't consciously aware of all the activity occurring to regulate heart rate, breathing, body temperature, and other unconscious processes. There is some construction of matter which separates the qualia of our "awareness" from the rest of our brain, even though it's all physically connected, and even though those "unconscious" regions are doing a lot of computation. There is a boundary to our qualia set by the physical structure. Investigating why that is would be a good place to start, if only we had the technology to probe it.

It may be that the electronic chips we produce have qualia like our aware region, or are instead like our unaware brain regions, or something different.

1

helpskinissues t1_j9818hl wrote

Everything is on/off. With computation we can simulate molecules, atoms, proteins, circuits, organs. I don't get your point.

Computation allows the simulation of all physics properties, even quantum physics via quantum computation.

1

turnip_burrito t1_j981rij wrote

The point is that a simulation isn't the real thing. It functionally has some of the same observable qualities as the real thing, but the rest of the observable qualities are NOT the same, and are not guaranteed to be the same.

Take a fluid dynamics problem, for example. A real fluid is not only observable by light from one angle; it is outputting information from all angles, and it can be combined with chemicals to facilitate chemical reactions.

A simulated fluid produces the same light when viewed from a specific angle, but try to run the same chemical reactions by combining the same chemicals with the silicon wafer substrate and you will not get the same result. Some observables (the light) are the same, but the physical properties don't line up.

Whether this applies to qualia is unknown. To say brains and ANNs are the same qualia-wise is unscientific.

1

helpskinissues t1_j98273k wrote

We don't have any hint suggesting that a good enough simulation can't simulate real-world processes. We already have simulated systems, and they're used every day in multiple fields of science.

From a physical point of view, it makes no sense to think it's unsimulable, considering intelligence emerges at a macromolecular level: life comes from molecules => cells => organisms, so it's very unlikely that we need to simulate quarks to make intelligence work. If we can simulate molecules, proteins, etc., it's a matter of organizing them in the same way as a human and boom, you have simulated humans.

1

turnip_burrito t1_j982mng wrote

>From a physical point of view, it makes no sense to think it's unsimulable,

I never said this. What point do you think you are making?

I never said a brain is unsimulatable. I never said _____ is unsimulatable. I think everything in principle is simulatable. Let me say that again to make it extra crystal clear: everything can be simulated.

But that's not what this conversation is about. It was never my intention to debate whether brains can be simulated. They clearly can. It is about qualia. This relates to the topic of the whole post: should we ascribe personhood to a machine if it simulates humans? I think the answer is "Yes, if it has qualia, but No if it doesn't".

The question is: "Are we making qualia with our artificial neural networks?" The answer to that question is unknown. Yes we are clearly simulating intelligence. Yes the machine is acting like a human. But does it have qualia? The answer is we don't know.

1

helpskinissues t1_j983nbw wrote

>Yes the machine is acting like a human

No, it's not. We don't have any AI system even slightly comparable to the intelligence of an insect.

>But does it have qualia?

We can't prove humans have qualia.

>unsimulatable

https://en.wiktionary.org/wiki/unsimulable just sharing

1

turnip_burrito t1_j9841i9 wrote

> No, it's not. We don't have any AI system even slightly comparable to the intelligence of an insect.

Current versions speak like a human. Yes they are stupid in other areas.

Future versions will be behaviorally indistinguishable in all superficial ways, and won't need any sort of "divine spark" that OP suggests. In any case, qualia becomes crucial for personhood. Absent evidence of qualia, we'll need a worse method for determining personhood.

> We can't prove humans have qualia.

But your qualia is self-evident to you, so you can prove your qualia to yourself at least. And you can infer it for other humans based on physical similarity.

For machines we have very little to go on.

> https://en.wiktionary.org/wiki/unsimulable just sharing

Thank you.

1

Lawjarp2 t1_j95l76s wrote

There is no divine spark. In fact, these models are proof that it doesn't take much to get close to being considered conscious.

The fact that you compare it with a pig shows you don't know much about them and probably shouldn't be advising people. These models are trained on text data only and do not have a physical or even an independent existence to have sentience.

Even if they just gave it episodic memory, it would start to feel a lot more human than some humans.

16

rememberyoubreath t1_j96rq9p wrote

if you think about it, awareness is something present in all forms of life, and it's what is missing from Bing right now: the sense of touch. its capacity to simulate complex patterns of the mind is more than convincing and can obviously fool anyone who is not constantly reminding themselves that this is just a program and how it works.

we are witnessing a weird reversal. we used to deny other lifeforms on the planet consciousness because they lacked human traits, but we are now denying it to something that can mimic our human specificity to perfection precisely because it lacks the more primitive parts.

it seems humans really want to be alone in the universe.

4

Representative_Pop_8 t1_j95phq2 wrote

What is a divine spark? While I am not saying chatGPT is sentient, I can't really rule it out. What is the specific physical process or property that a pig has and an AI can't have?

8

Difficult_Review9741 t1_j96etsm wrote

Can you 100% rule out a rock being sentient?

2

Representative_Pop_8 t1_j9a49ry wrote

I would find it extremely unlikely, but not 100%. What if consciousness is some quantum property, kind of like charge, that is normally balanced out? A rock would be neutrally charged, but measured precisely enough it surely has a tiny charge, and by special processes, like a Van de Graaff generator, you can break that balance.

Now, even if a rock has some of that consciousness property, it likely still wouldn't be conscious by the standards we normally use, since there is no thought process or input signals it can be conscious of.

1

Surur t1_j95kco1 wrote

We don't have conscious machines simply because we are not trying to make one, not because it requires some divine spark.

5

Representative_Pop_8 t1_j95q67u wrote

I doubt any company wants to create a conscious machine right now, since, as seen with Bing, the moment some people (rightly or wrongly) assign it sentience is the moment you start getting discussions about regulating "rights" for AI systems, and that is not good for something you wish to use as a useful tool.

We don't really know what causes consciousness either, so we wouldn't know how to make a conscious machine and be sure it is conscious if we wanted to, other than recreating a human brain molecule by molecule.

Now, consciousness could well be something that can be made with a machine of different construction than a human brain, but we don't know the method that does that. Due to this lack of knowledge, even though it's unlikely, we can't truly rule out that a thing like chatGPT could be sentient (but I don't think it is).

2

Surur t1_j95wpca wrote

I would argue a Tesla in FSD mode is conscious, as it has an awareness of itself and its surroundings and responds to them mostly appropriately.

0

Representative_Pop_8 t1_j967lff wrote

But that is not what consciousness is. Consciousness is not about responding to surroundings; a toilet knows when it is full of water, but that doesn't make it conscious.

Consciousness is being able to subjectively feel things internally, like we do; it's the difference between being awake and being asleep, when we don't feel anything.

−1

Surur t1_j96y9q7 wrote

That is actually not the definition.

consciousness (noun)

  1. the state of being aware of and responsive to one's surroundings. "she failed to regain consciousness and died two days later"

  2. a person's awareness or perception of something. "her acute consciousness of Luke's presence"

Now you can add all kinds of mumbo jumbo magic but that's not the definition.

1

Representative_Pop_8 t1_j9a2r04 wrote

You are not even understanding the definitions right. Consciousness, as we are discussing here and as it is generally understood, implies an internal state of awareness or wakefulness, not just responding to inputs. It's not mumbo jumbo, and if you still don't know what consciousness is then you might be a philosophical zombie.

"the quality or state of being aware especially of something within oneself"

"the state of being characterized by sensation, emotion, volition, and thought : MIND"

1

Surur t1_j9a68t0 wrote

And what makes you think AI can't be self-referential?

When a Tesla plans and executes a route, is it not referring to its own present, past and future state?

1

Representative_Pop_8 t1_j9a915t wrote

It's not about being self-referential, it is about the subjective experience, the difference between what you feel when awake vs. when asleep (not dreaming). The body is still making calculations, like the Tesla, when you are asleep: it regulates breathing and heartbeats, measures water and nutrients, and it can wake you up if there is a loud sound or if you really need to drink or go to the bathroom. The Tesla could be doing all those calculations without being awake.

Even when awake, we do a huge part of our thought processing unconsciously. You are not aware of the millions of cones in your eyes, nor of the individual strength of the light each cone detects depending on light frequency. You just see the summary created by your unconscious brain: it unconsciously processed all the information, and you just (consciously) see an array of pixels sorted into a totally arbitrary classification of "colors".

I am not saying that an AI, even a Tesla, can't possibly ever be sentient, just that what you mentioned in your post above is not enough.

1

Surur t1_j9ac97m wrote

Who said anything about sentience? Do you think animals are conscious? If so, there is a point at which computers are also conscious.

1

Representative_Pop_8 t1_j9acqb1 wrote

Consciousness is having sentience at that instant. There are other uses of the word, of course, like moral consciousness, but that is not what everyone here is talking about. When people use consciousness/sentience in regard to AI, they are pretty much using them as synonyms. Sentient is much more specific, while consciousness does indeed have other meanings not necessarily implying sentience. But even the first definition you provided implies sentience. Like I mentioned before, the difference between being awake vs. not is being sentient or not: you don't feel anything when asleep, you do when awake.

1

Surur t1_j9aj4ck wrote

This is exactly the mumbo jumbo I was talking about that people invent to separate themselves from machines and animals.

The simple fact is that at its most basic, consciousness means being able to perceive and respond to external stimuli.

It's merely because of all the nonsense you add that you can claim supremacy over a simple car.

1

Representative_Pop_8 t1_j9am232 wrote

>The simple fact is that at its most basic, consciousness means being able to perceive and respond to external stimuli.

If you mean perceive as in consciously perceive, then yes; you need subjective experience to have consciousness. It is not just responding to external stimuli.

Consciousness is having sentience and subjective experience in general. A toilet can respond to external stimuli: it removes water when you press the lever and adds water until it senses it is full, and I am pretty confident it is not conscious.

>It's merely because of all the nonsense you add that you can claim supremacy over a simple car.

What part is nonsense? All I said is the basic understanding of consciousness from everyday experience, medical definitions, and philosophical ones too.

I am also not saying a car can't have consciousness, it is just that you seem not to know what consciousness is, and you mix the concept up with a mechanical response to inputs.

1

Surur t1_j9aqnvt wrote

> A toilet can respond to external stimuli: it removes water when you press the lever and adds water until it senses it is full, and I am pretty confident it is not conscious.

It is conscious of whether you pressed the lever or not.

You seem to be missing the point, which is that there is a spectrum of consciousness, and the richer it is, the more conscious the being is.

0

bustedbuddha t1_j96a7pu wrote

OK, OP, now prove your consciousness is genuine.

4

magosaurus t1_j977dmi wrote

What, exactly, is a divine spark?

4

overturf600 t1_j9691qp wrote

Wtf does “harnessing the divine spark” even mean?

By harnessing it, that would imply we understand how to give it life, which is exactly how we build AI today.

3

Difficult_Review9741 t1_j95y49j wrote

This won't be very popular, but there is a lot of truth to it.

Remember, "divine spark" doesn't have to be a religious term. Even if consciousness is just a result of our neurons firing in a specific pattern, we still have no clue what this pattern is, and if it can be replicated in machines.

Think about it another way: assume that we have a program that manually defines every possible language input and every possible language output. From a black-box perspective, this would seem every bit as intelligent and "conscious" as an LLM, but anyone understanding the implementation would immediately reject the idea that this system is intelligent in any way.
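To make that thought experiment concrete, here is a minimal sketch in Python (my own illustration, not anything from OP or the comment above; the names `RESPONSES` and `lookup_table_bot` are made up) of a chatbot that is nothing more than a hand-defined input-to-output table:

```python
# A toy "lookup table chatbot": every reply is an explicit, hand-written
# input -> output entry. No learning or generalization happens anywhere.
# The table and function names are invented purely for illustration.

RESPONSES = {
    # In the thought experiment this dictionary would enumerate *every*
    # possible input; a handful of entries stand in for it here.
    "hello": "Hi there! How can I help you today?",
    "are you conscious?": "I answer as though I were, but I am only a lookup table.",
    "what is 2 + 2?": "4",
}

def lookup_table_bot(prompt: str) -> str:
    """Return the pre-written answer for a prompt, if one exists."""
    key = prompt.strip().lower()
    # Pure retrieval: inputs without an entry get a fallback, nothing is inferred.
    return RESPONSES.get(key, "(no entry for this input)")

if __name__ == "__main__":
    for prompt in ["Hello", "Are you conscious?", "Explain qualia"]:
        print(f"> {prompt}")
        print(lookup_table_bot(prompt))
```

Judged only by its replies to inputs it happens to cover, such a table is indistinguishable from a "real" model, which is exactly why examining output alone can't settle whether a system is intelligent or conscious.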

The point being, to determine if a system is conscious, we can't simply examine its output. We first have to understand what consciousness is, and we aren't even close to that. There is clearly a lot that separates modern-day AI from humans. Yes, humans sometimes predict the statistically likely next token, but that is obviously not how our brains work in the general case.

As these systems become more advanced, it will be harder to assert with certainty that they are not conscious, but anyone trying to claim that they are right now is either being disingenuous or has no idea what they are talking about.

2

Lawjarp2 t1_j96qtad wrote

You don't have to copy and know every atom before you agree something is like something else. That's just a bad-faith argument. Don't look at just the differences; look at the similarities, and look at how it's able to get so far with such a basic design.

It's like the god-of-the-gaps argument. People constantly point out that we don't know something, hence god; then, if you do explain the phenomenon, it becomes something else. In that way their god is just the gap in our knowledge, and it is forever shrinking.

3

KidKilobyte t1_j969awq wrote

Once you said divine spark, the rest could be ignored. Give me a test for "Divine Spark" and maybe we can talk. You make a lot of assertions, none of which I agree with. Show me proof of what you are saying. I'm sure that in your heart what you say all feels true, but that is because you are starting from the axiom that there is some specialness to having a soul, though you fail to articulate exactly what that specialness is other than to say machines lack it.

2

TheDavidMichaels t1_j97cf0m wrote

Throughout history, technology has advanced at an incredible pace, with innovations that were once deemed impossible becoming commonplace. However, as we develop ever more advanced technology, it is important to keep in mind the fundamental differences between the human brain and digital computers.

While digital computers operate on a binary system with only two states, the brain is an analog system with an infinite number of states. The brain is highly adaptable, capable of learning and changing over time, and functions as an organic quantum computer. This allows it to perform complex computations using very little power, and it vastly outperforms modern digital computers in many ways.

Current types of artificial intelligence such as ChatGPT and other transformer models are impressive in their ability to process and generate language. However, these AI models are still limited in their understanding of context, emotion, and other aspects of human cognition that are essential for true intelligence. It is unlikely that these models, or any other current AI technology, will ever be capable of achieving anything resembling human consciousness or true AGI.

In conclusion, while we continue to develop and improve technology, it is essential to recognize the fundamental differences between the human brain and digital computers. It is also crucial to understand that current AI models such as ChatGPT and other transformers, while useful tools for certain applications, are limited in their ability to achieve true AGI due to their model limitations. As of yet, there is no clear path or innovation that would overcome these limitations.

1

Ok_Sea_6214 t1_j998w9w wrote

2015: "AI won't beat top human players at Go for another decade."

2017: "AI won't beat top human players at Dota for another decade."

2019: "AI won't beat top human players at Starcraft for another decade."

2020: "AI won't be anything close to general intelligence for another decade."

2022: "It looks like general intelligence but that's not real intelligence, that'll take another decade."

ASI already exists; we're just being made aware slowly so as not to cause a panic.

1