
Clean_Livlng t1_irts9zi wrote

It's going to act identically to the way it would if it were conscious, no matter how intelligent it gets. Right?

It's also not something we know how to test for, and it's possible we'll never be able to know whether anything other than ourselves is conscious. It's reasonable to assume other humans are conscious: we experience consciousness ourselves, so why wouldn't other humans, with human brains like ours, experience it too?

We don't know what it is that causes consciousness. Would perfectly simulating a human brain within a computer give rise to consciousness, or would it still lack something?

If something isn't conscious, then pain doesn't actually 'hurt' it. It's just reacting to stimuli; it's not having a subjective experience of unpleasantness. So do we treat AI as if it could possibly be conscious and make it illegal to cause it pain, meaning that whatever we've got going on in our brains that makes pain feel so bad, we couldn't replicate in an AI and then trigger intentionally? Or do we assume it can't possibly be conscious, and anything goes?

If a human copies their brain into a computer, will the copy have any legal protection from being tortured? We don't know if it can be conscious, but we know it's intelligent, and it seems to us to be the same person it was outside the computer. Imagine someone decides to torture it, or does something else that would be illegal to do to a person: do we punish the flesh-and-blood human who did this?

It's going to act identically to the way it would if it were conscious, unless being conscious or not changes how it behaves. What difference would we notice?

https://en.wikipedia.org/wiki/Philosophical_zombie

> "A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience.[1] For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience.
>
>Philosophical zombie arguments are used in support of mind-body dualism against forms of physicalism such as materialism, behaviorism and functionalism. These arguments aim to refute the possibility of any physicalist solution to the "hard problem of consciousness" (the problem of accounting for subjective, intrinsic, first-person, what-it's-like-ness). Proponents of philosophical zombie arguments, such as the philosopher David Chalmers, argue that since a philosophical zombie is by definition physically identical to a conscious person, even its logical possibility would refute physicalism, because it would establish the existence of conscious experience as a further fact.[2] Such arguments have been criticized by many philosophers. Some physicalists like Daniel Dennett argue that philosophical zombies are logically incoherent and thus impossible;[3][4] other physicalists like Christopher Hill argue that philosophical zombies are coherent but not metaphysically possible.[5] "

If someone says that pain is an illusion and we're not really conscious, pinch that person as hard as you can. It's OK: they themselves have said they're not really experiencing suffering. The claim is self-evidently false. Creatures that respond to stimuli by avoiding damage aren't necessarily conscious or suffering (unless something that isn't intelligent can experience suffering), so mere stimulus-response can't be all that's happening in us.

Pain causes some people to kill themselves. It's not an advantage to suffer, and if we weren't conscious we could still be intelligent and respond to pain signals in more helpful ways. "Broken leg? Don't stand on it." An intelligent (but not conscious) brain decides the body has a broken leg and doesn't walk on it, all without a conscious experience of suffering being necessary.

I'm going to make a logical leap, and I don't know how far, because I'm closing my eyes first... consciousness could be necessary for a brain to achieve good results when combined with a body, once you get beyond a certain lower limit of intelligence. Perhaps it also requires that the brain simulate future events and keep track of a 'social/conceptual inner world'. We have these ideas in our minds about what's going on 'out there', and perhaps consciousness arises to deal with the complexity.

Once you have consciousness, perhaps it no longer works to have pain signals arrive as information that isn't experienced as suffering. So our brain needs to metaphorically 'whip us' for us to behave correctly. Whenever consciousness occurred in our evolutionary past and we didn't experience pain as suffering, we didn't end up passing on as many offspring fit for the local conditions. In a kinder world with no predators and lower gravity, perhaps there wouldn't have been enough selective pressure for consciousness to arise.

In saying this, I'm implying that it might be possible for a creature to be intelligent but not conscious; that consciousness could serve a particular purpose, and that by chance evolution selected for it in us. We don't know if the 'physical brain' of a computer-based AI would have the necessary 'ingredients' to form consciousness, or even if it did, whether we'd chance upon designing AI in a way that'd make it conscious. Especially since AI might not need a conscious experience in order to survive: its programming is absolute, even if we don't know why it made a decision.

If our 'biological programming' were absolute, we wouldn't need a conscious experience of pain/suffering in order to avoid things that harm us. From this, I hastily and recklessly conclude, to the point that someone is, right now, trying to talk me down from the logical ledge that I'm about to leap off... that our programming is not absolute. Or that our subjective experience of pain and suffering is entirely unnecessary. One or the other.


Are we conscious because we're intelligent... or does our intelligence come as a result of us being conscious first? Human babies are conscious at some point; are they conscious before we'd consider them intelligent?

I am jumping all over the place logically, in the dark, in the hope that my feet find solid ground. Or that by falling, others can know where not to jump.

It's incredible that we can be conscious: not just intelligent, but having a subjective experience of sense data. I'm paraphrasing and exaggerating the quote, but someone once said that if you enlarged the brain so that every atom was the size of a windmill, and you went inside to look around, you wouldn't find anything that could be responsible for consciousness, just gears turning.

There is something special about the way the stuff of the universe can make consciousness happen. Something we can't even guess at in a way that makes sense. We can say "quantum foam" but nobody really understands how these things could relate to consciousness.


I sometimes feel that it should be impossible for our physical brains, based entirely on the physical mechanisms of which they're made, to generate the subjective experience of consciousness I'm having. At the same time, everything that exists must be considered natural, so there is no supernatural element that could be generating consciousness.

The only reason I'd entertain the idea that consciousness could exist in humans is that I am having the direct subjective experience of it right now.

So of course I believe it's possible that AI might not have the particular physical 'special sauce' to generate consciousness that we do, because that something could be the thing that makes the thing that makes the thing... etc., that makes the smallest fundamental particles we're aware of work the way they do.

It's physically caused, but we don't know of any physics we're able to observe that should, or could, result in us having a subjective conscious experience of sense data.

We don't know how, and we don't know of any ideas that would explain it in a way that makes sense.


TLDR:

AI could either be conscious or not, and we don't know which. It's possible we can never know whether AI can be conscious, not even with the most advanced technology and knowledge it's possible for us to acquire in the distant future.

We don't know. We can't know. We won't know.
