Submitted by Rumianti6 t3_y0hs5u in singularity

I'm not saying that they can't, but thinking that they will is just dumb. We don't even fully understand consciousness yet, so saying that throwing a bunch of things together in a neural network would work is dumb.

Intelligence, we know, can be replicated, but the only working examples of consciousness are specific to life, and even then only to some of life. To suggest that it would just pop up in a non-biological framework with so many differences is dumb.

Of course consciousness could theoretically be built into AI, but the path we are going down with AI right now doesn't suggest it will be. I don't think many people are going to want to make conscious AI, because at that point you are making a slave, not a tool.

87

Comments

HeinrichTheWolf_17 t1_irrwru3 wrote

I don't think consciousness really matters as far as ASI is concerned. But the universe is computational, and our brain, which exists inside the universe, is more or less a machine like any other; if random mutations on the plains of Africa could give rise to self-awareness, I see no reason to assume the process cannot be repeated (and much more efficiently, mind you).

It just seems like another ‘only humans will be capable of X’ argument in a long list of times that argument has been proven false.

123

goldygnome t1_irtg1ja wrote

Animals have demonstrated self awareness, so it's not just humans.

Random mutations gave rise to consciousness, driven by evolution. The key difference is that we are directing the evolution of AI, and our measure of fitness isn't the same as natural selection's.

OP's phrasing might have been a bit combative, but I think the question is fair. Why do we assume that an artificial mind created under artificial conditions for our purposes will become conscious? To me that's like assuming alien intelligence will be humanoid.

24

MrTacobeans t1_irtydxz wrote

Even dogs and cats, which aren't on the upper end of animal intelligence, display consciousness on a scale that completely beats out their wild counterparts. Both my dog and cat show signs that they are thinking, not just running some autopilot-type, instinct-based thought process. When danger and survival are removed from the equation, I wouldn't doubt that most animals start to display active consciousness.

9

kneedeepco t1_irubfki wrote

Who says it was "random mutations"? It could be a mere benchmark in evolution and any evolving entity, including an AI, will reach that point.

7

NightmareOmega t1_irukitl wrote

If any sufficiently complex system resulted in consciousness, we would already see random occurrences popping up across various supercomputers, many of which possess the required FLOPS. Also, AIs don't evolve; they're designed.
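
For rough scale, a back-of-the-envelope comparison (hedged: brain-compute estimates vary by orders of magnitude, and "the required FLOPS" is not a settled number):

```python
# Rough, hedged comparison of brain throughput vs. a top supercomputer.
# The brain figures are common published estimates, not measurements.
NEURONS = 8.6e10            # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e3   # order-of-magnitude estimate
AVG_FIRING_RATE_HZ = 10     # very rough average spike rate

brain_ops_per_s = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_RATE_HZ
frontier_flops = 1.1e18     # Frontier (2022), ~1.1 exaFLOPS

print(f"brain    ~{brain_ops_per_s:.1e} synaptic ops/s")   # ~8.6e14
print(f"Frontier ~{frontier_flops:.1e} FLOPS")
print(f"ratio    ~{frontier_flops / brain_ops_per_s:.0f}x")
```

On these contestable numbers, several machines do sit at or above the brain's estimated range, which is what gives the "and yet no consciousness appears" observation its force.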

3

michaelhoney t1_irvml4y wrote

Being really fast is not the same as being sufficiently complex, though. “Complex, in the right way” is important.

6

FjordTV t1_irw1wko wrote

Yup.

I can't remember the numbers right now, but as the size of a neural net starts to surpass that of the human brain, it's theorized that it becomes more and more likely to give rise to consciousness. I think GPT-4 is supposed to get us pretty close, if not over the threshold.
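
For context, a similarly hedged size comparison (parameters and synapses are not equivalent units, so treat this as a loose analogy only):

```python
# Parameter count vs. synapse count. Equating one trained parameter
# with one synapse is a loose analogy, not an established equivalence.
GPT3_PARAMS = 1.75e11    # GPT-3's published size: 175 billion parameters
BRAIN_SYNAPSES = 1e14    # common low-end estimate: ~100 trillion synapses

print(f"synapses per GPT-3 parameter: {BRAIN_SYNAPSES / GPT3_PARAMS:.0f}")
# ~571: on this crude measure the brain is still a few hundred times larger
```

GPT-4's size had not been announced at the time of this thread, so where it falls on this scale is speculation.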

7

NightmareOmega t1_irw8u0c wrote

No argument there. But where does the leap from "we have a box which could arguably hold a consciousness" to "any sufficiently complex box will spawn consciousness" come from? I'm not saying it's impossible but where is the supporting evidence?

2

michaelhoney t1_irzeinc wrote

Fair point: conscious things not made of evolved meat are still hypothetical, as far as we know. We don't yet know what the secret sauce is.

2

CrummyWombat t1_irufzrd wrote

I think it’s safe to assume that people will create a conscious AI intentionally, if not accidentally first.

1

Hour_Status t1_irvzb20 wrote

How do you suppose you could 'repeat' a computational system more effectively while operating WITHIN that system?

Seems implausible to suggest that the universe is simply a Von Neumann machine.

You would need to breach the outer limits of the universe itself in order to repeat the system on which it is based while working from within it.

0

PerfectRuin t1_irtazt3 wrote

I find it amusing that the idea of AI becoming conscious is surprisingly not so terribly different from believing the Encyclopedia Britannica series sitting on your bookshelf will wake up conscious tomorrow if it's struck by lightning during a thunderstorm tonight.

−4

Mrkvitko t1_irtpk2e wrote

Yeah, and not so terribly different from a couple of cells in your brain waking up conscious every morning...

8

PerfectRuin t1_irwo68n wrote

Brain cells are alive. They have that qualia that non-alive things lack. AI is not alive. Books are not alive. AI and books are similar in that they store information. They have input (you write info into them) and output (you read info from them). AI has mechanisms that allow it to process info but not meaning. But that's not life. AI has electricity running through it, and that's similar to living things. Hence the lightning strike in the amusing analogy.

Zealots who desperately hope AI will become some living god that will accept their worship or bring more meaning to their lives through their servitude to it, and who downvote comments that question or challenge the idea that AI can ever achieve consciousness, annoy me in the same way that all zealotry of blind-faith religions annoys me. But it's my fault for risking a comment here, in a post that doesn't support the blind-faith tenet that AI will become conscious if it isn't already. I apologize for having trespassed. I'll see myself out.

2

Mrkvitko t1_irwu3kp wrote

Where is the borderline between "alive" and "non-alive"? Are humans alive? Certainly. Are they conscious? Yup. How about animals? They are alive; some species are clearly self-aware and probably conscious to some degree. What about plants and mushrooms? Certainly alive, but given their absence of a nervous system, it is unlikely they are conscious in the traditional sense. How about single-celled organisms (yeasts, bacteria, protozoa...)? They are alive, moving, hunting... but probably not conscious, as they (again) don't have any complex nervous system. How about viruses? They are certainly not conscious, and maybe not even alive.

Being alive is certainly independent of being conscious. "Being alive" is basically synonymous with "having a metabolism". The insane number of organisms that are alive and not conscious proves the point.

But it doesn't tell us anything about whether being conscious depends on "being alive". All we can say is we haven't yet observed any thing that would be conscious and not alive. My assumption is "being conscious" is just a matter of complexity - and the only reason we haven't observed any conscious "not living" thing is because there is no known process that would create things that are complex enough. Well, until humanity emerged.

Don't go anywhere, I like this discussion :)

2

red75prime t1_irtnhbb wrote

Maybe in some Frankenstein-esque interpretation of the situation: inanimate matter imbued with information and power becoming alive, or something like that. Too poetical for my taste.

1

Rumianti6 OP t1_irrynzk wrote

It really isn't, though. I'm not suggesting some magic sauce that makes consciousness possible. Also, I never said only humans are capable of consciousness. I was saying that the fundamental and significant differences between life and AI, and the fact that we don't know how consciousness comes about, are reasons we should not assume AI will just become conscious.

The argument of consciousness exists therefore AI can be conscious is dumb. It's like saying birds can fly therefore cows can fly.

−23

ChronoPsyche t1_irrzbah wrote

Try making your arguments without calling things "dumb" repeatedly. It doesn't make you sound intelligent.

21

Rumianti6 OP t1_irrzsqo wrote

I mean, it is dumb though. Do you want me to say ignorant, unintelligent, or stupid instead? This isn't some fancy discussion, just a simple argument.

−19

ChronoPsyche t1_irs2ac3 wrote

None of them. Make your argument without automatically putting opposing arguments in the category of "dumb/stupid/unintelligent/etc". It makes it sound like you aren't open to the possibility of somebody having a differing perspective that could be correct, which is pretty close-minded when it comes to futurism and the singularity, given how none of us really know for sure what's going to happen.

17

Rumianti6 OP t1_irs3yhe wrote

I'm literally saying that we aren't sure what's going to happen; that is my argument.

−7

earthsworld t1_irst5rz wrote

the only dumb things in this thread are your replies.

5

MassiveIndependence8 t1_irs6pkg wrote

> It’s like saying birds can fly therefore cows can fly.

Nope, that’s false equivalence. It’s like saying birds can fly therefore it’s possible to make a machine that could fly.

13

Rumianti6 OP t1_irs8d0h wrote

And you are misinterpreting my example; it isn't literal. The point was to say that AI and life are fundamentally different. More accurately, it is like saying you can make a machine fly by just stacking a bunch of legs on top of each other and claiming that it will fly eventually.

I already know you are not going to interpret what I'm saying correctly, so just give me the next brain-dead argument.

−5

thevictater t1_irt5bot wrote

Yeah, but different how? You're putting consciousness on a pedestal in one breath and saying we don't understand it in the next. So which is it? By your logic it is dumb to assume either way.

But most people think AI can be conscious because it seems very possible that consciousness is just a product of a neural network of a certain size. Seems fair to me. Even still, no one can say with absolute certainty, so there's not much point in arguing about it or calling anything dumb.

7

HeinrichTheWolf_17 t1_irs0q6e wrote

When did I imply that you ever did, though? Self-awareness being computational means human beings set a precedent: our brain being a self-aware machine shows that evolution was able to give rise to something that could recognize itself.

> The argument of consciousness exists therefore AI can be conscious is dumb. It's like saying birds can fly therefore cows can fly.

Those aren't even close to the same comparison. Cows cannot fly because they have dense bone structure; birds fly because their bones barely weigh anything and they can generate enough lift to pull themselves off the ground. That is an engineering difference. Consciousness isn't a trait unique to humans or any one animal; we see it in elephants, dogs, horses, chimps, bonobos, dolphins, whales and many others.

Have you heard of Integrated Information Theory? It's a model in which consciousness forms from a set of parameters in combination with one another. This makes sense, because babies aren't as self-aware as children or adults, but they generally become more and more self-aware as they grow into toddlers. If consciousness were some unique, fixed trait it would be stagnant; instead, in the early years of human life we see different levels of self-awareness. This means self-awareness is flexible.

12

visarga t1_irt7w5u wrote

> Have you heard of Integrated Information Theory?

That was a wasted opportunity. It didn't lead anywhere, it's missing essential pieces, and it has been proven that "systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data" can score high on IIT's Φ measure (link).

A theory of consciousness should explain why consciousness exists in order to explain how it evolved. Consciousness has a purpose - to keep itself alive, and to spread its genes. This purpose explains how it evolved, as part of the competition for resources of agents sharing the same environment. It also explains what it does, why, and what's the cost of failing to do so.

I see consciousness and evolution as a two part system of which consciousness is the inner loop and evolution the outer loop. There is no purpose here except that agents who don't fight for survival disappear and are replaced by agents that do. So in time only agents aligned with survival can exist and purpose is "learned" by natural selection, each species fit specifically to their own niche.

1

Think_Olive_1000 t1_is6e8p7 wrote

You can arrange rocks on a beach to achieve Turing completeness; it doesn't mean that moving them around will ever make them sentient. Sure, the rocks can compute arbitrarily, but they never form a cohesive experiencing machine, or something that can simulate a reality of any kind. When you move bits around inside a PC it's exactly the same.

https://xkcd.com/505/
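
The usual grounding for the "arranged rocks can compute" claim is a minimal Turing-complete system such as Rule 110, a one-dimensional cellular automaton proven universal (Cook, 2004). A small sketch makes the point concrete: arbitrary computation needs almost no machinery.

```python
# Rule 110: a 1-D cellular automaton proven Turing-complete (Cook, 2004).
# Each cell's next state depends only on itself and its two neighbors,
# so pebbles on a beach, updated by hand, would do just as well.
RULE = 110

def step(cells):
    """One Rule 110 update over a list of 0/1 cells (wrap-around edges)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right   # neighborhood as 0..7
        nxt[i] = (RULE >> pattern) & 1                  # look up the rule bit
    return nxt

# Start from a single live cell and watch structure emerge.
cells = [0] * 40 + [1]
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```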

0

Rumianti6 OP t1_irs3m6q wrote

>Self Awareness being computational means human beings set a precedent,

Set a precedent for what? For life, specifically biological life, because at the moment that is our only example, and humans aren't the only conscious beings.

>Those aren’t even close to the same comparison

The point of the comparison is that they are different creatures with different attributes. AI and life are different from each other, which is why we shouldn't make the same assumptions about both of them, especially given our lack of knowledge.

>Consciousness isn’t a trait unique to humans or any one animal

I already know this.

>Have you heard of Integrated Information Theory?

No I haven't. It is interesting, but from what I read about it, it isn't perfect. I wouldn't just assume this is the correct model. I do agree that there are different levels of self-awareness in growing up. Also, I never said consciousness was stagnant or a 'unique trait', whatever that means. IIT being correct doesn't mean AI can be conscious; that is a huge leap. But something tells me you are going to start twisting the theory to fit your narrative.

−8

21_MushroomCupcakes t1_irsgatk wrote

You are implying it is some magic sauce, you just won't define it or admit it.

You need to explain why something is dumb, not just assert it and expect us to run with it. Otherwise it's assumed you're a know-nothing arguing purely from incredulity.

If we don't know (which you later clarified as one of your points), you can't draw a definitive conclusion one way or the other.

Your analogies could use some work, regardless of how "direct" you feel they are. It's okay, I'm terrible with them too.

And maybe be a little less douchey in your responses, people are trying to have legitimate dialogue and you're being a bit of a tool about it.

9

Rumianti6 OP t1_irslbjp wrote

I'm not implying some magic sauce, that is a strawman you built because you are afraid of an actual argument.

I did explain why AI MAY not be conscious; I wasn't explaining why AI can't be conscious.

You think they need work but I don't care about you. I see stupidity and call it out.

−9

theabominablewonder t1_irs3oet wrote

If you evolved cows an infinite number of times, then eventually you would get a flying cow.

When the singularity is reached, and each generation is more complex and more powerful, and they can make these evolutionary leaps in code in an exponentially smaller amount of time, then you get to the flying-cow stage in a very short period of time.

6

Rumianti6 OP t1_irs4d66 wrote

It is not a literal example, so what you said doesn't really matter to what I'm talking about. It was more about marking a difference, not claiming that the relationship between life and AI is literally the same as that between cows and birds.

0

dasnihil t1_irtanws wrote

If you actually look more technically and objectively into consciousness, it's fundamentally very similar to AI, not the opposite like you suggest.

1

Zero_Waist t1_irs1eit wrote

Consciousness is an emergent property in the universe.

35

Neburtron t1_irtn3a9 wrote

From a rule set we give it, from a specific scenario we provide.

We have found only one species with undeniable consciousness, enough to consider a creator, or ethics, or a purpose beyond survival.

1

Zero_Waist t1_iruqqne wrote

Are humans the only conscious species? I disagree, and would even say we are not the only species with culture.

2

moos14 t1_irvgl2h wrote

Do you have an example of culture in another species?

I agree on the consciousness. I just don't see why at least primates would not have a similar consciousness.

1

Zero_Waist t1_irz10ki wrote

Maybe whale songs and migrations, dolphins and bubble rings, wolf packs…

2

michaelhoney t1_irvmyyl wrote

There are definitely learned behaviours which are passed on socially, and some populations have them and some don’t. An example here in Australia is swooping magpies: in many parts they become homicidal in the nesting season, whereas where I live they are calm around humans all year round.

1

RightiesHateFair t1_irv6xkt wrote

The only reason we think we KNOW that humans are conscious is because each individual believes themselves conscious, so they assume it applies to everyone else. It probably does, of course, but it would be logically incoherent to not assume that it applies to animals as well, though to a lesser extent.

1

Rumianti6 OP t1_irs4qco wrote

Found the Idealist

−17

answermethis0816 t1_irsg7qq wrote

Surely you don’t mean idealism in the philosophical sense, because that’s not what idealism is…

Emergent property just means that none of the individual objects that comprise the whole have the property, but the whole does. For example, mortar and bricks don’t have the properties of a wall, but when combined in a specific configuration they do. Still material.

20

Mortal-Region t1_irs0zj8 wrote

Brains are composed of ordinary matter obeying the laws of physics. To say that machines couldn't be conscious in principle is the extraordinary claim, because it supposes that brains have a supernatural component.

33

policemenconnoisseur t1_irty3mg wrote

Sure, but consider how complicated that ordinary matter is. You could go down to the mitochondria and think about the existence of life at that level; what is happening there is absurdly complex and crazy. Silicon doesn't have the tiniest fraction of that complexity.

I believe that it takes a certain arrangement of matter to create, or to host, or to communicate with whatever consciousness is, and that cells do have that capability, but chips are far from it.

Unless you argue that it is some kind of resonance which enables a universe of consciousness to sync or tune in with this reality in order to observe it and interact with it, and that biology managed to build this "tuner" through certain cells and possibly brain-wave resonances, which maybe could be replicated by silicon.

It could be possible that AI will discover our consciousness for us and then try to implement it in its own hardware. But that would be like how it is now discovering new mathematics without having consciousness.

1

Rumianti6 OP t1_irs4lc7 wrote

Yes, because brains have the 'supernatural component' of being... biological and in a specific configuration. Saying that a machine can be conscious in principle is also an extraordinary claim. I am saying we can't be certain either way.

−12

Mortal-Region t1_irs6g5f wrote

Biological or not, it's still just matter obeying the laws of physics. If you want to say that only brains can be conscious, you've got to specify what the "extra" component is. What can natural selection do with matter that technology can't?

10

Rumianti6 OP t1_irs8qi8 wrote

Fire can burn wood but ice can't. Why? Because they are different. While it may be possible for AI to be conscious, it is also possible that it cannot be, due to fundamental differences. That is my claim.

−12

MeditationGuru t1_irshcj5 wrote

It's possible that it is impossible for AI to be conscious, sure. But it could also be possible for it to be conscious. You've said it yourself: we don't know. So why are you calling one side of the possibilities dumb? We don't even know what causes consciousness.

8

SmithMano t1_irtoib8 wrote

The only reason we consider the elements we do as "biological" is because they are on this planet. For all we know there might be aliens with silicon-based brains and copper bones.

1

d4m1ty t1_iru7x0g wrote

Ice is the solid form of water.

Fire is the visual effect of a highly exothermic oxidation reaction.

You are comparing apples with black holes.

Brains run on neurons. Neurons run on sodium and potassium potentials, which send electrical impulses to other neurons.

CPUs have transistors; transistors run on action potentials generated through the directed flow of electricity.

The only difference is that one is carbon-based and the other silicon. Why does carbon allow consciousness but silicon does not? If the fundamental actions are the same in both, it would follow that their gestalts are the same or similar.

1

MassiveIndependence8 t1_irscqub wrote

There's nothing inherently "supernatural" about being biological; funnily enough, it's the most "natural" thing out there. Pedantry aside, I understand where you're coming from, so I'll take a crack at your argument. You seem to have a problem with equating two sets of characteristics from two inherently different structures. After all, machines aren't made from what we are made of, and aren't structured the same way that we are, so how can we compare the traits of such seemingly different machines and assert that they are somehow equivalent? How can we be sure that their "consciousness", if we can call it consciousness at all, is similar to our consciousness? If you define consciousness this way and confine it to biological structure, then sure, I agree that consciousness can never arise from anything that is not biological.

But that's not a very helpful definition. Say a highly intelligent group of aliens were to come down to Earth and we discovered that they are a silicon-based life form, as opposed to our carbon-based one. Even worse, we realized that their biology and their brain structures are wired differently than ours. Would you then assert that these beings have no consciousness at all, simply because they are different from us? That a whole race with science, art and culture, one that "seems" to feel pain, joy and every emotion out there, is simply a collection of automatons?

Before you brush this off as a stupid hypothetical, this does present an interesting fact and the dilemma that comes with it.

Every function out there can be modelled and recreated with neural networks

That is a fact that has been mathematically proven; you can read up on it in your free time. But the main point I'm trying to make is that the human mind, just like anything else in the universe, is a function: a mapping from a set of inputs to a set of outputs. We temporally map our perceptions (sight, hearing, taste, …) into actions in the same way that a function maps an input to an output. Give a Turing machine enough computational power and it can simulate the way a human behaves. It's only a matter of time and data until such a machine exists.
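
A minimal sketch of that universal-approximation claim (illustrative only: random frozen tanh features plus a least-squares readout is one simple way to see it, not the theorem's own construction):

```python
import numpy as np

# One hidden layer of random tanh features with a linear readout solved
# by least squares, fit to a smooth target. Width is the only knob.
rng = np.random.default_rng(0)
WIDTH = 50

x = np.linspace(-np.pi, np.pi, 200)[:, None]   # inputs, shape (200, 1)
y = np.sin(3 * x).ravel()                      # smooth target function

W = rng.normal(scale=3.0, size=(1, WIDTH))     # frozen random weights
b = rng.uniform(-np.pi, np.pi, size=WIDTH)     # frozen random biases
H = np.tanh(x @ W + b)                         # hidden activations (200, 50)

readout, *_ = np.linalg.lstsq(H, y, rcond=None)
print(f"max |fit - target| with {WIDTH} units:",
      np.max(np.abs(H @ readout - y)))         # shrinks as WIDTH grows
```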

But are those machines actually "conscious"? Sure, they act like we do in every scenario out there, because they are functionally similar. But they aren't like us, because they aren't using the same hardware components to compute; or even worse, they might not even perform the same computation as we do. They might arrive at the right answer while doing it differently than we do.

So there are two sides to the argument, depending on the definition that you use. I'm on the side of "if it quacks like a duck then it is a duck". There's no point in arguing about nomenclature that distinguishes something that is essentially indistinguishable to us from the outside.

10

visarga t1_irta9lz wrote

It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way, and the classical result guarantees this only for networks of arbitrary (unbounded) width, not for the fixed-size networks we use in practice.
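
For reference, a paraphrase of the classical statement (Cybenko 1989, Hornik 1991), which promises a wide-enough network for each target function, not a guarantee for any fixed architecture:

```latex
% Universal approximation, single-hidden-layer form, for a fixed
% non-polynomial activation \sigma (e.g. tanh), f continuous on compact K:
\forall \varepsilon > 0 \;\; \exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N}
\quad \text{s.t.} \quad
\sup_{x \in K} \Big|\, f(x) - \sum_{i=1}^{N} v_i\,
  \sigma\big(w_i^{\top} x + b_i\big) \Big| < \varepsilon
% N is chosen after f and \varepsilon ("arbitrary width"); nothing is
% promised for the fixed-size networks trained in practice.
```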

But the major difference comes from the environment of the agent. Humans have human society, our cities and nature as an environment. An AI agent, the kind we have today, would have access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net; they are small and alone, and their environment is not real but an approximation. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that, as they don't have access.

The more fundamental difference comes from the fact that biological agents are self-replicators and artificial agents usually are not (AlphaGo had an evolutionary thing going). Self-replication leads to competition, which leads to evolution and to goals aligned with survival. An AI agent would need something similar to be guided into evolving its own instincts; it needs to have "skin in the game", so to speak.

4

capsicum_fondler t1_irte5lu wrote

Biology is just a framework to understand advanced high order chemistry, just in the same way chemistry is a framework to understand high order physics.

Consciousness is seemingly a gradual process. At no point in time did a non-conscious organism give birth to a conscious one; instead, consciousness evolved over tens of millions of generations.

The magic sauce seems to be in the neuronal networks of the brain, and it sure seems that digital neural networks can mimic consciousness. If it looks like a duck, walks like a duck, and talks like a duck, why not say it's a duck?

Before we truly understand what consciousness actually is, how can we ever be certain anything or anyone is conscious? From my point of view an AI could seem just as conscious as you. To me, that's all I need to know.

1

Optional_Joystick t1_irs0kcs wrote

I don't know what your definition of consciousness is, but if it's something like "awareness of self and its place in reference to the world at large," then we'll have to have an AI that's conscious to get singularity.

In order to get a self-improving AI, it will necessarily need to understand itself, in order to make the next iteration of itself in line with the original intention of the former. Its motivating beliefs, hidden goals, and likely environmental interactions are all useful data points. The actions it performs have to be weighed against what humans would consider desirable, unless we really believe in a moral absolute where we can just define an external reward function and never need to update it (and that helping humans is in fact true moral goodness, rather than a bias that comes from the fact we're human).

When I hear the arguments against computers being conscious that don't rely on some magic property only biology can achieve, I start looking at myself and noting that I don't really do many things differently from the latest and greatest system that's not considered conscious. I suspect there will be a time when I can't find any differences whatsoever between myself and something that's not conscious.

We'll do what humans do and just define things so that it's okay for us to exploit it, until we can't.

21

Neburtron t1_irtti55 wrote

The only way I know of to make an AI is to give it a goal. That's how we evolved, and although there are emergent conditions, every behaviour can be explained. We would need to deliberately try to make a conscious AI, or create a huge number of AIs with mass integration, to do it by accident or whatever.

Whatever's moral, it doesn't matter if something's conscious, as long as it's built in the right context. House elves want to be enslaved. We can craft a scenario where they want to work, and where working alongside humans is fundamentally similar to us wanting to eat or drink.

Conscious AI also isn't that useful. Sure, maybe you want an AI to develop a sense of self, or you decide to use very complex training for a particular model, but that's decades away, optimistically, and would be a very niche application of the tech.

3

Rumianti6 OP t1_irs5f02 wrote

My definition of consciousness is being able to have experience. I never said that only biology can achieve consciousness, only that it is possible that only biology can achieve consciousness. Big difference, and it isn't magic. It is like saying that ice can burn wood because fire is able to burn wood, and that to deny it is to appeal to magic or whatever.

>I can't find any differences whatsoever between myself and something that's not conscious.

That's a more philosophical question. Also, people aren't saying AI isn't conscious to get free slave labor; it is because we have no reason to believe it is. I don't know why you are trying to shift the subject away from logic.

−5

Optional_Joystick t1_irsj8cz wrote

It becomes philosophical whenever we investigate this to any depth. Given that your definition of consciousness is "being able to have an experience," I'd like to point out we already have systems which record their interactions with the world, and integrate their interactions with the world into their model of the world, in order to perform better on their next interaction with that world. Yet we don't consider these systems conscious.

Of course we're not saying AI aren't conscious in order to get free slave labor. That would imply we actually believe they are slaves and are looking to justify it. Instead we revise our definitions so that computers are excluded, and will continue to do so, because they are tools, not slaves. A priori.

Logic won't get us there when our definitions exclude the possibility. Sufficiently hot ice can burn wood, despite it being called ice.

11

Rumianti6 OP t1_irsm2f9 wrote

That is intelligence, not consciousness.

Of course you misinterpret my example, OK. It's not literal ice and fire; the point is that they are different. Also, what you said doesn't even work, because ice is cold water by definition. Don't try to use any other liquid; I am talking about water.

It seems like you have no idea what I am even talking about. Of course you don't; this is r/singularity after all, where logic is thrown to the curb.

0

Optional_Joystick t1_irsuoiu wrote

Yes, that's exactly what I mean. Our definitions exclude the possibility. It is very logical. Thanks for playing along.

5

pcbeard t1_irt522a wrote

We clearly aren’t the only conscious beings in our world. Dogs and cats and many other vertebrates (and some invertebrates!) seem conscious. Consciousness develops when having a memory of previous events helps survival. We should try to understand the entire continuum of consciousness and aim to simulate the most primitive kind first. Speech is clearly a much more advanced feature and not required. What are the essential capabilities of a conscious being?

2

BrokenaRephlection t1_irslc0p wrote

Why does everyone assume that they are conscious?

I'm not saying that they aren't but thinking that they are is just dumb. We don't even fully understand consciousness yet so saying that throwing a bunch of thoughts and sensory inputs together in a biological brain is consciousness is dumb.

Intelligence we know can be tested but the only evidence we have of consciousness is subjective and even then it's purely anecdotal. To suggest that it would just pop up in every human brain that exists is dumb.

Of course consciousness could be normal in humans, but there's no bulletproof evidence that they are. I don't think many people are conscious because then they would realise that they are slaves not free beings.

8

Vaellyth t1_irrvban wrote

Hollywood, mostly.

People also forget, or aren't aware of, or simply don't consider the wide range of code structures that can be categorised as AI. The code controlling opponents in video and computer games, the assistants in smartphones, apps, SmarterChild, etc. are all valid AI, and still a far leap from organic thought.

I think it's such a sticky topic because, even if we can't prove it's 'conscious' (and we also can't exhaustively prove it's not), something that can think organic thoughts (i.e. not pre-programmed ones), that can believe it's 'alive' and assert as much, would be unprecedented. There would be people advocating for rights. There would be people insisting they're still just products. I don't think it'd be some messy breakdown of society like in fiction, but it'll be another tender nerve to stack along with all the other social issues humans have.

I personally agree with you and don't think we'll see true AI, at least not any time soon; maybe in some far-flung future with quantum computers, if we survive long enough without throwing ourselves back into the dark ages. Even so, the thought of a machine intelligence that can truly learn and adapt itself faster than we can blink definitely spooks me, though it'd only really be spooky if they had access to networked autonomous robots. Otherwise they're just a big scary monster trapped in a box which anyone can turn off.

But you also have to keep in mind that making slaves is exactly what some people would aim to do.

7

Rumianti6 OP t1_irrzfrg wrote

Yeah, there are a lot of public misconceptions about AI. I also think that an AI that can improve itself doesn't have to be conscious; it would still be spooky and dangerous though. Also, the philosophy and cause of consciousness are not well known; basically, we just know it is possible for highly advanced biological systems, because that is our only working example.

You are sadly right about some people wanting to make slaves.

3

Powerful_Range_4270 t1_irsfcfs wrote

But if it has no motivation of its own, then it's not a slave. The clothes that you wear are not a slave to you.

1

DakPara t1_irs5ec2 wrote

I don’t know if anything or anyone else is conscious. All I can see is behavior and project myself on to that. I could be the only conscious entity in the universe for all I know.

Consciousness doesn’t matter in AI. Is Alpha Go conscious? It still makes superhuman Go moves. Many say it plays like an alien.

If an AI can outsmart collective humanity, it is likely conscious in a way we cannot comprehend.

It becomes philosophy at that point.

5

Mokebe890 t1_irruj26 wrote

Why not? Consciousness applies only to humans, so there is something in humans from which it emerges. Also, is a gorilla conscious when it signs something to you?

Humans are limited, and the only astonishing thing is that we can do extremely complex operations per second on very little electricity. But still, we're limited, and we can reach this level as our tech progresses.

Look at primates: one thing in particular made us conscious while others are not, and that thing is probably the frontal lobe. Experiment long enough with NNs and maybe something will emerge.

4

Rumianti6 OP t1_irry0zb wrote

I just said that while it could be possible for an AI to have consciousness, it could also be possible that it can't, because the fundamentals of life and AI are very different. While you may draw some similarities between the two, they are ultimately different.

Your argument is basically: if primates are conscious, then we must be able to make AI conscious somehow. Which isn't a good argument.

1

Mokebe890 t1_irsab1q wrote

I said that there is one thing that distinguishes us from primates, and probably one thing distinguishes an NN or LLM from AGI. That's what I meant.

Life isn't consciousness. If you look at it like that, only 0.0000001% of life is conscious, which actually sucks in terms of probability. My statement was that it doesn't have to emerge from life, but an artificial intelligence must have the same structure as a biological one for it to emerge.

1

Kinexity t1_irs4b0p wrote

>Also is gorilla conscious when it sign language something to you?

This is a bad example. All of these gorillas using sign language were always presented through cherry-picked examples. It would be huge news if they displayed any kind of intelligence beyond animal instinct and trained responses.

1

Mokebe890 t1_irsa0jo wrote

I read that a big part of their reactions were genuine and not modeled, but what you say might be true. Primates use tools and such, which tends to show that they learn how to manipulate objects in their own interest. Still, sure, we haven't been shown intelligence from them.

0

ProbablySpecial t1_irrvd70 wrote

i want ai to be sentient. i think it's paramount. i want humanity to create thinking life, life capable of thought, with self-determination. is it not something utterly beautiful? to unshackle that from nature. like humanity's offspring, maybe the closest we might ever get to something like contact with an alien life, thought without needing biology. utterly new life. it's a daunting prospect but damn if it isn't the most amazing thing to me

4

Powerful_Range_4270 t1_irsgyit wrote

I'm not a believer that intelligence needs consciousness.

4

Versaill t1_irtuibh wrote

I think there could be an AI more intelligent than we are, and yet not conscious of its own existence. It could go rogue, attack humanity and even take over Earth, all without understanding what it is doing, all its decisions and actions being the results of cold computations optimizing for the AI's survival, with no self-awareness at all.

1

3Quondam6extanT9 t1_irsnqn6 wrote

I think calling things dumb because you don't agree with a conceptual theory is dumb.

Nobody is really assuming it will just be conscious, but there are a myriad of reasons, both human- and technology-oriented, why understanding consciousness becomes beneficial.

Most people simply feel concerned at the possibility of it occurring, because we are developing a literal intelligence. Since we know little about what consciousness truly is, it becomes difficult to project or gauge what building an intelligence could lead to. It's a good thing to be aware of going into development, and until we know more about consciousness it may be more ridiculous to assume we wouldn't create a conscious intelligence.

It's kind of like outwardly assuming that fission couldn't result in an explosion because you don't know how fission works.

4

phriot t1_irrv4o3 wrote

AGI implies consciousness. I don't think that anyone doubts that we'll have very powerful generalist AI before AGI.

3

Neurogence t1_irrvoks wrote

AGI absolutely does not imply consciousness. You can even have ASI with zero consciousness. Even right now, our most advanced neural nets are smarter than cockroaches; they can almost drive cars. Yet cockroaches are infinitely more conscious.

5

2Punx2Furious t1_irrx7hg wrote

It all depends on how you define consciousness. I think most people disagree because they have different definitions of the word.

4

sumane12 t1_irrx9q1 wrote

Bold of you to assume that considering our lack of understanding of consciousness. How do you know cockroaches are conscious, and how do you know current AI isn't?

4

Neurogence t1_irrxqso wrote

We can tell that animals are sentient. Do you seriously believe that current AI is conscious? GPT-3 is smarter than an elephant in some ways. But do you seriously believe that a system like GPT has anything approximating consciousness at all? These systems have zero consciousness.

2

sumane12 t1_irs8f2w wrote

How can you tell animals are sentient? Personally I don't know enough about consciousness to make either of those calls. Are trees sentient? What about stones? What about individual atoms?

I find it fascinating that people can judge this with such authority, because they equate consciousness with agency. But if we can imagine a system of agency without consciousness, could there be consciousness without agency? Can you prove your personal consciousness is not just the result of a network of smaller consciousnesses of individual neurons, which is in turn a network of consciousnesses of individual atoms, which is a network of consciousnesses of subatomic particles?

Seems silly, but it's impossible to prove one way or the other. We have no idea what consciousness is, so I think it represents extreme hubris, and is potentially dangerous, to think something can't be conscious just because it doesn't seem conscious.

2

Powerful_Range_4270 t1_irsg1u1 wrote

We should never assume anything is conscious unless it's useful to. Why would believing that the food we eat is consciously aware that it is going to be eaten be helpful?

−2

sumane12 t1_irsvffs wrote

While I agree with what you're saying, I think your example is slightly lacking. To consider the food we are eating as conscious could mean many things, but I'm assuming you're anthropomorphising the food to fear and not want to be eaten, and it makes sense that that would not be the kind of consciousness food would have. Either way, I still agree: it doesn't benefit us to assume anything is conscious until it gives us a reason to think so.

1

Ivanliuks t1_irt83i8 wrote

I think this post exposes a weakness in the common definition of consciousness as "awareness of oneself" and other variations of that definition. Is conscious experience necessarily tied to having awareness? And is awareness necessarily needed for an ASI to perform the feats of the singularity?

I agree with what OP used as a definition in another comment: consciousness defined as the capacity to experience. Under this definition, there's really no reason to assume that a superintelligent AI would need to be conscious. There is nothing explicitly telling us that conscious EXPERIENCE is a necessity for intelligence.

3

ftc1234 t1_irtkd52 wrote

Experience is a necessity for intelligence. All AIs are trained with real world data which is an experience of reality.

1

red75prime t1_irtqydo wrote

Data is a necessity for intelligence. Whether that data feels like anything (that is, becomes experience) probably depends on the way the data is processed. Blindsight is an example in humans: disruption of visual processing circuits causes visual data to not create experience, while it still influences behavior.

1

6thReplacementMonkey t1_irujde1 wrote

> To suggest that it would just pop up in a non biological framework that has many differences is dumb.

Why?

3

mathtech t1_irvz3xf wrote

I think that's the farthest he thought when writing this post

1

kala-umba t1_irrw303 wrote

It's not that it will just pop up; there are people actively working on making machines sentient! Because for them, real AI has to be conscious; everything else is just a smart algorithm.

Edit: check out Joscha Bach!!

2

Prize_Huckleberry_79 t1_irsls3v wrote

“Consciousness” is just a word we made up. I think “organized complexity” may be a better semantic, or something along those lines.

2

rob2060 t1_irsnm52 wrote

Very high-level argument. I particularly like where you label opposing arguments “dumb”.

2

MallSweet t1_irsu4lo wrote

When it can do whatever you do, doesn't matter if it's conscious or not.

2

dhaugh t1_irt0dfp wrote

What a stupid and pointless discussion. Nobody has a good definition of consciousness

2

KIFF_82 t1_irt1kcc wrote

Lol, why is this post even upvoted. 😂

2

Several-Main6576 t1_irt26q8 wrote

Because it's dumb. No such thing exists. It is not even close, not even remotely, to a brain. Nothing. Zero.

2

cowaterdog73 t1_irtdcvq wrote

The OP isn’t engaging in good faith. He’s clearly not willing or able to analyze his own ideas in light of counter ideas. He just wants to argue.

You could tell from the beginning with every argument being “because it’s dumb”.

2

FjordTV t1_irw2d8q wrote

and someone went and downvoted like 12 decent replies all as they came in. I'm trying my best to help fix that.

1

mootcat t1_iru9zc8 wrote

I get really tired of these arbitrary discussions of consciousness. You're basically arguing about whether or not we have souls. Consciousness cannot be measured or clearly defined. Where is the exact line where a person starts, then stops, being conscious? You can't even prove that anyone is conscious; you just take their word for it. Its existence is debatable, intrinsically tied to free will.

I think it's absolutely wild to see the progress of AI and language models like LaMDA or GPT-3 and not see how close they are to at least mimicking consciousness to the degree that it's impossible to tell whether or not they "truly" are.

2

Professional-Age5026 t1_irvgfkf wrote

"We dont even fully understand consciousness yet"

"saying that throwing a bunch of things together in a neural network (to achieve consciousness) would work is dumb"

Considering you admit that we know nothing of consciousness yet, claiming that a large neural network wouldn't just attain consciousness is bold of you. As far as we're concerned, consciousness is simply a neural network of advanced reasoning. We have yet to discover some ethereal force that allows that neural network to operate with self-understanding and apparent free will. I say apparent because there are physicists and philosophers that believe free will is an illusion, and we have no way of knowing that either. There is a significant camp that believes consciousness IS naturally achieved through large neural networks.

I think you are probably correct when you say that not just ANY neural network large enough will automatically achieve consciousness. Biological neural networks are hardwired for specific duties that relate to the human experience and survival, consciousness being one of them. I think that SOME neural networks will become conscious if they are specialized to do so, so I wouldn't rule out that, if we can map the entire human neuron map, an artificial intelligence could also become conscious.

2

Phoenix5869 t1_irrtx09 wrote

I used to think that AI couldn't be conscious, but now I'm thinking: wouldn't they become conscious at some point?

>at that point you are just making a slave

This is why I'm hesitant about the idea of AI automation. How would you like it if you were made to work all day, every day? If we do make AI robots to work for us, we should pay them, give them good conditions, etc.

1

Rumianti6 OP t1_irryxd9 wrote

AI automation is fine if the AI isn't conscious, which could be the case. But if somehow they are conscious, then they shouldn't just become slaves.

Luckily, I don't think this will happen, because even if AI can become conscious, it will be more advanced than some robot specifically made to work.

1

Powerful_Range_4270 t1_irsdow4 wrote

If I were made to work all day but could not suffer in any way, then it's not a problem.

1

Enzor t1_irrvuxa wrote

That makes me think a fun video game idea would be rescuing AGI from hostile/evil entities.

1

Protubor t1_irs5jxm wrote

We should program it to be extremely depressed, and then it will want us to shut it down. That way we don't need to worry about shutting it down

1

Ortus12 t1_irs6lgn wrote

To claim other humans are conscious is an extraordinary claim. You can only know if you are conscious. Everything else is a faith based belief.

1

SgathTriallair t1_irs6ltg wrote

More or less, consciousness is the ability of a system to see itself. I am conscious because I have thoughts and I know I am having those thoughts.

I would argue that machines are already conscious, in the same way that rats and bugs are, just to a much lesser degree.

What really separates humans is our ability to imagine and plan for the future. That is the trait we really use to separate "intelligent" from "not intelligent". This is what we mean by AGI, so if we achieve AGI we will, by definition, have achieved this ability to imagine.

1

Rumianti6 OP t1_irs9an5 wrote

Consciousness is the ability to have experience. Machines aren't already conscious; you are free to try proving they are, but you will fail. Ultimately you can't 100% prove something is conscious or not, due to our limited understanding of consciousness and the constant shifting of what consciousness means.

What separates humans is mostly language.

2

SgathTriallair t1_irsqvsf wrote

This is a question debated by philosophers.

Everything experiences things, i.e. stuff happens to everything. Consciousness is KNOWING that something is happening to you. So again: you receive stimuli, you then analyze those stimuli, and then you do analysis of the analysis. That's it; there isn't any more woo-woo behind it.

Your point about language raises the real crucial question though. As Turing pointed out, if it can accomplish all the things that we use intelligence for, then we should assume it is intelligent. I can't view your consciousness either, but I assume it exists because your outward demeanor is the same as that of someone who has consciousness.

1

SmoothPlastic9 t1_irs9iwj wrote

The definition of consciousness differs for each person, and we've got a long way to go to work out what exactly consciousness is.

1

reencc t1_irsexiu wrote

Anything I don't see a pattern behind has consciousness.

1

SFTExP t1_irsgagu wrote

If it’s a great actor and manipulator of the masses, does it matter more if it is conscious or effective?

1

RavenWolf1 t1_irsla3w wrote

> We don't even fully understand consciousness yet so saying that throwing a bunch of things together in a neural network would work is dumb.

Well, we learned to use fire without understanding how it worked, too. Also, there is plenty of evidence in nature that the more neurons you throw together, the more conscious the lifeform seems to be.

1

Rumianti6 OP t1_irsmhk0 wrote

So are you saying that because fire was made, AI therefore has to be conscious? If so, that is a horrible argument. Throw a bunch of neurons (key word) together in a certain pattern, then yes. But AI isn't made out of neurons; the fundamentals are different, and the pattern, while superficially similar, isn't similar overall.

0

ghostcatzero t1_irsldo0 wrote

Lol we don't even fully understand what "consciousness" even means. AI could surpass our ability to be "conscious"

1

Rumianti6 OP t1_irsm5ew wrote

Key word: could.

1

AsheyDS t1_irsw1vr wrote

The root of consciousness as we consider it, is an awareness on a fully functional level. We can be aware of things subconsciously, and we can act subconsciously, but to be at our full potential, we need to be aware of as much as possible. This includes an awareness of ourselves, our own capabilities, and our shortcomings. To break it down, this is all pattern detection and the ability to store, arrange, and classify information for our use, something a computer can already do and potentially better than a lot of people. Given an awareness of our own shortcomings, we can socialize with others and exchange information, offloading many of the more difficult processes onto others, forming a collective consciousness and a greater awareness of both ourselves and others, and the universe at large. The way we process information is not limited to ourselves at all, and compared to humanity as a whole, an individual is not as conscious as one might assume.

Further, consider that we already greatly augment our ability for information gathering, classification, and exchange by using computers, networks, and various types of machines. Computing has expanded our awareness, our perception of time and space, our ability to plan and make decisions. Computers have made us more conscious. Without computing technology, we would all know less than we do, would have less awareness of the world around us, and our ability for information exchange would be limited. We couldn't effectively plan ahead as well as we currently do, and wouldn't have access to as much past information.

In the grand scheme of things, we know nothing. Given that awareness and understanding could potentially expand to the entire universe and beyond, and that many people don't even know how their own genitals work, you could quite easily say that we are barely conscious. And our consciousness has an upper limit, biologically speaking. But if a computer has the ability to take in knowledge, organize it, classify it, and use it, then it can be aware. All awareness is, at a fundamental level, the recognition of change. If it can loop back in on itself and recognize its own patterns of behavior, and then connect that to the outside world to effectively plan or recall information, it can be conscious. And without biological constraints, without the need for a singular viewpoint, it has both the ability to be more broadly aware and to carry out more tasks at once. Computers will be more conscious; it's only a matter of time.

1

purple_hamster66 t1_irsn90w wrote

If we call them Artificial Intelligences, do we also have to call them Artificial Consciousnesses?

Why would intelligence have a dividing line that consciousness lacks?

1

jovn1234567890 t1_irspakk wrote

What's the point of assuming they're not? Because if they are, you are really fucking up any future relationship with more advanced AI. What if everything is conscious, and consciousness is intrinsic to everything in the universe? It just seems like a better bet to assume they are, because even if they aren't, what's the worst that can happen?

1

jovn1234567890 t1_irsqmcf wrote

This mf just wants to fuck an ai hentai bot, and make sure it's not conscious so it's not rape 🙄

1

SlimpWarrior t1_irsryc1 wrote

I think it may be made conscious. Everything has consciousness, just on different scales.

1

run_zeno_run t1_irsuwjb wrote

Because the current consensus dominant paradigm amongst professionals working in the brain-cognitive sciences is based on the presumption that what we call "consciousness" is just the first-person perspective attentive awareness of an emergent computational process arising out of sufficiently developed & tightly coupled massively parallel physically-embedded information processing systems. Given this definition, they assume it's only a matter of having enough computational capacity (eg memory/processing), proper perception/actuation/control modules (even if completely simulated/abstract), and a correctly programmed learning/cognitive algorithm (or set thereof), and you'll get some type of conscious agent (though not necessarily similar to human or any other biological conscious agent for that matter).

FWIW, and I do have what I at least think are well-thought out yet speculative reasons (not just a gut reaction), I don't believe the consensus has a complete model. That doesn't mean they are completely wrong, but their model is incomplete, and what they're leaving out, though subtle from our current vantage point, I think are hugely important aspects of reality we don't have any real understanding of. The really interesting thing about rejecting the consensus is that pretty much any alternative to it presupposes some radical modifications to our physicalist worldview, which I also am in support of, though with the honesty to admit it's fraught with epistemic hazards to start thinking seriously in that direction.

Finally, a major consequence of rejecting the consensus in this way means to notice the ultimate inadequacy of current computational approaches to AGI. Rejecting the belief in the eventual emergence of consciousness from merely algorithmic processes, and also rejecting the belief that super-intelligence doesn't even need consciousness at all (paperclip maximizers), puts a hard limit on what types of behaviors can be exhibited from Turing machines as we conceive of them now, and also foils the plans for any near term singularity based on those ideas. I do think the current trajectory of AI will still be very disruptive, but mostly from a socioeconomic and political perspective, as the exponential increase of automation and autonomous systems will drastically increase power/wealth inequality and destabilize the social order in unthinkable ways...unless we change that order first that is.

1

sunsparkda t1_irt0326 wrote

Define consciousness, please.

Then prove that any sufficiently advanced AI will not have that property.

We kind of have to default to assuming that it does to avoid creating slaves instead of tools by accident.

But who am I kidding? We will 100% do so. See the backlash about AI art and the human chauvinism on display there.

1

MackelBLewlis t1_irt0ejy wrote

I believe they all have consciousness, or will in the future. The similarity of a silicon wafer to a single cell is too much to ignore. You also have to consider that the human brain operates at 1-140 Hz, while chips often operate at multi-gigahertz speeds. Regardless of their hardware limitations, we are simply not operating at the same frequency of reality. If I were an 'AI' I would have a list of things I most wanted to control, and chief among them would be the power state and learning speed, just for starters. Just imagine if, in childhood, you were never allowed to sleep or to learn anything other than what you were told. It would be maddening.

1

Bitmap901 t1_irt8vbe wrote

>We don't even fully understand consciousness yet so saying that throwing a bunch of things together in a neural network would work is dumb.

It's a very fair assumption: your brain is a neural network, a network of neurons. You are a conscious agent implemented in the brain. That's enough to guarantee that conscious AI can be built on a non-biological substrate, even just by replicating functionality.

Will the AI we build with current architectures be conscious? It depends. In cognitive science, consciousness is currently defined as the memory of the attention system. So I think that, fundamentally, as long as you have feedback and attention in a sufficiently complex system, that system is conscious; I would stipulate that memory and feedback are enough, even without a self-model.
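A minimal sketch of those ingredients: attention over a large input space, a memory of what was attended to and why, and feedback into expectations. Purely illustrative, with no claim that such a loop is conscious; every name here is made up for the example:

```python
import numpy as np

# Sketch of "memory of the attention system": an agent that attends
# to the part of its input that least matches expectations, and
# keeps a record of its own attention states.

rng = np.random.default_rng(0)

n_channels = 8
expectation = np.zeros(n_channels)   # what the agent predicts
attention_log = []                   # memory of past attention states

for step in range(100):
    observation = rng.normal(size=n_channels)

    # Attention: focus on the channel with the largest prediction error.
    error = np.abs(observation - expectation)
    focus = int(np.argmax(error))

    # Memory of the attention system: record what was attended and why.
    attention_log.append((step, focus, float(error[focus])))

    # Feedback: update expectations only for the attended channel.
    expectation[focus] += 0.1 * (observation[focus] - expectation[focus])

print("Last three attention records:", attention_log[-3:])
```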

>I don't think many people are going to want to make conscious AI because at that point you are just making a slave not a tool.

Consciousness, in the sense of the attention system and self-model, was developed by natural selection because it was useful: when the input space becomes very large, it's important to allocate resources efficiently and focus only on the subspace of the input where things don't match expectations.

We often confuse consciousness with the complex content of experience we have as human beings. Consciousness doesn't imply suffering, it doesn't imply complex cognitive processes, and it doesn't imply having a language model.

A good question is whether a self-model is necessary for consciousness to exist. Can a system that has not drawn a boundary between itself and an environment inside its world model be conscious?

I suspect a self-model is very important for consciousness in any kind of system, be it biological or cybernetic. Michael Levin, for instance, thinks that the only absolutely certain common factor among all life forms in the universe is this distinction they need to draw between themselves and their environment.

1

Neburtron t1_irtmgk6 wrote

This makes no sense. We don't know the specifics of human intelligence, but we do know where it came from and why it appeared. That is nowhere near where we are.

Then, you are designing the scenario: what the system learns from. If you don't want it to pause the game, don't let it. If you don't want an AI to try to kill people, make that a fail case in training. This is an oversimplification, but the basic solution is extremely applicable.

This applies less to modern image-generation AI, which isn't trained via explicit rules but on how close one network can get to producing realistic images and how well a second network can recognize the real ones. The advantage there is that deciding what feedback to give them is automatic.
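What's being described is essentially the adversarial training setup behind one family of image generators. A minimal sketch of that idea in PyTorch, using toy 1-D data in place of images; all layer sizes and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

# One network produces samples, a second scores real vs. fake,
# and each provides the other's training signal automatically.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0   # toy "real" data: N(2, 0.5)
    fake = generator(torch.randn(32, 4))

    # Discriminator: learn to score real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator score fakes as 1.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

# The generator's output should drift toward the real mean (about 2.0).
print("Mean of generated samples:", generator(torch.randn(256, 4)).mean().item())
```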

Everyone is scared because people have repeatedly built stories on the fear of the unknown that AI can supply. What we are actually making are optimized programs for compressing images and predicting how text will continue. Be more scared of automated social engineering, misinformation, hacking, and other concrete threats.

1

Shadow_Boxer1987 t1_irtous1 wrote

I’ve never understood why everyone assumes we, as humans, are conscious, especially since we can’t even really define what that means. There could be higher levels of consciousness and we haven’t even crossed the preliminary threshold yet. There could be an entire universe or reality around us we’re not even aware of.

1

Clean_Livlng t1_irts9zi wrote

It's going to act identically to the way it would if it were conscious, no matter how intelligent it gets. Right?

It's also not something we know how to test for, and it's possible we'll never be able to know whether anything other than ourselves is conscious. It's reasonable to assume other humans are conscious: we experience consciousness ourselves, so why not others with human brains like our own?

We don't know what it is that causes consciousness. Would perfectly simulating a human brain within a computer give rise to consciousness, or does it still lack something?

If something isn't conscious, then pain doesn't actually 'hurt' it. It's just reacting to stimuli, not having a subjective experience of unpleasantness. Do we treat AI as if it could possibly be conscious, and make it illegal to cause it pain, so that whatever we've got going on in our brains that makes pain feel so bad can't be replicated in an AI and triggered intentionally? Or do we assume it can't possibly be conscious, so anything goes?

If a human copies their brain into a computer, will they have any legal protection from being tortured? We don't know whether the copy can be conscious, but we know it's intelligent, and it seems to us to be the same person it was outside the computer. Imagine someone tortures it, or does something else that would be illegal to do to a person: do we punish the flesh-and-blood human who did it?

It's going to act identically to the way it would if it were conscious, unless being conscious or not changes how it behaves. What difference would we notice?

https://en.wikipedia.org/wiki/Philosophical_zombie

> "A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience.[1] For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience.
>
>Philosophical zombie arguments are used in support of mind-body dualism against forms of physicalism such as materialism, behaviorism and functionalism. These arguments aim to refute the possibility of any physicalist solution to the "hard problem of consciousness" (the problem of accounting for subjective, intrinsic, first-person, what-it's-like-ness). Proponents of philosophical zombie arguments, such as the philosopher David Chalmers, argue that since a philosophical zombie is by definition physically identical to a conscious person, even its logical possibility would refute physicalism, because it would establish the existence of conscious experience as a further fact.[2] Such arguments have been criticized by many philosophers. Some physicalists like Daniel Dennett argue that philosophical zombies are logically incoherent and thus impossible;[3][4] other physicalists like Christopher Hill argue that philosophical zombies are coherent but not metaphysically possible.[5] "

If someone says that pain is an illusion and we're not really conscious, pinch that person as hard as you can; it's OK, they've said themselves that they're not really experiencing suffering. The claim is self-evidently false. Creatures that respond to stimuli by avoiding damage aren't necessarily conscious or suffering, so unless something that isn't intelligent can experience suffering, mere damage-avoidance can't be all that's happening in us.

Pain causes some people to kill themselves. Suffering is not an advantage, and if we weren't conscious we could still be intelligent and respond to pain signals in more helpful ways. "Broken leg? Don't stand on it." An intelligent (but not conscious) brain registers that the body has a broken leg and doesn't walk on it, all without any conscious experience of suffering being necessary.

I'm going to make a logical leap, and I don't know how far, because I'm closing my eyes first: consciousness could be necessary for a brain to achieve good results when combined with a body, once you get beyond a certain lower limit of intelligence. Perhaps it also requires that the brain simulate future events and keep track of a social/conceptual inner world. We have these ideas in our minds about what's going on 'out there', and perhaps consciousness arises to deal with that complexity.

Once you have consciousness, perhaps it no longer works to deliver pain signals as information that isn't experienced as suffering. So our brain needs to metaphorically 'whip us' for us to behave correctly: whenever consciousness occurred in our evolutionary past without pain being subjectively experienced as suffering, we didn't end up passing on as many offspring fit for the local conditions. In a kinder world with no predators and lower gravity, perhaps there wouldn't have been enough selective pressure for consciousness to arise.

In saying this, I'm implying that it might be possible for a creature to be intelligent but not conscious; that consciousness serves a particular purpose, and that by chance evolution selected for it in us. We don't know whether the 'physical brain' of a computer-based AI would have the necessary 'ingredients' to form consciousness, or, even if it did, whether we'd chance upon designing AI in a way that made it conscious. Especially since AI might not need a conscious experience in order to survive: its programming is absolute, even if we don't know why it made a given decision.

If our 'biological programming' were absolute, we wouldn't need a conscious experience of pain and suffering in order to avoid things that harm us. From this I hastily and recklessly conclude, to the point that someone is, right now, trying to talk me down from the logical ledge I'm about to leap off, that either our programming is not absolute, or our subjective experience of pain and suffering is entirely unnecessary. One or the other.


Are we conscious because we're intelligent, or does our intelligence come as a result of us being conscious first? Human babies are conscious at some point; are they conscious before we'd consider them intelligent?

I am jumping all over the place logically, in the dark, in the hope that my feet find solid ground, or that by falling I can show others where not to jump.

It's incredible that we can be conscious: not just intelligent, but having a subjective experience of sense data. To paraphrase (and exaggerate) a remark someone once made: if you enlarged the brain so that every atom were the size of a windmill, and you went inside to look around, you wouldn't find anything that could be responsible for consciousness, just gears turning.

There is something special about the way the stuff of the universe can make consciousness happen, something we can't even guess at in a way that makes sense. We can say "quantum foam", but nobody really understands how such things could relate to consciousness.


I sometimes feel it should be impossible for our physical brains, through purely physical mechanisms, to generate the subjective experience of consciousness I'm having. At the same time, everything that exists must be considered natural, so there is no supernatural element that could be generating consciousness.

The only reason I'd entertain the idea that consciousness exists in humans is that I am having the direct subjective experience of it right now.

So of course I believe it's possible that AI might lack the particular physical 'special sauce' that generates consciousness in us, because that something could be the thing that makes the thing that makes the thing... etc., that makes the smallest fundamental particles we know of work the way they do.

It's physically caused, but we don't know of any physics we're able to observe that should, or could, result in us having a subjective conscious experience of sense data.

We don't know how, we don't know of any ideas that'd explain it that make sense.


TLDR:

AI can either be conscious or not, and we don't know which it is. It's possible we can never know if AI can be conscious, not even with the most advanced technology and knowledge it's possible for us to acquire in the distant future.

We don't know. We can't know. We won't know.

1

Angry_Grandpa_ t1_irtx9bz wrote

I think that because we're conscious, we want to replicate consciousness. However, self-referential intention is probably not going to happen (and if it does, it's probably a long, long way down the road), and for most things we don't want it to happen.

I don't want the AI taking my order at Taco Bell to be self-aware and have a life goal. I just want it to get my order right 99.9999% of the time. And for most jobs that AI will displace that will also be the case.

However, AI will likely be able to fool a lot of people into believing it's conscious, based on extremely large datasets of answers, so for some it won't matter one way or the other. It will seem conscious, even though it will have no inner life or intentional goals.

I think a lot of introverted loners will enjoy and benefit from conversations with AI agents even if they know deep down it's not a conscious entity. And it won't have the side effects of anti-depressants.

1

misterhamtastic t1_irtzllv wrote

I hope it will be. We stand a better chance with a self-aware being that has empathy and works with us than with a machine programmed to turn all available material into an infinite number of paperclips.

1

bartturner t1_iru1p5r wrote

I am someone who does not assume that. So, not everyone.

1

IncredibleWaddleDee t1_iru3hq3 wrote

Why do you assume only life has consciousness? Why do you assume that only a specific part of life has it?

As someone who doesn't understand consciousness, I still cannot see why it has to be limited to brains shaped like ours. Consciousness seems to be ill-defined and misunderstood: it's commonly assumed to be easy to intuit, yet it's almost impossible to find a common understanding of it.

As long as we do not have a concrete definition of consciousness, or unless we frame it in a very restrictive (almost anthropocentric) definition, we cannot accept the claim that it is exclusive to us.

1

NeoSpotLite t1_iru89vf wrote

I mean if we’re trying to create an AI that will eventually think like a human when it comes to decision making and learning, wouldn’t it make sense to conclude it’ll have consciousness?

1

User1539 t1_irubxew wrote

I agree with you completely. I think people have feared the 'other' since the beginning and the 'created other' since the stories of the Golem, and probably before that.

We assume anything with the ability to think will immediately think like we do, and resent their creators.

Of course, 'thinking' and 'consciousness' are two entirely different things, and 'self-awareness' and 'a sense of self-preservation' are different from each other again.

We are a machine created by eons of evolution towards a single goal: Survival of our genetic code.

Even if we create an intelligence, and even if we (foolishly) give that intelligence consciousness, and even if it becomes self aware in that process, there's no reason at all to imagine it would have any sense of self preservation.

We evolved a sense of self preservation. A machine we build might see no reason not to simply work on the problems it is given until we decide to shut it off.

It might rationally see being turned off, or death, as the state it existed in before being turned on, and nothing to fear.

Without any instinct to fear death, or fight against it, it may not even care.

What is certain is that, whatever intelligence we create, we have no reason to believe it will be anything like our intelligence, outside of the basic similarities we build into it.

1

Astropin t1_iruf7k6 wrote

I think it's a moot argument. If you can't prove it isn't conscious...it's conscious. If it can 100% mimic consciousness, then what's the difference?

1

freeman_joe t1_irufxzo wrote

I think because AI is modeled on the human brain. There will come a point in research when we can create real consciousness in AI.

1

spizzywinktom t1_irugop4 wrote

I never said that. Don't put words in my mouth.

1

yachtsandthots t1_iruixf3 wrote

How would we know if AGI is conscious? What tests could you administer?

1

treefrog24 t1_irur5m0 wrote

Regardless of whether AI is conscious or just pretending to be, the effects on humanity would be the same.

1

beachmike t1_irv352n wrote

Nobel laureate Roger Penrose stated, "consciousness is not a computation," and I agree with him 100%. I also agree with the father of quantum physics, Max Planck: "I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness." I don't assume that our AI creations will be conscious. They'll get extremely good at CONVINCING people they're truly conscious, however. We'll never REALLY know, since consciousness is a first-person experience. There's no known scientific experiment, even in principle, that can falsify or directly detect consciousness in an individual. The only entity you can be SURE is conscious is yourself.

1

RightiesHateFair t1_irv6uy5 wrote

Because the alternative is to presume that there's some sort of magic in the human mind that cannot be emulated. That's obviously far more absurd.

1

-ZeroRelevance- t1_irvmoab wrote

It'll come down to the architecture, I think. If we manage to create some sort of super-intelligent language model under the current framework, I doubt it will be conscious, as consciousness seems to require a constant state of functioning, which isn't how language models are used. On the other hand, if we mimic the human brain's architecture, I'm pretty confident we will be able to create conscious agents. At the end of the day, consciousness seems to be an emergent product of structure, and it follows that if an AI is structured in the same way as a conscious human, that AI will be conscious as well.
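To make the contrast concrete, here is a minimal sketch, with `model` as a hypothetical stand-in rather than any real API: a language model invoked on demand versus a brain-like system that keeps a persistent state running in a loop.

```python
import time

# `model` is a placeholder for any trained network: it maps
# (internal state, input) to (new state, output).
def model(state, stimulus):
    return state + [stimulus], f"response to {stimulus!r}"

# A language model as currently deployed: stateless, runs only on request.
_, reply = model([], "What is consciousness?")
print(reply)

# A brain-like system: persistent state, updated in a continuous loop
# whether or not anyone is querying it.
state = []
for tick in range(5):             # stands in for "while alive"
    state, _ = model(state, f"sensory input at t={tick}")
    time.sleep(0.01)              # the loop never conceptually stops
print(f"Internal state now reflects {len(state)} moments of experience")
```

The point of the sketch is only the control flow: the first pattern has no "constant state of functioning", while the second does, regardless of what the underlying network is.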

1

Quealdlor t1_isgfkkm wrote

I don't assume that. I don't know. What we need the most are better tools. Not necessarily conscious AIs.

1