Submitted by wtfcommittee t3_1041wol in singularity
joseph20606 t1_j32b44c wrote
We can’t even be sure the person next to you is actually having a subjective experience of consciousness. So I am siding with GPT on this one xD
ProShortKingAction t1_j32sn33 wrote
Yeah I also feel like we are reaching a point where moving the goalposts of what counts as consciousness any farther gets into some pretty dark territory. Sure, an AI expert can probably explain that ChatGPT isn't conscious, but it's going to be in a way that no one outside of the field is going to understand at this point. I keep seeing takes like "oh it isn't conscious because it doesn't keep its train of thought in-between conversations"... OK, so your grandfather with dementia isn't conscious? Is that really a point these people want to make?
I feel like it's getting close enough that we should stop putting so much weight on the idea of what is and isn't conscious before we move these goalposts past what a lot of human beings can actually meet.
ReignOfKaos t1_j32tesd wrote
The only reason we think other people are conscious is via analogy to ourselves. And the only reason we think animals are conscious is via analogy to humans. The truth is we really have no idea what consciousness is nor do we have a way to measure it, so any ethics based on a rigid definition of consciousness is very fallible.
Ortus14 t1_j34hehj wrote
I like ethics based on uncertainty. We don't know who is or isn't conscious, so it's safer to act as if more entities are conscious, to avoid hurting anyone.
Noname_FTW t1_j33bbbu wrote
Additionally: while we know how these AIs work on a technical level and can therefore explain their behavior, that doesn't mean this isn't consciousness. People tend to treat it as different from the human experience because we do not yet have an intricate understanding of how exactly the brain works. We know a lot about it, but not to the degree that we could recreate it.
The solution should be to develop an "intelligence level chart", so to speak. I heard this as a reference somewhere in some sci-fi material but can't remember where.
The point would be to start developing a system of levels in which we classify AIs and biological beings in terms of their intelligence.
It would look similar to how we classify autonomous vehicles today, with level 5 being entirely autonomous.
The chart could go from 0-10, where humans would be somewhere around 5-8, with 10 being a super AGI and viruses being 0.
Each level would be assigned properties that are associated with more intelligence.
Having this system would help assign rights to species in each category.
Obviously it would have to be under constant scrutiny to be as accurate and objective as it can be.
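To make the idea a bit more concrete, here is a minimal sketch of what such a chart could look like as a data structure, in Python. The level numbers, example entities, properties, and rights below are placeholders invented purely for illustration, not an established classification.

```python
from dataclasses import dataclass


@dataclass
class IntelligenceLevel:
    """One rung of a hypothetical 0-10 intelligence chart."""
    level: int
    examples: list[str]      # illustrative entities only
    properties: list[str]    # capabilities associated with this level
    rights: list[str]        # protections that might be assigned

# Placeholder entries -- real boundaries and rights would need the
# constant scrutiny mentioned above.
CHART = [
    IntelligenceLevel(0, ["viruses"], ["replication only"], []),
    IntelligenceLevel(3, ["great apes"], ["tool use", "sign language"],
                      ["protection from harm"]),
    IntelligenceLevel(5, ["humans (lower bound)"], ["abstract reasoning"],
                      ["full personhood rights"]),
    IntelligenceLevel(10, ["super AGI"], ["recursive self-improvement"],
                      ["to be decided"]),
]


def rights_for(level: int) -> list[str]:
    """Return the rights of the highest chart entry at or below `level`."""
    applicable = [entry for entry in CHART if entry.level <= level]
    return applicable[-1].rights if applicable else []
```

The point of the sketch is only that rights would be looked up per level/category, not per individual.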
ProShortKingAction t1_j33gery wrote
Honestly it sounds like it'd lead to some nazi shit. Intelligence is incredibly subjective by its very nature, and if you ask a thousand people what intelligence is you are going to get a thousand different answers. On top of that, we already have a lot of evidence that someone's learning capability in childhood is directly related to how safe the person felt growing up, presumably because the safer someone feels the more willing their brain is to let them experiment and reach for new things. That typically means people who grow up in areas with higher violent crime rates, or people persecuted by their government, tend to score lower on tests and in general have a harder time at school. If we take some numbers and claim they represent how intelligent a person is, and a group persecuted by a government routinely scores lower than the rest of that society, it would make it pretty easy for the government to claim that group is less of a person than everyone else. Not to mention the loss of personhood for people with mental disabilities. Whatever metric we tie to AI quality is going to be directly associated with how "human" it is to the general public, which is all fine when the AIs are pretty meh, but once there are groups of humans scoring worse than the AI it's going to bring up a whole host of issues.
Noname_FTW t1_j33pquo wrote
>nazi shit
The system wouldn't classify individuals but species. You are not getting your human rights by proving you are smart enough but by being born a human.
There are certainly smarter apes than some humans. We still haven't given apes human rights even though there are certainly arguments to do so.
I'd say that is, in small part, because we haven't yet developed a science-based approach to studying the differences. There is certainly science in the area of intelligence, but it needs some practical application in the end.
The core issue is that this problem will sooner or later arise when you have the talking android which seems human. Look at the movie Bicentennial Man.
If we nip the issue in the bud we can prevent a lot of suffering.
ProShortKingAction t1_j33rvrl wrote
Seems like it requires a lot of good faith to assume it will only be applied to whole species and not to whatever arbitrary groups are convenient in the moment
Noname_FTW t1_j33srty wrote
True. The whole eugenics movement and its application by the nazis still casts its shadow. But if we don't act, we will end up in even more severe arbitrary situations than the one we are currently in. We have apes that can communicate through sign language and we still keep some of them in zoos. There is simply no rational approach being made, just arbitrary rules.
Ortus14 t1_j34j2kh wrote
I don't think consciousness and intelligence are correlated. If you've ever been very tired and unable to think straight, you'll remember your conscious experience was at least as palpable.
Noname_FTW t1_j34psxs wrote
I am not an expert in the field. I am simply saying that without such a classification we will run into a moral gray area where we eventually consider some AIs "intelligent" and/or deserving of protection while still exploiting other algorithms for labor.
Ortus14 t1_j34vlpq wrote
We build AIs to enjoy solving our problems. Those are their reward functions, so I'm not too worried about exploiting them; they will solve our problems because they genuinely enjoy doing so.
The only moral worry I have is creating AIs in order to torture or hurt them, such as NPCs and "bad guys" in video games for the player to battle against.
Marcus_111 t1_j37cpwj wrote
Exactly
eve_of_distraction t1_j34bmle wrote
We have no good reason whatsoever to think they don't. You're choosing to side with solipsism? Really?
williamfwm t1_j35h7s5 wrote
This is the Other Minds Problem. You don't have to be a solipsist to recognize the Problem of Other Minds (though that's one position you could take).
But consider this: It's common to suppose that consciousness is "caused by some physical, biological process", yes? Well, take a good look at the way nature operates....we constantly find that, for any feature you can imagine, biology will sometimes fail to give it to some organisms. People are born without the expected physical features all the time, and if consciousness is caused by some physical bit of biology, everybody consistently receiving it is the LEAST likely outcome. The more likely consequence of that assumption, the more reasonable expectation, is that some people have consciousness, and some people don't, as an accident of birth.
Furthermore, if people without consciousness are nearly identical except for having a different philosophy then they probably have the same fitness (or close) and little selection pressure working against them. A large segment of the population could be p-zombies - they could even be the majority.
eve_of_distraction t1_j366y9w wrote
>It's common to suppose that consciousness is "caused by some physical, biological process", yes?
It's common, but I think it's highly erroneous. I don't subscribe to mainstream physicalism; I'm an Analytic Idealist. I suspect what we see when we observe the brain is the image of life as seen within our own perception, not the thing in itself. It's like the dials on a dashboard.
As a side note, just out of interest, do you really believe there are humans without subjective inner experience?
williamfwm t1_j3xqqb1 wrote
> As a side note, just out of interest, do you really believe there are humans without subjective inner experience?
I do. I also believe the other minds problem is unsolvable in principle, so I can't ever be certain, but I've come to lean strongly on that side.
I haven't always thought so. It was that essay by Jaron Lanier I linked above that started me on that path. I read it a few years ago and started to warm to the idea. Lanier once described that 1995 essay as having been written "tongue firmly in cheek", that he believes in consciousness "even for people who say it doesn't exist", and he also has a sort of moral belief that it's better pragmatically if we fall on the side of assuming humans are special. But he has also teased over the years[1] since then that deniers may just not have it, so it's hard to tell exactly where he falls. I may be the only one walking around today taking the idea seriously....
For me, I feel like it's the culmination of things that have been percolating in the back of my mind since my teen years. Taking that position brings clarity.
The main point for me, as referenced, is that it clarifies the "talking past" issue. People do mental gymnastics to rationalize that both sides are talking about the same thing in consciousness debates, yet they appear to be talking at cross-purposes. They always start these discussions by saying "We all know what it is", "It couldn't be more familiar", etc. But do we all know? What if some don't, and they lead us into these doomed arguments? Sure, one can take up any goofy position for the sake of argument and try to defend it as sport, but people like Dennett are so damn consistent over such a long time. He himself is saying "I don't have it" [and nobody does], so maybe we should just believe him? Maybe it is true for him?
I also can't wrap my head around why it doesn't bother some people! I've been plagued by the consciousness problem since my teen years. And before that, I recall first having the epiphany of there being a problem of some sort in middle school; I remember catching up with a friend in the halls on a break period between classes and telling him about how I came to wonder why pain actually hurts (and him just giving me a what-are-you-talking-about look). I'm sure it was horribly ineloquently phrased, being just a kid, but the gist was....why should there be the "actual hurt" part and not just....information, awareness, data to act on?
Some people just don't think there's more, and don't seem to be on the same page on what the "more" is even if you have long, drawn out discussions with them trying to drill down to it. It would make a lot of sense if they can't get it because it isn't there for them.
I also realized that we take consciousness of others as axiomatic, and we do this due to various kinds of self-reinforcing circular arguments, and also due to politeness; it's just mean and scary to suggest some might not have it (back to Lanier's pragmatism). I call it "The Polite Axiom". I think we're free to choose a different axiom, as after all axioms are simply....chosen. I choose to go the other way and choose some-people-don't-have-it based on my equally foundation-less gut feelings and circular self-reinforcing observations and musings.
Lastly, I'm basically a Mysterian a la McGinn etc, because I don't see any possible explanation for consciousness that would be satisfactory. I can't even conceive of what form a satisfactory explanation would take[2]. I also came to realize in the past few years that even neurons shouldn't have authority in this issue. Why should it be in there compared to anywhere else? (Why do sloshing electrolytes make it happen? If I swish Gatorade from glass to glass does it get conscious?). And, unlike McGinn, I don't think we know that it's in there and only there. Nope! We know[3] that it's one mechanism by which consciousness expresses itself, and if we're being disciplined that's the most we can say.
Bonus incredibly contentious sidenote: Penrose's idea, though often laughed off as quantum-woo-woo, has the advantage that it would solve the issue of Mental Privacy in a way that computationalism fails at (the difficulty of coherence would keep minds confined to smaller areas)
[1] One example: I absolutely love this conversation here from 2008, the bit from about 20:00 to 30:00, where Lanier at one point taunts Yudkowsky as being a possible zombie. A lot of the commenters think he's a mush-mouthed idiot saying nothing, but I think it's just brilliant. On display is a nuanced understanding of a difficult issue from someone who's spent decades chewing over all the points and counterpoints. "I don't think consciousness 'works' - it's not something that's out there", and the number line analogy is fantastic, so spot on re: computationalism/functionalism....just so much packed into those 10 minutes that I agree with. Suppose people like Yudkowsky gravitate to hardnosed logical positivist approaches because they don't have the thing and so don't think there's any thing to explain?
[2] The bit in the video where Lanier just laughs off Yudkowsky's suggestion that "Super Dennett or even Super Lanier 'explains consciousness to you'". It is "absurd [....] and misses the point". There's just nothing such an explanation could even look like. There's certainly no Turing machine with carefully chosen tape and internal-state-transition matrix that would suffice (nor, equivalently, any deeply-nested jumble of Lambda Calculus. I mean, come on)
[3] "Know" under our usual axiom, at that! We assume it's there, then see it the "evidence" of it there, but we've axiomatically chosen that certain observations should constitute evidence, in a circular manner....
eve_of_distraction t1_j3z62ol wrote
Very interesting. I'll need a while to think about this perspective.
turnip_burrito t1_j350gah wrote
It's actually a pretty interesting question that we might be able to test and answer. Not solipsism, but the question of whether our machines have qualia, assuming we believe other humans have them.
There is some specific subset of our brains whose neural activity aligns with our conscious experience. If we try adding things to it, or temporarily removing connectivity to it, we can determine which physical systems have qualia, and which ones separate "qualia-producing" systems from unconscious systems while still allowing information to flow back and forth.
We have to tackle stuff like:

- Why are we aware of some parts of our brains and not others? The parts we are unaware of can do heavy computational work too. Are they actually also producing qualia, or not? And if so, why?
- What stops our conscious mind from becoming aware of background noises, heartbeat, automatic thermal regulation, etc.?
Then we can apply this knowledge to robots to better judge how conscious they are. Maybe it turns out that as long as we follow certain information-theoretic rules, or build using certain molecules, we avoid making conscious robots. For people who want to digitize their mind, this would also help ensure the digital copy is actually producing qualia and isn't a philosophical zombie.