
ProShortKingAction t1_j32sn33 wrote

Yeah, I also feel like we are reaching a point where moving the goalpost of what counts as consciousness any farther gets into some pretty dark territory. Sure, an AI expert can probably explain that ChatGPT isn't conscious, but it's going to be in a way that no one outside of the field is going to understand at this point. I keep seeing takes like "oh, it isn't conscious because it doesn't keep its train of thought between conversations"... OK, so your grandfather with dementia isn't conscious? Is that really a point these people want to make?

I feel like it's getting close enough that we should stop putting so much weight on the idea of what is and isn't conscious, before this goalpost moves past what a lot of human beings can compete with.

16

ReignOfKaos t1_j32tesd wrote

The only reason we think other people are conscious is via analogy to ourselves. And the only reason we think animals are conscious is via analogy to humans. The truth is we really have no idea what consciousness is nor do we have a way to measure it, so any ethics based on a rigid definition of consciousness is very fallible.

13

Ortus14 t1_j34hehj wrote

I like ethics based on uncertainty. We don't know who is or isn't conscious, so it's safer to act as if more entities are conscious, in order not to hurt anyone.

3

Noname_FTW t1_j33bbbu wrote

Additionally: while we know how these AIs work on a technical level and can therefore explain their behavior, that doesn't mean this isn't consciousness. People tend to treat the human experience differently because we do not yet have an intricate understanding of how exactly the brain works. We know a lot about it, but not to the degree that we could recreate it.

The solution would be to develop an "intelligence level chart", so to speak. I heard this referenced somewhere in some sci-fi material but can't remember where.

The point is that we would start developing a system of levels in which we classify AIs and biological beings in terms of their intelligence.

It would look similar to how we classify autonomous vehicles today, with level 5 being entirely autonomous.

The chart could go from 0 to 10, where humans would fall somewhere around 5-8, with 10 being a super AGI and viruses being 0.

Each level would be assigned properties that are associated with more intelligence.

Such a system would help in assigning rights to the species in each category.

Obviously it would have to be under constant scrutiny to be as accurate and objective as possible.
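Just to make the idea concrete, the proposed chart could be sketched as a simple lookup. To be clear, every level assignment and category name below is my own invented illustration, not established science:

```python
# Hypothetical sketch of the proposed 0-10 "intelligence level chart".
# All level values and category names here are illustrative guesses.
CHART_EXAMPLES = {
    0: "virus",
    5: "human (lower bound)",
    8: "human (upper bound)",
    10: "super AGI",
}

def rights_category(level: float) -> str:
    """Map a 0-10 intelligence level to a coarse rights tier.

    The tier names and boundaries are invented for illustration only.
    """
    if level < 0 or level > 10:
        raise ValueError("level must be within 0-10")
    if level < 5:
        return "basic welfare protections"
    if level <= 8:
        return "full personhood rights"
    return "beyond human range: rights framework undefined"
```

So a virus at level 0 would fall under "basic welfare protections" (i.e. essentially none in practice), while anything in the 5-8 human band would get full personhood rights, regardless of whether it is biological or artificial.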

6

ProShortKingAction t1_j33gery wrote

Honestly, it sounds like it'd lead to some Nazi shit. Intelligence is incredibly subjective by its very nature, and if you ask a thousand people what intelligence is, you are going to get a thousand different answers.

On top of that, we already have a lot of evidence that someone's learning capability in childhood is directly related to how safe they felt growing up, presumably because the safer someone feels, the more willing their brain is to let them experiment and reach for new things. That typically means people who grow up in areas with higher violent crime rates, or people persecuted by their government, tend to score lower on tests and in general have a harder time at school.

If we take some numbers and claim they represent how intelligent a person is, and a group persecuted by a government routinely scores lower than the rest of that society, it would make it pretty easy for the government to claim that group is less of a person than everyone else. Not to mention the loss of personhood for people with mental disabilities.

Whatever metric we tie to AI quality is going to be directly associated with how "human" it is in the eyes of the general public, which is all fine while the AIs are pretty meh, but once there are groups of humans scoring worse than the AI, it's going to bring up a whole host of issues.

7

Noname_FTW t1_j33pquo wrote

>nazi shit

The system wouldn't classify individuals but species. You don't get your human rights by proving you are smart enough, but by being born a human.

There are certainly apes smarter than some humans. We still haven't given apes human rights, even though there are certainly arguments for doing so.

I'd say that is, in small part, because we haven't yet developed a science-based approach to studying the differences. There is certainly science in the area of intelligence, but it needs some practical application in the end.

The core issue is that this problem will arise sooner or later, once you have a talking android that seems human. Look at the movie Bicentennial Man.

If we nip the issue in the bud we can prevent a lot of suffering.

0

ProShortKingAction t1_j33rvrl wrote

Seems like it requires a lot of good faith to assume it will only be applied to whole species and not to whatever arbitrary groups are convenient in the moment

2

Noname_FTW t1_j33srty wrote

True. The whole eugenics movement, and its application by the Nazis, still casts a shadow. But if we don't act, we will end up in even more severely arbitrary situations than the one we are in now. We have great apes that can talk through sign language, and we still keep some of them in zoos. There is simply no rational approach being taken, just arbitrary rules.

0

Ortus14 t1_j34j2kh wrote

I don't think consciousness and intelligence are correlated. If you've ever been very tired and unable to think straight, you'll remember that your conscious experience was at least as palpable.

2

Noname_FTW t1_j34psxs wrote

I am not an expert in the field. I am simply saying that without such a classification we will run into a moral gray area, where we will eventually consider some AIs "intelligent" and/or deserving of protection while still exploiting other algorithms for labor.

1

Ortus14 t1_j34vlpq wrote

We build AIs to enjoy solving our problems. Those are their reward functions, so I'm not too worried about exploiting them: they will solve our problems because they genuinely enjoy doing so.

The only moral worry I have is creating AIs just to torture or hurt them, such as NPCs and "bad guys" in video games for the player to battle against.

1