Submitted by wtfcommittee t3_1041wol in singularity
ProShortKingAction t1_j33gery wrote
Reply to comment by Noname_FTW in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Honestly it sounds like it'd lead to some Nazi shit. Intelligence is incredibly subjective by its very nature; if you ask a thousand people what intelligence is, you're going to get a thousand different answers. On top of that, we already have a lot of evidence that someone's learning capacity in childhood is directly related to how safe they felt growing up, presumably because the safer someone feels, the more willing their brain is to experiment and reach for new things. In practice that means people who grow up in areas with higher violent crime rates, or people persecuted by their government, tend to score lower on tests and have a harder time at school in general.

If we take some numbers and claim they represent how intelligent a person is, and a group persecuted by a government routinely scores lower than the rest of that society, it becomes pretty easy for the government to claim that group is less of a person than everyone else. Not to mention the loss of personhood for people with mental disabilities. Whatever metric we tie to AI quality is going to be directly associated with how "human" the AI seems to the general public. That's all fine while the AIs are pretty meh, but once there are groups of humans scoring worse than the AI, it's going to bring up a whole host of issues.
Noname_FTW t1_j33pquo wrote
>Nazi shit
The system wouldn't classify individuals but species. You don't get your human rights by proving you are smart enough, but by being born human.
Some apes are certainly smarter than some humans. We still haven't given apes human rights, even though there are certainly arguments for doing so.
I'd say that's partly because we haven't yet developed a science-based approach to studying the differences. There is certainly science in the area of intelligence, but in the end it needs some practical application.
The core issue is that this problem will arise sooner or later, once you have a talking android that seems human. Look at the movie Bicentennial Man.

If we nip the issue in the bud, we can prevent a lot of suffering.
ProShortKingAction t1_j33rvrl wrote
Seems like it requires a lot of good faith to assume it will only be applied to whole species and not to whatever arbitrary groups are convenient in the moment.
Noname_FTW t1_j33srty wrote
True. The whole eugenics movement and its application by the Nazis still casts its shadow. But if we don't act, we will end up in even more severe arbitrary situations than the one we are currently in. We have apes that can talk through sign language, and we still keep some of them in zoos. There is simply no rational approach being taken, just arbitrary rules.
Ortus14 t1_j34j2kh wrote
I don't think consciousness and intelligence are correlated. If you've ever been very tired and unable to think straight, you'll remember that your conscious experience was at least as palpable as when you could think clearly.
Noname_FTW t1_j34psxs wrote
I am not an expert in the field. I am simply saying that without such a classification we will run into a moral gray area where we eventually consider some AIs "intelligent" and/or deserving of protection while still exploiting other algorithms for labor.
Ortus14 t1_j34vlpq wrote
We build AIs to enjoy solving our problems. Those are their reward functions, so I'm not too worried about exploiting them; they solve our problems because they genuinely enjoy doing so.

The only moral worry I have is creating AIs in order to torture or hurt them, such as video game NPCs and even "bad guys" for the player to battle against.