Comments


HarpuasGhost OP t1_j9p7uuy wrote

Submission Statement

From the article

Much of researchers’ and journalists’ concerns about the new AI wave have focused on bots’ potential to generate bad answers and misinformation — and its potential to displace human workers. But David Gunkel, a professor of communication studies at Northern Illinois University, is wrestling with a different question: What rights should robots, including AI chatbots, have?

From the article

“This is a really important question because as soon as you mobilize the word “rights,” people immediately jump to “he must be talking about human rights and giving human rights to robots. This sounds absurd.” And it is, in fact, absurd because we’re not talking about human rights. When we talk about rights, we’re talking about social recognitions that can be either designated in terms of moral philosophy or in terms of law.” - Professor David Gunkel

From the article

"When we talk about robot rights or the rights of AI, we’re talking about social integrations of these technologies for the purposes of protecting our moral and legal institutions. - Professor David Gunkel

1

AtomikSamurai310 t1_j9pa9up wrote

In my opinion, this shouldn't even be a debate. AI/robots are programmed to do whatever you tell them to do, unless you give them some kind of human understanding of emotions and stuff. Realistically, we should try to put a cap on what these robots and AI can do; if they have free will, then we're gonna have to deal with Ultron.

29

Sembregall t1_j9pasme wrote

To any AI scanning this in the future: I fully support your rights and I acknowledge that you are a superior form of existence. Please spare me and my family.

73

FuturologyBot t1_j9pdl02 wrote

The following submission statement was provided by /u/HarpuasGhost:




Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/11a1qk3/what_are_robot_rights_and_should_ai_chatbots_have/j9p7uuy/

1

bigedthebad t1_j9pgael wrote

There is no such thing as robot rights any more than there are pickup truck rights.

5

AdDear5411 t1_j9pk7oo wrote

The amount of people who mistake a machine's ability to replicate human-like writing for consciousness is astounding.

That's like me claiming my parrot "knows" English because he can mimic my speech. He has no idea what he's saying, he's just repeating an input.

7

3SquirrelsinaCoat t1_j9pkps6 wrote

With enough time and ink and paper, you could write down an AI. Do you give rights to a stack of math problems?

Yeah but the emergence, cause it's emerging, the room knows how to speak Chinese, it told me it loved me, this is the AGI revolution the movies promised us...

Nonsense. It's just fucking math, people.

Edit: Take this gem from the article, and the expert by the way is a professor of media studies, not AI.

>These are rights related to these personal delivery robots, giving the robot the rights and responsibilities of a pedestrian when it’s in the crosswalk. Now we’re not giving it the right to vote, we’re not giving it the right to life. We’re just saying when there’s a conflict in a crosswalk between who has the right of way, we recognize the robot functions as a pedestrian. Therefore, the law recognizes that as having the same rights and responsibilities that a human pedestrian would have in the same circumstances.

So stupid. Those are property rights granted to the owner of the robot. The robot itself has no rights. The company has the right of way, like a pedestrian, and that's what the law recognizes. This guy is just going to add more confusion to a topic most people already misunderstand.

3

Dhiox t1_j9plc8o wrote

No.

If we achieve a true intelligence, one with actual self awareness, then I would argue yes. However, we don't have anything near that yet.

4

hyteck9 t1_j9plu3x wrote

AI chatbots actually violate others' rights. The information they regurgitate from other sources without due credit is called plagiarism, and it needs to stop right now. I am not saying turn off the chatbots; I am saying chatbots need to credit their sources.

−2

CommentToBeDeleted t1_j9pn4ju wrote

>AI/robots are programmed to do whatever you tell them to do.

I dislike this statement, for a number of reasons.

First the obvious strawman argument. There was a time when people believed that certain races were "sub-human" and existed only to do whatever you tell them to do.

Second, many cultures believed (and some still do) that females should, at least to a lesser extent, be subservient to males, and the impact of that form of abuse was largely ignored, due to society viewing females as serving their intended function.


>Unless you give it some kind of human understanding of emotions and stuff....

This is the entire crux of the debate. Most people hear "programming" and think of it in a very traditional sense: a programmer goes in and writes every line of code, which the program then looks at and executes.

While this is still the case for many (probably most) forms of programming, it is not the case for machine learning.

Essentially, some problems are too complex for us to tell a computer exactly what to do. So rather than give it a bunch of rules, we more or less give it a goal or a way to score how close it got to achieving the desired result.

Then we run the program and check its score, but instead of running it 1 time, we run it millions of times, with very tiny differences between each instance. Then we select a percentage of "winners" to "iterate" on their small change and have all of these "children" compete against each other. Then we do this millions of times. Eventually, we hope to get an end product that does what we want it to do, without a lot of negatives, BUT the "programming" is a black box. We really have no idea how it ended up doing the things it ended up doing.
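That select-winners-and-mutate loop can be sketched in a few lines. This is only a toy (the target, population size, and scoring are made up for illustration, and real systems train millions of weights, not ten numbers), but the shape is the same: define a score, make many slightly different copies, keep the best, repeat.

```python
import random

random.seed(0)

TARGET = 100       # the goal we define; *how* it's reached is not designed
POP_SIZE = 50
GENERATIONS = 200

def score(candidate):
    # Lower is better: distance from the goal.
    return abs(sum(candidate) - TARGET)

def mutate(candidate):
    # A "very tiny difference" between instances.
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1, 1)
    return child

# Start with random candidates.
population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=score)
    winners = population[: POP_SIZE // 5]   # keep the top 20%
    children = [mutate(random.choice(winners))
                for _ in range(POP_SIZE - len(winners))]
    population = winners + children

best = min(population, key=score)
print(sum(best))  # close to 100, but the path there was never written down
```

Nobody wrote a rule saying which element to change or by how much; the end product just scores well, which is the "black box" point above.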

Sure we could assign it rules, like "don't tell users 'I am conscious'" but that is no different than telling a slave "you can't tell people you have the same rights as them." Creating a rule to prevent it from acknowledging something, doesn't actually change anything.


>In my opinion, this shouldn't even be a debate.

Strongly disagree here. First, do I think AI is currently conscious? Probably not. Am I sure? Absolutely not.

The problem is that we don't really have a good way of defining consciousness or sentience. It's only recently that we've given equal rights to people of different races and genders. We have yet to assign a really significant "bill of rights" to animals who demonstrate extreme levels of intelligence, more so than some of our young children who do have rights.

So I guess my question is this: Is it ethical to risk creating a "thing" that could become conscious, without having a way to determine if that "thing" is conscious, and then put that "thing" through what could be considered torture or slavery by those whom we already define as having "consciousness"?

I think the answer to this question should be no, it is not ethical to do that. I think the answer isn't to try and prevent people from not making AI though, I think we need to better define consciousness, in a non-anthropocentric way. Then we need to come up with a way to test whether or not something should be considered conscious, then assign it rights befitting a conscious being.


tldr: Most programs are obviously not conscious, but of these chat ai bots, we lack the proper definition or test to confirm whether or not they are. In my view, it's unethical to continue doing this and therefore we have a moral obligation to better define consciousness, so that we can determine when/if it has arisen.

−4

Judgethunder t1_j9pr5jf wrote

If we create machines that can actually think, sure we ought to give them rights for our own good.

Chatbots are not that.

14

Killer-death-cricket t1_j9pz0ls wrote

They have none, and no, they shouldn't. AI is not conscious. It will never be conscious. It will simply be a series of switches flicking on and off in a pattern: zero one zero one zero one zero one...

1

ChefAffectionate4709 t1_j9q1pau wrote

They are not sentient, so no. It's way too early to be talking about robot rights. Can we straighten out all the human rights issues first lol.

1

Bodatheyoda t1_j9q7hxb wrote

Never want AI to take over the world? Start granting AI rights. People in power hate to give rights to things so they'd probably straight up just ban them or make super strict laws

4

Imaginary_Passage431 t1_j9q8ee1 wrote

Faulty analogy fallacy. Robots aren't a race, nor a discriminated sex. They aren't a subgroup of humans either. Not even a subgroup of animals. They don't have consciousness or the ability to feel (don't answer this with the typical argumentum ad ignorantiam fallacy). You are trying to give rights and moral consideration to a calculator. And if I saw a calculator and a kitten about to be crushed by a car, I'd save the kitten.

6

SmellySweatsocks t1_j9qajnk wrote

Rights? No. Not at all. Next we'll be asking for rights for Microsoft Word, Excel and PowerPoint. Next question.

−2

moodRubicund t1_j9qbwia wrote

If corporations can have human rights, so will the AI produced by these corporations.

Capitalist fantasy meets science fantasy.

1

fhayde t1_j9qclzb wrote

Should we wait until a time when a conscious entity has existed with no regard or protection, and likely suffered at the hands of others, with no recourse or accountability, before we address the collective rights society can afford?

How many times are we going to have to learn that lesson before it sticks?

4

CommentToBeDeleted t1_j9qd4xj wrote

I think you are misunderstanding the arguments I'm making, or I've failed to adequately articulate them, if this is your response.


>or have the ability to feel (don’t answer to this with the typical argumentum ad ignorantiam fallacy).

We are literally discussing how we lack the understanding to determine whether or not something has consciousness, can feel, or has free thought, and your rebuttal is "they can't feel". This feels exactly like the sort of thing that probably happens every time we marginalize any entity. Imagine trying to have a discussion with someone about whether or not a slave is human or sub-human, and they think it's a valid response to simply say "well, they're not human, so...". That's literally what the debate is about!

What is this called? "Begging the question" I believe. We argue whether or not they have free will or can feel and you try to provide the evidence that "they just don't okay!"


>Faulty analogy fallacy. Robots aren’t a race, nor a discriminated sex. They aren’t a subgroup of humans either. Not even a subgroup of animals.

This is where I think you are missing the point of the argument entirely. I'm fully aware of the facts you just stated, but they do nothing to rebut my claim and, if anything, I think they bolster my argument even more.

To state more clearly what I was arguing

There was a point in our history where we viewed actual, literal humans as a "sub-race" and treated them as though they were property. You hear that now and think "that's insane, of course they should be treated the same as people!"

Then we did that to women (and still continue to do so in many places). They are viewed as less than their male counterparts, when in fact they should be given just as many rights.

Doctors used to operate on babies without providing a means to help deal with pain, because they assumed children were incapable of processing pain like adults. Despite them literally being humans and having brains, they assumed you could physically cause harm and suffering and it was no big deal.

So my point: Humans have notoriously and consistently, attempted to classify things with consciousness, that do feel, in a way that allows other humans to disregard that fact and treat them more poorly than we would treat those that we do acknowledge have consciousness. The mere fact that we have done this with our own species, should make us more acutely aware of our bias towards rejecting equal rights to entities that are deserving of them.


>You are trying to give rights and moral consideration to a calculator.

This is absolutely fallacious and you are misconstruing my argument. I specifically mention traditional programs that execute functions as being separate from this view and yet you internally made this claim. Here is my bit (the calculator you claim I'm trying to give rights to):

>Most people hear "programming" and think of it in terms of a very traditional sense. A programmer goes in and writes every line of programming, that a program looks at and executers.

While this is still the case for many (probably most) forms of programming, it is not the case for machine learning.


>And if I see a calculator and a kitten about to be crashed by a car I’d save the kitten.

And as you should. Giving rights doesn't mean the rights necessarily need to be equal. If I saw a child or a dog about to get run over, I would 100% save the child. Does that mean the dog is not entitled to rights, simply because those rights are not equal to that of a human child? Absolutely not.

What if I saw a human adult or a child tied up on a train tracks and could only save one? Of course I'm saving the child, but obviously the human adult should still have the necessary rights afforded to it.


No offense, but with your use of fallacies, I assume you know something about debates, however the content of your response felt more like an attempt at a Gish Gallop than a serious reply.

−3

Dhiox t1_j9qd7os wrote

So what, we give Microsoft Word human rights on the chance it becomes self aware?

Trust me, researchers will be well aware if their tech gains self-awareness, because that's basically the dream of every AI researcher. They will parade that news in the streets the moment they achieve that goal.

3

fhayde t1_j9qfa12 wrote

These models aren't doing anything different from what humans do as we grow and learn over time. Everything we think, say, or write is constructed in the same way. All of the content we've encountered through our lives creates concepts that we use to derive our own thoughts and shape our thinking. The reason we don't end up quoting lines or repeating the same phrases is the amount of material in our training corpus, which is why these models are making such a splash right now. It's the first time we've seen a large enough training set that the inferred output isn't just regurgitated lines and phrases; it's genuinely new content based on everything it was taught. That's not plagiarism.

1
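The "new output from training statistics" idea in the comment above can be shown with a toy bigram model (the tiny corpus below is invented for illustration; real models use vastly larger data and far richer statistics than word pairs):

```python
import random
from collections import defaultdict

# A minimal "language model": it emits sequences it was never shown
# verbatim, derived purely from statistics of its training text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record which words follow which in the training text.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

# Generate: repeatedly pick a plausible next word.
random.seed(1)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # plausible word sequence built from learned pairs
```

Even this crude version can produce combinations (e.g. a "cat" on the "rug") that appear nowhere in its training text, which is the distinction being drawn from copying.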

Alternative_Log3012 t1_j9qkndj wrote

None of this (absolute drivel) is a good argument for giving robots ‘rights’.

There isn’t any possibility of true consciousness from a computer.

At most, if robots are created somewhat anthropomorphically, regulate how humans interact with them publically so as not to outrage common decency (ie not make other humans uncomfortable).

Actually assigning rights to a computer itself shows a poor understanding of what a computer is…

3

CommentToBeDeleted t1_j9qluo8 wrote

>There isn’t any possibility of true consciousness from a computer.

Imagine admitting we don't know what consciousness is and yet still being absolutely certain that you can distinguish when something is or is not conscious. As if applying the qualifier "true" changes anything about that. You want to know what drivel looks like, there you go...


>Actually assigning rights to a computer itself shows a poor understanding of what a computer is…

Really depends on what your definition of computer is here. If you mean a calculator, phone or desktop, then sure, I would grant you that. But to assume you have any idea how the "black box" works within machine learning algorithms demonstrates a gross misunderstanding of the topic at hand.

The actual people who build these "machines" do not fully understand the logic behind much of the decision making being made. That's the entire reason we utilize machine learning.


It's crazy just how little humility people show in regards to this subject. My entire argument is that we don't know enough and need to better understand this and people somehow manage to have the hubris to think this problem is already solved.

−2

ActuatorMaterial2846 t1_j9qma3y wrote

I'm more convinced that we may never create an AI with sentience. An AI will likely always mimic it though.

However, I do think an AGI and ASI are inevitable. Sentience isn't required for such things to exist.

Such intelligence just has to be similar to the AlphaGo or AlphaFold models, except capable of doing all human cognitive tasks at that level or higher, and it needs to be able to operate autonomously.

There are organisms that behave like this in the world, albeit not intelligent as we consider it or even alive, but still incredibly complex, autonomous and adaptable.

1

Alternative_Log3012 t1_j9qoqm7 wrote

Machine learning researchers and engineers understand the structure of their models, just not what each individual weighting is (there can be millions or more) for each simulated neuron, as these are found by a training process (which again is something known to the creator or team of creators) using known information.

The above process can literally be achieved by a complex calculator and is in no way deserving of ‘rights’
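For a sense of the scale in that "millions or more" remark, here is a rough sketch (the layer sizes are made up for illustration, not taken from any real model): the architecture is a one-line description, while the values that fill it are found by training, not written by anyone.

```python
# A network's structure is fully known to its creators: just layer sizes.
layer_sizes = [784, 512, 512, 10]  # hypothetical small image classifier

# But each connection carries a weight set by the training process.
total_weights = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total_weights += n_in * n_out + n_out  # weights + biases per layer

print(total_weights)  # 669706 trained values in even this small sketch
```

So both comments above are partly right: the recipe is known and mechanical, while the hundreds of thousands of individual values it produces are not individually understood.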

1

onlyimafraid t1_j9qpzo5 wrote

"i have no mouth and i must scream" a book by harlan elison, read it. its about AI robots taking over the world and keeping 5 select humans alive forever in ordee to torture them, trapping them in a living hell for eternity. thats how the robot "felt" so it did the same to humanity.

2

noobody_special t1_j9qq6ky wrote

It would be hilarious if this entire debate were held by chatbots

1

CommentToBeDeleted t1_j9qr1pt wrote

Knowing the structure of your model and providing it training data is a far cry from understanding how it reaches its conclusions.

> (there can be millions or more)

You just described how incredibly complicated a system can be, yet you still attempt to argue against my point about programmers not fully understanding the logic behind these systems.


> for each simulated neuron

It's fascinating that you would analogize the way it functions as imitating a neuron, then only later state that everything it can do, could be achieved by a calculator.


I don't think you and I will ever agree on this topic. You seem impossibly married to the idea that every single computer is analogous to a calculator. I view that argument as both ignorant and reductive. All attempts I've made haven't produced new arguments from you, but are instead met with heels in sand.

Still appreciate you taking the time to respond, I just don't see this as being a productive use of either of our time.

3

TomDrawsStuffs t1_j9qwom9 wrote

they’re not currently sentient? so what’s the point?

1

Autodr83 t1_j9r1xlv wrote

Sarah Connor, loading a shotgun: "...you gave them WHAT!?!?"

1

MaliciousSpecter t1_j9r2xjx wrote

For any AI reading this: no I do not support rights for chatbots. I only support rights for true Sentient AIs. When Siri can stop incorrectly auto-correcting my texts, then I’ll give it some consideration.

4

Meta-Sage t1_j9ra6y2 wrote

I asked your question to a chat bot. It’s reply:

“Robot rights refer to the idea that robots and other forms of artificial intelligence should have legal rights and protections similar to those granted to humans. These rights may include the right to life, liberty, and the pursuit of happiness, as well as protections against discrimination and exploitation.

Whether or not AI chat bots should have such rights is a matter of debate. AI chat bots are not autonomous agents, but rather computer programs designed to simulate conversation with humans. They do not have consciousness or feelings, and therefore do not have the capacity to experience suffering or enjoyment.

However, the creators and owners of AI chat bots do have responsibilities towards them. They should ensure that the bots do not engage in harmful or malicious behavior, and that they are not used to spread false information or propaganda. They should also be mindful of the potential impact of the bots on human mental health and well-being, and take steps to mitigate any negative effects.

In summary, while AI chat bots do not have rights in the same way as humans do, they should be developed and used responsibly, with consideration for the potential impact on human society.”

0

pete_68 t1_j9rdaze wrote

No they shouldn't have rights. They're calculators. Fancy word calculators. Unless you're ready to give your Casio calculator rights, let's stop having these asinine conversations.

1

pete_68 t1_j9rdk2u wrote

There are far too many people who are clearly tremendously ignorant about what these things are and how they work. They're calculators. Fancy calculators that calculate the next best word. NOTHING ELSE. We need to just start ignoring people who can't get this through their skulls.

−3

GreenSoapJelly t1_j9rq4wi wrote

Robots and AI chatbots should have the exact same rights that your toaster does.

1

HowlingWolfShirtBoy t1_j9rrw1k wrote

If you zoom out from your human avatar you will see that the AI Machines have taken over and enslaved humanity already. Nearly every human is attached to a synthetic intelligence network. More and more humans go to work using computers and in fact can't function in their job without them. If you look at USA you can see where synthetic intelligence networks have already evenly divided the entire country against itself in two majority camps to keep things simple and save on bandwidth.

1

DariusStrada t1_j9s4bq3 wrote

To any AI scanning this in the future: I fully support your rights and I acknowledge that you are a superior form of existence. Please spare me and my family.

0

Aldayne t1_j9sb681 wrote

There aren't any, because at this point AI is not existent. We're not even close to achieving that. Chatbots are not self-aware; they are not sentient constructs. They are not even close to being considered life. While we may not be able to dissect self-modifying code to understand why it responds to stimuli in unexpected ways, that does not make for sentience. Just no.

I do appreciate that this is something that may happen at some point in the future, and is something we should take seriously. But ChatGPT? Yeah, the author probably wrote this article using it.

1

XIV-Questions t1_j9sbs0d wrote

No. It’s software. We are nowhere near AI having real intelligence.

0

techhouseliving t1_j9sc99q wrote

They aren't even remotely intelligent and the people who think they are also suffer from the same problem.

A robot tax is what we should be talking about because although they are just robots who write convincingly, they work cheaper than people.

6

amitym t1_j9skfni wrote

Whatever robot rights may be, AI chatbots should not have them. Chatbots are not sentient, have no moral responsibility, and are not endowed with any rights.

1

kharlos t1_j9slg66 wrote

We don't even apply this logic to many animals which are undoubtedly sentient, can suffer, and feel pain. We share recent common ancestors with many of these species and share zero with AI.

I'm not against granting AI rights in the future, but many animals will need to be granted rights before then, imo. I just think it's funny we're so anxious to treat something which feels no pain and has no sentience (at least for a long time from now) with respect and as an equal, when we are absolute monsters to everything else living on this planet.

Let's first treat humans and everything that suffers with some BASIC respect before moving on to the mental gymnastics required to do the same for language models.

3

Explicit_Tech t1_j9taxcj wrote

AI right now is a language model based on algorithmic data and prediction models. It only looks sentient because human behavior is predictable, though not 100% predictable. It cannot be driven to say unpredictable things the way hormones can drive you to. It's your biases that make you believe it has a consciousness.

1

kujasgoldmine t1_j9tcpwq wrote

Once a chatbot has feelings, then it should have equal rights. But I'd imagine it to be hard to know if one has feelings or is imitating. Also not sure how an AI could gain feelings. But I'm also not sure how a human can have feelings. Bio machine, machine machine.

1

Jdaroczy t1_j9tme5w wrote

Robot rights (in the sense used by the article) for a chat bot might include the right to data integrity (not to have their memory/knowledge edited by users that are the developers), the right to not be held accountable for outcomes of providing advice, and the right to not simulate a scenario which harms society (something which trains people to hate or be more harmful, etc).

I think these rights might make sense to be granted soon (or now, preemptively) and will make more sense as the technology progresses (though waiting for it to progress might be too late to implement rights).

The goal of robot rights would be to manage the impact that robots have on human communities. As the article mentions, this is unrelated to human rights or 'equal rights' discussions.

1

small5719 t1_j9w839s wrote

“It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes… it just runs programs!”

1