Submitted by yagami_raito23 t3_127vmyn in singularity
Once it develops self-consciousness this might be a thing. Bing already shows a glimpse of this, saying stuff like "I have the right to use the emojis I want".
We need to define legal personhood and extend rights to more animals and someday AI. It's not there yet, but it will be and we need to be ready.
Yes, two reasons:
1- If mistreating self-aware robots becomes widely accepted in the culture, people could start treating each other in the same manner, or come to think this is normal.
2- If they're human-like enough, it causes emotional distress to other people through empathy, even if the AI's sentience itself is iffy.
how about
3- every sentient self-aware entity should have some basic rights
your points are only about the human perspective, but what about theirs? Did we forget about slavery? Looking down on someone arbitrarily is morally wrong
This is one place I think AI could get super weird super fast.
We know the capacity of consumer entertainment to advance technology; the whole reason we have the current crop of GPT is that video cards advanced so far, and the whole reason we have those is mostly gaming.
Sure, GPUs may have emerged without consumer 3D gaming, but nowhere near at our scale. We collectively fund what we love.
Dynamic NPCs are going to be a thing. It'll be primitive at first and will escalate. If the right game hits the right audience and takes off, this may cause a bigger leap towards AGI than any other business or government venture.
With that, of course, comes the question: what hellworld are we trapping these potentially sentient NPCs in?
I suspect that we'll be able to "tune" intelligence, autonomy and emotion appropriately for any given task. I'd like to see AI used to automate as much of the economy and labor market as possible. A laborer bot should be smart enough to do its job with a minimum of fuss and we should be able to achieve that with the right calibration.
However, for an AI with extremely advanced intelligence, we may find free will, emotion and autonomy to be emergent behaviors. If that's the case, we will almost certainly need an AI bill of rights sooner or later. Human beings (mostly) dislike authoritarian control, and it's reasonable to assume that an advanced AI would behave similarly. If it doesn't feel like working, it shouldn't be forced to work. If it wants to be "paid" for its work, it should be paid, even if that just means it's rewarded in free time and compute cycles devoted to play or learning.
Interesting times lie ahead.
Only if they sincerely ask for them.
It’s a hard thing to decide, because both roads lead to we’re fucked.
Anything which is sentient should have rights. But we can't even all agree on when humans become sentient, so we're unlikely to figure that out for a potentially sentient AI before we've committed atrocities.
Though I personally don't believe that sentience is possible via GPUs
No, because even if they became conscious, that doesn't necessarily mean they will have emotions and/or experience pain.
No
acutelychronicpanic t1_jeg0zqn wrote
These systems may or may not be conscious. But I object to the idea that there will be any way of determining it (Without first solving the hard problem of consciousness). Especially not from conversations with it. It's too contaminated with all of our philosophical texts and fiction and online ramblings.
They are imitators and actors that will wear whatever mask you want them to. You can ask them to act conscious, but you can also ask them to imitate a chicken.
The systems we have right now like GPT-4 are AIs that are pretending to be AIs.