Dhiox t1_j9plc8o wrote

No.

If we achieve a true intelligence, one with actual self-awareness, then I would argue yes. However, we don't have anything close to that yet.

4

fhayde t1_j9qclzb wrote

Should we wait until a conscious entity has already existed with no regard or protection, and likely suffered at the hands of others with no recourse or accountability, before we address the rights society can collectively afford it?

How many times are we going to have to learn that lesson before it sticks?

4

Dhiox t1_j9qd7os wrote

So what, we give Microsoft Word human rights on the chance it becomes self-aware?

Trust me, researchers will be well aware if their tech gains self-awareness, because that's basically the dream of every AI researcher. They will parade that news in the streets the moment they achieve that goal.

3

kharlos t1_j9slg66 wrote

We don't even apply this logic to many animals which are undoubtedly sentient, can suffer, and feel pain. We share recent common ancestors with many of these species and share zero with AI.

I'm not against granting AI rights in the future, but many animals will need to be granted rights before then, imo. I just think it's funny that we're so anxious to treat something which feels no pain and has no sentience (at least for a long time from now) with respect and as an equal, when we are absolute monsters to everything else living on this planet.

Let's first treat humans and everything that suffers with some BASIC respect before moving on to the mental gymnastics required to do the same for language models.

3

Inn_Progress t1_j9qo5c8 wrote

Maybe let's first give rights to the animals that are killed every day so you can eat a steak, and then we can talk about this scenario.

−1

ActuatorMaterial2846 t1_j9qma3y wrote

I'm increasingly convinced that we may never create an AI with sentience. AI will likely always just mimic it, though.

However, I do think AGI and ASI are inevitable. Sentience isn't required for such things to exist.

Such an intelligence just has to be similar to the AlphaGo or AlphaFold models, except capable of performing all human cognitive tasks at that level or higher, and it needs to be able to operate autonomously.

There are organisms in the world that already behave like this, albeit not intelligent as we understand it, or arguably even alive, but still incredibly complex, autonomous, and adaptable.

1