
sticky_symbols t1_itqw9gh wrote

The hell scenario seems quite unlikely compared to the extinction scenario. We'll try to get its goals to align with ours. If we fail, it's unlikely to be interested in making things worse for us. And there are very few true sadists who'd torment humanity forever if they achieved unlimited power by controlling AGI.

11

rushmc1 t1_itrqs4b wrote

> there are very few true sadists who'd torment humanity forever if they achieved unlimited power

<looks around at 21st century American society, looks at you doubtfully>

7

sticky_symbols t1_itrxr18 wrote

Yeah, I see it differently, but I could be wrong. Who do you think enjoys inflicting suffering on people who've never wronged them?

Wanting some sort of superiority or control is almost universal, but that alone wouldn't come anywhere near a hell outcome.

6

rushmc1 t1_its07d9 wrote

Going to have to agree to disagree strongly. We've observed a lot about human nature over the past decade+.

4

sticky_symbols t1_its22wz wrote

I've been observing closely, too. That's why I'm curious where the disagreement arises.

2

Mooblegum t1_itu204o wrote

If AI treats us the way we treat animals (an inferior species to farm, kill, drive to extinction, and use for labor), it will be close to hell for us.

2

StarChild413 t1_itx3jb1 wrote

But that raises several questions. Why would it treat us this way at all, unless out of some infinite regress compelled by our own behavior, meaning it too would eventually suffer the same fate at the hands of its own creations? Which species would it treat us like, or would it treat us like all of them in proportion? And if we stopped treating animals that way, would the AI only stop after the same amount of time?

1