Calm_Bonus_6464 t1_j1sfta7 wrote

You're assuming AI would be benevolent enough to delegate power to humans. I see no reason to believe that in a post-singularity world. What's stopping AI from deciding what's best for humanity if it's infinitely more intelligent than us?

What you're describing is how governance will work post-AGI. By that point it will just be recommendations. But ASI and the Singularity change everything.

1

4e_65_6f OP t1_j1sgmei wrote

> What's stopping AI from deciding what's best for humanity if its infinitely more intelligent than us?

Well, so far the only "goal" it has been programmed to follow is human instructions. It does that even when it's uncalled for (car-hotwiring suggestions, for instance). I can totally see that being a reality in your system, where you're allowed to be very stupid in a very smart way using AI.

1