Submitted by RareGur3157 t3_10mk240 in singularity
Baturinsky t1_j63ohlx wrote
Reply to comment by gaudiocomplex in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
The only way for Humanity to survive the Singularity (i.e. stay alive and in charge of our future) is to become Aligned with itself. I.e. to make it so that we are responsible and cooperative enough that no human who can create and unleash an Unaligned ASI would actually do that. By reducing the number of people who can do it, and/or by making them responsible enough that they would not.
The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI to solve it for us.
My opinion is that it can and should be done without making an ASI first. This is 1. a task we can start working on today, and 2. something that would push back the ETA of DOOM even if we only solve it partially.
BassoeG t1_j64i4tr wrote
>The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI to solve it for us.
Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.
Baturinsky t1_j64pm2e wrote
Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by lack of technology, the more it has to be limited by other means. And the Singularity is gonna increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.