
gaudiocomplex t1_j63fyf1 wrote

If you're interested in the alignment debate, it gets far, far more nuanced than this... And perfect human cooperation is a pipe dream. There will always be somebody with very little to lose and a lot to gain who is willing to take the gamble.

Lesswrong.com has a lot on this, including the odd/interesting notion that the first to get AGI should find ways to prevent others from attaining AGI.

48

redbucket75 t1_j63icvk wrote

Lesswrong has some interesting content for sure, but the whole "give us money or the all-knowing AI will know you're against it and torture you someday... Maybe today!" stuff is a pretty big turn-off

17

EulersApprentice t1_j64xwr8 wrote

Deploying standard anti-mind-virus.

Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.

16

gaudiocomplex t1_j63ihun wrote

Not sure what you're referencing! All the content I've read is free. 🤔

7

redbucket75 t1_j63lhti wrote

I took a gander and it doesn't seem to be an issue anymore. It's been many years since I checked it out; at the time, it was full of "how to let the AI know you're on its side" stuff that was just scamming donations. Folks were preoccupied with Roko's basilisk, an idea that had started there long before I'd heard of the community.

6

gaudiocomplex t1_j63mpc5 wrote

Ahhhh ok. I'm relatively new. Only about a year into lurking there 😁

3

Baturinsky t1_j63ohlx wrote

The only way for Humanity to survive the Singularity (i.e. stay alive and in charge of our future) is to become Aligned with itself: to make ourselves responsible and cooperative enough that no human capable of creating and unleashing an Unaligned ASI would actually do so. That means reducing the number of people who can do it, and/or making them responsible enough that they won't.

The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for us.

My opinion is that it can and should be done without making an ASI first. That is (1) a task we can start working on today, and (2) something that would push back the ETA of the DOOM even if we only solve it partially.

5

BassoeG t1_j64i4tr wrote

>The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for us.

Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.

9

Baturinsky t1_j64pm2e wrote

Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by lack of technology, the more it has to be limited by other means. And the Singularity is gonna increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.

3