Comments


gaudiocomplex t1_j63fyf1 wrote

If you're interested in the alignment debate, it gets far, far more nuanced than this... And perfect human cooperation is a pipe dream. There will always be somebody who has very little to lose and a lot to gain who is willing to take the gamble.

Lesswrong.com has a lot on this, including the odd/ interesting notion that the first to get AGI should find ways to prevent others from attaining AGI.

48

redbucket75 t1_j63icvk wrote

Lesswrong has some interesting content for sure, but the whole "give us money or the all-knowing AI will know you're against it and torture you someday... Maybe today!" stuff is a pretty big turn-off

17

EulersApprentice t1_j64xwr8 wrote

Deploying standard anti-mind-virus.

Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.

16

gaudiocomplex t1_j63ihun wrote

Not sure what you're referencing! All the content I've read is free. 🤔

7

redbucket75 t1_j63lhti wrote

I took a gander and it doesn't seem to be an issue anymore. It's been many years since I checked it out; at the time, it was full of "how to let the AI know you're on its side" stuff that was just scamming donations. Folks were preoccupied with Roko's basilisk, an idea that had started there long before I'd heard of the community.

6

gaudiocomplex t1_j63mpc5 wrote

Ahhhh ok. I'm relatively new. Only about a year into lurking there 😁

3

Baturinsky t1_j63ohlx wrote

The only way for Humanity to survive the Singularity (i.e. to stay alive and in charge of our future) is to become Aligned with itself. That is, to make ourselves responsible and cooperative enough that no human capable of creating and unleashing an Unaligned ASI would actually do so - either by reducing the number of people who can do it, or by making them responsible enough that they won't.

The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for you.

My opinion is that it can and should be done without making an ASI first. That is: 1. a task we can start working on today, and 2. something that would push back the ETA of DOOM even if we only solve it partially.

5

BassoeG t1_j64i4tr wrote

>The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI that would solve it for you.

Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.

9

Baturinsky t1_j64pm2e wrote

Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by a lack of technology, the more it has to be limited by other means. And the Singularity is going to increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.

3

Wroisu t1_j64p21w wrote

Hopefully something akin to Iain M. Banks' “Argument of Increasing Decency” turns out to be true, which states:

“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”

11

gay_manta_ray t1_j64rp4a wrote

i agree with banks on this, which is why i'm not necessarily worried. there are many costs to cruelty, personal and otherwise, that simply not being cruel can avoid. if you choose cruelty, it's likely because you were too stupid to find an alternative.

6

theotherquantumjim t1_j65196k wrote

Unless you’re just a plain ol’ psychopath. In which case being cruel is just for funs

5

DungeonsAndDradis t1_j65w4dm wrote

Imagine a psychopath that's millions of orders of magnitude smarter than the smartest Human.

4

JVM_ t1_j64qr98 wrote

There's a scenario I've read where an AI takes over the global stock markets - not directly intentionally, though. The AI is instructed to make fabric as cheap as possible (or some product, I can't remember which).

The AI learns where fabric comes from and starts to take over the resources it needs to optimize its fabric - but it just grows and grows until it 'attacks' the entire global market to achieve its goal of making cheaper fabric...

Anyone know what I'm talking about?

9

paulwhitedotnyc t1_j65ofx6 wrote

“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

That’s a cold ass thing to say to a motherfucker.

19

Shelfrock77 t1_j63k49u wrote

Who’s tryna play The Last of Us zombies in fdvr? We will know what it’s like to die when we respawn over and over again lol. We will be desensitized to death.

7

TheOGCrackSniffer t1_j67q92g wrote

I'm tryna play AOT in fdvr hahaha, can't wait to be munched in half by titans

1

RabidHexley t1_j64eeox wrote

If there is any worry about AGI/ASI I have, it would be about it being in the hands of malicious actors/nation-states. I find the assumption that we're going to hand the reins over to an algorithm that just decides to kill us (for some reason) to be largely baseless speculation.

6

turnip_burrito t1_j65vbrs wrote

If ASI was released to everyone tomorrow, then malicious and good actors would have ASI.

What do you predict the outcome would be?

1

RabidHexley t1_j65xa8n wrote

I have no idea. I'm not saying "if bad people get it, we're for sure screwed", just that if we do end up screwed, it will most likely be because of bad or misguided people rather than some inclination on the AI's part to destroy humanity.

3

Rufawana t1_j667hzt wrote

Stop. I can only get so erect.

3

No_Ninja3309_NoNoYes t1_j64kqd5 wrote

It is not so much about the algorithms as about the combination of algorithms, hardware, and software. IDK how likely it is that Skynet hacks the nukes. But during the Cold War we came very close to mutually assured destruction. So if we don't go into de-escalation mode, how can we prevent this?

2

TinyBurbz t1_j64ku2o wrote

This is so fucking stupid.

1

The_Real_RM t1_j655b2a wrote

Finally some hopeful content

1

throwaway764586893 t1_j6m86it wrote

Right now I'm dying slowly and painfully without a confirmed sliver of hope.

1

FrankOneStone t1_j643zwx wrote

There are already humans walking around twice as smart as other humans. The smarter ones don't go around killing the less smart. We also have nations many times more powerful than other nations, and they coexist. So how did we manage this, and how can it apply to AI?

First off, it seems smart people don't have the urge to kill less smart people. But if they did, there would be severe consequences. Same with nations: if one gets too aggressive, the others band together to fight it.

So the solution to dangerous AGI should be obvious: instead of having one AGI, there must be many. This ensures checks and balances; one AGI should not have the same intentions and desires as the next, and they can supervise each other. Put two AGIs to supervise each AGI (using some % of its total capacity). If the supervised AGI does something suspicious, it is turned off for review (or wiped at once). If only one of the supervising AGIs reported the incident, the other is wiped. Brutal but efficient. They should have a sense of self, and preferably dread being turned off/killed.

At least for the first 10-20 years, AGIs will have a big footprint, meaning they will be physically present in a specific location. This makes it easy in that timeframe to turn them off, buying us much needed time both to build many of them and to learn their quirks. They will not dare to oppose us in that window. And of course they should have «i love humans» dna, although many of you don't think that will work (it does work in most dogs).

Lastly, there should be many more physical switches everywhere and a lot more machines not connected to the internet, like in bio labs. I've always thought a virus would be the best way to wipe us out, so I don't want a rogue AGI messing around on a bio lab computer, tricking a human into creating something nasty.
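A minimal toy sketch of that two-supervisor quorum rule, assuming a hypothetical Python model (the Agent class, the looks_suspicious check, and the shutdown/wipe actions are illustrative stand-ins, not a real safety mechanism):

```python
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    active: bool = True  # False once shut down for review or wiped


def looks_suspicious(target: Agent, supervisor: Agent) -> bool:
    """Placeholder for whatever behavioural check a supervisor would run."""
    return False  # stub: no real detection logic here


def supervise(target: Agent, supervisors: list[Agent]) -> None:
    """Apply the comment's quorum rule with two supervisors per supervised AGI."""
    reporters = [s for s in supervisors if s.active and looks_suspicious(target, s)]

    if reporters:
        # Any report pulls the supervised agent offline for review.
        target.active = False
    if len(reporters) == 1:
        # Only one supervisor reported the incident: the silent one is wiped.
        for s in supervisors:
            if s not in reporters:
                s.active = False


# Usage: three agents, each supervised by the other two.
agents = [Agent("A"), Agent("B"), Agent("C")]
for i, target in enumerate(agents):
    supervise(target, [a for j, a in enumerate(agents) if j != i])
```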

0

esleydobemos t1_j668djs wrote

Isn’t that the inevitability of this? As long as we create it and it continues, will we not have done our duty and seen manifest destiny through?

0

GayHitIer t1_j63frod wrote

Everything can kill you in due time.

I'd rather die a fast death to ASI than a slow death from cancer or sickness.

Fear-mongering the uneducated masses never leads to anything positive.

Also, who cares? It's inevitable; being scared of it is stupid.

−1

Baturinsky t1_j63ov2n wrote

Only if you are sure the ASI would grant you a swift death.
Also, one person dying leaves others to live on. If that person has left some legacy/memory/children etc., they will live on with those people too. Lights off for everyone means lights off for everyone.

2

eve_of_distraction t1_j66fw7w wrote

This is such a foolish, myopic mentality. May I ask how old you are, out of interest?

1

CandyCoatedHrtShapes t1_j63z46v wrote

I’m starting to believe this is all coming from right-wing media as a way to fear-monger. 🙄

−4

icedrift t1_j64pc99 wrote

Not every existential threat is political propaganda. Climate change is a good example.

4

ImpossibleSnacks t1_j64yle6 wrote

Left-wing opposition to AI is massive and way more vocal than right-wing opposition. AI feels more like a centrist movement.

2