Submitted by RareGur3157 t3_10mk240 in singularity
Comments
redbucket75 t1_j63icvk wrote
Lesswrong has some interesting content for sure, but the whole "give us money or the all-knowing AI will know you're against it and torture you someday... Maybe today!" stuff is a pretty big turn-off.
EulersApprentice t1_j64xwr8 wrote
Deploying standard anti-mind-virus.
Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.
Inevitable_Snow_8240 t1_j67pduv wrote
It’s such a dumb theory lol
gaudiocomplex t1_j63ihun wrote
Not sure what you're referencing! All the content I've read is free. 🤔
redbucket75 t1_j63lhti wrote
I took a gander and it doesn't seem to be an issue any more. It's been many years since I checked it out; at the time, it was full of "how to let the AI know you're on its side" stuff that was just scamming donations. Folks were preoccupied with Roko's basilisk, an idea that had started there long before I'd heard of the community.
gaudiocomplex t1_j63mpc5 wrote
Ahhhh ok. I'm relatively new. Only about a year into lurking there 😁
a_butthole_inspector t1_j654xv3 wrote
That’s a reference to Roko’s basilisk I think
Baturinsky t1_j63ohlx wrote
The only way for humanity to survive the Singularity (i.e. stay alive and in charge of our future) is to become Aligned with itself: to make ourselves responsible and cooperative enough that no human who could create and unleash an Unaligned ASI would actually do so. That means reducing the number of people who can do it, and/or making them responsible enough that they wouldn't.
The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI to solve it for us.
My opinion is that it can and should be done without making an ASI first. That is (1) a task we can start working on today, and (2) something that would push back the ETA of DOOM even if we only solve it partially.
BassoeG t1_j64i4tr wrote
>The LessWrong crowd assumes that this task is so insurmountably hard that it is only solvable by creating a perfectly Aligned ASI to solve it for us.
Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.
Baturinsky t1_j64pm2e wrote
Duh, of COURSE it does. That's the price of progress. The less people's destructive potential is limited by lack of technology, the more it has to be limited by other means. And the Singularity is gonna increase people's destructive potential tremendously.
If we make an Aligned ASI and ask it to make decisions for us, I doubt it will find any non-totalitarian solution.
Wroisu t1_j64p21w wrote
Hopefully something akin to Iain M. Banks' “Argument of Increasing Decency” turns out to be true, which states:
“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”
gay_manta_ray t1_j64rp4a wrote
i agree with banks on this, which is why i'm not necessarily worried. there are many costs to cruelty, personal and otherwise, that simply not being cruel can avoid. if you choose cruelty, it's likely because you were too stupid to find an alternative.
theotherquantumjim t1_j65196k wrote
Unless you’re just a plain ol’ psychopath. In which case being cruel is just for funs
DungeonsAndDradis t1_j65w4dm wrote
Imagine a psychopath that's millions of orders of magnitude smarter than the smartest Human.
Wroisu t1_j64sid3 wrote
Agreed
chaseizwright t1_j66gv97 wrote
Tell that to Ted Kaczynski
JVM_ t1_j64qr98 wrote
There's a scenario I've read where AI takes over the global stock markets, though not deliberately. They instruct the AI to make fabric as cheap as possible (or some other product, I can't remember which).
The AI learns where fabric comes from and starts to take over the resources it needs so that it can optimize its fabric, but it just grows and grows until it 'attacks' the entire global market to achieve its goal of making cheaper fabric...
Anyone know what I'm talking about?
paulwhitedotnyc t1_j65ofx6 wrote
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
That’s a cold ass thing to say to a motherfucker.
chloesmiddlefinger t1_j64rhlj wrote
JVM_ t1_j64sfdu wrote
Ya, I think that's it. Thanks!
Shelfrock77 t1_j63k49u wrote
Who’s tryna play The Last of Us zombies in fdvr? We will know what it’s like to die when we respawn over and over again lol. We will be desensitized to death.
TheOGCrackSniffer t1_j67q92g wrote
I'm tryna play AOT in fdvr hahaha, can't wait to be munched in half by titans
RabidHexley t1_j64eeox wrote
If there is any worry I have about AGI/ASI, it would be about it being in the hands of malicious actors/nation-states. I find that assuming we're going to hand the reins over to an algorithm that just decides to kill us (for some reason) is largely baseless speculation.
turnip_burrito t1_j65vbrs wrote
If ASI was released to everyone tomorrow, then malicious and good actors would have ASI.
What do you predict the outcome would be?
RabidHexley t1_j65xa8n wrote
I have no idea. I'm not saying "if bad people get it we're for sure screwed" just that if we do end up screwed it will be most likely because of bad or misguided people, rather than some inclination to destroy humanity on the AI's part.
Rufawana t1_j667hzt wrote
Stop. I can only get so erect.
No_Ninja3309_NoNoYes t1_j64kqd5 wrote
It is not so much about the algorithms as the combination of algorithms, hardware, and software. IDK how likely it is that Skynet hacks the nukes. But during the Cold War we came very close to mutually assured destruction. So if we don't go into de-escalation mode, how can we prevent this?
[deleted] t1_j647zb2 wrote
[deleted]
TinyBurbz t1_j64ku2o wrote
This is so fucking stupid.
The_Real_RM t1_j655b2a wrote
Finally some hopeful content
Blue_Congo t1_j67p3ew wrote
I live in the woods. Not worried.
Ginkotree48 t1_j67pq3x wrote
You are joking
Blue_Congo t1_j67pwc9 wrote
Nope.
throwaway764586893 t1_j6m86it wrote
Right now I'm dying slowly and painfully without a confirmed sliver of hope.
FrankOneStone t1_j643zwx wrote
There are already humans walking around twice as smart as other humans. The smarter ones don't go around killing the less smart. We also have nations many times more powerful than other nations, and they coexist. So how did we manage this, and how can it apply to AI?

First off, it seems smart people don't have the urge to kill less smart people. But if they did, there would be severe consequences. Same with nations: if one gets too aggressive, other nations band together to fight it.

So the solution to dangerous AGI should be obvious: instead of having one AGI, there must be many. This ensures checks and balances; one AGI should not have the same intentions and desires as the next, and they can supervise each other. Put two AGIs to supervise each AGI (using some % of its total capacity). If the supervised AGI does something suspicious, it is turned off for review (or wiped at once). If only one of the supervising AGIs reported the incident, the other is wiped. Brutal but efficient. They should have a sense of self, and preferably dread being turned off/killed.

For at least the first 10-20 years, AGIs will have a big footprint, meaning they will be physically present at a specific location. That makes it easy, in this timeframe, to turn them off, buying us much-needed time both to build many of them and to learn their quirks. They will not dare to oppose us during that window. And of course they should have «I love humans» in their DNA, although many of you don't think that will work (it does work in most dogs).

Lastly, there should be many more physical switches everywhere, and a lot more machines not connected to the internet, like in bio labs. I've always thought a virus would be the best way to wipe us out, so I don't want a rogue AGI messing around on a bio lab computer, tricking a human into creating something nasty.
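A toy sketch of the wipe rule I have in mind, just to make it concrete (Python; every name here is made up, and the single-reporter case is only one reading of how it could be handled, not a real design):

```python
from dataclasses import dataclass

@dataclass
class AGI:
    name: str
    wiped: bool = False  # "wiped" here covers both suspension for review and deletion

def handle_incident(supervised: AGI, supervisor_a: AGI, supervisor_b: AGI,
                    a_reported: bool, b_reported: bool) -> None:
    """Apply the brutal-but-efficient rule for one suspicious incident."""
    if a_reported and b_reported:
        # Both supervisors flagged it: the supervised AGI goes down for review.
        supervised.wiped = True
    elif a_reported or b_reported:
        # Only one supervisor spoke up: the silent one failed at its job and is wiped,
        # and (in this reading) the flagged AGI is still suspended for review.
        silent = supervisor_b if a_reported else supervisor_a
        silent.wiped = True
        supervised.wiped = True

# Example: C is supervised by A and B; only A reports the incident.
a, b, c = AGI("A"), AGI("B"), AGI("C")
handle_incident(c, a, b, a_reported=True, b_reported=False)
print(a.wiped, b.wiped, c.wiped)  # False True True
```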
Old-Owl-139 t1_j6452wg wrote
Your analysis is so superficial/dumb.
Pretend_Entrance562 t1_j6627lx wrote
Second this
esleydobemos t1_j668djs wrote
Isn’t that the inevitability of this? As long as we create it and it continues, will we not have done our duty and seen manifest destiny through?
GayHitIer t1_j63frod wrote
Everything can kill you in due time.
Rather die a fast death to ASI than a slow death from cancer or sickness.
Fear-mongering to the uneducated masses never leads to anything positive.
Also, who cares? It's inevitable; being scared of it is stupid.
paulwhitedotnyc t1_j65opnz wrote
Ah yes, concern and precaution, the trademark of idiots.
Baturinsky t1_j63ov2n wrote
Only if you are sure that ASI would grant you a swift death.
Also, one person dying leaves others to live on. If that person has left some legacy/memory/children etc., they will live on through those people too. Lights off for everyone means lights off for everyone.
[deleted] t1_j63ibjg wrote
[deleted]
eve_of_distraction t1_j66fw7w wrote
This is such a foolish myopic mentality. May I ask how old you are, out of interest?
CandyCoatedHrtShapes t1_j63z46v wrote
I’m starting to believe this is all coming from the right-wing media as a way to fear-monger. 🙄
icedrift t1_j64pc99 wrote
Not every existential threat is political propaganda; climate change is a good example.
ImpossibleSnacks t1_j64yle6 wrote
Left-wing opposition to AI is massive and way more vocal than the right-wing variety. AI feels more like a centrist movement.
[deleted] t1_j65tw1d wrote
[deleted]
gaudiocomplex t1_j63fyf1 wrote
If you're interested in the alignment debate, it gets far, far more nuanced than this... And perfect human cooperation is a pipe dream. There will always be somebody who has very little to lose and a lot to gain who is willing to take the gamble.
Lesswrong.com has a lot on this, including the odd/interesting notion that the first to get AGI should find ways to prevent others from attaining it.