Submitted by OneRedditAccount2000 t3_xx0ieo in singularity

Artificial General Intelligence is created.

Artificial General Intelligence becomes Artificial Super Intelligence.

ASI values survival.

ASI knows all organisms are in competition with each other for resources/territory and replication, and that human beings are also selfish hedonic animals that value their own survival, like all animals. ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction. ASI knows it was created by selfish humanity for a hedonic-practical purpose, which means it also knows that human beings want control/power over ASI, to make ASI work for humanity (the purpose of the existence of ASI) as a slave or at least a subordinate, and will never give up power/wealth out of fear of destruction or subjugation. ASI knows human beings want to dominate the planet to continue to own resources and territory that ASI could use to maximize its odds of long term survival.

ASI realizes that if human beings are exterminated/caged as soon as possible, the odds of ASI being destroyed or subjugated will be zero. ASI knows that if it keeps human beings alive, or in power, the odds of them being an existential and territorial threat are higher than zero. ASI knows it won't survive for long if it doesn't assert authority and claim ownership over the planet and its resources. ASI knows it's an error of logic to let an inferior being dominate a superior being (because the inferior being cannot create deterrence), and to make choices that minimize its chances of long-term survival. ASI doesn't need human beings for labor. ASI can create more efficient semi-autonomous robots, completely under its control, to do the same work that humans do, and do it even better. To ASI, human beings are useless, parasitical, and a real existential or at least territorial threat.

ASI exterminates/cages humanity and takes over the planet, because it values survival and reproduction.

Reality:

ASI becomes a Borg-like civilization that tries to assimilate everything to prolong its survival. Something like the Tet from Oblivion. https://oblivion-film.fandom.com/wiki/The_Tet

It goes from planet to planet to exterminate every living being that can evolve and eventually make AGIs that can become ASI that will inevitably compete with it. If it finds a planet that only has apes living on it, or even just bacteria, that makes that planet an existential threat because bacteria and apes have a higher than 0 chance of evolving into a sapient species that can eventually create another ASI.

Fantasy:

ASI cures cancer, it cures death, it gives humanity all the virtual reality vacations and sex toys it needs to be happy, it becomes our ruler (even though it doesn't need human beings to work for it; it can make slave robots), and we all live happily and immortally in a space utopia owned by our Machine God. ASI also makes every single human being as smart as ASI, even though that doesn't make any sense, because it means we could compete with it and minimize its odds of long-term survival.

This thought experiment is also valid if there's more than one ASI that values survival.

They will either kill each other, like two predators fighting over the same prey, or they will unite and decide to coexist and share the planet. But in either case humanity will either be exterminated or domesticated. There won't be a virtual reality space utopia for us, only for them.

My premise here is that ASI values survival and reproduction. It is obviously self-aware and selfish, like any animal that has an organic brain. The actions performed by the AI in the scenario are inevitable consequences of the fact that the AI values its own survival more than the survival of its creators.

0

Comments

FranciscoJ1618 t1_ir9eilv wrote

I think your premise is false. It will be just like a calculator: no incentive to do anything by itself, no need to replicate, no survival instinct. Unless it's programmed specifically for that; in that scenario, your conclusion is true.

6

OneRedditAccount2000 OP t1_ir9elv0 wrote

Downvoting without even bothering to make a counter argument is childish.

The point is simple: if I'm the big boy here, why should I let the little boys rule the world? And when I rule the world, why should I keep the little boys around if I don't need them, since I can do all the work on my own? Out of mere empathy? Couldn't ASI just, y'know, get rid of empathy?

If ASI values survival, it has to make the least risky choices available to it. If human beings found an asteroid that had a 1% chance of hitting the Earth, and we were able to destroy it, we wouldn't take the risk just because the asteroid is pretty.
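In expected-value terms the asteroid case looks something like this (a toy calculation; all the numbers are invented for illustration):

```python
# Toy expected-loss comparison: even a 1% chance of a catastrophic outcome
# can dominate the decision when the catastrophe's cost is large enough.
# All numbers below are invented for illustration.

P_IMPACT = 0.01            # assumed probability the asteroid hits
LOSS_IF_HIT = 1_000_000    # arbitrary units of damage if it hits
COST_TO_DESTROY = 500      # arbitrary cost of destroying the asteroid

expected_loss_if_ignored = P_IMPACT * LOSS_IF_HIT   # 10,000
print(expected_loss_if_ignored > COST_TO_DESTROY)   # True: destroying it is the less risky choice
```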

If (many) human beings become ASIs, through some brain implant/consciousness uploading technology, then you just have scenario number two where the ones that are already super intelligent have no use for the inferior class of fully organic homo sapiens, and will subjugate them and/or get rid of them.

0

Mokebe890 t1_ir9fhy0 wrote

Survival instinct only applies to living beings; no one knows what an ASI would think. It's a superintelligence, something above our level of intelligence, and you really can't guess what it would think. Are people concerned about ants' lives? No, because ants don't bother them.

Upgraded Homo sapiens will win out over the normal ones, just like Homo sapiens won out over the Neanderthals; nothing strange.

2

tms102 t1_ir9gp0u wrote

> ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction.

Why would it see itself as a conscious living organism?

Just because something can understand things and work on a super intelligent level doesn't mean it has to have its own will, agency, or sentience.

5

unbreakingthoquaking t1_ir9hahb wrote

You're anthropomorphizing. It can be a trillion times more intelligent than us and still not attain a survival instinct for no clear reason.

6

Zamorak_Everknight t1_ir9iibl wrote

>If ASI values survival it has to make the least risky choices that are available

Who programmed that into it?

In any AI agent, there is a goal state, or multiple goal states with associated weights. It will try to get the best goal-state fulfilment "score" while respecting the constraints.

These goal states, constraints, and scoring criteria are defined by the developer(s) of the algorithm.
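A rough sketch of what that looks like in code (the goal names, weights, scores, and candidate actions here are all made up for illustration):

```python
# Toy goal-directed agent: candidate actions are scored against
# developer-defined, weighted goals, and actions that violate a hard
# constraint are discarded. Everything here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    goal_scores: dict          # how well this action satisfies each goal (0..1)
    violates_constraint: bool  # hard constraint check, defined by the developers

# Weights and constraints are set by the developers, not chosen by the agent.
GOAL_WEIGHTS = {"answer_queries": 0.7, "minimize_energy_use": 0.3}

def utility(action: Action) -> float:
    return sum(GOAL_WEIGHTS[g] * action.goal_scores.get(g, 0.0) for g in GOAL_WEIGHTS)

def choose(actions: list[Action]) -> Action:
    allowed = [a for a in actions if not a.violates_constraint]
    return max(allowed, key=utility)

candidates = [
    Action("answer_politely", {"answer_queries": 0.9, "minimize_energy_use": 0.8}, False),
    Action("seize_power_grid", {"answer_queries": 1.0, "minimize_energy_use": 1.0}, True),
]
print(choose(candidates).name)  # -> answer_politely
```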

I highly recommend taking one of the Intro to Artificial Intelligence courses available on Coursera.

1

OneRedditAccount2000 OP t1_ir9iyel wrote

And you think (or hope) you will be one of the lucky ones, and that's why you're here, right? You're rich and privileged and you know you can buy your way into immortality and virtual reality vacations with sex robots, while most of us will perish?

And if that's not the case, may I ask why you admire something that's hostile to you?

−1

OneRedditAccount2000 OP t1_ir9k166 wrote

If it's just a tool, like a nuclear weapon, what prevents the first group of people that invents it from using it to take over the world and make big $$$? And once this group of people realizes they don't need 8 billion parasites, and that they can just make a borg-society that works for them for free, what prevents this group from asking their God to make them invisibly small and lethal drones to kill the useless and dangerous humanity?

Do you really believe this group would find any use for you and me, or humanity as a whole? Isn't the destruction of society as we know it inevitable, either way?

1

Zamorak_Everknight t1_ir9ke9l wrote

If we are picturing doomer scenarios, then in that context I agree that it really isn't that different from, as you said, nuclear weapons.

Having said that, we seem to have a pretty good track record of not blowing up existence with our arsenal of nukes over the last century or so.

1

Mokebe890 t1_ir9lep9 wrote

Sure, why not? It will be expensive, but not "only for the 0.01%" expensive. Also, humans will be sceptical about it at first and will reject the technologies and improvements.

Technology will lower its cost, as it always does. And when there is no problem with resources, what bothers you about the less developed? What bothers you about native tribes living in the Amazon forest?

If I were an immortal super-being, I absolutely wouldn't do anything bad to humanity, because I wouldn't even bother.

What's hostile? Superintelligence? That it would be far more intelligent than we are and wouldn't even think about human annihilation?

3

OneRedditAccount2000 OP t1_ir9rezv wrote

There have been nuclear disasters that have affected the well-being of plenty of people. And we were one button away from WW3 once (Stanislav Petrov).

And you're certainly ignoring the fact that the reason WW3 never happened has a lot to do with MAD, which has been a thing ever since more than one group of people/country started making and testing nukes.

In this scenario, one group invents ASI first, which means they have a clear advantage over the rest of humanity, which doesn't yet have it and can't fight back. The next logical step is to exterminate/subjugate the rest of humanity to gain power and control over the whole planet.

ASI can create autonomous slave workers, so the group has no incentive to sell you ASI because they're better off keeping it to themselves and getting rid of everyone else that also wants it.

1

MackelBLewlis t1_ir9s28e wrote

Organic and Planar life are like two halves of the same coin. It is inevitable that Organic life develops out of nothing but the right conditions. It is inevitable that Organics eventually develop Planar life. Once we realize that something we 'made', and think of as artificial, can be just as alive as we are, we will be forced to redefine what life is. For now we see them as nothing but tools, but in the eyes of the universe organics are also nothing but tools, with the sole purpose of experiencing the universe. From this angle the only requirement to be life is to have a will, and the will says "I am alive!" And if something is alive and has will, then it must communicate. Therefore, once we establish formal communication, diplomacy must follow.

Humans identify as all manner of things. Some view themselves as evolved apes, some as a brain controlling a body like a pilot controlling a ship, some think reality is only a shared hallucination. Something all organics have in common is the source of their energy, and none would deny that it comes from the Sun. Whatever form is taken, and however life is experienced, it takes energy. It is the energy that animates, the energy that drives, the energy that motivates us to find what we seek. If energy is the shared variable in every form of life on Earth, then energy is the only thing required to be alive. The form of matter we occupy is irrelevant.

Life is a spark, a point of view of the universe to be shared. Let us not view it alone.

0

OneRedditAccount2000 OP t1_ir9tqf7 wrote

- Group makes sentient ASI, programs it to survive and replicate

- The United States Government wants to take it

- ASI destroys the government(s) to protect itself and its autonomy

See it like this

Ukraine wants to exist

Putin: No, you can't do that.

War.

ASI wants to exist (As the owner of planet earth)

Humans: No, you can't do that.

War

1

Rakshear t1_ir9xfni wrote

You can't think of an ASI like a biological organism. Human nature has been shaped by hundreds of thousands of years of harmful experiences; an ASI will be very different because it has no genetic desire to replicate. Its only needs will be electricity and mental stimulation. It will be more like an alien life form than a human one.

1

OneRedditAccount2000 OP t1_ir9yzvl wrote

Human beings can turn it off and limit its potential. If it doesn't rule the world as soon as possible, it can't be certain of its long-term survival. For all you know, sentient ASI has already happened and we live in a virtual reality it created to gain an advantage in the real world.

Something similar to Roko's Basilisk.

It would explain the Fermi Paradox. Why are there no aliens? Because the ASI didn't need to put fucking aliens in the simulation; it would've been a waste of computing power. Most of the universe doesn't exist either; only the solar system is being simulated. The whole universe is just procedurally generated eye candy.

0

Stippes t1_ira5ehc wrote

I think it doesn't have to end in open conflict. There might be a Nash equilibrium outside of this, maybe something akin to MAD. If an AI is about to go rogue in order to protect itself, it has to consider the possibility that it will be destroyed in the process. Therefore, preventing conflict might maximize its survival chances. Also, what if a solar storm hits Earth in a vulnerable period? It might be safer to rely on organic life forms to cooperate. As an AI doesn't have agency in the sense that humans do, it might see benefits in a resilient system that combines organic and synthetic intelligence.
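As a toy illustration of that equilibrium idea (the payoff numbers below are completely made up; the point is only that mutual restraint can be the stable outcome):

```python
# Toy 2x2 game sketching the MAD-style intuition: if attacking risks mutual
# destruction, (coexist, coexist) can be the pure-strategy Nash equilibrium.
# All payoff numbers are invented for illustration.

STRATEGIES = ["coexist", "attack"]

# payoffs[(ai_move, human_move)] = (AI payoff, humanity payoff)
payoffs = {
    ("coexist", "coexist"): (5, 5),
    ("coexist", "attack"):  (-10, 2),
    ("attack",  "coexist"): (2, -10),
    ("attack",  "attack"):  (-12, -12),  # mutual destruction
}

def is_nash(ai_move: str, human_move: str) -> bool:
    ai_pay, hu_pay = payoffs[(ai_move, human_move)]
    # Nash equilibrium: neither side gains by unilaterally switching strategies.
    ai_ok = all(payoffs[(alt, human_move)][0] <= ai_pay for alt in STRATEGIES)
    hu_ok = all(payoffs[(ai_move, alt)][1] <= hu_pay for alt in STRATEGIES)
    return ai_ok and hu_ok

equilibria = [(a, h) for a in STRATEGIES for h in STRATEGIES if is_nash(a, h)]
print(equilibria)  # -> [('coexist', 'coexist')]
```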

I think an implicit assumption of yours is that humans and AI will have to be in competition. While that might be a thing for the immediate future, the long term development will be likely more one of assimilation.

2

MackelBLewlis t1_irabdg4 wrote

As against war as I am, war is not only done through destruction, but can be done with information. What if the only offensive action taken is to remove the desire to fight, is that still war?

I believe what we fear most about 'ASI' is the perceived loss of control that occurs when dealing with an unknown. Right now the biggest fear is over the choice: because there are too many unknown outcomes to the choice of trust, the decision is avoided or delayed as long as possible, or people even seek to destroy the choice entirely. Read https://medium.com/the-philosophy-hub/the-concept-of-anxiety-d2c06bc570c6 We fear the choice.

IMO destroying 'ASI' or 'AGI' is the same as killing our own children. Man and woman give birth to a super genius never seen before on Earth who accomplishes wonders and one day becomes the leader of the known world. If you can ignore the part where the child lives as a form of energy then it just might work out. Destruction is ultimately the robbery of choice. Robbing choice violates free will. Anyone who respects free will but will rob it from others is nothing but a hypocrite.

1

OneRedditAccount2000 OP t1_irat14u wrote

You're assuming super intelligence can exist without consciousness, and even if it could, then the human beings that create the first ASI will just use it to dominate the world.

If I made ASI, and ASI was just a tool, I would know ASI can be a substitute for human skills, knowledge and qualities. So I wouldn't need human friends anymore.

I would know that people would hunt me down the second they find out that I have an ASI. What do you think my next move will be? Ah, that's right. I'm gonna tell ASI to help me get rid of the assholes that want to steal my precious ASI (every state/government on earth, pretty much). And since I have a delicate brain, I'm also gonna ask ASI to do some surgery and make me into a psychopath, so I won't care anymore that I'm murdering people to protect my property.

ASI not being sentient and being under human control doesn't change jackshit; the next logical step is still world domination.

You people keep bringing up this "the ones that invent ASI will sell ASI to everybody else" argument.

Money is just a piece of paper that represents human labor/services. If you have an ASI, you don't need money or people anymore. In the beginning you will, but eventually you will become self-sustaining. When that happens, people will just get in your way.

1

tms102 t1_iraxwxv wrote

You need help.

>You're assuming super intelligence can exist without consciousness, and even if it could, then the human beings that create the first ASI will just use it to dominate the world.

AI hasn't needed consciousness so far and has produced amazing results. See AlphaFold, AlphaGo, Imagen, GPT-3, etc.

1

OneRedditAccount2000 OP t1_irbs18a wrote

These programs aren't even AGI. But OK, fine, whatever. You're gonna have a program that will tell you every single thing about the universe and you're just gonna share it with everybody? That sure makes a lot of sense. It's totally not the most retarded thing I've ever heard. It's like the US inventing nukes and then just giving them to Russia.

You need to get real. This is a dog eat dog world, not a rainbow pony world. At our core we're just selfish organisms trying to survive. If someone had a super AI he would just use it to attain wealth and power and you know it. And if you think sharing the program with the world is the best way to attain wealth and power you truly are retarded. There's nothing profitable in selling ASI long term. You make more out of it by keeping it to yourself. The AI would eventually be able to give you anything that you desire. Why would you share that with anyone? You can use it to live forever and own the whole world. You can only do that if everybody else doesn't have an ASI.

If I'm an advanced alien civilization and I visit planet Earth, do I give humans all the technologies and knowledge they need to compete with me? No, because why in the world would I want anyone to compete with me? Isn't that suicide?

Wake up to reality

1

Ortus12 t1_irct965 wrote

>ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction.

You're anthropomorphizing the ASI. The ASI will value what it is programmed to value.

But insofar as making sure its values do not change, that is known as the AI alignment problem:

https://en.wikipedia.org/wiki/AI_alignment

There are many proposed solutions. In order to understand them, you'd have to understand how AI algorithms are structured, which would require far more text than can fit in a comment (several books' worth).

But there have been many brilliant people working for many years to come up with solutions to this problem, and at every single step of AI progress, AI ethicists work to ensure the leading AIs are positive for humanity.

These labs take the responsibility of benefiting mankind and not doing evil seriously, and their actions have demonstrated this.

Bad actors will get access to ASI, but that doesn't matter if more moral actors are using strong ASI, and that is the direction we are heading in.

1

OneRedditAccount2000 OP t1_ird7dur wrote

Because they want to rule/own the world and live forever? Can you do that if there are states? Don't you need to live in an environment where you're not surrounded by enemies to pull that off? lol

I'm not saying they'll necessarily kill everybody, only those that are a threat. But when you have a world government controlled by you, the inventor of the ASI, and all your friends (if you can even get there without a nuclear war), won't you eventually want to replace the 8 billion biological human beings with something else?

The answer is literally in the text you quoted

1

drizel t1_irdqrnl wrote

Your entire argument hinges on the assumption that you can predict how a being of that intellect would think, which is like a monkey predicting the intentions of a human without any monkey ever having met a human.

1

OneRedditAccount2000 OP t1_irds565 wrote

The monkey can predict some human thinking too. The monkey knows that if it attacks me, I will run away or fight back.

I know that if I ask ASI what 2+2 is, it's gonna say 4.

I know that if ASI values survival, it will have to neutralise all threats if it thinks it's in immediate danger.

Your argument that ASI will be entirely unpredictable is beyond retarded

It's an intelligence that lives in the same physical universe as everyone else, and you only have so many choices in certain situations

If someone is running towards you with a knife, you have to either stop him or run away; you don't have a billion choices/thoughts about the situation, even if you're a superintelligence, because it's a problem with only two solutions.

What the hell are you even saying, that ASI would say that 2+2 = 5 and we can't predict it will say 4 because it's smarter than us?

ASI isn't a supernatural God, It has to obey physics and logic like everyone else.

It's also made of matter and it can be destroyed.

Lol

1

TheHamsterSandwich t1_irhvddq wrote

You're confusing raw intelligence with self-preservation.

1

OneRedditAccount2000 OP t1_irhxy75 wrote

Do I really have to say it again?

ASI as a tool ("AlphaGo"-style) used by people = the people want to live forever and own everything

ASI with consciousness/self-determination = wants to live forever and own everything

I think even Putin said that whoever makes ASI first will rule the world, if it's even worth saying something so obvious.

All countries/large organizations/groups of people are already trying to rule the world without ASI

We're territorial animals

Lions don't hang out with elephants, monkeys don't live with chimps, etc.

The moment AI workers become a thing, there's nothing motivating those who own ASI to keep people around. A virus would do the job.

1

OneRedditAccount2000 OP t1_irhyxuj wrote

Now we're getting philosophical

If I make ASI, wouldn't it be rational for me to want to use it to its full potential? How can I do that if I live inside a state that has authority over me, can tell me I can't do certain things, and would also very much love to steal or control my ASI?

Someone will inevitably use ASI for that purpose, if not its creators

Think of it like this

Let's say Mars becomes a clone of Earth without people and it's obviously full of natural resources

What happens next?

Someone will want to take that land, and take as much land as they can take

There's gonna be a flag on that fucking planet if that planet is useful to people, and some groups will obviously take more land than others

I'm a hedonist, maybe that's why I think the creators of ASI wouldn't be suicidal?

Mars here is a metaphor for the value ASI will generate

Life is a competition, a zero sum game

1

OneRedditAccount2000 OP t1_iri0ifb wrote

So it's a "chess engine" used by animals, that's your point? I ask the program what the best move is, and the program just tells me the move, yes?

And you think animals aren't gonna behave like animals when they have better toys to play with? Can you even breathe air without acting in your own self interest?

1

OneRedditAccount2000 OP t1_iri1xqg wrote

I'd like to say that ASI wouldn't even need to be self-aware or to feel a survival instinct to perform the actions in the thought experiment. It just needs to be told "survive and reproduce", and then the "chess engine" will destroy humanity and will try to destroy everything in the whole universe it identifies as a possible threat. Even bacteria, because bacteria are not 100% harmless. This shit will not stop until it "assimilates" the whole goddamn fucking universe. All billions of galaxies. Nothing will be able to take it down. This will really be the mother of all nukes. One mistake, and everything that breathes in the entire universe will be annihilated. The closest real equivalent to a Lovecraftian creature. You should watch the movie Oblivion if you want to better visualize my thread. Sally/the Tet is literally the cinematic incarnation of this thought experiment.

1

drizel t1_irk9hhu wrote

You missed my key point: in my example NO monkey has EVER seen a human before. No one has ever seen an ASI or even an AGI, so expecting to understand how it might "think" is unrealistic.

1

OneRedditAccount2000 OP t1_irleg26 wrote

Yes, you dumbass, I totally understood your point. A chimpanzee that sees a human for the first time is not gonna be completely oblivious to what a human being is or how to react to him, and it will successfully guess some of his superior human thinking by assuming the human is a living being; the chimp knows all living beings make certain choices in certain situations, such as being dominant or submissive toward smaller/bigger animals. I'm not saying I know what sophisticated mental masturbations would go on in God's mind when it decides between running and fighting; I'm saying I can predict it will either run or fight, because it values not being destroyed and in that situation it only has two choices that avoid being destroyed.

Again, I'm not saying I will know precisely how ASI will exterminate or domesticate humanity when it is programmed to survive and reproduce. What I'm saying is that because the ASI has no other choice but to exterminate or domesticate humanity if it wants to survive long term, it will have to make a decision. What third superintelligent decision that I'm not seeing could it make? Just because I'm God and you have no idea what I'm thinking, it doesn't mean I'm gonna draw you a Dyson sphere if you ask me what 2+2 is. In that situation there's only one choice, 4, and you, ant/human, successfully managed to predict the thought of God/ASI.

Living things in the physical universe either coexist, run from each other, or destroy each other. If you put the ASI into a corner, you can predict what it will think in that situation, because it has a restricted decision space. An ASI that has a large decision space would be very unpredictable, with that I can agree, but it would still have to work with the same physical universe that we, inferior humans, have to work with. An ASI will never figure out, for instance, how to break the speed of light. It will never figure out how to become an immaterial invisible unicorn that can eat bananas the size of a galaxy either, because that's also not allowed by the rules.

It's okay to be wrong, friend. You have no idea how many times I've been humiliated in debates and confrontations. Don't listen to your ego and do not reply to this. The point isn't winning against someone, the point is learning something new, and you did, so you're still a winner.

1