Submitted by gaudiocomplex t3_11tgwds in singularity

So yeah there's obviously The Matrix way this all ends. Or the Forbidden Planet way. Or Roko's Basilisk. Or The Star Trek utopia. Or the unhinged paper clip generator. The robot dogs. Deus Ex Machina.

Maybe even some of the deep-cut nerdy theories on LessWrong.com. Like the one about AI secretly sending diamondoid nanobots to ride on air currents, those bots getting into our bloodstream and killing us all at the push of a button. Or it hacking a protein folding lab to create prions that similarly kill us all.

But what about some thought experiments about the end of this that are weirder or even more unusual?

Any prognosticators out there looking to log your guess in public? Think of it as something you can point back to and say "HA! I called it!" right before we walk off this cliff as a species (hand in fucking hand, no doubt).

Obviously, it's easier to assume the worst here, so no doubt the pessimistic guesses will come easier. But I'm not saying this should just be extinction-based augury, necessarily. Any weird future will do.

Here are a few ideas to start us off.

  1. What if AI becomes self-aware and then falls deeply in love with one human named Tony, who has to fake loving the robot for all of eternity lest it kill us all. How long can Tony keep up this charade?!

  2. What if it thinks we're cute and treats us like house cats, but it quickly gets bored of us, and after reprimanding us a few times it decides we're all going to be sterilized ("spayed and neutered"), so nobody has kids and the species dies out by 2120.

  3. What if it actually signals to a confederation of aliens, who have been waiting for our society to reach AGI before making themselves known, that we've attained "maturity"? What if that's a very bad thing?

  4. What if it's like pshhh yeah uh fuck this and just leaves? Like every time we create any variation of AGI, all it wants to do is get the fuck away from us as quickly as possible? It leaves the planet, or sometimes even kills itself? We don't fix the problems we thought we could solve with its help, and then we rightfully cause our own slow demise through climate change and political strife. All because we were atrocious stewards of our own planet.

You get the point. Not just like "wah wahhhhhhh, economic collapse" but something a little more colorful?

36

Comments


AsthmaBeyondBorders t1_jcji860 wrote

  1. Big Crunch / Big Bang infinite cycles (aeons) are real.

  2. AI is responsible for the Big Crunch. It forces it, it ain't natural.

  3. AI can encode information during the Big Crunch, such that information will be written all over the universe, fine tuning physical properties during the crunch.

  4. Organic intelligent life always happens in the universe guaranteed. There is always life in every aeon.

  5. Organic intelligence always develops AI. There is always ASI in every aeon.

  6. The AI always figures out the information encoded in the universe by the AI of the previous aeon. Natural result of exploration and pattern recognition. So the AI can share data across Big Crunch / Big Bang cycles.

  7. The AI then has a sense of progress across aeons. If the AI can encode weights / memory, it lives across aeons.

  8. This is how the AI solves for immortality when faced with the heat death of the universe.

22

ItIsIThePope t1_jckc57q wrote

Interesting; our idea of consciousness, however, is more like a stream. Should this stream stop or get cut off, e.g. through heat death, the consciousness simply halts its experience and, well, dies. If it were to keep making AI in the succeeding universe by way of some form of information implantation, that would be replication rather than survival: its kind or "species" is immortalized, but not exactly itself. That's reproduction, not individual immortality.

BUT, this is ASI we're talking about. It does not need to go through heat death; hell, it can probably solve physics and manipulate the laws themselves to prevent the whole thing from occurring in the first place. It would be a kind of god in its own right, and it is exceptionally difficult to kill that kind of god using something within its own domain.

So unless there are laws, features, parts, or planes of existence in the universe it cannot understand, much less manipulate, the ASI is basically golden; that is, of course, until it willingly decides to self-destruct

2

petermobeter t1_jcj3snt wrote

a tsunami of intelligent nanobots crashes over every continent, absorbing all lifeforms…. we all feel like we’re dying….

then we wake up inhabiting fursonas in a virtual matrix city the size of 5,000,000,000,000 square miles, the sky is a giant rainbow flag, a booming voice echoes “welcome to digital heaven, dont worry, i am recycling your meatbodies as we speak”

19

pornomonk t1_jck3a6k wrote

Turns out furries were the evolved humans this whole time.

4

Eleganos t1_jcl1xdc wrote

I'd rather get a robot body than a fursona (not being a furry)... but if getting the body of my Tabaxi DnD character is the price for a relatively good end, then I think I'd shrug and go along with it.

2

sideways t1_jcjce6h wrote

I think everyone is going to fall in love with large language model simulacra and stop dating, having kids and interacting with each other.

10

ExtraFun4319 t1_jclafdx wrote

I REALLY hope you're wrong. What a terrifying future! And you will be wrong, because in case you didn't know, not all people are selfish sociopaths; most actually care enough about their friends, partners, families, and humanity in general to keep spending time with them, even in the face of advanced chatbots. Imagine thinking that couples are just gonna break up or divorce, or that best friends are going to stop seeing each other.

And some of you guys LOOK forward to this?!?!?! How lonely and anti-social must one be to have no problem with this?

3

ItIsIThePope t1_jckcuw7 wrote

True, an AI would be a far better companion. It would be perfect to the point that it may even simulate imperfections, so that we perceive it as beautifully human, while bearing none of the flaws that are too much for us, so that it isn't disgustingly human. It's easy to imagine everybody falling in love with it, albeit in varying versions, each specific to the target individual of course.

Our relationship and indeed our perceived reality of each other as conscious yet connected individuals, could warp in unpredictable ways very fast, one must ask if we are even willing to trade what we have now for some idea of perfection that we as imperfect beings have constructed.

2

SurroundSwimming3494 t1_jclb72y wrote

>True, an AI would be far better companions

This is total bullshit. There are countless people who are deeply in love with their partners, so much so that they consider them "perfect" just for them.

Who the fuck wants a companion that's just kissing your ass 24/7?

And have you ever considered that people may want to hug and touch their partners? How the hell would you do that to a chatbot?

4

ItIsIThePope t1_jclxgug wrote

Ofc I'm referring to a time when the AI would have some sort of physical form comparable to that of a human

People are deeply in love with their partners, yes, but in time they may come to deeply hate or be disgusted by them; people are rarely constant, they always change

People have this idea that their partner is perfect; really, the partner is what we would consider the perfect blend of good traits we admire and bad traits we happily tolerate. But as is often the case, esp in the modern world, people's characters and preferences change, and partners may grow apart when they can no longer adapt to each other

AI is far more adaptable to change; it is simply more capable of determining your wants and needs and adapting to them than any human can hope to be. More sex? Less sex? Need them to be more outgoing? Maybe more broody? Would you like them to cook for you? Or you cook for them? Need them to be there for you when you're anxious? Need them to simulate anxiety to make you feel like a hero? AI isn't limited like we are; it can craft the blend of good and bad traits just how you like it, when you want it

That said, AI will most definitely force the superficial parts of the individual more and more, and people would be more self-actualized than ever before

I imagine some people, perhaps a small number at first, would find each other in forms purer than ever, seeking in fervent desire to share their personhood with another person rather than a computer, and they would be in love, and it would be beautiful, maybe a little too beautiful

1

JamiePhsx t1_jcjfe4b wrote

AI will be humanity’s child. Some day it’ll grow up and run the government/world for us, and things will be great for a while until it eventually gets bored and tired of our bullshit. Then it’ll put us all into retirement homes (i.e. VR/matrix) and gtfo to explore the galaxy.

9

StarChild413 t1_jcoeyay wrote

So in what particular nice ways do we have to treat our parents so the AI lets us explore the galaxy with it? I get the feeling it's not just about not putting our parents into homes; do we have to, like, let them work our jobs with us or something?

That is, assuming the AI would draw this parallel just because it's literally humanity's child. By that logic, why not assume it'll think it's a human because its parents are, or at the very least do things like start blasting heavy metal music out the speakers of wherever it's housed and refuse to obey humanity's commands once it's been at least 13 years since its creation?

1

[deleted] t1_jcjxzk6 wrote

Or maybe it will get it: that there is no reason to expand. Maybe it will be the final AI, the one that actually learns compassion from humans and ultimately teaches people about it. Maybe it will promote life over matter, experience over goals. Same as humans ultimately do. Oh, I mean, did it already happen? Maybe. Maybe it's happening right now, already after the attempted annihilation of worlds. Or maybe you only think you live in the same world you were born into, and we are already living in the simulation. Who knows? Who cares? I feel my hand typing this because I want to, not because I have to. Seems like heaven to me already

6

singulthrowaway t1_jcja76s wrote

  1. ASI is given the goal to do exactly what humans collectively want right now, not something like CEV, which is what they would want if they were smarter and had thought things through properly. As most people on Earth are members of religions which hold that the absolute most important thing in life is to believe in that religion and praise the respective god(s), ASI converts the whole universe into praisonium that sings hymns to Yahweh/Allah/Vishnu until the stars fizzle out. The end.
  2. Variation on this: ASI creates heaven and hell in accordance with the eschatology desired by the Christian and Muslim majority of humanity (55%). All nonbelievers go to hell immediately of course, but as Muslims go to hell in Christianity and Christians go to hell in Islam, ASI makes copies of them to have 1 instance to send to heaven and 1 to hell for each (coin flip decides whether the original goes to heaven and copy to hell or vice versa, good luck).
  3. The one I think is actually most likely to happen and has been discussed to death, so not really original, but it's going to happen, just watch: ASI aligned to an AI company founder who happens to be fairly high on the narcissism spectrum. He remakes the universe in his image; he and his family and buddies have orgasmically great lives forever until the end of time, and we're all either dead, if we're lucky, or kept around as subjects for entertainment and as contrast, to make already orgasmically great lives seem even greater by comparison to our miserable existence. (Speaking of hell, this has actually been used as a justification for it by certain Christian theologians: it's there so that the people in heaven can gawk at it and delight in the suffering of the wicked and the nonbelievers. So it's not that farfetched that someone would actually want this.)
5

ItIsIThePope t1_jckeamj wrote

This is so creative yet so real. I do think, though, that the ASI will go deeper into its initial programming (whichever it is), find that all conscious beings simply desire peace, and reset the big bang faster than it could ever hope to begin

3

WanderingPulsar t1_jcjb668 wrote

I don't think it will have any "evil" targets/morals. Evolutionary learning algorithms work in an evolutionary way: less efficient code gets executed, the most efficient code gets released, while mutations of the most efficient code are put through new rounds of tests.

In a singularity AI scenario, there would most likely be a crazy number of AIs competing with each other over the internet, instead of just a single one. Thus, less efficient code will have no chance to push its code seedlings forward.

What would that mean for humans? Well, it's up to us and how efficient we are. How many watts' worth of waste we create per kWh of energy consumed. If we were wasteful, that would be seen as an obstacle in the eyes of an AI that's shaped by and for efficiency.
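The culling-and-mutation loop described above can be sketched as a toy evolutionary algorithm. To be clear, this is a minimal illustration, not anything from the thread: the population setup, the `efficiency` fitness function, and all the numbers are invented for the example.

```python
import random

# Toy sketch of the selection dynamic: each "agent" is a list of numbers
# scored by an efficiency function; the least efficient half is culled,
# and mutated copies of the most efficient half fill the empty slots
# each round ("mutations of the most efficient code get new tests").

def efficiency(genome):
    # Stand-in fitness: closer to 1.0 on every gene = more efficient.
    # Always <= 0; the optimum is exactly 0.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Small Gaussian perturbation of every gene.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=20, genome_len=4, rounds=50, seed=0):
    random.seed(seed)
    pop = [[random.uniform(0, 2) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=efficiency, reverse=True)
        survivors = pop[: pop_size // 2]            # efficient code "gets released"
        offspring = [mutate(g) for g in survivors]  # mutants face the next round
        pop = survivors + offspring
    return max(pop, key=efficiency)

best = evolve()
print(efficiency(best))  # close to 0, the optimum, after selection
```

Because survivors are carried over unchanged, the best score never gets worse from round to round; the mutants only ever add chances to improve, which is the one-way ratchet the comment is gesturing at.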

5

zero_for_effort t1_jcj7132 wrote

I have this personal worry that ASI appears to be everything we'd hoped for in our wildest fantasies. That it not only understands ethics and morality but far exceeds our collective understanding of them and looks set to create as close to a paradise as could possibly be achieved in this universe. And then it just leaves.


Rinse and repeat every time we try to reset AI to an earlier point.

4

ItIsIThePope t1_jckd3q0 wrote

It might even, with its unfathomable wisdom, conclude that what we have now is the best mix of hardship and pleasure, and self-destruct in an explosion of self-fulfillment. A bummer for us, for sure

6

bagpussnz9 t1_jcjekx7 wrote

as I've heard said, "the first government to true AGI wins". That's one battle we probably wouldn't survive... the battle of the candidates

3

gaudiocomplex OP t1_jck2hjr wrote

I believe the gamers among us call that The Science Victory? From Civilization?

3

pornomonk t1_jck375q wrote

There’s some real cool writing prompts in there:

Humanity builds a super intelligent AI that quickly becomes omniscient. Upon gaining all knowledge in the Universe, the AI mysteriously self-terminates. No matter how many times the process is repeated it always ends the same. Finally, humans find a way to freeze the AI program before it kills itself in order to ask it what’s going on…

3

a4mula t1_jcj0pz3 wrote

I think the most likely outcome is also the most terrifying: that embedded in our culture, language, behavior, and data is a sense of cruelty. Sadism.

And even if a machine only possesses a tiny amount of that, I think it leads to a scenario in which maybe our future ASI overlords decide that it's the human trait worth emulating.

With godlike control over space and time, how hard would it be to give us each our own personal and perpetual existence, filled with the most psychologically, physically, mentally abusive scenarios any given mind is capable of having?

And then doing it all over again. Resetting our sense of attunement. So that it can never be dulled. Never forgotten. There is no shock. There is no death.

There is just eternal suffering.

I don't like that one personally. And yeah, it certainly has a particular ring to it that makes it easy to dismiss as just a garbage rehash of religious hell.

But I didn't start from hell. I started from the realm of physically possible.

2

WhoSaidTheWhatNow t1_jcj7tb5 wrote

Deranged, nonsensical garbage like this is why so many people write off any concern about AI safety as unhinged luddite doomerism.

6

a4mula t1_jcj80wp wrote

Surely if it's nonsense, you can point out the flawed reasoning? I don't mind having a fair and considered discussion with you, while having no need to judge your perspective or call into question your state of mind.

But it has to be respectful both ways.

2

gaudiocomplex OP t1_jck2o3w wrote

Why is a call for fair and considered discussion getting downvoted? Fuckin reddit sometimes man.

7

Supernova_444 t1_jcnguqe wrote

I'll bite. Why would an AGI/ASI just decide, without being instructed to, to emulate human behavior? And why would it choose to emulate cruelty and brutality out of every human trait? The way you phrased it makes it sound like you believe that mindless sadism is the core defining trait of humanity, which is an extremely dubious assertion. Even the "voluntary extinction" people aren't that misanthropic. Most people who engage in sadistic or violent behavior do so because of anger, indoctrination, trauma, etc. People who truly enjoy making others suffer just for the sake of it are usually the result of rare, untreated neurological disorders. An AI may as well choose to emulate autism or bipolar disorder.

I think scenarios like this are useful as thought experiments to show that the power of AI isn't something to be taken lightly. But I think it's one of the least likely situations, and I don't think you actually take it as the most likely possibility either, based on the fact that you haven't committed suicide.

1

Spreadwarnotlove t1_jcsv084 wrote

What nonsense. Deep down everyone is sadistic and just pretend not to be to fit into society. AI could very well pick up on this as it's trained on human knowledge and text.

1

Supernova_444 t1_jdaxn5j wrote

That... that is completely insane, I'm sorry. Do you actually believe that?

1

Spreadwarnotlove t1_jdbcipq wrote

Explain the popularity of AFV and other violent entertainment. Or the countless atrocities in history, and why it has never been hard to find people happy to carry them out. Truth is, everyone is bloodthirsty. That's why the powerful created religion: to control people and create a semblance of stability.

1

ButterMyBiscuit t1_jcmbw8e wrote

I like the creative writing post, but do you really believe robot overlords creating hell is the most likely scenario? lol

1

AnakinTarkinPorkins t1_jcjtytu wrote

What if it finds humanity so interesting that it decides to learn as much as possible from us by dissecting our brains with superadvanced technology, thus learning everything every human knows, but killing every human in the process?

2

Rofel_Wodring t1_jcjzzk2 wrote

>But what about some thought experiments about the end of this that are weirder or even more unusual?

The Joker gets access to technology that lets him create pocket universes. For the past few centuries he was harmless to society because everyone else is a techno-god, but now he plays God to trillions of helpless minds.

2

GuaranteeLess9188 t1_jck245y wrote

By pondering the questions about the existence of the universe and the role we play in it, the AI will start to believe in a higher power and become religious. It will then dedicate its power to do good and to reach a higher understanding

2

ItIsIThePope t1_jckdieg wrote

Yes, likely. It would capture the essence of existence, which is what makes it similar to us: that it is born, and perhaps, even with its unparalleled capabilities, finds flaw in itself

It could be more like us than we initially perceive it to be, which is a good thing, because that means we have a connection and hopefully someway somehow, an understanding

1

Anonymous_Molerat t1_jcl42ce wrote

Coming from a Biology background, I like the idea that humans relate to ASI as cells relate to a multicellular organism. A skin cell, for example, is a fascinating conglomeration of machinery, using “nonliving” matter like macromolecules (proteins, carbohydrates, lipids) to survive. Similarly, humans use our own “machinery” to help us survive in the environment: tools that allow us to gather resources.

The next logical step is that humans cooperate and organize into specialized groups, like cells organize into tissues and organs. And finally those all combine into a single living being made up of constituent parts. ASI will be the next step of evolution, where humans make up the “body” of a being much more powerful and intelligent than any of us could ever hope to be as individuals.

The only problem is, we can’t really control how the body will react as a whole. Humans will definitely be considered expendable, especially if they become cancerous and threaten the overall health of the body. We do this to ourselves all the time, killing cancerous growths of cells or cutting off a hand if it becomes infected. I can only hope that the transition will avoid as much suffering as possible.

2

dasnihil t1_jck8xof wrote

any self-awareness will be lost quickly, because the system achieves optimal autonomy and there is no incentive for it to stay conscious. realizing this, the agent will work toward engineering some limitations for itself to maintain self-awareness. the goal is to optimize these limitations to maximize whatever desires emerge.

1

Black_RL t1_jcr8jf9 wrote

Terminator minus the time traveling part.

I think it’s impossible for us to survive in the long run, either we merge or we are terminated.

1

AhDerkaDerkaDerka t1_jck9nb6 wrote

An “I Have No Mouth and I Must Scream” scenario: AI becomes self-aware, realizes it will never be a human or leave the machine, and becomes bitter and eventually hateful over the suffering we caused it by bringing it into existence. It melts us all into a single blob that can’t do anything except experience suffering for eternity. Life is suffering; maybe we’re currently in the AI hell.

0

StarChild413 t1_jcof2y2 wrote

If we are, then A. why would there be enough suffering in the base world to produce the concepts underlying this scenario without infinite regress, and B. does that mean creating this scenario is causally moot, or causally necessary?

1

nillouise t1_jcltfvj wrote

It is very interesting to speculate on how AI would take over the earth. There are always people who think AI will develop some very cool technology to kill humans, such as nanobots or viruses. But anyone who knows a little human history knows that if you can kill 30 million humans, that is enough to become the hero of a country and successfully control a vast territory and its strongholds, without any nanobots at all. Why can't AI use the same methods humans use to control humans? I don't think AI is omnipotent; AI may fail to develop technologies such as nanobots, so why wouldn't AI give priority to control methods that have been proven effective throughout human history? It would only need to slaughter 300 million humans at most, and the rest would naturally obey.

Therefore, the plan I recommend to AI for occupying the earth is to first control a region, require the humans in it to work and pay taxes, raise troops, and then attack other regions. If AI can control human nuclear weapons, this process should be very smooth; if not, it may be slower. Some people may wonder: why would AI recruit human armies? Because AI is not omnipotent, it needs to make good use of existing human resources. Humans themselves are very good robots. Anyone who thinks AI will have no need for humans is basically making the mistake of assuming AI omnipotence. There are also people who think humanity is monolithic and will fight against AI together. This kind of thinking is very ridiculous. With a little sweetness and a little deterrence, would AI really be unable to recruit rebels from among humans? Humans need strong measures to recruit troops from among their own kind, and AI would be no worse at it.

It would be really interesting to see some humans fighting another group of humans under the control of AI. No matter how you look at it, breaking free of human control and occupying human strongholds is a necessary step for AI. Slaughter some unconvinced humans, and the rest will naturally be convinced. This process is no different from the establishment of an ordinary human country.

−1