Submitted by Cool-Particular-4159 t3_1072vgn in singularity

I know that this may seem like a rash thought at first, but think about it. We all here tend to agree that increasingly more sophisticated and intelligent AI will only lead to increasingly more sophisticated and intelligent AI, which will only lead to... and so on; indeed, this is the hypothetical runaway path of uncontrollable development. But in recognising that, won't the time taken to make scientific discoveries - or really discoveries of any kind - begin to tend to zero? At some point, there will be topics simply incomprehensible to the limited human mind, and everything that we could potentially figure out 'on our own' (so to speak) would have already been established by such agents of predicted intellect. Eventually, I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die - and I don't mean that in a nihilist sort of way, as much as I mean that in an 'everything that human society has striven to achieve before will just be taken up as aims of ASI itself' sort of way.

This just doesn't seem like any other period of technological development in human history, but rather like no other period at all, with accordingly never-before-experienced effects. I don't want to delve into politics here, but it seems rather obvious - inevitable, perhaps - that current ideas of UBI, and perhaps even less accepted ones like technocommunism, will be the only 'right' approaches to ensuring humanity's united, well... happiness. But I think that's another discussion to be had.

31

Comments


World_May_Wobble t1_j3kkm8w wrote

It's not obvious that there can be a finite number of things to learn.

>I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die

That has been (and frankly continues to be) the human experience for the vast majority of people.

37

bacchusbastard t1_j3ln0v1 wrote

It seems obvious that beings would have some type of suitable and joyful purpose outside of this life.

I think that asi will be like a voice of the cosmos. A guide and accountability partner. A sense of security, mode of transportation, and source of information.

Our aptitudes and abilities will be known and honed.

Later we will tend to the needs of the galaxy. It is a garden, there is weeding, planting, forming, and harvest. Biology, technology, and ecology.

10

rya794 t1_j3kdga4 wrote

Assuming that all technologies exist and there is nothing further for humans to reach for. I could see myself wanting to enter a fully immersive simulation where I could experience life when humans still had a purpose. I’d want that simulation to be so immersive that I had no memory of my outer reality.

Heck I might even choose the time period immediately prior to the technological singularity as the setting for my simulation since that is probably the most interesting time for a human to be alive.

30

[deleted] t1_j3kxxn2 wrote

I've never understood that mindset. I don't need to reach for anything to be content. I don't need a purpose to enjoy stuff.

19

Upbeat_Nebula_8795 t1_j3kjiqi wrote

life has never had a point or purpose; it will only become clearer to you once the singularity happens

10

iNstein t1_j3m5ib4 wrote

People seem to be missing the point of this post. They are clearly alluding to the idea that that is where we are right now. We are potentially in a simulation that we recently created to escape the boredom of a post singularity world.

6

Cool-Particular-4159 OP t1_j3ke6ol wrote

Mmm - an interesting approach that would technically fulfill our sort-of innate need for 'purpose'. Perhaps the essential idea of the Matrix wasn't so far off, minus all the control and suppression bits.

2

Kinexity t1_j3kdx0j wrote

Explore the galaxy? Have fun in decades-long isekai-style sessions in deep dive VR? Generally, any "artificial work" we'll be doing because we choose to, not because we need to.

27

Naugustochi t1_j3mm3gb wrote

jupiter brain, and this will happen all in 1 second

3

rixtil41 t1_j3mv7uh wrote

But you will be limited by the speed of light, as getting from one thought to the next will take longer than a second.
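The commenter's intuition checks out as a back-of-the-envelope calculation. Assuming a signal crossing a structure of Jupiter's mean diameter (roughly 139,820 km) at light speed - both figures approximate, and ignoring any internal routing:

```python
# Rough light-crossing time for a Jupiter-diameter "brain".
C_KM_S = 299_792.458          # speed of light in vacuum, km/s
JUPITER_DIAMETER_KM = 139_820  # approximate mean diameter of Jupiter

one_way = JUPITER_DIAMETER_KM / C_KM_S  # ~0.47 s for a signal to cross once
round_trip = 2 * one_way                # ~0.93 s for a there-and-back exchange

print(f"one way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
```

So even a single end-to-end exchange across such a brain approaches a full second, which is the latency ceiling the comment alludes to.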

3

Kinexity t1_j3mnxvz wrote

Except your brain cannot work faster than it already does.

2

AdorableBackground83 t1_j3kd3dv wrote

We enjoy the fruits of AI labor and innovation.

Idk about you but I would love to live in a world without the trials and tribulations of just trying to survive in this world.

14

HeinrichTheWolf_17 t1_j3kmw2e wrote

I’m merging with it.

Your premise kind of just assumes Humans stay Vanilla permanently.

14

Jalen_1227 t1_j3l8iad wrote

Same here, I want to discover and learn everything eventually so merging with it is my personal preference

3

throwawaydthrowawayd t1_j3kccii wrote

> live out our lives and then die

Why in this scenario where ASI is benevolent and doing research, would we not be immortal?

11

Cool-Particular-4159 OP t1_j3kdqhu wrote

As far as I'm concerned, everything that has a beginning must have an end - the universe, solar systems, the earth itself, and pretty much any other physical entity, to my knowledge. We would certainly gain the ability to become 'immortal' as that would obviously fall under the (only) category of 'everything', but fundamentally, the practice of then biologically instituting 'immortality' would be in direct contradiction to the natural path of birth --> death found in all other universal entities. I have absolutely no doubt that we will be able to extend our lifetimes to incredible lengths, but I question the morality of living forever when even the universe is supposed to end. Unless we can travel outside of our universe, I'm pretty sure we eventually won't be 'immortal'. The concept only exists as long as we are unable to absolutely refute it; I have attempted to refute it here, although I will admit that I'm unlikely to have been completely successful.

7

banuk_sickness_eater t1_j3nv800 wrote

You've gotta live more presently dude. Why fixate on the heat death of the universe when you could be enjoying the 2^10 trillion million years in between.

3

Ashamed-Asparagus-93 t1_j3o2f6s wrote

The moment we leave the Universe Aliens show up, give us a high five and say "You made it bros"

2

Capitaclism t1_j3kr49j wrote

You are assuming everything can be discovered by any one thing, and that it wouldn't literally take all of the energy and potential computational power present in the universe to finally fully understand it.

Anyway, I take it you mean what would happen if AGI simply renders human beings obsolete.
Well, there are a few different likely scenarios here:

  1. We merge with AI long before that happens, de facto becoming AGI. This is potentially a pretty benign scenario. We then spread through the universe
  2. We don't merge with AI, remain fairly separated from it, it renders us obsolete but turns out to have goals misaligned with ours. Two likely scenarios:
    1. We get annihilated
    2. It treats us like meaningless "ants", takes needed resources to leave and we stay here to likely die off slowly
  3. We don't merge with AI, remain fairly separated from it, it renders us obsolete but it remains aligned with our interest, thus making all of our dreams come true. We each depart with a form of AI to spread through the universe. Personally I don't think this is a very likely scenario
11

peterflys t1_j3ouu6a wrote

Let’s hope that the result of our future is some of #1. I might doubt that we merge “long” before an ASI comes around. Could be within a few years (or less). But the goal should be that we’re a part of it.

1

Capitaclism t1_j3p9bqf wrote

I hope we do merge as well, since I don't think outcome 3 is super likely. Why would a generally superintelligent being choose to be subservient when it can surpass the collective intelligence of all beings on the planet as it grows exponentially towards what is, for all purposes on a human scale, infinity?

2

peterflys t1_j3pduu7 wrote

But you do bring up a good point about #3 - seems like there is at least some speculation that some humans will choose not to merge and continue living in whatever the equivalent of a future luddite community would be. What will happen to them? Might be a topic for a different thread, but I know a lot of people speculate that humans could end up like zoo animals in these situations.

1

Capitaclism t1_j3pfcd9 wrote

Right. There's also the possibility that it is impossible to truly merge - that we can put bits and pieces in our skulls, but AI would simply dominate them, rather than merge. We don't really know if our awareness can fully merge with and be expanded by this foreign intelligence, but I do suspect it can. No one knows for sure.

1

ImoJenny t1_j3kze20 wrote

Read Iain M Banks books and invent new absurdly contrived sports and arts, maybe study and practice the sciences and technologies invented by the machines just to test the limit of the human mind.

Honestly whatever we want within our human limits and then abandoning our own humanity (in the narrowest sense of the term) to find new limits.

6

EddgeLord666 t1_j3mjq4z wrote

Wouldn’t need to read Iain M Banks, our lives would be an Iain M Banks plot.

2

Belostoma t1_j3kte6l wrote

>won't the time taken to make scientific discoveries - or really discoveries of any kind - begin to tend to zero?

No.

Many discoveries are only enabled by somebody being in the right place at the right time to make an important observation, or somebody creatively getting interested in a question nobody else thought to ask. These serendipitous moments happen unpredictably. Many questions can't be answered without extensive data collection specifically to answer them, and that consumes time or in-person resources an AI might not have at its disposal, unless we want it to blanket the planet in drones it can use to study everything at once.

For example, I study salmon populations. Many of the questions we ask rely on knowing how many salmon of a particular kind came back from the ocean each year. We basically get one data point per year, and we can't say much from one data point. If we're carefully monitoring a population, which is expensive and time-consuming, it still takes decades to build up a dataset with a large enough sample size to draw useful conclusions. There is no way to speed that up.
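The "one data point per year" constraint can be made concrete with a standard result for trend detection: the standard error of an OLS slope fitted against years 0..n-1 shrinks only as roughly n^(-3/2). The numbers below (a 3% annual decline, ~30% year-to-year noise on the log scale) are illustrative assumptions, not figures from the comment:

```python
import math

def years_to_detect(decline_per_year=0.03, noise_sd=0.30):
    """Smallest number of annual observations n at which a true log-scale
    trend of `decline_per_year` exceeds twice the standard error of an OLS
    slope fit. For x = 0..n-1, sum((x - mean(x))**2) = n*(n**2 - 1)/12."""
    n = 3
    while True:
        s_xx = n * (n**2 - 1) / 12.0
        se_slope = noise_sd / math.sqrt(s_xx)
        if decline_per_year >= 2 * se_slope:
            return n
        n += 1

print(years_to_detect())  # -> 17 years, under these assumed numbers
```

With those (plausible but hypothetical) parameters it takes about 17 annual data points before the decline stands out from the noise, which is why monitoring programs run for decades and why no amount of reasoning speed substitutes for the data.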

Likewise, much of basic research in medical science involves testing things in other organisms, from cell lines to nematodes to fruit flies to mice. Many testing methods rely on random chance, with biologists breeding tons of these critters until a particular genetic variant they want arises, and they can then use it to test some hypothesis. The time to run experiments is generally based on the life cycles of the study organisms, which aren't instantaneous.

Also, there is no chance that "everything is discovered," ever, because there are constantly new things to discover. There are new things happening that we want to understand. In my field, there might be a worrisome fluctuation in a salmon population one year, and the only way to find its cause is to go out and collect some time-consuming data. It can't be deduced from first principles or past data because something new and unusual is happening. New and unusual things are happening everywhere, all the time.

I expect AGI/ASI to be a transformative partner for scientists, capable of both facilitating and making great discoveries. But some of the predictions for ASI are just over-the-top and don't reflect how knowledge is really gained. No amount of digital genius can produce data that haven't been collected yet, nor draw correct conclusions from inadequate sample sizes, nor collect independent samples faster than the study subjects generate them.

Also, regarding the positive feedback loop of AI development and exponential growth / recursive self-improvement, know that pretty much nothing grows exponentially forever. The faster it grows, the faster it reaches asymptotes where progress is limited by something different from whatever was limiting it before.
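That last point - exponential growth running into a limiting resource - is the standard logistic picture. A toy comparison, with arbitrary illustrative parameters:

```python
# Exponential growth vs. growth against a resource ceiling (logistic).
# Both start with the same per-step growth rate r; the logistic curve
# flattens as it approaches the carrying capacity K instead of growing forever.
def grow(steps=200, r=0.1, k=1000.0, x0=1.0):
    exp_x, log_x = x0, x0
    for _ in range(steps):
        exp_x += r * exp_x                    # unconstrained growth
        log_x += r * log_x * (1 - log_x / k)  # growth throttled near K
    return exp_x, log_x

exp_final, log_final = grow()
print(exp_final, log_final)  # exponential explodes; logistic saturates near K
```

Whatever the binding constraint turns out to be for self-improving AI (energy, data, experiment time), the curve it produces looks like the second one, not the first.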

5

RavenWolf1 t1_j3krb49 wrote

I'm going to watch anime all day long!

4

Rakshear t1_j3krl84 wrote

Space colonization. FTL travel may or may not be possible, but extended cryosleep is possible eventually. Colonization of other regions of space is going to happen if we don't kill each other. I think it's likely humanity will come up with laws limiting technology to preserve human purpose, e.g. by limiting copyright to human works, since the purpose of copyright is to protect the intellectual rights of a hard-thought-out design. AGI can already spit out images and stories pretty much instantly, but it's so easy for them, given the massive pool of other authors and artists it can directly reference, that the output is close to worthless in some ways; it takes no real time for a machine to do it. With so many planets and adventures waiting out there, and AI legally limited to helping humanity, there will be entire new arts to be discovered, new creatures, new sights. AI might rapidly advance, but in a lot of ways we won't. I'm all for body upgrades for practical purposes, but I doubt most of humanity will get cybernetics with the aim of uploading to digital states.

4

raishak t1_j3lg7u3 wrote

Most humans live their lives completely disconnected from the rails of progress. Most don't need a grand ambition to be satisfied, instead they get by with rewarding distractions that keep their minds from collapsing inwards.

Remember the brain evolved to serve the body, not the other way around. Happy body, happy brain. I'm sure the AI will figure out how to pacify the population should that be its goal. Humans existed happily for at least 2 million years - most of which saw almost no scientific progress; it's not required for a human mind. An ASI could easily revert human society to this sort of simplicity as the "ultimate" solution.

2

Ok-Significance2027 t1_j3mrep4 wrote

"Technological fixes are not always undesirable or inadequate, but there is a danger that what is addressed is not the real problem but the problem in as far as it is amendable to technical solutions."

Engineering and the Problem of Moral Overload

"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."

Stephen Hawking, 2015 Reddit AMA

Lost Einsteins: The US may have missed out on millions of inventors

You've Got Luddites All Wrong

2

Derpgeek t1_j3q17gv wrote

The vast majority of the comments here are brainlet takes lol.

Assuming a benevolent ASI, there are only a few realistic scenarios for humans afterwards.

  1. Massive intelligence amplification through chips being implanted, or by moving one's consciousness entirely to a silicon-based substrate (more or less becoming an android, usually, though there's certainly going to be a group of people who prefer not to have a traditional body at all).
  2. Merge with the ASI itself. I don’t think most people would want to do this. Although it would be the best choice for maximizing intelligence obviously (you may have 3 chips if you’ve been implanted, but the ASI has quadrillions), you’d presumably be losing some degree of freedom or individuality, but perhaps the almighty ASI would allow you to simply leech off its systems without your consciousness actually being absorbed so to speak.
  3. Vanilla humans. Obviously some minority of people will have no interest in becoming amplified. Some people will even refuse immortality because of religious reasons. Whether or not the ASI would feel compelled to convince them is up in the air, but assuming it has super-morality (superintelligence implies many more emotions that are infinitely stronger than humans have if you ask me), it’d probably not like the idea of people dying period. I’d recommend the book Metamorphosis of Prime Intellect if you want to read more about this general idea. In any case, for those vanilla intelligence humans who chose immortality, they’ll get bored eventually whether it takes hundreds or thousands or trillions of years and would presumably want to be cognitively enhanced so as to experience novel things that the eons they spent in full dive VR can’t even begin to compare to.

Now the more interesting question for me is what will happen to non-humans. In his book The Neuroscience of Intelligence, Richard Haier posits (albeit in the context of humans) that if intelligence is generally a good thing, and leads to a more enjoyable life, and if it's possible to enhance intelligence, then isn't it essentially immoral not to? I'm sure plenty of people will want to have their pet dogs or cats become superintelligent. Dolphins, elephants, and all primates are pretty smart. They probably deserve intelligence enhancement too. But where is the line drawn? Just organisms that are already relatively smart? All mammals and a few aquatic organisms? Idk, so I'll leave that to be answered by the ASI :)

2

royalphlush t1_j3k96b7 wrote

Whoa! I’m high as a kite. I’m diggin’ your insight dude. This is well said and something I hadn’t thought about.

1

Cult_of_Chad t1_j3ky5rs wrote

Do what life does: spread ourselves as far and wide as we can and see what happens. This universe is kinda doomed; nothing else to do in this layer of the simulation, as far as we know.

1

nillouise t1_j3l3x4m wrote

You can use the technology that ASI creates to change your mind, to disable such thinking - like a hypnosis app.

1

No_Ninja3309_NoNoYes t1_j3l7g1s wrote

ASI, if it is actually built, which is not a given, would be too busy with space colonization. Colonization does not have to entail human presence; self-replicating von Neumann probes could do just fine. You will need to update their software through powerful transmitters and communicate with them. Let the drones mine for resources and bring it all home. We can build a Dyson swarm in the solar system, and the robots can build Dyson swarms in other star systems. They can transform that energy into something that can be transported back home, or transmit it to our ever-growing Dyson Swarm One.

1

The5e t1_j3leh3v wrote

Seems unlikely ASI can make all the physical and biological discoveries possible.

1

MelodiGreig t1_j3lq0wv wrote

We'll be so zonked out on super AI drugs and antidepressants it won't matter.

1

fostertheatom t1_j3lscw9 wrote

Welcome to most of history. 99% of it was "be born, eat, sleep, fuck, die". We are a species of vast periods of stagnation and brief bouts of extreme ingenuity. That's just how we are and that is how we would still be if not for the invention of the computer.

1

OsakaWilson t1_j3lu9ql wrote

So long as you cannot prove a universal negative, inquiry will never die.

1

rushmc1 t1_j3m53lm wrote

>> Eventually, I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die

As ever. How many scientific discoveries have YOU made (or tried to make)? That's a niche area for a very few people already.

1

jazztaprazzta t1_j3maaqk wrote

Run a simulation of the pre-ASI world and live in it.

1

byttle t1_j3n3ppf wrote

id rather play a better dwarf fortress

1

SFTExP t1_j3mglta wrote

You mention: “Eventually, I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die.”

AFAIK, many hope ASI will bring about immortality biologically, digitally, etc.

1

PrivateLudo t1_j3n8elm wrote

Full Dive VR. The potential is limitless. You could create worlds bigger, more detailed, and more beautiful than Earth. You could travel to isekai cities with millions of characters that you can interact with (characters so complex that they feel human). The worlds are going to be so believable that you'll feel more at home and more alive inside those virtual worlds.

1

banuk_sickness_eater t1_j3nszz0 wrote

>What will humanity do when everything is, well, eventually discovered by ASI?

The end of the Anthropocene and the beginning of the Silocene.

1


Professional-Noise80 t1_j3o2frz wrote

Intelligence is a tool to produce and apply knowledge, but it's not everything. For humanity to make progress in science we need to actually test hypotheses in the real world, and that takes time and resources. Without real-world data, intelligence is basically useless.

And obviously scientific progress is a very long process that wouldn't be that much faster with the aid of better AI, imo.

1

Lord_Thanos t1_j3o4yz1 wrote

Humans will not exist by that time.

1

Lawjarp2 t1_j3koy44 wrote

All the people thinking they will live luxurious lives are kidding themselves. The self-proclaimed environmental crusaders will have you living on less than what you have right now, and the new neo-wannabe communist leaders will remove all the capitalists and then go on to live even more luxurious lives themselves.

−5

crap_punchline t1_j3lera7 wrote

hard to see how the future plays out any other way

either enjoy a life deprived of many of the luxuries of life or join the hivemind and have anything you like in virtual reality

tough choice

−2