Submitted by submarine-observer t3_125vc6f in singularity

In this subreddit, many of us are eagerly anticipating advancements like fusion energy, FDVR, and the ability to upload our consciousness once ASI becomes a reality. However, have we considered the possibility that even ASI might not be able to make these things happen? Let's explore two potential scenarios:

Scenario 1: Certain technologies are simply not physically possible. There may be limitations imposed by the laws of our universe. For instance, even with ASI, we might not be able to travel faster than light. Similarly, it could be the case that uploading a human mind without causing harm to the individual is impossible.

Scenario 2: While these technologies might be possible, there could be an upper limit to intelligence. There's no doubt that AI will eventually surpass human intelligence, but perhaps our intelligence isn't as impressive as we think. Imagine if the upper limit of intelligence is only 100 times greater than human intelligence. An AI 100 times smarter than us might still be unable to solve the most complex problems.

In conclusion, we may find ourselves living in a world that resembles our current reality, with the only major difference being that we are no longer the most intelligent species on the planet.

26

Comments


TopicRepulsive7936 t1_je6667h wrote

Fusion and Full Dive just need some number crunching. Don't know about consciousness boogeyman.

9

naum547 t1_je67zvc wrote

I think you are underestimating how much of a difference intelligence would make. For example, how much "higher" is our intelligence than a monkey's? Probably less than 2x; maybe we're even only something like 50% more intelligent. So an AI 5x more intelligent would be completely incomprehensible to us, let alone a 100x AI.

11

flexaplext t1_je6j0wt wrote

I think people often underestimate the capabilities of AI.

But they also often overestimate the capabilities of physics.

Some things will just be impossible, not allowed within the laws of physics no matter what. I can't say exactly what those things will be, but I'll throw my hat in the ring and say they'll include a number of the things people hypothesize AI will be capable of doing.

11

Supernova_444 t1_je6kc7b wrote

I think the only real constraints on an ASI would be what's physically possible. We don't really have any reason to think there could be a limit to machine intelligence besides hardware (and then we just get it to design better hardware). Stuff that we know could be possible, like nuclear fusion and FDVR, would be trivial; it's only the more theoretical stuff like FTL travel that might be impossible.

But even a machine that's "only" 100x smarter than us would be a massive game changer. Even with narrow AI designed by humans, we were able to more or less solve the protein folding problem. Imagine what something that thinks at the speed of light would be able to do.

12

World_May_Wobble t1_je73nie wrote

You can set its intelligence arbitrarily high; the fact remains that it may still bump up against hard physical constraints already familiar to us.

It's naïve to assume that everything is possible.

6

ozten t1_je762va wrote

Unlike many futuristic topics, fusion has been demonstrated to work, just inefficiently. You put in a dollar's worth of energy to create five cents' worth of energy (not an actual ratio). If we could make existing fusion tech 100x more efficient, we could largely solve the energy crisis. So there is an engineering path forward that is compatible with real-world physics.
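
To put rough numbers on that (these are placeholder figures matching the dollar/five-cents analogy above, not real reactor data), the gain factor Q is just energy out divided by energy in:

```python
# Rough gain-factor arithmetic with placeholder numbers (not real reactor figures).
energy_in = 1.00   # "a dollar's worth" of input energy
energy_out = 0.05  # "five cents' worth" of output energy

q = energy_out / energy_in  # gain factor Q; 0.05 in this toy example
q_after_100x = q * 100      # a 100x efficiency improvement would give Q = 5

print(f"Q today: {q:.2f}, Q after a 100x improvement: {q_after_100x:.1f}")
# Q > 1 means the reaction yields more energy than was put in.
```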

3

BrBronco t1_je7eeet wrote

Not being the most intelligent species on the planet does not sound like a good plan.

2

Wroisu t1_je7gbqk wrote

Alcubierre metric, unification of GR & QM, plus making the Casimir effect work at classical scales would probably get you there, maybe. Having more intelligence might help you suss out solutions to problems we can't solve yet, like the unification of GR & QM… if things like FTL are possible, it'll pop out of whatever unifies those two frameworks.

6

D_Ethan_Bones t1_je7myw5 wrote

Per present consensus we can't exceed the constant c, but if we could accelerate to 1% of that speed and slow down again when desired, amazing things become possible. Colonizing the galaxy would be a slow process at that speed, but if humanity's survival is no longer centered on one planet, then we have plenty of time.

That would mean not just putting human boots on Mars, but extensive exploitation of the solar system: mine Mercury, siphon Venus, turn Mars into a forgeworld, and siphon the gas giants to power the 'slow' interstellar ships.

Pick four or so nearby star systems and send slowships (generation ship, longevity ship, cryo-sleep ship, whatever we get a firm grasp on first) one after another in slow processions. Orbital colony networks around the big cloudy worlds assemble and fuel up the slowships, completing one every year, or every 10 years, or every 40 years, whatever. Each big cloudy world gets one target star system to attempt to colonize.

The first slowship seeds a star system with comm sats in star orbit, the second deploys smaller drones to put scanning sats into polar orbits around the planets, the next several transit space-station parts and builder bots into the system, then we send human pioneers, then colonists. Once the motherships are done transiting to the star system, they can be repurposed as giant communication devices.

This is with conservative expectations of technology but it involves a little bit of faith in humanity.

1

Aevbobob t1_je8c5kj wrote

The things you list in your title aren’t forbidden by the laws of physics, they’re just engineering problems. If you’re going to speculate about the capabilities of a mind 100x smarter than you, it’d be worth considering what a dog thinks about chip manufacturing. Now, while you’re considering this perspective, consider how much smarter you are than your dog. It’s not 100x. It’s not even 10x.

A better discussion topic might be to point out that our DNA grew up in a linear world. Our primal intuitions about change are that things will probably not change that much. In an exponential world, follow the evidence, not just your intuition. If you think an exponential will stop, make sure your reasoning, at its core, is not "it feels like it has to stop" or "it seems too good to be true".
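
As a toy illustration of how far linear intuition drifts from an exponential reality (the growth numbers here are made up, not a forecast):

```python
# Toy comparison of linear vs. exponential expectations (made-up numbers, not a forecast).
start = 1.0
linear_step = 1.0   # intuition: "things improve by about the same amount each period"
growth_rate = 2.0   # exponential world: capability doubles each period

for period in range(0, 11, 2):
    linear = start + linear_step * period
    exponential = start * growth_rate ** period
    print(f"period {period:2d}: linear {linear:6.1f}   exponential {exponential:8.1f}")

# After 10 periods the linear guess says ~11x; the exponential curve says ~1024x.
```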

3

bigbeautifulsquare t1_jea819r wrote

Well, it's not quite known to us what the true limitations of physics are. AI may see whole new systems that we haven't even thought about, but it also may not, so as of right now it's mostly speculation.

2