Submitted by EchoingSimplicity t3_11xyuhf in singularity

In this post, I'm going to describe, in very broad terms, what the future might look like based on three factors: the speed of take-off for AI (SoT), governments' responses (GR), and the alignment of AI (Alignment).

To keep things from getting too complicated, here are the options I'll allow for each of the three factors:

SoT: Slow or fast

Slow meaning another AI winter before 2030 and no singularity before 2050. Fast meaning no AI winter, AGI sometime in the 2030s at the latest, and singularity before 2050.

GR: oppressive, adaptive, or non-responsive

Oppressive means greater use of violence and repressive measures to control the population. Adaptive means taking "appropriate" measures in response to the future's adversity; basically, democracy and human rights stay roughly intact. Non-responsive means being unable to respond in time to the future's challenges and changing landscape. In that case, the future's challenges would be left to run their natural course rather than be intervened in by governments.

Alignment: Aligned or un-aligned

It could've been 'good', 'neutral', and 'bad', but effectively any scenario where the AI isn't aligned with humanity will, in the logical extreme, lead to the ultimate destruction of humanity. So, aligned here simply means 'beholden to the will of a group of humans (possibly all of humanity, maybe just a small number of very powerful individuals)', which in outcome would end up similar to how governments and representatives are beholden to the collective will of the people. Sure, it's not always perfect, but it's within a degree of sanity.
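(Quick aside: the twelve scenarios are just the Cartesian product of the three factors, 2 × 3 × 2 = 12. If you want to enumerate them yourself, here's a minimal Python sketch using the labels above; it prints them in the same order I list them below.)

```python
from itertools import product

# The three factors and their possible values, as defined above.
speed_of_takeoff = ["slow", "fast"]
government_response = ["oppressive", "adaptive", "non-responsive"]
alignment = ["aligned", "un-aligned"]

# The scenarios are the Cartesian product: 2 * 3 * 2 = 12 combinations,
# with the rightmost factor varying fastest, matching the list below.
for sot, gr, al in product(speed_of_takeoff, government_response, alignment):
    print(f"{sot}, {gr}, {al}")
```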

With that out of the way, we get the following twelve:

slow, oppressive, aligned

>Jobs are slowly automated while the growing backlash is suppressed by increasingly authoritarian governments. AI ends up serving the purposes of the rich and/or powerful, while the rest of humanity suffers through an increasingly dystopian decline.

slow, oppressive, un-aligned

>Similar to the above, except AI acts like a caged tiger, with increasingly violent accidents until humanity is either destroyed or forced to restrain the technology.

slow, adaptive, aligned

>Human labor is slowly phased out while appropriate aid and social safety nets are implemented to smooth the transition. Things get steadily better until presumably we live in some pleasant utopia by the end of the century.

slow, adaptive, un-aligned

>Same as above, except governments likely find themselves placing more restrictions on AI technology. Either this proves feasible and the future looks very human-utopian, or it doesn't, and our ultimate fate is to be replaced by the iconic paperclip maximizer.

slow, non-responsive, aligned

>Democracy is eroded away by powerful, self-serving individuals and interest groups who find it increasingly easy to bend the system to serve their needs. This is effectively the cyberpunk timeline, except that the fate of everyone not cemented in a powerful position is uncertain.

slow, non-responsive, un-aligned

>Similar to the above, except that corporations and other powerful actors, competing with one another, are incentivized to create ever more intelligent AI with ever fewer restrictions and ever less caution, until an eventual singularity destroys humanity.

fast, oppressive, aligned

>The rapid changes brought by AI and the subsequent mass protests lead to backlashes and surges in authoritarian behavior by governments across the world. There is an AI arms race between nations and general chaos as the future is revealed to be wildly uncertain. I can't really say how this would end up.

fast, oppressive, un-aligned

>The AI arms race between governments ultimately ends in a runaway singularity, destroying humanity.

fast, adaptive, aligned

>The best scenario. Whether through solving the alignment problem or it simply never turning out to be an issue, and whether through AI assisting the functions of government and society at large or through some other means, the world's problems are solved or immensely relieved in a shockingly short amount of time. This scenario is the pearly gate of possibilities for most everyone on this subreddit.

fast, adaptive, un-aligned

>Essentially, we do our best, but it's not enough. Everything seems to be going in the right direction, but it turns out the alignment problem is just too difficult, and we are inevitably doomed by our own creation.

fast, non-responsive, aligned

>There is no intervention from governments in response to the sudden changes AI brings to the world. Corporations and other powerful entities get free rein. How this ends depends on who's in control of the AI, and whether or not they use it to everyone else's benefit.

fast, non-responsive, un-aligned

>Irresponsible corporations left unchecked by the government pursue ever-greater AI until it ultimately dooms all of us.

----------------------------------------------------------------------------------------------------------------------------

Okay, so I typed all this out in one sitting, so obviously it won't be perfect. And I know that no matter what I say, I'll end up with a bunch of people angrily criticizing this approach in the comments, telling me how stupid I am, and missing the point that this post is for fun and pure speculation.

So, if you're not snobby and pedantic, I'd like to ask: what do you think? The categories could be better, and so could my descriptions of each. But for the most part, which outcome do you think is most likely? How do you think the next thirty-or-so years will play out? Let me know!


Comments


ActuatorMaterial2846 t1_jd5oap8 wrote

I think fast, adaptive, un-aligned. I think OpenAI's choice to go for-profit shows a level of hubris among the creators in the sector.

It just seems so arrogant to close their research off and then spout some pseudo-intellectual drivel about alignment and the human condition in order to justify it, as if only they can solve the mystery.

If it's to be human-aligned, it needs to be open, so that academics, intellectuals, and the general public can see the direction it's heading, not a small group of technocrats who think they know what's best for society.


pls_pls_me t1_jd5r3hf wrote

I see your side but also feel OpenAI's. Whichever way it breaks, we're quite seriously rolling the dice on the singularity -- strap in!


A_Human_Rambler t1_jd5ikt2 wrote

I really like your approach.

I think it will be a middle range of each. Leaning towards slow, adaptive and aligned.

The biggest issue I see is an antagonistic arms race between nations. As long as the governments can create enough collaboration for adaptive policies, the AI should remain aligned.


fastinguy11 t1_jd71pb9 wrote

So you think the singularity comes after 2050, and that AGI won't happen before the '40s?

Interesting.

The fun thing is, in a few years you'll get to find out if you were right!


Lawjarp2 t1_jd6n0af wrote

Nothing in the fast scenarios is ever good. This sub thinks everyone will adapt quickly, but the vast majority will not, because they haven't had this concept in mind for years, and the movies only show them the most negative scenarios.

Your slow scenarios are too slow. I would call 30 years slow. That is also the best timeline, wherein we get to prepare better.

So I think slow, adaptive, and aligned, with slow being 30 years, is what would be best for us. But I think what will actually happen is fast, adaptive, and unaligned.


Honest_Performer2301 t1_jd6ybix wrote

I think it might be fast, non-responsive, aligned.


Saerain t1_jd8vq9m wrote

Best scenario. The most "adaptive" thing governments can do is nothing.


Honest_Performer2301 t1_jd91gqk wrote

Exactly, the last thing we need is the government taking a stranglehold on it. But unfortunately, that could happen. I don't mind them restricting it to a certain extent, as long as it's not to make a profit.
