
TK-741 t1_j22y2ja wrote

Imagine. 60 years of travel to Alpha Centauri. That seems so close — almost on the cusp of being feasible for us to get unmanned probes out there, even. If only we had invested in this type of tech, we could even be halfway there by now. I could have theoretically seen another star and its planets from satellite photos taken within its solar system, in my lifetime.

It pains me to see what humanity has instead dedicated itself to. We are wasted potential, squandering all the greatness and opportunity for exploration. We could build something incredible and yet we can’t even agree on whether we should.


ShaggysGTI t1_j23s2yb wrote

I doubt we’ll ever become star children… we’re too busy squabbling over this rock.


Diggitydave76 t1_j24jyed wrote

By the time we figure out it's in our best interests, it will be too late.


pgriz1 t1_j23zy5p wrote

Harnessing powerful technology before we learn to play nice with each other is just giving us more powerful weapons. I very much want humanity to explore the solar system and then the solar neighbourhood, but we also have to figure out how to control our baser impulses in order for that investment of time, money and effort to be positive, rather than another expansion/colonization effort based on the destruction of whatever is there already.


CoivaraPA t1_j24w5lj wrote

We will never play nice with one another; it's not in our nature. And competition with each other is key to human advancement.


pgriz1 t1_j25oekv wrote

Then for humans to advance further, we'll need to learn to compete without destroying.


omegasix321 t1_j26pv2i wrote

Or bypass human nature altogether. After all, since when has what was natural stopped us before?


pgriz1 t1_j27dq6f wrote

If AI development results in a self-learning system that achieves self-awareness, we may find ourselves a potentially endangered species. Using human history as a dataset, it may decide that human management of affairs is lacking, and may choose to limit human influence to things that don't cause harm. And if we don't agree... taking over the controls of water, power, transportation, and potentially even military systems may persuade us to play nice. But at that point, the sentience running the planet won't be human.


omegasix321 t1_j28mpaj wrote

None of that sounds like a bad thing. If the AI is smart enough to manage resources better than us, with the end goal of improving human quality of life in mind, I see no problem. Who cares what's running everything so long as things get done and the people are prospering?

Even more so if it can do it in a way that denies resources from its detractors while providing infrastructure to those that allow it to work for them. Visibly improving society as it does so. Effectively shutting down our more violent, power-hungry, and suicidally independent natures without firing a single bullet.


pgriz1 t1_j28sq3t wrote

>Who cares what's running everything so long as things get done and the people are prospering?

That's the big "if" - would such an AI put human interests high on its priority list, or would it decide that we (ie, humanity) are more trouble than we're worth and need to be kept limited (or even severely reduced)? Would it decide that our concepts of rights, freedoms, and opportunities are now quaint anachronisms, and coerce us into a zoo-like existence? And all that speculation doesn't account for the possibility that it may conclude humanity has not proven itself capable of self-regulation, and decide to impose "corrective" measures to restore balance.

There is also the possibility that the human contributors to the AI's development deliberately fed it "curated" examples of human behaviour, which would skew the AI's responses to favour certain groups over others.


SubterrelProspector t1_j275yrs wrote

I saw one estimate that said 40 years. 40-60 years though... damn, we could already have something headed to that system.