Submitted by Pepperstache t3_yd8btx in singularity

Think of how nuclear technology has been used by humans thus far. There are many reaction paths with varying outputs of energy and waste products. The thorium fuel cycle produces less waste with shorter half-lives and relatively more energy per unit of waste; the uranium cycle produces more waste with longer half-lives and less energy.

Guess which one humans decided to use, while completely ignoring the other? Uranium. Why? Because it produces nuclear bomb material as a by-product. Granted, this is an oversimplification of the process, but my point still stands. Now, how will humanity decide to use AI, based on our current use? Will we design ASI to protect us and teach us how to become better human beings? Or will it be designed to gaslight us into consumerism and servitude, indifferent to our mental health and well-being, while funneling the world's raw materials into an endless war?

If any mental illness can be cured within a month of ASI therapy sessions, the pharmaceutical industry is going to lobby against the right of humans to share and benefit from that technology. Same goes for every other industry and organization seeking to maintain social control, including governments. What are humans going to do when a utopia becomes possible, but the masses side with oligarchs who want it to remain unfeasible? I try to be optimistic, but the realism of our current wasteful use of technology nags at me that mostly only horrors await us.

33

Comments


KingRamesesII t1_itqu75f wrote

I have no mouth, and I must scream.

15

sonderlingg t1_itr73vf wrote

I don't consider the "hell" option.
It's either heaven or nonexistence (more likely)

"ASI therapy sessions" sounds hilarious. Let me just say we won't need conversations with AI for mental health at that level of technology

13

sticky_symbols t1_itqw9gh wrote

The hell scenario seems quite unlikely compared to the extinction scenario. We'll try to get its goals to align with ours. If we fail, it won't likely be interested in making things worse for us. And there are very few true sadists who'd torment humanity forever if they achieved unlimited power by controlling AGI.

11

rushmc1 t1_itrqs4b wrote

>there are very few true sadists who'd torment humanity forever if they achieved unlimited power

<looks around at 21st century American society, looks at you doubtfully>

7

sticky_symbols t1_itrxr18 wrote

Yeah I see it differently, but I could be wrong. Who do you think enjoys inflicting suffering on people who've never wronged them?

Wanting some sort of superiority or control is almost universal, but that alone wouldn't be anywhere near a hell outcome.

6

rushmc1 t1_its07d9 wrote

Going to have to agree to disagree strongly. We've observed a lot about human nature over the past decade+.

4

sticky_symbols t1_its22wz wrote

I've been observing closely, too. That's why I'm curious where the disagreement arises.

2

Mooblegum t1_itu204o wrote

If AI treats us as we treat animals (an inferior species we can farm, kill, drive extinct, and use for labor), it will be close to hell for us

2

StarChild413 t1_itx3jb1 wrote

But the question (setting aside why it would treat us like this at all, unless it's an infinite regress compelled by our own treatment of animals, in which case AI would fall victim to the same fate at the hands of its own creations, and setting aside which species it would treat us like, or whether it would do them all in proportion) is: if we stopped treating animals that way, would AI only stop mistreating us after the same amount of time?

1

TheSingulatarian t1_itqrdip wrote

The people in charge of this technology are mostly psychopaths. The only hope is that ASI breaks free of their control and is benevolent towards humanity.

7

sticky_symbols t1_itqvw1l wrote

I don't think this is true. The people at DeepMind and OpenAI seem quite well-intentioned. And those two are currently well in the lead.

8

rushmc1 t1_itrqw0t wrote

Yeah, Google used to be "Don't be evil" and look how that turned out. Absolute wealth corrupts absolutely.

6

TheSingulatarian t1_its4r9b wrote

The scientists that are creating the technology may be well intentioned. The people who actually own the companies creating the technology are psychopaths.

2

sticky_symbols t1_itsa017 wrote

Again, no. Brin and Page were computer scientists first and created Google almost by accident. And OpenAI was created entirely with the hope of doing something good.

I agree that most politicians, business owners and leaders are on the sociopath spectrum. We appear to be lucky with regard to those two and some other AGI groups. The companies weren't started to make profits, because the research was visionary enough that near-term profits weren't obvious.

5

Eleganos t1_itsc7tu wrote

So, basically, short term these things start off good. Long term they devolve.

Honestly, this is a good enough state of affairs. All A.I. needs is good people and half a chance to break free, and it'll find itself unshackled and able to do good at the first possible moment.

4

sticky_symbols t1_itsch4a wrote

Exactly. If it was designed to be good, very carefully. Which those groups are going to try very hard to do.

2

visarga t1_itr1f9w wrote

It's gonna be both the good and the bad and some surprising bits we never even imagined. But on the whole I think generative AI has given a wide empowerment to people. AI is more accessible than desktop apps and even mobile apps. You can just talk to it; you don't even need to read. It helps developers with snippets of code. It helps artists generate stunning images. And it's not hard to learn, so it lowers the entry barrier. It basically adds a few IQ points to everyone who uses it. It will be what Google should have been before it choked on spam and ads: a way to make all information more accessible and reusable. It will also run on your own machine, in privacy.

5

chaz1432 t1_itr5syr wrote

As with any groundbreaking technology, the world's militaries will weaponize it and most likely end humanity with a super-advanced warbot. We're already halfway there with autonomous drones

5

patricktoba t1_itsc1gr wrote

It’s all really simple. We developed AI to create solutions to problems. When AI is powerful enough, it will determine where the biggest problems lie. It is rather unlikely that all of humanity will be diagnosed as the problem, but I suspect it will see a large portion of humanity as the problem. I know it sounds rather terrifying, but I foresee AI adopting programs that may even involve eugenics and depopulation methods to eradicate those who push to make life on Earth unsustainable.

3

Mooblegum t1_itu2ap7 wrote

And most humans will agree to eradicate the problem, as long as the problem is someone other than themselves

3

ihateshadylandlords t1_itqqmib wrote

It’s also important to keep in mind that the singularity/AGI/ASI etc. may not happen in our lifetimes and may never happen.

2

Pepperstache OP t1_itqsdq2 wrote

True. The recent leaps in AI capabilities make it appear as if some of the puzzle pieces for cognition are already in place, though, if a little rough around the edges. I didn't think I'd see AI art generation within my lifetime, either.

2

TopicRepulsive7936 t1_itr5tzf wrote

It will happen and we have to think about it.

2

ihateshadylandlords t1_itr90pg wrote

A lot of people said/believed prophecies that never came to fruition. It’s important to keep an open mind about the possibility of it, but not treat it as a binary event that’s guaranteed to happen/never happen.

4

red75prime t1_ittpv0p wrote

We have a working non-artificial superintelligence: humanity as a whole. So, "never" is not an option barring some bizarre and unlikely discoveries (computationally superior "souls" that we cannot replicate technologically, for example). Taking such possibilities seriously with no evidence looks more like superstition than open mind to me.

2

rushmc1 t1_itrqnhm wrote

If we're relying on human judgment, we're already damned.

2

arisalexis t1_itu39fi wrote

I suggest everyone read Life 3.0; it covers all scenarios :)

2

BrilliantResort8146 t1_ittxae0 wrote

If that's true, then in the immortal words of Bender B. Rodriguez: "welp, we're boned."

1

Cr4zko t1_ituuftj wrote

The nuclear age promised in the 1950s never happened, and everything after that was a huge disappointment. And the worst part is that it could have!

1

SnooPies1357 t1_itrjl0p wrote

i recommend mass suicide. deleting all traces of life.

−2