OsakaWilson t1_jdqcu3k wrote

Capitalism will no longer function to distribute wealth throughout society. Whatever emerges in its vacuum will look more like socialism than anything else. We won't need to code it; it will be the socio-economic system that is compatible with the technology. The only alternative to varieties of socialism will be absolute totalitarianism.

3

OsakaWilson t1_iw28xw6 wrote

I was once on an airplane that was on fire and attempting a safe landing. We were either going to make it or not, and no amount of thinking or angst would change anything except the experience of what might be our last moments. There was a surprising sense of peace that probably comes from knowing there is absolutely nothing you can do.

I have a similar approach to the singularity. I'm just hoping that it is sufficiently lacking in gravitas. : )

28

OsakaWilson t1_iuoxwq3 wrote

Yes. You are also describing nuclear weapons, which are verifiable, and nearly every party that could create them did. I'm not saying it is good; I'm saying that in an environment of distrust, that will be the result. It's not even a national decision. Multiple companies worldwide could pursue it. All it takes is one group believing they can contain it while they get rich, and it's over.

1

OsakaWilson t1_iulfo6g wrote

This is well covered in the book Life 3.0. The conclusion is that since there is no way to recognize an AI project externally (as there is, for example, with a nuclear program), any one member of an agreement acting in bad faith would leave all the rest behind. In the current global community, a simple risk analysis reaches the conclusion that although it makes sense to join an agreement, it would not make sense to actually refrain from creating an AI.

The suggestion that mere humans could keep a superintelligence confined is also disposed of pretty thoroughly.

1