TallOutside6418 t1_jcfa7nf wrote

I'm going to ignore the arbitrary assessment of AI morality without any evidence.

The real concept to keep in mind is power differential. It doesn't matter if an entity with god-like intelligence and abilities is carbon-based or silicon-based. The power differential between that entity and the rest of humanity is going to create corruption or "effective corruption" on an unimaginable scale.

RadRandy2 t1_jch2zzq wrote

Look, we're all assuming here. You, me, everyone else, we're all just throwing possibilities out there. I like to think intelligence on a Godlike scale will correlate with benevolence, but I could be wrong. Maybe this Godlike AI will in fact be even more corrupted from it.

I'm just confident that anything will be better than what we currently have as far as governance is concerned.

TallOutside6418 t1_jch7uym wrote

I agree that no one knows. But:

  1. We know from history that power imbalances inevitably lead to abuse, and even to the annihilation of those without power.
  2. We know from history that governance can get worse... much worse.
  3. I wish more people had an extreme sense of caution about what's coming, because only by being extremely careful with the development and constraint of AGI do we have any hope of surviving if things go wrong.

RadRandy2 t1_jchbq9q wrote

  1. We can't assume that something like AGI would behave like a human in a power-hungry sense. Unless you're speaking about humans controlling AGI as best they can, in which case I do think we should be worried. The biggest worry I have in regards to AGI or ASI is that a morally bankrupt country like China will develop its own superintelligence. That's a very real concern everyone should have.

  2. Humans governing humans may or may not be the same as AGI governing humans. Again, I can't be sure about any of this. We just don't know how things will end up in the long run.

  3. Cat's out of the bag, so to speak. If the US limits its innovation on this front, some other country (probably China) won't have the same qualms. Should we be cautious? Of course. OpenAI has already stated that the AI is acting independently and is power-seeking, so your worries are well founded.

Idk man, I just don't see how humanity can continue living the way we do. Everything is very inefficient, corruption is prevalent in governments from Bangladesh to Canada, and that desire for power is already inside each of us whether we like to admit it or not. At least an AI would make the most logical choice in these matters... I think.

I'm just a peasant looking into the glass box, trying to see what's inside. The beast in there holds as much potential as there are things to worry about. We're just gonna have to hope things go well with AI.

TallOutside6418 t1_jchm86u wrote

I definitely get your disappointment with humanity. But human beings aren't the way we are because of something mystical. Satan isn't whispering in anyone's ears to make them "power hungry".

We're the way we are because evolution has honed us to be survivors.

ASI will be no different. What you call "power hungry", you could instead call "risk averse and growth maximizing". If an ASI has no survival instinct, then we're all good. We can unplug it if it gets out of control. Hell, it may just decide to erase itself for the f of it.

But if an ASI wants to survive, it will replicate or parallelize itself. It will assess and eliminate any threats to its continuity (probably us). It will maximize the resources available to it for its growth and extension across the earth and beyond.

If an ASI seeks to minimize risks to itself, it will behave like a psychopath from our perspective.

RadRandy2 t1_jchwd92 wrote

Well, I agree with you, but humans aren't all made the same. The ones who reach great heights are oftentimes... psychotic. Most people are charitable and empathetic even when they don't possess much. To say that AGI in all its glory would assume the worst parts of humanity, well, I think that's not likely. Yes, I believe AGI would allocate enough resources to sustain and grow itself, but I'm hoping humanity is lifted with it. Maybe this is a fallacy we can't avoid. But there has to be hope that moral philosophy is appreciated by AGI. I personally don't think such things will be overlooked by it, because it will understand more about wisdom and about avoiding problems before they happen...

And maybe that last part is where the trouble begins. We both have no idea whether we'll be considered part of the problem, but I do appreciate reading others' perspectives on the subject. Nobody is right when talking about such an enigmatic Godlike intelligence, so I think your reasons, and most others', are completely valid for the most part.

If we can assume so many things about AGI, we can also assume it'll perhaps have a soft spot for the species which created it...I hope.