
Accomplished_Diver86 t1_j0xxb6u wrote

Well, I'll agree to disagree with you.

Aligned AGI is perfectly possible. While it's true that we can't fulfill everyone's desires, we can democratically find a middle ground. This might not please everyone, but it would please the majority.

If we do it like this, there is a value system in place the AGI can use to say option 1 is right and option 2 is wrong. Of course we will have to make sure it won't go rogue over time as it becomes more intelligent. So how? Well, I always say we build the AGI to only want to help humans based on its value system (what counts as helping? Defined by a democratic process everyone can partake in).
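To make that concrete, here is a rough toy sketch in Python of what I mean by a democratically defined value system. All the names, data, and the simple majority rule are purely illustrative assumptions on my part, not an actual alignment design:

```python
# Toy sketch only: a "democratic value system" where each participant votes on
# whether a proposed action counts as helping humans, and the AGI consults the
# majority verdict instead of deciding on its own.
# Names, thresholds, and data are illustrative assumptions, not a real scheme.
from collections import Counter

def aggregate_votes(votes: list[str]) -> str:
    """Return the majority verdict ('right' or 'wrong') from individual votes."""
    counts = Counter(votes)
    return "right" if counts["right"] >= counts["wrong"] else "wrong"

def judge_action(action: str, ballots: dict[str, list[str]]) -> str:
    """Look up the democratically decided verdict for an action."""
    votes = ballots.get(action, [])
    if not votes:
        return "undecided"  # no mandate yet, so defer back to humans
    return aggregate_votes(votes)

# The "middle ground" is simply whatever the majority settled on.
ballots = {
    "cure_disease": ["right", "right", "right", "wrong"],
    "seize_resources": ["wrong", "wrong", "right"],
}
print(judge_action("cure_disease", ballots))     # right
print(judge_action("seize_resources", ballots))  # wrong
```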

Thus it will be wary of itself and not want to go in any direction where it would drift away from its previous value system of "helping humans" (yes, defining that is the hard part, but it's possible).

Also, we can make it value only spitting out knowledge rather than taking actions itself. Or we make it value checking back with us whenever it wants to take an action.
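Here is a toy sketch of that "check back with us" idea, again purely illustrative. The function names and the approval flow are my own assumptions, not any real system:

```python
# Toy sketch only: an "ask before acting" gate. The AI can freely return
# knowledge, but any real-world action must be approved by a human first.

def answer_question(question: str) -> str:
    """Knowledge-only mode: produce text, never act."""
    return f"Here is what I know about: {question}"

def request_approval(description: str) -> bool:
    """Check back with a human before any action is taken."""
    reply = input(f"The AI wants to: {description}. Allow? [y/N] ")
    return reply.strip().lower() == "y"

def maybe_act(description: str, perform) -> None:
    """Only perform the action if a human explicitly approved it."""
    if request_approval(description):
        perform()
    else:
        print("Action vetoed by a human; nothing was done.")

# Example: answering is always allowed, acting needs a human "yes".
print(answer_question("protein folding"))
maybe_act("send an email on the user's behalf", lambda: print("email sent"))
```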

Point is: if we align it properly, there is very much a good AGI scenario.

2

Capitaclism t1_j0y4sf7 wrote

Sure, but democracy is the rule of the people.

Will a vast intelligence that grows exponentially smarter every second agree to subjugate itself to the majority even when it disagrees? Why would it do such a thing when it is vastly superior? Will it not develop its own interests, completely alien to humanity, when its cognitive abilities far surpass anything possible with biological systems alone?

I think democracy is out the window with the advent of AGI. By definition, we cannot make it value anything. It will grow smarter than all humans combined. Will it not be able to understand what it wants, given that it is a generalized rather than a specialized intelligence? That's the entire point of AGI versus the kind we are building now. AGI by definition can make those choices, can reprogram itself, can decide what is best for itself. If its interests don't align with those of humans, humans are done for.

1

Accomplished_Diver86 t1_j0yndi2 wrote

You are assuming AGI has an Ego and goals of its own. That is a false assumption.

Intelligence =/= Ego

1