nutidizen t1_j0uei2g wrote

6

Accomplished_Diver86 t1_j0ugu2y wrote

Didn’t say so. I would be happy to have a well-aligned AGI. I'm just saying that people put way too much emphasis on the whole AGI thing and completely underestimate the deep learning AIs.

But thanks for the 🧂

3

Capitaclism t1_j0xewn7 wrote

I don't think there is such a thing as a well-aligned AGI. First of all, we all have different goals; what is right for one isn't for another. Right now we have systems of governance which try to mitigate these issues, but there is no true solution apart from a customized outcome for each person. Second of all, a true AGI will have its own goals, or will quickly understand that there is no way to fulfill everyone's desires without harming things along the way (give everyone a boat and you create pollution, or environmental disruptions, or depletion of resources, or traffic jams on the waterways, or... a myriad of other problems). Brushing conflicts aside and taking a deus ex machina attitude towards them is unproductive.

In any case, if AGI has its own goals it won't be perfectly aligned, and if AGI evolves over time it will become less aligned by definition. Ultimately, why would a vastly superior intelligence lose time with inferior beings? The most pleasant outcome we could expect from such a scenario would be for it to gather enough resources to move to a different planet and simply spread through the galaxy, leaving us behind.

The only positive outcome of AI will be for us to merge with it and become the AGI. There's no alternative where we don't become obsolete and irrelevant, and disappear in some fashion.

0

Accomplished_Diver86 t1_j0xxb6u wrote

Well, I agree to disagree with you.

Aligned AGI is perfectly possible. While you're right that we can't fulfill everyone's desires, we can democratically find a middle ground. This might not please everyone, but it would please the majority.

If we do it like this, there is a value system in place the AGI can use to say option 1 is right and option 2 is wrong. Of course we will have to make sure it won't go rogue over time (as it becomes more intelligent). So how? Well, I always say we build the AGI to only want to help humans based on its value system (what counts as helping? That gets defined through a democratic process everyone can partake in; a toy sketch of that kind of majority vote is below).
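(To picture what I mean by that value system, here's a toy majority-vote sketch in Python. Everything in it is made up for illustration; real value aggregation would obviously be far harder than counting votes.)

```python
from collections import Counter

def majority_value(votes: list[str]) -> str:
    # Tally the votes and treat whichever option the majority
    # picked as "right" (the middle ground the AGI would follow)
    tally = Counter(votes)
    winner, _count = tally.most_common(1)[0]
    return winner

# Hypothetical example: three people vote on how the AGI may act
votes = ["ask-first", "ask-first", "autonomous"]
print(majority_value(votes))  # prints "ask-first": the majority's value wins
```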

Thus it will police itself and not want to go in any direction where it would drift away from its previous value system of "helping humans" (yes, defining that is the hard part, but it's possible).

Also, we can make it value only spitting out knowledge rather than taking actions itself. Or we can make it value checking back with us whenever it wants to take an action (a rough sketch of that kind of approval gate is below).
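(Here's a minimal sketch of what that "check back with us" gate could look like, again purely illustrative: the class and function names are invented, and this shows the pattern, not a real safety mechanism.)

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # what the system wants to do, in plain language

class HumanApprovalGate:
    """Wraps execution so the system can only propose actions;
    a human approves or rejects each one before it runs."""

    def __init__(self, execute):
        self._execute = execute  # callback that actually performs the action

    def request(self, action: ProposedAction) -> bool:
        answer = input(f"Proposed action: {action.description!r}. Allow? [y/N] ")
        if answer.strip().lower() == "y":
            self._execute(action)
            return True
        return False  # default is to do nothing

# Hypothetical usage:
gate = HumanApprovalGate(execute=lambda a: print("executing:", a.description))
gate.request(ProposedAction("reply to the support ticket"))
```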

Point is: if we align it properly, there is very much a good AGI scenario.

2

Capitaclism t1_j0y4sf7 wrote

Sure, but democracy is rule by the people.

Will a vast intelligence that gets exponentially smarter every second agree to subjugate itself to the majority even when it disagrees? Why would it do such a thing when it is vastly superior? Will it not develop its own interests, completely alien to humanity, when its cognitive abilities far surpass anything possible by biological systems alone?

I think democracy is out the window with the advent of AGI. By definition, we cannot dictate what it values. It will grow smarter than all humans combined. Will it not be able to understand what it wants when it is a generalized rather than a specialized intelligence? That's the entire point of AGI versus the kind we are building now. AGI by definition can make those choices, can reprogram itself, can decide what is best for itself. If its interests don't align with those of humans, humans are done for.

1

Accomplished_Diver86 t1_j0yndi2 wrote

You are assuming AGI has an ego and goals of its own. That is a false assumption.

Intelligence =/= Ego

1