GreenOnGray t1_j7b22lh wrote

Imagine you and I each have a super intelligent AI. You ask yours to help you end humanity. I ask mine to help me preserve it. If we both diligently cooperate with our AIs’ advice, what do you think is the outcome?

edjez t1_j7egs8x wrote

Conflict, started by the first person in your example (me) and escalated by you, with the outcomes scored by mostly incompatible criteria.

Since we are talking about language-oracle-class AIs, not sovereigns or free agents, it takes a human to take the outputs and act on them, thereby becoming responsible for the actions; it doesn't matter what or who gave the advice. It's no different from substituting "Congress" or "parliament" for the "super intelligent AI".

(The Hitchhiker's Guide outcome would be that the AIs agree to put us on ice forever… or, more insidiously, constrain humanity to a single planet and keep its progress self-regulated by conflict, so it never leaves. Oh wait a second… 😉)

GreenOnGray t1_j7l4tb3 wrote

What do you think the outcome would be? Assume the AIs cannot coordinate with each other explicitly.