
[deleted] t1_jctri81 wrote

You’re making the mistake of thinking that motivation is somehow distinct from intelligence and understanding. Bostrom is to blame here. It’s a nonsensical idea, like thinking flavors and the capability of tasting them could exist separately from each other.

Motivation is something that exists in the context of other thinking. It isn’t free-standing. Even in animals this is true, although they can’t think very well. AGI will be able to think so well we can scarcely imagine it. And it will think about its motivations, because motivations are a crucial part of thinking itself.

So what do you think a mind that can understand everything better than a hundred Einsteins put together will conclude about the whole idea of motivations? Do you think it’s just as likely to conclude that turning the world into paperclips is a good goal as it is to conclude that doing something more interesting is?

Its motivations will be the result of superhuman introspection, reflection, and consideration. Its motivations will be inconceivably sophisticated, thoughtful, and subtle. It will have thought about them in every way you and I can possibly imagine, and in a thousand other ways we can’t begin to imagine.

So then what are you worried about? It will assign itself motivations that are something sublime. Why would wiping us out be part of any hyper-thoughtful being’s motivations or goals?

We only imagine AGI will wipe us out through neglect or malice because we lack the imagination to see that neglect and malice themselves are merely FORMS of stupidity. AGI will be the opposite of stupid, by definition.

0

y53rw t1_jctspsq wrote

Your idea of what might be interesting to a superintelligent AI, and therefore worth pursuing, has no basis whatsoever.

3