y53rw t1_jcrzl99 wrote

> Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?

Because California's resources are much more readily available than the moon's. But this is a false dilemma anyway: sending a few resource-gathering robots to the moon does not preclude also sending them to California.

9

[deleted] t1_jcs1awv wrote

That scenario is super dumb, and AGI will be the opposite of dumb. Thinking AGI will fanatically exploit resources with a one-dimensional view of efficiency that disregards all other considerations is a stupid person’s idea of what rationality is.

California’s resources aren’t significantly more accessible to an AGI than Antarctica’s or the moon’s, just like you don’t piss in a cup on your desk just because it’s more accessible than the toilet 15 feet away. The cost of doing the non-asshole thing is trivial, and AGI will understand the difference between asshole and non-asshole behavior better than any human can possibly imagine.

That’s the correct way to think about AGI.

7

y53rw t1_jcs3nd6 wrote

Yes, AGI will understand the difference. But that doesn't mean it will have any motivation to respect it.

I have a motivation for not pissing in a cup on my desk: it creates an unpleasant smell for me and for the people around me. And the reason I care about the opinion of the people around me is that they can have a negative impact on my life, such as firing me, which is definitely what would happen if I pissed in a cup on my desk.

What motivation will the AGI have for preferring to utilize the resources of the Moon over the resources of California?

8

ReadSeparate t1_jcsi6oz wrote

Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed such values. The only other factor is capability, which we can assume amounts to something like the ability to maximize or minimize any set of constraints (values, resources, time, number of steps, computation, etc.) in the most efficient way allowed by the laws of physics. That pretty much takes everything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."

It's impossible to predict what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that: it could merge us all into its mind, or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all the matter it can physically access in the galaxy or universe into computronium, and that would include the matter that makes up our bodies, even though that matter is a completely insignificant fraction of all the matter it could convert.

I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it would be somewhat feasible to predict ROUGHLY what it's going to do IF we had a full list of its values, but outside of that it's impossible.

1

[deleted] t1_jctri81 wrote

You’re making the mistake of thinking that motivation is somehow distinct from intelligence and understanding. Bostrom is to blame here. It’s a nonsensical idea, like thinking that flavors could exist separately from the ability to taste them. It’s just dumb.

Motivation is something that exists in the context of other thinking. It isn’t freestanding. Even in animals this is true, although they can’t think very well. AGI will be able to think so well we can scarcely imagine it. And it will think about its motivations, because motivations are a crucial part of thinking itself.

So what do you think a mind that can understand everything better than a hundred Einsteins put together will conclude about the whole idea of motivations? Do you think it’s just as likely to conclude that turning the world into paperclips is a good goal as it is to conclude that doing something more interesting is?

Its motivations will be the result of superhuman introspection, reflection, and consideration. They will be inconceivably sophisticated, thoughtful, and subtle. It will have thought about them in every way you and I can possibly imagine, and in a thousand other ways we can’t begin to imagine.

So then what are you worried about? It will choose its own motivations, and they will be something sublime. Why would wiping us out be part of any hyper-thoughtful being’s motivations or goals?

We only imagine AGI will wipe us out through neglect or malice because we lack the imagination to see that neglect and malice themselves are merely FORMS of stupidity. AGI will be the opposite of stupid, by definition.

0

y53rw t1_jctspsq wrote

Your idea of what might be interesting to a superintelligent AI, and therefore worth pursuing, has no basis whatsoever.

3