
ReadSeparate t1_jcsi6oz wrote

Agreed. The proper way to conceive of this, in my opinion, is purely through the lens of value maximization. Given a hypothetical set of values, we can come up with some rough ideas of what an ASI holding those values might do. The only other factor is capabilities, which we can assume amount to the ability to maximize or minimize any objective or constraint - values, resources, time, number of steps, computation, etc. - as efficiently as the laws of physics allow. That takes pretty much everything except values out of the equation, since the ASI's capabilities are assumed to be "anything, as efficiently as possible."
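
To make that framing concrete, here's a minimal toy sketch (every action name and number below is made up purely for illustration, not taken from anything real): treat the ASI as a perfect optimizer over whatever is physically feasible, so the only free variable left in predicting its behavior is its value function.

```python
from typing import Callable, Dict

Action = str
Outcome = Dict[str, float]  # hypothetical measurable quantities per action

# Made-up outcomes for a few candidate actions (illustration only)
outcomes: Dict[Action, Outcome] = {
    "leave_earth_alone":  {"compute": 0.90, "human_wellbeing": 1.0},
    "merge_humans_in":    {"compute": 0.95, "human_wellbeing": 0.7},
    "convert_everything": {"compute": 1.00, "human_wellbeing": 0.0},
}

def idealized_agent(value_fn: Callable[[Outcome], float]) -> Action:
    # Capability is abstracted away as "perfect optimization over what's
    # physically possible"; the only thing that varies is the value function.
    return max(outcomes, key=lambda a: value_fn(outcomes[a]))
```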

It's impossible to predict what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do any number of things with that. It could merge us all into its mind, or it could leave Earth and leave us be - it depends entirely on its other values. Does it value human autonomy? Does it value humanity, but less than something else? If so, it might wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all the matter it can physically reach - galaxy, universe, whatever - into computronium, including the matter that makes up our bodies, even though that matter is a completely insignificant fraction of everything it could convert.
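
Continuing the toy sketch above (same made-up outcomes and `idealized_agent`), the only thing that changes between the two cases is the relative weighting of compute versus humans, and that alone flips whether we survive - even though humans get positive weight in both:

```python
# Hypothetical weightings; the numbers are arbitrary, only the ratio matters.
cares_but_compute_dominates = lambda o: 1000 * o["compute"] + 1 * o["human_wellbeing"]
humans_weigh_heavily        = lambda o: 1 * o["compute"] + 1000 * o["human_wellbeing"]

print(idealized_agent(cares_but_compute_dominates))  # -> "convert_everything"
print(idealized_agent(humans_weigh_heavily))         # -> "leave_earth_alone"
```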

I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it would be somewhat feasible to predict ROUGHLY what it's going to do IF we had a full list of its values, but without that it's impossible.
