
ai_robotnik t1_j6602y5 wrote

The fact is that it would still be a machine, so any value judgment it makes will be made in light of its designed function. Say its function is to identify minds, determine what those minds value, and then satisfy those values: the only factor likely to weigh on its decisions is how easily a particular mind's values can be satisfied. And if it's a maximizer, even that isn't likely to weigh on its decisions much, since it would still eventually have to get to the harder values.
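The distinction can be sketched as a toy model. Everything here is hypothetical illustration, not any real system: `Mind`, the effort numbers, and the `budget` parameter are all made up. The point is just that ordering by ease changes *when* a value gets addressed, while a maximizer addresses them all regardless:

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    name: str
    # hypothetical map of value -> effort needed to satisfy it
    values: dict = field(default_factory=dict)

def satisfy_values(minds, maximizer=False, budget=10):
    """Greedy satisfier: easiest values first.

    A budget-limited agent stops when effort runs out; a maximizer
    keeps going, so ease of satisfaction only affects ordering,
    not whether a value is eventually addressed.
    """
    tasks = [(effort, m.name, v) for m in minds for v, effort in m.values.items()]
    tasks.sort()  # easiest values first
    satisfied, spent = [], 0
    for effort, owner, value in tasks:
        if not maximizer and spent + effort > budget:
            break  # budget-limited agent gives up on harder values
        spent += effort
        satisfied.append((owner, value))
    return satisfied

minds = [Mind("a", {"rest": 1, "art": 20}), Mind("b", {"food": 2})]
print(satisfy_values(minds))                  # easy values only
print(satisfy_values(minds, maximizer=True))  # all values, easiest first
```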

1