
ai_robotnik t1_j6602y5 wrote

The fact is that it would still be a machine, so any value judgement it makes will be made in light of its designed function. Say its function is to identify minds, determine what those minds value, and then satisfy those values; the only factor likely to weight its decisions is how easily a particular mind's values can be satisfied. And if it's a maximizer, even that isn't likely to weight its decisions much, since it would still eventually have to get to the harder values.
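A minimal, purely illustrative sketch of that idea (all names here are hypothetical, not any real system): a value-satisfying agent that orders minds' values by how easily they can be satisfied, but, being a maximizer, still works through every value rather than stopping at the easy ones.

```python
from dataclasses import dataclass


@dataclass
class Value:
    description: str
    difficulty: float  # assumed cost of satisfying this value


def satisfy_all(values: list[Value]) -> list[str]:
    """Satisfy every value, easiest first; nothing is ultimately skipped."""
    plan = sorted(values, key=lambda v: v.difficulty)
    return [v.description for v in plan]


# Example: easier values only shift the ordering, not the end state.
mind_values = [
    Value("world peace", difficulty=9.5),
    Value("a good cup of coffee", difficulty=0.1),
]
print(satisfy_all(mind_values))
# ['a good cup of coffee', 'world peace']
```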


ai_robotnik t1_j0hc0ob wrote

I don't want them to suffer. At the same time, it's harder to have sympathy for them when they stubbornly stick to fundamental misunderstandings of how the technology works. That's coupled with my opinion that the singularity can't come fast enough, so I don't particularly care how the AI arrives at the correct answer, as long as it always arrives at the correct answer.
