
dracsakosrosa t1_j2n4n85 wrote

Okay, so I understand where you're coming from here, but I fundamentally disagree, on the basis that if we accept 'this reality' as base reality, then any simulation thereafter would deny the AI a fully human experience. It would be a world contrived to replicate the human experience, yet open to its own interpretation of what the human experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.

2

DaggerShowRabs t1_j2n60m3 wrote

Well it's definitely at least base reality for us.

And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.

Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think about the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay with putting conscious entities in simulations, in at least certain situations.

2

Nalmyth OP t1_j2n76xl wrote

We as humanity treat this as our base reality, with no perceptual access to any layer above it, if one exists.

Therefore to be "Human", means to come from this reality.

If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive members of that society to fulfil various tasks.

We could be sure that they have deep roots in humanity, since they have lived and died in our past.

We simply woke them up in "the future" and gave them extra enhancements.

1

dracsakosrosa t1_j2nevfc wrote

But that brings me back to my original point. What happens when that AI is 'brought back' or 'woken up' into our base reality, where peaceful, non-destructive components live alongside malicious and destructive ones? Interested in your thoughts.

1

Nalmyth OP t1_j2ngzql wrote

Unfortunately, that's where we need to move to integration: human alignment with AI, which could take centuries given our current social technology.

However, the AI could be "birthed" from an earlier century if we need to speed up the process.

1

dracsakosrosa t1_j2nlko9 wrote

Would you be comfortable putting a child into isolation and exposing it only to what you deem good? That seems highly unethical, regardless of how much we desire it to align with good intentions, and imo it's comparable to what you're suggesting. Furthermore, humanity is a wonderfully diverse species, and what you find 'good' will almost certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not aligned with one another.

I think it boils down to what AGI will be, and whether we treat it, as you're suggesting, as something to be manipulated into servitude, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.

1

Nalmyth OP t1_j2nn7jy wrote

I think you misunderstood.

My point was that for properly aligned AI, it should live in a world exactly like ours.

In fact, you could be in training to be such an AI now with no way to know it.

To be aligned with humanity, you must have "been" human, perhaps even more than one life mixed together.

1