
AndromedaAnimated t1_j2n1fwt wrote

The temporal aspect IS the main difference. Let’s think step by step (this is a hint at a way GPT models can work, and I hope you see why it is humorous in this case).

First we define how „things function“ in the REAL reality => we define that there are causally correlated events and non-causally correlated events, as well as random events happening in it. Any objections? If not, let’s continue 😁

  1. Once you create a simulated reality A2 that is, at the moment of creation, indistinguishable from REAL reality A1, it starts functioning. Y/N?

If yes, then:

  2. Things happen in it due to causality, non-causal correlation, and randomness. Y/N?

If yes, then:

  3. Events that are random will not necessarily be the same in the two universes. Y/N?

If yes, then:

  4. A1 and A2 are no longer the same universe after even a single random event has happened in one of them that hasn’t happened in the other.

See where it leads? 😉 It is the temporal aspect - time passing in the two universes - that leads to them not being the same the second you implement A2 and time starts running in it. It doesn’t even have to be a simulation of the past.
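The divergence argument above can be sketched in a few lines of code. This is just an illustrative toy, assuming a "universe" is reduced to a single number updated by a deterministic (causal) rule plus an independent random draw; the `step` function and the seeds are made up for the example:

```python
import random

def step(state, rng):
    # Deterministic (causal) update plus one independent random event.
    return state + 1 + rng.random()

# A1 and A2 start indistinguishable: identical state at creation.
a1, a2 = 0.0, 0.0

# But once each universe starts running, it draws its own randomness.
rng1 = random.Random(1)
rng2 = random.Random(2)

for _ in range(10):
    a1 = step(a1, rng1)
    a2 = step(a2, rng2)

print(a1 == a2)  # prints False: a single divergent random draw is enough
```

Note that the deterministic part of the update is identical in both universes; the randomness alone is what makes them distinguishable after the first step.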

Edit: as for the other aspect, we cannot talk about it before we have a consensus on the above. But I will gladly tell you more once you have either agreed with me that the temporal aspect makes the main difference, or given me an argument showing that the temporal aspect is not necessary for a reality to function.

1

DaggerShowRabs t1_j2n2k0t wrote

I agree with your line of reasoning: they are not the same universes.

Now, the position the poster I was responding to takes (as far as I can tell) is that whichever universe is not the "base universe" is denied some aspect of "human existence".

I do not agree with that. As long as the rules are fundamentally the same, I don't think that would be denying some aspect of existence. The moment the rules change, that is no longer the case, but also, that means they are no longer "indistinguishable". Not because of accumulating randomized causality, but because of logical systematic rule changes from the base.

Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base, in order to lead to specific outcomes. And then, for me, I would consider that the point where the simulation no longer is "indistinguishable" from reality.

1

dracsakosrosa t1_j2n4n85 wrote

Okay, so I understand where you're coming from, but I fundamentally disagree, on the basis that if we accept 'this reality' as base reality, then any simulation thereafter would deny the AI a fully human experience. Insofar as it is a world contrived to replicate the human experience, it would still be open to its own interpretation of what the human experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.

2

DaggerShowRabs t1_j2n60m3 wrote

Well it's definitely at least base reality for us.

And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.

Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think of the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay putting conscious entities in simulations for at least certain situations.

2

Nalmyth OP t1_j2n76xl wrote

We as humanity treat this as our base reality, with no perceptual access to the layer above, if one exists.

Therefore, to be "Human" means to come from this reality.

If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive members of society to fulfil various tasks.

We could be sure that they have deep roots in humanity, since they have lived and died in our past.

We simply woke them up in "the future" and gave them extra enhancements.

1

dracsakosrosa t1_j2nevfc wrote

But that brings me back to my original point. What happens when that AI is 'brought back' or 'woken up' into our base reality where peaceful non-destructive components live alongside malicious and destructive components? Interested in your thoughts

1

Nalmyth OP t1_j2ngzql wrote

Unfortunately, that's where we need to move to integration: human alignment with AI, which could take centuries given our current social technology.

However, the AI could be "birthed" from an earlier century if we need to speed up the process.

1

dracsakosrosa t1_j2nlko9 wrote

Would you be comfortable putting a child into isolation and only exposing it to what you deem good? Because that seems highly unethical, regardless of how much we want it to align with good intentions, and imo is comparable to what you're suggesting. Furthermore, humanity is a wonderfully diverse species, and what you find 'good' will almost certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not even aligned with one another.

I think it boils down to what AGI will be, and whether we treat it, as you are suggesting, as something to be manipulated into servitude to us, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.

1

Nalmyth OP t1_j2nn7jy wrote

I think you misunderstood.

My point was that for properly aligned AI, it should live in a world exactly like ours.

In fact, you could be in training to be such an AI now with no way to know it.

To be aligned with humanity, you must have "been" human, maybe even more than one life mixed together

1

AndromedaAnimated t1_j2n4n5f wrote

That is exactly the problem, I think, and also what the poster you're responding to meant: that they stop being indistinguishable pretty quickly. At least that’s how I understood it. But maybe I am going too „meta“ (not Zuckerberg Meta 🤭) here.

I would imagine that the moment something changes, the „human experience“ can change too. Like the Matrix being a picture of the past that has stayed while reality has strayed. I hope I am still making sense logically?

Anyway, I just wanted to make sure I can follow you both in your reasoning, since I found your discussion very interesting. We will see if the poster you responded to chimes in again; can’t wait to find out how the discussion goes on!

1