
dracsakosrosa t1_j2mn53i wrote

Lol you sound very paranoid

I genuinely believe that if we were to isolate an AI in a fabricated world (assuming ours isn't already one) then we risk bringing a contrived and compromised being into existence. By "full range of human experiences" I mean that if a being with Artificial General Intelligence is to live a truly meaningful life on par with ours, then it has to have the opportunity to live a life like a human being's, and that includes the opportunity for harm and danger as well as fun and love. Putting it in a box to live a life of pure good would be very dangerous when that AI eventually comes into contact with the average Reddit comment section, or, if it has a physical presence, goes into any bar after 12pm.

2

DaggerShowRabs t1_j2mn94o wrote

Sorry, but what you are saying is wrong by definition if the simulation is truly "indistinguishable from reality".

Both cannot be true. I guess it wasn't an AI; you're just bad at logic and definitions.

2

AndromedaAnimated t1_j2mv74o wrote

A question: if you lived in a world that was indistinguishable from reality for YOU, but that was missing one single thing, for example the possibility to feel jealousy (which people outside your „simulated world“ have), would you know it?

1

DaggerShowRabs t1_j2mvdfn wrote

I wouldn't know it, but it still wouldn't be truly indistinguishable from reality by definition.

If it were changed to, "indistinguishable from reality to an entity that didn't know any better", sure.

But that's not what was said. Indistinguishable from reality means indistinguishable from reality.

And actually, if I woke up one day and that change had been made, I would bet that I would eventually notice that I hadn't felt jealousy in a while (after a certain period of time).

2

AndromedaAnimated t1_j2mw3jg wrote

I had understood it as „being indistinguishable from reality from the point of view of the entity that lives within“, exactly.

Like in the Matrix movie allegory: humans living in their virtual world, which seems indistinguishable from reality to them, while reality is actually something else, namely a multi-layered simulation.

2

DaggerShowRabs t1_j2mwkbv wrote

>I had understood it as „being indistinguishable from reality from the point of view of the entity that lives within“, exactly.

Well, you can take that interpretation if you want, but that's all it is: an interpretation.

That's not what the poster actually said.

And even then, I disagree with the comparison you are making. While living in the Matrix, are people denied any essential aspect of living a human life from within the simulation?

Edit: other than the obvious that the Matrix simulation is running in the past relative to "base reality".

1

AndromedaAnimated t1_j2n1fwt wrote

The temporal aspect IS the main difference. Let's think step by step (this is a nod to a way GPT models can be prompted; I hope you understand why it is humorous in this case).

First we define how „things function“ in the REAL reality => we define that there are causally correlated events, non-causally correlated events, as well as random events happening in it. Any objections? If not, let's continue 😁

  1. Once you create a simulated reality A2 that is, at the moment of creation, indistinguishable from the REAL reality A1, it starts functioning. Y/N?

If yes, then:

  2. Things happen in it due to causality, non-causal correlation, and randomisation. Y/N?

If yes, then:

  3. Random events will not necessarily be the same in the two universes. Y/N?

If yes, then:

  4. A1 and A2 are no longer the same universe after even a single random event has happened in one of them that hasn't happened in the other.

See where it leads? 😉 It is the temporal aspect - time passing in the two universes - that leads to them not being the same the second you implement A2 and time starts running in it. It doesn’t even have to be a simulation of the past.
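The steps above can be sketched as a toy simulation. Everything here is invented purely for illustration: two copies of the same miniature "universe" start from an identical state and share the same deterministic update rule, but each draws its random events independently, so their histories diverge.

```python
import random

def step(state, rng):
    """Advance a toy universe one tick: a deterministic rule plus one random event."""
    deterministic = state * 2          # causal part: identical in both universes
    random_event = rng.random()        # random part: drawn independently per universe
    return deterministic + random_event

# A1 and A2 start indistinguishable: identical state at the moment of creation.
state_a1 = state_a2 = 1.0

# But each universe has its own independent source of randomness.
rng_a1 = random.Random(1)
rng_a2 = random.Random(2)

for _ in range(3):
    state_a1 = step(state_a1, rng_a1)
    state_a2 = step(state_a2, rng_a2)

print(state_a1 == state_a2)  # False: one independent random draw is enough to diverge
```

If the two universes shared the same random seed they would stay in lockstep forever; it is the independent random events, accumulating as time runs, that make A1 and A2 different universes.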

Edit: as for the other aspect, we cannot talk about it before we have a consensus on the above. But I will gladly tell you more once you have either agreed with me that the temporal aspect makes the main difference, or given me an argument showing that the temporal aspect is not necessary for a reality to function.

1

DaggerShowRabs t1_j2n2k0t wrote

I agree with your line of reasoning; they are not the same universes.

Now, the position the poster I was responding to takes (as far as I can tell), is that whichever universe is not the "base universe", is denied some aspect of "human existence".

I do not agree with that. As long as the rules are fundamentally the same, I don't think that would be denying some aspect of existence. The moment the rules change, that is no longer the case, but also, that means they are no longer "indistinguishable". Not because of accumulating randomized causality, but because of logical systematic rule changes from the base.

Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base, in order to lead to specific outcomes. And then, for me, I would consider that the point where the simulation no longer is "indistinguishable" from reality.

1

dracsakosrosa t1_j2n4n85 wrote

Okay, so I understand where you're coming from, but I fundamentally disagree, on the basis that if we accept 'this reality' as base reality then any simulation thereafter would deny the AI a fully human experience. Insofar as it is a world contrived to replicate the human experience, it would be open to its own interpretation of what the human experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.

2

DaggerShowRabs t1_j2n60m3 wrote

Well it's definitely at least base reality for us.

And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.

Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think of the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay putting conscious entities in simulations for at least certain situations.

2

Nalmyth OP t1_j2n76xl wrote

We as humanity treat this as our base reality, with no perceptual access to whatever layer lies above it, if one exists.

Therefore to be "Human", means to come from this reality.

If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive components of society to fulfil various tasks.

We could be sure that they have deep roots in humanity, since they have lived and died in our past.

We simply woke them up in "the future" and gave them extra enhancements.

1

dracsakosrosa t1_j2nevfc wrote

But that brings me back to my original point. What happens when that AI is 'brought back' or 'woken up' into our base reality where peaceful non-destructive components live alongside malicious and destructive components? Interested in your thoughts

1

Nalmyth OP t1_j2ngzql wrote

Unfortunately, that's where we need to move to integration: human alignment with AI, which could take centuries given our current social technology.

However, the AI can be "birthed" from an earlier century if we need to speed up the process.

1

dracsakosrosa t1_j2nlko9 wrote

Would you be comfortable putting a child into isolation and only exposing it to that which you deem good? Because that seems highly unethical regardless of how much we desire it to align with good intentions and imo is comparable to what you're saying. Furthermore, humanity is a wonderfully diverse species and what you may find to be 'good' will most certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not even aligned with one another.

I think it boils down to what AGI will be, and whether we treat it, as you are suggesting, as something to be manipulated into servitude to us, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.

1

Nalmyth OP t1_j2nn7jy wrote

I think you misunderstood.

My point was that for properly aligned AI, it should live in a world exactly like ours.

In fact, you could be in training to be such an AI now with no way to know it.

To be aligned with humanity, you must have "been" human, maybe even have been more than one life mixed together.

1

AndromedaAnimated t1_j2n4n5f wrote

That is exactly the problem, I think, and also what the poster you responded to meant: that they stop being indistinguishable pretty quickly. At least that's how I understood it. But maybe I am going too „meta“ here (not Zuckerberg Meta 🤭).

I would imagine that the moment something changes, the „human experience“ can change too. Like the Matrix being a picture of the past that has stayed the same while reality has strayed. I hope I am still making sense logically?

Anyway I just wanted to make sure I can follow you both on your reasoning since I found your discussion very interesting. We will see if the poster you responded to chimes in again, can’t wait to find out how the discussion goes on!

1