Submitted by Wroisu t3_10vdd7g in singularity

Some interesting food for thought for those who browse this sub -

“Most problems, even seemingly really tricky ones, could be handled by simulations which happily modelled slippery concepts like public opinion, or the likely reactions of other societies, by the appropriate use of some especially cunning and devious algorithms… nothing more processor-hungry than the right set of equations…

But not always.

Sometimes, if you were going to have any hope of getting useful answers, there really was no alternative to modelling the individuals themselves, at the sort of scale and level of complexity that meant they each had to exhibit some kind of discrete personality, and that was where the Problem kicked in.

Once you’d created your population of realistically reacting and – in a necessary sense – cogitating individuals, you had – also in a sense – created life.

The particular parts of whatever computational substrate you’d devoted to the problem now held beings; virtual beings capable of reacting so much like the back-in-reality beings they were modelling – because how else were they to do so convincingly without also hoping, suffering, rejoicing, caring, living and dreaming –

By this reasoning, then, you couldn’t just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide.”

13

Comments


iNstein t1_j7id3yu wrote

You could overcome this by not killing them at the end of the simulation, but instead uploading them into robotic bodies and letting them live in and experience the real world. They could also take responsibility for maintaining their real-world bodies and for fitting into society. They'd then have pretty much the same chance of surviving as any non-simulated people.

5

Mortal-Region t1_j7lv3t9 wrote

If the simulators are digital beings themselves, then the occupants of the sim could be preserved just by moving them.

1

CertainMiddle2382 t1_j7gtgrn wrote

Yep, that was also one of Bostrom's arguments.

For an AI to properly align itself with our values, even in situations we could not imagine ourselves, making a simulation of humans and testing our avatars' responses could be the only way of protecting us.

By harming « them » instead.

4

Surur t1_j7gty87 wrote

If you think about it, you do the same when you try and see things from someone else's perspective. You take on their point of view and you model their reactions as realistically as possible.

And when you're done, you just discard them.

3

Mortal-Region t1_j7gvfkz wrote

Furthermore... if it's a simulation of a technological society, it might have to be shut down eventually, because if the simulated people advance far enough to create their own simulations -- perhaps millions of them -- then the strain on the computer would be too great. The simulation would slow to a crawl, and more urgently, it'd run out of memory.

2

peterflys t1_j7kw4xx wrote

That could be true, but you could also end up in a situation where the hardware running the primary sim keeps getting upgraded and expanded, which increases its capacity to hold more information (that is, the society it's simulating, along with that society's own simulations). Computation of these should get cheaper and cheaper too. Just another thought.

2

Mortal-Region t1_j7m2k8o wrote

Yeah, the argument assumes that the simulation is the "detail-on-demand" kind, meaning that when the simulated people run their own simulation, the real computer in base reality has to provide a tremendous amount of new detail -- roughly the same amount as is allocated for their world (assuming they run the same kind of simulation as the one they occupy).

So, for example, if the sims have just 10 simulations running simultaneously, the simulation they occupy will consume 11 times as many computational resources as before (including memory). Even if the computer in base reality grows to the size of an entire galaxy, just one more level down means you now need ten galaxies. All this just to keep a single sub-simulation-incorporating simulation running.
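
To put rough numbers on that blow-up, here's a minimal Python sketch (the function and the assumption that every world spawns the same number k of full-detail sub-simulations are mine, purely for illustration):

```python
# Minimal sketch, assuming each full-detail world spawns k sub-simulations
# and base reality must pay full price for every nested level.

def total_worlds(k: int, depth: int) -> int:
    """Total full-detail worlds to run: the top-level sim plus
    k sub-sims per world, nested `depth` levels deep."""
    return sum(k ** d for d in range(depth + 1))

for depth in range(4):
    print(depth, total_worlds(k=10, depth=depth))
# 0 1     -- just the original simulation
# 1 11    -- the 11x figure above
# 2 111   -- one more level: ~10x again (the "ten galaxies")
# 3 1111
```

Each extra level multiplies the bill by roughly k, which is why halting the sim before sub-simulations appear is so much cheaper than accommodating them.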

I think it's more likely that the simulators will nip the problem in the bud by halting the sim just before sub-simulations become possible, thus also preventing sub-sub-simulations and sub-sub-sub simulations and so on. After all, the thing they're probably simulating is their own historical singularity, so once sub-simulations become possible, the simulation has pretty much run its course.

2

visarga t1_j7hvbgw wrote

I think intelligence is mostly in the language; of course there are other forms of intelligence as well. Yet most of it is not in the brain, or in the AI, but in language: the corpus of everything said and written, all knowledge, science, technology, systems of thinking. A human growing up without language would not be very intelligent.

So an AI would be intelligent in the same way a human is - by becoming inhabited by language. Does that make AIs and simulated beings any less than us?

2

DukkyDrake t1_j7hzdl6 wrote

Those running sufficiently capable language models could be culpable for committing mindcrimes.

1