el_chaquiste t1_j2phqj4 wrote

Note that AFAIK the ancestor-simulation argument still assumes computational resources are finite, so their consumption has to be minimized and some things in the simulation aren't simulated with full accuracy.

Brains might be simulated with full accuracy, but the behavior of elementary particles and other objects in the simulated universe would be just approximations that look outwardly convincing. E.g. rocks and furniture would be just decoration and wallpaper.

If the simulated beings start paying attention to the details of their world, the simulation notices it and switches that region to a finer level of detail. Like having a universal foveated rendering algorithm keyed to the simulated brains.
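
Here's a toy sketch of what that kind of detail-on-demand scheme could look like in code. It's purely illustrative (nobody's actual proposal), and all the names and the 10x-per-level cost factor are made up:

```python
# Toy "detail-on-demand" world: regions stay as cheap approximations until an
# observer scrutinizes them, then the simulator swaps in a finer-grained model.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    lod: int = 0  # level of detail: 0 = wallpaper/decoration, higher = finer physics

    def cost(self) -> int:
        # Assume (arbitrarily) each extra level of detail is ~10x more expensive.
        return 10 ** self.lod

class Simulation:
    def __init__(self, regions):
        self.regions = {r.name: r for r in regions}

    def observe(self, region_name: str, scrutiny: int) -> None:
        # A simulated observer pays attention to a region: raise its detail
        # level just enough to stay outwardly convincing, never lower it.
        region = self.regions[region_name]
        region.lod = max(region.lod, scrutiny)

    def total_cost(self) -> int:
        return sum(r.cost() for r in self.regions.values())

sim = Simulation([Region("rock"), Region("furniture"), Region("brain", lod=3)])
print(sim.total_cost())  # mostly cheap decoration, brains fully detailed
sim.observe("rock", 3)   # someone puts the rock under a microscope
print(sim.total_cost())  # cost jumps only where attention landed
```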

In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. That assumption is a bit shaky, of course, considering we're already assuming miraculous levels of computing power.

Having nested simulations might actually be the point of the exercise, like seeing how many worlds end up spawning their own sub-worlds, just for fun.

3

Mortal-Region t1_j2pq7mn wrote

>In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. That assumption is a bit shaky, of course, considering we're already assuming miraculous levels of computing power.

If we assume that the sub-simulation we create uses the same optimization scheme (detail-on-demand) as the simulation we live in, and is roughly the same size, then creating just a single sub-simulation, running 24/7, will double the strain on the computer in base reality: double the computation and double the memory. No matter how powerful your computer, "twice as much" is always a lot. And if each nested world eventually spawns simulations of its own, the demand keeps multiplying, so a system left to run indefinitely would eventually crash.
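
To put rough numbers on that, here's a back-of-the-envelope sketch (purely illustrative, assuming every nested world costs about the same to run as its parent):

```python
# Total load on the base-reality computer for `depth` levels of nesting.
# With one child per world the load grows linearly with depth; with more
# than one child per world it grows exponentially.
def total_load(base_load: float, depth: int, children_per_world: int = 1) -> float:
    total = 0.0
    worlds_at_level = 1
    for _ in range(depth + 1):
        total += worlds_at_level * base_load
        worlds_at_level *= children_per_world
    return total

print(total_load(1.0, depth=1))                       # 2.0  -> one sub-sim doubles the load
print(total_load(1.0, depth=10))                      # 11.0 -> linear if each world spawns one
print(total_load(1.0, depth=10, children_per_world=2))  # 2047.0 -> exponential if each spawns two
```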

2