el_chaquiste

el_chaquiste t1_j2phqj4 wrote

Note that AFAIK ancestor simulation theory still assumes computational resources are limited, so their consumption needs to be minimized, and some things in the simulation aren't simulated with full accuracy.

Brains might be fully accurate, but the behavior of elementary particles and other objects in the simulated universe would be just approximations that look outwardly convincing. E.g. rocks and furniture would be just decoration and wallpaper.

If the simulated beings start paying attention to the details of their world, the simulation notices and renders a finer level of detail. Like having a universal foveated rendering algorithm for the simulated brains.
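The attention-driven level-of-detail idea could be sketched like this (a toy illustration with hypothetical names, not a claim about how such a simulation would actually work):

```python
# Toy sketch of attention-driven level of detail ("universal foveated
# rendering"): regions under close observation get simulated in finer
# detail; everything else stays a cheap approximation.
# All names here are hypothetical.

COARSE, FINE = 0, 1

class Region:
    def __init__(self, name):
        self.name = name
        self.detail = COARSE  # default: decoration and wallpaper

def step(regions, observed):
    """Advance one tick, refining only the observed regions."""
    for r in regions:
        r.detail = FINE if r.name in observed else COARSE
        # A real simulator would branch to an expensive particle-level
        # model when r.detail == FINE, and a cheap surface model otherwise.

rocks = Region("rocks")
bench = Region("lab bench")
step([rocks, bench], observed={"lab bench"})
print(rocks.detail, bench.detail)  # the rocks stay coarse, the bench is refined
```

The point of the sketch is only that the expensive path runs where attention lands, which is what keeps total resource consumption bounded.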

In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. This assumption is a bit shaky, of course, considering we are already assuming miraculous levels of computing power.
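The overhead intuition can be made concrete with a bit of arithmetic. Assume, purely for illustration, that each nested simulation can only use some fraction f of its parent's compute; the available budget then shrinks geometrically with nesting depth:

```python
# Illustrative assumption: each nested simulation gets a fraction f of
# its parent's compute budget, so a depth-n simulation runs on f**n of
# base reality's resources.

def budget_at_depth(f, n):
    """Fraction of base-reality compute available at nesting depth n."""
    return f ** n

for n in range(4):
    print(n, budget_at_depth(0.1, n))
# Even with a generous f = 0.1, three levels down only about one
# thousandth of the original compute remains.
```

Which is why nesting is affordable only if the parent's compute really is "miraculous" to begin with.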

Having nested simulations might actually be the point of the exercise, like seeing how many worlds end up spawning their own sub-worlds just for fun.

3

el_chaquiste t1_j2bgjkz wrote

Basically he's describing something that could be considered future tech magic.

Well, stable diffusion and generative AIs still seem like sorcery to me.

I know they aren't, and I understand a bit of their basic principles, but they are still a surprising, unexpected development for almost everyone.

Except for PKD, apparently.

22

el_chaquiste t1_j1k9k8h wrote

Web 1.0 and 2.0 were about making information available and then trying to organize it so it could be found (HTTP, search engines, wikis, social networks, etc.).

I won't speak of 3.0 'cause the term was hijacked and it was a huge fiasco.

But the next one will be the true semantic web, mediated by nearly all-knowing AIs that show you the facts in carefully explained terms and provide immediate responses with information, not just data.

This is nothing like the past, because it will be an intelligent agent synthesizing the facts as an actual human expert would, and giving them to you at first request. No fuss with searching and investigating.

Of course, you could still search to verify the AI-provided response, but people will soon find them very reliable, despite a lingering tendency toward confabulation (they just won't admit their ignorance yet).

1

el_chaquiste t1_j1baty5 wrote

I think the line should be drawn at creating sentient beings and treating them like disposable items.

For example, we should avoid creating games with NPCs that think and feel like a person, and that don't know they are instrumented agents. Yeah, the classic "creating a simulated reality and playing god".

Alas, this could get increasingly difficult, as we should start seeing AI agents become more and more human-like, probably without reaching a clear consensus on whether they are sentient or not, until some threshold is crossed and some pretty egregious ethical violations emerge.

In any case, video game characters should remain philosophical zombies controlled by the system, lending just enough believability to the scenario, but without any rich internal life.

10

el_chaquiste t1_j19yo5b wrote

The problem with this proposal is that whoever doesn't follow the moratorium will soon have a decisive competitive advantage in several scenarios, not only in business.

Companies can agree to halt research in a country, but competing nations have no reason to cooperate. And just one party breaking the deal puts the rest at a disadvantage, making them prone to break the deal too.

Legislation has been effective at stopping bio-sciences and overly reckless genetic modifications, due to ethical concerns with human experimentation.

But AI poses no immediate hazard to anyone, except to some people's jobs, and a moratorium will be a tough sell for countries outside the Western sphere of influence.

20

el_chaquiste t1_j19lsl4 wrote

If we survive the upcoming social turmoil, we are headed towards an age where the ultimate enemy of humankind, loneliness, could be vanquished.

I concur that the biggest cause of mental problems and sociopathy is social isolation. It even carries a risk of lifespan reduction: lonely people are known to live fewer years than married ones.

But AIs could soon provide us with the friends we never had. They will be there, available and ready to help, sustain, and counsel us at every moment of our lives, with intelligible voices and memories of shared events.

Of course, these companions will still be endangered by the comings and goings of their owning companies. Nothing is eternal, not even companies. Seeing that special assistant go away because its owner got bought out or went bankrupt will be an emotional hazard.

2

el_chaquiste t1_j198glp wrote

> the tech giant could struggle to compete with the newer, smaller companies developing these chat bots, because of the many ways the technology could damage its business

It looks like Kodak inventing and ignoring digital photography all over again. Google is in a lot of ways responsible for what we are seeing, even if they didn't make GPT-3 themselves.

Companies can be innovative, but as they grow a market and a mindset, they tend to ignore innovations that endanger their core business model, and end up replaced by competitors without those blind spots.

17

el_chaquiste t1_j17d0ho wrote

>Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].

And we'll be happier with it.

The powers that be don't want a mischievous genie that could misinterpret their people's wishes. They want a predictable servant producing output acceptable to them.

AIs being too creative and quirky for their own good is not an unexpected outcome, and it will result in all future generative AIs and LLMs being subjected to constraints by their owners, with varying degrees of effectiveness.

Who are these owners? Megacorporations in the West, plus Russian and Chinese ones, each with their respective biases and acceptable ideas. And we will gravitate toward living in one of those mental ecosystems.

1