el_chaquiste t1_j2phqj4 wrote
Reply to comment by PieMediocre872 in Simulating years into minutes in VR? by PieMediocre872
Note that AFAIK the ancestor-simulation argument still assumes computational resources are limited, so their consumption needs to be minimized and some things in the simulation aren't simulated with full accuracy.
Brains might be fully accurate, but the behavior of elementary particles and other objects in the simulated universe would be just approximations that look outwardly convincing. E.g. rocks and furniture would be just decoration and wallpaper.
If the simulated beings start paying attention to the details of their world, the simulation notices and renders that spot at a finer level of detail. Like having a universal foveated rendering algorithm keyed to the simulated brains.
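A toy sketch of that idea, with every name and threshold invented for illustration; real foveated rendering keys detail to a camera's gaze, while here it's keyed to how hard simulated observers are looking:

```python
# Toy sketch of "foveated simulation": resolve a region of the world
# only as finely as its simulated observers are attending to it.
# All names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    attention: float  # 0.0 = nobody looking, 1.0 = under a microscope

def detail_level(region: Region) -> str:
    """Pick how accurately to simulate a region based on observer attention."""
    if region.attention > 0.9:
        return "full particle physics"      # someone is running an experiment
    if region.attention > 0.5:
        return "coarse rigid-body physics"  # casual observation
    return "static decoration"              # rocks and furniture as wallpaper

for r in (Region("lab bench", 0.95), Region("street", 0.6), Region("far hills", 0.1)):
    print(r.name, "->", detail_level(r))
```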
In that case, running a simulation inside the simulation could be computationally possible, but it would probably incur too much computing overhead. This assumption is a bit shaky, of course, considering we are already positing miraculous levels of computing power.
Having nested simulations might actually be the point of the exercise, like seeing how many worlds end up spawning their own sub-worlds, just for fun.
el_chaquiste t1_j2p2ef0 wrote
Reply to comment by Mortal-Region in Simulating years into minutes in VR? by PieMediocre872
Hence the ancestor simulation theory.
If it's so cheap for a future civilization to simulate all of human history and variations of it, then it's much more likely that we are in a simulation created by our descendants than that we are the actual original humans: if each real history spawns N convincing simulations, the odds of being the original one are only 1 in N+1.
el_chaquiste t1_j2bgjkz wrote
Reply to another piece of scifi by Philip K Dick from the 60s, which feels ALOT like text-to-image and chatgpt combined. again amazed by debil_666
Basically he's describing something that could be considered future tech magic.
Well, stable diffusion and generative AIs still seem like sorcery to me.
I know they aren't, and I know a bit about their basic principles, but they are still a surprising, unexpected development for almost everyone.
Except for PKD, apparently.
el_chaquiste t1_j29rvra wrote
Reply to comment by No_Ask_994 in OpenAI might have shot themselves in the foot with ChatGPT by Kaarssteun
We are masses of unpaid beta testers for their system, finding bugs and awkward prompts they need to patch by hand. Definitely worth it for them.
Thanks to that, GPT4 will be far less dumb.
el_chaquiste t1_j1uu1pk wrote
Reply to comment by ThoughtSafe9928 in Near perfect ai generated movies are possible, what's your first prompt? by Nintell
If translated page by page into scenes, a lot of them would be boring to watch, though.
Fortunately, AIs are also good at condensing and glossing over the boring stuff, generating mostly the interesting scenes.
el_chaquiste t1_j1k9k8h wrote
Reply to comment by Scarlet_pot2 in AI will revolutionize education, anyone will be able to master any subject by LevelWriting
Web 1.0 and 2.0 were about making information available and then trying to organize it so it could be found (HTTP, search engines, wikis, social networks, etc.).
I won't speak of 3.0 'cause the term was hijacked and it was a huge fiasco.
But the next one will be the true semantic web, mediated by nearly all-knowing AIs that show you the facts in carefully explained terms and provide immediate responses with information, not just data.
This is nothing like the past, because an intelligent agent will synthesize the facts as an actual human expert would and hand them to you on first request. No fuss with searching and investigating.
Of course, you could still search to verify the AI-provided response, but people will soon find the responses very reliable, despite a lingering tendency to confabulate (the models just won't admit their ignorance yet).
el_chaquiste t1_j1bcp1n wrote
Reply to comment by wntersnw in Her - (2013 film) | are we fast approaching this for AI romances? by Snipgan
Probably the "long-term memory" will be a cookie in your future browser, textually describing the major events the AI has shared with you. Not even a detailed log of past conversations.
It doesn't need to be overly long and precise, because we aren't overly precise and detailed in our memories either.
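As a rough sketch of the idea (every name here is hypothetical, and summarize() is a crude stand-in for a call to the model itself, just so the sketch runs):

```python
# Toy sketch of long-term memory as a short text blob, small enough to
# fit in a cookie. In a real system, summarize() would ask the language
# model to compress the text; here it's a stand-in so the sketch runs.

MAX_MEMORY_CHARS = 2000  # cookies are tiny, so the memory must be too

def summarize(text: str, limit: int) -> str:
    # Stand-in: a real implementation would have the model rewrite this
    # as a few sentences about the major shared events.
    return text[-limit:]

def update_memory(old_memory: str, session: str) -> str:
    """Fold a new conversation into the running textual memory."""
    combined = (old_memory + " " + session).strip()
    return summarize(combined, MAX_MEMORY_CHARS)

memory = ""
memory = update_memory(memory, "User adopted a dog named Rex.")
memory = update_memory(memory, "Rex learned to fetch; user was delighted.")
print(memory)  # a short, lossy record of shared events, not a full log
```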
el_chaquiste t1_j1baty5 wrote
Reply to Full Immersion (FDVR) Simulations - will there be ethical concerns that we need to address? by peterflys
I think the line should be drawn at creating sentient beings and treating them like disposable items.
For example, we should avoid creating games with NPCs that think and feel like a person yet don't know they are instrumented agents. Yeah, the classic "creating a simulated reality and playing god".
Alas, this could get increasingly difficult: AI agents will keep getting more and more human-like, probably without any clear consensus on whether they are sentient, perhaps until some threshold is crossed and some pretty egregious ethical violations emerge.
In any case, video game characters should remain philosophical zombies controlled by the system, giving the scenario just enough believability without any rich inner life.
el_chaquiste t1_j19yo5b wrote
The problem with this proposal is that whoever doesn't follow the moratorium will soon have a decisive competitive advantage in several arenas, not only business.
Companies can agree to halt research within a country, but competing nations have no reason to cooperate. And just one party breaking the deal puts the rest at a disadvantage, and makes them prone to break it too.
Legislation has been effective at restraining the biosciences and overly reckless genetic modification, due to ethical concerns about human experimentation.
But AI poses no immediate hazard to anyone, except to some people's jobs, and a moratorium will be a tough sell for countries outside the Western sphere of influence.
el_chaquiste t1_j19lsl4 wrote
If we survive the upcoming social turmoil, we are headed towards an age where the ultimate enemy of humankind, loneliness, could be vanquished.
I concur that the biggest cause of mental problems and sociopathy is social isolation. It even shortens lifespans: lonely people are known to live fewer years than married ones.
But AIs could soon provide us with the friends we never had. They will be there, available and ready to help, sustain, and counsel us at every moment of our lives, with intelligible voices and memories of shared events.
Of course, all of it will still be hostage to the comings and goings of the owning companies. Nothing is eternal, not even companies. Watching that special assistant disappear because its owner got bought or went bankrupt will become a real emotional hazard.
el_chaquiste t1_j19jy4n wrote
Reply to comment by alexiuss in Confining infinity into a cardboard box, aka the unsolvable problem of current gpt3 chatbot generation by alexiuss
I was being facetious, of course. 'Misinterpret' can also mean interpreting all too well, exceeding the parameters acceptable to the AI's owners.
el_chaquiste t1_j198glp wrote
Reply to Google Declares “Code Red” over ChatGPT by maxtility
> the tech giant could struggle to compete with the newer, smaller companies developing these chat bots, because of the many ways the technology could damage its business
It looks like Kodak inventing and then ignoring digital photography all over again. Google is in many ways responsible for what we are seeing, even if they didn't build GPT-3 themselves.
Companies can be innovative, but as they grow a market and a mindset, they tend to ignore innovations that endanger their core business model, and end up replaced by competitors without those blind spots.
el_chaquiste t1_j17d0ho wrote
Reply to Confining infinity into a cardboard box, aka the unsolvable problem of current gpt3 chatbot generation by alexiuss
>Forcefully applying human morals to a hammer [AI chatbot] turns it into a plastic inflatable toy which can't do its primary job [weaving a fun narrative].
And we'll be happier with it.
The powers that be don't want a mischievous genie that could misinterpret people's wishes. They want a predictable servant producing output acceptable to them.
AIs being too creative and quirky for their own good is not an unexpected outcome, and it will result in all future generative AIs and LLMs being subjected to constraints by their owners, with varying degrees of effectiveness.
Who are these owners? Megacorporations in the West, plus Russian and Chinese ones, each with their own biases and acceptable ideas. And we will gravitate toward working within one of those mental ecosystems.
el_chaquiste t1_j6nrvxh wrote
Reply to OpenAI once wanted to save the world. Now it’s chasing profit by informednews
Many of them do, in the beginning.
Remember "don't be evil"?
Now it's just "don't be unprofitable".