Kaining t1_jee0c8g wrote

The only thing I know about it is this question: "if it is made, is it enough to simulate a quantum environment and bypass the need for IRL testing?" At the moment, I'd say no. But I don't have the knowledge or expertise to guess whether that could change.

However, what I can say with some confidence is that physics at the regular, relativistic scale could probably be completely simulated at some point. We're kind of already doing it in very specific fields with AlphaFold and other AIs of that sort. Stack enough specialised simulated models together and you have a simulation of everything.

So uh, yes, quantum AGI maybe?


Kaining t1_jedt5v0 wrote

We're getting good at simulating only the parts we need, though. Look up what Dassault Systèmes is capable of doing for medical practitioners who need trial runs. And that's just today.

I guess simulation will only go so far, and even an AGI will need real-world testing for anything quantum-related, at least for now. But that's the problem with progress: there's no way to know whether what you think is the endgame of possibility really is.


Kaining t1_j6cden6 wrote

Reply to comment by WashiBurr in I’m ready by CassidyHouse

You're putting the cart before the horse here.

Consciousness first, body second. Consciousness is consciousness regardless of what body it's in. It's molded by the body, but it is what it is. New brain, new memories, new nervous system, new feelings, etc. That's kind of how the reincarnation thing in Buddhism explains why you don't keep memories from previous lives, btw, and it kind of makes sense. It's also a bit fallacious and dodgy, since it nullifies the appeal reincarnation has when you first learn of the concept. It isn't a restart button at all. More of a "things stay the same in a constantly changing world" impermanence trick.

So you could rez a completely different consciousness into a VR game and it would still act the same as the being you resurrected, as long as you "built" it right. The problem isn't knowing whether it's "you" but whether there is a "you" inside that VR avatar at all. That's one aspect of the "brain in a vat" thing. How can you be sure that others are real when all of reality is merely a projection of your brain? How can you be sure you are even here? That's another nasty issue.

Ego death is a thing, after all.


Kaining t1_j15ezfe wrote

>a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence.

So, with the sidebar definition in mind, I'll add this before developing my point:

In The Technological Singularity (MIT Press), Murray Shanahan takes the example of a single, equal-to-human artificial intelligence that is given the task of designing the next car for some random company, with a two-year deadline.

There are two teams in competition: one full of human car specialists, the other made of copies of that equal-to-human AI, duplicated to match the number of humans, except that not a single one of the AIs knows a thing about cars.

However, being AIs running on computers, they run on a much faster clock. So in the first IRL year they get 10 years of virtual experience, and in the next year they get 10 years of pure research, outperforming the human team with 8 years of "free" R&D. Enough for them to revolutionise the industry.
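To make the arithmetic in that example explicit, here's a minimal sketch. The 10x speedup and the 10-year learning phase are illustrative assumptions reconstructed from the numbers above, not figures quoted directly from the book:

```python
# Time-scaling arithmetic from Shanahan's car-design thought experiment.
# Assumed numbers: 10x subjective speedup, 10 subjective years to learn the domain.
SPEEDUP = 10          # subjective AI years per wall-clock year
PROJECT_YEARS = 2     # wall-clock deadline shared by both teams
LEARNING_YEARS = 10   # subjective years the AI copies spend learning about cars

subjective_total = PROJECT_YEARS * SPEEDUP          # 20 subjective years available
research_years = subjective_total - LEARNING_YEARS  # 10 years left for actual R&D
advantage = research_years - PROJECT_YEARS          # 8 "free" years over the humans

print(subjective_total, research_years, advantage)  # 20 10 8
```

The point of the sketch: even starting from zero domain knowledge, a merely human-level AI converts a fixed wall-clock deadline into far more subjective working time than any human team can get.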

From this example, the one thing we learn is that to get a singularity, we just need one AGI as intelligent as a regular human. Scale will turn it into a greater-than-human, unstoppable force of, well, not nature.

But there is one thing the scale argument kind of glosses over. We already have some sort of inhuman form of intelligence. It emerges from scaling human intelligence to a point that no single individual can compare with, nor fight against: corporations. They also have legal personhood and are immortal entities in the eyes of human law.

You can't really kill off a corporation, as another one will just swoop in and occupy its niche. And the only way to fight a corporation is through another corporation, or a non-profit, or any kind of organisation that gathers a mass of humans to apply their individual intelligence collectively. So let's say an oligarch comes in, buys one, and kills it for whatever reason. Let's say it's an AGI R&D company, too. There's now space for another company to take that market.

So now let's scale things up: an oligarch isn't enough. Get the government in and have it forbid any kind of R&D toward AGI.

Nice, now AGI can't be born, right? Wrong. In the best-case scenario, you've just made sure your country will be taken over by a hostile country that hasn't banned the research. In the worst case, a hostile, or even friendly, country gets to build an AGI first, and it's a paperclip maximiser.

We already live in a world where greater-than-human intelligent entities exist. Nothing short of a global effort to ban any kind of AI research will stop the singularity from happening.

This will never happen, because it's the one thing humanity cannot do: cooperate on a global scale, with every single country working toward the same goal, especially in a field like computer research. Being a nuclear superpower was last century's path to some sort of self-governing capability. Being an AGI superpower will be this century's major goal for every nation on Earth.

We have been living in such a world since, well, the invention of agriculture, actually. It's just that the progress curve was close to flat for the last 8k~10k years, and now the question is whether we are approaching the limit of the growth function, that near-vertical line on the exponential graph, and if so, whether we're at the 60°, 70°, or 89.9° moment just before a rate of progress humans cannot ever hope to compete with.

So, in a way, yes, we are living through the singularity. We cannot predict how the current balance of power will shift once it is embodied (it isn't at the moment; at the corporation level it can be considered disembodied). It is unstoppable. And AI is indeed progressing at an alarming pace. So fast that, to anybody watching AI progress a bit closely, any career path that requires brain work rather than brawn looks like it will vanish in the next 10 years.

BTW, from the perspective of every other species on the planet, the singularity has long since passed. It was a biological singularity, the one that led to us.

Anyway, it's kind of meaningless to think about the problem from that point of view. So long as it cannot be stopped, whether the event has already happened doesn't matter: it will happen anyway.

So we shouldn't ask ourselves whether we are living through a singularity now, but how to stop any doomsday Singularity scenario from happening, and how to steer the Singularity toward a result that suits us.

"What sort of singularity are we living through right now ?".

That should be the only question that matters to anybody here.


Kaining t1_j066dz3 wrote


Can't kill off your homeless population directly because that would look bad? Convince the soon-to-be-homeless to kill themselves, with assistance from the state.

This is the worst thing I've read in a long time, and I'm here on Reddit reading American bullshit all day long like the rest of you.


Kaining t1_j06681y wrote

> because that's seen as an easier fight than figuring out some way for increased efficiency to benefit everyone.

Let's face it, that's not really the case. We know where the money is; we know who doesn't want it to move around. A spoiled cousin of mine once illustrated the problem very neatly: "If we don't have poor people, how can there be rich people (like me)?"

The answer is in places like r/antiwork and r/latestagecapitalism. It's just frowned upon to talk about those anywhere else.

Exponential progress that benefits everyone? Automate the shit out of everything! Exponential progress that benefits the usual ruling class? Well, please no, because then we aren't far from a massive global conflict to cull the population.


Kaining t1_ixr868c wrote

I'm not sure. I just googled that name, and what I've read about him doesn't make much sense in regard to what I posted, and... he seems like another racist neoliberal bigot, one that might be at the root of the problem I'm pointing out.


Kaining t1_iwlsek9 wrote

I've started to think that the singularity has already happened, in a way, or is a process that is about to leave our control, if it hasn't already.

Sentient computers will be the last step to completely seal the deal for humanity, but so far we have managed to create a form of disembodied, sentient, alien intelligence running paperclip-maximiser software: megacorporations, with capitalism as their ecosystem.

Corporations used to have a life expectancy. They also weren't legal persons. Now they're a kind of immortal legal species we simply cannot stop. I doubt any billionaire can kill a megacorporation without another one filling the void immediately and keeping the paperclip factory that is capitalism running.

(Elon having a go at doing that with Twitter is somehow "interesting" to watch. He might just be trying to do that, acting insane in the face of the world to hide his intention... yeah, no. This is crazy talk, in a post that already makes me feel like I'm crazy.)

I don't know why, but I feel like the first AI will be an incarnation of the "will" of a corporation. Not something as dumb as "Nestlé, the AI out to maximise water sources as a dollar-making machine", but an AI could probably form its own company to act as its own body and manipulate the world. Why would it need a robot body when it can simply hire human cells to do the work? As long as it's connected to the internet and obfuscates the paperwork enough that nobody could trace it back to its creator/owner, it could just start doing its thing in a capitalist world with no one being the wiser. The more I read about the subject, the more it seems a given that it will be able to connect to the net and that we can't stop it from doing so.

All the megacorporations we have now, despite having humans running them and steering every decision, could be seen, in a distorted way, as its immediate ancestors. Because humanity competing against itself means that no company board of executives is really in charge; none are free from current market and political constraints. And I haven't heard of a megacorporation committing suicide so far, either.

And what scares me the most is that capitalism isn't just a way for greed to maximise itself; it's also a way to order the world through data. An AI would thrive in a capitalistic data world, as it would probably directly influence what kind of reward function it gets.

Artificial intelligence may not be the most immediately concerning issue. Any form of alien intelligence emerging out of our activity (really, our collective subconscious) might be, as that might just be what dictates how a true AGI acts.

Anyway, the more I think about it, the crazier I feel.