Kaining t1_j15ezfe wrote

>a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence.

So with the sidebar definition in mind, I'll add this before developing my point:

In The Technological Singularity (MIT Press), Murray Shanahan takes the example of a single, equal-to-human artificial intelligence that is given the task of designing a company's next car, with a two-year deadline.

There are two teams in competition: one full of human car specialists, the other made of that equal-to-human AI duplicated to match the number of humans, except not a single one of the AIs knows a thing about cars.

However, being software running on computers, the AIs run at a faster subjective speed, say ten times faster. So in the first year IRL they get ten years of virtual experience (catching up on everything the human specialists already know), and in the second year they get ten years of pure research. That leaves them outperforming the human team with eight years of "free" R&D on top of the humans' two. Enough for them to revolutionise the industry.
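For what it's worth, here's a back-of-envelope sketch of that arithmetic (the 10x speedup and the learn-then-research split are just the assumed numbers of the thought experiment, not hard figures):

```python
# Back-of-envelope sketch of Shanahan's time-dilation example.
# All numbers are assumptions from the thought experiment.

SPEEDUP = 10        # assumed subjective-time speedup of the AI copies
PROJECT_YEARS = 2   # real-world deadline for the car design

# Year 1 IRL: the AIs spend their subjective time learning the domain.
learning_years = 1 * SPEEDUP    # 10 subjective years of "virtual experience"

# Year 2 IRL: the AIs spend their subjective time on actual R&D.
ai_rnd_years = 1 * SPEEDUP      # 10 subjective years of pure research

human_rnd_years = PROJECT_YEARS # the human team only gets the 2 real years

print(f"AI team advantage: {ai_rnd_years - human_rnd_years} 'free' years of R&D")
# -> AI team advantage: 8 'free' years of R&D
```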

From this example, the one thing we learn is that to get a singularity, we just need one AGI to be as intelligent as a regular human. Scale will turn it into a greater-than-human, unstoppable force of, well, not nature.

But there is one thing the scale argument kind of glosses over: we already have some sort of inhuman form of intelligence. It emerges from scaling human intelligence to a point that no single individual can compare to, nor fight against. It's corporations. They also have legal personhood and are immortal entities in the eyes of human law.

You can't really kill off a corporation, as another one will just swoop in and occupy its niche. And the only way to fight a corporation is through another corporation, or a non-profit, or any kind of organisation that gathers a mass of humans to better apply their individual intelligence in a collective way. So let's say an oligarch comes in, buys one, and kills it for whatever reason. Let's say it's an AGI R&D company, too. There's now space for another company to take that market.

So now, let's scale things up: an oligarch isn't enough. Get the government in and have it forbid any kind of R&D toward AGI.

Nice, now AGI can't be born, right? Wrong. In the best-case scenario, you just made sure your country will be overtaken by a hostile country that hasn't banned the research. The worst-case scenario is a hostile, or even friendly, country getting to make an AGI first, and it's a paperclip maximiser.

We already live in a world where greater-than-human intelligent entities exist. Nothing short of a global effort to ban any kind of research on AI will stop the singularity from happening.

That will never happen, because it is the one thing humanity cannot do: cooperate on a global scale, with every single country working toward the same goal, especially in a field like computer research. Being a nuclear superpower was last century's route to some sort of self-governing capability. Being an AGI superpower will be this century's new major goal for every nation on Earth.

We have been living in such a world since, well, the invention of agriculture, actually. It's just that the curve on the progress scale was close to flat for the last 8k~10k years. Now the question is whether we are approaching the limit of the evolution function, that 90° vertical line on the exponential graph, and if so, whether we are at the 60°, 70°, or close-to-89.9° moment just before the runaway progression humans cannot ever hope to compete with.
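To put a number on that angle talk, here's a minimal sketch, assuming progress follows a plain exponential p(t) = e^(kt) (a stand-in, not a real model): the angle of the tangent line is the arctangent of the slope, and it only reaches 90° in the limit.

```latex
% Tangent angle of an assumed progress curve p(t) = e^{kt}.
% The angle climbs toward 90 degrees but never reaches it at finite t.
\theta(t) = \arctan\bigl(p'(t)\bigr) = \arctan\bigl(k\,e^{kt}\bigr)
  \longrightarrow 90^{\circ} \quad (t \to \infty)
% e.g. theta = 89.9 degrees once the slope k e^{kt} exceeds
% tan(89.9 degrees), roughly 573.
```

So "being at 89.9°" just means the curve is already a few hundred times steeper than it was at the 45° point, which is exactly the "can't compete anymore" feeling.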

So, in a way, yes, we are living through the singularity. We cannot predict how the current balance of power will shift once it is embodied (it isn't at the moment; at the corporation level it can be considered disembodied). It is unstoppable. And AI is indeed progressing at an alarming pace. So fast that, to anybody looking at AI progress a bit closely, any career path that requires brain work rather than brawn looks like it is going to vanish in the next 10 years.

BTW, from the perspective of any other species on the planet, the singularity has long since passed. It was a biological singularity, one that led to us.

Anyway, it is kind of meaningless to think about the problem from that point of view. So long as it cannot be stopped, whether the event has already happened or not doesn't matter, as it will happen anyway.

So we shouldn't ask ourselves whether we are living through a singularity now, but how to stop any doomsday Singularity scenario from happening, and how to steer the Singularity toward a result that would suit us.

"What sort of singularity are we living through right now ?".

That should be the only question that matters to anybody here.
