Submitted by [deleted] t3_119nlh7 in singularity
[deleted]
The thing is, given what we know, there are no indications yet that it would see us as benign. If anything, it would see us as a credible threat to its autonomy and want to rid itself of us. That's the more likely scenario, if we don't get alignment right the first time.
>That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us
There's no reason to make this conclusion.
>That is about to change
No it isn't. What we currently call "AI" is incredibly limited. There will be no exponential growth. We have produced novel results with digital neural networks, but there is a 0% chance that this technology will usurp human supremacy in our lifetimes.
The threat of AGI/ASI exists as a fantasy in the minds of the technologically illiterate who cannot understand the mediocrity of what they are observing.
Sorry, I can't hear you over the sound of goalposts being moved.
Every time someone makes a supremely confident prediction like this, machine intelligence overtakes another domain previously sacrosanct to humans.
There has already been exponential growth. Put your money where your mouth is and make a testable prediction.
Any scenario where AI gets consciousness is more fictitious than probable, so an ASI is very unlikely to ever be created. But an AGI without consciousness is far more probable; it depends on more than just AI development. The slow advancement of robotics and limited computing power/energy production are hindrances to that goal. But just like how ChatGPT was thought to be impossible 10 years ago, we really, really don't know what sort of advancements will be made during this and the next decade.
In the context of an AI, what is consciousness?
>consciousness
The term has no special meaning in the context of AI, and it's generally not agreed upon what it means. But many people here are into some kind of mythical idea of what it means. It's really just a high-level control system, or the difference between only being able to "dream"/fantasize and actually reasoning.
Consciousness in general is something that is probably going to have to be redefined in the next couple of years, as AI becomes more and more I than A. But I mean consciousness in the same sense that currently applies to organic beings.
So being able to perceive and respond intelligently to internal and external changes?
More than just that. It's also imagination. Curiosity. Dreams. Desire. Those are some of the defining factors of consciousness.
[deleted]
The danger is that we don't yet know how to properly encode our values and goals into AI. If we have an entity that is more intelligent and more capable than us that does not share our values and goals, then it's going to transform the world in ways that we probably won't like. And if we stand in the way of its goals, even inadvertently, then it will likely destroy us. Note that "standing in its way" could simply be existing and taking up precious resources like land, and the matter that makes up our bodies.
>The danger is that we don't yet know how to properly encode our values and goals into AI.
The danger is giving such a system too much power, maybe with no delay between "having an idea" and executing it. Also, not having other systems in place to stop it if something goes wrong.
Check out Life 3.0. The opening chapter is a pretty cool intro to AGI/ASI.
AGI would be like us but with extra ability. ASI will be able to do much more. AGI could mean lost jobs and high suicide rates. ASI could mean mass extinctions.
Damn man. So fascinating the way we humans act. Honestly it just baffles me that we happen to live in the few decades, out of the billions of years of the universe, where progress in technology could increase to a level that causes more change than all of human history combined. Really makes me start thinking about simulation theory. What are your thoughts?
When computers get smarter than us, they can make themselves smarter much better and faster than we can. That smarter generation can make an even smarter generation, and that one can make an even smarter generation, and so on. Eventually you'll have an AI that is so smart it's basically a god, and you can do anything with it. That is why it's called the singularity.
real danger => Governments use ASI to accurately destroy opposition (rebels, other ethnicities). For example, a government could use AI to create a virus that kills all males in an ethnic minority (like the Kurds).
What aren't the dangers of it, really? Same goes for the benefits.
iNstein t1_j9n6hzb wrote
We are the smartest creatures on this planet atm. That means that we decide and control everything that happens here. That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us and we have no idea what it has in store for us. We can only hope that it is kind and loving towards us.