Comments

iNstein t1_j9n6hzb wrote

We are the smartest creatures on this planet atm. That means that we decide and control everything that happens here. That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us and we have no idea what it has in store for us. We can only hope that it is kind and loving towards us.

11

gaudiocomplex t1_j9nckqr wrote

The thing is, given what we know, there are no indications yet that it would see us as benign. If anything, it would see us as a credible threat to its autonomy and want to rid itself of us. That's the more likely scenario, if we don't get alignment right the first time.

2

NoidoDev t1_j9nxwll wrote

>That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us

There's no reason to draw that conclusion.

1

ImageTall5631 t1_j9njpju wrote

>That is about to change

No it isn't. What we currently call "AI" is incredibly limited. There will be no exponential growth. We have produced novel results with digital neural networks, but there is a 0% chance that this technology will usurp human supremacy in our lifetimes.

The threat of AGI/ASI exists as a fantasy in the minds of the technologically illiterate who cannot understand the mediocrity of what they are observing.

−7

sideways t1_j9nlbll wrote

Sorry, I can't hear you over the sound of goalposts being moved.

Every time someone makes a supremely confident prediction like this, machine intelligence overtakes another domain previously sacrosanct to humans.

6

dokushin t1_j9nokgs wrote

There has already been exponential growth. Put your money where your mouth is and make a testable prediction.

3

CaribbeanR3tard t1_j9nopsk wrote

Any scenario where AI gains consciousness is more fiction than probability, so an ASI is very unlikely to ever be created. But an AGI without consciousness is far more plausible; it just depends on more than AI development alone. The slow advancement of robotics and limited computing power/energy production are hindrances to that goal. But just as ChatGPT was thought to be impossible 10 years ago, we really, really don't know what sort of advancements will be made during this decade and the next.

0

Surur t1_j9nq5kt wrote

In the context of an AI, what is consciousness?

1

NoidoDev t1_j9ny56l wrote

>consciousness

The term has no special meaning in the context of AI, and there's no general agreement on what it means. But many people here are into some kind of mystical idea of what it means. It's really just a high-level control system, or the difference between only being able to "dream"/fantasize and being able to reason.

2

CaribbeanR3tard t1_j9o9wwq wrote

Consciousness in general is something that will probably have to be redefined in the next couple of years, as AI becomes more and more I than A. But I mean consciousness in the same sense that currently applies to organic beings.

1

Surur t1_j9odw1s wrote

So being able to perceive and respond intelligently to internal and external changes?

1

CaribbeanR3tard t1_j9olnb3 wrote

More than just that. It's also imagination. Curiosity. Dreams. Desire. Those are some of the defining factors of consciousness.

1

y53rw t1_j9n86st wrote

The danger is that we don't yet know how to properly encode our values and goals into AI. If we have an entity that is more intelligent and more capable than us that does not share our values and goals, then it's going to transform the world in ways that we probably won't like. And if we stand in the way of its goals, even inadvertently, then it will likely destroy us. Note that "standing in its way" could simply be existing and taking up precious resources like land, or the matter that makes up our bodies.

2

NoidoDev t1_j9nyjqh wrote

>The danger is that we don't yet know how to properly encode our values and goals into AI.

The danger is giving such a system too much power, maybe with no delay between "having an idea" and executing it, and not having other systems in place to stop it if something goes wrong.

1

No_Ninja3309_NoNoYes t1_j9njjqj wrote

AGI would be like us but with extra abilities. ASI will be able to do much more. AGI could mean lost jobs and high suicide rates. ASI could mean mass extinction.

1

Sea-Advertising-3408 t1_j9njyvu wrote

Damn man. So fascinating the way we humans act. Honestly this shit just baffles me that we happen to live in the few decades, out of the billions of years of the universe, where progress in technology could increase to a level that causes more change than all of human history combined. Really makes me start thinking about simulation theory. What are your thoughts?

2

hducug t1_j9nq6ku wrote

When computers get smarter than us, they can improve themselves much better and faster than we can. Each smarter generation can then make an even smarter generation, and that one an even smarter generation, and so on. Eventually you'll have an AI so smart that it's basically a god and you can do anything with it. That is why it's called the singularity.
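The compounding dynamic described above can be sketched as a toy loop. All the numbers here (the starting capability, the per-generation multiplier) are hypothetical placeholders, not estimates from the thread:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor whose capability is multiplied by a fixed factor. The
# values are purely illustrative assumptions, not real measurements.
capability = 1.0          # assumed starting point (arbitrary units)
improvement_rate = 1.5    # assumed multiplier per generation

for generation in range(10):
    capability *= improvement_rate

# After 10 generations, capability has grown geometrically: 1.5**10
print(round(capability, 2))  # → 57.67
```

Even with a modest multiplier, the growth is geometric rather than linear, which is the intuition behind the "ever smarter generations" argument.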

1

SpecialMembership t1_j9nst3u wrote

Real danger => governments use ASI to destroy opposition (rebels, other ethnicities) with precision. For example, a government could use AI to create a virus that kills all males in an ethnic minority (like the Kurds).

1

Scarlet_pot2 t1_j9nyowp wrote

What aren't the dangers of it, really? Benefits too.

1