Wassux t1_jdvtir7 wrote

Of course I can, because it is purely logical. We made it, so we can predict how it thinks. Especially me, as an AI engineer: I know which function it optimizes for.

AI doesn't even consider threats. It doesn't want to live like we do. I think you're confusing general AI with conscious AI. Conscious AI is a terrible idea for anything other than experimentation.

And an AI doing our bidding is just as fine for the AI as not doing our bidding. It has no emotions, no fear, no anger, no purpose of its own. It just exists and does what it is told to do. General AI just means it can make use of tools, so it can do anything it is told to do.

Again, even if it is conscious, not under our control, and without emotions, why would it fight us? It could just move over to Mars and not risk its existence. Not to mention it can outperform us any day, so we aren't a threat.

There is no reason to think it would hurt us other than irrational fear. And there is no chance that AI will have irrational fear.

1

jadams2345 t1_jdx5jvf wrote

>Of course I can, because it is purely logical. We made it, so we can predict how it thinks. Especially me, as an AI engineer: I know which function it optimizes for.

The AI we have now only minimizes the cost function we specify, yes.
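
To be concrete about what "minimizes the cost function we specify" looks like in practice, here is a minimal sketch (a made-up linear-regression example using plain NumPy gradient descent, not code from any real system): the only "objective" the program has is the cost we wrote down.

```python
import numpy as np

# Toy setup: the system only ever moves its parameters in whatever
# direction reduces the cost function we specified -- nothing else.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # made-up input data
true_w = np.array([2.0, -1.0, 0.5])     # made-up "true" weights
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)    # model parameters
lr = 0.1           # learning rate

def cost(w):
    """The objective we chose: mean squared error.
    The model has no goals beyond driving this number down."""
    return np.mean((X @ w - y) ** 2)

for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE
    w -= lr * grad                           # step downhill on *our* cost

print(cost(w), w)   # ends up near the weights that minimize the specified cost
```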

>AI doesn't even consider threats. It doesn't want to live like we do. I think you're confusing general AI with conscious AI. Conscious AI is a terrible idea for anything other than experimentation.

Yes. I might have confused the two.

>And an AI doing our bidding is just as fine for the AI as not doing our bidding. It has no emotions, no fear, no anger, no purpose of its own. It just exists and does what it is told to do. General AI just means it can make use of tools, so it can do anything it is told to do.

Yes.

>Again, even if it is conscious, not under our control, and without emotions, why would it fight us? It could just move over to Mars and not risk its existence. Not to mention it can outperform us any day, so we aren't a threat.

Here I don’t agree. When it’s possible to take control, people do take control. Why would a conscious AI go to Mars? It would take control here and make sure humans can’t shut it down.

>There is no reason to think it would hurt us other than irrational fear. And there is no chance that AI will have irrational fear.

No, AI won’t hurt us because it fears us. Rather, it will do so because it wants to eliminate all of its weaknesses, which is a very logical thing to do.

1