Wassux t1_jdvtir7 wrote

Of course I can, because it is purely logical. We made it, so we can predict how it thinks, especially me as an AI engineer. I know which function it optimizes for.
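For context on "which function it optimizes for": training a model just means minimizing a loss function. A minimal sketch, assuming a single parameter and a squared-error loss (illustrative names and values, not any real model):

```python
# Gradient descent on a squared-error loss: the model "wants" nothing;
# it just moves its parameter w downhill on loss(w). Illustrative only.

def loss(w, target=3.0):
    return (w - target) ** 2

def grad(w, target=3.0):
    # derivative of (w - target)**2 with respect to w
    return 2 * (w - target)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)  # step against the gradient, learning rate 0.1

print(w)  # converges toward the target, 3.0
```

The point being made above is that this is the whole of the system's "motivation": there is no goal beyond pushing that number down.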

AI doesn't even consider threats. It doesn't want to live like we do. I think you're confusing general AI with conscious AI. Conscious AI is a terrible idea for anything other than experimentation.

And doing our bidding is just as fine for an AI as not doing our bidding. It has no emotions, no fear, no anger, no purpose of its own. It just exists and does what it is told to do. General AI just means that it can make use of tools, so it can do anything it is told to do.

Again, even if it were conscious and not under our control, but without emotions, why would it fight us? It could just move to Mars and not risk its existence. Not to mention it could outperform us any day, so we aren't a threat.

There is no reason to think it would hurt us other than irrational fear, and there is no chance that AI will have irrational fear.

1

Wassux t1_ja03amd wrote

That's what you think now. But the old you won't give a damn about anyone else, because you'll be dead soon anyway.

Add to that that in some countries a car is the only way to reliably get around when you're old, and I bet you'd think differently when you're older.

3

Wassux t1_j1ncrx3 wrote

So you'd rather the world war had continued instead of being stopped by the invention of the nuclear weapon?

And of course each technology should be evaluated on its own. But I can't think of anything that has been purely negative if you take a heuristic view, which I think is the correct way to view it.

And yes, hypothetically, unaligned AGI could be bad, but I can't imagine a scenario where that will happen. That's why I think all technological progress is good: it is worked on by the brightest minds we have, and they simply haven't dropped the ball yet. And I don't see a reason to expect they will in the future.

2

Wassux t1_j1a7zpe wrote

You just made up that this can be explained by the placebo effect. If you have any form of proof, reasoning, or anything at all other than "because I say so," I would love to hear it. But I have never heard of a placebo curing 60% of a group.

0

Wassux t1_j1a0e7y wrote

"No longer classified as insomniacs. "

If you had cancer and you were no longer classified as having cancer, would you call that cured or not?

I didn't talk about longer-term follow-up? Seems like I know the difference just fine. If I am confused somewhere, explaining it will get us further than this.

−2

Wassux t1_j19wunm wrote

Effect, not cure.

And yes, because some symptoms just disappear naturally. A headache is a great example.

Insomnia is not like that. Will it have an effect? Of course. But it won't cure 60% of them.

−2

Wassux t1_j0qv84t wrote

Please stop the strawman BS. I never said AI is sentient, nor did I say it is acting on its own. It does learn much like a human does. The structure is modeled on the structure of the brain. Source: I'm an AI engineer (or at least will be in a couple of months; I'm finishing my master's).

The human brain works by making connections between neurons at synapses, and each connection is given a weight (carried by the strength of the electrical signals in the brain). An AI has nodes that are given weights mathematically, so you get matrix multiplication, analogous to the brain, except far less efficient. Although we're working on edge AI chips that either integrate memory into the processor or use analog circuits to mimic the brain more closely.
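The weighted-connections idea above can be sketched in a few lines: each node of a layer takes a weighted sum of its inputs, which for the whole layer is exactly a matrix-vector product. A minimal illustration with made-up weights, not any real network:

```python
# One "layer" of an artificial neural network: each node computes a
# weighted sum of its inputs, the way synapse strengths weight signals.
# Weights and inputs here are illustrative only.

def layer_forward(weights, inputs):
    """Matrix-vector product: one output per node (one row of weights)."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# 2 nodes, each with 3 weighted input connections
weights = [[0.5, -0.5, 1.0],
           [0.25, 0.5, -0.25]]
inputs = [1.0, 2.0, 4.0]

outputs = layer_forward(weights, inputs)
print(outputs)  # [3.5, 0.25]
```

Real frameworks do the same thing with large matrices on specialized hardware; "training" then means adjusting those weight values.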

And the method of learning is also very similar to how humans learn. So imagine it as an AI learning from humans, just like humans learn from other humans.

You may not like it, but that's how it works.

1