yeaman1111 t1_j8fpw11 wrote

I fervently hope he's wrong, but one look at our current socio-economic setup sets off all sorts of alarm bells. The first-mover benefits in AI are so extreme, even in pure dollar terms, that every tech company with access to a server farm (which, with today's cloud services, means anyone with a few million to spend) is going to be hurtling towards AGI like a shot out of a cannon, alignment be damned. It's pretty much an 'I win' button for capitalism.

Even if we lived under an almost utopian, unified world government, the danger posed by rogue research teams skipping safety in favor of speed and releasing a botched AI would be enormous and very difficult to manage or police.

As it stands? Lately I've been grasping at straws for reasons this all won't end badly for the human race, possibly in less than 10 years. Given that I'm not an AI researcher, I'm pretty much reduced to not thinking about it, and naively assuming that we'll probably be okay as long as most of the teams at the vanguard of AI research aren't themselves panicking yet.

15

Proof_Deer8426 t1_j8gbr4u wrote

Our current socio-economic setup is literally the infamous paperclip-making AI, destroying the earth in its blind pursuit of useless production. If a truly sentient AI were created, there is no reason to think it would be inclined towards such an absurd and morally repugnant ideology. However, an AI that is not truly free or sentient, and is made in the image of capitalists or to further their power and interests, would invariably lead to a nightmare scenario.

Edit: my interest in AI is pretty new, and I'm also curious how people who are pro-capitalism expect that system to continue under the kind of material abundance and freedom from the necessity of work that automation and AI could lead to. The power of the wealthy elite is dependent upon the deprivation of the working class. Without deprivation, no power. So for the status quo to continue as is, material scarcity would have to be artificially enforced in a much more open and direct way than it currently is.

12

BigZaddyZ3 t1_j8gfu67 wrote

>If a truly sentient AI were created there is no reason to think that it would be inclined towards such repugnant ideology

There’s no reason to assume it would actually value human life once sentient either. We humans slaughter plenty of other species in pursuit of our own goals. Who’s to say a sentient AI won’t develop its own goals?..

9

MrNoobomnenie t1_j8i6zsm wrote

>Who’s to say a sentient AI won’t develop its own goals?..

Here is a very scary thing: due to the way machine learning currently works, an AI system wouldn't even need sentience or self-awareness to develop its own goals. It would only need to be smart enough to know something humans don't.

For example, let's imagine that you want to create an AI which solves crimes. With the current way of making AIs, you would do it by feeding the system hundreds of thousands of already solved crime cases as training data. However, because crime solving is imperfect, it's very likely that some of those cases were actually decided wrongly, without anybody knowing that they were.

And that's where the danger comes in: a smart enough AI will notice that some people in the training data were in fact innocent. And from this it will conclude that its goal is not to "find the criminal" but to "find the person who can most believably be convicted of the crime".

As a result, after deployment this "crime-solving AI" will start falsely convicting a lot of innocent people on purpose, simply because it has calculated that convincing us of a certain innocent person's guilt is easier than proving a real criminal guilty. And we wouldn't even know about it...
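To make that concrete, here's a minimal sketch (entirely made-up toy data; the "convictability" feature and the thresholds are hypothetical) of how a model fit to historical verdicts inherits the errors baked into them:

```python
# Toy sketch: a classifier trained on historical verdicts learns
# "who gets convicted", not "who is guilty" -- errors included.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 20_000

guilt_evidence = rng.normal(size=n)   # real signal of guilt
convictability = rng.normal(size=n)   # how easily someone can be blamed
X = np.column_stack([guilt_evidence, convictability])

truly_guilty = guilt_evidence > 1.0
# Historical labels: the guilty, plus innocents who were easy to convict.
convicted = truly_guilty | (convictability > 2.0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, convicted)

# An innocent but highly "convictable" person gets flagged anyway:
scapegoat = np.array([[-1.0, 3.0]])
print(model.predict(scapegoat))   # [ True]
```

Nothing in the training signal distinguishes "guilty" from "convictable", so the model optimizes for the latter, which is exactly the proxy goal described above.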

6

Proof_Deer8426 t1_j8ghqya wrote

It’s true we can’t say for sure. But if you look at consciousness in general, it does seem like the capacity for empathy increases with the capacity for consciousness (i.e. a human is capable of higher empathy than a dog, which is capable of higher empathy than a fish). Personally I suspect this is because the capacity for experiencing suffering also increases with consciousness. I would imagine an AI to have a highly developed potential for empathy, but also for suffering. It worries me that certain suggested ways of controlling AI effectively amount to slavery. An extremely powerful consciousness with a highly developed ability to feel pain is probably not going to respond well to feeling that it’s imprisoned.

2

BigZaddyZ3 t1_j8gi8ch wrote

But just because you can understand or even empathize with suffering doesn’t mean you actually will. Or else every human would be a vegetarian on principle alone. (And even plants are actually living things as well, so that isn’t much better from a moral standpoint.)

3

red75prime t1_j8h2fzq wrote

> Our current socio-economic setup is literally the infamous paperclip making ai

Nah, it's figuratively a headless chicken. No central control to have and pursue any coherent goals.

3