6thReplacementMonkey t1_j045dea wrote

It's not malevolent AI doing those things, it's malevolent people using AI to do those things.

The most immediate risks to us from AI don't come from a super-powerful artificial intelligence doing harm to us directly, but from regular people doing harm to each other using the AI as a force multiplier.

56

Brent_Fox t1_j04n8uq wrote

Like how posts from idiots, bigots, and fascists on Twitter get reposted and amplified, so you end up seeing those posts more than the more reasonable ones. This is why content moderation is such an important check on these major social media platforms.

21

[deleted] t1_j060ml5 wrote

Imagine unironically saying this on the biggest echo chamber on the internet.

2

Mason-B t1_j05rx4i wrote

> It's not malevolent AI doing those things, it's malevolent people using AI to do those things.

It's not even necessarily malevolent people. People inside a system, acting uncritically, can use AI/algorithmic black boxes as a way to smuggle in bias and perpetuate that system. Take something as simple as using machine learning to spot crimes based on historical data: if the historical record reflects racist policing, the model inherits that racism. People think "well, the computer can't be biased, it's just math!" but if the data it was trained on was biased, the result can end up even more malevolent than the system it replaces, without anyone directly meaning for that to happen.
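Here's a minimal sketch of that effect with made-up numbers (synthetic data and scikit-learn, not any real system): the true offense rate is identical in two neighborhoods, but one was policed more heavily, so its offenses were recorded more often. Train on the recorded labels and the model "learns" that the heavily policed neighborhood is higher risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two neighborhoods with the *same* true offense rate.
neighborhood = rng.integers(0, 2, size=n)   # 0 = A, 1 = B
offended = rng.random(n) < 0.10             # 10% everywhere

# Historical enforcement was uneven: offenses in A got recorded 90% of the
# time, offenses in B only 20% of the time. The label is "recorded crime",
# not "crime".
record_rate = np.where(neighborhood == 0, 0.9, 0.2)
recorded = offended & (rng.random(n) < record_rate)

# Train on the biased labels with neighborhood as the only feature.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), recorded)

# Predicted "risk" of a recorded offense for each neighborhood.
print(model.predict_proba([[0], [1]])[:, 1])
# Neighborhood A comes out several times "riskier" than B, purely because of
# how the training labels were collected.
```

Nobody in that pipeline intends harm; the bias comes entirely from how the labels were generated.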

8

pellik t1_j06j8h7 wrote

So far the most destructive things I've seen attributed to AI are cases where there probably isn't any AI at all, just people hiding behind the black box to cover illegal behavior. The company that sets rent prices for its clients "using AI" is almost certainly bullshit: they're pretending their collusion ring is AI to avoid lawsuits.

1