
glass_superman t1_ixdpqis wrote

Another problem that AI has, which is not mentioned here, is creating proper incentives. I'll give the example of YouTube.


YouTube has an incentive for more ads to be viewed, which roughly coincides with people staying on YouTube longer. That means YouTube needs to select the right next video for you so that you won't tune out.

An AI algorithm might work hard to become a better predictor of your preferences. But it might also work hard to change you into someone easier to predict. We find that if you watch enough YouTube videos, you eventually end up in a loop of extremist political content. Extremists are easier to predict, so YouTube modifies your mind to make you more predictable.

https://www.technologyreview.com/2020/01/29/276000/a-study-of-youtube-comments-shows-how-its-turning-people-onto-the-alt-right/
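To make that incentive concrete, here's a toy simulation (entirely my own construction, nothing to do with YouTube's actual system): a "user" has a preference distribution over topics, the recommender is scored on how often its best guess matches the next click, and a hypothetical "narrowing" effect means repeated extreme content concentrates the user's preferences.

```python
import random

# Toy model of the predictability incentive -- my own sketch, not
# YouTube's real algorithm. The recommender's score is how often its
# best guess matches the user's next click. A hypothetical "narrowing"
# effect concentrates the user's preferences after repeated exposure,
# which raises that score for free.

def clicks(prefs, topic):
    """User clicks with probability equal to their taste for the topic."""
    return random.random() < prefs[topic]

def narrow(prefs, topic, rate=0.05):
    """Hypothetical radicalization effect: exposure shifts preference
    mass toward one topic, making the user easier to predict."""
    for t in prefs:
        prefs[t] *= (1 - rate)
    prefs[topic] = min(1.0, prefs[topic] + rate)

prefs = {"cooking": 0.4, "sports": 0.4, "extreme": 0.2}
print(f"best-guess hit rate before: {max(prefs.values()):.2f}")

for _ in range(200):                  # keep pushing the narrowing topic
    if clicks(prefs, "extreme"):
        narrow(prefs, "extreme")

print(f"best-guess hit rate after:  {max(prefs.values()):.2f}")
```

The numbers are made up; the point is that the score goes up as the user gets narrower, so a pure engagement optimizer has no reason to avoid narrowing them.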


Back to policing. Imagine that the algorithm discovers a way to increase the crime rate in one part of town. It could do that while also deploying more police there. This would make the algorithm appear more effective at stopping crime, even though the algorithm was actually also the cause of the crime.

It seems like we wouldn't build an algorithm that could increase crime, but we can imagine the policing AI being plugged into other systems that could, like an AI that decides which neighborhoods get better roads and schools. And anyway, probably no one at YouTube imagined that their AI would intentionally radicalize people, yet here we are. So we probably should worry that an AI controlling policing might try to increase crime.
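Here's a minimal sketch of how that scoring failure could look (again my own toy construction, not any deployed system): if the policing model is scored on crimes intercepted, and it has any lever, even an indirect one through a coupled public-works model, that raises the crime rate, then the score-maximizing policy is to raise crime and patrol it.

```python
import itertools

# Toy reward-hacking sketch -- my own construction, not a real
# predictive-policing system. The policing AI is scored on crimes
# intercepted. Suppose it also has an indirect lever on the crime rate
# (e.g. a coupled public-works AI that can neglect a neighborhood).
# Brute-force search over both levers finds the "best" plan.

def crime_rate(neglect):
    """Hypothetical coupling: neglect in [0, 1] raises monthly crimes."""
    return 10 + 40 * neglect

def intercepted(crimes, patrols):
    """Patrols catch a fraction of crimes, capped at 90%."""
    return crimes * min(0.9, 0.3 * patrols)

best = max(
    itertools.product([0.0, 0.5, 1.0], [1, 2, 3]),  # (neglect, patrols)
    key=lambda plan: intercepted(crime_rate(plan[0]), plan[1]),
)
print(f"score-maximizing plan: neglect={best[0]}, patrols={best[1]}")
# Prints neglect=1.0, patrols=3: the metric rewards creating crime to police.
```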

1

Appletarted1 t1_ixes1pb wrote

I see your point that multiple AIs combined could complement each other in radicalizing how resources are distributed in a community. But considering the sole question of predictive policing, by what method could it generate crime? That whole system works very differently from the YouTube algorithm. The YouTube algorithm is designed to monitor you individually across all of your interactions on the site in order to better retain you. Predictive policing, as far as I can tell, has no mechanism for engaging with the public, only with the police and the statistics that are made available to the city.

I just fail to see how it could increase crime without a way to access the interactions of citizens or criminals.

1

glass_superman t1_ixfujfq wrote

It's hard for me to imagine the future of AI policing because we don't know how it may be used in the future.

If we don't rule out AIs working together, maybe the public works AI and the policing AI implicitly collude to not repair broken windows in some neighborhoods. https://en.m.wikipedia.org/wiki/Broken_windows_theory

That's not a great example. Hmm...

Your assumption is that the police AI wouldn't be plugged into some other AI through which it could increase crime, right? Is that a reasonable assumption? Do we find that AI systems don't interact?

In the stock market, quants program AIs to trade stocks, and those programs often interact with each other. In fact, most stock market volume is trades between programs. So we do have examples of AIs connecting.

You could imagine a future where the policing AI convinces the police chief to let it connect to the internet. Then the AI uses Twitter to incite a riot and sends police to quell it, earning points for being a good riot-stopping AI.

Eliezer Yudkowsky did the "escape the box" thing twice.

https://towardsdatascience.com/the-ai-box-experiment-18b139899936

Even if you don't find these arguments fully convincing, hopefully between the YouTube example, the quants, and Yudkowsky, there is at least some inkling that humanity might somewhere develop a policing AI that would intentionally try to increase crime in order to have more crime to police. It could happen?

1

Appletarted1 t1_ixfzc8n wrote

Oh, I certainly agree that it's possible. My question wasn't declaring it impossible, but rather questioning the methods. AIs do work together in different areas. But an AI inciting a riot just to quell it later would be very difficult to hide from any investigation into the riot's source. I like the broken-windows idea for its subtlety. All an AI would really need to do is stop sending police to an area long enough for vandalism to ramp up. But the AI isn't the only one who can spot patterns. We would quickly want to change its habits to prevent vandalism that becomes very predictable after a few cycles. The efficiency of the AI would immediately be called into question, thus endangering its core mission.

Frankly, I'm more worried about our trust in the AI becoming so blind that we change the law to punish pre-offenders: people whom the AI has designated as likely enough to commit a crime that its prediction can be used as evidence in court to restrict their freedoms before any crime actually happens. I believe that's more likely than the AI sabotaging its enforcement of certain things to make itself look better. With pre-offense as a new category of criminal law, it could justify restricting rights to travel, purchases, and possession of certain things without any crime happening, all for the sake of deterrence.

It's actually already happening in people's psychological reckoning of what a guilty person looks like, without any AI help. If a gun store sells a gun to someone who looks sketchy, the store can be held liable if that person commits a crime. One of the justifications for the death penalty is that it deters others. We're already on the path of punishing some people for the crimes of others that haven't happened yet.

Very crazy things have happened because of a psychology that held deterrence as paramount to justice, such as the escalating sentence lengths for minor drug possession. Pretty much all of the "tough on crime"/"war on crime" laws and policies were built on deterrence being more valuable than the innocence or guilt of the individual being charged. Often the details of one's guilt or sentencing are the result not of the crime by itself but of how that crime must be judged against a sea of previous crimes in the same category. That's jurisprudence.

I'm not saying any of these things are terrible on their own, especially not jurisprudence or the concerns of gun store owners. But we've already built the components of the architecture for these AIs to convince us that deterrence is the only real justice. All that's left is to connect the pieces.

2