rami_lpm t1_ixe8yf0 wrote

> If you're not doing anything illegal, you get let go 99% of the time. If you act uncooperative or aggressively you invite attention.

Sure. No 'walking while brown' type of arrests in this magical neighborhood of yours.

>As B, if you know you're more likely to be punished than A for doing something, why would you do it?

This is straight-up victim shaming.

−1

FaustusC t1_ixed3hx wrote

My dude, those are a statistically minuscule share of arrests. If we counted all of them together over 10 years, they'd be a fraction of a percent of legitimate stops and arrests.

No, it's common sense. I don't speed because I don't want to get stopped. I drive a dumb car, in a dumb color, with a vanity plate. I already have a target on myself. Why would I give them a legitimate reason to screw with me? If an action is illegal, and you know you're more likely to be punished for committing it, why would you knowingly take the risk? How is that victim blaming?

2

rami_lpm t1_ixf3wr6 wrote

I understand it may be so now, but if they use historical data to train the AI, then any racial bias from previous decades will show.

What if you were targeted not by your actions but by the looks of your car?

All I'm saying is that the training data needs to be vetted by several academic parties to eliminate as much bias as possible.

1

FaustusC t1_ixf6rtn wrote

Then I don't think you understand how it works. The bias will train itself out within a few cycles, because that's how it works. The AI will start using that "flawed" data and then, as it progresses, will slowly integrate its new findings into the pool. It may take a few years, but if policing was misweighted, the AI would allocate the resources where they were needed. If you train an AI to do basic addition and to know numbers, once it knows enough numbers you can't tell it 1+1=6. If I ask the AI for the number between 7 and 9, it will list off 6+2, 5+3, 4+4, etc. I can tell it 2+3 is the answer, but it will search and say I'm incorrect because, based purely on the data, I cannot be correct. We can compare that to the earlier arguments: the AI can see crime at points X, Y, and Z in neighborhood B but only at point Q in neighborhood A.
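
A toy sketch of that addition analogy in Python, with made-up numbers, purely for illustration:

```python
# The "number between 7 and 9" analogy: the model accepts whatever the data
# supports and rejects a claim that contradicts it, no matter who asserts it.
target = 8

# Every pair of non-negative integers that actually sums to the target.
supported = [(a, target - a) for a in range(target + 1)]
print(supported)  # [(0, 8), (1, 7), (2, 6), ..., (8, 0)]

# Someone insists 2 + 3 is the answer; the model checks it against the data.
claim = (2, 3)
print(sum(claim) == target)  # False -- the claim is rejected
```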

I am lol. "Yes sir, no sir, here's my license sir, have a nice night."

And I'm saying that letting "academic parties" get their hands on it is simply going to nudge the bias the opposite way. Positive bias. That will get us nowhere until the AI fixes itself, at which point people will screech that somehow the AI went racist again lol. Academia has a serious issue with bias, but that's an entirely different argument.

2

rvkevin t1_ixgogni wrote

>The AI can see crime at points X, Y and Z in neighborhood B but crime in Q in neighborhood A.

The AI doesn't see that. The algorithm is meant to predict crime, but you aren't feeding actual crime data into the system; you're feeding in police interactions (and all the biases that individual officers have). More data doesn't always fix the issue, because the issue is in how the data is measured.

0

FaustusC t1_ixh5ydr wrote

But that's the thing: unless someone's getting hit with completely falsified evidence, the arrest itself doesn't become less valid. It's irrelevant to the data whether a crime is uncovered because of a biased interaction or an unbiased one. The prediction model itself will still function correctly. The issue isn't measuring the data; it's getting you to acknowledge that the data is accurate. A crime doesn't cease to be a crime just because it wasn't noticed for the right reasons.

1

rvkevin t1_ixjrv88 wrote

>But that's the thing: unless someone's getting hit with completely falsified evidence, the arrest itself doesn't become less valid.

It still doesn’t represent actual crime; it represents crime that the police enforced (i.e. based on police interactions). For example, if white and black people carry illegal drugs at the same rate, yet police stop and search black people more often, arrests will show a disproportionate amount of drugs among black people, and the system will therefore devote more resources to black neighborhoods even though the underlying data doesn’t merit that response.
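
A rough simulation of that example with invented rates, just to show how unequal search rates alone skew the arrest counts:

```python
import random

random.seed(0)

CARRY_RATE = 0.05                      # both groups carry at the same rate
SEARCH_RATE = {"A": 0.02, "B": 0.10}   # group B is searched 5x as often
POPULATION = 100_000                   # equal population sizes

arrests = {"A": 0, "B": 0}
for group in ("A", "B"):
    for _ in range(POPULATION):
        carrying = random.random() < CARRY_RATE
        searched = random.random() < SEARCH_RATE[group]
        if carrying and searched:
            arrests[group] += 1

# Arrest data shows roughly 5x as many drug arrests in group B, even though
# the underlying carry rate is identical -- the skew comes from enforcement.
print(arrests)
```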

> It's irrelevant to the data whether or not a crime is uncovered because of a biased interaction or an unbiased one.

How is a prediction model supposed to function when it doesn’t have an accurate picture of where crime occurs? If you tell the model that all of the crime happens in area A because you don’t enforce area B that heavily, how is the model supposed to know that it’s missing a crucial variable? Take, for example, speed-trap towns that get something like 50% of their funding from enforcing speed limits on a mile stretch of highway. How is the system supposed to know that speeding isn’t disproportionately worse there, despite the mountain of traffic tickets given out?

>The issue isn't measuring the data, it's getting you to start acknowledging data accuracy.

How you measure the data is crucial because it’s easy to introduce selection biases into the data. What you are proposing is exactly how they are introduced, since you don’t even seem to be aware it’s an issue. It is about more than just whether each arrest has merit. The whole issue is that you are selecting a sample of crime to feed into the model, and that sample is not gathered in an unbiased way. Instead of measuring crime, you want to measure arrests, which are not the same thing.

1

notkevinjohn t1_ixebwfn wrote

No, it's not; it's game theory. There may be totally valid reasons for doing that thing which might be critical to understand. It's only victim shaming if you start from the assumption that they are doing that thing because they are stupid, lack self-control, or have some other undesirable characteristic.

1