Comments

MrTzatzik t1_ixgeb2o wrote

You don't have to program it like that, because the AI would learn it from previous lawsuits anyway. Just like what happened at Amazon: they were using AI to hire people, but the AI didn't want to hire Black people or women, based on the statistics it learned from.
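A toy sketch of the mechanism (made-up data, not Amazon's actual pipeline): even with the protected attribute dropped from the inputs, a correlated proxy feature lets a model relearn the old bias straight from the historical labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)             # 0 = majority, 1 = protected group
skill = rng.normal(0, 1, n)               # actual qualification
proxy = group + rng.normal(0, 0.3, n)     # e.g. a resume keyword that leaks group

# Historical labels: equally skilled group-1 applicants were hired less often.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the explicit group column; the proxy still carries it.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, differing only in the proxy feature:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])   # the group-1 lookalike scores lower
```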

18

Nervous-Masterpiece4 t1_ixgyl7f wrote

So the training data would need to be manipulated to remove racial bias.

At which point the opportunity to add new biases would arise… nicely encoded in indecipherable AI networks.

> it does things. We’re just not completely sure how.

5

pipopapupupewebghost t1_ixgz9qf wrote

Why include race in it? Why not just say "this person" if race isn't part of the crime?

1

Petersburg_Spelunker t1_ixoqemk wrote

Facts, how can you trust a computer when ya can't trust the money... err, people who made them?

1

sourpussmcgee t1_ixg606f wrote

It will always be too early. In no way should AI do criminal justice work.

13

j0n_ t1_ixgkmkk wrote

Why? In theory it should be consistently rule-based.

4

asdaaaaaaaa t1_ixgurk1 wrote

> In theory it should be consistently rule-based.

Yes, and in theory a lot of things would be great.

4

strugglebusn t1_ixgzb7n wrote

Never know till you try. Purely data-based, I'd love to see the letter of the law printed out. Not opinion.

−2

i_demand_cats t1_ixhia1q wrote

Laws all have wiggle room for judges to give harsher or lighter sentences based on circumstances. There are very few laws that are ironclad enough in their wording for a machine to be able to interpret them correctly 100% of the time.

3

strugglebusn t1_ixgtyp2 wrote

Might be better than half the judges these days. Cite the precedent and the exact letter of the law. Realistically, I think an AI printout with a recommendation, paired with a human judge, would go far.

“Welp the AI recommends 10 years and $X fine because that’s the law buuuuuuut I’m going to do 6 months and $0” would at least make it more blatant when judges disregard the letter of the law.

2

markhouston72 t1_ixgtfm0 wrote

In theory is the key. As an example, look up the story about Galactica, Meta's AI scientific-paper generator, which they posted to GitHub earlier this week. They pulled it down after two days. Early users identified that it was inherently biased against POC and also generated a lot of false claims.

1

j0n_ t1_ixh705o wrote

But research-paper writing is an inherently creative process, even if research methods themselves are not. That's very different. Also, I followed the story, and it generated surprisingly correct claims. The problem was that it made up sources, and therefore encouraged junk science and academic dishonesty, going as far as assigning fake papers to real authors in a field.

1

jsgnextortex t1_ixhdpv4 wrote

This has absolutely no relation to passing judgement on people... you are comparing one AI's dataset to another AI's dataset with completely different entries. AI doesn't go against POC; AI doesn't even know wtf POC is... conclusions like this only show that people have absolutely no clue how AI works and base a lot of their judgements on plain ignorance.

1

OraxisOnaris1 t1_ixirk57 wrote

There's a lot more to the criminal justice system than rules. A lot of nuance happens in the implementation of laws and, frankly, the reason behind a crime is sometimes more important than the crime itself.

1

The_Bagel_Fairy t1_ixg3qps wrote

I'm all for replacing Judge Judy with AI. I would watch it. I would be slightly pissed if I went to law school and computers were taking my job, though.

6

LiberalFartsMajor t1_ixfwipf wrote

We already know that technology is just as racist as the people that program it.

1

throwaway836282672 t1_ixg1f4h wrote

> the people that program it.

No, as racist as the data fed into it.

If the technology is only evaluated on pale-skinned individuals, then the technology will only be apt at that data type. You're only as strong as your weakest unit test.
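A toy illustration of that point (numbers fabricated): an evaluation set dominated by one group can report a healthy overall score while the model is a coin flip on everyone else, so the metric has to be sliced per group.

```python
import numpy as np

rng = np.random.default_rng(1)
n_a, n_b = 900, 100                # group B badly under-represented in the test set
acc_a, acc_b = 0.95, 0.50          # model works for A, guesses randomly for B

correct = np.concatenate([rng.random(n_a) < acc_a, rng.random(n_b) < acc_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

print("overall accuracy:", correct.mean())            # ~0.90, looks fine
for g in ("A", "B"):
    print(g, correct[group == g].mean())              # A ~0.95, B ~0.50
```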

13

stnlycp778 t1_ixgdooq wrote

Yeah, that's what this country needs.

1

Head-Gap8455 t1_ixgifxh wrote

Verdict: guilty… please press pound to repeat this information. Goodbye.

1

gamergeek t1_ixgu0to wrote

Not if you're a big fan of racially and class-biased outcomes. That's all you've got to train your model on.

1

E_Snap t1_ixgux0c wrote

*Sponsored by “Judges would like to continue to operate with impunity” of America.

1

JeevesAI t1_ixgz8qm wrote

A better application would be: train a classifier to predict bias in the criminal justice system and then determine feature importances and counterfactuals which could reverse biased trends.
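A rough sketch of what that could look like (synthetic data and hypothetical feature names, not a real case dataset): fit a model to historical sentencing outcomes, check which features it leans on, then flip a single feature to get a counterfactual prediction.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "prior_offenses": rng.poisson(1.5, n),
    "offense_severity": rng.integers(1, 6, n),
    "neighborhood_id": rng.integers(0, 10, n),   # potential proxy variable
})
# Synthetic "harsh sentence" labels with a built-in neighborhood effect.
harsh = (df.offense_severity + 0.5 * df.prior_offenses
         + (df.neighborhood_id < 3) * 2 + rng.normal(0, 1, n)) > 5

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(df, harsh)

# Which features is the model leaning on?
print(dict(zip(df.columns, model.feature_importances_.round(3))))

# Counterfactual: same person, different neighborhood.
person = pd.DataFrame([{"prior_offenses": 1, "offense_severity": 3,
                        "neighborhood_id": 2}])
flipped = person.assign(neighborhood_id=8)
print(model.predict_proba(person)[0, 1], model.predict_proba(flipped)[0, 1])
```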

1

si828 t1_ixh75ph wrote

Dear god yes it is!

1

skunksmasher t1_ixjisfz wrote

Seriously, what would the difference be between the current system and AI justice? Our current system is a bought-and-paid-for shit show favoring the rich.

1

techietraveller84 t1_ixfwn4h wrote

I would worry about AI because it would start to feel like we are one step closer to Minority Report or Judge Dredd-type justice.

0

throwtheclownaway20 t1_ixg5hyh wrote

Having a computer interpret laws isn't predictive. They're not going to be arresting people because the AI crunched numbers and decided those people were deffo murderers.

1

garlopf t1_ixg4hz1 wrote

Hint: it will always be too early. But we will do it anyway, and then it will be Judge Dredd all over again.

0

eeeeeeeeeepc t1_ixh2szl wrote

>Writing in the IEEE Technology and Society Magazine, Chugh points to the landmark case Ewert v. Canada as an example of the problems posed by risk assessment tools in general. Jeffrey Ewert is a Métis man serving a life sentence for murder and attempted murder. He successfully argued before the Supreme Court of Canada that tests used by Corrections Services Canada are culturally biased against Indigenous inmates, keeping them in prison longer and in more restrictive conditions than non-Indigenous inmates.

The court only ruled that the test might be culturally biased and that the prison authorities needed to do more research on it. Ewert's own expert didn't argue that he knew the direction of the bias.

The same wishful thinking appears later in the article:

>"Ewert tells us that data-driven decision-making needs an analysis of the information going in—and of the social science contributing to the information going in—and how biases are affecting information coming out," Chugh says.

>"If we know that systemic discrimination is plaguing our communities and misinforming our police data, then how can we be sure that the data informing these algorithms is going to produce the right outcomes?"

>Subjectivity is needed

Does "systemic discrimination" mean that police focus enforcement on indigenous communities, or that they ignore crimes there? Again this is a bias claim of indeterminate direction and size. If we think that differences in crime reporting and clearance rates exist, let's estimate them, adjust the data, and respond rationally rather than retreating into "subjectivity" not disciplined by mathematical consistency.

−1