wkmowgli t1_ixeeivk wrote

For this example, we could train an algorithm to estimate the probability of a crime in an area given the amount of patrolling in that area, so the patrolling effect could be normalized out if the algorithm is designed properly (see the sketch below). Designing these algorithms takes a great deal of care. I do know there is active research and development on identifying these biases early (even before deployment), but it'll never be perfect. So it'll likely be a cycle of harming people, being called out, getting fixed, and then going back to step 1.
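A minimal sketch of one way to do that "normalizing out": fit a model of recorded crime that includes patrolling intensity as an explicit input, then score every area at the same reference patrol level so differences in patrolling no longer drive differences in predicted risk. The feature names (`patrol_hours`, `poverty_rate`) and the synthetic data are made up for illustration, and this is only one of several possible adjustment strategies, not the specific method discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-area data (hypothetical feature names).
# 'patrol_hours' is the confounder we want to normalize out;
# 'poverty_rate' stands in for any other area-level feature.
rng = np.random.default_rng(0)
n = 5_000
patrol_hours = rng.gamma(shape=2.0, scale=10.0, size=n)
poverty_rate = rng.uniform(0.0, 0.5, size=n)

# Recorded crime depends on both the underlying rate and how much
# patrolling there is (more patrolling -> more crimes *recorded*).
true_logit = -3.0 + 4.0 * poverty_rate + 0.05 * patrol_hours
recorded_crime = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Fit P(recorded crime | area features, patrolling).
X = np.column_stack([poverty_rate, patrol_hours])
model = LogisticRegression(max_iter=1000).fit(X, recorded_crime)

# "Normalize out" patrolling: score every area at the same
# reference patrol level so predicted risk reflects the other
# features rather than how heavily the area was patrolled.
reference_patrol = np.full(n, patrol_hours.mean())
X_adjusted = np.column_stack([poverty_rate, reference_patrol])

raw_risk = model.predict_proba(X)[:, 1]
adjusted_risk = model.predict_proba(X_adjusted)[:, 1]

# The most heavily patrolled areas look less risky once the
# patrolling effect is held fixed.
top = np.argsort(patrol_hours)[-5:]
print("raw risk:     ", np.round(raw_risk[top], 3))
print("adjusted risk:", np.round(adjusted_risk[top], 3))
```

Even with this kind of adjustment, the correction is only as good as the model of how patrolling inflates recorded crime, which is why the care and auditing mentioned above still matter.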