
Critical_Ad_7778 t1_iym46db wrote

I recommend reading the book "Weapons of Math Destruction". The author describes several mathematically sound algorithms that produce terrible outcomes.

Here is an example: An algorithm helps judges decide if someone should get probation. Part of the calculation includes the likelihood that they will be arrested again.

The problem is that, currently, you're more likely to be arrested if you're Black.

The algorithm becomes racist accidentally. This is just one example of how dangerous it is to base all of your choices on "logic and reason".
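The dynamic described above can be sketched in a few lines of Python. This is a toy simulation, not the actual algorithm from the book: all group names, rates, and the scoring rule are hypothetical, chosen only to show how a biased input (arrest records) contaminates a "sound" calculation.

```python
import random

random.seed(0)

# Two hypothetical groups commit the SAME number of offenses,
# but group B is policed more heavily, so more of its offenses
# end in a recorded arrest.
ARREST_RATE = {"A": 0.4, "B": 0.8}  # P(an offense leads to arrest)

def simulate_prior_arrests(group, n_offenses=5):
    """Count how many of n_offenses show up as arrests on record."""
    return sum(random.random() < ARREST_RATE[group] for _ in range(n_offenses))

def risk_score(prior_arrests):
    """A 'mathematically sound' rule: more recorded arrests -> higher risk."""
    return min(1.0, prior_arrests / 5)

# Average risk score per group, with identical underlying behavior.
scores = {
    g: sum(risk_score(simulate_prior_arrests(g)) for _ in range(1000)) / 1000
    for g in ("A", "B")
}
print(scores)  # group B is rated far "riskier" despite identical behavior
```

The math is internally consistent; the bias enters entirely through the data the score is computed from.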

17

experimentalshoes t1_iymk6ja wrote

That’s only true if the algorithm is written to build patterns and reintegrate them into its decisions, which was a human decision to program, AKA hubris. There would be no problem if it were written to evaluate only the relevant data. It wouldn’t fix the underlying social problems, of course, but ideally it would free up some human staff who could be put on that task.

1

Critical_Ad_7778 t1_iymm9uv wrote

I want to understand your argument. My writing might sound snarky, so I apologize to you in advance.

  1. Wouldn't the algorithm be written by a human?
  2. Wouldn't the reintegration happen by a human?
  3. Aren't all decisions made by humans?

I don't understand how to remove the human element.

4

experimentalshoes t1_iymnh7e wrote

I did mention that it was written by a human, yes, but the reintegration part is called “machine learning” and doesn’t necessarily require any further human input once the algorithm is given its authority.
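The "reintegration" loop is the dangerous part, and it can be sketched in a few lines. This is a hypothetical caricature, not any real system: the point is only that when the score influences policing, and the score is then recomputed from the policing data, it can ratchet upward with no change in anyone's behavior.

```python
# Hypothetical feedback loop: the risk score drives scrutiny,
# scrutiny drives recorded arrests, and the score is recomputed
# from those records. All numbers are made up for illustration.
arrest_rate = 0.5
history = []
for step in range(5):
    score = min(1.0, arrest_rate)                       # "risk" read off arrest data
    arrest_rate = min(1.0, arrest_rate + 0.1 * score)   # higher score -> more scrutiny -> more arrests
    history.append(score)

print(history)  # the score climbs every iteration, behavior unchanged
```

No human re-enters the loop after the first iteration, which is exactly the "given its authority" scenario.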

I’m trying to say the racist outcome in this example isn’t the result of some tyranny of numbers that we need to keep subjugated to human sentiment or something. It’s actually the result of human overconfidence in the future mandate of our technological achievements, which is an emotional flaw, rather than something inherent to their simple performance as tools.

3