
Titan_Astraeus t1_j1s42vh wrote

The law is about employers using AI for hiring: they need to be audited/approved to avoid innate bias in the process. The selection AIs are trained on existing employment data, and there is bias baked into that data, because humans are naturally biased. So the law is about filtering out/unlearning those biases, or at the very least not introducing more. For example, protected groups tend to be underrepresented. Using an AI that learned in an environment lacking protected groups, minorities, and women just institutionalizes those issues across every company using those AIs.

4

Background-Net-4715 OP t1_j1tn30y wrote

Exactly! The issue is not that people think AI models are deliberately biased, it's that they inherently are when a human shapes the data and code behind them. As stated in the article, the model will only be as good as the data you feed it, so if the data is biased (for example, resume samples from only white men in a certain state), the model will be biased. This law will force companies wanting to use automated hiring tools to audit them first and ensure bias is eliminated from the model-creation point.
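To make the "model is only as good as the data" point concrete, here's a minimal sketch (not from the article; the numbers and group labels are made up). Even the simplest possible "model", one that just learns historical hire rates per group, faithfully reproduces whatever disparity was in the training records:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, was_hired).
# Group "A" is overrepresented and was hired far more often.
records = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 5 + [("B", False)] * 15
)

# "Train" the simplest possible model: the hire rate per group.
hired = Counter(group for group, was_hired in records if was_hired)
total = Counter(group for group, _ in records)
hire_rate = {group: hired[group] / total[group] for group in total}

# The model just mirrors the historical disparity it was fed.
print(hire_rate)  # {'A': 0.8, 'B': 0.25}
```

A real screening model is far more complex, but the failure mode is the same: it optimizes to match past decisions, bias included, which is exactly what these audits are meant to catch.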

1

ripstep1 t1_j1xnf5y wrote

Gotta teach computers racist hiring practices it seems

1