
yogibear47 t1_j1w6gk6 wrote

>Any machine learning, statistical modeling, data analytics, or artificial intelligence-based software used to evaluate employees or potential recruits would have to undergo the scrutiny of an impartial, independent auditor to make sure they’re not producing discriminatory results.

The crux here is how you define discriminatory results and how you handle confounds like socioeconomic background. I don't trust the New York City Council to do this well.

>What’s more, the law mandates that both job candidates and employees be notified the tool will be used to assess them, and gives them the right to request an alternative option.

This seems like a good thing. Resume scanning is pretty shitty already; I can't imagine how bad the AI tools will be. Being able to ask for a manual review seems reasonable, although I dunno how that would scale.

>For example, some of these AI models are programmed to detect certain keywords or combinations of words in the resumes, or in answers to the interview questions. You might have a very sophisticated or complex answer to an interview question. However, if the provided answer doesn’t meet the requirements of whatever the AI has been developed to detect, then your response will not be considered good enough.

I mean, it's all relative. Anyone in a technical role who's been forced to do an initial interview with a recruiter has had the exact same experience as described above, with a human being. It's not like the alternative to this AI, in most cases, is a full-blown interview with an interviewing expert - it's often a brief interview with a recruiter who doesn't know anything, or even just an automated rejection.
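For context, the "keyword detection" the article describes is often little more than weighted substring matching over the resume text. A minimal sketch of how such a screener works (the keywords, weights, and cutoff here are hypothetical, not any vendor's actual model):

```python
# Minimal sketch of keyword-based resume screening.
# Keywords, weights, and threshold are hypothetical illustrations,
# not any real vendor's scoring model.

def score_resume(text: str, keywords: dict[str, int]) -> int:
    """Sum the weights of keywords that appear in the resume text."""
    lowered = text.lower()
    return sum(weight for kw, weight in keywords.items() if kw in lowered)

KEYWORDS = {"python": 3, "kubernetes": 2, "agile": 1}  # hypothetical
THRESHOLD = 4  # hypothetical cutoff

resume = "Built Python services deployed on Kubernetes."
passed = score_resume(resume, KEYWORDS) >= THRESHOLD
```

A sophisticated answer that happens not to contain the magic substrings scores zero here, which is exactly the failure mode the article is pointing at - and also exactly how a rushed human recruiter skims for buzzwords.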

>When it’s just human bias, there’s only so much damage you can do. If it’s two recruiters assessing resumes all day long, maybe they go through 100 candidates a day. You can run thousands of candidates in an AI system in a matter of minutes, if not seconds. So the scale and speed are very different.

Here is where the guy loses me. It's not like these two recruiters are going to manually go through all of the thousands of candidates in the system. They're going to automatically reject the ones they can't review, probably using some even shittier resume-scanning software to prioritize what they should manually review. So if the AI system captures even one additional candidate, it's already better. Not saying that makes it worth deploying, but I would expect an expert on the issue to capture this nuance.
