Comments

TryingHard2023 t1_j1rk2pb wrote

Still concerned about this in the hands of the NYPD and the current mayor

5

SakanaToDoubutsu t1_j1rtpbv wrote

I work in data science and I have absolutely zero idea how you'd actually implement this? Granted, NLP isn't my forte, but other than adding a bunch of words that potentially identify someone's demographics to your stop-word list, I don't see what else you could really do without undermining the integrity of the technique.
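
To make that concrete, the stop-word idea is about the only lever I can picture, and it would look something like this (a minimal sketch assuming a scikit-learn text pipeline; the term list and the toy data are purely illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative only: explicit terms that could signal protected demographics.
# A real deployment would need a far longer, carefully documented list.
demographic_terms = [
    "sorority", "fraternity", "church", "mosque", "synagogue",
    "maternity", "retired", "veteran",
]

# Drop those terms before the model ever sees them.
vectorizer = TfidfVectorizer(stop_words=demographic_terms, lowercase=True)

# Hypothetical training data: past resumes and whether they were advanced.
resumes = [
    "ten years of sales experience, fluent in SQL",
    "sorority president and church volunteer, two years of sales experience",
]
advanced = [1, 0]

screener = make_pipeline(vectorizer, LogisticRegression())
screener.fit(resumes, advanced)
```

And even that only catches explicit terms; anything merely correlated with demographics (zip codes, school names, hobbies) sails right through, which is what I mean about undermining the integrity of the technique.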

78

PlusGoody t1_j1rvbg3 wrote

You deliberately make the AI politically correct, like ChatGPT. It ignores correlations that might result in an inference that disfavors blacks or Hispanics, gays, women, or convicts.

13

Titan_Astraeus t1_j1s42vh wrote

The law is about employers using AI for hiring; they need to be audited/approved to avoid innate bias in the process. The selection AIs are trained on existing employment data. There is bias baked into the system, because humans are naturally biased. So the law is about filtering out/unlearning those biases, or at the very least not introducing more. For example, protected groups tend to be underrepresented. Using an AI that learned in an environment lacking protected groups (minorities, women) just institutionalizes those issues across any company using those AIs.

4

Armoogeddon t1_j1s60ej wrote

It’s been a few years, but also a data scientist with experience in NLP.

NLP would only be one component in a model(s), but even if you somehow standardized on that - and it would be really, really hard to do that - it would be virtually impossible to create what the author(s) of any bill would deem unbiased.

I suspect these people hear “bias” in machine learning and presume it’s a pejorative. It’s not; models trained by humans (“supervised machine learning”) are “biased” with their experience intentionally. Training models isn’t some Klan rally to go after people, at least not in my experience. I have serious qualms about how this stuff gets used and that’s partly why I left the field, but lawyers and career politicians aren’t helping by passing laws regulating a field they don’t understand any better than rocket science (to them).

25

Fig85420 t1_j1s65wh wrote

Didn't read - but presume anything the bureaucrats concoct is fundamentally idiotic, despite the super strong rationale

44

tsgram t1_j1s7gsb wrote

Hahaha, I was about to comment the same. In theory it’s a wonderful idea to combat AI bias, which is a humongous problem (great recent PBS Nova about it). But flash forward three years and there’s an AI Anti-Bias Dept with a bloated budget that has a bunch of do-nothing six-figure cronies at the top and sends millions in “consulting” to for-profit NGOs run by former public servants.

25

09-24-11 t1_j1s7vd4 wrote

Facial recognition technology is imperfect and, as the article states, can disproportionately misidentify people by race and gender. Using an imperfect system as evidence in legal matters puts innocent people at risk of wrongful convictions. In order to prevent wrongful convictions, there need to be regulations in place.

2

n3vd0g t1_j1s84m1 wrote

And btw, this is more of an American problem than a government as a concept problem. We let this corruption happen. It’s insidious and systemic.

2

WikiSummarizerBot t1_j1s8xqf wrote

The Rubber Room

>The Rubber Room is a 2010 documentary film about the reassignment centers run by the New York City Department of Education, which the filmmakers claim exist in various forms in school districts across the United States. Allegedly intended to serve as temporary holding facilities for teachers accused of various kinds of misconduct who are awaiting an official hearing, these reassignment centers have become known amongst the "exiled" teachers subculture as "rubber rooms", so named after the padded cells of psychiatric hospitals.

1

SakanaToDoubutsu t1_j1s9eay wrote

>I suspect these people hear “bias” in machine learning and presume it’s a pejorative. It’s not; models trained by humans (“supervised machine learning”) are “biased” with their experience intentionally. Training models isn’t some Klan rally to go after people, at least not in my experience.

This is exactly it. This reminds me of a project my thesis advisor did where they were looking at retention rates and trying to limit freshman dropouts. One of the best predictors of dropping out they found was self-identifying as black or mixed race, and as a result anyone who entered the university identifying as black or mixed race was automatically placed on a sort of academic probation.

Under this program dropout rates went down pretty substantially, but once the student body found out about it they protested and the statistics department could no longer use demographics data for identifying students at risk of dropping out. However, once they couldn't use that data dropout rates went back up again, so you're damned if you do and damned if you don't.

17

IIAOPSW t1_j1sdlh6 wrote

I guess this is inherently unknowable, but I am itching to know if the dropout stats were meaningfully different for black people who chose not to self-identify as black on the form. For that matter, what fraction of black people pick "prefer not to say" on these sorts of forms, and is that fraction higher or lower than for any other racial demographic?

5

n3vd0g t1_j1sghk0 wrote

No one said unique to America, just that this specific situation is very American; something we’ve dealt with forever. It’s an inb4 "libertarians have entered the chat" comment.

−3

myassholealt t1_j1snzt5 wrote

>Training models isn’t some Klan rally to go after people, at least not in my experience.

In all that I've read about the biases, I never came away with the impression that it was this, or that any biases that exist were maliciously built in, but they nevertheless exist. And when it's implemented in daily life, it has the potential to negatively affect members of the public. And that's not a good thing.

4

supermechace t1_j1so0rp wrote

In all honesty, if these resume screening products are the typical rush-to-market software produced at the cheapest cost, the "AI" is probably some hack job duct-taped together from googled code, APIs, and Stack Exchange posts. Even if there was a data scientist on the project, the programmer probably gave up on understanding the requirements in order to finish the code on time, leaving the resume/interview screener as basically a glorified keyword scoring filter. If the bill allows the source code to be audited, it will be easy to spot inherent keyword bias like demographics or colleges. I haven't heard of interviews being recorded to run through software, but it would be easy to spot that the programmer took shortcuts such as training the model on the same demographic over and over again to get through QA. QA is usually the lowest on the totem pole. Look at the lack of regulation in social media and data privacy; the current laws are already behind in America.
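
For illustration, here's my guess at what such a "glorified keyword scoring filter" might look like under the hood (a toy sketch only; the keywords, weights, and threshold are made up and not from any real vendor's code):

```python
# Toy guess at a rush-to-market keyword scorer (not any real vendor's code).
KEYWORD_WEIGHTS = {
    "python": 3.0,
    "sql": 2.0,
    "kubernetes": 2.5,
    # Exactly the kind of thing an auditor with source access could flag:
    "ivy league": 5.0,  # proxy for socioeconomic background, not skill
}

def score_resume(text: str) -> float:
    text = text.lower()
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in text)

def screen(resumes: list[str], threshold: float = 4.0) -> list[str]:
    return [r for r in resumes if score_resume(r) >= threshold]

# The second resume passes on pedigree alone, which is the bias an audit would catch.
print(screen([
    "Python and SQL developer, state school grad",
    "Ivy League grad, no listed technical skills",
]))
```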

3

myassholealt t1_j1spejk wrote

Anecdote here, but based on my life experience and thinking back on my experience in college, I was a working class first generation college student. Being in the college environment was so different from the world I knew through the 12th grade. My new classmates were so different. I remember overhearing a conversation between classmates where a guy was complaining about being too poor, but then talking about how he can't wait for spring break cause he just wants to go somewhere with a beach to lie down and tan the whole time. Meanwhile I was planning to pick up extra hours at my minimum wage part time job.

Or the dorm experience. I couldn't afford dorming. I commuted 90 minutes each way to class, and would go home, try to take a 40-minute nap before heading out to my 6-10 shift at my job. I had no time for clubs or events, and didn't have a lot of social interactions outside of classes and class work. Hell, I didn't even know my advisor till my senior year. As a first generation student I had no clue about that stuff. This and so many other experiences all tied together to make college very hard at times. And yes, I am a minority. But it's not my being a minority that triggered this. It was my socioeconomic status, my family history, and the access to experiences I had or didn't have before getting to college.

So when someone sees the correlation as black = high dropout rate, the immediate reaction may be to be offended or object, but while on the data side that correlation is the easiest identifier, it doesn't actually identify what may be the real issues. And let's face it, with the history of this country, lots of black people have gone through life with obstacles intentionally erected to make sure this is their reality and the reality of their children. For example, while white WWII veterans were coming back home to buy homes with their GI Bill and pay for education to build a foundation that roots their family and future generations solidly in the middle class, black veterans were not allowed those same privileges in many areas, rooting them and their future generations in the working class.

9

Armoogeddon t1_j1spw46 wrote

I agree wholeheartedly with your last sentence, but it goes way beyond “bias” in models. Models are only one piece of an ever more complex system.

In terms of the impressions you’ve inferred, we could talk in good conscience about that for hours. Maybe five or six years ago, it came to light that visual recognition models performed inherently worse on people of dark skin. The tech companies (I was there at the time at a big prominent one) decided to jump ahead of the bad press by condemning themselves and promising to do better. The media fallout was negligible.

It was bunk. Did AI models perform generally worse on photos of black people/people of African descent? In some cases yes. Was the training data cribbed from the US, where black people make up, what, 13% of the population? Yes. Of course the models performed worse: there was 1/10 the data available to train them! It wasn’t racist; it wasn’t some bias built into the models by the human trainers - there was simply less data. But nobody bothered to elaborate on what should have been a nuanced conversation, and the prevailing opinion jumped to the wrong perception and the wrong remediation. It kicked off an idiotic path upon which we still find ourselves, or watch others traversing.

The real problem is nobody understands what’s behind these models. We understand the approaches they take generally, the “convolutions” applied at various training layers - but nobody understands the logic behind the output models any better than we understand the models behind human reasoning. We can infer things but there’s nothing known; not in a binary or truly understood way.

Yet everybody keeps racing ahead to apply these models in ever more profound and - if you’re in the space - unnerving ways. It’s getting scary, and it’s way worse than the stuff that’s being discussed here, which is also a bad idea.

I guess what I’m saying is it’s so much worse than these idiot politicians realize. They’re fighting a battle that was lost ten years ago.

0

supermechace t1_j1sqmce wrote

I wouldn't say that's the correct conclusion. Techies and academics tend to be weak at understanding optics and cultural/racial issues, claiming "data is king". Academic probation is a negative term; automatically dumping people into that bucket is the most head-slapping PR decision. 20 years ago there were equal opportunity programs at colleges which basically used income level and minority status as a filter to qualify applicants for additional college aid, work opportunities, and mentoring, all without statistical modelling. The correct takeaway in your advisor's case is to bring the findings to a holistic, cross-discipline, cross-culture committee to examine the root cause, such as minorities coming from underfunded school districts that poorly prepared people for college. Decisions made in secret, and especially without representative racial input, continue the blindness.

7

Wowzlul t1_j1t7bi8 wrote

Funny that every comment here seems to be against the idea of regulating AI.

Wonder why.

0

fafalone t1_j1t884o wrote

Given how people like to define "unbiased", it's going to wind up needing to be explicitly targeted to enforce equity... because thanks to centuries of deliberate oppression we simply don't have a country where there are no actual, empirical differences between demographics, and the people who pass laws like this believe that's solved by simply pretending those differences don't exist and enforcing equal outcomes by e.g. simply adding or subtracting points from scores to make all groups equal.

Because that's something that can be done now. Why spend decades doing the actual hard work of building an equitable society when you can prove how virtuous you are right now by simply rigging the numbers?

−1

Unable-Ad3852 t1_j1taju2 wrote

This is one of those ideas where some cousin of some member of the administration has a solution in search of a problem. Like how there's only one company in NY that can produce the city-legal sprinkler pipes, which makes everything building-wise exorbitantly more expensive. I imagine we either end up with one approved vendor for AI services to tackle hiring, and it will suck like hell, or some useless consulting firm that rubber-stamps projects.

2

DifficultyNext7666 t1_j1tk95v wrote

I was just told a model wasn't inclusive enough. It didn't choose enough DEI vendors.

I was like the model doesn't even look at that. I finally was like do you want this to be the most efficient or the blackest?

I eventually just marked the black vendors and took the top 25% before taking the better vendors.

"Why did costs go up?" Was the next question. I don't hate what I do but God damn do I hate other people

8

DifficultyNext7666 t1_j1tlo7e wrote

It shouldn't be that hard. It's an imbalanced class problem. Either adjust the weighting or oversample.
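
To be clear about what I mean, something roughly like this (a rough scikit-learn sketch; the data, labels, and group flag are all placeholders, and a real pipeline would be more involved):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Placeholder data: 1,000 past candidates, 5 features, hired/not-hired labels,
# and a flag for membership in an underrepresented group (~5% of rows).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)
minority = rng.random(1000) < 0.05

# Option 1: adjust the weighting, i.e. upweight the underrepresented rows.
weights = np.where(minority, (~minority).sum() / minority.sum(), 1.0)
clf_weighted = LogisticRegression().fit(X, y, sample_weight=weights)

# Option 2: oversample the underrepresented rows before fitting.
X_up, y_up = resample(X[minority], y[minority], replace=True,
                      n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X[~minority], X_up])
y_bal = np.concatenate([y[~minority], y_up])
clf_oversampled = LogisticRegression().fit(X_bal, y_bal)
```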

The issue is the system will do a worse job. Is that trade off worth it? I think the powers that be would say yes.

Well the bigger issue is how do these idiots enforce it? We'd have to open up code and training data to a 3rd party bias police.

1

Background-Net-4715 OP t1_j1tn30y wrote

Exactly! The issue is not that people think AI models are deliberately biased, it's that they inherently are when there's a human writing the code behind them. As stated in the article, the model will only be as good as the data you feed it, so if the data is biased (for example, resume samples from only white men in a certain state), the model will be biased. This law will force companies wanting to use automated hiring tools to audit them first and ensure bias is eliminated from the model creation point.

1

yogibear47 t1_j1w6gk6 wrote

>Any machine learning, statistical modeling, data analytics, or artificial intelligence-based software used to evaluate employees or potential recruits would have to undergo the scrutiny of an impartial, independent auditor to make sure they’re not producing discriminatory results.

The crux here is how you define discriminatory results and how you handle confounds like socioeconomic background. I don't trust the New York City Council to do this well.
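
For what it's worth, the most common yardstick for "discriminatory results" in hiring is a selection-rate comparison like the EEOC's four-fifths rule, which an auditor could compute along these lines (a sketch with made-up numbers; the group labels and data are illustrative):

```python
import pandas as pd

# Hypothetical audit log: one row per candidate, with the tool's pass/fail decision.
audit = pd.DataFrame({
    "group":    ["A"] * 40 + ["B"] * 60,
    "selected": [1] * 30 + [0] * 10 + [1] * 24 + [0] * 36,
})

# Selection rate per group, and each group's ratio to the best-treated group.
rates = audit.groupby("group")["selected"].mean()
impact_ratio = rates / rates.max()

# Four-fifths rule of thumb: a ratio below 0.8 suggests adverse impact.
print(impact_ratio)
print(impact_ratio[impact_ratio < 0.8])
```

Whether that single ratio is the right definition, and how you adjust it for confounds like socioeconomic background, is exactly the part I wouldn't trust the Council to get right.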

>What’s more, the law mandates that both job candidates and employees be notified the tool will be used to assess them, and gives them the right to request an alternative option.

This seems like a good thing. Resume scanning is pretty shitty already; I can't imagine how bad the AI tools will be. Being able to ask for a manual review seems reasonable, although I dunno how that would scale.

>For example, some of these AI models are programmed to detect certain keywords or combinations of words in the resumes, or in answers to the interview questions. You might have a very sophisticated or complex answer to an interview question. However, if the provided answer doesn’t meet the requirements of whatever the AI has been developed to detect, then your response will not be considered good enough.

I mean, it's all relative. Anyone in a technical role who's been forced to do an initial interview with a recruiter has had the exact same experience as described above, with a human being. It's not like the alternative to this AI, in most cases, is a full blown interview with an interviewing expert - it's often a brief interview with a recruiter who doesn't know anything, or even just an automated rejection.

>When it’s just human bias, there’s only so much damage you can do. If it’s two recruiters assessing resumes all day long, maybe they go through 100 candidates a day. You can run thousands of candidates in an AI system in a matter of minutes, if not seconds. So the scale and speed are very different.

Here is where the guy loses me. It's not like these two recruiters are going to manually go through all of the thousands of candidates in the system. They're going to just automatically reject the ones they can't review, probably using some even shittier resume scanning software to prioritize what they should manually review. So if the AI system captures even one additional candidate, it's already better. Not saying that that makes it worth deploying, but I would expect an expert on the issue to capture this nuance.

1

pixel_of_moral_decay t1_j1x1lf9 wrote

It's stupid to just target AI while humans are free to have whatever bias they want.

The real reason for this is that some HR folks fear computerization puts their jobs at risk, and they've got pretty powerful trade associations who pushed for this to ensure their members have work.

1

Scroticus- t1_j21hopb wrote

Employers will find a way to screen out unqualified candidates regardless of whatever rules they pass.

1