Comments

[deleted] t1_j1rlvy5 wrote

These stupid AI tools used by recruiters are a disaster. Maybe they work for very specific jobs, but they're useless in tech.

Recruiters are telling people what to put on their resume just to get past the AI, and then it's up to the hiring managers to decipher those resumes and figure out who to interview.

I have seen resumes from very qualified people whose resume was completely useless, and it wasn't because it was a shitty resume, it's just how it needs to be done now.

54

bullettrain1 t1_j1uioq8 wrote

Are you referring to putting ‘keywords’ in resumes? Because that whole thing is a myth to sell resume enhancement and review services; the applications that handle uploaded resumes don’t actually consider those.

−1

[deleted] t1_j25txlh wrote

It’s not a myth when qualified applicants need to include specific words just to get past an AI

1

BraveNewCurrency t1_j1szem8 wrote

>How can AI be racist if it's only looking at raw data. Wouldn't it be inherently not racist? I don't know just asking.

https://en.wikipedia.org/wiki/Tay_(bot)

https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai

https://www.ladbible.com/news/latest-ai-bot-becomes-racist-and-homophobic-after-learning-from-humans-20211104

https://hub.jhu.edu/2022/06/21/flawed-artificial-intelligence-robot-racist-sexist/

The problem is "Garbage In, Garbage Out".

Most companies have a fair amount of systemic racism already in their hiring. (Does the company really reflect the society around it, or is it mostly white dudes?)

So if you train the AI on the existing data, the AI will be biased.

But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect. AI does not, so it won't get better over time. Another confounding factor is that most AIs are terrible at explaining WHY they made a decision. It takes a lot of work to pull back the curtain and analyze the decisions the AI is making. (And nobody wants to challenge the expensive AI system that magically saves everybody work!)

33

sabinegirl t1_j1tjiq2 wrote

this, the ai is gonna have to exclude names and even colleges from the data sets, or it will be a mess.

8

BraveNewCurrency t1_j1vaoml wrote

This is the wrong solution.

There are hundreds of other things the AI can latch on to instead: males vs females write differently, they often have different hobbies, they sometimes take different classes, etc.

The problem is lazy people who want to have the computer magically pick "good people" from "bad people" when 1) the data is biased, 2) they are feeding the computer literally thousands of irrelevant data points, 3) nobody has ever proved that the data can actually differentiate.

https://www.brainyquote.com/quotes/charles_babbage_141832

What we need are more experiments to find ground truth. For example, Google did a study and found that people who went to college only had a slight advantage, and only for the first 6 months on the job. After that, they could find no difference.

If that researcher was studying thousands of irrelevant data points, that insight probably would have been lost in the noise.

4

cheapsexandfastfood t1_j22gqqu wrote

Seems like Google should be able to figure it out if anybody could.

They have enough resumes and enough employee review data to start with.

1

dissident_right t1_j1u220s wrote

AI is not biased. It's precisely its lack of bias that causes AI to see patterns in society that ignorant humans would rather turn their eyes away from.

>But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect.

Here 'introspect'/"do better" means 'be bullied/pressured into holding factually incorrect positions'.

Most likely the Amazon AI was as proficient at selecting qualified candidates as any human HR department, or more so. It was shut down not due to inaccuracy of the algorithm at selecting qualified candidates, but rather for revealing a reality about qualified candidates that did not align with people's a priori delusions about human equality.

−7

AmbulatingGiraffe t1_j1ugwcm wrote

This is objectively incorrect. One of the largest problems related to bias in AI is that accuracy is not distributed evenly across different groups. For instance, the COMPAS exposé revealed that an algorithm being used to predict who would commit crimes had significantly higher false positive rates (saying someone would commit a crime who then didn’t) for black people. Similarly, the accuracy was lower for predicting more serious violent crimes than misdemeanors or other petty offenses. It’s not enough to say that an algorithm is accurate, therefore it isn’t biased and is just showing truths we don’t want to see. You have to look very, very carefully at where exactly the model is wrong and whether it’s systematically wrong for certain kinds of people/situations. There’s a reason this is one of the most active areas of research in the machine learning community. It’s an important and hard problem with no easy solution.
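
To make that concrete, here's a minimal sketch (entirely invented numbers, pandas assumed) of the kind of per-group error check being described, where the same model can have very different false positive rates for different groups:

```python
import pandas as pd

# Hypothetical predictions and outcomes for two groups (illustrative only).
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,   0,   1,   0,   1,   1,   0,   1],  # 1 = "predicted to reoffend"
    "actual":    [0,   0,   1,   0,   0,   0,   0,   1],  # 1 = "actually reoffended"
})

# False positive rate per group: flagged as positive among those who were actually negative.
for name, g in df.groupby("group"):
    negatives = g[g["actual"] == 0]
    fpr = (negatives["predicted"] == 1).mean()
    print(f"group {name}: false positive rate = {fpr:.2f}")
```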

3

AboardTheBus t1_j1w7k4l wrote

How do we differentiate between bias and facts that are true but uncomfortable for people to express?

1

Alexstarfire t1_j1u7510 wrote

Interesting argument. Anything to back it up?

2

dissident_right t1_j1ub67r wrote

>Anything to back it up?

Reality? Algorithms are used extensively by thousands of companies in thousands of fields (marketing, finance, social media etc.). They are used because they work.

A good example of this would be the University of Chicago's 'crime prediction algorithm' that attempts to predict who will commit crimes within major American cities. It has been under attack for supposed bias (racial, class, sex, etc. etc.) since the outset of the project. Despite this, it is correct in 9 out of 10 cases.

−3

Alexstarfire t1_j1uspe5 wrote

A source for how well crime predicting AIs work isn't the same as one for hiring employees. They aren't interchangeable.

1

dissident_right t1_j1w48yb wrote

>They aren't interchangeable.

No, but unfortunately we cannot say how well the algorithm 'would' have worked in this instance, since it was shut down before it was given the chance to see if its selections made good employees.

The point remains - if algorithms are relied on to be accurate in 99.9% of cases, and if an algorithm can be accurate even with something as complex as 'who will be a criminal', why would this be the only area where AI is somehow unreliable/biased?

As I said, it's the humans who possess the bias. They saw 'problematic' results and decided, a-priori, that the machine was wrong. But was it?

1

Dredmart t1_j1uq75f wrote

They linked you proof, and you're still full of shit. You sound exactly like a certain group that rose in the early 1900s.

1

TheJocktopus t1_j1v6wsk wrote

Incorrect. AI can definitely be biased. Where do you think the data that it's trained on comes from? Another AI? No, it comes from people. An AI is only as accurate as its training data.

A famous example is that AIs often come to the conclusion that black Americans are healthier than other Americans and thus do not need as much assistance with their health. In reality, the opposite is true, but the AI doesn't realize that because it's just looking at the data given to it. That data shows that black Americans are less likely to go to the hospital, so the AI assumes this is because there is nothing wrong with them. In reality, most humans would recognize that it's because black Americans are more likely to be poor and can't afford to go to the hospital as frequently.

A few more examples that could happen: an AI image-generation program might be more likely to draw teachers as female, since that would be what most of the training data depicted. An AI facial recognition system might be less accurate at identifying Hispanic people by their facial features because fewer images of Hispanic people were included in the training data. An AI that suggests recommended prison sentences might give harsher sentences to black people because it was trained using previous decisions made by human judges, who tend to give harsher sentences to black people.

TL;DR: AI technology doesn't exist in a vacuum. People have biases, so AIs also have biases. AIs can have less bias if you're smart about what training data you use and what information you hide from the AI.

1

BraveNewCurrency t1_j1vd8a1 wrote

>Most likely the Amazon AI was as, or more proficient at selecting qualified candidates than any human HR department.

Why is that "most likely"? Citation needed.

(This reminds me of the experiment where they hired grad students to 'predict' if a light would turn on or not. The light turned on 80% of the time, but the grad students were only right about 50% of the time because they tried to predict the light. The researchers also tried monkeys, who just leaned on the button, and were right 80% of the time.

An AI is like those monkeys -- because 80% of good candidates are male, it thinks excluding female candidates will help the accuracy. But that's not actually true, and you are literally breaking the law.)
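
To put rough numbers on that anecdote (the 80% figure is purely illustrative, not from the actual study): always guessing the majority outcome beats trying to second-guess the pattern.

```python
p = 0.8                                  # light is on 80% of the time (illustrative)

always_on = p                            # "monkey" strategy: always guess the majority outcome
matching = p * p + (1 - p) * (1 - p)     # "grad student" strategy: guess "on" 80% of the time at random

print(always_on)   # 0.80
print(matching)    # 0.68
```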

What if the truth is that a resume alone is not enough to accurately predict if someone is going to work out in a position? What if a full job interview is actually required to find out if the person will fit in or be able to do good work? What if EVERY ranking algorithm is going to have bias, because there isn't enough data to accurately sort the resumes?

Having been a hiring manager, I have found that a large fraction of resumes contain made-up bullshit. AI is just GIGO with extra steps.

This reminds me of back in the 80's: Whenever a corporation made a mistake, they could never admit it -- instead, everyone would "blame it on the computer". That's like blaming the airplane when a bolt falls off (instead of blaming poor maintenance procedures.)

1

dissident_right t1_j1w4upw wrote

>Why is that "most likely"? Citation needed.

I can't provide a citation since the program was shut down before it had a chance to prove its accuracy.

As I said, simple observation will demonstrate to you that just because a progressive calls an AI's observation 'problematic' (e.g. the Chicago crime prediction algorithm), that doesn't mean 'problematic' is the same as inaccurate.

Again, why would you assume that an AI algorithm couldn't predict employee suitability, seeing as how well algorithms predict... basically everything else about our world.

You are simply trying to avoid a conclusion that you don't want to consider - What if men are naturally better suited to be software engineers?

1

BraveNewCurrency t1_j1wvngh wrote

>What if men are naturally better suited to be software engineers?

First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.

Second, did you read that link you sent? It claims nothing of the sort, only that "there are physical/mental differences between men and women". No shit, Sherlock. But just because the "average male is slightly taller than the average female" doesn't mean "all men are tall" nor "women can't be over 7ft tall". By the same token, "men are slightly better at task X on average" doesn't mean there aren't many women who can beat most men at that task.

Third, if we implement what you are proposing, then you are saying we should not evaluate people on "how good they are at the job", but merely on some physical attribute. Can you explain how that leads to "the best person for the job"?

>a simple observation however will demonstrate to you that just because a progressive calls an AI's observation 'problematic'

Haha, you keep implying that I'm ignorant (should I "do my own research?") because I point out the bias (you never addressed the constant racism by the leading AI companies) but you don't cite any data and recite 100-year-old arguments.

Wait. Are you Jordan Peterson?

1

dissident_right t1_j1wxp3a wrote

>First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.

Well... I live in a world in which 99 percent of firefighters are male, so I am guessing the answer is "All the intelligent people conceded that bigger male muscles/stamina made men better at being firefighters and no-one made a big deal out of a sex disparity in fire fighting"?

I'm gonna assume here that you live in some sort of self-generated alternate reality where women are just as capable of being firefighters as men, despite being physically weaker, smaller and lacking in stamina (relative to men)?

>doesn't mean there aren't many women who can beat most men at that task

No, but if I am designing an AI algorithm to select who will be best at 'task X', I wouldn't call the algorithm biased/poorly coded if it overwhelmingly selected from the group shown to be better suited for task X.

Which is, more or less what happened with the Amazon program. Kinda ironic seeing as they... rely on algorithms heavily in their marketing of products, and I am 100% sure that 'biological sex' is one of the factors those algorithms account for when deciding what products to try and nudge you towards.

>constant racism by the leading AI companies

I haven't 'addressed' it because I think the statement is markedly untrue. Many people call the U of Chicago crime prediction algorithm "racist" for disproportionately 'tagging' Black men as being at risk of being criminals/victims of crimes.

However, if that algorithm is consistently accurate, how can an intelligent person accuse it of being biased?

As I said, there is plenty of bias involved in AI, but the bias is very rarely on the part of the machines. The real bias comes from the humans who either A) ignore data that doesn't fit their a-prioris, or B) read the data with such a biased eye that they draw conclusions that don't actually align with what the data is showing. See: your reaction to the Stanford article.

>Are you Jordan Peterson?

No.

1

BraveNewCurrency t1_j276p82 wrote

>Well... I live in a world in which 99 percent of firefighters are male

So.. Not this world, because it's more like 20% here. (And would be bigger if females weren't harassed so much.)

>no-one made a big deal out of a sex disparity in fire fighting

Sure, ignore history. You are doomed to repeat it.

> if I am designing an AI algorithm to select who will be best at 'task X', I wouldn't call the algorithm biased/poorly coded if it overwhelmingly selected from the group shown to be better suited for task X.

Good thing nobody asks you, because that is the wrong algorithm. Maybe it's a plausible shortcut if you are looking for "the best in the world". But given an arbitrary subset of people, it's not always going to be a male winner. You suck at writing algorithms.

>I haven't 'addressed' it because I think the statement is markedly untrue.

Let's summarize so far, shall we?

- You asked how an AI could be racist. I gave you links. You ignored them.

- You asserted the AI is not biased (without any evidence), and later doubled-down by saying those articles are "untrue" (again.. without any evidence)

- You claimed that 99% of firefighters are male (without evidence)

- You assert that "picking all males for a SW position is fine" (without any evidence, and despite me pointing out that it is literally illegal), then doubled down implying that you personally would preferentially hire only males even though there is no evidence that males have an advantage in SW.

You are blocked.

1

JMAN1422 t1_j1rbtlp wrote

How can AI be biased if it's only looking at raw data. Wouldn't it be inherently unbiased? I don't know just asking.

Does this just mean they want AI to match employment equity quotas? If that's the case, doesn't it defeat the entire point of AI systems? Aren't they meant to be hyper-efficient, finding the best of the best for jobs etc. by looking strictly at data?

7

GenericHam t1_j1rjshh wrote

I make AI for a living and it would be very easy to bias the data.

Let's for instance say that I use "address" as a raw feature to give to the model. This will definitely be an important feature because education and competence are associated with where you live.

However, this correlation is an artifact of other things. The AI cannot tell the difference between correlation and causation. So in the example, address correlates with competence but does not cause competence, whereas something like the ability to solve a math problem is actual evidence of competence.
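
Here's a minimal sketch of that failure mode (all data invented, scikit-learn assumed): the group label is never shown to the model, but a location proxy plus historically biased labels is enough for the model to reproduce the bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                 # hidden attribute, never given to the model
zip_area = group + rng.binomial(1, 0.1, n)    # noisy proxy: where you live tracks the group
skill = rng.normal(0, 1, n)                   # the genuinely job-relevant signal

# Historical labels: qualified people in group 1 were hired only half as often.
hired = ((skill > 0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([zip_area, skill])        # note: no "group" column in the features
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The model "hires" group 1 at a noticeably lower rate, purely via the location proxy.
```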

30

sprinkles120 t1_j1rqgga wrote

Basically, the raw data can be biased. If you just take all your company's hiring data and feed it into a model, the model will learn to replicate any discriminatory practices that historically existed at your company. (And there are plenty of studies that suggest such bias exists even among well-meaning hiring managers who attempt to be race/gender neutral.)

Suppose you have a raw dataset where 20% of white applicants are hired and only 10% of applicants of color are hired. Even if you exclude the applicants' race from the features used by the model, you will likely end up with a system that is half as likely to hire applicants of color compared to white applicants. AI is extremely good at extracting patterns from disparate data points, so it will find other, subtler indicators of race and learn to penalize them. Maybe it decides that degrees from historically black universities are less valuable than degrees from predominantly white liberal arts schools. Maybe it decides that guys named DeSean are less qualified than guys named Sean. You get the picture.

Correcting these biases in the raw data isn't quite the same as filling quotas. The idea is that two equally qualified applicants have the same likelihood of getting hired. You could have a perfectly unbiased model and still fail to meet a quota because no people of color apply in the first place.

18

drdoom52 t1_j1tneb7 wrote

> Maybe it decides that degrees from historically black universities are less valuable than degrees from predominantly white liberal arts schools.

This is actually a perfect example of the issue. An AI designed to mimic hiring practices and measures currently existing will probably show you exactly what biases have been intentionally or unintentionally incorporated into your structure.

2

myBisL2 t1_j1soeup wrote

A good real life example comes from Amazon a few years ago. They implemented AI that "learned" to prefer male candidates. From this article:

>In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.

What it came down to was that the tech industry tends to be male dominated, so based on the resumes fed to the AI, it identified a pattern that successful candidates don't have words like "women's" in their resume or go to all-women schools.

14

meyerpw t1_j1sd5uq wrote

Let's say you're looking to fill a position for an engineer. You train your AI by looking at the resumes of your current engineering employees. It picks up that they are all old white dudes by looking at their names and experiences.

Guess what's going to get past your AI.

8

LastInALongChain t1_j1s5on6 wrote

>How can AI be biased if it's only looking at raw data. Wouldn't it be inherently unbiased? I don't know just asking

Data can be bad because it describes groups rather than reflecting individuals.

If you have one person who belongs to group z, and this person is a criminal, steals, and commits assault, you wouldn't want to hire him. But the AI just chooses not to hire him because he belongs to group z, and group z on average commits 10x the crime of any other group. It does the same to another guy of group z, who has a spotless record, or who has a brother that died to crime, so he is at risk of committing crime due to his association with others that revenge kill.

Basically AI can only see aggregate behavior, because judging individuals would require a level of insight that would require a dystopian amount of real time access to that persons data.

Technically an AI could look at groups and be like " On average these guys have good traits" but that's literally the definition of bigotry.

2

TheLGMac t1_j1tu4b7 wrote

AI is still an interpreter of data; there is no perfectly “true” interpretation of raw data. There is always a process of interpreting data to have some meaning. Interpretation is prone to bias.

If the machine learning model makes interpretations based on prior interpretations made (eg “historically only white or male candidates have been successfully hired in this role”) then this can perpetuate existing bias. Until recently the engineers building these models have not been thinking to build in safeguards against bias. Laws like these ensure that these kinds of biases are safeguarded against.

Think of this like building codes in architecture/structural engineering.

1

orbitaldan t1_j1tv2vc wrote

Adding on to what others are saying, the raw data is a measurement of our world, and the way we have constructed and formed our world is inherently biased. People are congregated into clusters physically, economically, and socially for all manner of reasons, many of which are unfit criteria for selection. Even after unjust actions are halted, they leave echoes in how the lives of those people and their children are affected: where they grew up, where and how much property they may own, where they went to school, and so on. Those unfit criteria are leaked through anything that gives a proxy measure of those clusters, sometimes in surprising and unintuitive ways that cannot necessarily be scrubbed out or hidden.

1

Background-Net-4715 OP t1_j1reajo wrote

So from what I understand, the models can be biased if they're created by humans with particular biases - it's hard to measure exactly how this happens, which is why, when this law comes in, companies using automated systems will have to have them audited by independent organizations. The goal is of course for the models to be as unbiased as possible, but what happens today (in some cases, not all) is that the AI model will have inherent biases against certain profiles.

−1

Burstar1 t1_j1riaze wrote

My take on it is this: Say on the resume the applicant misspelt ask as aks. The AI might rule out the resume due to the spelling error suggesting a low quality / careless applicant. The problem is if the AI starts correlating the misspelling of aks to certain cultural groups that do this normally and consequently associates that group with the 'careless' behaviour by default.

−7

Background-Net-4715 OP t1_j1qrk94 wrote

Now scheduled to come into force in April, Local Law 144 of 2021 would bar companies from using any “automated employment decision tools” unless they pass an audit for bias. Any machine learning, statistical modeling, data analytics, or artificial intelligence-based software used to evaluate employees or potential recruits would have to undergo the scrutiny of an impartial, independent auditor to make sure they’re not producing discriminatory results.

The audit would then have to be made publicly available on the company’s website. What’s more, the law mandates that both job candidates and employees be notified the tool will be used to assess them, and gives them the right to request an alternative option.
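
To give a rough sense of what such an audit reports, here's a toy selection-rate / impact-ratio calculation (made-up counts, not the official audit methodology, which is spelled out in the city's rules):

```python
import pandas as pd

# Hypothetical screening outcomes from an automated tool (illustrative counts only).
df = pd.DataFrame({
    "sex":      ["M"] * 200 + ["F"] * 150,
    "advanced": [1] * 80 + [0] * 120 + [1] * 45 + [0] * 105,
})

selection_rates = df.groupby("sex")["advanced"].mean()     # M: 0.40, F: 0.30
impact_ratios = selection_rates / selection_rates.max()    # M: 1.00, F: 0.75

print(selection_rates)
print(impact_ratios)   # a common rule of thumb flags ratios below 0.80
```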

6

D-redditAvenger t1_j1quoqo wrote

Seems reasonable but it's good that it's made public. It will be important to see what "bias" exactly means in this context.

4

Background-Net-4715 OP t1_j1qvdk7 wrote

Yes definitely - that's part of what's causing the delay. Companies will need to explain what goes into their algorithms, what decisions are made in the model, what the accuracy rate is, who made the model, and so on. It seems complicated but I guess it's because it's the first law of its kind.

2

killcat t1_j1tbmxu wrote

5 will get you 10 that "discriminatory outcomes" will define if the AI has "bias" as opposed to the actual programming.

1

cheapsexandfastfood t1_j22hlj9 wrote

I think it'll be easy to test with a bunch of resumes of different races and genders equally spread along a qualification spectrum.

Just have it pick from this test set and if the results are statistically tilted in any way it fails.
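
A bare-bones version of that test might look like this (synthetic counts, scipy assumed): feed the tool matched resumes that differ only by group, then check whether the selection counts are tilted more than chance would explain.

```python
from scipy.stats import chi2_contingency

# Hypothetical results: each group got the same mix of qualifications,
# so selection counts should be roughly equal if the tool is fair.
#            selected  rejected
counts = [
    [90, 110],   # group A
    [60, 140],   # group B
]

stat, p_value, dof, expected = chi2_contingency(counts)
print(f"p = {p_value:.4f}")
if p_value < 0.05:
    print("Fail: selection rates differ between groups more than chance would explain.")
```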

1

[deleted] t1_j1r18re wrote

This is great. I didn't even know this law existed, and it is excellent, thank you for posting, OP. If everybody's on the gas and no one is on the regulatory and legal brakes, that's when AI becomes very problematic. People are always so wrapped up in the terror of AGI, when the real issue people should be concerned with is AI models that fail (for a variety of reasons) to meet human expectations for ethics and safety - and when that happens at scale, it can be catastrophic. This law moves us in exactly the right direction. And we're going to need a lot more.

5

Shakespurious t1_j1rq4gt wrote

Is there any actual evidence of bias? I've seen articles alleging bias, but when they discuss the science, it turns out studies say the models do ok.

4

MissMormie t1_j1u19ge wrote

5

Shakespurious t1_j1uv3bj wrote

Yeah, the program was told to not select for men, so it used substitute markers, language not used by women on their resumes. And we know that the very top performers in math and physics are overwhelmingly men, so the program tried to select for men, but with extra steps.

1

MissMormie t1_j1v0otv wrote

That's not what happened though. The program was fed resumes of people who applied in previous years, which was a very biased sample to begin with. We do not know which outcome they trained this model on, but it's not unlikely they trained it on who got hired. So you can add the bias of recruiters on top of the biased sample. This has nothing to do with actual performance.

You're just adding bias on bias, which results in missing the better candidate. Even if the better candidate is often male, you don't want to miss out on those amazing women.

3

Shakespurious t1_j1v54qx wrote

Just for perspective, though, over 99% of Nobel Prize winners in Physics are men.

1

MissMormie t1_j1vjasa wrote

Yes. So? Amazon isn't recruiting Nobel Prize-winning physicists.

It also ignores the question of whether there has been any bias in getting more men into positions where they can win a Nobel Prize. In general, if you are a man you are more likely to be told to pursue physics. You are less likely to feel like an outsider in your class and so continue in that career. You are more likely to get picked as a teaching assistant. You are more likely to get picked for a PhD spot. You are more likely to get grants. You are more likely to get a job at the right universities where you can actually do the science. You are more likely to be mentored on your professional skills than on your soft skills. You are less likely to be 'promoted' out of the field. You are more likely to be hired as a professor who guides PhD students, hence getting your name on a lot more work. There are a thousand points where bias can and does play a role. No wonder Nobel Prize winners are mostly men: there are hardly any women left in the field at that point.

Also, this bias was even worse in the past, and most Nobel Prizes are handed out based on relatively old research. 2022's Nobel Prize in Physics was awarded for research done in the 1990s. Looking at the birthdates of the Nobel Prize winners, there are hardly any who were born after 1950, even among those who won recently. Women in 1950 definitely did not have the same options as men to get into physics.

And even IF 99% of Nobel Prize winners in physics would still be men with a completely level playing field, you still don't want an algorithm that's biased, because it will make you miss the 1% of women that you do want to hire.

4

dissident_right t1_j1u2c8v wrote

No, the algorithms will inevitably be highly accurate, people just don't like the patterns that the ai detects (guess which demographic was most likely to be flagged as potentially criminal in Chicago).

1

politicatessen t1_j1rhxe9 wrote

Has anyone else used Reflektive or Lattice at work (goal setting and self reviews software)?

I'm concerned that data and psychological profiling from your previous job will follow you around as you look for new jobs

2

hawkwings t1_j1u0j5r wrote

Many humans believe that they know how to hire the best candidate, but they may be wrong, and different humans disagree. How would you evaluate AI? How would you know if it is making good or bad decisions? Is the NBA biased against whites? Just because you hire a disproportionate number of people from one group, that doesn't mean that the hiring system is biased.

One problem with cities passing laws like this is that cities are easy to flee. If bankers flee NYC, then there will be fewer people eating at NYC restaurants which will lay off restaurant workers. A city can go into a death spiral where it tries to fund everything with fewer taxpayers.

2
