BraveNewCurrency t1_jdpnkrb wrote

Looks can be deceiving.

It is possible to create a website or app with one person, but there is a limit to the complexity.

For example, just glancing at the Indeed website:

  • They need engineers to maintain their codebase. For a site like this, it might easily be over a million lines of code. (There is a lot of functionality you can't see, like internal auditing and management systems built to prevent legal problems -- see below.)
  • They need engineers to maintain their servers.
  • They need people to write content in the Career Guide.
  • They need database administrators to tune, back up, and monitor their databases -- often geo-replicated around the world.
  • They need security engineers to keep hackers from stealing all the valuable personal data they have collected.
  • They need employees to "answer the phone" when their customers have problems.
  • They need employees to work with companies and schools to find out their needs.
  • They need lots of people to keep spam out of their various databases (schools, jobs, training, resumes, etc.).
  • They may need employees to verify companies aren't scamming personal data.
  • They need HR, Finance, Legal, etc. Legal is probably especially big, since they need to know the hiring laws in every country and every state. (Did you know age discrimination is perfectly legal in most states, as long as the person is under 40?)

BraveNewCurrency t1_j276p82 wrote

>Well... I live in a world in which 99% percent of fire fighters are male

So... Not in this world, where it's more like 20%. (And it would be higher if women weren't harassed so much.)

>no-one made a big deal out of a sex disparency in fire fighting

Sure, ignore history. You are doomed to repeat it.

> I am designed an AI algorithm to select who will be best at 'task X' I wouldn't call the algorithm biased/poorly coded if it overwhelming selected from the group shown to be better suited for task X.

Good thing nobody asks you, because that is the wrong algorithm. Maybe it's a plausible shortcut if you are looking for "the best in the world". But given an arbitrary subset of people, the winner is not always going to be male. You suck at writing algorithms.

>I haven't 'addressed' it because I think the statement is markedly untrue.

Let's summarize so far, shall we?

- You asked how an AI could be racist. I gave you links. You ignored them.

- You asserted the AI is not biased (without any evidence), and later doubled down by saying those articles are "untrue" (again, without any evidence)

- You claimed that 99% of firefighters are male (without evidence)

- You asserted that "picking all males for a SW position is fine" (without any evidence, and despite my pointing out that it is literally illegal), then doubled down by implying that you personally would preferentially hire only males, even though there is no evidence that males have an advantage in SW.

You are blocked.


BraveNewCurrency t1_j1wvngh wrote

>What if men are naturally better suited to be software engineers?

First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.

Second, did you read the link you sent? It claims nothing of the sort, only that "there are physical/mental differences between men and women". No shit, Sherlock. But just because the "average male is slightly taller than the average female" doesn't mean "all men are tall", nor that "no woman can be over 7ft tall". By the same token, "men are slightly better at task X on average" doesn't mean there aren't many women who can beat most men at that task.

Third, if we implement what you are proposing, then you are saying we should not evaluate people on "how good they are at the job", but merely on some physical attribute. Can you explain how that leads to "the best person for the job"?


>a simple observation however will demonstrate to you that just because a progressive calls an AI's observation 'problematic'

Haha, you keep implying that I'm ignorant (should I "do my own research"?) because I point out the bias. Yet you never addressed the repeated racism from the leading AI companies, you don't cite any data, and you recite 100-year-old arguments.

Wait. Are you Jordan Peterson?


BraveNewCurrency t1_j1vd8a1 wrote

>Most likely the Amazon AI was as, or more proficient at selecting qualified candidates than any human HR department.

Why is that "most likely"? Citation needed.

(This reminds me of the experiment where researchers paid grad students to 'predict' whether a light would turn on. The light turned on 80% of the time, but the grad students were right only about 50% of the time because they kept trying to outguess the light. The researchers also tried monkeys, who just leaned on the button, and were right 80% of the time.

An AI is like those monkeys -- because 80% of good candidates are male, it thinks excluding female candidates will help the accuracy. But that's not actually true, and you are literally breaking the law.)
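The monkeys' "always press" strategy beats trying to outguess random noise. A few lines of Python make the gap concrete (the probabilities and trial counts below are illustrative, not from the actual study):

```python
import random

random.seed(42)

P_ON = 0.8          # the light turns on 80% of the time
TRIALS = 100_000

light = [random.random() < P_ON for _ in range(TRIALS)]

# The grad students "probability match": they guess "on" about 80% of
# the time, trying to anticipate a pattern that isn't there.
student = [random.random() < P_ON for _ in range(TRIALS)]
student_acc = sum(g == l for g, l in zip(student, light)) / TRIALS

# The monkeys just lean on the button: always guess "on".
monkey_acc = sum(light) / TRIALS

print(f"probability-matching accuracy: {student_acc:.2f}")  # ~0.68
print(f"always-press accuracy:         {monkey_acc:.2f}")   # ~0.80
```

Pure probability matching lands around 68%; students hunting for a nonexistent pattern can do even worse, down toward chance.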

What if the truth is that a resume alone is not enough to accurately predict if someone is going to work out in a position? What if a full job interview is actually required to find out if the person will fit in or be able to do good work? What if EVERY ranking algorithm is going to have bias, because there isn't enough data to accurately sort the resumes?

Having been a hiring manager, I have found that a large fraction of resumes contain made-up bullshit. AI is just GIGO with extra steps.

This reminds me of back in the '80s: whenever a corporation made a mistake, they could never admit it -- instead, everyone would "blame it on the computer". That's like blaming the airplane when a bolt falls off (instead of blaming poor maintenance procedures).


BraveNewCurrency t1_j1vaoml wrote

This is the wrong solution.

There are hundreds of other things the AI can latch on to instead: men and women often write differently, they often have different hobbies, they sometimes take different classes, etc.
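Here's a toy sketch (all data made up) of why deleting the gender column doesn't fix anything: a proxy feature -- in this case a hobby keyword -- lets the model reconstruct the protected attribute anyway.

```python
# (resume_text, gender) pairs; gender is NOT given to the "model",
# only the resume text. Data is entirely fabricated for illustration.
resumes = (
    [("captain of the rugby team", "M")] * 40
    + [("captain of the netball team", "F")] * 40
    + [("chess club president", "M")] * 10
    + [("chess club president", "F")] * 10
)

# "Model": a single hobby keyword acts as a gender proxy.
def predict_gender(text):
    return "M" if "rugby" in text else "F"

correct = sum(predict_gender(text) == gender for text, gender in resumes)
print(f"gender recovered from hobbies alone: {correct / len(resumes):.0%}")  # 90%
```

With thousands of features, there will always be some combination that correlates with the attribute you tried to delete.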

The problem is lazy people who want to have the computer magically pick "good people" from "bad people" when 1) the data is biased, 2) they are feeding the computer literally thousands of irrelevant data points, 3) nobody has ever proved that the data can actually differentiate.

What we need are more experiments to find ground truth. For example, Google did a study and found that people who went to college only had a slight advantage, and only for the first 6 months on the job. After that, they could find no difference.

If that researcher had been studying thousands of irrelevant data points, that insight would probably have been lost in the noise.


BraveNewCurrency t1_j1szem8 wrote

>How can AI be racist if it's only looking at raw data. Wouldn't it be inherently not racist? I don't know just asking.

The problem is "Garbage In, Garbage Out".

Most companies have a fair amount of systemic racism already in their hiring. (Does the company really reflect the society around it, or is it mostly white dudes?)

So if you train the AI on the existing data, the AI will be biased.
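Here's a deliberately dumb sketch (made-up numbers) of how that happens: a "model" that scores candidates by their group's historical hire rate faithfully reproduces whatever bias produced that history.

```python
# Historical decisions from a biased process: equally qualified candidates,
# but the old process hired group A far more often than group B.
history = (
    [("A", True)] * 70 + [("A", False)] * 30
    + [("B", True)] * 30 + [("B", False)] * 70
)

def hire_rate(group):
    """The 'trained model': score a candidate by their group's past hire rate."""
    hired = sum(1 for g, was_hired in history if g == group and was_hired)
    total = sum(1 for g, _ in history if g == group)
    return hired / total

# Garbage in, garbage out: the model recommends A over B simply because
# the old process did, not because A candidates are actually better.
print(hire_rate("A"))  # 0.7
print(hire_rate("B"))  # 0.3
```

No amount of extra training data from the same biased process fixes this; the bias is in the labels themselves.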

But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect. AI does not, so it won't get better over time. Another confounding factor is that most AIs are terrible at explaining WHY they made a decision. It takes a lot of work to pull back the curtain and analyze the decisions the AI is making. (And nobody wants to challenge the expensive AI system that magically saves everybody work!)