dissident_right

dissident_right t1_j1wxp3a wrote

>First, ignorant people proposed that exact same line of reasoning, but with firefighters instead of SW Engineers. Go read some history on how that worked out.

Well... I live in a world in which 99% of firefighters are male, so I am guessing the answer is "All the intelligent people conceded that greater male muscle/stamina made men better at being firefighters, and no one made a big deal out of a sex disparity in firefighting"?

I'm gonna assume here that you live in some sort of self-generated alternate reality where women are just as capable of being firefighters as men despite being physically weaker, smaller and lacking in stamina (relative to men)?

>doesn't mean there aren't many women who can beat most men at that task

No, but if I designed an AI algorithm to select who will be best at 'task X', I wouldn't call the algorithm biased/poorly coded if it overwhelmingly selected from the group shown to be better suited for task X.

Which is, more or less, what happened with the Amazon program. Kinda ironic, seeing as they rely heavily on algorithms in their marketing of products, and I am 100% sure that 'biological sex' is one of the factors those algorithms account for when deciding what products to try and nudge you towards.

>constant racism by the leading AI companies

I haven't 'addressed' it because I think the statement is markedly untrue. Many people call the U of Chicago crime prediction algorithm "racist" for disproportionately 'tagging' Black men as being at risk of being criminals/victims of crimes.

However, if that algorithm is consistently accurate, how can an intelligent person accuse it of being biased?

As I said, there is plenty of bias involved in AI, but the bias is very rarely on the part of the machines. The real bias comes from the humans who either A) ignore data that doesn't fit their a prioris, or B) read the data with such a biased eye that they draw conclusions from it that don't actually align with what the data is showing. See: your reaction to the Stanford article.

>Are you Jordan Peterson?

No.

1

dissident_right t1_j1w4upw wrote

>Why is that "most likely"? Citation needed.

I can't provide a citation, since the program was shut down before it had a chance to prove its accuracy.

As I said, however, a simple observation will demonstrate to you that when a progressive calls an AI's observation 'problematic' (i.e. the Chicago crime prediction algorithm), 'problematic' here is clearly not the same as inaccurate.

Again, why would you assume that an AI algorithm couldn't predict employee suitability, seeing how well algorithms predict... basically everything else about our world?

You are simply trying to avoid a conclusion that you don't want to consider - what if men are naturally better suited to be software engineers?

1

dissident_right t1_j1w48yb wrote

>They aren't interchangeable.

No, but unfortunately we cannot say how well the algorithm 'would' have worked in this instance, since it was shut down before it was given the chance to see if its selections made good employees.

The point remains - if algorithms are relied on to be accurate in 99.9% of cases, and if an algorithm can be accurate even at something as complex as 'who will be a criminal', why would this be the only area where AI is somehow unreliable/biased?

As I said, it's the humans who possess the bias. They saw 'problematic' results and decided, a priori, that the machine was wrong. But was it?

1

dissident_right t1_j1ub67r wrote

>Anything to back it up?

Reality? Algorithms are used extensively by thousands of companies in thousands of fields (marketing, finance, social media etc.). They are used because they work.

A good example of this would be the University of Chicago's 'crime prediction algorithm' that attempts to predict who will commit crimes within major American cities. It has been under attack for supposed bias (racial, class, sex, etc. etc.) since the outset of the project. Despite this, it is correct in 9 out of 10 cases.

−3

dissident_right t1_j1u2c8v wrote

No, the algorithms will inevitably be highly accurate, people just don't like the patterns that the ai detects (guess which demographic was most likely to be flagged as potentially criminal in Chicago).

1

dissident_right t1_j1u220s wrote

AI is not biased. It's precisely its lack of bias that causes AI to see patterns in society that ignorant humans would rather turn their eyes away from.

>But even at a biased company, humans can "do better" in the future, because humans have the ability to introspect.

Here 'introspect'/"do better" means 'be bullied/pressured into holding factually incorrect positions'.

Most likely the Amazon AI was as proficient as, or more proficient than, any human HR department at selecting qualified candidates. It was shut down not due to any inaccuracy of the algorithm in selecting qualified candidates, but rather for revealing a reality about qualified candidates that did not align with people's a priori delusions about human equality.

−7

dissident_right t1_izdsllw wrote

Basically. In another chapter of 'life sucks and is horribly unfair': if you have a genetic predisposition for ADHD, you also have a greater likelihood of developing Alzheimer's as you age.

Could be that certain genes affect neuroanatomy (brain structure), and that the specific anatomy of that part of the brain affects the risk of developing numerous cognitive disorders (which would explain the correlation here).

Alternatively, there may be some correlation between IQ and onset of ADHD. As far as I am aware, there are already studies showing a correlation between IQ and risk of Alzheimer's, so there may be overlap here (higher IQ, lower risk of both conditions).

19

dissident_right t1_iybissl wrote

Lol this cope. I'm sure the planes that China is producing in 2023 will be no better than the ones LHM produced in 1990. Sure.

My entire life, useful idiots like /u/KileiFedaykin have been playing down the technological and economic strength of China ("Handing over HK will be the beginning of Chinese democracy!", "The Asian Tiger is going to go into recession!", "This housing market slump is the end of Chinese growth!"), and every time they end up looking foolish.

3