
myassholealt t1_j1snzt5 wrote

>Training models isn’t some Klan rally to go after people, at least not in my experience.

In everything I've read about these biases, I never came away with the impression that it was this, or that any of the biases were maliciously built in - but they nevertheless exist. And when the models are deployed in daily life, they have the potential to negatively affect members of the public. That's not a good thing.


Armoogeddon t1_j1spw46 wrote

I agree wholeheartedly with your last sentence, but it goes way beyond “bias” in models. Models are only one piece of an ever more complex system.

As for the impressions you've drawn, we could in good conscience talk about that for hours. Maybe five or six years ago, it came to light that visual recognition models performed measurably worse on people with dark skin. The tech companies (I was at a big, prominent one at the time) decided to get ahead of the bad press by condemning themselves and promising to do better. The media fallout was negligible.

It was bunk. Did AI models generally perform worse on photos of Black people and people of African descent? In some cases, yes. Was the training data cribbed from the US, where Black people make up roughly 13% of the population? Yes. Of course the models performed worse: there was about a tenth of the data available to train on. It wasn't racist; it wasn't some bias built into the models by the human trainers - there was simply less data. But nobody bothered to elaborate on what should have been a nuanced conversation, and the prevailing opinion jumped to the wrong perception and the wrong remediation. It kicked off an idiotic path on which we still find ourselves, or watch others traverse.
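The sample-size point can be made concrete with back-of-the-envelope statistics. This is a hedged illustration, not a claim about any specific production model: if a model learns per-group statistics from the data, the group with roughly 1/10 the examples gets noisier estimates purely from sample size (the standard error of a mean scales as 1/sqrt(n)). The numbers below are hypothetical.

```python
import math

# Hypothetical setup: equal underlying variance in both groups,
# but the minority group has ~1/10 the training examples.
sigma = 1.0          # assumed per-feature standard deviation (same for both groups)
n_majority = 10_000  # hypothetical training examples, majority group
n_minority = 1_000   # ~1/10 as many examples, minority group

# Standard error of an estimated mean: sigma / sqrt(n)
se_majority = sigma / math.sqrt(n_majority)
se_minority = sigma / math.sqrt(n_minority)

# The minority group's learned statistics come out ~3.16x noisier,
# driven entirely by sample size, not by anything built in by trainers.
ratio = se_minority / se_majority
print(round(ratio, 2))  # sqrt(10) ~= 3.16
```

The remediation that follows from this framing is also different: collect or weight more data for the underrepresented group, rather than assume the training process itself was hostile.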

The real problem is that nobody understands what's behind these models. We understand the approaches they take in general terms - the "convolutions" applied at various layers during training - but nobody understands the logic inside the resulting models any better than we understand the mechanisms behind human reasoning. We can infer things, but nothing is known in a binary, truly understood way.

Yet everybody keeps racing ahead to apply these models in ever more profound and - if you’re in the space - unnerving ways. It’s getting scary, and it’s way worse than the stuff that’s being discussed here, which is also a bad idea.

I guess what I’m saying is it’s so much worse than these idiot politicians realize. They’re fighting a battle that was lost ten years ago.
