AmbulatingGiraffe t1_j1ugwcm wrote

This is objectively incorrect. One of the largest problems related to bias in AI is that accuracy is not distributed evenly across different groups. For instance, the COMPAS exposé revealed that an algorithm used to predict who would commit crimes had a significantly higher false positive rate (flagging someone as likely to commit a crime who then didn't) for Black people. Similarly, its accuracy was lower when predicting serious violent crimes than misdemeanors or other petty offenses.

It's not enough to say "the algorithm is accurate, therefore it's not biased, it's just showing truths we don't want to see." You have to look very carefully at exactly where the model is wrong, and whether it's systematically wrong for certain kinds of people or situations. There's a reason this is one of the most active areas of research in the machine learning community. It's an important and hard problem with no easy solution.
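To make the point concrete, here's a minimal sketch (with made-up toy data, not the actual COMPAS numbers) of what it means to check false positive rates per group rather than just overall accuracy:

```python
# Sketch: computing the false positive rate separately for each group.
# A false positive here = predicted to reoffend (1) but actually didn't (0).
# The data below is entirely hypothetical, just to illustrate the check.

def false_positive_rate(predictions, actuals):
    """FPR = false positives / total actual negatives."""
    false_pos = sum(1 for p, a in zip(predictions, actuals) if p == 1 and a == 0)
    actual_neg = sum(1 for a in actuals if a == 0)
    return false_pos / actual_neg if actual_neg else 0.0

# Toy records: (group, predicted, actual)
records = [
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]

for group in ("A", "B"):
    preds = [p for g, p, a in records if g == group]
    acts = [a for g, p, a in records if g == group]
    print(group, round(false_positive_rate(preds, acts), 3))
```

In this toy data, group B's false positive rate is double group A's even though both groups have the same number of actual non-reoffenders, which is exactly the kind of disparity a single aggregate accuracy number hides.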