croninsiglos t1_j1lsywg wrote

“… sociodemographic information”

There it is! Then they go on to claim it’s predicting and not labeling.

Yet if this informs prescribing, then you’ve automatically programmed bias and prejudice into the model.

171

fiveswords t1_j1lwknk wrote

I like that it predicted "high-risk" at 86% accuracy. It means absolutely nothing statistically. If someone is high risk and NOT an addict, is it still an accurate prediction, because they're only predicting the risk? How could it even be wrong 14% of the time?

59

pharmaway123 t1_j1nubt8 wrote

If you read the paper, you'd see that it predicted the presence of opioid use disorder with 86% balanced accuracy (sensitivity of 93.0%, specificity of 78.9%).
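As a quick sanity check, balanced accuracy is just the mean of sensitivity and specificity, which is where the ~86% comes from; a minimal sketch in Python:

```python
# Balanced accuracy = mean of sensitivity and specificity.
# Unlike plain accuracy, it isn't inflated by class imbalance.
sensitivity = 0.930  # true positive rate reported in the paper
specificity = 0.789  # true negative rate reported in the paper

balanced_accuracy = (sensitivity + specificity) / 2
print(f"balanced accuracy = {balanced_accuracy:.4f}")  # -> 0.8595, i.e. ~86%
```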

6

poo2thegeek t1_j1m1f3k wrote

There are probably definitions for what “high risk” means. Maybe, for example, “high risk” means 90% of people in that group overdose within 6 months. These definitions are obviously decided by the person creating the model, and so should be based on expert opinion. But correctly flagging someone as “high risk” 86% of the time is pretty damn good, and it’s definitely a useful tool. However, it probably shouldn’t be the only tool. Doctors shouldn’t say “the ML model says you’re high risk, so no more drugs”; instead, a discussion should be started with the patient at that point, and the doctor can make a balanced decision based on the ML output as well as the facts they’ve got from the patient.
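To make that concrete, “high risk” is typically just a label applied when a model’s score crosses a cutoff the developers picked; a minimal sketch, with the 0.7 threshold invented purely for illustration:

```python
# Hypothetical sketch: "high risk" as a thresholded model score.
# The cutoff is a design decision, ideally grounded in expert opinion.
HIGH_RISK_CUTOFF = 0.7  # invented value for illustration

def risk_label(predicted_probability: float) -> str:
    """Map a model's predicted probability to the label a doctor would see."""
    return "high risk" if predicted_probability >= HIGH_RISK_CUTOFF else "lower risk"

print(risk_label(0.82))  # -> high risk
print(risk_label(0.35))  # -> lower risk
```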

−1

Lydiafae t1_j1lybua wrote

Yeah, you'd want a model at least at 95%.

−7

Hsinats t1_j1lzbpr wrote

You wouldn't evaluate the model based on accuracy alone. If 5% of people became addicts, you could always predict they wouldn't and get 95% accuracy.
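That base-rate trap is easy to demonstrate with made-up numbers:

```python
# Toy illustration of the base-rate trap: with 5% prevalence,
# always predicting "not an addict" scores 95% accuracy
# while catching zero actual cases.
n_people = 1000
n_addicts = 50  # 5% prevalence (made-up)

correct = n_people - n_addicts   # every non-addict is "predicted" right
accuracy = correct / n_people    # 0.95
sensitivity = 0 / n_addicts      # 0.0 -- not a single case caught

print(f"accuracy = {accuracy:.2f}, sensitivity = {sensitivity:.1f}")
```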

17

godset t1_j1mdxec wrote

Yeah, these models are evaluated based on sensitivity and specificity, and ideally each would be above 90% for this type of application (making these types of models is my job)
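For anyone unfamiliar, a minimal sketch of how those two metrics fall out of a confusion matrix; the counts below are invented, chosen only so they reproduce the rates reported above:

```python
# Sensitivity and specificity from confusion-matrix counts.
# All counts are invented for illustration.
tp, fn = 93, 7     # actual positives: caught vs. missed
tn, fp = 789, 211  # actual negatives: cleared vs. falsely flagged

sensitivity = tp / (tp + fn)  # fraction of true cases the model catches
specificity = tn / (tn + fp)  # fraction of non-cases the model clears
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
# -> sensitivity = 0.930, specificity = 0.789
```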

Edit: the question of adding things like gender into predictive models is really interesting. Do you withhold information that legitimately makes it more accurate? The fact that black women have more prenatal complications is a thing - is building that into your model building in bias, or just reflecting bias in the healthcare system accurately? It’s a very interesting debate.

4

Devil_May_Kare t1_j1pkczk wrote

And then no one gets denied medical care on the advice of your software, which would be a significant improvement over the state of the art.

1

andromedex t1_j1lynho wrote

Yeah, this is really scary. What's even scarier is wondering whether it's reinforcing the exact biases it was founded on.

29

carlitospig t1_j1m6oty wrote

Yes. Yes it is, which is why we’ve been screaming about bias for years. Yet it keeps going unaddressed, and instead we get articles like ‘look how great this is!’ rather than ‘look at all the power we are giving to our own biases!’

26

andromedex t1_j1m8pv9 wrote

People just think of AI as a magical black box.

12

InTheEndEntropyWins t1_j1lycqt wrote

There was another study showing that ML can determine race from body scans. People were like, so what, it's not an issue.

The problem is when the ML determines you're Black from a scan and then decides: no painkillers for you.

20

Azozel t1_j1lyvdz wrote

I don't even know why they thought they needed to do it this way. I recall reading an article a couple of years ago saying researchers had identified genes that reliably indicate whether a person is likely to become addicted to opioids.

0

carlitospig t1_j1m6u5g wrote

But even then folks with those genes deserve pain management care too. Needlessly suffering because your grandfather was an alcoholic is just cruelty wrapped in a ‘care’ bow.

19

Azozel t1_j1mx8wc wrote

Of course, but then docs would know to monitor you more closely.

2

linksgreyhair t1_j1p8stn wrote

I stopped telling my doctors that my mother was an addict for this exact reason. They immediately start side-eying me.

Too bad it’s still somewhere in my electronic records forever, so I’m sure the damn algorithm already knows.

2