
fiveswords t1_j1lwknk wrote

I like that it predicted "high-risk" at 86% accuracy. It means absolutely nothing statistically. If someone is high risk and NOT an addict, is it still an accurate prediction because they're only predicting the risk? How could it even be wrong 14% of the time?


pharmaway123 t1_j1nubt8 wrote

If you read the paper, you'd see that it predicted the presence of opioid use disorder with 86% balanced accuracy (sensitivity of 93.0%, specificity of 78.9%)
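For anyone wondering how those three numbers relate: balanced accuracy is just the mean of sensitivity and specificity, which is why 93.0% and 78.9% land at roughly 86%. A quick sketch (using the figures quoted above):

```python
# Balanced accuracy averages the true positive rate (sensitivity)
# and true negative rate (specificity), so a model can't look good
# just by siding with the majority class.
sensitivity = 0.930  # true positive rate reported in the paper
specificity = 0.789  # true negative rate reported in the paper

balanced_accuracy = (sensitivity + specificity) / 2
print(balanced_accuracy)  # 0.8595, i.e. ~86%
```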


poo2thegeek t1_j1m1f3k wrote

There are probably definitions for what "high risk" is. Maybe, for example, "high risk" means 90% of people in that group overdose within 6 months. These definitions are obviously decided by the person creating the model, and so should be based on expert opinion. But predicting someone as "high risk" 86% of the time is pretty damn good, and it's definitely a useful tool. However, it probably shouldn't be the only tool. Doctors shouldn't say "the ML model says you're high risk, so no more drugs"; instead, a discussion should be started with the patient at this point, and then the doctor can make a balanced decision based on the ML output as well as the facts they've got from the patient.


Lydiafae t1_j1lybua wrote

Yeah, you'd want a model at least at 95%.


Hsinats t1_j1lzbpr wrote

You wouldn't evaluate the model based on accuracy. If 5% of people became addicts, you could always predict they wouldn't and get 95% accuracy.
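To make the base-rate problem concrete, here's a toy example with made-up numbers: a "nobody is an addict" model at 5% prevalence scores 95% plain accuracy while catching zero actual cases.

```python
# Hypothetical population: 5% prevalence of addiction.
population = 1000
addicts = 50  # 5% of 1000

# Trivial model: predict "not an addict" for everyone.
true_negatives = population - addicts  # 950 correct "not addict" calls
true_positives = 0                     # it never flags anyone

accuracy = (true_negatives + true_positives) / population
sensitivity = true_positives / addicts

print(accuracy)     # 0.95 -- looks great
print(sensitivity)  # 0.0  -- catches no one
```

This is exactly why metrics like sensitivity, specificity, or balanced accuracy are preferred over raw accuracy on imbalanced data.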


godset t1_j1mdxec wrote

Yeah, these models are evaluated based on sensitivity and specificity, and ideally each would be above 90% for this type of application (making these types of models is my job)

Edit: the question of adding things like gender into predictive models is really interesting. Do you withhold information that legitimately makes it more accurate? The fact that black women have more prenatal complications is a thing - is building that into your model building in bias, or just reflecting bias in the healthcare system accurately? It’s a very interesting debate.
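For readers unfamiliar with the terms: sensitivity and specificity both fall out of a confusion matrix. A minimal sketch, with counts invented purely for illustration (they're chosen to reproduce the paper's reported rates, not taken from it):

```python
# Hypothetical confusion-matrix counts.
tp, fn = 93, 7     # actual addicts: flagged vs. missed
tn, fp = 789, 211  # actual non-addicts: cleared vs. wrongly flagged

sensitivity = tp / (tp + fn)  # share of actual positives caught
specificity = tn / (tn + fp)  # share of actual negatives cleared

print(sensitivity)  # 0.93
print(specificity)  # 0.789
```

Note the trade-off godset alludes to: pushing sensitivity up usually costs specificity, so "each above 90%" is a demanding bar.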


Devil_May_Kare t1_j1pkczk wrote

And then no one gets denied medical care on the advice of your software. Which is a significant improvement over the state of the art.