Submitted by Overall-Importance54 t3_y5qdk8 in MachineLearning
Help me here: I'm confused. If the breast tissue scan project is so run-of-the-mill that it shows up in countless undergrad courses, why is it still so underused in the real world? Maybe it's ubiquitous and I'm just an idiot. That is most probable.
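For context, the coursework version is typically the Wisconsin diagnostic dataset bundled with scikit-learn (assuming that's the project in question); a minimal sketch of the whole exercise:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 569 fine-needle-aspirate samples, 30 tabular features, benign/malignant labels
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=5000)  # extra iterations so the solver converges on unscaled features
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```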
My local clinic does not use AI to read an MRI. It's just a person in a white coat squinting at his monitor.
iqisoverrated t1_islc3jj wrote
Depends on where you live, but it could simply be regulations. AI isn't allowed to make diagnostic decisions, and some places mandate a four-eyes rule (i.e., two independent physicians have to look at every scan).
Then there are time constraints. The amount of time physicians can spend looking at a scan is severely limited (just a few seconds on average). Many AI implementations take too long and are therefore not useful; hospitals are paid by throughput, and slowing down the workflow is not acceptable.
There's also a bit of an issue with "explainable AI". If the output isn't easily explainable, it's not going to be accepted by the physician.
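To make that concrete: the simplest form of "explanation" for an image model is a saliency map showing which pixels drove the prediction. Here's a minimal sketch (the untrained model and random input are placeholders, not a real radiology pipeline):

```python
# Gradient saliency: which input pixels most influenced the model's output?
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder standing in for a trained classifier
model.eval()

scan = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "scan"

score = model(scan).max()  # logit of the top-scoring class
score.backward()           # gradients of that score w.r.t. the input pixels

# Pixel-wise importance: larger gradient magnitude = that pixel mattered more.
saliency = scan.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```

Even then, a heatmap overlay only helps if the radiologist actually trusts it, which is exactly where adoption tends to stall.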
But the general attitude towards AI-assisted reading seems to be changing, so I'd expect AI-based reading assistance to become the norm in the next 10 years or so.