Submitted by Overall-Importance54 t3_y5qdk8 in MachineLearning

Help me here: I'm confused. If the breast tissue scan project is so run-of-the-mill that it's used in a huge number of average undergrad courses, why is it still so underused in the real world? Maybe it's ubiquitous, and I'm just an idiot. That is most probable.

My local clinic does not use AI to read an MRI. It's just a person in a white coat squinting at his monitor.

6

Comments


iqisoverrated t1_islc3jj wrote

Depends on where you live, but it could simply be regulations. AI isn't allowed to make diagnostic decisions, and some places mandate a four-eye rule (i.e. two independent physicians have to look at a scan).

Then there are time constraints. The amount of time physicians can spend looking at a scan is severely limited (just a few seconds on average). Many AI implementations take too long and are therefore not useful (hospitals are paid by throughput; slowing down the workflow is not acceptable).

There's also a bit of an issue with "explainable AI". If the output isn't easily explainable then it's not going to be accepted by the physician.

But the general attitude towards AI-assisted reading seems to be changing, so I'd expect AI-based reading assistance to become the norm in the next 10 years or so.

8

111llI0__-__0Ill111 t1_isoa3uq wrote

The whole explainability thing is becoming ridiculous, because all these fancy techniques, while "explainable," are still not going to be explainable to someone without math knowledge.

And even simple regressions have problems like the Table 2 fallacy. Completely overrated.

3

iqisoverrated t1_isov3ek wrote

You can do some stuff that helps people who aren't familiar with the math. E.g. you can color in the pixels that most prominently went into making a decision. If the 'relevant' pixels are nowhere near the lesion, then that's a pretty good indication that the AI is talking BS.
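To make that concrete, here's a minimal sketch of the pixel-attribution idea using vanilla gradient saliency. It assumes some Keras image classifier `model`; the specific model and input shape are placeholders, and production tools typically use fancier attribution methods (Grad-CAM, integrated gradients) built on the same principle:

```python
import tensorflow as tf

def saliency_map(model, image):
    """Absolute gradient of the top-class score w.r.t. each input pixel."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        logits = model(x)
        top_class = tf.argmax(logits[0])
        score = tf.gather(logits[0], top_class)
    grads = tape.gradient(score, x)
    # Collapse the channel axis to one importance value per pixel;
    # overlaying this as a heatmap "colors in" the influential pixels.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()
```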

Another idea being explored is to have the model select some images from the training set that it thinks show a similar pathology (or not) and display those alongside.
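A rough sketch of that retrieval idea, with random placeholder arrays standing in for features that would, in practice, come from the trained model's penultimate layer:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder features: one row per training scan. In a real system
# these would be embeddings extracted from the diagnostic model.
train_embeddings = np.random.rand(1000, 128)
query_embedding = np.random.rand(1, 128)  # the scan currently being read

nn = NearestNeighbors(n_neighbors=5).fit(train_embeddings)
distances, indices = nn.kneighbors(query_embedding)
print("training scans to display alongside:", indices[0])
```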

The problem isn't so much that AI makes mistakes (anyone can forgive it that if the overall result is net positive). The main problem is that it makes different mistakes than humans... i.e. you're running the risk of overlooking something that a human would have easily spotted if you over-rely on AI diagnostics.

2

Overall-Importance54 OP t1_isnspk1 wrote

10 yeeeeeears????? 😭

−1

iqisoverrated t1_isnubwl wrote

Well, I'm drawing the analogy to Tesla. While one can have a viable product in a much shorter timespan, in order to reap real economies-of-scale benefits (i.e. what will set the winner apart from the 'also-ran' competition that will eventually go bankrupt because they cannot offer a competitive product at a similar price) you have to go big. And I mean: REALLY big. Large factories. Global resource chains. That takes time.

5

Overall-Importance54 OP t1_iso2ag0 wrote

They created text-to-image, Google just released text-to-video. They speak, AI generates a full movie. I can write software to detect gold deposits from Google Images data. I just feel like there is a huge unexplained lag between where we are in actual tech, off the shelf, and what's applied in everyday life when the shelf is riiiiiight there.

−1

CurryGuy123 t1_isoa60l wrote

There's still a lot of uncertainty among the non-AI public about a lot of things, though. And this is even more of an issue in the healthcare field - it's a very slow-moving industry because there are lots of privacy concerns and the margin for error is very low. Things are gonna take a long time to be implemented in healthcare compared to other sectors.

2

CeFurkan t1_isluzw6 wrote

I think the problem is available data. I wish the US government would take action and release a labelled, anonymized dataset for the public.

3

CurryGuy123 t1_iso9wla wrote

The NIH has released, and is continuing to augment, the All of Us dataset, which will have tons of health data available for researchers to work with. Most data will never be completely publicly available due to privacy and HIPAA issues, but it's still a very valuable research asset that's being developed.

1

CeFurkan t1_isuw0au wrote

any link for any actual data?

1

CurryGuy123 t1_isv31tr wrote

The data browser provides a snapshot of the data that's freely accessible (mostly aggregate data); the more in-depth information with individual-level data is only accessible by registering for a higher tier.

1

Kitchen-Ad-5566 t1_ismuj9v wrote

Because AI in medicine is something new, and it is only gradually becoming widespread. Breast cancer detection is actually one of the most promising applications of AI-assisted reading in radiology. This is because it is one of the most common cancer types, which has led to screening programs in many countries; hundreds of millions of breast scans are done annually. And mammography is probably the first imaging type where we will see the use of commercial AI products become widespread.

2

Overall-Importance54 OP t1_isnsl90 wrote

Idk, they have had ML breast tissue data sets available since 2010. You can just look one up at https://www.openml.org/search?type=data&sort=runs&id=1465&status=active

I literally just wrote the code and ran the model with the clothing data set. It's sick; in two minutes of coding and training, the model can identify any garment. Putting in the breast tissue data set is just as easy. Show it a scan, it tells you a probability of cancer. 2010 was a long time ago.
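For reference, here's roughly what that two-minute version looks like on the OpenML dataset linked above (data_id 1465). The classifier choice and split are arbitrary, and, as the replies below point out, this is nowhere near a clinical pipeline:

```python
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Pull the breast-tissue dataset referenced in the OpenML link above.
data = fetch_openml(data_id=1465, as_frame=True)
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```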

−1

zzzthelastuser t1_isnvnv9 wrote

It's similar with self-driving cars: they may work with (made-up number) 99% accuracy, but that 1% is still too risky.

Regardless of what the AI says, I would still ask a doctor to see my scan considering a false-negative could cost me my life and a false-positive would probably mean a doctor would double check it anyway.

The bottleneck would still be the person who looks at each scan personally.

That being said, I think there is huge potential in early risk prediction using ML long before a real human could even spot cancer tissue.

4

Kitchen-Ad-5566 t1_ispsopg wrote

It isn't so easy. I mean, you can make something that works more or less. But moving it to a level where it can really be useful clinically will require a huge effort, because you need to work at extremely low false-positive rates while being very sensitive at detection. And although cancer can be quite obvious in diagnostic scans, in screening scans it is usually very subtle (because screening is done without any symptoms). It's like looking for a needle in a haystack - now try doing that under those sensitivity/specificity requirements.
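A toy sketch of that operating-point problem: fix a required specificity and see how much sensitivity survives. The scores here are synthetic and the 98% target is invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic scores with screening-like imbalance: 1000 normals, 10 cancers.
y_true = np.concatenate([np.zeros(1000), np.ones(10)])
scores = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(2.0, 1.0, 10)])

fpr, tpr, thresholds = roc_curve(y_true, scores)
specificity = 1 - fpr
usable = specificity >= 0.98  # a hypothetical clinical requirement
print("best sensitivity at >=98% specificity:", tpr[usable].max())
```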

I think in general the problem with engineers working on medical topics is that they underestimate the complexity of the problems and the requirement to have in-depth insights about the problems they work on. I get a similar impression from your posts.

2

Overall-Importance54 OP t1_isq17oe wrote

I do often underestimate the complexity of a problem. Thank you for your take on this.

1

goodDogsAndSam t1_isp1l07 wrote

There are a number of startups in this space (diagnostic ML), across a bunch of different health conditions and underlying datasets. The FDA has a procedure for getting clinical approval to sell/deploy ML systems in healthcare settings, and has greenlit a number of products:
https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

2

Overall-Importance54 OP t1_isp9sz7 wrote

This is a cool comment, thank you. Maybe it's the field of opportunity it looks like. Like an early-internet, let's-make-a-website-that-is-a-directory-of-other-websites kind of opportunity. Feels like all the low-hanging fruit is still on the tree.

1

goodDogsAndSam t1_ispbg92 wrote

IMO the biggest problem is not the ML side, it's the people-workflow-deployment side (as other commenters have pointed out) -- a radiologist has spent 18 years of schooling plus a residency, so how can you position AI to help them do their job better rather than presumptively challenge their expertise? The cost of error (particularly false negatives) is orders of magnitude higher than the benefit of getting the vast majority of true negatives correct, so what's the right way to tune the model?
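One way to read that tuning question as code: sweep the decision threshold and minimize an expected cost where a false negative is weighted far more heavily than a false positive. The cost ratio and data below are invented purely for illustration:

```python
import numpy as np

def expected_cost(threshold, scores, labels, fn_cost=500.0, fp_cost=1.0):
    """Total cost of operating at a given threshold on held-out data."""
    preds = scores >= threshold
    fn = np.sum(~preds & (labels == 1))  # missed cancers: very expensive
    fp = np.sum(preds & (labels == 0))   # false alarms: cheap but frequent
    return fn_cost * fn + fp_cost * fp

# Synthetic held-out scores, just to exercise the function.
rng = np.random.default_rng(1)
labels = np.concatenate([np.zeros(990, dtype=int), np.ones(10, dtype=int)])
scores = np.concatenate([rng.normal(0, 1, 990), rng.normal(2, 1, 10)])

thresholds = np.linspace(scores.min(), scores.max(), 200)
best = min(thresholds, key=lambda t: expected_cost(t, scores, labels))
print("cost-minimizing threshold:", best)
```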

Also, based on my experience in the space, there's plenty of training data out there, but outside of routine preventive scans like mammograms, many diagnoses don't have a lot of "clean" negative examples, because doctors won't order a CT unless they have reason to suspect something is wrong. This is unlikely to change, since the radiation exposure from the scan poses individual harm to the patient without providing individual benefit.

1

trnka t1_ispbueg wrote

Although we can produce good models, there's a huge gap between a model that can imitate a doctor reasonably well and a software feature that's clinically helpful. That's been my experience doing ML in primary care for years.

If you build software features that influence medical decision making, there are many challenges in making sure that the doctor knows when to rely on the software and when not to. There are also many issues with legal liability for medical errors.

If you're interested in the regulation aspect, FDA updated their criteria for clinical decision support devices for AI last month. This is the summary version and the full version has more detail.

It's not hard to have a highly accurate diagnosis model, but it's hard to have a fully compliant diagnosis model that actually saves time and does no harm.

2

Ok_Dependent1131 t1_islc2b9 wrote

Funny you should mention that. A professor in my program is working on an NIH grant for breast cancer detection and localization (bounding box). He was having a hell of a time with it and had some pretty decent data - but admittedly it was more related to the bounding-box stuff.

1

MisterManuscript t1_isobcjb wrote

This isn't unique to your case. A lot of applications of machine learning in healthcare suffer from one drawback: a lot of patient data is sensitive and very hard to get your hands on. You don't just query it to your liking; there is a lot of bureaucracy in place before you get access to even a small sample of the required data.

1

Overall-Importance54 OP t1_isoj25q wrote

Thank you for your comment! For breast tissue data sets that are publicly available, it just seems like the data and tools are good enough off-the-shelf to be useful immediately. I agree that bureaucracy is a headwind. Did you see Google made a model that generates movies from text?

1

MisterManuscript t1_isosuxa wrote

That's a different use case. I work in healthcare (computer vision solutions) and I have yet to see a use-case for text-to-video generation.

So far most of the use-cases are either classification or segmentation, mostly for diagnosing diseases. There's also biomedical informatics, but I'm not familiar enough with medical tabular data to comment on it.

1

marcus_hk t1_isp9ggz wrote

Healthcare in the USA is not a competitive market. It's a government-sponsored cartel run by large institutional fiefdoms. It is heavily regulated and there is little incentive to innovate.

1

yarasa t1_isphwq5 wrote

How good are the models? Can they reliably detect cancers? How realistic is the data in public datasets? Just because a toy version of the problem exists doesn't mean it's usable in the real world.

1