
Novel-Time-1279 t1_j7mcq3y wrote

What evidence exists that the insights gained via single-cell perturbations can help uncover novel disease targets? A critic might say single-cell perturbations are simply not a good model for complex multicellular disease processes, as the disease phenotype is rarely a linear sum of single-cell phenotypes. Is the method most applicable to rare diseases with a clearly understood gene driver, or also to highly prevalent diseases? I think Yumanity failed recently with their yeast disease model in neurology, so I'm curious how you address this criticism.

80

ShakeNBakeGibson OP t1_j7mh7nq wrote

All reductions of complex biology cut out some of the information and become poorer representations of the patient. Scale and translation are opposing forces in biological experimentation. The most translational model is human - which is hardest to scale. The least translational model is in silico, but is easiest to scale.

What we do at Recursion is work in a human cell, the smallest unit of biology that has all of the instructions. It is not perfectly translational, but there are many examples where it has worked well, and it does allow us to scale across biology and chemistry (whole-genome scale, ~1M compounds, etc.).

Using that model, we find strong correlates of gene function and patient biology from the world's knowledge of disease, and explore those in our dataset to find ways of modifying those processes. We then do the rigorous work of translating success from our cellular models into much more complex systems. Our clinical programs demonstrate that we are able to confirm these insights from the platform in more complex in vivo models.
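To make that concrete, here's a rough sketch of the kind of analysis this enables. This is purely illustrative, not Recursion's actual pipeline: the embeddings below are random stand-ins for image-derived phenotypic profiles, and every name and dimension is made up.

```python
import numpy as np

# Hypothetical phenotypic embeddings: one vector per perturbation,
# e.g. derived from high-content cell images. Shapes are illustrative.
rng = np.random.default_rng(0)
gene_profiles = {f"GENE_{i}": rng.normal(size=128) for i in range(100)}
compound_profiles = {f"CMPD_{i}": rng.normal(size=128) for i in range(1000)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Suppose knocking out GENE_7 mimics a disease-associated cellular state.
disease_like = gene_profiles["GENE_7"]

# Rank compounds whose phenotype most strongly opposes that state,
# i.e. candidates that might "reverse" the disease-like phenotype.
scores = {name: cosine(vec, disease_like) for name, vec in compound_profiles.items()}
hits = sorted(scores, key=scores.get)[:10]  # most negative similarity first
print(hits)
```

The point is just that once every gene knockout and every compound lives in the same phenotypic space, "find compounds that oppose a disease-like state" becomes a nearest-neighbor problem, and only the top hits go on to the more expensive in vivo work.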

47

wellboys t1_j7okvwl wrote

How, if at all, do you anticipate overcoming the regulatory hurdles associated with that type of use case? I can see how this data would be valuable, but this whole concept sounds like a giant HIPAA violation as soon as you try to operationalize it.

ETA: I don't think the limiting factor on big-data applications to public health is a lack of conceptual frameworks; I think it's that this type of plan fails when the rubber hits the road. I'd rather be wrong, so tell me how I am!

7

WhatsFairIsFair t1_j7pw8ae wrote

I don't get where you're coming from. Is it the "combining with the world's datasets" piece? They're probably using either publicly available datasets or specific agreements with companies to make use of their datasets.

HIPAA is mainly concerned with patient identity, so if the dataset is anonymized or fictionalized, it's likely fine. And if it can't be anonymized, then they'll just add some extra paperwork before sharing.
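For a sense of what "anonymized" can mean in practice: HIPAA's Safe Harbor method removes 18 categories of identifiers (names, most geographic and date detail, contact info, record numbers, and so on). Here's a toy sketch of that idea; the column names are hypothetical, and real de-identification covers all 18 categories and often uses expert determination instead:

```python
import pandas as pd

# A few of HIPAA Safe Harbor's 18 identifier categories, expressed as
# hypothetical column names for an illustrative patient table.
IDENTIFIER_COLUMNS = [
    "name", "street_address", "zip_code", "birth_date",
    "phone", "email", "ssn", "medical_record_number",
]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and coarsen dates to year only."""
    out = df.drop(columns=[c for c in IDENTIFIER_COLUMNS if c in df.columns])
    if "admission_date" in df.columns:
        out["admission_year"] = pd.to_datetime(df["admission_date"]).dt.year
        out = out.drop(columns=["admission_date"])
    return out

records = pd.DataFrame({
    "name": ["A. Patient"],
    "zip_code": ["12345"],
    "admission_date": ["2013-06-01"],
    "diagnosis_code": ["E11.9"],
})
print(deidentify(records))  # only diagnosis_code and admission_year remain
```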

Don't think that HIPAA means your data isn't shared with other companies. It just means the companies will sign some paperwork first.

Edit: also, the rubber was already on the road nine years ago, apparently, because they've been doing this since 2013.

7

t_rexinated t1_j89cjqf wrote

They use a combo of already-available public datasets plus strategic partnerships or licensing deals that give them access to otherwise walled-off, yet potentially highly valuable, data sources.

Regardless of where it comes from, everything is made regulatory/HIPAA compliant before the data actually changes hands.

1

IAmA_Nerd_AMA t1_j7puy0x wrote

To simplify: you let the AI do the brainstorming at the cellular level, then test the most successful of those predictions using traditional methods.

3