OnceReturned t1_iupwvi9 wrote
Reply to comment by farmingvillein in [N] Meta AI | Evolutionary-scale prediction of atomic level protein structure with a language model by xutw21
That's fair.
If I were someone with billions of dollars to burn on whatever moonshot R&D I could think of, I'd spend it, at least in large part, on this stuff. So, I'm more inclined to wonder why everybody isn't working on it.
OnceReturned t1_iupkp7o wrote
Reply to comment by farmingvillein in [N] Meta AI | Evolutionary-scale prediction of atomic level protein structure with a language model by xutw21
This is all working towards engineering proteins from scratch to do whatever you want. The potential impact of engineered proteins over the next hundred years is on the order of the impact of computers over the past hundred years. Meta and Alphabet and some others get this. The problem has two basic challenges:
Pick a biochemical function you want.

- What structure provides that function?
- What amino acid sequence yields that structure?
We're getting closer to answering the second question with these structure prediction models. Once you can reliably answer both questions, the world is your oyster. Want to catalyze hundreds of the most valuable reactions used in industrial chemical production, thereby lowering cost, increasing efficiency, increasing yield, and even opening entirely new avenues of chemical engineering? You can. Want to develop new classes of drugs to effectively treat hundreds of the highest priority diseases? You can. Want cheap sensors that can detect anything? Want to engineer perfect crops? Want to turn waste into fuel? Want to cheaply and easily construct and repair polymers? Want to make complex metamaterials? Want real, sophisticated nanotechnology? The list goes on, well into the unimaginable. And, once you can answer the two questions, it's super cheap to make arbitrary amino acid sequences.
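To make the sequence-to-structure direction concrete, here's a minimal sketch of running ESMFold (the model the linked Meta AI work is about) on a single sequence, following the usage documented in the facebookresearch/esm repo. It assumes the `fair-esm` package with the esmfold extras installed and a GPU available; the example sequence is just an arbitrary placeholder, not anything from this thread.

```python
# Minimal sketch: predict a 3D structure from an amino acid sequence with ESMFold.
# Assumes: pip install "fair-esm[esmfold]" and a CUDA-capable GPU.
import torch
import esm

# Placeholder example sequence; swap in any protein sequence of interest.
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

model = esm.pretrained.esmfold_v1()  # downloads ESMFold weights on first use
model = model.eval().cuda()

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # predicted structure as a PDB-format string

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```

The design problem discussed above (given a desired structure, find a sequence that folds into it) is the inverse of this call, but fast, reliable structure predictors like this are what make that inverse search tractable, since candidate sequences can be scored by predicting their structures.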
Figuring this out would be like discovering fire. It's especially interesting because it will almost certainly happen, and be virtually perfected, within the next couple of decades at the latest, IMO.
OnceReturned t1_j9rb03o wrote
Reply to comment by suflaj in Why bigger transformer models are better learners? by begooboi
>We're talking about a physically impossible number of parameters here, which will require solutions radically different than simple matrix multiplication and nonlinear activations.
Solutions for what, exactly? Memorizing the entire internet (or the entire training set, but still)?