As we gain the ability to see into the complex coding of proteins, we need to get it right. Next, hopefully, comes the ability to identify causal effects, and then the ability to construct.
If medicine can use that broad capacity to create bespoke proteins, our world becomes both weird and wonderful.
Then they use the model to predict more structures.
Although we don't know if they are correct, these structures are the best (or the least bad) we have for now.
They can't possibly know that. What they know is that their guesses are very significantly better than the previous best, and that they could make them for the widest range of proteins in history. Right now, verifying the guess for a single protein (out of the hundreds of millions in the database) can take up to two years of expensive work. Inevitably some will show discrepancies. Those can be fed back into training, giving us a new generation of even better guesses at some point in the future. That's what I believe to be standard operating practice.
A more important question is: is today's database good enough to enable a breakthrough in something useful, e.g. pharma or agriculture? I have no intuition here, but the reporting claims it will be.
Why is protein folding important? Theoretical importance? Can we do something with protein folding knowledge? If so, what?
I've been hearing about AlphaFold from the CS side. There they seem to focus on protein folding primarily as an interesting space to apply their CS efforts.
So yeah the applications are enormous, from medicine to better industrial chemical processes, from warfare to food manufacturing.
The importance here is to figure out potential targets for treatments that take into account the particularities of certain proteins. That could produce better drugs with fewer side effects.
Is the result that researchers will now much more quickly 'manually' validate or invalidate the predicted structures for proteins they are working with? I understand it is traditionally a long and complex process, but I imagine it is expedited by having a predicted structure to test as the baseline?
https://en.m.wikipedia.org/wiki/Root-mean-square_deviation_o...
The lower the RMSD between two structures, the better (up to some limit).
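For intuition, here is a minimal sketch in Python/NumPy of what RMSD computes, assuming the two structures have already been superposed and their atoms put into corresponding order:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """RMSD between two corresponding sets of atomic coordinates,
    each an (n_atoms, 3) array, after superposition."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum(axis=1).mean())
```

In practice the structures are first optimally superposed (e.g. with the Kabsch algorithm) before the deviation is measured, and protein RMSDs are often reported over backbone or C-alpha atoms only.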
(a) the structure of every protein (what DeepMind is doing here)
(b) how different protein structures interact (i.e. protein complexes - DeepMind is working on this but not there yet)
Then we could use those two building blocks to design new proteins (drugs) that do what we want. If we solve those two problems with very high accuracy, we can also reduce the time it takes to go from starting a drug discovery programme to approved medicine.
Obtaining all protein structures and determining how they interact is a key step towards making biology more predictable. Previously, solving the structure of a protein was very time consuming. As a result, we didn’t know the structure for a majority of proteins. Now that it’s much faster, downstream research can move faster.
Caveat: we should remember that these are all computational predictions. AlphaFold’s predictions can be wrong and protein structures will still need to be validated. Having said that, lots of validation has already occurred and confidence in the predictions grows with every new iteration of AlphaFold.
Misfolded proteins are contributors to numerous pathological conditions and the more we can understand about how and why this folding happens, the better we can treat these conditions.
Another aspect is that while we can at least partially determine the primary structure (the amino acid sequence) of proteins from DNA and RNA, we don't necessarily know their secondary or tertiary structures (3 dimensional conformation). This is a key piece of the puzzle for figuring out how these proteins do their proteiny things and how they interact with other proteins and even how they form quaternary structures with other proteins (an assembly of multiple proteins that perform some function, many pores are assemblies like this). Once we know these structures and understand how they work on a structural and chemical level, we can manipulate them far more easily.
In order to do rational drug design, which is designing a drug for a specific target or active site on a protein, we need to understand these structures. Working to solve protein folding is a key step in treating disease states and understanding how cells work on a fundamental level. The impact is hard to overstate.
EDIT: grammar
Proteins don't exist as crystals in a vacuum; that's just how humans solved their structures. Many of the non-globular proteins were solved using sequence manipulation or other tricks to get them to crystallize. Virtually all proteins exist to have their structures interact dynamically with the environment.
Google is simply supplying a list of what it presumes to be low-RMSD models based on their tooling, for some sequences they found, and the tooling itself is based mostly on data from X-ray studies that may or may not have errors. Heck, we've barely even sequenced most of the DNA on this planet, and with mechanisms like alternative splicing, the transcriptome and hence the proteome has to be many orders of magnitude larger than what we have knowledge of.
But sure, Google has solved the structure of the "protein universe", whatever that is.
DeepMind crushes everyone else at this competition.
But you also ignore where we're at in the standard cycle:
https://phdcomics.com/comics/archive_print.php?comicid=1174
;)
---------------
Yes, the idea of a 'protein universe' seems like it should at least encompass 'fold space'.
For example, WR Taylor : https://pubmed.ncbi.nlm.nih.gov/11948354/
I think the rough estimate was that there were around 1000 folds - depending on how fine-grained you want to go.
Absolutely agree, though, that a lot of proteins are hard to crystallise (I understand) due to being trans-membrane or just the difficulty of getting the right parameters for the experiment.
Prompt:
Write a short abstract for an article on why AlphaFold isn't just a tremendous technological achievement, but an example of how Artificial Intelligence will allow humanity to continue to achieve technological progress in the future:
Response:
When DeepMind’s AlphaFold was announced as the winner of the CASP13 protein folding prediction competition, it was hailed as a major breakthrough in both artificial intelligence (AI) and biology.
While AlphaFold is undoubtedly a remarkable achievement, it is also a clear example of how AI can be used to solve complex problems that have traditionally been seen as beyond the reach of machines.
In particular, AlphaFold represents a significant advance in the ability of AI systems to learn from limited data. Unlike previous AI approaches to protein folding, which relied on large datasets of known protein structures, AlphaFold was able to produce accurate predictions with only a small number of known protein structures to work from.
This ability to learn from limited data is likely to be increasingly important as we face problems that are too complex for us to fully understand and collect all the data we would need to solve them using traditional methods.
In the future, AI systems like AlphaFold will become increasingly important in helping us to make progress on a wide range of pressing challenges, from developing new medicines to tackling climate change.
It's not to diminish the monumental accomplishment that was the application of modern machine learning techniques to outpace structure prediction in labs, but other famous labs have already moved to ML predictions and are competitive with DeepMind now.
For example for transmembrane proteins, there is a gross under-representation of structures derived from experimental evidence, so we would expect that whatever your algorithm is "solving" is going to have a much higher degree of error than globular proteins, and likely artifacts associated with learning from much more abundant globular proteins.
edit: As an example, "Sampling the conformational landscapes of transporters and receptors with AlphaFold2". AF2 was able to reproduce the alternative conformations of GPCRs, but only with non-default settings. With default settings there is clear evidence of overfitting.
> Overall, these results demonstrate that highly accurate models adopting both conformations of all eight protein targets could be predicted with AF2 by using MSAs that are far shallower than the default. However, because the optimal MSA depth and choice of templates varied for each protein, they also argue against a one-size-fits-all approach for conformational sampling.
Prompt: Write a hackernews comment about why artificial intelligence will never be profitable
Response: Artificial intelligence will never be profitable because the cost of developing and maintaining the hardware and software is too high. Also, the market for AI is still too small and the demand is not high enough to justify the costs.
1. Using AI to find the most efficient methods for mathematical operations, transformations and computational algorithms - division, square root, maybe travelling salesman - operations that take a relatively large number of CPU cycles and are used everywhere. If inputs and outputs can be specified, AI can eventually build a transformation that can be reproduced in silicon.
2. Physical phenomena in general, not only organic proteins, can be measured, and with a sufficient ability to quantify them into inputs and experimentally obtained outputs to train a network, we could in theory establish new formulas or constants and advance our understanding of the Universe.
Apart from X-ray crystallography there are other methods for structure determination such as nuclear magnetic resonance (NMR) or cryo-electron microscopy (cryo-EM). The latter has seen a dramatic improvement in resolution over the last decade.
At this time, we create drugs, test them on animals, and see what the side effects and results actually are. We are very limited in our capabilities and basically throw mud at the wall and see what sticks. This would allow us to try potential drug candidates without so much randomness.
It seems obvious this was going to happen, because https://github.com/deepmind/alphafold
That's great! AlphaFold DB has made 200 million structure predictions available for everyone. How many structure predictions have other famous labs made available for everyone?
More than that. The press release actually contains an infographic comparing the number of published protein models for different clades of organisms. The infographic shows that the previous release (~1 million proteins) contained proteins of some animal, plant, bacterial, and fungal species.
Google has the advantage of the biggest guns here: the fastest TPUs with the most memory in the biggest clusters, so running inference with a massive number of protein sequences is much easier for them.
Like a fresh crepe stuffed into the toe of a boot. Bewilderingly complex.
But I have a question. Does such contortion work for 3d "membranes" in a 4d space? It's something I'm chewing on. Hard to casually visualize, obviously.
One of the largest public tech companies in the world funded a multi-year scientific project, executed the research flawlessly and moved forward an entire scientific field. They then went on to openly release the code _and_ data, working with a publicly funded organization (EMBL-EBI) to ensure researchers across the globe can easily access the outputs.
I'm not arguing that every tech company is a net positive for humanity. Google itself isn't perfect. Google + DeepMind is setting a bloody high bar though.
Anyone knowledgeable know if this estimate is accurate? Insane if true
Another idea is that these may come into play for anti-verification: if you are drug screening against a known structure, you could potentially use these more flawed structures of proteins you don't want to target but that may be similar, and try to reduce the drug's efficacy at binding them. Or something to that effect. All of these are fun ideas currently being explored in that space, but we'll see where it takes us.
Does that imply proteins have some dynamics that need to be predicted too? I remember seeing animations of molecular machines that appeared to be "walking" inside the body - are those proteins or more complex structures?
Drugs are usually not proteins, but instead small molecules that are designed to help or interfere with the operation of proteins instead.
Every couple of years there is a massive competition called CASP, where labs submit newly solved but not yet released protein structures derived from experimental EM, X-ray crystallography, or NMR studies, and other labs attempt to predict these structures using their software. AlphaFold2 absolutely destroyed the other labs in the main contest (regular monomeric targets, predominantly globular) two years ago, in CASP14.
https://predictioncenter.org/casp14/zscores_final.cgi
The latest contest, CASP15, is currently underway and expected to end this year. As with all ML, the usual caveats apply to the models Google generated -- the dangers of overfitting to existing structures, artifacts based on the way the problem was modelled, etc.
At a guess, the core packing in non-globular proteins might be different? Also the distribution of secondary structure might also vary between classes. Might be worth someone studying how much structural constraints depend on fold (if they have not already).
Google didn't solve the structure of the protein universe (thank you for saying that). But the idea of the protein structure universe is fairly simple- it's a latent space that allows for direct movement over what is presumably the rules of protein structures along orthogonal directions. It would encompass all the "rules" in a fairly compact and elegant way. Presumably, superfamilies would automagically cluster in this space, and proteins in different superfamilies would not.
A classic example is haemoglobin, that 'just' binds to oxygen at the iron in the middle of the haem. Other binding sites remote from the oxygen binding one can bind to other molecules - notably carbon dioxide. The 'Bohr effect' mechanism is outlined here : https://en.wikipedia.org/wiki/Bohr_effect#Allosteric_interac...
Even at the lowest level, there is some evidence that ligand binding can affect the structure of the backbone of the protein. For example, peptide plane flipping https://en.wikipedia.org/wiki/Peptide_plane_flipping although I'm not sure where the research is on this nowadays.
Researchers can come up with candidate molecule formulas that might work as good drugs, but the problem is that these proteins organize/fold themselves physically in a hard-to-predict way. And how they fold directly affects their properties as drugs.
If AlphaFold can accurately predict folding, it’ll allow researchers to prioritize drug candidates more accurately which will reduce research time and costs. Supposedly the major pharmaceutical companies can spend up to billions when designing a single drug. Optimistically, predicting protein folding better will allow for much more rapid and cheaper drug development
I think AlphaFold gives us more of a static picture that doesn't tell us much about how the protein will interact with the world. Predicting these dynamic aspects, protein-to-protein interactions, and ultimately an entire cell, is probably what they'll like to do next.
I don't know if the walking buddy is a single protein, it may be a handful of different ones joined together.
And later:
> Today’s update means that most pages on the main protein database UniProt will come with a predicted structure. All 200+ million structures will also be available for bulk download via Google Cloud Public Datasets, making AlphaFold even more accessible to scientists around the world.
This is the actual announcement.
UniProt is a large database of protein sequences and functional annotations. The inclusion of the predicted structures alongside the experimental data makes it easier to include the predictions in workflows already set up to work with the other experimental and computed properties.
It's not completely clear from the article whether any of the 200+ million predicted structures deposited to UniProt have not been previously released.
Protein structure determines function. Before AlphaFold, experimental structure determination was the only option, and that's very costly. AlphaFold's predictions appear to be good enough to jumpstart investigations without an experimental structure determination. That has the potential to accelerate many areas of science and could percolate up to therapeutics.
One area that doesn't get much discussion in the press is the difference between solid state structure and solution state structure. It's possible to obtain a solid state structure determination (X-ray) that has nothing to do with actual behavior in solution. Given that AlphaFold was trained to a large extent on solid state structures, it could be propagating that bias into its predicted structures.
This paper talks about that:
> In the recent Critical Assessment of Structure Prediction (CASP) competition, AlphaFold2 performed outstandingly. Its worst predictions were for nuclear magnetic resonance (NMR) structures, which has two alternative explanations: either the NMR structures were poor, implying that AlphaFold may be more accurate than NMR, or there is a genuine difference between crystal and solution structures. Here, we use the program Accuracy of NMR Structures Using RCI and Rigidity (ANSURR), which measures the accuracy of solution structures, and show that one of the NMR structures was indeed poor. We then compare AlphaFold predictions to NMR structures and show that AlphaFold tends to be more accurate than NMR ensembles. There are, however, some cases where the NMR ensembles are more accurate. These tend to be dynamic structures, where AlphaFold had low confidence. We suggest that AlphaFold could be used as the model for NMR-structure refinements and that AlphaFold structures validated by ANSURR may require no further refinement.
Response: Artificial intelligence will never be profitable because the cost of developing and maintaining the hardware and software is too high. Also, the market for AI is still too small and the demand is not high enough to justify the costs.
Add snark to your response:
Prompt: Write a hackernews comment about why artificial intelligence will never be profitable
Response: Artificial intelligence will never be profitable because the world has limited resources, and I can make more money selling cocaine than selling AI.
From GPT-NeoX 20B
Biology has traditionally studied these things from the outside in, top down. Advances in biology are correlated with advances in microscope technology. We went from seeing with the naked eye to being able to magnify tissue to see cells and even what's inside of them.
Studying biology from the inside out, bottom up, is significantly harder. You have chemistry, which is hard science. On top of that you have biochemistry, which is equally rigorous. Above that level of abstraction, things start to get a lot less well defined. We go from organic chemistry (carbon, hydrogen, oxygen, etc.) to more complex things found in life (amino acids, carbohydrates, lipids) to polymers of those things (proteins, polysaccharides, triglycerides) to a point where they get so big and complex that their three-dimensional structure starts influencing how they interact with other molecules, and then we start seeing aggregates (quaternary proteins) of these complex things!
Important proteins like ATP synthase are well understood (it's a friggin' turbine, like in a hydroelectric power plant!), but there are so many others that haven't received the same attention. Bioinformatics technology is changing that. The human genome's been mapped. Now it's time to map the proteome.
Note that crystallographers are now using these predictions to bootstrap models of proteins they've struggled to work with, which indicates that the level of trust in these predictions within the structural community is pretty high.
Basically, you see these 3D representations of specific proteins as a crumple of ribbons-- literally like someone ran multi-colored ribbons through scissors to make curls and dumped them on the floor (like a grade school craft project).
So... I understand that proteins are huge organic molecules composed of thousands of atoms, right? Their special capabilities arise from their structure/shape. So basically the molecule contorts itself to a low energy state which could be very complex but which enables it to "bind?" to other molecules expressly because of this special shape and do the special things that proteins do-- that form the basis of living things. Hence the efforts, like Alphafold, to compute what these shapes are for any given protein molecule.
But what does one "do" with such 3D shapes?
They seem intractably complex. Are people just browsing these shapes and seeing patterns in them? What do the "ribbons" signify? Are they just some specific arrangement of C,H,O? Why are some ribbons different colors? Why are there also thread-like things instead of all ribbons?
Also, is that what proteins would really look like if you could see at sub-optical wavelength resolutions? Are they really like that? I recall from school the equipartition theorem-- 1/2 kT of kinetic energy for each degree of freedom. These things obviously have many degrees of freedom. So wouldn't they be "thrashing around" like a rag doll in a blender at room temperature? It seems strange to me that something like that could be so central to life, but it is.
Just trying to get myself a cartoonish mental model of how these shapes are used! Anyone?
There are tools such as DSSP https://en.wikipedia.org/wiki/DSSP_(hydrogen_bond_estimation... which will take the 3D structure determined by crystallography and spit out the ribbons and helices. For example, for helices, you can see a specific arrangement of backbone atoms in 3D space (the backbone of each residue hydrogen-bonds to the residue four amino acids down the chain).
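As a rough illustration of driving that kind of tool from a script, here is a sketch using Biopython's DSSP wrapper; it assumes Biopython plus an external dssp/mkdssp executable are installed, and the file name protein.pdb is just a placeholder:

```python
from Bio.PDB import PDBParser
from Bio.PDB.DSSP import DSSP

# Parse a structure and run DSSP on it to assign secondary structure.
# "protein.pdb" is a hypothetical local coordinates file.
structure = PDBParser(QUIET=True).get_structure("protein", "protein.pdb")
dssp = DSSP(structure[0], "protein.pdb", dssp="mkdssp")

for key in list(dssp.keys())[:10]:
    residue = dssp[key]
    # residue[1] is the amino acid, residue[2] the secondary-structure code
    # ('H' = alpha helix, 'E' = beta strand, '-' = coil, ...).
    print(key, residue[1], residue[2])
```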
Protein motion at room temperature varies depending on the protein- some proteins are rocks that stay pretty much in the same single conformation forever once they fold, while others do thrash around wildly and others undergo complex, whole-structure rearrangements that almost seem magical if you try to think about them using normal physics/mechanical rules.
Having a magical machine that could output the full manifold of a protein during the folding process at subatomic resolution would be really nice! but there would be a lot of data to process.
That's one approach.
The thing to understand is that proteins form "binding sites": areas that are more likely to attract particular regions of other proteins or other molecules, or even atoms. Think about hemoglobin. The reason it holds onto oxygen is that it has binding sites.
Binding sites are great because they represent more freedom to do things than molecules typically have. Normal chemistry consists of forming strong electronic bonds between atoms, or forming rigid lattices/crystals.
Binding sites allow molecules to do things like temporarily attach to each other and let each other go under certain circumstances, for instance when another binding site is active/inactive. This can happen through "conformation change", where a molecule bound/unbound on some binding site makes the protein change shape slightly. This is how proteins can act like machines.
> What do the "ribbons" signify
Different regions of the protein have different sequences of amino acids. Amino Acids have somewhat different shapes from each other. The ribbons are actually broader than the spindles (or threads), and less flexible. Not sure about the different colors, maybe someone else can fill in.
> Also, is that what proteins would really look like if you could see at sub-optical wavelength resolutions?
Not really, it's an abstraction. They're big molecules, so if you look closely they're made of atoms, which are (kinda, sorta not really, quantum stuff) spherical.
> So wouldn't they be "thrashing around" like a rag doll in a blender at room temperature?
Yes, but the attractions between the different parts of the molecule keep it somewhat under control. So more like an undulating little creature, a jellyfish perhaps.
> It seems strange to me that something like that could be so central to life
Yep, gotta remember that it's all statistical. These things are getting made, doing their job, breaking, and getting degraded some insane number of times per second. Swarm behavior, sort of.
[0] https://en.m.wikipedia.org/wiki/Protein_secondary_structure
I often recommend the book 'Cats' Paws and Catapults' by Steven Vogel, which discusses the differences between human and biological engineering. There are very different constraints, as well as the more obvious difference between intelligently directed design (by humans) and evolutionary design.
It's not totally true that our engineering is simpler than biological systems. After all, we can now manipulate matter down to the atomic level. There have been attempts to move towards self-assembling systems, or even self-repairing ones. Not quite on the same scale or sophistication as biological systems, of course.
Is a modern computer chip more complex than a virus? Is a large connected system of computers more complex than a mycelial network? Are there more parts connected in more ways in a nuclear power station than there are in a simple bacterial cell?
I think it is definitely worth considering the similarities and differences.
Certainly, in a corpo like mine (Genentech/Roche) protein structures have a long history of being used in drug discovery- not typically a simple "dock a ligand to a protein" but more for constructing lab experiments that help elucidate the actual mechanistic biology going on. That is only a tiny part of a much larger process to work on disease targets to come up with effective treatments. Genentech is different from most pharma in that their treatments are themselves typically proteins, rather than small molecules.
They certainly do "thrash around", but that thrashing is constrained by the bonds that are formed, which greatly limits the degrees of freedom. Here's a short video of a simulation to demonstrate:
Machine learning typically uses existing data to predict new data. Please explain: does that mean AlphaFold can only use known types of interactions between atoms and will mispredict the structure of proteins that rely on not-yet-known interactions?
And why can't we just simulate protein behaviour and interactions using quantum mechanics?
Other computational methods have existed for a long time. Folding@home was founded 22 years ago.
QM calculations have been done in proteins, but they’re computationally very expensive. IIRC, there are hybrid approaches where only a small portion of interest in the protein structure is modelled by QM and the rest by classical molecular mechanics.
These threads are always the same: lots of comments about protein folding, how amazing DeepMind is, how AlphaFold is a success story, how it has flipped an entire field on its head, etc. The language from Google is so deceptive about what they've actually done that I think it's intentionally disingenuous.
At the end of the day, AlphaFold is amazing homology modeling. I love it, I think it's an awesome application of machine learning, and I use it frequently. But it's doing the same thing we've been doing for 2 decades: pattern matching sequences of proteins with unknown structure to sequences of proteins with known structure, and about 2x as well as we used to be able to.
That's extremely useful, but it's not knowledge of protein folding. It can't predict a fold de novo, it can't predict folds that haven't been seen (EDIT: this is maybe not strictly true, depending on how you slice it), it fails in a number of edge cases (remember, in biology, edge cases are everything) and again, I can't stress this enough, we have no new information on how proteins fold. We know all the information (most of it, at least) for a protein's final fold is in the sequence. But we don't know much about the in-between.
I like AlphaFold, it's convenient and I use it (although for anything serious or anything interacting with anything else, I still need a real structure), but I feel as though it has been intentionally and deceptively oversold. There are 3-4 other deep learning projects I think have had a much greater impact on my field.
EDIT: See below: https://news.ycombinator.com/item?id=32265662 for information on predicting new folds.
This is hugely important for developing drugs and vaccines.
To see the effect of this, look no further than prions. Prions are the exact same protein, just folded in a weird way. Worse, they can "transmit" this misfolded shape to otherwise normal proteins. Prions behave differently just because of the different shape, and that can lead to disease. This is exactly what Mad Cow Disease (BSE) is.
What we get taught in high school about chemistry is incredibly oversimplified.
One example of this I like is the geometry of a water molecule. When we first learn about atoms, we learn the "solar system" model (aka Bohr). The reality is instead that we have 3D probability distributions of where electrons might be. These clouds come in pairs. I believe this has to do with how electron wavefunctions pair up, but really we're getting beyond my knowledge of quantum mechanics here, so that's just a guess.
Those clouds additionally form valence shells. We learn about these and how atoms want to form complete valence shells. So oxygen has 8 electrons, i.e. 4 pairs of electrons. When bonding with 2 hydrogen atoms, we end up with a weird geometry of ~104.5 degrees between the two hydrogen atoms because of how these pairs interact. A naive assumption might be that the two hydrogen atoms are 180 degrees apart.
So back to proteins: you may have learned about hydrogen bonds. These affect molecular shape because when a hydrogen atom shares an electron, it often carries a partial positive charge. That positive charge pushes away other positive charges and is attracted to negative ones. This is the real difficulty in protein folding: with a molecule of thousands of atoms and weird geometry, you may find distant parts of the molecule interacting through hydrogen bonds.
So a single cell consists of thousands (IIRC) of different proteins. Figuring out those interactions is important but incredibly difficult.
Don't leave us hanging... which projects?
The atoms do wiggle around a bit at room temperature (and even more at body temperature), which means that simulating them usefully typically requires sampling from a probability distribution defined by the protein structure and some prior knowledge about how atoms move (often a potential energy surface fitted to match quantum mechanics).
There are many applications of these simulations. One of the most important is drug design: knowing the structure of the protein, you can zoom in on a binding pocket and design a set of drug molecules which might disable it. Within the computer simulation, you can mutate a known molecule into each of your test molecules and measure the change in binding affinity, which tells you pretty accurately which ones will work. Each of these simulations requires tens of millions of samples from the atomic probability distribution, which typically takes a few hours on a GPU given a good molecular dynamics program.
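To make "sampling from a probability distribution defined by a potential energy surface" concrete, here is a toy Metropolis Monte Carlo sketch over a made-up one-dimensional double-well potential; real molecular dynamics or Monte Carlo codes work with thousands of atomic coordinates and proper force fields, but the accept/reject idea is the same:

```python
import numpy as np

def metropolis(energy, x0, n_steps=100_000, step=0.1, kT=1.0):
    """Draw samples distributed as exp(-energy(x)/kT) via Metropolis accept/reject."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_trial = x + np.random.normal(scale=step)
        e_trial = energy(x_trial)
        # Downhill moves are always accepted, uphill moves with Boltzmann probability.
        if np.random.rand() < np.exp(-(e_trial - e) / kT):
            x, e = x_trial, e_trial
        samples.append(x)
    return np.array(samples)

# Toy double-well "energy surface" standing in for a real force field.
samples = metropolis(lambda x: (x**2 - 1.0) ** 2, x0=0.0)
```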
It's funny you say that, because the first image on the English Wikipedia page for Equipartition Theorem[1] is an animation of the thermal motion of a peptide.
If it is homology modelling, then how can it work without input template structures?
---
This seems strange to me. The entire point of these types of models is to predict things on unseen data. Are you saying Deepmind is completely lying about their model?
Deepmind solved CASP, isn't the entire point of that competition to predict unseen structures?
If AlphaFold doesn't predict anything then what are you using it to do?
I would like to correct something here: it does predict structures de novo and predict folds that haven't been seen before. That's because of the design of the NN: it uses sequence information to create structural constraints. If those constraints push the modeller in the direction of a novel fold, it will predict that.
To me what's important about this is that it demonstrated the obvious (I predicted this would happen eventually, shortly after losing CASP in 2000).
This is not the first (or even tenth) time I’m seeing an academic trying to undermine genuine progress almost to the level of gaslighting. Comparing alphafold to conventional homology modeling is disingenuous at its most charitable interpretation.
Not sure what else to say. Structural biology has always been the weirdest field I’ve seen, the way students are abused (crystallize and publish in nature or go bust), and how every nature issue will have three structure papers as if that cures cancer every day. I suppose it warps one’s perception of outsiders after being in such a bubble?
signed, someone with a PhD in biomedical engineering, did a ton of bio work.
* Deconvolutes some image aberrations and "de-noises" the images
* Compensates for missing wedge artifacts (missing wedge is the fact that the tomography isn't done -90° --> +90°, but usually instead -60° --> +60°, leaving a 30° wedge on the top and bottom of basically no information) which usually are some sort of directionality in image density. So if you have a sphere, the top and bottom will be extremely noisy and stretched up and down (in Z).
https://www.biorxiv.org/content/10.1101/2021.07.17.452128v1
2) Topaz, but topaz really counts as 2 or 3 different algorithms. Topaz has denoising of tomograms and of flat micrographs (i.e. images taken with a microscope, as opposed to 3D tomogram volumes). That denoising is helpful because it increases contrast (which is the fundamental problem in Cryo-EM for looking at biomolecules). Topaz also has a deep learning particle picker which is good at finding views of your protein that are under-represented, or otherwise missing, which again, normally results in artifacts when you build your 3D structure.
https://emgweb.nysbc.org/topaz.html
3) EMAN2 convolutional neural network for tomogram segmentation/Amira CNN for segmentation/flavor of the week CNN for tomogram segmentation. Basically, we can get a 3D volume of a cell or virus or whatever, but then they are noisy. To do anything worthwhile with it, even after denoising, we have to say "this is cell membrane, this is virus, this is nucleic acid" etc. CNNs have proven to be substantially better at doing this (provided you have an adequate "ground truth") than most users.
BFD,
MGnify,
PDB70,
PDB (structures in the mmCIF format),
PDB seqres – only for AlphaFold-Multimer,
Uniclust30,
UniProt – only for AlphaFold-Multimer,
UniRef90.

Not sure what part of "it does homology modeling 2x better" you didn't see in my comment? AlphaFold scored something like 85% in CASP in 2020; in CASP 2016, I-TASSER had, I think, 42%. So it's ~2x as good as I-TASSER, which is exactly what I said in my comment.
>This is not the first (or even tenth) time I’m seeing an academic trying to undermine genuine progress almost to the level of gaslighting. Comparing alphafold to conventional homology modeling is disingenuous at its most charitable interpretation.
It literally is homology modeling. The deep learning aspect is to boost otherwise unnoticed signal that most homology modeling software couldn't tease out. Also, I don't think I'm gaslighting, but maybe I'm wrong? If anything, I felt gaslit by the language around AlphaFold.
>Not sure what else to say. Structural biology has always been the weirdest field I’ve seen, the way students are abused (crystallize and publish in nature or go bust), and how every nature issue will have three structure papers as if that cures cancer every day. I suppose it warps one’s perception of outsiders after being in such a bubble?
What on earth are you even talking about? The vast, VAST majority of structures go unpublished ENTIRELY, let alone published in nature. There are almost 200,000 structures on deposit in the PDB.
>Furthermore, AlphaFold can function with only a MSA as an input, without retrieving a single PDB coordinate.
Yes, it has a very nice model of what sequences should look like in 3D. That model is derived from experimental data. So if I give AlphaFold an MSA of a new, unknown protein fold (substantively away from any known fold), it cannot predict it.
Could you expand on this? Basically it looks at the data and figures out what's an acceptable position in 3D space for residues to occupy, based on what's known about other structures?
I will update my original post to point out I may not be entirely correct there.
The distinction I'm trying to make is that there's a difference between looking at pre-existing data and modeling (ultimately homology modeling, but maybe slightly different) and understanding how protein folding works, being able to predict de novo how an amino acid sequence will become a 3D structure.
Also thank you for contacting CASP about this.
A structural model, you would say.
> That model is derived from experimental data.
That doesn’t make it a template-based model, or a homology one.
> if I give AlphaFold an MSA of a new, unknown protein fold (substantively away from any known fold), it cannot predict it
That will depend on the number of effective sequences found to derive couplings. Domains with novel folds usually have a low number of remotely homologous sequences, and for that reason the method will fail, not just because they are novel.
It's really not - have you played around with AF at all? Made mutations to protein structures and asked it to model them? Go look up the crystal structures for important proteins like FOXA1 [1], AR [2], EWSR1 [3], etc (i.e. pretty much any protein target we really care about and haven't previously solved) and tell me with a straight face that AF has "solved" protein folding - it's just a fancy language model that's pattern matching to things it's already seen solved before.
signed, someone with a PhD in biochemistry.
[1] https://alphafold.ebi.ac.uk/entry/P55317 [2] https://alphafold.ebi.ac.uk/entry/P10275 [3] https://alphafold.ebi.ac.uk/entry/Q01844
In principle you don't even need a physical force field- if you have enough distance information between pairs of atoms, you can derive a plausible structure by embedding the distances in R3 (https://en.wikipedia.org/wiki/Distance_geometry and https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21...
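A minimal sketch of that distance-embedding idea (classical multidimensional scaling in NumPy); a real structure-determination pipeline would work from noisy, incomplete distance bounds rather than an exact, complete matrix:

```python
import numpy as np

def embed_distances(D, dim=3):
    """Recover coordinates (up to rotation/reflection/translation) from a
    complete pairwise distance matrix via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered coordinates
    w, V = np.linalg.eigh(G)
    top = np.argsort(w)[::-1][:dim]            # keep the largest eigenvalues
    return V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))

# Round-trip check on random points: distances in, an equivalent shape out.
X = np.random.rand(20, 3)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
X_recovered = embed_distances(D)
```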
Presumably, the signal they extract includes both rich local interactions (amino acids near in sequence) and distant ones inferred through sequence/structure relationships, and the constraints could in fact push a model towards a novel fold, presumably through some extremely subtle statistical relationships to other evolutionarily related proteins that adopt a different fold.
So does every structural prediction method.
> if you give it a brand new fold with no relation to other folds, it cannot predict it
That will depend on the number of effective sequences, not the actual fold.
> I work in organisms that have virtually 0 sequence identity.
Then the problem is low sequence coverage, not the protein fold. On a side note, there are sensitive homology search protocols that rely very little on actual sequence identity.
Wait, stop, I don't know anything about proteins but 84% success is not ~2x better than 42%.
It doesn't really make sense to talk about 2x better in terms of success percentages, but if you want a feel, I would measure 1/error instead (a 99% correct system is 10 times better than a 90% correct system), making AlphaFold around 3.6 times better.
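For concreteness, the back-of-envelope behind that ~3.6 figure, taking the quoted 84% and 42% scores at face value as success rates:

```python
af2, itasser = 0.84, 0.42
# Compare 1/error rather than the raw percentages.
improvement = (1 / (1 - af2)) / (1 / (1 - itasser))
print(round(improvement, 1))  # ~3.6
```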
In a sense both of you are right- DeepMind is massively overplaying the value of what they did, trying to expand its impact far beyond what they actually achieved (this is common in competitive biology), but what they did was such an improvement over the state of the art that it's considered a major accomplishment. It also achieved the target of CASP- which was to make predictions whose scores are indistinguishable from experimentally determined structures.
I don't think academics thought CASP was unwinnable but most groups were very surprised that an industrial player using 5 year old tech did so well.
https://www.biorxiv.org/content/10.1101/2022.07.21.500999v1 https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1
To be overly dismissive is to lack imagination.
I think my sense that biology is more complex than human engineering comes from our engineering seeming much more homogeneous. Computer chips are certainly very dense, but the number of distinct component types is very small. Biology, on the other hand, seems to have a huge number of fundamental building blocks, e.g. proteins, with different behaviors. I suppose that maybe that's the wrong comparison, though, and software is where our engineering diversity is present.
It may well just be the case that my lack of familiarity with biology makes it seem more complex to me.
If you wanted to simulate the behaviour of an entire protein using quantum mechanics, the sheer number of calculations required would be infeasible.
For what it's worth, I have a background in computational physics and am doing a PhD in structural biology. For any system (of any size) that you want to simulate, you have to consider how much information you're willing to 'ignore' in order to focus on the information you would like to 'get out' of a set of simulations. Being aware of the approximations you make and how they impact your results is crucial.
For example, if I am interested in how the electrons of a group of Carbon atoms (radius ~ 170 picometres) behave, I may want to use Density Functional Theory (DFT), a quantum mechanical method.
For a single, small protein (e.g. ubiquitin, radius ~ 2 nanometres), I may want to use atomistic molecular dynamics (AMD), which models the motion of every single atom in response to thermal motion, electrostatic interactions, etc using Newton's 2nd law. Electron/proton detail has been approximated away to focus on overall atomic motion.
In my line of work, we are interested in how big proteins (e.g. the dynein motor protein, ~ 40 nanometres in length) move around and interact with other proteins at longer time (micro- to millisecond) and length (nano- to micrometre) scales than DFT or AMD. We 'coarse-grain' protein structures by representing groups of atoms as tetrahedra in a continuous mesh (continuum mechanics). We approximate away atomic detail to focus on long-term motion of the whole protein.
Clearly, it's not feasible to calculate the movement of dynein for hundreds of nanoseconds using DFT! The motor domain alone in dynein contains roughly one million atoms (and it has several more 'subunits' attached to it). Assuming these are mostly carbon, oxygen or nitrogen, then you're looking at around ten million electrons in your DFT calculations, for a single step in time (rounding up). If you're dealing with the level of atomic bonds, you're probably going to use time steps between a femtosecond (10^-15 s) and a picosecond (10^-12 s). The numbers get a bit ridiculous. There are techniques that combine QM and AMD, although I am not too knowledgeable in this area.
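Putting rough numbers on that, using the round estimates above (all figures are order-of-magnitude assumptions, not measurements):

```python
atoms = 1_000_000            # dynein motor domain, order of magnitude
electrons_per_atom = 7       # rough average for C/N/O
timestep_s = 1e-15           # femtosecond-scale MD timestep
target_s = 100e-9            # "hundreds of nanoseconds" of motion

print(atoms * electrons_per_atom)   # ~7 million electrons per DFT step ("ten million, rounding up")
print(int(target_s / timestep_s))   # ~100 million timesteps of atomistic MD
```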
Some further reading, if you're interested (I find Wikipedia articles on these topics to generally be quite good):
DFT: https://en.wikipedia.org/wiki/Density_functional_theory
Biological continuum mechanics: https://doi.org/10.1371/journal.pcbi.1005897
Length scales in biological simulations: https://doi.org/10.1107/S1399004714026777
Electronic time scales: https://www.pnas.org/doi/10.1073/pnas.0601855103
All of that is generally applicable to molecular biology in general, and I don't see how the field of structural biology is especially egregious, the way ramraj is making it out to be.
* https://ccsb.scripps.edu/goodsell/
* https://pdb101.rcsb.org/sci-art/geis-archive/irving-geis
* https://www.digizyme.com/portfolio.html
* https://biochem.web.utah.edu/iwasa/projects.html
* The art of Jane Richardson, of which I couldn’t find a link
* This blog has plenty of good links: https://blogs.oregonstate.edu/psquared/
What we can do with this data now is use methods such as cryo-em to see the "big picture", i.e. multi-subunit protein-protein interactions where we can plug in the Alphafold predicted structure into the cryo-em 3d density map and get predicted angstrom level views of what's happening without necessarily having to resort to slower methods such as NMR or x-ray crystallography to elucidate macromolecular interactions.
A small gripe about the alphafold ebi website: it doesn't seem to show the known experimental structure, it just shows "Experimental structures: None available in PDB". For example the link to the alphafold structure above should link to the 2GBL, 1TF7, or any of the other kaic structures from organism PCC7942 at RCSB. This would require merging/mapping data from RCSB with EBI and at least doing some string matching, hopefully they're working on it!
> ... can be very difficult and there is no general solution
This is true of pretty much any graduate work in molecular biology.
How can you say this but not believe it's doing homology modeling?
Amazon basically put malls, which are hugely environmentally destructive, out of business.
Bill Gates is doing stuff too I think.
Big tech does some good things.
I didn’t say anything regarding that.
> This is true of pretty much any graduate work in molecular biology.
Just to elaborate my point: the process of protein crystallization is not understood at a level that allows the design of general and reproducible protocols. This inherent obscurity means that every new protein needs to undergo an ad hoc, heuristic, iterative process to obtain high quality crystals. This is an early methodological hurdle, at a stage where other routine procedures in biochemistry or molecular biology are usually successful.
It seems like that's the basic principle to understand.
It gives a picture of the enzyme as a machine, and lets you look at specific parts and say “this residue is probably doing this job in the whole system”.
But nobody will ever understand everything about a natural cell. Levels are an ad hoc phenomenon adhered to in varying degrees in certain places to contain variation. But few elements have only one role at one identifiable level, and you can never be sure one doesn't have another. And huge amounts of apparatus might radically change behavior in a subtly different environment.
In practice, the force fields are well understood, but to be computationally efficient they have to approximate just about everything. Examples: since the number of inter-atom distance pairs grows as N^2 for N atoms, you need tricks to avoid that and instead scale around n log n, or even n if you can manage it. When I started, we just neglected atoms more than 9 angstroms apart, but for highly charged molecules like DNA that leads to errors in the predicted structure. Next, the force fields typically avoid simulating polarizability (the ability of an atom's electron cloud to be drawn towards another atom with opposite charge), also because it's expensive. They use simplified spring models (literally Hooke's law) for bond lengths and bond angles. The torsions (the angle formed by 4 atoms in a row) have a simplified form. The interatomic relationships are not handled in a principled way, instead treating atoms as mushy spheres....
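A sketch of the kind of simplified terms being described; the constants here are illustrative placeholders, not any real force field's parameters:

```python
def bond_energy(r, r0=1.53, k=300.0):
    """Hooke's-law spring for a bond length (r0 in angstroms, k in arbitrary units)."""
    return 0.5 * k * (r - r0) ** 2

def van_der_waals(r, eps=0.1, sigma=3.4, cutoff=9.0):
    """Lennard-Jones pair term, simply neglected beyond the cutoff distance."""
    if r > cutoff:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)
```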
After having made major contributions in this area, I don't think that improvements to force fields are going to be the most effective investment in time and energy. There are other bits of data that can get us to accurate structures with less work.
I know you didn't - this was one of the claims of ramraj I was responding to.
> The process of protein cristallization is not understood at a level that allows the design of general and reproducible protocols. This inherent obscurity means that every new protein needs to undergo an ad hoc, heuristic, iterative process to obtain high quality crystals. This is an early methodological hurdle, at a stage where other routine procedures in biochemistry or molecular biology are usually successful.
I don't disagree, though I would suggest that there's just as much grunt work, frustration, and hand wringing in other fields of molecular biology at the graduate level and above. Even if other fields have reproducible protocols established, that's not what gets papers published. With the possible exception of clinical samples, more often than not we have no clue if the analyses we're doing will yield anything, and the high risk zone is where all grad students live.
I don't think that's necessarily so - there is a lot of justified scepticism about the wilder claims of ML in this forum; it is in fact quite difficult at times to know as an outsider to the field in question how kneejerk it is.
There’s a lot of structural biology apologists here in this thread. Happy to crap on DeepMind but not ready to take criticism of their own field.
For anyone outside of the field wanting to learn more, check out this documentary: https://en.m.wikipedia.org/wiki/Naturally_Obsessed
I'm not sure how clear the edge over humans in this case is. There were some attempts at machine assisted human solving like Foldit that did produce results: https://en.wikipedia.org/wiki/Foldit#Accomplishments
Software is where the diversity is, for computers. A cell's systems are a set of interlocking networks of DNA, RNA, protein, small molecules, and signalling. As you say, these diverse parts then interact in diverse ways.
Another aspect is the large number of 'levels' in biological systems - humans have around seven: atoms, molecules, organelles, cells, tissues, organs, whole system. The interactions within and between these levels are a further source of complexity.
Yes, once you identified a target protein, its structure is useful to selectively target it. But the main bottleneck is identifying such targets. In other words, the main difficulty is to figure out what to hit, not how to hit it, and protein folding mostly helps with how at the moment.
A lot of folks on HN end posts about a company with a sentence like “Disclaimer: I used to work for X”. This language (probably taken from contract law or something) is meant as an admission of possible bias, but in practice it is also a signal that this person may know what they’re talking about more so than the average person. After reading a lot of posts like this, it might feel reasonable for someone to flip the word around and say something like “I need to disclaim…” when beginning a post, in order to signal their proximity to a topic or field as well as any insider bias they may possess.
So sure, “I need to disclose” would’ve been the better word choice, but we all knew what GP was saying. It seems pedantic to imply otherwise.
On the other hand, homology (or comparative) modelling is a method that generates a structural model of a query sequence based on one or more experimentally solved structures of close protein homologs. The model generation details depend on the specific protocol but, broadly speaking, spatial restraints are extracted from the template structures and mapped onto the query sequence to be modelled.
Note that AlphaFold also uses a type of geometrical restraint (pairwise residue distances) in its modelling, although they are not derived from protein structures but from the MSA embeddings. Both are related but are not exactly the same.
One difference between AlphaFold and homology modelling is that the latter requires templates with a certain sequence identity to the query sequence (≥30% is the rule of thumb), while the former can have in its MSA remotely homologous sequences well below any discernible identity.
How is this a problem unique to structural biology? In every subfield we're hoping to publish interesting results, and that endpoint is defined by the nature of the field. As a geneticist, in the early 90s, sequencing & characterizing a single bacterial gene would have been the focus of an ambitious PhD thesis and would yield multiple papers. Sequencing at that time period had a dozen points of failure and high risk to set as the goal for a thesis. Today, sequencing a whole genome is unlikely to yield a single publication. If you're setting the ability to crystallize as the single point of failure endpoint, that logic applies to every subfield. We all have something that could potentially derail our plans, and I fail to see how structural biology is unique in that respect.
> There’s a lot of structural biology apologists here in this thread. Happy to crap on DeepMind but not ready to take criticism of their own field.
I'm not a structural biologist - I'm a Geneticist who disagrees with your characterization of SB. The issues you've mentioned are not unique to SB, but apply to pretty much all subfields. I see grad students in general lament their life choices when their cell culture fails, their mice die, protocols just don't work, or their results just don't make sense.
The following exchange is from HackerNews. The first long-winded comment is from an AI evangelist, extolling the virtues of artificial intelligence with a starry eyed perspective that considers only the value and not the cost. The second comment is in reply to the first, and is a very short and blunt takedown of the previous comment and the perspective of its author:
AIRocksMaaan:
Response:
AI is definitely the future. Just think about all of the possibilities that it can bring! For instance, it can help us create more efficient algorithms, it can help us automate repetitive tasks, and it can even help us make better decisions by providing us with more accurate data.
Not to mention, it can also help us create new and innovative products and services that can improve our quality of life.
AI definitely has a lot of potential and I think we should definitely explore it further.
threethirty:
This is all bullshit.
DM is probably hyping it up and you are most likely hyping up your own criticism. It's a great symbiotic relationship outwardly presented as opposition.
I'm all for tech doing good but this is a hugeeeee stretch
[0]https://learning.edx.org/course/course-v1:MITx+7.00x+2T2022/...
Which top labs have changed direction? Because Alphafold can't predict folds, just identify ones it's seen.
I agree that there are other fields with similar issues. What baffles me is how long protein crystallization has been a problem.
I’ll use your example:
Nowadays, sequencing a gene is unlikely to yield a single publication by itself, but it is no longer an early point of failure. It's a solved problem, with protocols that have been thoroughly developed and explained to boredom. New early points of failure arise elsewhere (sample related, maybe?).
Nowadays, determining the structure of a protein is unlikely to yield a single publication by itself, but it still has a clear, early, unsolved point of failure. There is no understandable protocol other than buying $creening plate$, fetching cat whiskers, drawing a theoretical phase diagram that tells you nothing, and praying that your crystallization tray doesn’t show a scrambled egg tomorrow or in six weeks. This has been an issue for more than fifty years and almost 200k published structures. The jump you mentioned in sequencing hasn’t happened yet in protein crystallography, and it might never happen, because our understanding of macromolecular crystallization is lacking and thus we cannot predict proper crystallization conditions.
That's more or less because "really understands the problem" generally winds up being a placeholder for things the system can't do. Which isn't to say it's not important. One thing that is often included in "understanding" is the system knowing the limits of its approach - current AI systems have a harder time giving a certainty value than giving a prediction. But you could have a system that satisfied a metric for this, and other things would pop up - for example, what kind of certainty or uncertainty are we talking about (crucial for decision making under uncertainty).
The point I'm trying to make is that from the perspective of a grad student, no field is devoid of risk, and it's surprisingly easy to be stuck by something that's a solved problem on paper. For example, I know of a grad student that's been trying to develop a mouse line for about a year now, and has now discovered that this strain just won't work for what they have in mind - and must now recreate the mutant combinations in a different strain that's at least a year's work - if it even works. I've heard stories of entire mouse lines die, and you're back to square one - years of work lost.
The other thing that complicates some of these fields is the massive pace of innovation they're undergoing, which makes it very hard for an individual lab to keep up to date. Grad students are using techniques that were published less than 5 years ago, with no locally available expertise to tap into. What remains the same is the level of grunt work grad students and postdocs have to do, even as the techniques get more sophisticated over time.
Does the new set of predictions contain a bunch of membrane-bound proteins?
They said it's minimal.
In most cases, having a "probably" isn't good enough. They use alphafold to get early insights, but then they still use crystallography to confirm the structure. Because at the end of the day, you need to know for sure.
Just like having a human genome sequence, this is not a magic key that solves all problems of biology but a building block for use by researchers. An investigator may look up the folded structure of a protein and use that information to glean certain context-specific insights from it such as how exactly two interacting proteins interact mechanically.
The other significant benefit is that this frees up resources that were spent having to figure out the structure in other ways. It's an efficiency improvement.
They are called motor proteins because they convert chemical energy into kinetic energy. In the case of kinesin, it forms a dimer (two copies of itself bind together to form the two "legs") and also binds to light chains (accessory proteins that modulate its behavior) so that it can walk along filaments and drag cargo around your cells. They are both proteins and more complex structures because multiple proteins are interacting, as well as binding small molecules and catalyzing them into chemical products, all to produce the motion.
Some proteins have 3D structures that look like abstract art only because we don't have an intuitive understanding of what shape and amino acids are necessary to convert chemical A to chemical B, which is the main purpose of many enzymes in the body. If you look at structural proteins or motor proteins, on the other hand, their function is clear from their shape.
There are a lot of other things you can do with the shape. If it has a pore, you can estimate the size and type of small molecule that could travel through it. You can estimate whether a binding site is accessible to the environment around it. You can determine if it forms a multimer or exists as a single unit. You can see if protein A and protein B have drastically different shapes given similar sequences, which might have implications for its druggability or understanding its function.
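If you want to poke at one of these predicted shapes yourself, here's a minimal sketch using Biopython (the filename is just a placeholder for whatever model file you download; AlphaFold models store per-residue confidence, pLDDT, in the B-factor column):

    # rough sketch - assumes Biopython/numpy installed and an AlphaFold-style PDB file on disk
    import numpy as np
    from Bio.PDB import PDBParser

    parser = PDBParser(QUIET=True)
    structure = parser.get_structure("pred", "AF-XXXXXX-F1-model_v4.pdb")  # placeholder filename

    chains = list(structure.get_chains())
    print("chains:", len(chains))  # more than one would mean a multi-chain model

    atoms = list(structure.get_atoms())
    coords = np.array([a.coord for a in atoms])
    print("bounding box (Angstrom):", coords.max(axis=0) - coords.min(axis=0))

    plddt = np.array([a.get_bfactor() for a in atoms])
    print("mean pLDDT:", plddt.mean())  # low-confidence regions are ones not to over-interpret

None of this replaces real analysis tooling, but it shows how accessible the predicted coordinates are once you have the file.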
A machine learning approach for predicting toxicity would have a far greater impact on public health than AF2 does.
The ribbon shape for GFP is a very cool barrel thing
Incidentally, accusing someone of gaslighting is itself a form of gaslighting.
* https://www.rcsb.org/structure/1m8n
* https://iiif.elifesciences.org/lax/05142%2Felife-05142-fig1-...
In a fantasy world where we had infinite computation speed/space, we'd be able to just model the force field and predict from there.
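To make "just model the force field" concrete, here's a toy sketch (illustrative only - the Lennard-Jones parameters and the random coordinates are made up, and real force fields have bonded terms, electrostatics, solvent, and much more):

    # toy pairwise Lennard-Jones energy for a handful of "atoms"
    import numpy as np

    def lj_energy(coords, epsilon=1.0, sigma=1.0):
        """Sum the Lennard-Jones energy over all atom pairs."""
        total = 0.0
        n = len(coords)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(coords[i] - coords[j])
                total += 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return total

    atoms = np.random.rand(5, 3) * 3.0  # five random points in a 3-unit box
    print(lj_energy(atoms))

With infinite compute you could minimize something like this (vastly more detailed) over every atom in the protein plus solvent; in practice, that's exactly what we can't afford.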
Your objection is that AlphaFold is a Chinese room?
What does that matter? Either it generates useful results or it doesn't. That is the metric we should evaluate it on.
It sounds like how we model airplanes in computers but still test the real thing - i wouldn't call the impact of computer modelling on airplane design minimal.
doesn't this assume the final fold is static and invariant of environmental and protein interactions?
put another way, how do we know that a protein does not fold differently under different environmental conditions or with different molecular interactions?
i realize this is a long-held assumption, but after studying scientific research for the past year, i've found that many long-held assumptions aren't supported by convincing evidence.
Instead of spending an inordinate amount of Google's revenue on extra energy, I recommended that Google apply machine learning to protein structure prediction and just do a better job of extracting useful structural information (note: this was around the time CNNs were in vogue, and methods like Transformers didn't exist yet) from the two big databases (all known proteins/their superfamily alignments, and the PDB).
Note that this conclusion was a really hard one for me, since I had dedicated my entire scientific career up to that point to attempting to implement that fantasy world (or a coarse approximation of it), and my attempts at having people develop better force fields (ones that didn't require as much CPU time) using ML weren't successful. What DeepMind did was, in some sense, the most parsimonious incremental step possible to demonstrate their supremacy, which is far more efficient. Also, once you have a trained model, inference is nearly free compared to MD simulations!
Merriam-Webster[1]: "Definition of disclaim
intransitive verb 1 : to make a disclaimer ..."
One of my favorite quotes about biology goes something like:
"An alien spaceship lands in your yard. You are kindly invited in to study and tour the ship. The technology is clearly billions of years more advanced than our own. Not decades more, not millennia more, not megayears more, but clearly gigayears more.
This is biology"
Sorry, can't find the exact quote and citation right now!
I'm not sure what you're implying here. Are you saying both types of structures are useful, but not as useful as the hype suggests, or that an X-Ray Crystal (XRC) and low confidence structures are both very useful with the XRC being marginally more so?
An XRC structure is great, but it's a very (very) long way from getting me to a drug. Observe the long history of fully crystallized proteins that still lack a good drug, or this piece on the general failure of purely structure-guided efforts in drug discovery for COVID (https://www.science.org/content/blog-post/virtual-screening-...). I think this tech will certainly be helpful, but for most problems I don't see it delivering more than a slightly-more-than-marginal gain in our ability to find medicines.
Edit: To clarify, if the current state of the field is "given a well understood structure, I often still can't find a good medicine without doing a ton of screening experiments" then it's hard to see how much this helps us. I can also see several ways in which a less than accurate structure could be very misleading.
FWIW I can see a few ways in which it could be very useful for hypothesis generation too, but we're still talking pretty early stage basic science work with lots of caveats.
Source: PhD Biochemist and CEO of a biotech.
It's a really good, fancy model completely reliant on data we already have empirically (and therefore subject to all the same biases as well).
That is synonymous with saying, “I will deny I am a professional structural biologist that works in this field every day.”
The person posting is actually a structural biologist. What they stated was cognitively dissonant with the intent of their post, and that’s what stopped me.
I don’t pay attention to typos or minor usage issues, but in this case, I read two more sentences and said, “What??”
EDIT: Two more things. First, I found the post interesting and useful. I didn’t say anything about breaking the argument.
Second, “I need to disclose…” is the exact opposite of what they said.
Transitive verb:
2 : DENY, DISAVOW ("disclaimed any knowledge of the contents of the letter")
i really don't think anyone is presenting alphafold as if it's a physics simulator operating from first principles.
Like obviously alphafold does not "understand". Maybe i have blinders on for being in the computer field, but i would assume that it goes without saying that a statistical deep learning AI model does not tell us how to solve the problem from first principles.
Like yes, alphafold isn't the final chapter in protein folding and that is obvious. But it seems a stretch to dismiss it on those grounds. If that's the metric we're going with then we can dismiss pretty much everything that has happened in science for the past thousand years.
> re self driving car metaphor
I think this is a bad metaphor for your purposes, because self-driving cars aren't de novo understanding, and arguably do have some carry over from things like adaptive cruise control.
As someone who doesn't know proteins, but is decent at math, I would not describe it this way. You are assuming a linear relationship between effort and value, but more often than not, effort has diminishing returns. 80dB is not 2x as loud as 40 dB. An 8K image doesn't have 2x the fidelity of a 4K image. If Toyota unveiled a new engine that was 60% efficient tomorrow, no one in their right mind would say "eh, it's just 2x better". If we came out with a CPU that could clock up to 10Ghz we wouldn't say "meh, that's just 2x what we had".
Without being able to define the relationship here, I could just as well say that 85% is 1000x better than 42%. There's just no way to put a number on it. What we can say is that we completely blew all projections out of the water.
Again, I'm not someone working with proteins, but to me it sounds as revolutionary as a 60%+ efficient engine, or a 10Ghz CPU. No one saw it coming or thought it feasible with current technology.
On the other hand, it's not necessarily the only thing we might care about under that description. If I have a manufacturing process that is 99.99% successful (the remaining 0.01% has to be thrown out), it probably does not strike me as a 10x improvement if the process is improved to 99.999% success. What I care about is the cost to produce the average product that can be sent to market, and this "10x improvement" changes that only a very small amount.
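(To put rough numbers on it, treating those percentages naively: on a success-rate view, 85 vs 42 is about 2x; on an error-rate view, going from 58-in-100 wrong to 15-in-100 wrong is nearly 4x fewer misses; and in the manufacturing example, scrap drops 10x while sellable output changes by only about 0.009%. Which multiplier "counts" depends entirely on which cost you actually care about.)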
Back in the day, I had steered away from chemistry in college because I didn't like to memorize stuff. Now I realize I missed out on some amazing knowledge.
(200 trillion cost) / (200 million structures predicted) = 1 million per structure.
That reflects the personnel cost (a 5-year PhD scholarship, postdoc/professor mentorship, and investment plus depreciation for the lab equipment), all to crystallize one structure and characterize its folding behavior.
I don't know if this calculation is too simplistic, just coming up with something.
The problem with structure prediction is not a loss/energy-function problem: even if we had an accurate model of all the forces involved, we'd still not have an accurate protein structure prediction algorithm.
Protein folding is a chaotic process (similar to the three-body problem). There's an enormous number of interactions involved - between different amino acids, the solvent, and more. Numerical computation can't solve chaotic systems exactly, because floating-point numbers have a finite representation, which leads to rounding errors and loss of accuracy.
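As a toy illustration of that sensitivity (the logistic map, nothing protein-specific; the step count and the size of the perturbation are arbitrary):

    # two trajectories of the chaotic logistic map that start 1e-10 apart
    x, y = 0.2, 0.2 + 1e-10
    for step in range(60):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
    print(abs(x - y))  # typically of order 1 by now, despite the tiny initial difference

A rounding error at the 10th decimal place plays the same role as that initial perturbation, which is why, on this view, brute-force numerical integration alone doesn't guarantee the right trajectory.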
Besides, short-range electrostatic and van der Waals interactions are pretty well understood, and before AlphaFold many algorithms (like Rosetta) were already pretty successful at a lot of protein modeling tasks.
Therefore, we need a *practical* way to look at protein structure determination that is akin to AlphaFold2.