As we gain the ability to see into the complex coding of proteins, we need to get it right. Next, hopefully, comes the ability to identify causal effects, and then the ability to construct proteins ourselves.
If medicine gains broad capacity to create bespoke proteins, our world becomes both weird and wonderful.
Then they use the model to predict more structures.
Although we don't know if they are correct, these structures are the best (or the least bad) we have for now.
They can't possibly know that. What they know is that their guesses are very significantly better than the previous best, and that they could make them for the widest range of proteins in history. Verifying the guess for a single protein (of the hundreds of millions in the db) can be an expensive project of up to two years. Inevitably some will show discrepancies. These will be fed back into training, giving us a new generation of even better guesses at some point in the future. That's what I believe to be standard operating practice.
A more important question: is today's db good enough to enable a breakthrough in something useful, e.g. pharma or agriculture? I have no intuition here, but the reporting claims it will be.
https://en.m.wikipedia.org/wiki/Root-mean-square_deviation_o...
The lower the RMSD between two structures, the better (up to some limit).
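For intuition, here's a minimal sketch of how RMSD is computed between two structures, using numpy and the standard Kabsch superposition; the coordinates below are random stand-ins, not real protein data. The rotation step matters: RMSD is only meaningful after the two structures have been optimally superposed.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition.

    Uses the Kabsch algorithm: center both sets, find the optimal
    rotation via SVD, then measure the residual deviation.
    """
    # Center both coordinate sets on their centroids
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)

    # Optimal rotation via SVD of the covariance matrix
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))  # guard against reflections
    R = V @ np.diag([1.0, 1.0, d]) @ Wt

    # Root-mean-square deviation of the superposed coordinates
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Toy example: a "structure" vs. a slightly perturbed copy of itself
rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 3))
perturbed = coords + rng.normal(scale=0.5, size=coords.shape)
print(f"RMSD: {kabsch_rmsd(coords, perturbed):.2f} Å")
```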
Proteins don't exist as crystals in a vacuum; that's just how humans solved the structures. Many of the non-globular proteins were solved using sequence manipulation or other tricks to get them to crystallize. Virtually all proteins function by having their structures interact dynamically with their environment.
Google is simply supplying a list of what it presumes to be low-RMSD models produced by their tooling, for some sequences they found, and the tooling itself is trained mostly on data from X-ray studies that may or may not have errors. Heck, we've barely even sequenced most of the DNA on this planet, and with mechanisms like alternative splicing, the transcriptome (and hence the proteome) has to be many orders of magnitude larger than what we have knowledge of.
But sure, Google has solved the structure of the "protein universe", whatever that is.
DeepMind crushes everyone else at this competition.
But you also ignore where we're at in the standard cycle:
https://phdcomics.com/comics/archive_print.php?comicid=1174
;)
---------------
Yes, the idea of a 'protein universe' seems like it should at least encompass 'fold space'.
For example, WR Taylor: https://pubmed.ncbi.nlm.nih.gov/11948354/
I think the rough estimate was that there are around 1,000 folds, depending on how fine-grained you want to go.
Absolutely agree, though, that a lot of proteins are hard to crystallize (I understand) because they are transmembrane, or just because of the difficulty of getting the right parameters for the experiment.
Not to diminish the monumental accomplishment of applying modern machine learning techniques to outpace structure determination in the lab, but other famous labs have already moved to ML predictions and are competitive with DeepMind now.
For example, transmembrane proteins are grossly under-represented among experimentally derived structures, so we would expect whatever your algorithm is "solving" to have a much higher degree of error than for globular proteins, along with likely artifacts from learning on the much more abundant globular proteins.
edit: As an example, "Sampling the conformational landscapes of transporters and receptors with AlphaFold2". AF2 was able to reproduce the alternative conformations of GPCRs, but only with non-default settings. With default settings there is clear evidence of overfitting.
> Overall, these results demonstrate that highly accurate models adopting both conformations of all eight protein targets could be predicted with AF2 by using MSAs that are far shallower than the default. However, because the optimal MSA depth and choice of templates varied for each protein, they also argue against a one-size-fits-all approach for conformational sampling.
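For illustration, here's a rough sketch of the MSA-subsampling idea at the file level, assuming a standard A3M alignment with the query sequence first; the file names and depth are made up, and the paper drives this through AF2's own MSA-depth settings rather than by editing files directly.

```python
import random

def subsample_msa(a3m_path, out_path, depth, seed=0):
    """Randomly subsample an A3M alignment to a fixed depth.

    Keeps the first (query) sequence and draws `depth - 1` of the
    remaining sequences at random, mimicking the shallow-MSA trick
    used to coax AF2 into sampling alternative conformations.
    """
    with open(a3m_path) as fh:
        text = fh.read()

    # Each A3M entry is a ">header" line followed by its sequence
    entries = [">" + e for e in text.split(">") if e.strip()]
    query, rest = entries[0], entries[1:]

    random.seed(seed)
    picked = random.sample(rest, min(depth - 1, len(rest)))

    with open(out_path, "w") as fh:
        fh.writelines([query] + picked)

# Hypothetical usage: generate several shallow MSAs, run AF2 on each,
# then compare the resulting models for alternative conformations.
for i in range(5):
    subsample_msa("target.a3m", f"target_depth16_{i}.a3m", depth=16, seed=i)
```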
It seems obvious this was going to happen, because the code is open source: https://github.com/deepmind/alphafold
That's great! AlphaFold DB has made 200 million structure predictions available for everyone. How many structure predictions have other famous labs made available for everyone?
Google has the advantage of the biggest guns here: the fastest TPUs with the most memory in the biggest clusters, so running inference with a massive number of protein sequences is much easier for them.
At a guess, the core packing in non-globular proteins might be different? The distribution of secondary structure might also vary between classes. Might be worth someone studying how much structural constraints depend on fold (if they haven't already); see the sketch below.
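As a starting point for that kind of study, here's a hedged sketch of comparing secondary-structure composition between classes. The DSSP-style strings below are toy stand-ins; real input would come from running DSSP over representative globular vs. membrane structures.

```python
from collections import Counter

def ss_fractions(dssp_string):
    """Fractions of helix (H/G/I), strand (E/B), and other states
    in a DSSP-style secondary-structure string."""
    counts = Counter(dssp_string)
    n = len(dssp_string)
    helix = sum(counts[c] for c in "HGI") / n
    strand = sum(counts[c] for c in "EB") / n
    return {"helix": helix, "strand": strand, "other": 1 - helix - strand}

# Toy per-class samples; membrane proteins tend to be helix-rich
classes = {
    "globular": ["HHHHEEEECCCHHHH", "EEEECCEEEEHHHC"],
    "membrane": ["HHHHHHHHHHHHCCC", "HHHHHHHCCHHHHHH"],
}
for name, strings in classes.items():
    avg = {k: sum(ss_fractions(s)[k] for s in strings) / len(strings)
           for k in ("helix", "strand", "other")}
    print(name, {k: round(v, 2) for k, v in avg.items()})
```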
Google didn't solve the structure of the protein universe (thank you for saying that). But the idea of a protein structure universe is fairly simple: it's a latent space whose orthogonal directions let you move directly along what are presumably the rules of protein structure. It would encompass all the "rules" in a fairly compact and elegant way. Presumably, superfamilies would automagically cluster in this space, while proteins in different superfamilies would not.
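To make the idea concrete, here's a hypothetical sketch: given per-protein embedding vectors from some structure encoder (no such encoder is specified here, so the embeddings below are random stand-ins), you would project onto a few orthogonal directions and expect superfamilies to separate.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical: rows are per-protein embedding vectors from some
# structure encoder; labels are superfamily assignments.
rng = np.random.default_rng(1)
emb_a = rng.normal(loc=0.0, size=(50, 128))   # "superfamily A"
emb_b = rng.normal(loc=3.0, size=(50, 128))   # "superfamily B"
embeddings = np.vstack([emb_a, emb_b])
labels = np.array(["A"] * 50 + ["B"] * 50)

# Project onto a few orthogonal directions and check that the
# superfamilies separate; the "universe" idea is that such directions
# would correspond to interpretable rules of protein structure.
proj = PCA(n_components=2).fit_transform(embeddings)
for lab in ("A", "B"):
    centroid = proj[labels == lab].mean(axis=0)
    print(lab, np.round(centroid, 2))
```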