"Recent advances in machine learning have leveraged evolutionary information in multiple sequence alignments to predict protein structure. We demonstrate direct inference of full atomic-level protein structure from primary sequence using a large language model. As language models of protein sequences are scaled up to 15 billion parameters, an atomic-resolution picture of protein structure emerges in the learned representations. This results in an order-of-magnitude acceleration of high-resolution structure prediction, which enables large-scale structural characterization of metagenomic proteins. We apply this capability to construct the ESM Metagenomic Atlas by predicting structures for >617 million metagenomic protein sequences, including >225 million that are predicted with high confidence, which gives a view into the vast breadth and diversity of natural proteins."
Compute Notes: from the xTrimoPGLM paper, Table 9 (https://www.biorxiv.org/content/10.1101/2023.07.05.547496v1): 1.1e21 FLOP
Size Notes: Section A.1.1: "This allowed ESM-2 models to train on over 60M protein sequences." An average protein sequence is ~200 tokens (per https://epoch.ai/blog/biological-sequence-models-in-the-context-of-the-ai-directives#fn:4), so the dataset is 60M * 200 = 12B tokens.
Epochs: trained for 500k steps at a 2M-token batch size, so 500k * 2M / 12B ≈ 83.3 epochs.
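A minimal Python sketch of the arithmetic above. The inputs (60M sequences, ~200 tokens per sequence, 500k steps, 2M-token batches) are the figures cited in these notes; the variable names are illustrative only.

```python
# Back-of-the-envelope check of the dataset-size and epoch figures above.
sequences = 60_000_000        # ESM-2 trained on over 60M sequences (Section A.1.1)
tokens_per_sequence = 200     # assumed average protein length in tokens (Epoch AI footnote)
dataset_tokens = sequences * tokens_per_sequence
print(f"Dataset size: {dataset_tokens / 1e9:.0f}B tokens")  # -> 12B tokens

steps = 500_000               # training steps
batch_tokens = 2_000_000      # tokens per batch
tokens_seen = steps * batch_tokens
epochs = tokens_seen / dataset_tokens
print(f"Epochs: {epochs:.1f}")  # -> 83.3
```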
Parameter Notes: the parameter count (15B) is stated in the model name.