"Recent advances in machine learning have leveraged evolutionary information in multiple sequence alignments to predict protein structure. We demonstrate direct inference of full atomic-level protein structure from primary sequence using a large language model. As language models of protein sequences are scaled up to 15 billion parameters, an atomic-resolution picture of protein structure emerges in the learned representations. This results in an order-of-magnitude acceleration of high-resolution structure prediction, which enables large-scale structural characterization of metagenomic proteins. We apply this capability to construct the ESM Metagenomic Atlas by predicting structures for >617 million metagenomic protein sequences, including >225 million that are predicted with high confidence, which gives a view into the vast breadth and diversity of natural proteins."
Notes: "All language models were trained for 500K updates, except the 15B language model" "All models used 2 million tokens as batch size except the 15B model" [Supplementary Materials] Hence: 1000B training tokens (500k steps, 2M tokens/batch) Estimate: 35M*2*1000B + 35M*4*1000B
Size Notes: Section A.1.1: "This allowed ESM-2 models to train on over 60M protein sequences."
Average protein sequence is ~200 tokens, per https://epoch.ai/blog/biological-sequence-models-in-the-context-of-the-ai-directives#fn:4
Dataset size: 60M × 200 = 12B tokens.
Epochs: 500K steps at 2M-token batch size → 500K × 2M / 12B ≈ 83.3.
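A quick sketch of the dataset-size and epoch arithmetic above; the ~200 tokens/sequence average is the assumption taken from the linked Epoch blog post, not a figure from the paper:

```python
# Dataset size and number of passes over the data for ESM-2 (non-15B models).
sequences = 60e6            # "over 60M protein sequences" (Section A.1.1)
avg_tokens_per_seq = 200    # assumed average protein sequence length in tokens
dataset_tokens = sequences * avg_tokens_per_seq   # = 1.2e10 (12B tokens)

steps = 500_000             # 500K updates
batch_tokens = 2e6          # 2M tokens per batch
tokens_seen = steps * batch_tokens                # = 1e12 (1000B tokens)

epochs = tokens_seen / dataset_tokens             # ~83.3 passes over the data
print(f"Dataset: {dataset_tokens:.2e} tokens, epochs: {epochs:.1f}")
```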
Notes: Parameter count is stated in the model name.