Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids. The LMs were trained on the Summit supercomputer using 5616 GPUs and a TPU Pod with up to 1024 cores. Dimensionality reduction revealed that the raw protein LM-embeddings from unlabeled data captured some biophysical features of protein sequences. We validated the advantage of using the embeddings as exclusive input for several subsequent tasks. The first was a per-residue prediction of protein secondary structure (3-state accuracy Q3=81%-87%); the second were per-protein predictions of protein sub-cellular localization (ten-state accuracy: Q10=81%) and membrane vs. water-soluble proteins (2-state accuracy Q2=91%). For the per-residue predictions, the transfer of the most informative embeddings (ProtT5) for the first time outperformed the state-of-the-art without using evolutionary information, thereby bypassing expensive database searches. Taken together, the results implied that protein LMs learned some of the grammar of the language of life. To facilitate future work, we released our models at https://github.com/agemagician/ProtTrans.
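A minimal sketch of how the released embeddings can be extracted, assuming the Hugging Face transformers API and the Rostlab/prot_bert_bfd checkpoint cited in the notes below (preprocessing follows that model card: space-separated residues, rare amino acids mapped to X); the paper's own feature-extraction pipeline may differ.

```python
# Sketch: per-residue embeddings from ProtBert-BFD via Hugging Face transformers.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert_bfd")
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example protein sequence
# One token per residue, separated by spaces; map uncommon
# amino acids (U, Z, O, B) to the unknown residue X.
spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, 1024); positions 1..L are
# the residue embeddings, position 0 is [CLS] and the final one is [SEP].
residue_embeddings = outputs.last_hidden_state[0, 1:-1]
protein_embedding = residue_embeddings.mean(dim=0)  # simple per-protein pooling
print(residue_embeddings.shape)  # torch.Size([33, 1024])
```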
Notes: FLOP = 420M * 6 * (800k*512*32k + 200k*2048*6k). Training ran for 1M steps total, split into two phases: (1) 800k steps at sequence length 512 (batch size 32k) and (2) 200k steps at sequence length 2048 (batch size 6k), on a single TPU Pod V3-1024 (64 nodes, 1024 TPU cores). Info from the paper and https://huggingface.co/Rostlab/prot_bert_bfd.
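The same arithmetic worked through, assuming the standard 6 * parameters * tokens training-FLOP approximation and reading 32k/6k as 32768/6144 sequences per batch (as the size notes below do):

```python
# FLOP estimate for ProtBert-BFD using the common 6 * N * D approximation
# (N = parameters, D = training tokens seen).
params = 420e6

phase1_tokens = 800_000 * 512 * 32_768   # 800k steps, seq len 512, batch 32k
phase2_tokens = 200_000 * 2048 * 6_144   # 200k steps, seq len 2048, batch 6k
tokens = phase1_tokens + phase2_tokens   # ~1.59e13 amino acids

flop = 6 * params * tokens
print(f"tokens: {tokens:.3g}")  # ~1.59e+13
print(f"FLOP:   {flop:.3g}")    # ~4.02e+22
```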
Size Notes: "ProtBERT-BFD (420M parameters) saw around 27B proteins during pre-training." Table 1: BFD has 2122M proteins and 393B amino acids (572 GB), which suggests an average protein length of ~185 amino acids. That implies 27B * 185 = ~5T amino acids seen in training. However, Table 2 suggests the number of tokens (amino acids) seen in training was (512*32768*800k) + (2048*6144*200k) = 15.9T. Geometric mean of the two estimates = 8.9T.
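The reconciliation as a short script; all figures come from the notes above, and the geometric mean is simply a compromise between the two conflicting token counts:

```python
import math

# Estimate 1: from reported proteins seen and the average BFD protein length.
bfd_proteins = 2122e6          # Table 1: 2122M proteins in BFD
bfd_amino_acids = 393e9        # Table 1: 393B amino acids
avg_len = bfd_amino_acids / bfd_proteins  # ~185 amino acids per protein
tokens_from_proteins = 27e9 * avg_len     # ~5.0e12 (27B proteins seen)

# Estimate 2: from the training schedule (seq len * batch size * steps).
tokens_from_schedule = 512 * 32_768 * 800_000 + 2048 * 6_144 * 200_000  # ~1.59e13

# The two disagree by ~3x; take the geometric mean as a compromise.
geo_mean = math.sqrt(tokens_from_proteins * tokens_from_schedule)
print(f"{tokens_from_proteins:.2g} vs {tokens_from_schedule:.3g} -> {geo_mean:.2g}")
# 5e+12 vs 1.59e+13 -> 8.9e+12
```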
Notes: Table 2