Pretrained protein sequence language models have been shown to improve the performance of many prediction tasks and are now routinely integrated into bioinformatics tools. However, these models largely rely on the transformer architecture, which scales quadratically with sequence length in both run-time and memory. Therefore, state-of-the-art models have limitations on sequence length. To address this limitation, we investigated whether convolutional neural network (CNN) architectures, which scale linearly with sequence length, could be as effective as transformers in protein language models. With masked language model pretraining, CNNs are competitive with, and occasionally superior to, transformers across downstream applications while maintaining strong performance on sequences longer than those allowed in the current state-of-the-art transformer models. Our work suggests that computational efficiency can be improved without sacrificing performance, simply by using a CNN architecture instead of a transformer, and emphasizes the importance of disentangling pretraining task and model architecture. A record of this paper’s transparent peer review process is included in the supplemental information.
Notes:
1. Hardware setup: 128 NVIDIA V100 GPUs (1.25e14 FLOP/s per GPU)
2. Training duration: 56 days (directly provided), converted to seconds: 56 × 24 × 3600 = 4.8384e6 seconds
3. Utilization rate: 40%
4. Hardware-based estimate: 1.25e14 FLOP/s × 128 GPUs × 4.8384e6 seconds × 0.4 = 3.1e22 FLOP
5. Parameter-token estimate: 6 FLOP/parameter/token × 640e6 parameters × 11,000 tokens per GPU per batch × 128 GPUs × 620,000 updates = 3.3521664e21 FLOP
6. Final estimate (geometric mean of the two): sqrt(3.1e22 × 3.3521664e21) ≈ 1.0193977e22 FLOP
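The two estimates and their geometric mean can be checked with a short script; all figures are taken from the notes above, and the variable names are illustrative only.

```python
import math

# Estimate 1: hardware-based (peak FLOP/s x GPUs x seconds x utilization)
peak_flops = 1.25e14          # NVIDIA V100, FLOP/s per GPU (from the notes)
n_gpus = 128
seconds = 56 * 24 * 3600      # 56 days of training
utilization = 0.4
hw_estimate = peak_flops * n_gpus * seconds * utilization   # ~3.1e22 FLOP

# Estimate 2: parameter-token based (6 FLOP per parameter per training token)
params = 640e6                # parameter count used in the notes
tokens_per_gpu_batch = 11_000
updates = 620_000
pt_estimate = 6 * params * tokens_per_gpu_batch * n_gpus * updates  # ~3.35e21 FLOP

# Final figure: geometric mean of the two estimates
final = math.sqrt(hw_estimate * pt_estimate)                # ~1.02e22 FLOP
```

Using the unrounded hardware estimate (3.096576e22 rather than 3.1e22) shifts the geometric mean only in the third significant digit.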
Size Notes:
- Total tokens = 41.5 × 10^6 sequences × 500 residues = 2.075 × 10^10 ≈ 2.1 × 10^10 tokens
- Number of sequences: 41.5 million
- Average sequence length: 500 residues
- Batch size: 11,000 tokens per GPU per batch
- Training updates: 620,000
Parameter Notes: 643M parameters, from Table S1.