Recently, deep learning models, initially developed in the field of natural language processing (NLP), were successfully applied to analyze protein sequences. A major drawback of these models is their size in terms of the number of parameters that need to be fitted and the amount of computational resources they require. Recently, 'distilled' models using the concept of student and teacher networks have been widely used in NLP. Here, we adapted this concept to the problem of protein sequence analysis by developing DistilProtBert, a distilled version of the successful ProtBert model. Implementing this approach, we reduced the size of the network and the running time by 50%, and the computational resources needed for pretraining by 98%, relative to the ProtBert model. Using two published tasks, we showed that the performance of the distilled model approaches that of the full model. We next tested the ability of DistilProtBert to distinguish between real and random protein sequences. The task is highly challenging if the composition is maintained at the level of singlet, doublet and triplet amino acids; indeed, traditional machine-learning algorithms have difficulty with this task. Here, we show that DistilProtBert performs very well on singlet-, doublet- and even triplet-shuffled versions of the human proteome, with AUCs of 0.92, 0.91 and 0.87, respectively. Finally, we suggest that by examining the small number of false-positive classifications (i.e. shuffled sequences classified as proteins by DistilProtBert), we may be able to identify de novo potential natural-like proteins based on random shuffling of amino acid sequences.
Notes: "Pretraining was done on five v100 32-GB Nvidia GPUs from a DGX cluster with a local batch size of 16 examples... The model was trained for three epochs using mixed precision and dynamic padding. Every epoch run took approximately 4 days, resulting in total pretraining time of 12 days" 5 * 125 teraFLOP/s * 12 * 24 * 3600 * 0.3 (assumed utilization) = 1.9e20
Size Notes: 43,000,000 sequences × 300 amino acids per sequence = 12,900,000,000 (1.29e10) data points.
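A minimal sketch of the dataset-size arithmetic, assuming the 300-amino-acid figure is the per-sequence length used in the note.

```python
# Sketch of the dataset-size estimate above (assumes 300 amino acids per
# sequence, as stated in the note).
NUM_SEQUENCES = 43_000_000
RESIDUES_PER_SEQUENCE = 300

data_points = NUM_SEQUENCES * RESIDUES_PER_SEQUENCE
print(f"{data_points:,} ({data_points:.2e}) data points")  # 12,900,000,000 (1.29e10)
```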
Notes: "we were able to reduce the number of DistilProtBert parameters by almost half, to 230 M"