GNNs and chemical fingerprints are the predominant approaches to representing molecules for property prediction. However, in NLP, transformers have become the de facto standard for representation learning thanks to their strong downstream task transfer. In parallel, the software ecosystem around transformers is maturing rapidly, with libraries like HuggingFace and BertViz enabling streamlined training and introspection. In this work, we make one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via our ChemBERTa model. ChemBERTa scales well with pretraining dataset size, offering competitive downstream performance on MoleculeNet and useful attention-based visualization modalities. Our results suggest that transformers offer a promising avenue of future work for molecular representation learning and property prediction. To facilitate these efforts, we release a curated dataset of 77M SMILES from PubChem suitable for large-scale self-supervised pretraining.
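The released checkpoints (linked under Training Code Accessibility below) load through the standard HuggingFace transformers API. The sketch below is illustrative rather than taken from the paper: it assumes the checkpoint behaves like any RoBERTa model and extracts per-layer attention maps for an example SMILES string (aspirin, chosen arbitrarily), which is the kind of output BertViz visualizes.

```python
# Sketch (assumed usage, not from the source): extract attention maps from the
# released ChemBERTa checkpoint for one SMILES string.
from transformers import AutoTokenizer, AutoModel

model_name = "seyonec/ChemBERTa-zinc-base-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin; illustrative example only
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model(**inputs)

# One attention tensor per layer, each of shape (batch, num_heads, seq_len, seq_len);
# these are the maps that tools like BertViz render.
print(len(outputs.attentions), outputs.attentions[0].shape)
```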
FLOPs: 8,470,000,000,000,000,000 (≈ 8.47e18)
Notes:
Parameter-based estimate: 6 FLOP / parameter / token * 125 * 10^6 parameters * 64 sequences per batch [assumption] * 512 tokens per sequence [upper bound] * 450,000 steps = 1.10592e19 FLOP
Hardware-based estimate: 1.25e14 FLOP / GPU / sec [V100 fp16 tensor-core peak; the V100 does not support bf16] * 1 GPU * 48 hours * 3,600 sec / hour * 0.3 [assumed utilization] = 6.48e18 FLOP
Geometric mean: sqrt(1.10592e19 * 6.48e18) ≈ 8.4654e18 FLOP
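A minimal script reproducing the arithmetic above; every input is one of the stated assumptions (batch size, sequence length, utilization), not a measured value.

```python
# Sketch reproducing the two FLOP estimates and their geometric mean.
params = 125e6               # parameters
tokens_per_step = 64 * 512   # batch size [assumption] * max sequence length [upper bound]
steps = 450_000

flop_from_params = 6 * params * tokens_per_step * steps   # 6 FLOP/param/token rule
# ≈ 1.10592e19 FLOP

peak_flops = 125e12          # V100 fp16 tensor-core peak, FLOP/s
utilization = 0.3            # assumed
seconds = 1 * 48 * 3600      # 1 GPU for 48 hours
flop_from_hardware = peak_flops * utilization * seconds
# ≈ 6.48e18 FLOP

geometric_mean = (flop_from_params * flop_from_hardware) ** 0.5
print(f"{flop_from_params:.4e}  {flop_from_hardware:.4e}  {geometric_mean:.4e}")
# ≈ 1.1059e19  6.4800e18  8.4654e18
```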
Training Code Accessibility:
https://huggingface.co/seyonec/ChemBERTa-zinc-base-v1
https://huggingface.co/seyonec/PubChem10M_SMILES_BPE_450k
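Both checkpoints were pretrained with RoBERTa's masked-language-modelling objective, so a quick way to exercise them is the fill-mask pipeline. The snippet below is an assumed usage sketch (the SMILES string is again an arbitrary aspirin example), contingent on the checkpoint shipping its MLM head.

```python
# Sketch (assumed usage): query the masked-language-modelling head of a released checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="seyonec/PubChem10M_SMILES_BPE_450k")
# RoBERTa-style tokenizers use "<mask>" as the mask token.
for pred in fill_mask("CC(=O)Oc1ccccc1<mask>(=O)O"):
    print(pred["token_str"], round(pred["score"], 3))
```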
Hardware: NVIDIA V100
Hardware Quantity: 1
Size Notes:
10M unique SMILES from PubChem; maximum sequence length of 512 tokens; 450K steps.
"We trained for 10 epochs on all PubChem subsets except for the 10M subset, on which we trained for 3 epochs to avoid observed overfitting."
Assuming (!) a batch size of 64: 64 * 512 * 450,000 = 14,745,600,000 total tokens -> 4,915,200,000 tokens per epoch -> ~500 tokens per SMILES
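The same token arithmetic, restated as a short script; the batch size of 64 is an assumption and 512 tokens per sequence is an upper bound, so the per-SMILES figure is a rough ceiling.

```python
# Sketch reproducing the token-count arithmetic above.
batch_size, seq_len, steps, epochs = 64, 512, 450_000, 3
total_tokens = batch_size * seq_len * steps      # 14,745,600,000
tokens_per_epoch = total_tokens // epochs        # 4,915,200,000
tokens_per_smiles = tokens_per_epoch / 10e6      # ≈ 492, i.e. ~500 tokens per SMILES
print(total_tokens, tokens_per_epoch, round(tokens_per_smiles))
```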
Parameters: 125,000,000
Notes: "Our implementation of RoBERTa uses 12 attention heads and 6 layers, resulting in 72 distinct attention mechanisms" -> base model is RoBERTa Base