Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
Notes: Trained for 500k steps at a batch size of 2048 with a sequence length of 512, i.e. 500,000 * 2048 * 512 = 524,288,000,000 tokens seen. Training compute estimated with the 6 * params * tokens approximation: 6 * 10,700,000,000 * 524,288,000,000 ≈ 3.366e22 FLOPs.
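The arithmetic above can be reproduced with a short script. The 6 * N * D rule is the common dense-transformer FLOP approximation, not a figure reported in the paper, so the result is an estimate only.

```python
# Rough training-compute check for XLM-R XXL (numbers from the notes above).
# Uses the common 6 * params * tokens FLOP approximation; exact FLOPs depend
# on architecture details not captured here.

steps = 500_000          # training steps
batch_size = 2_048       # sequences per batch
seq_len = 512            # tokens per sequence
params = 10_700_000_000  # XLM-R XXL parameter count

tokens_seen = steps * batch_size * seq_len
train_flops = 6 * params * tokens_seen

print(f"tokens seen:  {tokens_seen:,}")      # 524,288,000,000
print(f"approx FLOPs: {train_flops:.3e}")    # ~3.366e+22
```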
Size Notes: "We pretrain the models on the CC100 dataset, which corresponds to 167B tokens in 100 languages."
Notes: Section 2.1: "...XLM-R XXL (L = 48, H = 4096, A = 32, 10.7B params)"
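As a sanity check, the Section 2.1 hyperparameters roughly reproduce the reported 10.7B figure. This is a sketch under assumptions: the ~250k SentencePiece vocabulary from XLM-R and the standard 12 * L * H^2 estimate for a transformer encoder body (attention plus a 4H feed-forward block), ignoring biases and layer norms; neither is stated in this excerpt.

```python
# Rough parameter-count estimate for XLM-R XXL from the Section 2.1 hyperparameters.
# Assumption: ~250k vocabulary (XLM-R SentencePiece) and the 12 * L * H^2 rule
# for the encoder body; biases and layer norms are ignored.

L, H = 48, 4096          # layers, hidden size (A = 32 heads does not change the count)
vocab_size = 250_002     # assumed XLM-R vocabulary size

body_params = 12 * L * H ** 2        # ~9.66B (attention + feed-forward weights)
embedding_params = vocab_size * H    # ~1.02B (token embedding matrix)
total = body_params + embedding_params

print(f"estimated params: {total / 1e9:.2f}B")  # ~10.69B, close to the reported 10.7B
```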