The recent “Text-to-Text Transfer Transformer” (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent “accidental translation” in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.
Notes: "We pre-train our mT5 model variants for 1 million steps on batches of 1024 length-1024 input sequences, corresponding to roughly 1 trillion input tokens total." 1 million steps * 1024 batchsize * 1024 length * 13 billion params * 6 = 8.2e22 Ignores fine-tuning compute; this is likely a small fraction of pre-training compute.
Size Notes: The model was trained on roughly 1 trillion tokens, a subset of the full mC4 corpus, which contains data "totaling 6.6B pages and 6.3T tokens". The distribution by language is given in Appendix A of the paper.
Notes: 13 billion parameters (mT5-XXL, the largest variant).