Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find they have limitations similar to comparably sized GPT-3 models.
Notes: "The XGLM 7.5B model was trained on 256 A100 GPUs for about 3 weeks, at a speed of 311.6k words per second" 256 * 312 teraFLOP/s * 21 * 24 * 3600 * 0.3 utilization assumption ~= 4.3e22 also, it was trained for 500B tokens. Using Compute = 6ND, we have 6 * 500B * 7.5B = 2.25e22 311k tokens per second * 7.5B params * 6 is 1.35e16 FLOP/s. divide that by 312 teraFLOP/s, which is A100 peak compute, gets 43, suggesting low utilization (17%) of the 256-GPU cluster, or somewhat higher if there's more than one token per word. So I'll use the 6ND number.
Size Notes: Training data. Our models are trained on a static multilingual corpus extracted from CommonCrawl, with English text comprising 32.6% of the total number of tokens, corresponding to 163B tokens. This implies 163B / 0.326 = 500B total training tokens.
Note that this dataset is sampled from the much larger CC100-XL corpus, outlined in Appendix F and here: https://huggingface.co/facebook/xglm-7.5B#training-data-statistics. The Hugging Face table sums to 1.64T tokens, while the Data Card in the appendix claims 1.9T tokens.
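A quick sanity check of the token arithmetic, using the numbers quoted above:

```python
# Total training tokens implied by the reported English share of the corpus.
english_tokens = 163e9      # English portion of the training corpus
english_fraction = 0.326    # English share: 32.6% of all training tokens

total_training_tokens = english_tokens / english_fraction
print(f"implied training tokens: {total_training_tokens:.3g}")  # 5e+11, i.e. 500B
```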
Notes: "Our largest model with 7.5 billion parameters sets new state of the art"