Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.
Notes: Section 8.8: "To train NLLB-200, a cumulative of 51968 GPU hours of computation was performed on hardware of type A100-SXM-80GB." See also Table 48.
Section 8.2.4 states they use FP16. The NVIDIA A100 datasheet states 312 TFLOPS peak for FP16: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-nvidia-us-2188504-web.pdf
Assuming 0.3 utilization: 312e12 * 3600 * 51968 * 0.3 ≈ 1.75e22 FLOP.
Also: "Our final model is a Transformer encoder-decoder model in which we replace the Feed Forward Network (FFN) layer in every 4th Transformer block with a Sparsely Gated Mixture of Experts layer containing 128 experts. We use model dimension 2048, FFN dimension 8192, 16 attention heads, 24 encoder layers and 24 decoder layers. We use Pre-LayerNorm (Xiong et al., 2020) as described in Section 6.1.1. We share the embedding weights of the encoder input embedding, decoder input embedding and decoder output embedding layers. We use an overall dropout of 0.3, attention dropout 0.1 and EOM with peom=0.2. The model has a total of 54.5B parameters and FLOPs similar to that of a 3.3B dense model."
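A minimal Python sketch of the compute estimate above; the 0.3 utilization rate is an assumption, everything else comes from the quoted sections and the NVIDIA datasheet:

# Hardware-based training compute estimate for NLLB-200.
peak_flop_per_s = 312e12      # A100 FP16 peak, per NVIDIA datasheet
gpu_hours = 51968             # Section 8.8
utilization = 0.3             # assumed utilization rate
total_flop = peak_flop_per_s * 3600 * gpu_hours * utilization
print(f"{total_flop:.2e} FLOP")  # -> 1.75e+22 FLOP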
Size Notes: [WORDS] Section 8.2.2: "As we prepare to train on the final 202 language dataset comprising of over 18B sentence pairs and 2440 language directions". Estimate: 18B sentence pairs * 20 words/sentence (assumed average) ≈ 360B words.
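The same estimate as a one-line sketch; the 20 words/sentence average is an assumption, not a figure from the paper:

sentence_pairs = 18e9          # Section 8.2.2
words_per_sentence = 20        # assumed average sentence length
print(f"{sentence_pairs * words_per_sentence:.1e} words")  # -> 3.6e+11 (~360B words)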
Notes: Section 8.2.4: "The model has a total of 54.5B parameters and FLOPs similar to that of a 3.3B dense model"
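As a sanity check, the 54.5B figure can be roughly reproduced from the architecture quoted in the notes above. This is a sketch, not the paper's official tally: the ~256k vocabulary size and the omission of biases and LayerNorm parameters are assumptions.

# Rough parameter count from the quoted NLLB-200 architecture.
d, ffn, layers = 2048, 8192, 24               # model dim, FFN dim, layers per stack
experts, moe_every = 128, 4                   # every 4th block's FFN is an MoE layer
vocab = 256_000                               # assumed; NLLB uses a ~256k SentencePiece vocab

attn = 4 * d * d                              # Q, K, V, output projections
dense_ffn = 2 * d * ffn                       # two FFN weight matrices
moe_ffn = experts * dense_ffn + d * experts   # 128 experts plus a router

moe_layers = layers // moe_every              # 6 MoE layers per stack
dense_layers = layers - moe_layers            # 18 dense FFN layers per stack

encoder = layers * attn + dense_layers * dense_ffn + moe_layers * moe_ffn
decoder = layers * 2 * attn + dense_layers * dense_ffn + moe_layers * moe_ffn  # self- + cross-attention
embeddings = vocab * d                        # shared input/output embedding weights

total = encoder + decoder + embeddings
print(f"{total / 1e9:.1f}B parameters")       # -> ~54.5B, matching Section 8.2.4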