Scaling language models with more data, compute and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of language models named GLaM (Generalist Language Model), which uses a sparsely activated mixture-of-experts architecture to scale the model capacity while also incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.
Notes: The network activates 96.6 billion parameters per token and was trained on 600B tokens, so the 6NC estimate is 6 * 600B * 96.6B = 3.478e23 FLOP. Digitizing Figure 4(d) indicates 139.67 TPU-years of training; at a peak of 2.75e14 FLOP/s per chip and an assumed 30% utilization, the hardware-based estimate is 2.75e14 * 139.67 * 365.25 * 24 * 3600 * 0.3 = 3.636e23 FLOP. Since these are close, we use the 6NC estimate and derive hardware utilization from the training time information (roughly 29%). They later report a measured power usage of 326 W per chip, which could perhaps also be used to estimate utilization.
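A minimal sketch of the two compute estimates above. The per-chip peak of 2.75e14 FLOP/s and the 30% utilization are the assumptions used in the note; variable names are illustrative.

```python
# Reproduce the 6NC and hardware-time training-compute estimates for GLaM.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

active_params = 96.6e9          # parameters activated per token
training_tokens = 600e9         # tokens seen during training
peak_flops_per_chip = 2.75e14   # assumed per-chip peak throughput, FLOP/s
tpu_years = 139.67              # digitized from Figure 4(d)
assumed_utilization = 0.3       # assumption used in the hardware estimate

# Estimate 1: 6NC rule of thumb (6 FLOP per active parameter per token).
compute_6nc = 6 * active_params * training_tokens            # ~3.478e23 FLOP

# Estimate 2: chip-time x peak throughput x assumed utilization.
chip_seconds = tpu_years * SECONDS_PER_YEAR
compute_hw = peak_flops_per_chip * chip_seconds * assumed_utilization  # ~3.636e23 FLOP

# Utilization implied by taking the 6NC number and the digitized chip-time.
implied_utilization = compute_6nc / (peak_flops_per_chip * chip_seconds)

print(f"6NC estimate:        {compute_6nc:.3e} FLOP")
print(f"Hardware estimate:   {compute_hw:.3e} FLOP")
print(f"Implied utilization: {implied_utilization:.1%}")     # ~28.7%
```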
Size Notes: The dataset is made of 1.6 trillion tokens, but later in the paper they say they only train the largest model for 600B tokens. 600B tokens * 0.75 words/token ≈ 450B words. "The complete GLaM training using 600B tokens consumes only 456 MWh and emits 40.2 net tCO2e."
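A quick sketch of the size bookkeeping above; the 0.75 words-per-token factor is a rough rule of thumb rather than a figure from the paper.

```python
# Convert the training-token count to words and to a fraction of the full corpus.
corpus_tokens = 1.6e12      # full GLaM training corpus, in tokens
training_tokens = 600e9     # tokens actually seen by the largest model
words_per_token = 0.75      # assumed rough conversion (~1.33 tokens per word)

training_words = training_tokens * words_per_token     # ~4.5e11 words
fraction_of_corpus = training_tokens / corpus_tokens   # 0.375

print(f"Training words seen: {training_words:.2e}")      # ~4.50e+11
print(f"Fraction of corpus:  {fraction_of_corpus:.1%}")  # 37.5%
```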
Notes: 1.2 trillion parameters