Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at this https URL.
Notes: 32 hours of training on 512 TPU v3 chips, with an assumed utilization rate of 0.33.

Hardware-based estimate: 1.23e14 FLOP/chip/sec * 512 TPUs * 32 hours * 3600 sec/hour * 0.33 (assumed utilization) = 2.3940956e+21 FLOP.

From the paper: "We train all models for 125,000 steps unless otherwise specified"; "All the model updates use a batch size of 4096"; "We always limit the maximum input length to 512, and randomly generate input sequences shorter than 512 with a probability of 10%."

Parameter-based estimate: 6 FLOP/parameter/token * 235,000,000 parameters * 512 tokens per sequence * 4096 sequences per batch * 125,000 steps = 3.6962304e+20 FLOP.

The authors of "AI and Memory Wall" (https://github.com/amirgholami/ai_and_memory_wall) estimated the model's training compute as 31,000,000 PFLOP = 3.1e+22 FLOP.
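A minimal Python sketch reproducing the two estimates above; the 0.33 utilization rate and the 6 FLOP/parameter/token rule of thumb are assumptions from these notes, not values reported in the paper:

```python
# Two rough training-compute estimates for ALBERT, reproducing the arithmetic
# in the notes above.

# Hardware-based estimate: 512 TPU v3 chips for 32 hours at an assumed 33% utilization.
peak_flop_per_chip_per_sec = 123e12     # TPU v3 peak throughput per chip
chips = 512
hours = 32
utilization = 0.33                      # assumed
hardware_estimate = peak_flop_per_chip_per_sec * chips * hours * 3600 * utilization
print(f"hardware-based estimate:  {hardware_estimate:.4e} FLOP")   # ~2.3941e+21

# Parameter-based estimate: 6 FLOP per parameter per token (forward + backward pass).
parameters = 235e6                      # ALBERT-xxlarge parameter count
tokens_per_sequence = 512               # maximum input length
sequences_per_batch = 4096
training_steps = 125_000
parameter_estimate = (6 * parameters * tokens_per_sequence
                      * sequences_per_batch * training_steps)
print(f"parameter-based estimate: {parameter_estimate:.4e} FLOP")  # ~3.6962e+20
```

Because ALBERT shares parameters across layers, applying the 6-FLOP rule to the unique parameter count likely undercounts the per-token compute, which may explain part of the gap to the other two estimates.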
Size Notes: The pretraining corpus is the same as for BERT (English Wikipedia and BooksCorpus): "For the pre-training corpus we use the BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words)".
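As a back-of-the-envelope cross-check (not stated in the source), the token budget implied by the training setup above can be compared with the corpus size; the roughly one-token-per-word simplification is an assumption, so the implied number of passes is only indicative:

```python
# Rough dataset-size cross-check, assuming ~1 token per corpus word
# (wordpiece tokenization yields somewhat more tokens per word, and 10% of
# sequences are shorter than 512 tokens).
tokens_per_sequence = 512
sequences_per_batch = 4096
training_steps = 125_000
tokens_processed = tokens_per_sequence * sequences_per_batch * training_steps  # ~2.62e11

corpus_words = (800 + 2500) * 1e6       # BooksCorpus + English Wikipedia, ~3.3B words
print(f"tokens processed during training:  {tokens_processed:.3e}")
print(f"approximate passes over the corpus: {tokens_processed / corpus_words:.0f}")  # ~79
```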