Advances in English language representation have enabled a more sample-efficient pre-training task through Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA). Instead of training a model to recover masked tokens, ELECTRA trains a discriminator model to distinguish true input tokens from corrupted tokens that were replaced by a generator network. In contrast, current Arabic language representation approaches rely only on pretraining via masked language modeling. In this paper, we develop an Arabic language representation model, which we name AraELECTRA. Our model is pretrained using the replaced token detection objective on large Arabic text corpora. We evaluate our model on multiple Arabic NLP tasks, including reading comprehension, sentiment analysis, and named-entity recognition, and we show that AraELECTRA outperforms current state-of-the-art Arabic language representation models, given the same pretraining data and an even smaller model size.
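As an illustration of the replaced token detection setup described above, the Python sketch below builds a corrupted input sequence and the per-token real/replaced labels that a discriminator would be trained on. The toy random "generator", the 15% corruption rate, and the example vocabulary are assumptions for demonstration only; in ELECTRA the generator is a small masked language model trained jointly with the discriminator.

```python
import random

random.seed(0)
vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]

def corrupt(tokens, corruption_rate=0.15):
    """Replace a fraction of tokens with generator samples and record the
    binary labels the discriminator must predict (1 = replaced, 0 = original)."""
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < corruption_rate:
            # Toy stand-in for the generator: sample a plausible replacement.
            sample = random.choice(vocab)
            corrupted.append(sample)
            # A position counts as "replaced" only if the sampled token
            # actually differs from the original token.
            labels.append(int(sample != tok))
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

original = ["the", "cat", "sat", "on", "the", "mat"]
corrupted, labels = corrupt(original)
print(corrupted)  # input fed to the discriminator
print(labels)     # per-token targets: real (0) vs. replaced (1)
```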
Notes:
Parameter-count method: 6 FLOP/parameter/token * 262,144,000,000 tokens * 136,000,000 parameters = 2.139095e+20 FLOP.
Hardware method: a TPUv3-8 has 8 cores, and TPUv3 has 2 cores per chip, so 4 chips. 123,000,000,000,000 FLOP/chip/sec * 4 chips * 576 hours [see training time notes] * 3,600 sec/hour * 0.3 [assumed utilization] = 3.0606336e+20 FLOP.
Final estimate (geometric mean of the two methods): sqrt(2.139095e+20 * 3.0606336e+20) = 2.5587079e+20 FLOP.
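The snippet below reproduces the arithmetic of this compute estimate; all constants (token count, parameter count, chip throughput, training time, and the 0.3 assumed utilization) are taken directly from these notes.

```python
tokens = 262_144_000_000          # total training tokens (see size notes below)
params = 136_000_000              # model parameters
flop_per_param_per_token = 6

# Method 1: 6 * N * D
compute_param_method = flop_per_param_per_token * tokens * params
print(f"{compute_param_method:.6e}")  # ~2.139095e+20 FLOP

# Method 2: hardware throughput * training time * assumed utilization
chips = 8 // 2                    # TPUv3-8: 8 cores, 2 cores per chip
flop_per_chip_per_sec = 123e12
hours = 576
utilization = 0.3
compute_hw_method = flop_per_chip_per_sec * chips * hours * 3600 * utilization
print(f"{compute_hw_method:.6e}")     # ~3.060634e+20 FLOP

# Final estimate: geometric mean of the two methods
geo_mean = (compute_param_method * compute_hw_method) ** 0.5
print(f"{geo_mean:.6e}")              # ~2.558708e+20 FLOP
```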
Size Notes: The model was pretrained for 2 million steps with a batch size of 256 and a sequence length of 512. 2e+6 steps * 256 sequences/batch * 512 tokens/sequence = 262,144,000,000 total training tokens.
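For reference, the token total used in the compute estimate above follows directly from this pretraining configuration:

```python
# Total training tokens implied by the reported pretraining configuration.
steps = 2_000_000
batch_size = 256
seq_len = 512
total_tokens = steps * batch_size * seq_len
print(total_tokens)  # 262144000000
```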
Notes: 12 encoder layers, 12 attention heads, a hidden size of 768, and a maximum input sequence length of 512, for a total of 136M parameters.
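A rough sanity check of the 136M figure using a standard BERT-style parameter breakdown is sketched below. The 64,000-wordpiece vocabulary is an assumption not stated in these notes (AraELECTRA reuses an AraBERT-style vocabulary), so the result is only meant to show that the reported size is plausible.

```python
hidden = 768
layers = 12
seq_len = 512
vocab = 64_000        # ASSUMED vocabulary size, not stated in these notes
ffn = 4 * hidden      # feed-forward inner dimension

# Token, position, and segment embeddings, plus the embedding LayerNorm
embeddings = (vocab + seq_len + 2) * hidden + 2 * hidden

# Per encoder layer: self-attention (Q, K, V, output projections),
# feed-forward block, and two LayerNorms
attention = 4 * (hidden * hidden + hidden)
feed_forward = (hidden * ffn + ffn) + (ffn * hidden + hidden)
layer_norms = 2 * (2 * hidden)
per_layer = attention + feed_forward + layer_norms

total = embeddings + layers * per_layer
print(f"{total / 1e6:.1f}M")  # ~134.6M, close to the reported 136M
```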