Arabic is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks such as Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA) have proven very challenging to tackle. Recently, with the surge of transformer-based models, language-specific BERT-based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language, in pursuit of achieving the same success that BERT did for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results show that the newly developed AraBERT achieves state-of-the-art performance on most tested Arabic NLP tasks. The pretrained AraBERT models are publicly available at this https URL, in the hope of encouraging research and applications for Arabic NLP.
Notes:
Estimate from parameters and tokens: 6 FLOP / parameter / token * 110,000,000 parameters * 81,920,000,000 total training tokens [see dataset size notes] = 5.40672e+19 FLOP
Estimate from hardware and training time: 45,000,000,000,000 FLOP / sec / chip * 4 chips [= 8 cores] * 96 hours * 3600 sec / hour * 0.3 [assumed utilization] = 1.86624e+19 FLOP
Geometric mean of the two estimates: sqrt(5.40672e+19 * 1.86624e+19) = 3.1765134e+19 FLOP
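The arithmetic above can be reproduced with a short script. This is a minimal sketch assuming only the figures quoted in these notes (110M parameters, 81.92B training tokens, 4 chips at 45 TFLOP/s each, 96 hours, 30% assumed utilization); variable names are illustrative.

```python
import math

# Estimate 1: parameter-token heuristic (6 FLOP per parameter per token).
params = 110e6                      # ~110M parameters (BERT-base configuration)
training_tokens = 81.92e9           # total training tokens (see size notes)
flop_from_params = 6 * params * training_tokens            # ~5.40672e19 FLOP

# Estimate 2: peak hardware throughput x training time x assumed utilization.
peak_flops_per_chip = 45e12         # 45 TFLOP/s per chip, per the note above
chips = 4                           # 4 chips [= 8 cores]
seconds = 96 * 3600                 # 4 days of training
utilization = 0.3                   # assumed utilization
flop_from_hardware = peak_flops_per_chip * chips * seconds * utilization  # ~1.86624e19 FLOP

# Final figure: geometric mean of the two estimates.
estimate = math.sqrt(flop_from_params * flop_from_hardware)
print(f"{flop_from_params:.5e}, {flop_from_hardware:.5e}, {estimate:.5e}")
# -> 5.40672e+19, 1.86624e+19, 3.17651e+19 FLOP
```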
Size Notes:
From the paper: "The final size of the pre-training dataset, after removing duplicate sentences, is 70 million sentences, corresponding to ∼24GB of text."
Training ran for a total of "1,250,000 steps. To speed up the training time, the first 900K steps were trained on sequences of 128 tokens, and the remaining steps were trained on sequences of 512 tokens," with a "batch size of 512 and 128 for sequence length of 128 and 512 respectively. Training took 4 days, for 27 epochs over all the tokens."
Dataset tokens: [see AraGPT2-Mega dataset size notes] 77GB of Arabic text ~ 9,932,800,000 tokens -> 24GB ~ 3,095,937,662 tokens.
Training tokens: 900,000 steps * batch 512 * seq len 128 + 350,000 steps * batch 128 * seq len 512 = 81,920,000,000 total training tokens, i.e. ~26-27 epochs, consistent with the reported 27 epochs.
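The token accounting above can be checked with the sketch below. It assumes the tokens-per-GB ratio from the AraGPT2-Mega notes (77GB ~ 9.9328B tokens) and treats GB as GiB (2^30 bytes), which reproduces the quoted figures.

```python
# Tokens-per-byte ratio from the AraGPT2-Mega dataset size notes
# (77GB ~ 9,932,800,000 tokens), treating GB as GiB (2**30 bytes).
tokens_per_byte = 9_932_800_000 / (77 * 2**30)
dataset_tokens = 24 * 2**30 * tokens_per_byte    # ~3,095,937,662 tokens in 24GB

# Tokens processed during pre-training: two phases with different sequence lengths.
phase1 = 900_000 * 512 * 128     # 900K steps, batch size 512, sequence length 128
phase2 = 350_000 * 128 * 512     # 350K steps, batch size 128, sequence length 512
training_tokens = phase1 + phase2                # 81,920,000,000 tokens

epochs = training_tokens / dataset_tokens        # ~26.5, matching the reported ~27 epochs
print(f"{dataset_tokens:,.0f} dataset tokens, {training_tokens:,} training tokens, {epochs:.1f} epochs")
```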
Notes: From the paper: "We use the BERT-base configuration that has 12 encoder blocks, 768 hidden dimensions, 12 attention heads, 512 maximum sequence length, and a total of ∼110M parameters."
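For reference, a configuration matching these numbers can be instantiated with the Hugging Face transformers library. This is an illustrative sketch only; the released AraBERT checkpoints ship their own config and vocabulary, and the vocab_size below is a placeholder (the exact parameter count depends on it, so ∼110M is approximate).

```python
from transformers import BertConfig, BertForMaskedLM

# BERT-base-style configuration matching the numbers in the note above.
config = BertConfig(
    vocab_size=30_000,              # placeholder; AraBERT uses its own subword vocabulary
    hidden_size=768,                # hidden dimension
    num_hidden_layers=12,           # encoder blocks
    num_attention_heads=12,         # attention heads
    intermediate_size=3072,         # feed-forward dimension (4 x hidden, standard for BERT-base)
    max_position_embeddings=512,    # maximum sequence length
)

model = BertForMaskedLM(config)
print(sum(p.numel() for p in model.parameters()))   # on the order of ~110M parameters
```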