Effective scaling and a flexible task interface enable large language models to excel at many tasks. We present PaLI (Pathways Language and Image model), a model that extends this approach to the joint modeling of language and vision. PaLI generates text based on visual and textual inputs, and with this interface performs many vision, language, and multimodal tasks, in many languages. To train PaLI, we make use of large pre-trained encoder-decoder language models and Vision Transformers (ViTs). This allows us to capitalize on their existing capabilities and leverage the substantial cost of training them. We find that joint scaling of the vision and language components is important. Since existing Transformers for language are much larger than their vision counterparts, we train a large, 4-billion parameter ViT (ViT-e) to quantify the benefits from even larger-capacity vision models. To train PaLI, we create a large multilingual mix of pretraining tasks, based on a new image-text training set containing 10B images and texts in over 100 languages. PaLI achieves state-of-the-art results in multiple vision and language tasks (such as captioning, visual question-answering, scene-text understanding), while retaining a simple, modular, and scalable design.
Notes: Pre-training the ViT component involved 1.1 million steps (they train for over 1M steps, but run the last 100k steps twice and average the two resulting models). The batch size is 16384 and the inputs are 224x224. Table 8 indicates that a forward pass of ViT-e/14 on a 224x224 image takes 1980 GFLOPs, so total training compute for the ViT-e/14 model is:

1980e9 FLOPs/image * 16384 images/step * 1.1e6 steps * 3 (the backward pass costs roughly 2x the forward pass, so forward + backward is about 3x forward) ≈ 1.07e23 FLOPs

In the "Overall model" section, they then say: "The largest model, PaLI-17B, is pretrained using 1,024 GCP-TPUv4 chips for 7 days". It is then trained for another 3 days on 512 chips at higher resolution. I assume the stated TPUv4 training does not include the ViT pretraining, since it amounts to fewer FLOPs than we estimate above for the ViT:

275e12 FLOP/s per chip * (1024 chips * 7 days + 512 chips * 3 days) * 24 * 3600 s/day * 0.3 (utilization assumption) ≈ 6.2e22 FLOPs

Total: 1.07e23 + 6.2e22 ≈ 1.69e23 FLOPs
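The arithmetic above can be reproduced with a short Python sketch. The 1980 GFLOPs, batch size, step count, and chip counts are from the paper as quoted above; the 275 TFLOP/s figure is TPUv4 peak throughput and the 0.3 utilization is an assumption, as noted.

```python
# Rough FLOP estimate for PaLI-17B pretraining, following the notes above.

# ViT-e/14 pretraining: per-image forward-pass cost (Table 8) times batch
# size, times steps, times 3 to account for the backward pass.
vit_forward_flops = 1980e9        # FLOPs per 224x224 image (Table 8)
batch_size = 16384
steps = 1.1e6                     # 1M steps, with the last 100k run twice
vit_flops = vit_forward_flops * batch_size * steps * 3   # ~1.07e23

# Main PaLI-17B pretraining, from the reported TPU time:
# 1,024 TPUv4 chips for 7 days, then 512 chips for 3 days.
peak_flops_per_chip = 275e12      # TPUv4 peak throughput
utilization = 0.3                 # assumed hardware utilization
chip_days = 1024 * 7 + 512 * 3
seconds_per_day = 24 * 3600
pali_flops = peak_flops_per_chip * chip_days * seconds_per_day * utilization  # ~6.2e22

total_flops = vit_flops + pali_flops                     # ~1.69e23
print(f"ViT-e: {vit_flops:.2e}  PaLI: {pali_flops:.2e}  total: {total_flops:.2e}")
```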
Size Notes: "During training, the model passes over 1.6B images, one epoch over the entire pretraining dataset"
Notes: 3.9B-parameter image encoder (ViT-e), 14B-parameter multimodal encoder-decoder