Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This "scale-then-compress" approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5X, fine-tuning memory usage by 3.4X, pre-filling latency by 1.6-2.2X, and decoding latency by 1.2-2.8X. We will soon make our code and models available to facilitate reproducibility.
Notes: "We train all models using 128 NVIDIA H100 GPUs with a global batch size of 2048 across all stages." from Figure 1 NVILA 8B takes 4.5x times less gpu hours than LLAVA OneVision which is reported to take 400 GPU days ("For example, training a state-of-the-art 7B VLM [5] can take up to 400 GPU days") 400/4.5 ~ 89 GPU days ~ 2133 GPU hours 989500000000000 FLOP / sec / GPU * 2133 GPU*hours * 3600 sec / hour * 0.3 [assumed utilization] = 2.2794518e+21 FLOP Confidence: Likely (precision is FP8 not FP16 and utilization could be different from 0.3)
Size Notes: Using the C ≈ 6·N·D approximation with N = 8e9 parameters: 2.2794518e+21 FLOP / (6 * 8e9) ≈ 4.75e10 tokens, i.e. ~47B tokens.
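A continuation of the same sketch, backing out the dataset size from compute with the standard C ≈ 6·N·D approximation; N = 8e9 is the parameter count, and treating all compute as dense-transformer training FLOP is an assumption:

    # Token-count estimate: D ≈ C / (6 * N), with N = 8e9 parameters.
    n_params = 8e9
    train_flop = 2.2794518e21          # compute estimate from the note above
    tokens = train_flop / (6 * n_params)
    print(f"{tokens:.3e} tokens")      # ~4.75e+10, i.e. ~47B tokens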
Notes: 8B parameters (NVILA-8B).