We introduce Lumina-Image 2.0, an advanced text-to-image generation framework that achieves significant progress over the previous work, Lumina-Next. Lumina-Image 2.0 is built upon two key principles: (1) Unification - it adopts a unified architecture (Unified Next-DiT) that treats text and image tokens as a joint sequence, enabling natural cross-modal interactions and allowing seamless task expansion. In addition, since high-quality captioners can provide semantically well-aligned text-image training pairs, we introduce a unified captioning system, Unified Captioner (UniCap), specifically designed for T2I generation tasks. UniCap excels at generating comprehensive and accurate captions, accelerating convergence and enhancing prompt adherence. (2) Efficiency - to improve the efficiency of our proposed model, we develop multi-stage progressive training strategies and introduce inference acceleration techniques without compromising image quality. Extensive evaluations on academic benchmarks and public text-to-image arenas show that Lumina-Image 2.0 delivers strong performance even with only 2.6B parameters, highlighting its scalability and design efficiency. We have released our training details, code, and models at this https URL.
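The unified-sequence idea can be illustrated with a minimal sketch: caption tokens and image (patch) tokens are concatenated into a single sequence and passed through one transformer block, so self-attention spans both modalities. All names and dimensions below are hypothetical illustrations, not the actual Unified Next-DiT implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of a joint text-image token sequence (hypothetical names;
# not the actual Unified Next-DiT code).
class JointSequenceBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
        # Concatenate both modalities into one sequence so self-attention
        # mixes text and image tokens directly (the "unification" principle).
        x = torch.cat([text_tokens, image_tokens], dim=1)
        out, _ = self.attn(self.norm(x), self.norm(x), self.norm(x))
        return x + out  # residual connection

text = torch.randn(1, 77, 512)    # e.g., 77 caption tokens
image = torch.randn(1, 256, 512)  # e.g., 16x16 grid of latent patches
y = JointSequenceBlock()(text, image)
print(y.shape)  # torch.Size([1, 333, 512])
```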
Notes: 3.12e14 FLOP / GPU / sec [A100 reported, bf16 assumed] * 14,184 GPU-hours [see training time notes] * 3,600 sec / hour * 0.3 [assumed utilization] = 4.7794406e+21 FLOP
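The estimate above can be reproduced with a short calculation; the inputs mirror the note (A100 bf16 peak throughput, reported GPU-hours, and an assumed 30% utilization).

```python
# Back-of-envelope reproduction of the training-compute estimate above.
PEAK_FLOP_PER_GPU_SEC = 3.12e14   # A100 bf16 tensor-core peak (dense)
GPU_HOURS = 14_184                # reported training time (see notes)
SECONDS_PER_HOUR = 3_600
ASSUMED_UTILIZATION = 0.3         # assumption, not reported

total_flop = (PEAK_FLOP_PER_GPU_SEC * GPU_HOURS
              * SECONDS_PER_HOUR * ASSUMED_UTILIZATION)
print(f"{total_flop:.7e} FLOP")   # -> 4.7794406e+21 FLOP
```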
Size Notes: "we constructed a dataset combining both real and synthetic data, and performed data filtering based on the techniques outlined in [15, 22, 58], resulting in total 110M samples."
Notes: 2.6B parameters (as reported in the abstract).