We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
Notes:
256 TPU v4 chips for the 64x64 base model, for 4 days
128 TPU v4 chips for the 64->256 super-resolution model, for 2 days
128 TPU v4 chips for the 256->1024 super-resolution model, for 2 days
Compute estimate: (256 TPUs * 275 teraFLOP/s per TPU * 4 days + 2 * (128 TPUs * 275 teraFLOP/s per TPU * 2 days)) * 40% utilization = 1.46e+22 FLOP
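A minimal sketch of this compute estimate in Python, assuming the chip counts and durations listed above, 275 teraFLOP/s peak throughput per TPU v4 chip, and the 40% utilization factor stated in the note:

```python
# Rough training-compute estimate for Imagen, using the per-stage chip counts,
# training durations, and assumed 40% utilization from the note above.
SECONDS_PER_DAY = 86_400
PEAK_FLOPS_PER_TPU_V4 = 275e12  # 275 teraFLOP/s peak per TPU v4 chip (assumed)
UTILIZATION = 0.40              # assumed hardware utilization

# (chips, days) per training stage
stages = [
    (256, 4),  # 64x64 base generation model
    (128, 2),  # 64->256 super-resolution model
    (128, 2),  # 256->1024 super-resolution model
]

total_flop = sum(
    chips * PEAK_FLOPS_PER_TPU_V4 * days * SECONDS_PER_DAY
    for chips, days in stages
) * UTILIZATION

print(f"{total_flop:.2e} FLOP")  # ~1.46e+22 FLOP
```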
Size Notes: [IMAGE-TEXT PAIRS] "We train on a combination of internal datasets, with ≈ 460M image-text pairs, and the publicly available Laion dataset [61], with ≈ 400M image-text pairs."
Notes: 2B-parameter 64x64 generation model, 600M-parameter 64->256 super-resolution model, 400M-parameter 256->1024 super-resolution model. Imagen uses encodings from a frozen T5-XXL text encoder, which should be included in the total parameter count; loading the model directly, the encoder has 4,762,310,656 parameters. Total: 2B + 4.762B + 600M + 400M = 7.762 billion parameters. Note that this paper claims it is 3B parameters: https://arxiv.org/pdf/2407.15811
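A small Python tally of the parameter count under these assumptions (component sizes taken from the note above; the T5-XXL encoder figure is the directly loaded count):

```python
# Parameter-count tally for Imagen, using the component sizes in the note above.
components = {
    "frozen T5-XXL text encoder": 4_762_310_656,   # from loading the encoder directly
    "64x64 base generation model": 2_000_000_000,
    "64->256 super-resolution model": 600_000_000,
    "256->1024 super-resolution model": 400_000_000,
}

total = sum(components.values())
print(f"{total / 1e9:.3f}B parameters")  # ~7.762B
```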