This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is that learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e., timesteps) can differ across modalities. Inspired by this unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model: it perturbs data in all modalities instead of a single modality, inputs individual timesteps for different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models that can handle inputs of different modalities. Trained on large-scale paired image-text data, UniDiffuser can perform image, text, text-to-image, image-to-text, and image-text pair generation by setting appropriate timesteps, without additional overhead. In particular, UniDiffuser produces perceptually realistic samples in all tasks, and its quantitative results (e.g., FID and CLIP score) are not only superior to existing general-purpose models but also comparable to bespoke models (e.g., Stable Diffusion and DALL-E 2) on representative tasks (e.g., text-to-image generation).
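A minimal sketch of the unified objective described above, assuming a standard DDPM-style noise schedule; the model signature `model(x_img_t, x_txt_t, t_img, t_txt)` and all names here are hypothetical illustrations, not the paper's actual interface:

```python
import torch

# Hypothetical linear-beta DDPM schedule; the key point is only that both
# modalities share the same kind of forward (noising) process.
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def perturb(x, t, eps):
    # q(x_t | x_0): scale clean data and mix in Gaussian noise at level t.
    a = alpha_bar[t].view(-1, *([1] * (x.dim() - 1)))
    return a.sqrt() * x + (1.0 - a).sqrt() * eps

def unidiffuser_loss(model, x_img, x_txt):
    b = x_img.shape[0]
    # Independent timesteps per modality: the core modification.
    t_img = torch.randint(0, T, (b,))
    t_txt = torch.randint(0, T, (b,))
    eps_img = torch.randn_like(x_img)
    eps_txt = torch.randn_like(x_txt)
    pred_img, pred_txt = model(perturb(x_img, t_img, eps_img),
                               perturb(x_txt, t_txt, eps_txt),
                               t_img, t_txt)
    # Regress the joint noise across both modalities.
    return ((pred_img - eps_img) ** 2).mean() + ((pred_txt - eps_txt) ** 2).mean()
```

At sampling time the timestep settings select the task: holding one modality at timestep 0 (clean) conditions on it (e.g., text-to-image), fixing it at the maximum timestep marginalizes it out (unconditional generation), and running both modalities down together yields joint image-text pairs.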
Notes: 3.12e14 FLOP/s per GPU × 672 hours × 3600 s/hour × 88 GPUs × 0.3 (assumed utilization) = 1.992646656e+22 FLOP
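The same estimate in code, for reproducibility; the 0.3 utilization factor is an assumption, as noted above:

```python
# Compute estimate: peak FLOP/s per GPU x wall-clock time x GPU count x utilization.
peak_flops_per_gpu = 3.12e14   # FLOP/s
hours = 672
num_gpus = 88
utilization = 0.3              # assumed, not measured

total_flop = peak_flops_per_gpu * hours * 3600 * num_gpus * utilization
print(f"{total_flop:.9e} FLOP")  # 1.992646656e+22
```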
Size Notes: "The training is multiple-staged following Stable Diffusion (Rombach et al., 2022). In the first stage, we train 250K steps at 256×256 resolution on laion2B-en with a batch size of 11264 and 5K warm-up steps. In the second stage, we fine-tune the model with 200K steps at 512×512 resolution on laion-high-resolution with a batch size of 2112 and 5K warm-up steps. In the last stage, we resume from the last checkpoint of the second stage (including both weights of the model and states of the optimizer), and train 220K steps at 512×512 resolution on laion-aesthetics v2 5+ with a batch size of 2112."
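The staged schedule above as structured data, with samples seen per stage (steps × batch size; a derived figure, not one reported in the paper):

```python
# Training stages quoted above; samples_seen = steps * batch_size per stage.
stages = [
    ("stage 1 (256x256, laion2B-en)",             250_000, 11_264),
    ("stage 2 (512x512, laion-high-resolution)",  200_000,  2_112),
    ("stage 3 (512x512, laion-aesthetics v2 5+)", 220_000,  2_112),
]
total = 0
for name, steps, batch in stages:
    seen = steps * batch
    total += seen
    print(f"{name}: {seen:.3e} samples")
print(f"total: {total:.3e} samples")  # ~3.703e+09 examples seen across all stages
```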