We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed-modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in unified modeling of full multimodal documents.
FLOPs: 3.34e+23
Notes:
GPU-hours method: Table 2 shows that pre-training the 7B model used 856,481 GPU-hours across 1,024 A100s. Assuming A100 peak BF16 throughput of 3.12e14 FLOP/s and 30% utilization: 3.12e14 * 856,481 * 3,600 * 0.3 ≈ 2.89e23 FLOP.
Parameter-token method: pre-training sees 9.2T tokens; post-training adds only ~1.1B tokens (sum of the tokens column in Table 3), which is negligible. 6 * 7B * 9.2T ≈ 3.86e23 FLOP.
Geometric mean of the two estimates: sqrt(2.89e23 * 3.86e23) ≈ 3.34e23 FLOP.
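The same arithmetic as a minimal runnable sketch. Constant names are illustrative; 3.12e14 FLOP/s is the A100's peak dense BF16 throughput and 0.3 is the assumed utilization factor, as in the note above.

```python
from math import sqrt

# Hardware-based estimate (GPU-hours method).
PEAK_FLOPS_A100_BF16 = 3.12e14   # FLOP/s, A100 peak dense BF16 throughput
GPU_HOURS = 856_481              # pre-training GPU-hours for the 7B model (Table 2)
UTILIZATION = 0.3                # assumed utilization factor

flops_hardware = PEAK_FLOPS_A100_BF16 * GPU_HOURS * 3600 * UTILIZATION
# ≈ 2.89e23 FLOP

# Parameter-token estimate (6ND approximation).
N_PARAMS = 7e9                   # model parameters
N_TOKENS = 9.2e12                # pre-training tokens seen (post-training's ~1.1B is negligible)

flops_param_token = 6 * N_PARAMS * N_TOKENS
# ≈ 3.86e23 FLOP

# Final figure: geometric mean of the two estimates.
flops_final = sqrt(flops_hardware * flops_param_token)
print(f"{flops_hardware:.3g}, {flops_param_token:.3g}, {flops_final:.3g}")
# -> 2.89e+23, 3.86e+23, 3.34e+23
```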
Training Code Accessibility: https://ai.meta.com/resources/models-and-libraries/chameleon-downloads/?gk_enable=chameleon_web_flow_is_live "The models we’re releasing today were safety tuned and support mixed-modal inputs and text-only output to be used for research purposes. While we’ve taken steps to develop these models responsibly, we recognize that risks remain. At this time, we are not releasing the Chameleon image generation model."
Hardware: NVIDIA A100 SXM4 80 GB
Size Notes: Slightly conflicting info. The pre-training data description lists data sources that sum to 4.8 trillion tokens, but Table 1 indicates 4.4T. The table value is used here, as it agrees with the paper's other statements about epochs and total tokens seen.
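A quick consistency check on these figures, using only the ~9.2T total tokens seen quoted in the FLOPs notes above; the implied epoch counts below are derived from those numbers, not taken from the paper.

```python
# Implied number of pre-training epochs under each candidate dataset size,
# given the ~9.2T total tokens seen quoted in the FLOPs notes above.
TOTAL_TOKENS_SEEN = 9.2e12

for label, dataset_tokens in [("Table 1 (4.4T)", 4.4e12),
                              ("data description (4.8T)", 4.8e12)]:
    print(f"{label}: {TOTAL_TOKENS_SEEN / dataset_tokens:.2f} implied epochs")
# -> Table 1 (4.4T): 2.09 implied epochs
# -> data description (4.8T): 1.92 implied epochs
```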
Parameters: 7,000,000,000