Today, we’re introducing Command A, a new state-of-the-art generative model optimized for demanding enterprises that require fast, secure, and high-quality AI. Command A delivers maximum performance with minimal hardware costs when compared to leading proprietary and open-weights models, such as GPT-4o and DeepSeek-V3. For private deployments, Command A excels on business-critical agentic and multilingual tasks, while being deployable on just two GPUs, compared to other models that typically require as many as 32. In head-to-head human evaluation across business, STEM, and coding tasks, Command A matches or outperforms its larger and slower competitors, while offering superior throughput and increased efficiency. These human evaluations matter because they are conducted on real-world enterprise data and scenarios.
Notes: Unlikely to be >1e25 FLOP. The model was trained on "trillions of tokens"; assuming at most 10T tokens, the C ≈ 6ND approximation gives 6 × 111e9 × 10e12 ≈ 7e24 FLOP.
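A minimal sketch of the C ≈ 6ND estimate above. The 10T-token count is an assumption (Cohere only states "trillions of tokens"); the 111B parameter count is the reported model size.

```python
# Sketch of the C ≈ 6ND training-compute estimate from the note above.
# D = 10e12 is an assumed upper bound; Cohere only states "trillions of tokens".

N = 111e9   # parameters (Command A is reported as 111B)
D = 10e12   # assumed token count (10 trillion)

C = 6 * N * D  # standard approximation: ~6 FLOP per parameter per training token
print(f"Estimated training compute: {C:.2e} FLOP")  # ~6.66e24 FLOP
print(f"Below 1e25 FLOP: {C < 1e25}")               # True
```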
Notes: Command A is a 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI.