Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training of pre-trained world foundation models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make Cosmos open-source and our models open-weight with permissive licenses available via https://github.com/nvidia-cosmos/cosmos-predict1.
Notes: "We train all of the WFM models reported in the paper using a cluster of 10,000 NVIDIA H100 GPUs in a time span of three months." I assign the FLOPs from this cluster proportional to the parameter size of the model trained. There are a total of 76B parameters between the 8 models. Therefore, assuming 20% utilization (starting with 33% but then accounting for time between experiments), we get (10k H100s)*(90 days)*(24*60*60)*(979e12)*(0.2 utilization)*(14/76) = 2.8e24 FLOPs
Notes: 14B parameters.