We present Segment Anything Model 2 (SAM 2), a foundation model towards solving promptable visual segmentation in images and videos. We build a data engine, which improves model and data via user interaction, to collect the largest video segmentation dataset to date. Our model is a simple transformer architecture with streaming memory for real-time video processing. SAM 2 trained on our data provides strong performance across a wide range of tasks. In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM). We believe that our data, model, and insights will serve as a significant milestone for video segmentation and related perception tasks. We are releasing a version of our model, the dataset, and an interactive demo.
FLOPs: 1.24e+22
Notes: "The released SAM 2 was trained on 256 A100 GPUs for 108 hours" 256 * 108 * 3600 * 3.12e14 * 0.40 = 1.24e22
Training Code Accessibility: https://huggingface.co/facebook/sam2-hiera-large (Apache-2.0, BSD-3-Clause licenses found)
Size Notes: pre-training — data: SA-1B (~1.1B masks over 11M images); steps: ~90k; resolution: 1024; precision: bfloat16; batch size: 256
Parameters: 224,400,000
Notes: sam2.1_hiera_large checkpoint, 224.4M parameters; https://github.com/facebookresearch/sam2
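A minimal sketch for verifying the parameter count from the released checkpoint; the filename sam2.1_hiera_large.pt and the "model" key inside the checkpoint dict are assumptions based on the repo's release artifacts, not confirmed here:

```python
import torch

# Count parameters in the released SAM 2.1 Hiera-Large checkpoint.
# Assumes weights are stored under a "model" key; falls back to the
# top-level dict otherwise.
ckpt = torch.load("sam2.1_hiera_large.pt", map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

total = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"{total / 1e6:.1f}M parameters")  # expected ~224.4M
```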