We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent referring expressions as links in Markdown, i.e., "[text span](bounding boxes)", where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct a large-scale dataset of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://github.com/microsoft/unilm/tree/master/kosmos-2
Notes: "We train the model on 256 V100 GPUs and the training takes approximately one day to complete" "We train KOSMOS-2 for 60k steps, equivalent to around 25 billion tokens" GPU-time method (256) * (1.3e14) * (24 * 3600) * (0.3) = 8.626176e20 (num gpu) * (peak flops) * (time in seconds) * (assumed utilization rate) Parameter-data method 6ND = 6*25B*1.6B = 2.4e20 Used geometric mean of two estimates.
Size Notes: text and images. "We train the model on newly added grounded image-text pairs, monomodal text corpora, image-caption pairs, and interleaved image-text data. Our training process involves a batch size of 419K tokens, consisting of 185K tokens from text corpora, 215K tokens from original and grounded image-caption pairs, and 19K tokens from interleaved data. We train KOSMOS-2 for 60k steps, equivalent to around 25 billion tokens."
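A quick sanity check, assuming the quoted per-step batch size and step count, that the numbers are consistent with the ~25 billion token figure:

```python
# 419K tokens per step for 60k steps
tokens_per_step = 419_000
steps = 60_000
total_tokens = tokens_per_step * steps
print(f"{total_tokens:.3e}")  # 2.514e+10, i.e. roughly 25 billion tokens
```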
Notes: 1.6B parameters.