Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
Notes: Hardware-based estimate: 256 (16x16 slice) TPU v3 chips x 123e12 FLOP/s per chip x 4 days x 86400 seconds/day x 0.4 utilization ≈ 4.35e21 FLOP. A similar value follows from the C ≈ 6ND approximation: 6 x 524,288,000,000 tokens x 1.18B parameters ≈ 3.71e21 FLOP. Taking the geometric mean of the two estimates: sqrt(4.35e21 x 3.71e21) ≈ 4.02e21 FLOP.
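A minimal Python sketch reproducing the arithmetic above. The token count is taken verbatim from the note; its decomposition into 1M steps x batch 512 x 1024-token sequences is an assumption inferred from the factors, not stated in the note itself.

```python
import math

# Hardware-based estimate: 16x16 TPU v3 slice (256 chips), 4 days of training.
# 123e12 FLOP/s is TPU v3 peak per-chip throughput; 40% utilization is the
# note's assumption.
chips = 256
peak_flops_per_chip = 123e12
seconds = 4 * 86400
utilization = 0.4
hardware_estimate = chips * peak_flops_per_chip * seconds * utilization
# ~4.35e21 FLOP

# C ~= 6 * N * D estimate: 1.18B parameters, 524,288,000,000 training tokens.
# (Assumption: tokens = 1M steps x batch 512 x 1024 tokens per sequence.)
params = 1.18e9
tokens = 524_288_000_000
six_nd_estimate = 6 * params * tokens
# ~3.71e21 FLOP

# Reconcile the two estimates via their geometric mean.
combined = math.sqrt(hardware_estimate * six_nd_estimate)
print(f"hardware: {hardware_estimate:.3g}, 6ND: {six_nd_estimate:.3g}, "
      f"geomean: {combined:.3g}")
# -> hardware: 4.35e+21, 6ND: 3.71e+21, geomean: 4.02e+21
```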
Notes: "This section focuses on in-simulation evaluation. Figure 10 compares the full 1.18B parameter Gato" p.10