We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
Notes: 1T tokens × 13B parameters × 6 FLOPs/token/parameter = 7.8e22 FLOPs, using the figures from the paper. LLaMA-13B took 135,168 GPU-hours on A100s; at the A100's peak of 312 TFLOP/s, 312e12 × 135,168 × 3600 = 1.518e23 FLOPs at full utilization. This implies the actual utilization was MFU = 7.8e22 / 1.518e23 ≈ 0.514.
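A minimal sketch of the arithmetic above, assuming the standard 6 FLOPs per token per parameter training approximation and the A100's 312 TFLOP/s dense BF16 peak (both assumptions, not values stated in the note beyond what is written there):

```python
# Sanity check of the MFU estimate above.
# Assumptions: 6 FLOPs/token/parameter and 312 TFLOP/s peak per A100.

tokens = 1.0e12               # 1T training tokens
params = 13.0e9               # LLaMA-13B parameters
flops_per_token_param = 6

training_flops = tokens * params * flops_per_token_param       # ~7.8e22 FLOPs

gpu_hours = 135_168           # reported A100 GPU-hours for LLaMA-13B
peak_flops_per_sec = 312e12   # A100 dense BF16 peak throughput
available_flops = peak_flops_per_sec * gpu_hours * 3600        # ~1.518e23 FLOPs

mfu = training_flops / available_flops
print(f"training FLOPs:  {training_flops:.3e}")
print(f"available FLOPs: {available_flops:.3e}")
print(f"MFU:             {mfu:.3f}")                           # ~0.514
```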
Size Notes: 13.0B parameters, from Table 2 of the paper.