How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce \textit{Pythia}, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend \textit{Pythia} to facilitate research in many areas, and we present several case studies, including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights into LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at \url{https://github.com/EleutherAI/pythia}.
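As a quick illustration of the checkpoint release: the schedule documented in the Pythia repository is step 0, log2-spaced steps 1 through 512, and then every 1,000 steps through step 143,000 (end of training). A minimal Python sketch, assuming that schedule, confirms it yields exactly 154 checkpoints:

```python
# Checkpoint schedule per the Pythia repository: step 0, powers of two
# up to 512, then every 1000 steps through step 143000.
steps = [0] + [2**i for i in range(10)] + list(range(1000, 144000, 1000))

assert len(steps) == 154  # 1 + 10 + 143 checkpoints per model
print(steps[:12], "...", steps[-1])  # [0, 1, 2, ..., 512, 1000] ... 143000
```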
FLOPs: 1.26e20
Notes: estimated as 6 FLOPs × 70 million parameters × 299,892,736,000 tokens; https://www.wolframalpha.com/input?i=6+FLOP+*+70+million+*+299892736000
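The FLOP estimate above follows the common 6ND training-compute approximation (roughly 6 FLOPs per parameter per training token); a minimal Python sketch of the arithmetic, assuming that approximation:

```python
# Training-compute estimate via the 6*N*D approximation:
# ~6 FLOPs per parameter per training token.
params = 70_000_000        # Pythia-70M parameter count (see Parameters field)
tokens = 299_892_736_000   # ~300B training tokens (see Size Notes)

flops = 6 * params * tokens
print(f"{flops:.3e}")      # -> 1.260e+20, matching the FLOPs field above
```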
Training Code Accessibility: Apache 2.0 for model/code/data. Training code: https://github.com/EleutherAI/pythia?tab=readme-ov-file#reproducing-training; inference code: https://github.com/EleutherAI/pythia?tab=readme-ov-file#quickstart
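The quickstart linked above loads models and their intermediate checkpoints through the Hugging Face transformers library, with each checkpoint published as a "step<N>" revision on the Hub. A minimal sketch along the lines of the repository's example (checkpoint step3000 chosen arbitrarily for illustration):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Each of the 154 checkpoints per model is a branch named "step<N>"
# on the Hugging Face Hub; omit `revision` for the final checkpoint.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```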
Hardware: NVIDIA A100 SXM4 40 GB
Hardware Quantity: 32
Size Notes: "We train all models for 299,892,736,000 ≈ 300B tokens"
Parameters: 70,000,000
Notes: See Table 1 for non-embedding parameters