We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, and without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
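As a minimal sketch of the pipeline the abstract describes, assuming stub tokenizers and a stub language model (every function name, vocabulary size, and token rate below is illustrative, not from the paper's code): semantic tokens from the masked-LM side carry long-term structure, neural-codec tokens carry acoustic detail, and generation runs stage by stage.

```python
import random
from typing import List

random.seed(0)

def semantic_tokenize(audio: List[float]) -> List[int]:
    # Stand-in for discretized masked-LM activations (w2v-BERT + k-means in the paper).
    return [random.randrange(1024) for _ in range(max(1, len(audio) // 320))]

def acoustic_tokenize(audio: List[float]) -> List[int]:
    # Stand-in for neural audio codec codes (SoundStream in the paper).
    return [random.randrange(1024) for _ in range(max(1, len(audio) // 160))]

def lm_continue(tokens: List[int], n_new: int) -> List[int]:
    # Stand-in for autoregressive sampling from a decoder-only Transformer.
    return tokens + [random.randrange(1024) for _ in range(n_new)]

def generate_continuation(prompt_audio: List[float]) -> List[int]:
    # Stage 1: extend the semantic token sequence (long-term structure).
    semantic = lm_continue(semantic_tokenize(prompt_audio), n_new=50)
    # Stage 2: coarse acoustic tokens, conditioned on the semantic tokens
    # (conditioning shown here as simple sequence concatenation).
    coarse = lm_continue(semantic + acoustic_tokenize(prompt_audio), n_new=100)
    # Stage 3: fine acoustic tokens; a real system would feed these to the
    # codec decoder to synthesize the output waveform.
    return lm_continue(coarse, n_new=200)

print(len(generate_continuation([0.0] * 16000)))
```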
Notes: "We train each stage on 16 TPUv4s with batch size of 256 for 1M steps." That's for the 900M-param transformers If there's 256 passes in each batch, then using 6ND that's 900m * 256m * 6 = 1.3e18. sanity check: 16 tpu4s is 4.4e15 FLOP/s. 1.3e18 FLOP / 4.4e15 FLOP/s is 295 seconds. adjusting for utilization it would be ~1000 seconds or 15 minutes? probably too short, so 1.3e18 seems too low. upd there are 3 stages -> 1.3e18*3 = 3.9e+18 (Speculative due to reasoning above)
Size Notes: 60k hours of English speech. At ~13,680 words per hour, 13,680 × 60,000 = 820,800,000 words. https://docs.google.com/document/d/1G3vvQkn4x_W71MKg0GmHVtzfd9m0y3_Ofcoew0v902Q/edit#heading=h.sxcem9l5k3ce
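The same arithmetic as a one-off check (the ~13,680 words/hour rate is the note's assumption, from the linked doc):

```python
# Dataset-size arithmetic: 60k hours of English speech at an assumed
# ~13,680 spoken words per hour (the note's rate).
hours = 60_000
words_per_hour = 13_680
print(f"{hours * words_per_hour:,} words")  # 820,800,000 words
```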
Notes: "We use identical decoder-only Transformers in all stages, with 12 layers, 16 attention heads, embedding dimension of 1024, feed-forward layer dimension of 4096 and dropout of 0.1, together with T5-style relative positional embeddings [38], resulting in a model parameter size of 0.3B per stage." Three stages (figure 2), and 300M per stage. Plus 600M parameters for w2v-BERT XL, so 1.5B total