We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
FLOPs: 4.21e+21
Notes: See Figure 9.
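As a rough sanity check (not the method behind the estimate above, which per the note comes from Figure 9), one can invert the common ~6 · N · D heuristic for transformer training compute to see how many processed tokens the FLOP figure implies; every input below is an assumption:

```python
# Hedged back-of-envelope check, assuming the ~6 * params * tokens heuristic
# for transformer training compute. NOT the method behind the 4.21e21 figure.
params = 1.55e9   # largest Whisper model, per Table 1
flops = 4.21e21   # training compute estimate above

implied_tokens = flops / (6 * params)
print(f"Implied tokens processed under 6*N*D: {implied_tokens:.2e}")
# -> ~4.5e11; for an encoder-decoder audio model this count would cover
#    encoder audio frames as well as text tokens, so it is only indicative.
```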
Training Code Accessibility: MIT license for the weights: https://github.com/openai/whisper. However, the repo appears to contain only inference code; this paper likewise states that only inference code was released, and its authors reproduced their own version of Whisper through other means: https://arxiv.org/pdf/2309.13876
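For context on what the released inference code provides, a minimal transcription example using the `whisper` package from the repo above (the checkpoint name and audio path are placeholders):

```python
import whisper  # pip install openai-whisper

# Load the largest released checkpoint (~1.55B parameters; see Table 1).
model = whisper.load_model("large")

# Transcribe an audio file; "audio.mp3" is a placeholder path.
result = model.transcribe("audio.mp3")
print(result["text"])
```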
Size Notes: "When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning." Assuming a speech rate of 13,680 words per hour: 13,680 words/h × 680,000 h = 9,302,400,000 words (≈9.3B words).
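The arithmetic behind the word-count estimate, spelled out (the 13,680 words/hour speech rate is the note's assumption, not a figure from the paper):

```python
# Reproduces the dataset size estimate above.
words_per_hour = 13_680   # assumed speech rate from the note
hours = 680_000           # training data size from the abstract
total_words = words_per_hour * hours
print(f"{total_words:,} words")  # 9,302,400,000 words (~9.3B)
```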
Parameters: 1,550,000,000 (1.55B)
Notes: See Table 1.