Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Notes: Assuming 100 epochs (consistent with the original GPT paper and the reasoning here: https://arxiv.org/pdf/1906.06669), the training compute estimate is: 6 FLOP / token / parameter * 774*10^6 parameters * 10666666666.7 tokens * 100 epochs = 4.9536e+21 FLOP
Size Notes: 40GB * 200*10^6 words per GB * 4/3 tokens per word = 10666666666.7 tokens
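A minimal sketch reproducing the two calculations above. All inputs are the estimation assumptions stated in these notes (40 GB of WebText, 200*10^6 words per GB, 4/3 tokens per word, 774M parameters, 100 epochs, ~6 FLOP per token per parameter for the forward + backward passes), not official OpenAI figures:

```python
# Token count estimate (Size Notes)
dataset_size_gb = 40            # WebText size reported in the GPT-2 paper
words_per_gb = 200e6            # assumed words per GB of text
tokens_per_word = 4 / 3         # assumed BPE tokens per word
tokens = dataset_size_gb * words_per_gb * tokens_per_word   # ~1.0667e10 tokens

# Training compute estimate (Notes)
parameters = 774e6              # GPT-2 Large parameter count (corrected figure)
epochs = 100                    # assumed number of passes over WebText
flop_per_token_per_param = 6    # ~6 FLOP per token per parameter (forward + backward)
training_flop = flop_per_token_per_param * parameters * tokens * epochs

print(f"tokens: {tokens:.6g}")              # ~1.06667e10
print(f"training FLOP: {training_flop:.4e}")  # ~4.9536e+21
```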
Notes: The initial paper release stated that GPT-2 Large had 762M parameters; the official GitHub repo notes that this was due to an error, and the corrected count is 774M: https://github.com/openai/gpt-2?tab=readme-ov-file