We introduce Trillion-7B, the most token-efficient Korean-centric multilingual LLM available. Our novel Cross-lingual Document Attention (XLDA) mechanism enables highly efficient and effective knowledge transfer from English to target languages like Korean and Japanese. Combined with optimized data mixtures, language-specific filtering, and tailored tokenizer construction, Trillion-7B achieves competitive performance while dedicating only 10% of its 2T training tokens to multilingual data and requiring just 59.4K H100 GPU hours ($148K) for full training. Comprehensive evaluations across 27 benchmarks in four languages demonstrate Trillion-7B's robust multilingual performance and exceptional cross-lingual consistency.
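The abstract does not spell out how XLDA works; as a rough illustration only, the sketch below assumes that XLDA relaxes the usual intra-document attention masking used with sequence packing, so that a target-language document can attend to an English document packed alongside it while unrelated documents stay isolated. The function name `xlda_style_mask` and the group-id bookkeeping are illustrative assumptions, not the paper's released formulation.

```python
import numpy as np

def xlda_style_mask(group_ids):
    """Build a boolean (seq, seq) attention mask for one packed sequence.

    group_ids[i] is the attention group of token i. Documents that form a
    cross-lingual pair (e.g., an English document and its Korean counterpart)
    share a group id; unrelated documents packed into the same sequence get
    their own ids. True means "query token may attend to key token".
    """
    g = np.asarray(group_ids)
    seq = len(g)
    causal = np.tril(np.ones((seq, seq), dtype=bool))  # autoregressive constraint
    same_group = g[:, None] == g[None, :]              # attention stays within a group,
                                                       # but crosses languages inside a pair
    return causal & same_group

# Example pack: tokens 0-2 are an English document, tokens 3-4 its paired
# Korean document (same group id), tokens 5-6 an unrelated document.
mask = xlda_style_mask([0, 0, 0, 0, 0, 1, 1])
print(mask.astype(int))
```

With standard intra-document masking the Korean tokens (positions 3-4) would be cut off from the English tokens (positions 0-2); in this sketch they share a group, so the cross-lingual attention path stays open.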
Notes: ~9.3×10²² FLOP (reported)
- Parameter-token estimate: 6 FLOP/parameter/token × 7×10⁹ parameters × 2×10¹² tokens = 8.4×10²² FLOP
- Hardware-time estimate: 9.894×10¹⁴ FLOP/GPU/sec × 59,400 GPU-hours × 3,600 sec/hour × 0.425 (reported utilization) ≈ 8.99×10²² FLOP
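For reference, both estimates above can be reproduced with plain arithmetic. The snippet below uses only the figures quoted in the note (the 9.894×10¹⁴ FLOP/GPU/sec value is taken as the assumed per-H100 peak throughput).

```python
# Estimate 1: 6 FLOP per parameter per token (forward + backward rule of thumb).
params = 7e9                  # 7B parameters
tokens = 2e12                 # 2T training tokens
estimate_1 = 6 * params * tokens
print(f"{estimate_1:.3e} FLOP")   # 8.400e+22 FLOP

# Estimate 2: peak throughput × GPU-time × reported utilization.
peak_flops = 989.4e12         # FLOP/GPU/sec (assumed H100 peak from the note)
gpu_hours = 59_400            # reported H100 GPU-hours
utilization = 0.425           # reported utilization
estimate_2 = peak_flops * gpu_hours * 3600 * utilization
print(f"{estimate_2:.3e} FLOP")   # 8.992e+22 FLOP
```

Both estimates land within roughly 10% of the ~9.3×10²² FLOP figure reported above.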
Size Notes: "The pretraining corpus for Trillion comprises approximately 2T tokens spanning English, multilingual, mathematical, and coding domains."
Notes: 7B parameters