Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
Event Type
Paper
Tags
Machine Learning and Artificial Intelligence
Award Finalists
Best Student Paper Finalist
Registration Categories
TP
Time
Wednesday, 17 November 2021, 1:30pm - 2pm CST
Location
230-231-232
Description
Large language models have led to state-of-the-art accuracies across several tasks. However, training these models efficiently is challenging because: a) GPU memory capacity is limited, making it impossible to fit large models on even a multi-GPU server, and b) the number of compute operations required can result in unrealistically long training times. Consequently, new methods of model parallelism such as tensor and pipeline parallelism have been proposed. Unfortunately, naive usage of these methods leads to scaling issues at thousands of GPUs. In this paper, we show how tensor, pipeline, and data parallelism can be composed to scale to thousands of GPUs. We propose a novel interleaved pipelining schedule that can improve throughput by 10+% with memory footprint comparable to existing approaches. Our approach allows us to perform training iterations on a model with 1 trillion parameters at 502 petaFLOP/s on 3072 GPUs (per-GPU throughput of 52% of theoretical peak).
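A minimal sketch of how the three parallelism dimensions in the abstract compose, along with the per-GPU throughput arithmetic behind the quoted 52%-of-peak figure. The specific 3-D split (t, p, d) and the 312 TFLOP/s per-GPU peak (A100 FP16/BF16 tensor cores) are illustrative assumptions, not values stated in this listing.

```python
# Sketch: composing tensor (t), pipeline (p), and data (d) parallelism,
# plus the throughput arithmetic implied by the abstract's numbers.
# The parallelism degrees and the 312 TFLOP/s peak are assumptions for
# illustration, not figures taken from this session description.

def world_size(tensor_parallel: int, pipeline_parallel: int, data_parallel: int) -> int:
    """Total GPUs required: the three parallelism degrees multiply."""
    return tensor_parallel * pipeline_parallel * data_parallel

# One hypothetical 3-D decomposition that lands on 3072 GPUs.
t, p, d = 8, 64, 6
assert world_size(t, p, d) == 3072

# Aggregate throughput reported in the abstract.
aggregate_flops = 502e15               # 502 petaFLOP/s
n_gpus = 3072
per_gpu_flops = aggregate_flops / n_gpus   # ~163 teraFLOP/s per GPU

# Assumed theoretical peak per GPU (A100 FP16/BF16 tensor cores).
peak_flops = 312e12

print(f"per-GPU throughput: {per_gpu_flops / 1e12:.1f} TFLOP/s")
print(f"fraction of peak:   {per_gpu_flops / peak_flops:.0%}")    # ~52%
```

Dividing 502 petaFLOP/s across 3072 GPUs gives roughly 163 TFLOP/s per GPU, which is about 52% of the assumed 312 TFLOP/s peak, consistent with the figure quoted in the abstract.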