Efficient Distributed GPU Programming for Exascale
Event Type: Tutorial
Online Only
Accelerator-based Architectures
TUT
Time: Sunday, 14 November 2021, 8am - 5pm CST
Location: Online
Description: Over the past years, GPUs have become ubiquitous in HPC installations around the world. Today, they provide the majority of the performance of some of the largest supercomputers (e.g., Summit, Sierra, JUWELS Booster). This trend continues in upcoming pre-exascale and exascale systems (LUMI, Leonardo, Frontier): GPUs are chosen as the core computing devices to enter this next era of HPC.
To take advantage of future GPU-accelerated systems with tens of thousands of devices, application developers need the skills and tools to understand, manage, and optimize distributed GPU applications.
In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, and more advanced techniques and models (NCCL, NVSHMEM, …) are presented as well. Analysis tools are used to motivate the implementation of performance optimizations. The tutorial combines lectures and hands-on exercises, using Europe's fastest supercomputer, JUWELS Booster, with its NVIDIA A100 GPUs.
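For a flavor of the kind of technique covered, below is a minimal sketch of CUDA-aware MPI, where device pointers are passed directly to MPI calls so the library can move data GPU-to-GPU without explicit host staging. This is an illustrative example, not tutorial material; it assumes a CUDA-aware MPI installation (as available on systems like JUWELS Booster) and one GPU per MPI rank.

// Minimal sketch: ring exchange between GPUs using CUDA-aware MPI.
// Assumes a CUDA-aware MPI build; buffer size and the rank-to-device
// mapping are illustrative choices, not prescribed by the tutorial.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // One GPU per rank: map ranks to devices round-robin on each node.
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev == 0) { MPI_Abort(MPI_COMM_WORLD, 1); }
    cudaSetDevice(rank % ndev);

    const int n = 1 << 20;  // 1 Mi doubles per rank (illustrative size)
    double *d_send, *d_recv;
    cudaMalloc(&d_send, n * sizeof(double));
    cudaMalloc(&d_recv, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));  // zero-fill the send buffer

    // Ring exchange: each rank sends to the right, receives from the left.
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    // With a CUDA-aware MPI, device pointers go straight into MPI calls.
    MPI_Sendrecv(d_send, n, MPI_DOUBLE, right, 0,
                 d_recv, n, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0) printf("ring exchange of %d doubles complete\n", n);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}

With a CUDA-aware MPI (e.g., an Open MPI build with UCX and CUDA support), this should compile with mpicc, linking against the CUDA runtime. NCCL and NVSHMEM, also covered in the tutorial, replace or augment such MPI calls with GPU-initiated collectives and one-sided communication.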