MLPerf HPC: A Holistic Benchmark Suite for Scientific Machine Learning on HPC Systems
Machine Learning and Artificial Intelligence
Time: Monday, 15 November 2021, 11:30am - 12pm CST
Description: Scientific communities are increasingly adopting machine learning and deep learning models in their applications to accelerate scientific insight. High-performance computing (HPC) systems are pushing the frontiers of performance with a rich diversity of hardware resources and massive scale-out capabilities. There is a critical need for fair and effective benchmarking of machine learning applications that are representative of real-world scientific use cases. MLPerf(TM) is a community-driven standard for benchmarking machine learning workloads, focusing on end-to-end performance metrics. In this paper, we introduce MLPerf HPC, a benchmark suite of large-scale scientific machine learning training applications driven by the MLCommons(TM) Association. We present results from the first submission round, covering a diverse set of some of the world's largest HPC systems, along with a systematic framework for their joint analysis and insights into the implementations. Furthermore, we characterize each benchmark's compute, memory, and I/O behaviour to parameterize extended roofline performance models.
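The roofline model mentioned above bounds attainable performance by the lower of a machine's peak compute rate and its memory bandwidth scaled by the kernel's arithmetic intensity. A minimal sketch of that formula follows; the machine parameters are illustrative assumptions, not measurements from the paper:

```python
def roofline(arith_intensity, peak_flops, peak_bw):
    """Attainable performance (FLOP/s) under the basic roofline model.

    arith_intensity: FLOPs performed per byte moved from memory.
    peak_flops:      machine peak compute rate (FLOP/s).
    peak_bw:         peak memory bandwidth (bytes/s).
    """
    # Performance is capped by whichever ceiling is hit first:
    # the memory roof (bw * intensity) or the compute roof (peak_flops).
    return min(peak_flops, peak_bw * arith_intensity)


# Hypothetical machine, for illustration only (not from the paper):
PEAK_FLOPS = 7.8e12   # 7.8 TFLOP/s peak
PEAK_BW = 0.9e12      # 900 GB/s memory bandwidth

# Ridge point: intensity at which a kernel transitions from
# memory-bound to compute-bound on this machine.
ridge = PEAK_FLOPS / PEAK_BW
```

A kernel whose arithmetic intensity lies below the ridge point is memory-bound (its attainable FLOP/s grows linearly with intensity); above it, the kernel is compute-bound and pinned at peak. Extended roofline variants add further ceilings, e.g. for I/O bandwidth.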