SC21 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

MLPerf: A Benchmark for Machine Learning


Authors: Tom St. John (Cruise), Murali Emani (Argonne National Laboratory (ANL)), Geoffrey Fox (University of Virginia), Peter Mattson (Google LLC)

Abstract: Machine learning applications are rapidly expanding into scientific domains and challenging the hallmarks of traditional HPC workloads. We present MLPerf, a community-driven system performance benchmark that spans a range of machine learning tasks. The speakers at this BoF are experts in HPC, scientific computing, machine learning, and computer architecture, representing academia, government research organizations, and private industry. In this session, we will cover the past year's development within the MLPerf organization, provide an update on the latest round of submissions to the MLPerf-HPC benchmark suite, and solicit input from interested parties within the HPC community.

Long Description: Deep learning is transforming the field of machine learning (ML) from theory to practice. Following the widespread adoption of machine learning over the past several years, ML workloads now stand alongside traditional scientific computing workloads in the high performance computing application space. These workloads have sparked a renaissance in computer system design. Both academia and industry are scrambling to integrate ML-centric designs into their products, and numerous research efforts are focused on scaling ML problems up to extreme-scale systems.

Despite the breakneck pace of innovation, a crucial issue affects the research and industry communities at large: how to enable fair and useful benchmarking of ML software frameworks, ML hardware accelerators, and ML systems. The ML field requires systematic benchmarking that is both representative of real-world use cases and useful for making fair comparisons across different software and hardware platforms. This is increasingly relevant as the scientific community adopts ML in its research, for example in model-driven simulations, data analysis, and surrogate models.

MLPerf answers the call. MLPerf is a machine learning benchmark standard driven by more than 70 companies and over 1,000 engineers and researchers. The benchmark suite comprises a set of key machine learning training and inference workloads that are representative of important production use cases, ranging from image classification and object detection to recommendation. MLPerf has already completed four rounds of training and inference results, as well as one round of HPC results.

The MLPerf-HPC benchmark suite includes scientific applications that use ML, especially deep learning, at HPC scale. These benchmarks can be used to help project future system performance and assist in the design of future HPC systems. They aim to evaluate behaviors unique to HPC applications, such as:
- On-node versus off-node communication characteristics for various training schemes
- Large datasets, I/O bottlenecks, reliability, and MPI versus alternative communication backends (see the sketch after this list)
- Complex workloads in which model training/inference may be coupled with simulations, high-dimensional data, or hyperparameter optimization
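To make the "MPI versus alternative communication backends" point concrete, the following is a minimal sketch of how a distributed training benchmark might select its collective-communication backend. It assumes PyTorch with torch.distributed; the backend choices, helper names, and launcher-provided environment variables shown are illustrative and are not taken from any MLPerf reference implementation.

    # Minimal sketch (assumed: PyTorch with torch.distributed); helper names and
    # environment-variable conventions here are illustrative, not MLPerf code.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def init_distributed(backend: str = "nccl") -> int:
        # With the default env:// init method, RANK, WORLD_SIZE, MASTER_ADDR, and
        # MASTER_PORT are read from the environment (set by the job launcher).
        # The "mpi" backend instead obtains rank and world size from MPI itself.
        dist.init_process_group(backend=backend)
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
        if backend == "nccl":
            torch.cuda.set_device(local_rank)  # one GPU per rank
        return local_rank

    def wrap_model(model: torch.nn.Module, local_rank: int, backend: str = "nccl"):
        # DDP all-reduces gradients across ranks each step; whether that traffic
        # stays on-node or goes off-node depends on the backend, the network
        # topology, and how ranks are laid out across nodes.
        if backend == "nccl":
            return DDP(model.cuda(local_rank), device_ids=[local_rank])
        return DDP(model)  # CPU path for "mpi" or "gloo" backends

Swapping the backend argument (for example, "nccl" versus "mpi") leaves the training loop unchanged while exercising a different communication path, which is the kind of comparison these benchmarks are intended to surface.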

In this session, we will focus on the following topics of discussion:
- A senior member of the MLPerf committee will present the existing structure and the design choices made during the creation of the MLPerf benchmark suite.
- Key stakeholders will present their perspectives on MLPerf and explain how MLPerf provides value to their organizations.
- One or more representatives from national HPC research centers will discuss their unique needs when quantifying the performance of their machine learning workloads on large-scale systems and how these requirements have been incorporated into the MLPerf-HPC benchmark suite.
- We will host an interactive community session in which audience members can ask the speakers questions, driving discussion on how best to address the needs of the ML-oriented HPC community and fostering community-building initiatives across labs, industry, and HPC centers.

The outcomes of these discussions will be summarized in a report hosted online and made publicly available. Interested attendees are also encouraged to contribute by organizing and curating scientific datasets and by participating in and submitting to MLPerf.


URL: https://mlcommons.org

