SC21 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Best Practices for Benchmarking Diverse Architectures with Varied Workloads


Authors: David Martin (Argonne National Laboratory (ANL)), Thomas Steinke (Zuse Institute Berlin), Hartwig Anzt (University of Tennessee, Knoxville), Vitali Morozov (Argonne National Laboratory (ANL)), Todd Evans (Texas Advanced Computing Center (TACC))

Abstract: This BoF, organized by the Intel eXtreme Performance Computing Users Group, will focus on sharing expertise in benchmarking computing systems across a variety of homogeneous and heterogeneous architectures with a variety of workloads. With increasing heterogeneity, accurate and meaningful comparison of performance across architectures and applications has become a growing challenge. This BoF will explore current approaches and best practices for benchmarking heterogeneous systems and exotic architectures, with the goal of identifying a unified set of principles and practices that can be used to produce benchmarking results suitable for direct comparison across sites, architectures, and applications.

Long Description: The Intel eXtreme Performance Computing Users Group (IXPUG) is a worldwide community of hundreds of application users, software developers, and HPC center staff who use Intel hardware and software technologies (processors, accelerators, memory, software tools, and storage middleware) to solve some of the world's most challenging problems on some of the world's most powerful supercomputers.

This BoF will focus on sharing expertise in benchmarking computing systems across a variety of homogeneous and heterogeneous architectures with a variety of workloads. Heterogeneity across the computing landscape has increased in recent years, and community projections suggest that system diversification will intensify further. As a result, accurate and meaningful comparison of performance across computing architectures and application workloads has become a growing challenge for system architects, hardware engineers, computational scientists, and system administrators. Differences in methodology or workload make it difficult to reasonably compare results between sites (important for the community) and across architectures (important for procurement). Benchmarks are often optimized to different degrees for specific architectures or application patterns, further complicating the issue. This BoF will explore current approaches and best practices for benchmarking heterogeneous systems and exotic architectures, drawing on experience from several major computing sites, with the goal of identifying a unified set of principles and practices that can be used to produce benchmarking results better suited for direct comparison across sites, architectures, and applications. While informed by the experience of IXPUG members using Intel hardware, the presentations and discussions will be vendor agnostic.

Through invited talks by HPC software developers and benchmark experts who have real-world experience benchmarking a wide variety of systems, the BoF will provide a forum for researchers, tool developers, application programmers, HPC center staff, and industry experts to share experiences in benchmarking increasingly heterogeneous systems with increasingly varied workloads. The first half of the BoF will consist of short presentations, followed by a moderated discussion among the speakers and audience.

This BoF follows productive and well-attended IXPUG BoFs held annually from SC14 through SC20, each attracting 80-150 attendees. IXPUG has also hosted BoFs and workshops at ISC and HPC Asia. The IXPUG steering group has confirmed a diverse group of experts to give short presentations on benchmarking, with a focus on best practices and consistency across varied architectures and workloads. A moderated discussion will complete the session and allow attendees to interact with speakers and share their own experiences. The BoF will educate attendees on how to benchmark new machines so that results accurately reflect performance under varied workloads.


URL: https://www.ixpug.org/SC21-BOF
