SC21 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Hardware Acceleration of Complex Machine Learning Models through Modern High-Level Synthesis

Authors: Serena Curzel (Polytechnic University of Milan, Pacific Northwest National Laboratory (PNNL)); Antonino Tumeo (Pacific Northwest National Laboratory (PNNL)); and Fabrizio Ferrandi (Polytechnic University of Milan)

Abstract: Machine learning algorithms continue to receive significant attention from industry and research. As models increase in complexity and accuracy, their computational and memory demands also grow, pushing for more powerful, heterogeneous architectures; custom FPGA/ASIC accelerators are often the best solution to efficiently process large amounts of data close to the sensors in large-scale scientific experiments. Previous works exploited high-level synthesis (HLS) to help design dedicated compute units for machine learning inference, proposing frameworks that translate high-level models into annotated C/C++. Our proposal, instead, integrates HLS in a compiler-based tool flow with multiple levels of abstraction, enabling analysis, optimization, and design space exploration along the whole process. Such an approach will also make it possible to explore models beyond multi-layer perceptrons and convolutional neural networks (which are often the main target of "classic" HLS frameworks), for example to address the different challenges posed by sparse and graph-based neural networks.
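The "annotated C/C++" that classic HLS frameworks emit typically consists of plain C kernels decorated with tool-specific pragmas. As a minimal illustrative sketch (not code from this work), a small dense layer with ReLU activation might be written with Vivado-HLS-style pragmas as follows; the pragma directives are hints to the synthesis tool and are ignored by ordinary C compilers:

```c
#include <assert.h>

#define N_IN  4
#define N_OUT 3

/* Fixed-size dense layer: out = ReLU(W * in + b).
 * The HLS pragmas (Vivado-HLS-style) ask the synthesis tool to
 * pipeline the row loop and partition the weight array so all
 * weights of a row can be read in parallel. */
void dense_relu(const float w[N_OUT][N_IN], const float b[N_OUT],
                const float in[N_IN], float out[N_OUT]) {
#pragma HLS ARRAY_PARTITION variable=w complete dim=2
    for (int i = 0; i < N_OUT; i++) {
#pragma HLS PIPELINE II=1
        float acc = b[i];
        for (int j = 0; j < N_IN; j++) {
            acc += w[i][j] * in[j];  /* multiply-accumulate */
        }
        out[i] = acc > 0.0f ? acc : 0.0f;  /* ReLU */
    }
}
```

In a pragma-driven flow, design space exploration amounts to editing such directives and re-running synthesis; the compiler-based flow proposed here instead exposes these choices as transformations at multiple levels of abstraction.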

Best Poster Finalist (BP): no

