Enhancing the Scalability of Multi-FPGA Stencil Computations via Highly Optimized HDL Components
Author/Presenter
Event Type
Workshop
Tags
Accelerator-based Architectures
Applications
Architectures
Emerging Technologies
Heterogeneous Systems
Memory Systems
Networks
Registration Categories
W
Time
Monday, 15 November 2021, 3:45pm - 4:15pm CST
Location
231-232
Description
Stencil-based algorithms are an important class of computational kernels in high-performance systems, as they appear in a plethora of fields, from image processing to seismic simulations, from numerical methods to physical modeling. Among the various incarnations of stencil-based computations, Iterative Stencil Loops (ISLs) and Convolutional Neural Networks (CNNs) are two well-known examples of kernels belonging to the stencil class: ISLs apply the same stencil repeatedly until convergence, while CNN layers leverage stencils to extract features from an image. The computationally intensive nature of ISLs, CNNs, and stencil-based workloads in general requires solutions able to produce implementations that are efficient in terms of both throughput and power. In this context, FPGAs are ideal candidates for such workloads, as they allow the design of architectures tailored to the regular computational pattern of stencils. Moreover, the ever-growing need for performance pushes FPGA-based architectures to scale to multiple devices and benefit from distributed acceleration. For this reason, we propose a library of HDL components to efficiently compute ISLs and CNN inference on FPGAs, along with a scalable multi-FPGA architecture based on custom PCB interconnects. Our solution eases the design flow and guarantees both scalability and performance competitive with state-of-the-art works.
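For readers unfamiliar with the stencil pattern mentioned above, the short C sketch below illustrates what an Iterative Stencil Loop looks like in software: a 5-point Jacobi stencil is reapplied to a 2D grid until the largest update falls below a threshold. The grid size, threshold, and boundary condition are arbitrary illustrative choices, and the code is a minimal host-side example of the computational pattern, not part of the presented HDL library.

/* Illustrative sketch only: a 2D Iterative Stencil Loop (Jacobi relaxation).
 * The same 5-point stencil is applied to the whole grid over and over
 * until the maximum point-wise change drops below a threshold. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define N   64      /* grid size, chosen arbitrarily for the example */
#define EPS 1e-6    /* convergence threshold, also arbitrary */

int main(void) {
    static double a[N][N], b[N][N];   /* zero-initialized grids */

    /* Fixed boundary condition: the top edge is held at 1.0. */
    for (int j = 0; j < N; j++) a[0][j] = b[0][j] = 1.0;

    double diff = 1.0;
    int iters = 0;
    while (diff > EPS) {              /* ISL: repeat the stencil until convergence */
        diff = 0.0;
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++) {
                /* 5-point stencil: average of the four neighbors */
                b[i][j] = 0.25 * (a[i - 1][j] + a[i + 1][j] +
                                  a[i][j - 1] + a[i][j + 1]);
                double d = fabs(b[i][j] - a[i][j]);
                if (d > diff) diff = d;
            }
        memcpy(a, b, sizeof(a));      /* next iteration reads the updated grid */
        iters++;
    }
    printf("converged after %d iterations\n", iters);
    return 0;
}

The regularity visible in the inner loop (each output depends only on a small, fixed neighborhood of inputs) is exactly what FPGA stencil accelerators typically exploit through line buffers and deeply pipelined datapaths.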