SC21 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores


Authors: Boyuan Feng and Yuke Wang (University of California, Santa Barbara); Tong Geng and Ang Li (Pacific Northwest National Laboratory (PNNL)); and Yufei Ding (University of California, Santa Barbara)

Abstract: Over the years, accelerating neural networks with quantization has been widely studied. Unfortunately, prior efforts with diverse precisions (e.g., 1-bit weights and 2-bit activations) are usually restricted by limited precision support on GPUs (e.g., int1 and int4). To break such restrictions, we introduce the first Arbitrary Precision Neural Network framework (APNN-TC) to fully exploit quantization benefits on Ampere GPU tensor cores. Specifically, APNN-TC first incorporates a novel emulation algorithm to support arbitrary short bit-width computation with int1 compute primitives and XOR/AND Boolean operations. Second, APNN-TC integrates arbitrary precision layer designs to efficiently map our emulation algorithm to tensor cores with novel batching strategies and specialized memory organization. Third, APNN-TC embodies a novel arbitrary precision NN design to minimize memory access across layers and further improve performance. Extensive evaluations show that APNN-TC can achieve significant speedup over CUTLASS kernels and various NN models, such as ResNet and VGG.
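
The emulation idea described in the abstract can be pictured as bit-serial arithmetic: a w-bit weight and an a-bit activation are each split into 1-bit planes, every pair of planes is combined with a bitwise AND plus a population count, and the partial counts are weighted by powers of two. The CPU-side C++ sketch below illustrates this for a single dot product. It is an illustrative reference only, assuming unsigned quantized operands; the helper names pack_bit_plane and emulated_dot are hypothetical, and the actual APNN-TC kernels map the AND/popcount step onto int1 tensor-core (BMMA) instructions with batching strategies and memory layouts not shown here.

#include <cstdint>
#include <cstdio>
#include <vector>
#include <bit>          // std::popcount, requires C++20

// Pack bit plane `plane` (0 = LSB) of each element of v into 64-bit words.
// Hypothetical helper for illustration, not an APNN-TC routine.
static std::vector<uint64_t> pack_bit_plane(const std::vector<uint32_t>& v, int plane) {
    std::vector<uint64_t> packed((v.size() + 63) / 64, 0);
    for (size_t k = 0; k < v.size(); ++k)
        packed[k / 64] |= uint64_t((v[k] >> plane) & 1u) << (k % 64);
    return packed;
}

// Emulated dot product of a w-bit weight vector and an a-bit activation vector:
//   sum_k W[k]*A[k] = sum_{i<w} sum_{j<a} 2^(i+j) * popcount(W_i AND A_j)
uint64_t emulated_dot(const std::vector<uint32_t>& W, int w_bits,
                      const std::vector<uint32_t>& A, int a_bits) {
    uint64_t acc = 0;
    for (int i = 0; i < w_bits; ++i) {
        auto Wi = pack_bit_plane(W, i);
        for (int j = 0; j < a_bits; ++j) {
            auto Aj = pack_bit_plane(A, j);
            uint64_t ones = 0;
            for (size_t t = 0; t < Wi.size(); ++t)
                ones += std::popcount(Wi[t] & Aj[t]);   // the 1-bit AND + popcount primitive
            acc += ones << (i + j);                     // weight the partial count by 2^(i+j)
        }
    }
    return acc;
}

int main() {
    std::vector<uint32_t> W = {1, 0, 3, 2, 1, 3};   // 2-bit weights
    std::vector<uint32_t> A = {5, 7, 1, 0, 6, 3};   // 3-bit activations
    uint64_t ref = 0;
    for (size_t k = 0; k < W.size(); ++k) ref += uint64_t(W[k]) * A[k];
    printf("emulated = %llu, reference = %llu\n",
           (unsigned long long)emulated_dot(W, 2, A, 3),
           (unsigned long long)ref);   // both print 23
    return 0;
}

For signed +1/-1 binary encodings, the AND/popcount step is typically replaced by an XOR (XNOR)-based variant, which is why the abstract refers to both XOR and AND Boolean operations.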



