PolyDL: Polyhedral Optimizations for Creation of High-performance DL Primitives

Tavarageri, Sanket and Heinecke, Alexander and Avancha, Sasikanth and Kaul, Bharat and Goyal, Gagandeep and Upadrasta, Ramakrishna (2021) PolyDL: Polyhedral Optimizations for Creation of High-performance DL Primitives. ACM Transactions on Architecture and Code Optimization, 18 (1). pp. 1-27. ISSN 1544-3566

Full text not available from this repository.


Deep Neural Networks (DNNs) have revolutionized many aspects of our lives. The use of DNNs is becoming ubiquitous, including in software for image recognition, speech recognition, speech synthesis, and language translation, to name a few. The training of DNN architectures, however, is computationally expensive. Once a model is created, its use in the intended application, the inference task, is computationally heavy as well, and inference needs to be fast for real-time use. To obtain high performance today, the norm is Deep Learning (DL) primitive code optimized for specific architectures by expert programmers and exposed via libraries. However, given the constant emergence of new DNN architectures, creating hand-optimized code is expensive, slow, and not scalable. To address this performance-productivity challenge, in this article we present compiler algorithms that automatically generate high-performance implementations of DL primitives closely matching the performance of hand-optimized libraries. We develop novel data reuse analysis algorithms using the polyhedral model to derive efficient execution schedules automatically. In addition, because most DL primitives use some variant of matrix multiplication at their core, we develop a flexible framework in which library implementations of matrix multiplication can be plugged in in lieu of a subset of the loops. We show that such a hybrid compiler-plus-minimal-library approach results in state-of-the-art performance. We also develop compiler algorithms that perform operator fusions to reduce data movement through the memory hierarchy of the computer system. Using Convolutional Neural Network (CNN) models and matrix multiplication operations, we demonstrate that our approach automatically creates high-performing DNN building blocks whose performance matches that of hand-crafted kernels from Intel's oneDNN library on high-end CPUs. At the same time, our techniques take only a fraction of the time (1/20th or less) required by AutoTVM, a deep learning auto-tuner, to create optimized implementations.

IITH Creators:
Goyal, Gagandeep (ORCiD: UNSPECIFIED)
Upadrasta, Ramakrishna (ORCiD: UNSPECIFIED)
Item Type: Article
Uncontrolled Keywords: Convolution neural network; High performance implementations; Language translation; Matrix multiplication; Matrix multiplication operation; Optimized implementation; Polyhedral optimizations; State-of-the-art performance
Subjects: Electrical Engineering
Divisions: Department of Computer Science & Engineering
Depositing User: . LibTrainee 2021
Date Deposited: 06 Aug 2021 05:03
Last Modified: 06 Aug 2021 05:03
URI: http://raiith.iith.ac.in/id/eprint/8700
Publisher URL: http://doi.org/10.1145/3433103
OA policy: https://v2.sherpa.ac.uk/id/publication/10667
Related URLs:
