DRACO: Co-Optimizing Hardware Utilization, and Performance of DNNs on Systolic Accelerator

Jha, Nandan Kumar; Ravishankar, Shreyas; Mittal, Sparsh; et al. (2020) DRACO: Co-Optimizing Hardware Utilization, and Performance of DNNs on Systolic Accelerator. In: 19th IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2020), 6-8 July 2020, Limassol.


Abstract

The number of processing elements (PEs) in a fixed-size systolic accelerator is well matched to large, compute-bound DNNs, whereas memory-bound DNNs suffer from PE underutilization and fail to achieve peak performance and energy efficiency. To mitigate this, specialized dataflow and/or micro-architectural techniques have been proposed. However, due to their long development cycle and the rapid pace of evolution in the deep learning field, these hardware-based solutions can become obsolete and ineffective at dealing with PE underutilization for state-of-the-art DNNs. In this work, we address the challenge of PE underutilization at the algorithm front and propose data-reuse-aware co-optimization (DRACO), which improves the PE utilization of memory-bound DNNs without requiring any additional dataflow or micro-architectural modifications. Furthermore, unlike previous co-optimization methods, DRACO not only maximizes performance and energy efficiency but also improves the predictive performance of DNNs. To the best of our knowledge, DRACO is the first work that resolves the resource-underutilization challenge at the algorithm level and demonstrates a trade-off between computational efficiency, PE utilization, and the predictive performance of a DNN. Compared to the state-of-the-art row-stationary dataflow, DRACO achieves 41.8% and 42.6% improvements in average PE utilization and inference latency, respectively, with negligible loss in predictive performance for MobileNetV1 on a 64×64 systolic array. DRACO provides seminal insights for utilization-aware DNN design methodologies that can fully leverage the computational power of systolic-array-based hardware accelerators. © 2020 IEEE.
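To make the PE-underutilization issue described above concrete, the sketch below estimates how fully a P×P systolic array is occupied when a convolution layer, lowered to a matrix multiplication via im2col, is tiled across it. The mapping (output channels along array rows, output spatial positions along array columns) and the function names are illustrative assumptions, not the dataflow or analytical model used in the paper.

    # Minimal sketch, assuming an im2col-lowered convolution tiled onto a
    # P x P systolic array; an illustration only, not the paper's model.
    import math

    def pe_utilization(rows: int, cols: int, P: int = 64) -> float:
        """Fraction of PEs doing useful work when a rows x cols output
        matrix is tiled onto a P x P array (edge tiles leave PEs idle)."""
        tiles_r = math.ceil(rows / P)
        tiles_c = math.ceil(cols / P)
        useful = rows * cols                  # useful output elements
        occupied = tiles_r * tiles_c * P * P  # PE slots reserved by the tiling
        return useful / occupied

    # Pointwise (1x1) conv, 64 output channels, 56x56 feature map:
    # rows = output channels, cols = output spatial positions.
    print(pe_utilization(64, 56 * 56))   # 1.0   -> array fully used
    # Depthwise conv processed one channel at a time (rows = 1):
    print(pe_utilization(1, 56 * 56))    # ~0.016 -> severe underutilization

Under this simple model, layers with few independent channels, as in MobileNetV1's memory-bound depthwise stages, occupy only a thin slice of a 64×64 array; this is the underutilization that DRACO targets by increasing data reuse at the algorithm level rather than through hardware changes.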

IITH Creators:
Item Type: Conference or Workshop Item (Paper)
Additional Information: Support for this work was provided by Semiconductor Research Corporation.
Uncontrolled Keywords: Deep neural networks (DNNs); Energy-efficiency; Latency; PE utilization; Systolic array
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 01 Nov 2022 05:12
Last Modified: 01 Nov 2022 05:12
URI: http://raiith.iith.ac.in/id/eprint/11115
Publisher URL: http://doi.org/10.1109/ISVLSI49217.2020.00088
Related URLs:
