Enhancing DNN Training Efficiency Via Dynamic Asymmetric Architecture

Samer Kurzum*, Gil Shomron, Freddy Gabbay, Uri Weiser

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks (DNNs) require abundant multiply-and-accumulate (MAC) operations. Thanks to DNNs' ability to accommodate noise, some of the computational burden is commonly mitigated by quantization, that is, by using lower-precision floating-point operations. Quantization at layer granularity is the preferred method, as it maps easily onto commodity hardware. In this paper, we propose Dynamic Asymmetric Architecture (DAA), in which the micro-architecture decides at runtime what the precision of each MAC operation should be. We demonstrate a DAA with two data streams and a value-based controller that decides which data stream deserves the higher-precision resource. We evaluate this mechanism in terms of accuracy on a number of convolutional neural networks (CNNs) and demonstrate its feasibility on top of a systolic array. Our experimental analysis shows that DAA potentially achieves a 2x throughput improvement for ResNet-18 while saving 35% of energy, with less than 0.5% degradation in accuracy.
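The controller idea in the abstract can be illustrated with a minimal functional sketch. The Python snippet below is not the paper's hardware design: the names (reduce_precision, daa_mac, lo_bits) are invented for illustration, the 7-bit mantissa for the low-precision path is an assumed format rather than a figure from the paper, and full fp32 stands in for the high-precision unit. It models one plausible reading of the value-based controller: for each pair of MACs from the two data streams, the operand pair with the larger activation magnitude is routed to the high-precision resource, and the other MAC runs at reduced precision.

    import numpy as np

    def reduce_precision(x, mantissa_bits):
        # Emulate a float with fewer mantissa bits by rounding the
        # mantissa (sketch only; ignores exponent range and denormals).
        m, e = np.frexp(x)            # x == m * 2**e with 0.5 <= |m| < 1
        scale = 2.0 ** mantissa_bits
        return np.ldexp(np.round(m * scale) / scale, e)

    def daa_mac(acts_a, acts_b, wts_a, wts_b, lo_bits=7):
        # Value-based controller sketch: per MAC pair, the stream with
        # the larger activation magnitude gets the high-precision unit
        # (plain fp32 here); the other MAC runs at reduced precision.
        acc = 0.0
        for a, b, wa, wb in zip(acts_a, acts_b, wts_a, wts_b):
            if abs(a) >= abs(b):                 # controller decision
                acc += a * wa                    # high-precision MAC
                acc += reduce_precision(b, lo_bits) * reduce_precision(wb, lo_bits)
            else:
                acc += reduce_precision(a, lo_bits) * reduce_precision(wa, lo_bits)
                acc += b * wb                    # high-precision MAC
        return acc

    # Usage: compare the mixed-precision result against a full-precision
    # reference on random data.
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal(64), rng.standard_normal(64)
    wa, wb = rng.standard_normal(64), rng.standard_normal(64)
    print(daa_mac(a, b, wa, wb))          # mixed-precision accumulation
    print(float(a @ wa + b @ wb))         # full-precision reference

Routing by magnitude is one natural heuristic: large-magnitude operands contribute most to the accumulated sum, so they profit most from the high-precision unit, while rounding error on small-magnitude products is absorbed by the network's noise tolerance.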

Original language: English
Pages (from-to): 49-52
Number of pages: 4
Journal: IEEE Computer Architecture Letters
Volume: 22
Issue number: 1
State: Published - 1 Jan 2023
Externally published: Yes

Bibliographical note

Publisher Copyright: © 2002-2011 IEEE.

Keywords

  • Neural nets
  • approximation
  • dataflow architectures
  • training

