Cited 0 times in Web of Science; cited 1 time in Scopus

MixNet: An Energy-Scalable and Computationally Lightweight Deep Learning Accelerator

Title
MixNet: An Energy-Scalable and Computationally Lightweight Deep Learning Accelerator
Authors
Jung, Sangwoo; Moon, Seungsik; Lee, Youngjoo; Kung, Jaeha
DGIST Authors
Kung, Jaeha
Issue Date
2019-07-30
Citation
IEEE International Symposium on Low-Power Electronics and Design
Type
Conference
ISBN
9781728129549
ISSN
1533-4678
Abstract
In this paper, we present a lightweight CNN model named MixNet which is easily scalable to different energy requirements on embedded platforms. The MixNet model uses two extreme bit-precisions that efficiently balance model accuracy and energy consumption. The energy consumed in processing MixNet is managed by controlling the ratio between high-precision (16-bit) and low-precision (1-bit) paths. Since only two bit-precisions are required when designing a hardware accelerator, the control logic becomes simpler than in other multi-precision accelerators. In addition, a reconfigurable multiplier is proposed to enable highly parallel MixNet computations for faster prediction and/or training. Overall, the energy efficiency in terms of run-time per unit power improves by 1.75–1.94× over a recently proposed reduced-precision CNN model. © 2019 IEEE.
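The dual-precision idea in the abstract can be sketched in a few lines: a fraction of a layer's output channels keep 16-bit weights while the remainder are binarized, and the energy/accuracy trade-off is steered by that fraction. This is a minimal NumPy illustration under assumed details; the name `mixnet_layer`, the parameter `hp_ratio`, the per-channel split, and the mean-absolute-value binarization scale are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def mixnet_layer(x, w, hp_ratio):
    """Sketch of a MixNet-style dual-precision layer (assumed details).

    A fraction `hp_ratio` of the output channels uses 16-bit weights
    (emulated here by a float16 round-trip); the remaining channels use
    1-bit sign weights scaled by their mean absolute value, a common
    binarization scheme. `hp_ratio` is the knob that trades accuracy
    for energy in the paper's scheme.
    """
    n_out = w.shape[0]
    n_hp = int(round(hp_ratio * n_out))
    # High-precision path: emulate 16-bit storage with a float16 cast.
    w_hp = w[:n_hp].astype(np.float16).astype(np.float32)
    # Low-precision path: binarize weights to {-alpha, +alpha} per channel.
    w_lp = w[n_hp:]
    alpha = np.abs(w_lp).mean(axis=1, keepdims=True)
    w_bin = np.sign(w_lp) * alpha
    # Both paths feed the same matrix multiply.
    w_mixed = np.concatenate([w_hp, w_bin], axis=0)
    return x @ w_mixed.T
```

With `hp_ratio=1.0` every channel runs the 16-bit path; with `hp_ratio=0.0` the layer is fully binary. Because only these two precisions ever occur, a hardware datapath needs just one reconfigurable multiplier mode per path rather than general multi-precision control.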
URI
http://hdl.handle.net/20.500.11750/10873
DOI
10.1109/ISLPED.2019.8824978
Publisher
Institute of Electrical and Electronics Engineers Inc.
Related Researcher
  • Author: Kung, Jaeha (Intelligent Digital Systems Lab)
  • Research Interests: Deep Learning, Acceleration Hardware, Low-Power Hardware, High-Performance Systems
Files:
There are no files associated with this item.
Collection:
Department of Information and Communication Engineering > Intelligent Digital Systems Lab > 2. Conference Papers



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
