MixNet: An Energy-Scalable and Computationally Lightweight Deep Learning Accelerator
- Jung, Sangwoo; Moon, Seungsik; Lee, Youngjoo; Kung, Jaeha
- DGIST Authors: Kung, Jaeha
- Issue Date
- IEEE International Symposium on Low-Power Electronics and Design
- In this paper, we present a lightweight CNN model named MixNet which is easily scalable to different energy requirements in embedded platforms. The MixNet model uses two extreme bit-precisions that efficiently balance the model accuracy and energy consumption. The energy consumption in processing MixNet is managed by controlling the ratio between high-precision (16-bit) and low-precision (1-bit) paths. Since only two bit-precisions are required in designing a hardware accelerator, the control logic becomes simpler compared to other multi-precision accelerators. In addition, a reconfigurable multiplier is proposed to enable highly parallel MixNet computations for faster prediction and/or training. Overall, the energy efficiency in terms of run-time per unit power improves by 1.75–1.94× over the recently proposed reduced-precision CNN model. © 2019 IEEE.
- Institute of Electrical and Electronics Engineers Inc.
- Related Researcher: Intelligent Digital Systems Lab
Deep learning, accelerator hardware, low-power hardware, high-performance systems
- Department of Information and Communication Engineering > Intelligent Digital Systems Lab > 2. Conference Papers