Detail View
A Dual-Precision and Low-Power CNN Inference Engine Using a Heterogeneous Processing-in-Memory Architecture
- Title
- A Dual-Precision and Low-Power CNN Inference Engine Using a Heterogeneous Processing-in-Memory Architecture
- Issued Date
- 2024-12
- Citation
- Jung, Sangwoo. (2024-12). A Dual-Precision and Low-Power CNN Inference Engine Using a Heterogeneous Processing-in-Memory Architecture. IEEE Transactions on Circuits and Systems I: Regular Papers, 71(12), 5546–5559. doi: 10.1109/TCSI.2024.3395842
- Type
- Article
- Author Keywords
- Convolutional neural networks; deep learning; hardware; memory management; mixed-precision quantization; processing-in-memory; quantization (signal); random access memory; SW-HW co-optimization; computational modeling; convolution
- ISSN
- 1549-8328
- Abstract
-
In this article, we present an energy-scalable CNN model that can adapt to different hardware resource constraints. Specifically, we propose a dual-precision network, named DualNet, that leverages two independent bit-precision paths (INT4 and ternary-binary). DualNet achieves both high accuracy and low complexity by balancing the ratio between the two paths. We also present an evolutionary algorithm that automatically searches for the optimal ratios. In addition to the novel CNN architecture design, we develop a heterogeneous processing-in-memory (PIM) hardware that integrates SRAM- and eDRAM-based PIMs to efficiently compute the two precision paths in parallel. To verify the energy efficiency of DualNet computed on the heterogeneous PIM, we prototyped a test chip in 28-nm CMOS technology. To maximize the hardware efficiency, we utilize an improved data mapping scheme that achieves the most effective deployment of DualNets on multiple PIM arrays. With the proposed SW-HW co-optimization, we can obtain the most energy-efficient DualNet model operating on the actual PIM hardware. Compared to other quantized networks with a single bit-precision, DualNet reduces energy consumption, memory footprint, and latency by 29.0%, 49.5%, and 47.3% on average, respectively, for the CIFAR-10/100 and ImageNet datasets.
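The abstract describes an evolutionary search over the per-layer ratio between the INT4 and ternary-binary paths. The following is a minimal toy sketch of that idea, not the authors' algorithm: the energy and accuracy models are made-up placeholders, and the layer count, selection scheme, and trade-off weight `alpha` are arbitrary assumptions for illustration.

```python
import random

LAYERS = 8
INT4_COST, TB_COST = 4.0, 1.0  # hypothetical relative energy per channel

def energy(ratios):
    # A layer's energy grows with its INT4 share (placeholder model).
    return sum(r * INT4_COST + (1 - r) * TB_COST for r in ratios)

def accuracy_proxy(ratios):
    # Placeholder: accuracy rises with INT4 share and saturates.
    return sum(1 - (1 - r) ** 2 for r in ratios) / LAYERS

def fitness(ratios, alpha=0.05):
    # Trade accuracy off against energy, mirroring the balance DualNet
    # strikes between its two precision paths.
    return accuracy_proxy(ratios) - alpha * energy(ratios)

def mutate(ratios, rng):
    # Perturb one layer's INT4 ratio, clamped to [0, 1].
    i = rng.randrange(LAYERS)
    child = list(ratios)
    child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.2, 0.2)))
    return child

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(LAYERS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
```

Under this toy cost model the search settles on intermediate ratios: it beats both the all-ternary-binary and all-INT4 extremes, which is the qualitative behavior the paper attributes to balancing the two paths.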
- Publisher
- Institute of Electrical and Electronics Engineers
Related Researcher
- Yoon, Jong-Hyeok (윤종혁)
-
Department of Electrical Engineering and Computer Science
