Recently, the machine learning community has focused on developing deep learning models that are not only accurate but also efficient to deploy on resource-limited devices. One popular approach to improving model efficiency is to aggressively quantize both features and weight parameters. However, quantization generally entails accuracy degradation, so additional compensation techniques are required. In this work, we present a novel network architecture, named DualNet, that leverages two separate bit-precision paths to achieve both high accuracy and low model complexity. On top of this network architecture, we propose to use both SRAM- and eDRAM-based processing-in-memory (PIM) arrays, named DualPIM, so that each computing path in a DualNet runs on a dedicated PIM array. As a result, the proposed DualNet reduces energy consumption by 81% on average compared to other quantized neural networks (i.e., 4-bit and ternary), while achieving 13% higher accuracy on average.
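To make the idea of two bit-precision paths concrete, here is a minimal NumPy sketch of routing one copy of an input through a higher-precision (e.g., 4-bit) quantized path and another through a ternary path, then combining the two outputs. This is an illustration only, not the paper's DualNet formulation; all function names, the quantizers, and the way the paths are merged are assumptions.

```python
import numpy as np

def quantize_uniform(x, bits):
    """Uniform symmetric quantization to the given bit width (hypothetical helper)."""
    levels = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(x / scale) * scale

def quantize_ternary(x, threshold=0.05):
    """Ternary quantization: each value becomes -s, 0, or +s (hypothetical helper)."""
    mask = np.abs(x) > threshold
    s = np.mean(np.abs(x[mask])) if np.any(mask) else 0.0
    return np.where(x > threshold, s, np.where(x < -threshold, -s, 0.0))

def dual_path_layer(x, w_hi, w_lo):
    """Conceptual two-precision layer: a 4-bit path and a ternary path whose
    outputs are summed. Only a sketch of the general idea, not the paper's design."""
    hi_out = quantize_uniform(x, bits=4) @ quantize_uniform(w_hi, bits=4)
    lo_out = quantize_ternary(x) @ quantize_ternary(w_lo)
    return hi_out + lo_out

# Toy usage: a single fully connected layer with random weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16))
w_hi = rng.standard_normal((16, 8))
w_lo = rng.standard_normal((16, 8))
print(dual_path_layer(x, w_hi, w_lo).shape)  # (1, 8)
```

In a PIM setting as described in the abstract, each of the two paths would map to its own array type (e.g., the higher-precision path to one array and the ternary path to the other); the mapping above is purely conceptual.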
Department of Electrical Engineering and Computer Science