LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training

DC Field Value
dc.contributor.author Noh, Seock-Hwan
dc.contributor.author Park, Junsang
dc.contributor.author Park, Dahoon
dc.contributor.author Koo, Jahyun
dc.contributor.author Choi, Jeik
dc.contributor.author Kung, Jaeha
dc.date.accessioned 2023-12-26T18:12:27Z
dc.date.available 2023-12-26T18:12:27Z
dc.date.created 2023-01-19
dc.date.issued 2022-10-25
dc.identifier.isbn 9781665461863
dc.identifier.issn 2576-6996
dc.identifier.uri http://hdl.handle.net/20.500.11750/46793
dc.description.abstract When training early-stage deep neural networks (DNNs), generating intermediate features via convolution or linear layers occupies most of the execution time. Accordingly, extensive research has been done to reduce the computational burden of these layers. In recent mobile-friendly DNNs, however, the relative number of operations involved in processing these layers has been significantly reduced. As a result, the proportion of execution time spent in other layers, such as batch normalization layers, has increased. Thus, in this work, we conduct a detailed analysis of the batch normalization layer to efficiently reduce its runtime overhead. Backed by this thorough analysis, we present an extremely efficient batch normalization technique, named LightNorm, and its associated hardware module. In more detail, we fuse three approximation techniques: i) low bit-precision, ii) range batch normalization, and iii) block floating point. All of these approximation techniques are carefully utilized not only to maintain the statistics of intermediate feature maps, but also to minimize off-chip memory accesses. By using the proposed LightNorm hardware, we achieve significant area and energy savings during DNN training without hurting the training accuracy. This makes the proposed hardware a great candidate for on-device training. © 2022 IEEE.
dc.language English
dc.publisher IEEE Computer Society
dc.title LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training
dc.type Conference Paper
dc.identifier.doi 10.1109/ICCD56317.2022.00072
dc.identifier.scopusid 2-s2.0-85145880973
dc.identifier.bibliographicCitation Noh, Seock-Hwan. (2022-10-25). LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training. IEEE International Conference on Computer Design, 443–450. doi: 10.1109/ICCD56317.2022.00072
dc.identifier.url https://iccd-conf.com/2022/Program_2022.html
dc.citation.conferencePlace Olympic Valley, US
dc.citation.startPage 443
dc.citation.endPage 450
dc.citation.title IEEE International Conference on Computer Design
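
To make the abstract's approach concrete, the sketch below illustrates how two of the three fused approximations, range batch normalization and block floating point, could be combined in software. This is a minimal NumPy sketch under stated assumptions, not the authors' hardware design: the function names (to_block_fp, range_batchnorm), the block size, the mantissa width, and the Gaussian-based range-to-std constant 2·sqrt(2·ln n) are illustrative choices, not taken from the paper.

```python
import numpy as np

def to_block_fp(x, block_size=16, mantissa_bits=8):
    # Block floating point (illustrative): each block of `block_size`
    # values shares one exponent, taken from its largest-magnitude
    # element, and keeps only a `mantissa_bits`-bit mantissa per value.
    x = np.asarray(x, dtype=np.float32)
    shape = x.shape
    flat = x.ravel()
    pad = (-flat.size) % block_size
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)
    max_mag = np.abs(blocks).max(axis=1, keepdims=True)
    safe_mag = np.where(max_mag > 0, max_mag, 1.0)     # avoid log2(0)
    shared_exp = np.floor(np.log2(safe_mag))           # one exponent per block
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    quantized = np.round(blocks / scale) * scale       # round mantissas
    return quantized.ravel()[:flat.size].reshape(shape)

def range_batchnorm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Range BN (illustrative): the standard deviation is replaced by a
    # range-based estimate; for n Gaussian samples,
    # std ≈ range / (2 * sqrt(2 * ln(n))).
    n = x.size
    mu = x.mean()
    sigma_hat = (x.max() - x.min()) / (2.0 * np.sqrt(2.0 * np.log(n)))
    return gamma * (x - mu) / (sigma_hat + eps) + beta

# Toy usage: quantize a batch of activations, then normalize.
acts = np.random.randn(4, 64).astype(np.float32)   # (batch, features)
normalized = range_batchnorm(to_block_fp(acts))
print(normalized.mean(), normalized.std())          # roughly 0 and ~1
```

Note that a real BN layer computes statistics per channel; this per-tensor version keeps the sketch short. The appeal of the range-based estimate in hardware is that it needs only a running max and min rather than a sum of squares, which is what makes it attractive to fuse with reduced-precision formats like block floating point.
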

File Downloads

  • There are no files associated with this item.


Related Researcher

Kung, Jaeha (궁재하)

Department of Electrical Engineering and Computer Science
