Detail View

Implication of Optimizing NPU Dataflows on Neural Architecture Search for Mobile Devices
Metadata

dc.contributor.author: Lee, Jooyeon
dc.contributor.author: Park, Junsang
dc.contributor.author: Lee, Seunghyun
dc.contributor.author: Kung, Jaeha
dc.date.accessioned: 2022-11-07T07:30:35Z
dc.date.available: 2022-11-07T07:30:35Z
dc.date.created: 2022-10-26
dc.date.issued: 2022-09
dc.identifier.issn: 1084-4309
dc.identifier.uri: http://hdl.handle.net/20.500.11750/17050
dc.description.abstract: Recent advances in deep learning have made it possible to implement artificial intelligence on mobile devices. Many studies have focused on developing lightweight deep learning models optimized for mobile devices. To overcome the performance limitations of manually designed models, an automated search algorithm, called neural architecture search (NAS), has been proposed. However, the effect of the mobile device's hardware architecture on the performance of NAS has been less explored. In this article, we show the importance of optimizing a hardware architecture, namely the NPU dataflow, when searching for a more accurate yet fast deep learning model. To do so, we first implement an optimization framework, named FlowOptimizer, that generates the best possible NPU dataflow for a given deep learning operator. We then use this framework during latency-aware NAS to find the model with the highest accuracy that satisfies the latency constraint. As a result, the model searched with FlowOptimizer improves performance by 87.1% and 92.3% on average over the models searched with NVDLA and Eyeriss, respectively, with better accuracy on a proxy dataset. We also show that the searched model can be transferred to a larger model to classify a more complex image dataset, i.e., ImageNet, achieving 0.2%/5.4% higher Top-1/Top-5 accuracy than MobileNetV2-1.0 with 3.6x lower latency.
dc.language: English
dc.publisher: Association for Computing Machinery, Inc.
dc.title: Implication of Optimizing NPU Dataflows on Neural Architecture Search for Mobile Devices
dc.type: Article
dc.identifier.doi: 10.1145/3513085
dc.identifier.scopusid: 2-s2.0-85140483186
dc.identifier.bibliographicCitation: Lee, Jooyeon. (2022-09). Implication of Optimizing NPU Dataflows on Neural Architecture Search for Mobile Devices. ACM Transactions on Design Automation of Electronic Systems, 27(5). doi: 10.1145/3513085
dc.description.isOpenAccess: FALSE
dc.subject.keywordAuthor: Dataflow optimization
dc.subject.keywordAuthor: neural networks
dc.subject.keywordAuthor: neural architecture search
dc.subject.keywordAuthor: neural processing unit
dc.citation.number: 5
dc.citation.title: ACM Transactions on Design Automation of Electronic Systems
dc.citation.volume: 27
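
The abstract above outlines a two-step approach: FlowOptimizer derives an optimized NPU dataflow (and thus a latency estimate) for each deep learning operator, and latency-aware NAS then keeps only candidates that fit a latency budget while maximizing accuracy. The sketch below illustrates that selection step under stated assumptions; the names (Candidate, pick_best, the per-operator latency model) are hypothetical stand-ins, not FlowOptimizer's actual interface, and the numbers are placeholders rather than results from the article.

```python
# Hypothetical sketch of latency-constrained candidate selection in NAS.
# A dataflow-aware latency estimator (standing in for FlowOptimizer) scores
# each candidate's operators; only candidates within budget compete on accuracy.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Candidate:
    name: str
    estimated_accuracy: float  # e.g., proxy-dataset validation accuracy
    layers: List[dict]         # per-operator configurations


def model_latency(layers: List[dict],
                  op_latency: Callable[[dict], float]) -> float:
    """Sum per-operator latencies under the chosen (optimized) dataflow."""
    return sum(op_latency(layer) for layer in layers)


def pick_best(candidates: List[Candidate],
              op_latency: Callable[[dict], float],
              latency_budget_ms: float) -> Optional[Candidate]:
    """Return the most accurate candidate whose estimated latency fits the budget."""
    feasible = [c for c in candidates
                if model_latency(c.layers, op_latency) <= latency_budget_ms]
    return max(feasible, key=lambda c: c.estimated_accuracy, default=None)


if __name__ == "__main__":
    # Toy latency model: latency grows with MACs; a better dataflow lowers the constant.
    def optimized_dataflow_latency(layer: dict) -> float:
        return 0.8e-6 * layer["macs"]   # placeholder coefficient, not from the article

    cands = [
        Candidate("A", 0.72, [{"macs": 4e6}, {"macs": 6e6}]),
        Candidate("B", 0.75, [{"macs": 9e6}, {"macs": 9e6}]),
    ]
    best = pick_best(cands, optimized_dataflow_latency, latency_budget_ms=10.0)
    print(best.name if best else "no candidate meets the budget")
```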

File Downloads

  • There are no files associated with this item.

Related Researcher

Kung, Jaeha (궁재하)

Department of Electrical Engineering and Computer Science

