
Full metadata record

DC Field Value Language
dc.contributor.author Imani, Mohsen -
dc.contributor.author Zou, Zhuowen -
dc.contributor.author Bosch, Samuel -
dc.contributor.author Rao, Sanjay Anantha -
dc.contributor.author Salamat, Sahand -
dc.contributor.author Kumar, Venkatesh -
dc.contributor.author Kim, Yeseong -
dc.contributor.author Rosing, Tajana -
dc.date.accessioned 2023-12-26T18:44:37Z -
dc.date.available 2023-12-26T18:44:37Z -
dc.date.created 2021-05-14 -
dc.date.issued 2021-03-01 -
dc.identifier.isbn 9780738123370 -
dc.identifier.issn 1530-0897 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/46939 -
dc.description.abstract Today's applications use machine learning algorithms to analyze data collected from a swarm of devices on the Internet of Things (IoT). However, most existing learning algorithms are too complex to enable real-time learning on IoT devices with limited resources and computing power. Recently, hyperdimensional computing (HDC) has been introduced as an alternative computing paradigm for enabling efficient and robust learning. HDC emulates cognitive tasks by representing values as patterns of neural activity in high-dimensional space. HDC first encodes all data points to high-dimensional vectors; it then efficiently performs the learning task using a well-defined set of operations. Existing HDC solutions have two main issues that hinder their deployment on low-power embedded devices: (i) the encoding module is costly, dominating 80% of the entire training time, and (ii) the HDC model size and computation cost grow significantly with the number of classes in online inference. In this paper, we propose a novel architecture, LookHD, which enables real-time HDC learning on low-power edge devices. LookHD exploits computation reuse to memoize the encoding module and simplify its computation to a single memory access. LookHD also addresses inference scalability by exploiting HDC's governing mathematics to compress the trained HDC model into a single hypervector. We show how the proposed architecture can be implemented on existing low-power architectures: an ARM processor and an FPGA design. We evaluate the efficiency of the proposed approach on a wide range of practical classification problems such as activity recognition, face recognition, and speech recognition. Our evaluations show that LookHD achieves, on average, 28.3× faster and 97.4× more energy-efficient training compared to the state-of-the-art HDC implemented on FPGA. Similarly, in inference, LookHD is 2.2× faster, 4.1× more energy-efficient, and has a 6.3× smaller model size than the same state-of-the-art algorithms. © 2021 IEEE. -
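The abstract's encode-then-learn pipeline can be illustrated with a minimal, generic HDC classification sketch. This is not the LookHD implementation described in the paper: the dimensionality, the toy dataset, and the ID/level record-based encoding are illustrative assumptions, shown only to make the "encode to hypervectors, bundle per class, classify by similarity" flow concrete.

```python
# Minimal, generic HDC classification sketch (NOT the LookHD architecture).
# Assumptions: bipolar hypervectors, record-based ID/level encoding, a toy dataset.
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # hypervector dimensionality (illustrative choice)

def random_hv():
    return rng.choice([-1, 1], size=D)

# Item memory: one random hypervector per feature position and per quantized level.
N_FEATURES, N_LEVELS = 8, 4
id_hvs = [random_hv() for _ in range(N_FEATURES)]
level_hvs = [random_hv() for _ in range(N_LEVELS)]

def encode(sample):
    """Bind each feature's ID vector with its level vector, then bundle."""
    acc = np.zeros(D)
    for i, level in enumerate(sample):
        acc += id_hvs[i] * level_hvs[level]  # binding = elementwise multiply
    return np.sign(acc) + (acc == 0)  # bipolarize; break ties to +1

# Single-pass training: bundle encoded samples into one hypervector per class.
train = {0: [[0, 0, 1, 1, 0, 0, 1, 1], [0, 1, 1, 1, 0, 0, 1, 1]],
         1: [[3, 3, 2, 2, 3, 3, 2, 2], [3, 2, 2, 2, 3, 3, 2, 2]]}
class_hvs = {c: np.sign(sum(encode(s) for s in xs)) for c, xs in train.items()}

def classify(sample):
    """Return the class whose hypervector is most cosine-similar to the query."""
    q = encode(sample)
    sims = {c: np.dot(q, hv) / (np.linalg.norm(q) * np.linalg.norm(hv))
            for c, hv in class_hvs.items()}
    return max(sims, key=sims.get)

print(classify([0, 0, 1, 1, 0, 1, 1, 1]))  # near-duplicate of class 0 → 0
print(classify([3, 3, 2, 2, 3, 2, 2, 2]))  # near-duplicate of class 1 → 1
```

LookHD's contributions, per the abstract, sit on top of this kind of pipeline: it memoizes the expensive `encode` step with lookup tables (single memory access) and compresses the per-class model so inference cost does not grow with the number of classes.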
dc.language English -
dc.publisher IEEE Computer Society -
dc.relation.ispartof Proceedings - International Symposium on High-Performance Computer Architecture -
dc.title Revisiting HyperDimensional Learning for FPGA and Low-Power Architectures -
dc.type Conference Paper -
dc.identifier.doi 10.1109/HPCA51647.2021.00028 -
dc.identifier.wosid 000671076000017 -
dc.identifier.scopusid 2-s2.0-85105016876 -
dc.identifier.bibliographicCitation International Symposium on High-Performance Computer Architecture, pp.221 - 234 -
dc.identifier.url https://hpca-conf.org/2021/main-program/#:~:text=California%20San%20Diego)%3B-,Yeseong,-Kim%20(Daegu%20Institue -
dc.citation.conferenceDate 2021-02-27 -
dc.citation.conferencePlace KO -
dc.citation.conferencePlace Seoul -
dc.citation.endPage 234 -
dc.citation.startPage 221 -
dc.citation.title International Symposium on High-Performance Computer Architecture -
Files in This Item:

There are no files associated with this item.

Appears in Collections:
Department of Electrical Engineering and Computer Science > Computation Efficient Learning Lab > 2. Conference Papers


