
Revisiting HyperDimensional Learning for FPGA and Low-Power Architectures

Title
Revisiting HyperDimensional Learning for FPGA and Low-Power Architectures
Author(s)
Imani, Mohsen; Zou, Zhuowen; Bosch, Samuel; Rao, Sanjay Anantha; Salamat, Sahand; Kumar, Venkatesh; Kim, Yeseong; Rosing, Tajana
Issued Date
2021-02-27
Citation
International Symposium on High-Performance Computer Architecture, pp.221 - 234
Type
Conference Paper
ISBN
9780738123370
ISSN
1530-0897
Abstract
Today's applications use machine learning algorithms to analyze the data collected from swarms of devices on the Internet of Things (IoT). However, most existing learning algorithms are too complex to enable real-time learning on IoT devices with limited resources and computing power. Recently, hyperdimensional computing (HDC) was introduced as an alternative computing paradigm for enabling efficient and robust learning. HDC emulates cognitive tasks by representing values as patterns of neural activity in high-dimensional space. HDC first encodes all data points into high-dimensional vectors. It then efficiently performs the learning task using a well-defined set of operations. Existing HDC solutions have two main issues that hinder their deployment on low-power embedded devices: (i) the encoding module is costly, dominating 80% of the entire training performance, and (ii) the HDC model size and computation cost grow significantly with the number of classes in online inference. In this paper, we propose a novel architecture, LookHD, which enables real-time HDC learning on low-power edge devices. LookHD exploits computation reuse to memoize the encoding module and simplify its computation to a single memory access. LookHD also addresses inference scalability by exploiting the mathematics governing HDC to compress the trained HDC model into a single hypervector. We present how the proposed architecture can be implemented on existing low-power architectures: an ARM processor and an FPGA design. We evaluate the efficiency of the proposed approach on a wide range of practical classification problems such as activity recognition, face recognition, and speech recognition. Our evaluations show that LookHD achieves, on average, 28.3× faster and 97.4× more energy-efficient training compared to the state-of-the-art HDC implemented on the FPGA. Similarly, in inference, LookHD is 2.2× faster, 4.1× more energy-efficient, and has a 6.3× smaller model size than the same state-of-the-art algorithms. © 2021 IEEE.
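The HDC pipeline the abstract outlines (encode data points into high-dimensional vectors, then train and classify with simple bundling and similarity operations) can be sketched in a few lines. This is a minimal, generic single-pass HDC classifier, not the paper's LookHD architecture; the linear encoder, the dimensionality `D`, and the feature count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000       # hypervector dimensionality (a typical HDC choice)
n_features = 16  # illustrative input feature count

def random_hv():
    """Random bipolar hypervector; at high D these are near-orthogonal."""
    return rng.choice([-1, 1], size=D)

# Item memory: one base hypervector per input feature (assumed encoder design).
feature_hvs = np.stack([random_hv() for _ in range(n_features)])

def encode(sample):
    """Encode a feature vector by weighting the base hypervectors by the
    feature values and bundling (summing) them -- a simple linear encoder."""
    return sample @ feature_hvs  # shape (D,)

def train(samples, labels, n_classes):
    """Single-pass training: bundle the encodings of each class's samples
    into one class hypervector per class."""
    model = np.zeros((n_classes, D))
    for x, y in zip(samples, labels):
        model[y] += encode(x)
    return model

def predict(model, sample):
    """Classify by cosine similarity between the query encoding and each
    class hypervector."""
    q = encode(sample)
    sims = model @ q / (np.linalg.norm(model, axis=1) * np.linalg.norm(q) + 1e-12)
    return int(np.argmax(sims))
```

Because both training and inference reduce to additions and a similarity search, the operations map naturally onto low-power hardware; the paper's contribution is making the (otherwise dominant) encoding step a single memory lookup.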
URI
http://hdl.handle.net/20.500.11750/46939
DOI
10.1109/HPCA51647.2021.00028
Publisher
IEEE Computer Society
Related Researcher
  • 김예성 Kim, Yeseong
  • Research Interests Embedded Systems for Edge Intelligence; Brain-Inspired HD Computing for AI; In-Memory Computing
Files in This Item:

There are no files associated with this item.

Appears in Collections:
Department of Electrical Engineering and Computer Science > Computation Efficient Learning Lab > 2. Conference Papers

