Efficient Off-Policy Reinforcement Learning via Brain-Inspired Computing
- Indexed In
- Web of Science, Scopus
- Title
- Efficient Off-Policy Reinforcement Learning via Brain-Inspired Computing
- Issued Date
- 2023-06-06
- Citation
- Ni, Yang. (2023-06-06). Efficient Off-Policy Reinforcement Learning via Brain-Inspired Computing. ACM Great Lakes Symposium on VLSI, GLSVLSI 2023, 449–453. doi: 10.1145/3583781.3590298
- Type
- Conference Paper
- ISBN
- 9798400701252
- Abstract
- Reinforcement Learning (RL) has opened up new opportunities to enhance existing smart systems, which generally involve a complex decision-making process. However, modern RL algorithms such as Deep Q-Networks (DQN) are based on deep neural networks, resulting in high computational costs. In this paper, we propose QHD, an off-policy, value-based hyperdimensional reinforcement learning algorithm that mimics brain properties to achieve robust, real-time learning. QHD relies on a lightweight brain-inspired model to learn an optimal policy in an unknown environment. On both desktop and power-limited embedded platforms, QHD achieves significantly better overall efficiency than DQN while providing higher or comparable rewards. QHD is also well suited to highly efficient reinforcement learning, with great potential for online and real-time learning. Our solution supports a small experience replay batch size that provides a 12.3-times speedup compared to DQN while ensuring minimal quality loss. Our evaluation shows QHD's capability for real-time learning, providing a 34.6-times speedup and significantly better quality of learning than DQN. © 2023 Owner/Author.
- Publisher
- ACM Special Interest Group on Design Automation (SIGDA), IEEE Council on Electronic Design Automation (CEDA)
File Downloads
- There are no files associated with this item.
Related Researcher
- Kim, Yeseong (김예성)
- Department of Electrical Engineering and Computer Science
