
Full metadata record

DC Field Value Language
dc.contributor.author Prieto Prada, John David -
dc.contributor.author Luna, Acevedo Miguel Andres -
dc.contributor.author Park, Sang Hyun -
dc.contributor.author Song, Cheol -
dc.date.accessioned 2023-12-26T18:12:30Z -
dc.date.available 2023-12-26T18:12:30Z -
dc.date.created 2023-01-21 -
dc.date.issued 2022-10-23 -
dc.identifier.isbn 9781665479271 -
dc.identifier.issn 2153-0866 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/46796 -
dc.description.abstract Most virtual reality (VR) applications use a commercial controller for interaction. However, a typical virtual reality controller (VRC) lacks positional precision and accuracy in millimeter-scale scenarios. This lack of precision and accuracy is caused by built-in sensor drift. Therefore, the tracking performance of a VRC needs to be enhanced for millimeter-scale scenarios. Herein, we introduce a novel way of enhancing the tracking performance of a commercial VRC in a millimeter-scale environment using a deep learning (DL) algorithm. Specifically, we use a long short-term memory (LSTM) model trained with data collected from a linear motor, an IMU sensor, and a VRC. We integrate the virtual environment developed in Unity software with the LSTM model running in Python. We designed three experimental conditions: the VRC, Kalman filter (KF), and LSTM modes. Furthermore, we evaluate tracking performance under the three conditions and in two experimental scenarios, namely stationary and dynamic. In the stationary scenario, the system is left motionless for 10 s. By contrast, in the dynamic scenario, the linear stage moves the system by 12 mm along the X, Y, and Z axes. The experimental results indicate that the deep learning model outperforms the standard controller's positional performance by 85.69 % and 92.14 % in static and dynamic situations, respectively. © 2022 IEEE. -
dc.language English -
dc.publisher IEEE Robotics and Automation Society -
dc.title A Deep Learning Technique as a Sensor Fusion for Enhancing the Position in a Virtual Reality Micro-Environment -
dc.type Conference Paper -
dc.identifier.doi 10.1109/IROS47612.2022.9981239 -
dc.identifier.scopusid 2-s2.0-85146316336 -
dc.identifier.bibliographicCitation IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.4935 - 4950 -
dc.identifier.url https://ras.papercept.net/conferences/conferences/IROS22/program/IROS22_ContentListWeb_3.html#tua-7_03 -
dc.citation.conferencePlace JA -
dc.citation.conferencePlace Kyoto, Japan -
dc.citation.endPage 4950 -
dc.citation.startPage 4935 -
dc.citation.title IEEE/RSJ International Conference on Intelligent Robots and Systems -
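The abstract compares the LSTM mode against a Kalman filter (KF) baseline for smoothing drifting position readings. As an illustration of that baseline idea only, the sketch below implements a generic scalar Kalman filter over a stream of noisy 1-D position measurements; the parameter names (`q` process noise, `r` measurement noise) and their values are assumptions for the example, not values from the paper.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a (nearly) constant position.

    measurements : 1-D array of noisy position samples
    q : assumed process-noise variance (illustrative value)
    r : assumed measurement-noise variance (illustrative value)
    Returns the filtered position estimate at each step.
    """
    x = measurements[0]  # state estimate, initialized from first sample
    p = 1.0              # estimate variance
    out = []
    for z in measurements:
        p += q                  # predict: uncertainty grows over time
        k = p / (p + r)         # Kalman gain: weight of the new measurement
        x += k * (z - x)        # update estimate toward the measurement
        p *= (1.0 - k)          # shrink uncertainty after the update
        out.append(x)
    return np.array(out)

# Example: a stationary sensor at 5.0 mm with additive Gaussian noise.
rng = np.random.default_rng(0)
noisy = 5.0 + rng.normal(0.0, 0.1, size=200)
smooth = kalman_1d(noisy)
```

In the stationary scenario described above, such a filter damps the measurement noise; the paper's contribution is that the learned LSTM model corrects the drift substantially better than this classical baseline.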


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
