
A Gaze-Speech System in Mixed Reality for Human-Robot Interaction

DC Field Value Language
dc.contributor.author Prieto Prada, John David -
dc.contributor.author Lee, Myung Ho -
dc.contributor.author Song, Cheol -
dc.date.accessioned 2024-02-09T02:10:14Z -
dc.date.available 2024-02-09T02:10:14Z -
dc.date.created 2023-09-12 -
dc.date.issued 2023-05-31 -
dc.identifier.isbn 9798350323658 -
dc.identifier.issn 1050-4729 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/47917 -
dc.description.abstract Human-robot interaction (HRI) demands efficient time performance across tasks. However, some interaction approaches can extend the time needed to complete such tasks, so the time performance of HRI must be enhanced. This work presents an effective way to enhance time performance in HRI tasks with a mixed reality (MR) method based on a gaze-speech system. In this paper, we design an MR world for pick-and-place tasks. The hardware system includes an MR headset, the Baxter robot, a table, and six cubes. In addition, the holographic MR scenario offers two modes of interaction: gesture mode (GM) and gaze-speech mode (GSM). The input actions during the GM and GSM methods are based on the pinch gesture and on gaze with speech commands, respectively. The proposed GSM approach can improve the time performance in pick-and-place scenarios: the GSM system is 21.33% faster than the traditional system, GM. We also evaluated the target-to-target time performance against a reference based on Fitts' law. Our findings show a promising method for time reduction in HRI tasks through MR environments. © 2023 IEEE. -
dc.language English -
dc.publisher IEEE Robotics and Automation Society -
dc.relation.ispartof Proceedings of the 40th IEEE International Conference on Robotics and Automation (ICRA 2023) -
dc.title A Gaze-Speech System in Mixed Reality for Human-Robot Interaction -
dc.type Conference Paper -
dc.identifier.doi 10.1109/ICRA48891.2023.10161010 -
dc.identifier.wosid 001048371100077 -
dc.identifier.scopusid 2-s2.0-85168665114 -
dc.identifier.bibliographicCitation Prieto Prada, John David. (2023-05-31). A Gaze-Speech System in Mixed Reality for Human-Robot Interaction. IEEE International Conference on Robotics and Automation, 7547–7553. doi: 10.1109/ICRA48891.2023.10161010 -
dc.identifier.url https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10160362 -
dc.citation.conferenceDate 2023-05-29 -
dc.citation.conferencePlace London, UK -
dc.citation.endPage 7553 -
dc.citation.startPage 7547 -
dc.citation.title IEEE International Conference on Robotics and Automation -
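
The abstract above evaluates target-to-target times against a reference based on Fitts' law. As a quick illustration only (an assumption, not the authors' implementation), the sketch below computes the Shannon formulation of Fitts' law, MT = a + b * log2(D/W + 1), with hypothetical constants a and b.

    import math

    # Illustrative sketch (assumption, not the paper's model): Shannon form of
    # Fitts' law. D is the distance to the target, W is the target width, and
    # a, b are empirically fitted constants (values below are hypothetical).
    def fitts_movement_time(distance_m, width_m, a=0.2, b=0.1):
        index_of_difficulty = math.log2(distance_m / width_m + 1)  # bits
        return a + b * index_of_difficulty  # predicted movement time in seconds

    # Example: a 5 cm cube placed 40 cm from the current pointing position.
    print(round(fitts_movement_time(0.4, 0.05), 3))
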

Related Researcher

Song, Cheol (송철)

Department of Robotics and Mechatronics Engineering
