<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/56664" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/56664</id>
  <updated>2026-05-09T17:26:49Z</updated>
  <dc:date>2026-05-09T17:26:49Z</dc:date>
  <entry>
    <title>PAPRLE: Plug-And-Play Robotic Limb Environment: A Modular Ecosystem for Robotic Limbs</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/60312" />
    <author>
      <name>Kwon, Obin</name>
    </author>
    <author>
      <name>Yamsani, Sankalp</name>
    </author>
    <author>
      <name>Myers, Noboru</name>
    </author>
    <author>
      <name>Taylor, Sean</name>
    </author>
    <author>
      <name>Hong, Jooyoung</name>
    </author>
    <author>
      <name>Park, Kyungseo</name>
    </author>
    <author>
      <name>Alspach, Alex</name>
    </author>
    <author>
      <name>Kim, Joohyung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/60312</id>
    <updated>2026-04-30T18:01:21Z</updated>
    <published>2025-12-31T15:00:00Z</published>
    <summary type="text">Title: PAPRLE: Plug-And-Play Robotic Limb Environment: A Modular Ecosystem for Robotic Limbs
Author(s): Kwon, Obin; Yamsani, Sankalp; Myers, Noboru; Taylor, Sean; Hong, Jooyoung; Park, Kyungseo; Alspach, Alex; Kim, Joohyung
Abstract: We introduce PAPRLE (plug-and-play robotic limb environment), a modular ecosystem that enables flexible placement and control of robotic limbs. With PAPRLE a user can change the arrangement of the robotic limbs and control them using a variety of input devices, including puppeteers, gaming controllers, and virtual reality (VR) devices. This versatility supports a wide range of teleoperation scenarios and promotes adaptability to different task requirements. We also introduce a pluggable puppeteer device that can be easily mounted and adapted to match the target robot configurations. PAPRLE supports bilateral teleoperation through these puppeteer devices, agnostic to the type or configuration of the follower robot. The modular design of PAPRLE facilitates novel spatial arrangements of the limbs and enables scalable data collection, thereby advancing research in embodied artificial intelligence (AI) and learning-based control. We validate PAPRLE in various real-world settings, demonstrating its versatility across diverse combinations of leader devices and follower robots. The system will be released as open source, including both hardware and software components, to support broader adoption and extension.</summary>
    <dc:date>2025-12-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Serving Innovation: Seamless Service by Advancing Food Runners With Mobile Manipulation</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59958" />
    <author>
      <name>Yamsani, Sankalp</name>
    </author>
    <author>
      <name>Gim, Kevin</name>
    </author>
    <author>
      <name>Smithline, Tyler</name>
    </author>
    <author>
      <name>Qiu, Richard</name>
    </author>
    <author>
      <name>Mineyev, Roman</name>
    </author>
    <author>
      <name>Hirashima, Kenta</name>
    </author>
    <author>
      <name>Kang, Sungmin</name>
    </author>
    <author>
      <name>Park, Kyungseo</name>
    </author>
    <author>
      <name>Kang, Yoon-Koo</name>
    </author>
    <author>
      <name>An, Seulbi</name>
    </author>
    <author>
      <name>Ahn, Sunghwan</name>
    </author>
    <author>
      <name>Kim, Joohyung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59958</id>
    <updated>2026-02-08T16:40:15Z</updated>
    <summary type="text">Title: Serving Innovation: Seamless Service by Advancing Food Runners With Mobile Manipulation
Author(s): Yamsani, Sankalp; Gim, Kevin; Smithline, Tyler; Qiu, Richard; Mineyev, Roman; Hirashima, Kenta; Kang, Sungmin; Park, Kyungseo; Kang, Yoon-Koo; An, Seulbi; Ahn, Sunghwan; Kim, Joohyung
Abstract: The Mobile Object Manipulation Operator (MOMO) is an innovative and reconfigurable robotic system that transforms traditional serving robots into mobile manipulators. Leveraging the form factor and mobility of serving robots, MOMO integrates up to three pluggable devices, including six-DoF manipulators of varying sizes or a three-DoF sensor head. Its design incorporates two independent shoulder lifts to enhance vertical reach. The system's adaptability tailors its capabilities to tasks beyond simple object transportation. Unlike current food delivery robots, MOMO can remove obstructions from the floor and deliver items to recipients without human intervention.</summary>
  </entry>
  <entry>
    <title>Using biopotential and bio-impedance for intuitive human-robot interaction</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59281" />
    <author>
      <name>Park, Kyungseo</name>
    </author>
    <author>
      <name>Jeong, Hwayeong</name>
    </author>
    <author>
      <name>Jung, Yoontae</name>
    </author>
    <author>
      <name>Suh, Ji-Hoon</name>
    </author>
    <author>
      <name>Je, Minkyu</name>
    </author>
    <author>
      <name>Kim, Jung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59281</id>
    <updated>2025-12-26T08:40:12Z</updated>
    <published>2025-07-31T15:00:00Z</published>
    <summary type="text">Title: Using biopotential and bio-impedance for intuitive human-robot interaction
Author(s): Park, Kyungseo; Jeong, Hwayeong; Jung, Yoontae; Suh, Ji-Hoon; Je, Minkyu; Kim, Jung
Abstract: The rising interest in robotics and virtual reality has driven a growing demand for intuitive interfaces that enable seamless human-robot interaction (HRI). Bio-signal-based solutions, using biopotential and bio-impedance, offer a promising approach for estimating human motion intention thanks to their ability to capture physiological neuromuscular activity in real time. This Review discusses the potential of biopotential and bio-impedance sensing systems for advancing HRI focusing on the role of integrated circuits in enabling practical applications. Biopotential and bio-impedance can be used to monitor human physiological states and motion intention, making them highly suitable for enhancing motion recognition in HRI. However, as stand-alone modalities, they face limitations related to inter-subject variability and susceptibility to noise, highlighting the need for hybrid sensing techniques. The performance of these sensing modalities is closely tied to the development of integrated circuits optimized for low-noise, low-power operation and accurate signal acquisition in a dynamic environment. Understanding the complementary strengths and limitations of biopotential and bio-impedance signals, along with the advances in integrated circuit technologies for their acquisition, highlights the potential of hybrid, multimodal systems to enable robust, intuitive and scalable HRI.</summary>
    <dc:date>2025-07-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A Body-Scale Robotic Skin Using Distributed Multimodal Sensing Modules: Design, Evaluation, and Application</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/57458" />
    <author>
      <name>Yang, Min Jin</name>
    </author>
    <author>
      <name>Chung, Hyunjo</name>
    </author>
    <author>
      <name>Kim, Yoonjin</name>
    </author>
    <author>
      <name>Park, Kyungseo</name>
    </author>
    <author>
      <name>Kim, Jung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/57458</id>
    <updated>2025-07-25T02:42:50Z</updated>
    <published>2024-12-31T15:00:00Z</published>
    <summary type="text">Title: A Body-Scale Robotic Skin Using Distributed Multimodal Sensing Modules: Design, Evaluation, and Application
Author(s): Yang, Min Jin; Chung, Hyunjo; Kim, Yoonjin; Park, Kyungseo; Kim, Jung
Abstract: Robotic systems have started to coexist with humans but cannot physically interact as humans do due to an absence of tactile sensitivity across their bodies. Various studies have developed scalable tactile sensors to provide a body-scale robotic skin, yet many faced drawbacks arising from the rapidly increasing number of sensing elements or a limited sensitivity to a wide range of touches. This paper proposes a body-scale robotic skin composed of multimodal sensing modules and a multilayered fabric, simultaneously utilising super-resolution and tomographic transducing mechanisms. These mechanisms employ fewer sensing elements across a large area and complement each other in perceiving a wide range of stimuli humans can sense. Their measurements are processed to encode spatiotemporal properties of touch, which are decoded by a trained convolutional neural network to classify the touch modality, while their computational costs are minimised for on-device computation. The robotic skin was demonstrated on a commercial robotic arm and interpreted human touches for tactile communication, suggesting its capability as a body-scale robotic skin for further physical interaction. © IEEE.</summary>
    <dc:date>2024-12-31T15:00:00Z</dc:date>
  </entry>
</feed>