<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/9967" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/9967</id>
  <updated>2026-04-04T11:48:12Z</updated>
  <dc:date>2026-04-04T11:48:12Z</dc:date>
  <entry>
    <title>CROS-RT: Cross-Layer Priority Scheduling for Predictable Inter-Process Communication in ROS 2</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59906" />
    <author>
      <name>Kim, Sohyun</name>
    </author>
    <author>
      <name>Song, Juho</name>
    </author>
    <author>
      <name>Lee, Kilho</name>
    </author>
    <author>
      <name>Oh, Sangeun</name>
    </author>
    <author>
      <name>Chwa, Hoon Sung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59906</id>
    <updated>2026-02-04T12:10:15Z</updated>
    <published>2025-05-07T15:00:00Z</published>
    <summary type="text">Title: CROS-RT: Cross-Layer Priority Scheduling for Predictable Inter-Process Communication in ROS 2
Author(s): Kim, Sohyun; Song, Juho; Lee, Kilho; Oh, Sangeun; Chwa, Hoon Sung
Abstract: The Robot Operating System 2 (ROS 2) is a popular middleware for distributed robotic applications. However, achieving real-time guarantees in ROS 2 is challenging due to unpredictable delays and priority inversions. We reveal that these issues arise from the lack of consistent priority propagation across ROS 2&apos;s multi-layered communication architecture, particularly down to the kernel layer. To address this, we present CROS-RT, the first cross-layer scheduler explicitly designed to tackle the unpredictability in ROS 2 inter-process communication caused by multi-layer priority misalignment. CROS-RT ensures consistent, priority-based scheduling across the application, middleware, and kernel layers, introducing mechanisms for priority propagation, kernel-level message prioritization, and dynamic kernel thread adjustment. We have implemented and evaluated CROS-RT on the current stable release of ROS 2. Experiments demonstrate that CROS-RT enhances communication predictability, reducing the worst-case response time by up to 89.3% over a baseline (vanilla ROS 2). Additionally, we provide an analytical model to derive upper bounds on response times, ensuring reliable real-time performance for safety-critical applications.</summary>
    <dc:date>2025-05-07T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Dynamic Load Balancing Framework for Compute-Network Resource Integration in MEC-Assisted Autonomous Vehicles</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59146" />
    <author>
      <name>Kim, Jeonghwan</name>
    </author>
    <author>
      <name>Song, Juho</name>
    </author>
    <author>
      <name>Chwa, Hoon Sung</name>
    </author>
    <author>
      <name>Choi, Ji-Woong</name>
    </author>
    <author>
      <name>Kwak, Jeongho</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59146</id>
    <updated>2025-11-06T10:40:11Z</updated>
    <published>2025-07-07T15:00:00Z</published>
    <summary type="text">Title: Dynamic Load Balancing Framework for Compute-Network Resource Integration in MEC-Assisted Autonomous Vehicles
Author(s): Kim, Jeonghwan; Song, Juho; Chwa, Hoon Sung; Choi, Ji-Woong; Kwak, Jeongho
Abstract: As autonomous driving technology becomes more advanced, vehicle-edge computing (VEC) has drawn significant attention. However, it still faces challenges due to varying network conditions and the availability of roadside units (RSUs). In this paper, we present a Lyapunov optimization-based algorithm that jointly optimizes offloading decisions and computing resources, aiming to reduce energy consumption while keeping service time within acceptable limits through both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. We then evaluate the real-world performance of this algorithm using a simulator that integrates a network model, an in-vehicle processing model in MATLAB, a vehicle topology model, and realistic driving scenarios generated with Virtual Test Drive (VTD).</summary>
    <dc:date>2025-07-07T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Dynamically Scalable Pruning for Transformer-Based Large Language Models</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/58487" />
    <author>
      <name>Lee, Junyoung</name>
    </author>
    <author>
      <name>Jang, Shinhyoung</name>
    </author>
    <author>
      <name>Kim, Seohyun</name>
    </author>
    <author>
      <name>Park, Jongho</name>
    </author>
    <author>
      <name>Suh, Il Hong</name>
    </author>
    <author>
      <name>Chwa, Hoon Sung</name>
    </author>
    <author>
      <name>Kim, Yeseong</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/58487</id>
    <updated>2026-02-10T11:40:23Z</updated>
    <published>2025-04-01T15:00:00Z</published>
    <summary type="text">Title: Dynamically Scalable Pruning for Transformer-Based Large Language Models
Author(s): Lee, Junyoung; Jang, Shinhyoung; Kim, Seohyun; Park, Jongho; Suh, Il Hong; Chwa, Hoon Sung; Kim, Yeseong
Abstract: We propose Matryoshka, a novel framework for transformer model pruning that enables dynamic runtime control while maintaining accuracy competitive with modern large language models (LLMs). Matryoshka incrementally constructs submodels of varying complexity, allowing runtime adaptation without maintaining separate models. Our evaluations on LLaMA-7B demonstrate that Matryoshka achieves up to 34% speedup and outperforms the quality of state-of-the-art pruning methods, providing a flexible solution for deploying LLMs. © 2025 EDAA.</summary>
    <dc:date>2025-04-01T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Solid State Drive Targeted Memory-Efficient Indexing for Universal I/O Patterns and Fragmentation Degrees</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/58402" />
    <author>
      <name>Im, Junsu</name>
    </author>
    <author>
      <name>Kim, Jeonggyun</name>
    </author>
    <author>
      <name>Oh, Seonggyun</name>
    </author>
    <author>
      <name>Koo, Jinhyung</name>
    </author>
    <author>
      <name>Park, Juhyung</name>
    </author>
    <author>
      <name>Chwa, Hoon Sung</name>
    </author>
    <author>
      <name>Noh, Sam H.</name>
    </author>
    <author>
      <name>Lee, Sungjin</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/58402</id>
    <updated>2025-07-25T03:34:37Z</updated>
    <published>2025-04-02T15:00:00Z</published>
    <summary type="text">Title: Solid State Drive Targeted Memory-Efficient Indexing for Universal I/O Patterns and Fragmentation Degrees
Author(s): Im, Junsu; Kim, Jeonggyun; Oh, Seonggyun; Koo, Jinhyung; Park, Juhyung; Chwa, Hoon Sung; Noh, Sam H.; Lee, Sungjin
Abstract: Thanks to advances in device-scaling technology, the capacity of SSDs is rapidly increasing. This increase, however, comes at the cost of a huge index table requiring a large amount of DRAM. To provide reasonable performance with less DRAM, various index structures that exploit the locality and regularity of I/O references have been proposed. However, their performance deteriorates depending on I/O patterns and storage fragmentation. This paper proposes a novel approximate index structure, called AppL, which combines memory-efficient approximate indices with an LSM-tree, whose nature is append-only and sorted. AppL reduces the index size to 6∼8 bits per entry, considerably smaller than the 32∼64 bits required by typical index structures, and maintains this high memory efficiency irrespective of locality and fragmentation. By alleviating memory pressure, AppL achieves 33.6∼72.4% shorter read latency and 28.4∼83.4% higher I/O throughput than state-of-the-art techniques. © 2025 Copyright held by the owner/author(s).</summary>
    <dc:date>2025-04-02T15:00:00Z</dc:date>
  </entry>
</feed>

