<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/9966</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 11:48:13 GMT</pubDate>
    <dc:date>2026-04-04T11:48:13Z</dc:date>
    <item>
      <title>Timing guarantees for inference of AI models in embedded systems</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58571</link>
      <description>Title: Timing guarantees for inference of AI models in embedded systems
Author(s): Lee, Seunghoon; Kang, Woosung; Bertogna, Marko; Chwa, Hoon Sung; Lee, Jinkyu
Abstract: Machine learning (ML) is increasingly being integrated into real-time embedded systems, enabling intelligent decision-making in applications such as autonomous driving and industrial automation. However, ensuring predictable execution of deep neural network (DNN) inference remains a major challenge, as real-time systems must meet strict timing constraints to guarantee safety and reliability. This paper identifies key challenges in achieving real-time AI inference in embedded systems, including limited memory capacity, high energy consumption, efficient multi-DNN scheduling, and heterogeneous resource management. To address these challenges, we emphasize the need for advanced scheduling algorithms to efficiently allocate heterogeneous computing resources across multiple DNNs, hierarchical memory management to reduce memory bottlenecks, and real-time neural architecture search and optimization techniques to enhance AI model performance under strict timing constraints. Furthermore, we discuss future research directions aimed at improving real-time AI execution, including time-predictable scheduling frameworks to ensure consistent inference latency, cross-device AI workload management to optimize resource utilization across heterogeneous processors, and benchmarking methodologies to systematically evaluate performance, timing guarantees, and energy efficiency in real-time AI systems. Advancing these research areas will enhance the reliability, efficiency, and scalability of AI-driven embedded systems, bridging the gap between ML advancements and real-time system requirements. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.</description>
      <pubDate>Sat, 31 May 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58571</guid>
      <dc:date>2025-05-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>An Integrated Network-Computing Load Balancing Simulator for VEC-Assisted Autonomous Vehicles</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58451</link>
      <description>Title: An Integrated Network-Computing Load Balancing Simulator for VEC-Assisted Autonomous Vehicles
Author(s): Kwak, Jeongho; Chwa, Hoon Sung; Jo, Han-Shin; Kang, Wonyul; Kim, Jeonghwan; Song, Juho; Kim, Jeeyoo; Lee, Seoungjae; Nam, Taesik; Seong, Wonwoo; Choi, Ji-Woong
Abstract: Achieving offloaded analytics services through vehicle edge computing (VEC) requires a comprehensive analysis of in-vehicle processing and network environments. However, existing research on autonomous driving technologies leveraging VEC and related simulation studies remains in its early stages. This article presents the development of an integrated network-computing load (INCL) balancing simulator for autonomous vehicles, which combines a network model and an in-vehicle processing model implemented in MATLAB with a vehicle topology model and realistic driving scenarios created using a virtual test drive (VTD). Moreover, eight real-world autonomous driving scenarios are proposed to validate the simulator's performance, demonstrating its ability to effectively balance network and computational loads under diverse conditions. Finally, using a case study in a platooning driving scenario, we evaluate the simulator's capability to optimize resource utilization, paving the way for advanced autonomous driving technologies. © IEEE.</description>
      <pubDate>Sat, 31 May 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58451</guid>
      <dc:date>2025-05-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>An Adaptive Data I/O Management Technique for Maximizing the Real-Time Guarantee Duration of Flash Memory</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57471</link>
      <description>Title: An Adaptive Data I/O Management Technique for Maximizing the Real-Time Guarantee Duration of Flash Memory
Author(s): 김경택; 김명석; 이성진; 좌훈승
Abstract: This paper shows that the variation in execution time of NAND flash operations with P/E cycles must be considered to provide real-time guarantees. To address this problem, a new schedulability analysis is proposed that determines whether real-time guarantees hold by accounting for data I/O execution times that vary with P/E cycles, together with a data I/O management technique, built on this analysis, that maximizes the duration of real-time guarantees. The proposed technique improves the real-time guarantee duration compared with conventional data management and with wear-leveling, a representative technique for extending flash lifespan. In experiments on randomly generated task sets, it extended the real-time guarantee duration by up to 81% over conventional management and by up to 63% over wear-leveling. This technique is expected to enhance the reliability and performance of systems requiring real-time data I/O.</description>
      <pubDate>Sat, 30 Nov 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57471</guid>
      <dc:date>2024-11-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Tight necessary feasibility analysis for recurring real-time tasks on a multiprocessor</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/17492</link>
      <description>Title: Tight necessary feasibility analysis for recurring real-time tasks on a multiprocessor
Author(s): Chwa, Hoon Sung; Lee, Jinkyu
Abstract: One of the important design issues for time-critical embedded systems is to derive necessary conditions for meeting all job deadlines invoked by a set of recurring real-time tasks on a given computing resource (called feasibility). To this end, existing studies have focused on deriving a tight lower bound on the execution requirement (i.e., demand) of a target set of real-time tasks. In this paper, we address the following question regarding the supply provided by a multiprocessor resource: is it possible for a real-time task set to always utilize all the provided supply? We develop a systematic approach that i) calculates the amount of supply proven unusable, ii) finds a partial schedule that yields a necessary condition to minimize the amount of unusable supply, and iii) uses the partial schedule to further reclaim unusable supply. While the systematic approach can be applied to most (if not all) recurring real-time task models, we show two examples of how the approach can yield tight necessary feasibility conditions: the sequential task model and the gang scheduling model. We demonstrate that the proposed approach finds a number of additional infeasible task sets that no existing study has proven infeasible for these task models. © 2022 Elsevier B.V.</description>
      <pubDate>Tue, 31 Jan 2023 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/17492</guid>
      <dc:date>2023-01-31T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

