<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection: null</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/134" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/134</id>
  <updated>2026-04-04T10:34:35Z</updated>
  <dc:date>2026-04-04T10:34:35Z</dc:date>
  <entry>
    <title>Motion-Based Bird-UAV Classification Using 3D-CNN for Long-Range Anti-UAV Systems</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59283" />
    <author>
      <name>Jin, Woo-Cheol</name>
    </author>
    <author>
      <name>Oh, Daegun</name>
    </author>
    <author>
      <name>Lee, Sang-Chul</name>
    </author>
    <author>
      <name>Choi, Ji-Woong</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59283</id>
    <updated>2025-12-29T01:40:11Z</updated>
    <published>2025-11-09T15:00:00Z</published>
    <summary type="text">Title: Motion-Based Bird-UAV Classification Using 3D-CNN for Long-Range Anti-UAV Systems
Author(s): Jin, Woo-Cheol; Oh, Daegun; Lee, Sang-Chul; Choi, Ji-Woong
Abstract: The increasing threat of malicious unmanned aerial vehicles (UAVs) necessitates robust anti-UAV systems. However, their performance is often degraded by bird misclassification caused by low-resolution imagery and unseen UAV types. This study proposes a motion-based 3D convolutional neural network (3D-CNN) trained on image sequences acquired from a radar-camera integrated anti-UAV solution. The proposed method effectively distinguishes UAVs from birds, even under low-resolution conditions and when encountering previously unseen UAV types.</summary>
    <dc:date>2025-11-09T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Latency Analysis of 5G C-V2X Real-Time Video Transmission Over Different Channel States</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59154" />
    <author>
      <name>Park, Hanyoung</name>
    </author>
    <author>
      <name>Jang, Yongjae</name>
    </author>
    <author>
      <name>Choi, Ji-Woong</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59154</id>
    <updated>2025-11-06T12:10:11Z</updated>
    <published>2025-06-19T15:00:00Z</published>
    <summary type="text">Title: Latency Analysis of 5G C-V2X Real-Time Video Transmission Over Different Channel States
Author(s): Park, Hanyoung; Jang, Yongjae; Choi, Ji-Woong
Abstract: For future applications such as sensor sharing, computation task offloading for autonomous driving, and remote driving, robust real-time video transmission with low latency via cellular vehicle-to-everything (C-V2X) communication is essential to ensure operational reliability. Advanced communication services require sensor sharing, yet there is a lack of comprehensive latency analysis for high-volume data exchanged between vehicles and remote users or servers. Existing literature predominantly focuses on vehicle-to-vehicle communication or low-volume device status messages, which are insufficient to support advanced services such as autonomous driving. Consequently, it is crucial to analyze the latency involved in sensor sharing between a vehicle and a remote entity over different channel states. In this paper, the latency of sensor data transmission over the 5G C-V2X Uu interface is investigated under various channel states and modulation and coding schemes. Additionally, we analyze the latency depending on the resolution and encoding of the camera video image. Simulation results show the feasible frame rates and video resolutions for both raw and compressed video transmission in these communication systems, and highlight the effects of channel state and multi-user scenarios on the feasibility of real-time camera video sharing.</summary>
    <dc:date>2025-06-19T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Channel Charting-Based Vehicle Position Estimation in Real-World Coordinates of Lanes</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59149" />
    <author>
      <name>Park, Hanyoung</name>
    </author>
    <author>
      <name>Jang, Yongjae</name>
    </author>
    <author>
      <name>Choi, Ji-Woong</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59149</id>
    <updated>2025-11-06T10:40:13Z</updated>
    <published>2025-07-07T15:00:00Z</published>
    <summary type="text">Title: Channel Charting-Based Vehicle Position Estimation in Real-World Coordinates of Lanes
Author(s): Park, Hanyoung; Jang, Yongjae; Choi, Ji-Woong
Abstract: The development of vehicle-to-everything (V2X) technology enables the real-time sharing of various information, making future automated driving applications such as cooperative driving possible. Vehicle position information is fundamental to this end, since various autonomous functions rely on it. The global positioning system (GPS) is generally used for positioning; however, GPS is easily degraded by environmental factors, giving rise to GPS shadow areas such as canyons and high-density urban areas. Therefore, V2X-based localization methods have been proposed in the literature. However, previous localization algorithms often rely on prerequisites, such as strict synchronization or the availability of a large number of ground-truth positions for supervised learning, which may not always be feasible in practical scenarios. From this perspective, a channel charting-based approach can be an adequate solution, but its feasibility and accuracy in non-line-of-sight (NLoS) outdoor environments and under high-speed mobility conditions have not been verified. Therefore, in this paper, we propose channel charting-based vehicle position estimation under urban driving scenarios. The results demonstrate the feasibility of channel charting-based localization in urban driving scenarios while reducing data overhead.</summary>
    <dc:date>2025-07-07T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Dynamic Load Balancing Framework for Compute-Network Resource Integration in MEC-Assisted Autonomous Vehicles</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59146" />
    <author>
      <name>Kim, Jeonghwan</name>
    </author>
    <author>
      <name>Song, Juho</name>
    </author>
    <author>
      <name>Chwa, Hoonsung</name>
    </author>
    <author>
      <name>Choi, Ji-Woong</name>
    </author>
    <author>
      <name>Kwak, Jeongho</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59146</id>
    <updated>2025-11-06T10:40:11Z</updated>
    <published>2025-07-07T15:00:00Z</published>
    <summary type="text">Title: Dynamic Load Balancing Framework for Compute-Network Resource Integration in MEC-Assisted Autonomous Vehicles
Author(s): Kim, Jeonghwan; Song, Juho; Chwa, Hoonsung; Choi, Ji-Woong; Kwak, Jeongho
Abstract: As autonomous driving technology becomes more advanced, vehicle-edge computing (VEC) has drawn significant attention. However, it still faces challenges due to varying network conditions and the limited availability of roadside units (RSUs). In this paper, we present a Lyapunov optimization-based algorithm that jointly optimizes offloading decisions and computing resources, aiming to reduce energy consumption while keeping service time within acceptable limits through both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. We then evaluate the real-world performance of this algorithm using a simulator that integrates a network model, an in-vehicle processing model in MATLAB, a vehicle topology model, and realistic driving scenarios generated with Virtual Test Drive (VTD).</summary>
    <dc:date>2025-07-07T15:00:00Z</dc:date>
  </entry>
</feed>