<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58167">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58167</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60119" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60059" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60058" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59995" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-11T03:43:30Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60119">
    <title>Multi-modal Knowledge Distillation-based Human Trajectory Forecasting</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60119</link>
    <description>Title: Multi-modal Knowledge Distillation-based Human Trajectory Forecasting
Author(s): Jeong, Jaewoo; Lee, Seohee; Park, Daehee; Lee, Giwon; Yoon, Kuk-Jin
Abstract: Pedestrian trajectory forecasting is crucial in various applications such as autonomous driving and mobile robot navigation. In such applications, camera-based perception enables the extraction of additional modalities (human pose, text) to enhance prediction accuracy. Indeed, we find that textual descriptions play a crucial role in integrating additional modalities into a unified understanding. However, online extraction of text requires the use of a VLM, which may not be feasible for resource-constrained systems. To address this challenge, we propose a multimodal knowledge distillation framework: a student model with limited modality is distilled from a teacher model trained with the full range of modalities. The comprehensive knowledge of a teacher model trained with trajectory, human pose, and text is distilled into a student model using only trajectory or human pose as the sole supplement. In doing so, we separately distill the core locomotion insights from intra-agent multi-modality and inter-agent interaction. Our generalizable framework is validated with two state-of-the-art models across three datasets on both ego-view (JRDB, SIT) and BEV-view (ETH/UCY) setups, utilizing both annotated and VLM-generated text captions. Distilled student models show consistent improvement in all prediction metrics for both full and instantaneous observations, improving by up to ~13%. The code is available at github.com/Jaewoo97/KDTF.</description>
    <dc:date>2025-06-15T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60059">
    <title>Interaction-Merged Motion Planning: Effectively Leveraging Diverse Motion Datasets for Robust Planning</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60059</link>
    <description>Title: Interaction-Merged Motion Planning: Effectively Leveraging Diverse Motion Datasets for Robust Planning
Author(s): Lee, Giwon; Jeong, Wooseong; Park, Daehee; Jeong, Jaewoo; Yoon, Kuk-Jin
Abstract: Motion planning is a crucial component of autonomous robot driving. While various trajectory datasets exist, effectively utilizing them for a target domain remains challenging due to differences in agent interactions and environmental characteristics. Conventional approaches, such as domain adaptation or ensemble learning, leverage multiple source datasets but suffer from domain imbalance, catastrophic forgetting, and high computational costs. To address these challenges, we propose Interaction-Merged Motion Planning (IMMP), a novel approach that leverages parameter checkpoints trained on different domains during adaptation to the target domain. IMMP follows a two-step process: pre-merging to capture agent behaviors and interactions, sufficiently extracting diverse information from the source domain, followed by merging to construct an adaptable model that efficiently transfers diverse interactions to the target domain. Our method is evaluated on various planning benchmarks and models, demonstrating superior performance compared to conventional approaches.</description>
    <dc:date>2025-10-22T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60058">
    <title>Generative Active Learning for Long-tail Trajectory Prediction via Controllable Diffusion Model</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60058</link>
    <description>Title: Generative Active Learning for Long-tail Trajectory Prediction via Controllable Diffusion Model
Author(s): Park, Daehee; Monu Surana; Pranav Desai; Ashish Mehta; Reuben MV John; Yoon, Kuk-Jin
Abstract: While data-driven trajectory prediction has enhanced the reliability of autonomous driving systems, it still struggles with rarely observed long-tail scenarios. Prior works addressed this by modifying model architectures, such as using hypernetworks. In contrast, we propose refining the training process to unlock each model&apos;s potential without altering its structure. We introduce Generative Active Learning for Trajectory prediction (GALTraj), the first method to successfully deploy generative active learning into trajectory prediction. It actively identifies rare tail samples where the model fails and augments these samples with a controllable diffusion model during training. In our framework, generating scenarios that are diverse, realistic, and preserve tail-case characteristics is paramount. Accordingly, we design a tail-aware generation method that applies tailored diffusion guidance to generate trajectories that both capture rare behaviors and respect traffic rules. Unlike prior simulation methods focused solely on scenario diversity, GALTraj is the first to show how simulator-driven augmentation benefits long-tail learning in trajectory prediction. Experiments on multiple trajectory datasets (WOMD, Argoverse2) with popular backbones (QCNet, MTR) confirm that our method significantly boosts performance on tail samples and also enhances accuracy on head samples.</description>
    <dc:date>2025-10-22T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59995">
    <title>Non-differentiable Reward Optimization for Diffusion-based Autonomous Motion Planning</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59995</link>
    <description>Title: Non-differentiable Reward Optimization for Diffusion-based Autonomous Motion Planning
Author(s): Lee, Giwon; Park, Daehee; Jeong, Jaewoo; Yoon, Kuk-Jin
Abstract: Safe and effective motion planning is crucial for autonomous robots. Diffusion models excel at capturing complex agent interactions, a fundamental aspect of decision-making in dynamic environments. Recent studies have successfully applied diffusion models to motion planning, demonstrating their competence in handling complex scenarios and accurately predicting multi-modal future trajectories. Despite their effectiveness, diffusion models have limitations in training objectives, as they approximate data distributions rather than explicitly capturing the underlying decision-making dynamics. However, the crux of motion planning lies in non-differentiable downstream objectives, such as safety (collision avoidance) and effectiveness (goal-reaching), which conventional learning algorithms cannot directly optimize. In this paper, we propose a reinforcement learning-based training scheme for diffusion motion planning models, enabling them to effectively learn non-differentiable objectives that explicitly measure safety and effectiveness. Specifically, we introduce a reward-weighted dynamic thresholding algorithm to shape a dense reward signal, facilitating more effective training and outperforming models trained with differentiable objectives. State-of-the-art performance on pedestrian datasets (CrowdNav, ETH-UCY) compared to various baselines demonstrates the versatility of our approach for safe and effective motion planning.</description>
    <dc:date>2025-10-20T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

