<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57941">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57941</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59402" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59236" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T12:10:38Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59402">
    <title>A Survey of Knowledge Fusion Strategies for Object Goal Navigation in the VLM Era</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59402</link>
    <description>Title: A Survey of Knowledge Fusion Strategies for Object Goal Navigation in the VLM Era
Author(s): 서보건; 김지선; 손준우; 박명옥; 김기섭
Abstract: The rapid advancement of robotics and deep learning has accelerated the adoption of Embodied AI, in which robots autonomously explore and reason in complex real-world environments. With the growing demand for domestic service robots, efficient navigation in unfamiliar settings has become even more crucial. Object Goal Navigation (OGN) is a fundamental task for this capability, requiring a robot to find and reach a user-specified object in an unknown environment. Solving OGN demands advanced perception, contextual reasoning, and effective exploration strategies. Recent Vision-Language Models (VLMs) and Large Language Models (LLMs) provide agents with external commonsense knowledge and reasoning capabilities. This paper poses the critical question: “Where should VLM/LLM knowledge be fused into Object Goal Navigation?” We categorize knowledge integration into three stages adapted from the Perception-Prediction-Planning paradigm, offering a structured survey of Object Goal Navigation approaches shaped by the VLM era. We conclude by discussing current dataset limitations and future directions, including further studies on socially interactive navigation and operation in mixed indoor-outdoor environments.</description>
    <dc:date>2025-12-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59236">
    <title>LAMP: Implicit Language Map for Robot Navigation</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59236</link>
    <description>Title: LAMP: Implicit Language Map for Robot Navigation
Author(s): Lee, Sibaek; Yu, Hyeonwoo; Kim, Giseop; Choi, Sunwook
Abstract: Recent advances in vision-language models have made zero-shot navigation feasible, enabling robots to interpret and follow natural language instructions without requiring labeled data. However, existing methods that explicitly store language vectors in grid- or node-based maps struggle to scale to large environments due to excessive memory requirements and limited resolution for fine-grained planning. We introduce LAMP (Language Map), a novel neural language-field-based navigation framework that learns a continuous, language-driven map and directly leverages it for fine-grained path generation. Unlike prior approaches, our method encodes language features as an implicit neural field rather than storing them explicitly at every location. By combining this implicit representation with a sparse graph, LAMP supports efficient coarse path planning and then performs gradient-based optimization in the learned field to refine poses near the goal. Our two-stage pipeline of coarse graph search followed by language-driven, gradient-guided optimization is the first application of an implicit language map to precise path generation. This refinement is particularly effective at selecting goal regions that were not directly observed, leveraging semantic similarities in the learned feature space. To further enhance robustness, we adopt a Bayesian framework that models embedding uncertainty via the von Mises-Fisher distribution, thereby improving generalization to unobserved regions. To scale to large environments, LAMP employs a graph sampling strategy that prioritizes spatial coverage and embedding confidence, retaining only the most informative nodes and substantially reducing computational overhead. Our experimental results, both in NVIDIA Isaac Sim and on a real multi-floor building, demonstrate that LAMP outperforms existing explicit methods in both memory efficiency and fine-grained goal-reaching accuracy, opening new possibilities for scalable, language-driven robot navigation.</description>
    <dc:date>2025-11-30T15:00:00Z</dc:date>
  </item>
</rdf:RDF>