LAMP: Implicit Language Map for Robot Navigation

DC Field Value Language
dc.contributor.author Lee, Sibaek -
dc.contributor.author Yu, Hyeonwoo -
dc.contributor.author Kim, Giseop -
dc.contributor.author Choi, Sunwook -
dc.date.accessioned 2025-11-28T17:40:11Z -
dc.date.available 2025-11-28T17:40:11Z -
dc.date.created 2025-10-30 -
dc.date.issued 2025-12 -
dc.identifier.issn 2377-3766 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/59236 -
dc.description.abstract Recent advances in vision-language models have made zero-shot navigation feasible, enabling robots to interpret and follow natural language instructions without requiring labeling. However, existing methods that explicitly store language vectors in grid or node-based maps struggle to scale to large environments due to excessive memory requirements and limited resolution for fine-grained planning. We introduce LAMP (Language Map), a novel neural language field-based navigation framework that learns a continuous, language-driven map and directly leverages it for fine-grained path generation. Unlike prior approaches, our method encodes language features as an implicit neural field rather than storing them explicitly at every location. By combining this implicit representation with a sparse graph, LAMP supports efficient coarse path planning and then performs gradient-based optimization in the learned field to refine poses near the goal. Our two-stage pipeline of coarse graph search followed by language-driven, gradient-guided optimization is the first application of an implicit language map for precise path generation. This refinement is particularly effective at selecting goal regions not directly observed by leveraging semantic similarities in the learned feature space. To further enhance robustness, we adopt a Bayesian framework that models embedding uncertainty via the von Mises-Fisher distribution, thereby improving generalization to unobserved regions. To scale to large environments, LAMP employs a graph sampling strategy that prioritizes spatial coverage and embedding confidence, retaining only the most informative nodes and substantially reducing computational overhead. Our experimental results, both in NVIDIA Isaac Sim and on a real multi-floor building, demonstrate that LAMP outperforms existing explicit methods in both memory efficiency and fine-grained goal-reaching accuracy, opening new possibilities for scalable, language-driven robot navigation. -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title LAMP: Implicit Language Map for Robot Navigation -
dc.type Article -
dc.identifier.doi 10.1109/LRA.2025.3619820 -
dc.identifier.wosid 001600704200004 -
dc.identifier.scopusid 2-s2.0-105018719356 -
dc.identifier.bibliographicCitation IEEE Robotics and Automation Letters, v.10, no.12, pp.12365 - 12372 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor Vision-based navigation -
dc.subject.keywordAuthor mapping -
dc.subject.keywordAuthor path planning -
dc.subject.keywordAuthor open-vocabulary scene understanding -
dc.citation.endPage 12372 -
dc.citation.number 12 -
dc.citation.startPage 12365 -
dc.citation.title IEEE Robotics and Automation Letters -
dc.citation.volume 10 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.relation.journalResearchArea Robotics -
dc.relation.journalWebOfScienceCategory Robotics -
dc.type.docType Article -
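The abstract describes a two-stage pipeline: coarse search over a sparse graph toward the node most similar to the language query, followed by gradient-based pose refinement in a continuous language field. The sketch below illustrates that idea only; it is not the authors' implementation. The field here is a fixed Gaussian score (a hypothetical stand-in for the learned neural field queried with a text embedding), the graph and all names are invented, and finite differences stand in for autodiff.

```python
import heapq
import numpy as np

# Hypothetical stand-in for the implicit language field: a smooth function
# mapping a 2-D position to a similarity score with the query embedding.
# (In LAMP this would be a learned neural field; here, a Gaussian bump.)
def field_similarity(pos, goal=np.array([3.0, 2.0])):
    return np.exp(-np.sum((np.asarray(pos) - goal) ** 2) / 2.0)

def coarse_graph_search(nodes, edges, start, query_sim):
    """Stage 1: pick the graph node with the highest query similarity,
    then run Dijkstra from `start` to it over the sparse graph."""
    goal = max(nodes, key=query_sim)
    dist = {n: float("inf") for n in nodes}
    dist[start] = 0.0
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in edges.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def refine_pose(pos, steps=200, lr=0.5, eps=1e-4):
    """Stage 2: gradient ascent on the field score near the coarse goal
    (finite differences stand in for autodiff through a neural field)."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            dp = np.zeros(2)
            dp[i] = eps
            grad[i] = (field_similarity(pos + dp) - field_similarity(pos - dp)) / (2 * eps)
        pos += lr * grad
    return pos

# Toy sparse graph: node positions and weighted edges (all invented).
nodes = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
edges = {
    (0.0, 0.0): [((1.0, 1.0), 1.41)],
    (1.0, 1.0): [((2.0, 1.0), 1.0)],
    (2.0, 1.0): [((3.0, 1.0), 1.0)],
}
coarse_path = coarse_graph_search(nodes, edges, (0.0, 0.0),
                                  lambda n: field_similarity(n))
refined = refine_pose(coarse_path[-1])
```

Note how the refined pose can leave the graph entirely: the coarse stage only reaches the best sampled node, while the continuous field lets the final pose settle on a high-similarity region between or beyond the nodes, which mirrors the abstract's claim about selecting goal regions not directly observed.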

Related Researcher

Kim, Giseop (김기섭)
Department of Robotics and Mechatronics Engineering