MedSumGraph: enhancing GraphRAG for medical QA with summarization and optimized prompts

DC Field Value Language
dc.contributor.author Kim, Daeho -
dc.contributor.author Yoo, Soyeop -
dc.contributor.author Jeong, Okran -
dc.date.accessioned 2026-04-15T17:10:50Z -
dc.date.available 2026-04-15T17:10:50Z -
dc.date.created 2025-12-11 -
dc.date.issued 2026-02 -
dc.identifier.issn 0933-3657 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/60220 -
dc.description.abstract The rapid development of large language models (LLMs) has accelerated research into applying artificial intelligence (AI) to domains such as medical question answering and clinical decision support. However, LLMs face substantial limitations in medical contexts due to challenges in understanding specialized terminology, complex contextual information, hallucination issues (i.e., generating incorrect responses), and the black-box nature of their reasoning processes. To address these issues, methods like retrieval-augmented generation (RAG) and its graph-based variant, GraphRAG, have been proposed to incorporate external knowledge into LLMs. Nonetheless, these approaches often rely heavily on external resources and increase system complexity. In this study, we introduce MedSumGraph, a medical question-answering system that enhances GraphRAG by integrating structured medical knowledge summaries and optimized prompt designs. Our method enables LLMs to better interpret domain-specific knowledge without requiring additional training, and it enhances the reliability and interpretability of responses by directly embedding factual evidence and graph-based reasoning into the generation process. MedSumGraph achieves competitive performance on two out of eight multiple-choice medical QA benchmarks, including MedQA (USMLE), outperforming closed-source LLMs and domain-specific foundation models. Moreover, it generalizes effectively to open-domain QA tasks, yielding significant gains in reasoning over common knowledge and evaluating the truthfulness of answers. These findings demonstrate the potential of structured summarization and graph-based reasoning in enhancing the trustworthiness and versatility of LLM-driven medical AI systems. © 2025 The Author(s). -
dc.language English -
dc.publisher Elsevier BV -
dc.title MedSumGraph: enhancing GraphRAG for medical QA with summarization and optimized prompts -
dc.type Article -
dc.identifier.doi 10.1016/j.artmed.2025.103311 -
dc.identifier.wosid 001633464200001 -
dc.identifier.scopusid 2-s2.0-105023658185 -
dc.identifier.bibliographicCitation Artificial Intelligence in Medicine, v.172 -
dc.description.isOpenAccess TRUE -
dc.subject.keywordAuthor Knowledge graph -
dc.subject.keywordAuthor Large language model -
dc.subject.keywordAuthor Medical decision support system -
dc.citation.title Artificial Intelligence in Medicine -
dc.citation.volume 172 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.relation.journalResearchArea Computer Science; Engineering; Medical Informatics -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Engineering, Biomedical; Medical Informatics -
dc.type.docType Article -