<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/1924">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/1924</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60053" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59905" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59145" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59053" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T08:16:38Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60053">
    <title>Glio-LLaMA-Vision: A Robust Vision-Language Model for Molecular Status Prediction and Radiology Report Generation in Adult-type Diffuse Gliomas</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60053</link>
    <description>Title: Glio-LLaMA-Vision: A Robust Vision-Language Model for Molecular Status Prediction and Radiology Report Generation in Adult-type Diffuse Gliomas
Author(s): Park, Yae Won; Kang, Myeongkyun; Chang, Jong Hee; Park, Sang Hyun; Ahn, Sung Soo
Abstract: BACKGROUND: To establish a robust vision-language model (“Glio-LLaMA-Vision”) for molecular status prediction and radiology report generation (RRG) in adult-type diffuse gliomas.
METHODS: Multiparametric MRI data (T1, T2, FLAIR, and postcontrast T1-weighted images) and paired radiology reports (in English) from 1,001 patients with adult-type diffuse gliomas (144 oligodendrogliomas, 157 IDH-mutant astrocytomas, and 700 IDH-wildtype glioblastomas) diagnosed according to the 2021 WHO classification were included in the institutional training set. A vision-language model, Glio-LLaMA-Vision, was developed from LLaMA 3.1 pre-trained on 2.79 million biomedical image-text pairs from PubMed Central and further optimized via fine-tuning on the institutional training set. Performance was validated in 100 patients from an institutional validation set and 80 patients from another tertiary institution, both with paired MRI-radiology reports, and in 170 and 477 patients with MRI from TCGA and UCSF, respectively.
RESULTS: In terms of IDH mutation status prediction, Glio-LLaMA-Vision showed an overall performance of area under the curve, accuracy, sensitivity, and specificity of 0.89 (95% confidence interval 0.81-0.95), 86.0%, 84.0%, and 88.0%, respectively. In terms of radiology report generation, the BLEU-1, ROUGE-L, and METEOR scores were 0.49, 0.42, and 0.24, respectively, while the majority (91.3%) of generated reports were considered clinically acceptable.
CONCLUSION: Glio-LLaMA-Vision shows promising performance in molecular status prediction and RRG in adult-type diffuse gliomas, demonstrating potential for clinical assistance.</description>
    <dc:date>2025-11-18T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59905">
    <title>Logical Anomaly Detection with Text-based Logic via Component-Aware Contrastive Language-Image Training</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59905</link>
    <description>Title: Logical Anomaly Detection with Text-based Logic via Component-Aware Contrastive Language-Image Training
Author(s): Lee, Seung-eon; Kim, Soopil; An, Sion; Lee, Sang-Chul; Park, Sang Hyun
Abstract: AI-based automatic visual inspection systems have been extensively researched to streamline various industrial products' labor-intensive anomaly detection processes. Despite significant advancements, detecting logical anomalies remains challenging due to the multitude of rules governing the assembly of multiple components to create a normal product. Existing methods have relied solely on image information for anomaly detection, resulting in limited accuracy as they fail to account for these diverse complex rules. Instead, humans detect anomalies by comparing the image with pre-defined logic which can be clearly expressed with natural language. Inspired by the human decision process, we propose a logical anomaly detection model that leverages text-based logic like human reasoning. With user-defined rules (i.e., positive rules) and logically distinct negative rules, we train the model using component-aware contrastive learning that increases the similarity between images and positive rules while decreasing the similarity with negative rules. However, accurately comparing textual and visual features is challenging due to multiple components, each governed by different rules, within a single image. To address this, we developed a zero-shot related region detection technique, which guides the model's focus on components relevant to each rule. We evaluated the proposed model on three public datasets and achieved state-of-the-art results in a few-shot logical anomaly detection task. Our findings highlight the potential of integrating vision-language models to enhance logical anomaly detection and utilizing text-based logic in complex industrial settings.</description>
    <dc:date>2025-08-06T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59145">
    <title>Revisiting Masked Image Modeling with Standardized Color Space for Domain Generalized Fundus Photography Classification</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59145</link>
    <description>Title: Revisiting Masked Image Modeling with Standardized Color Space for Domain Generalized Fundus Photography Classification
Author(s): Jang, Eojin; Kang, Myeongkyun; Kim, Soopil; Sagong, Min; Park, Sang Hyun
Abstract: Diabetic retinopathy (DR) is a serious complication of diabetes, requiring rapid and accurate assessment through computer-aided grading of fundus photography. To enhance the practical applicability of DR grading, domain generalization (DG) and foundation models have been proposed to improve accuracy on data from unseen domains. Despite recent advancements, foundation models trained in a self-supervised manner still exhibit limited DG capabilities, as self-supervised learning does not account for domain variations. In this paper, we revisit masked image modeling (MIM) in foundation models to advance DR grading for domain generalization. We introduce a MIM-based approach that transforms images to achieve standardized color representation across domains. By transforming images from various domains into this color space, the model can learn consistent representation even for unseen images, promoting domain-invariant feature learning. Additionally, we employ joint representation learning of both the original and transformed images, using cross-attention to integrate their respective strengths for DR classification. We showed a performance improvement of up to nearly 4% across the three datasets, positioning our method as a promising solution for domain-generalized medical image classification.</description>
    <dc:date>2025-09-24T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59053">
    <title>MC-NuSeg: Multi-Contour Aware Nuclei Instance Segmentation with Segment Anything Model</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59053</link>
    <description>Title: MC-NuSeg: Multi-Contour Aware Nuclei Instance Segmentation with Segment Anything Model
Author(s): Namgung, Hyun; Nam, Siwoo; Kim, Soopil; Park, Sang Hyun
Abstract: Accurate nuclei instance segmentation is critical in digital pathology image analysis, facilitating disease diagnosis and advancing medical research. While various methods have been proposed, recent approaches leverage foundation models like the Segment Anything Model (SAM) for their robust representational power. However, existing models face challenges in handling the unique characteristics of histopathology images, particularly dense nuclei clusters and complex morphological and staining variations. To address these issues, we propose the Multi-Contour Aware Nuclei Instance Segmentation (MC-NuSeg) framework, which incorporates the hierarchical boundary structure of nuclei for precise segmentation. MC-NuSeg predicts multiple segmentation maps corresponding to different contour layers, allowing for accurate separation of densely clustered nuclei and those with high morphological variance. Furthermore, we introduce an auxiliary instance counting loss that directly supervises the number of nuclei, significantly enhancing segmentation accuracy by reducing false positives and missed cases. Extensive evaluations on four public pathology datasets demonstrate that MC-NuSeg achieves state-of-the-art performance, effectively addressing the challenges of nuclei instance segmentation.</description>
    <dc:date>2025-05-26T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

