<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/1922">
    <title>Repository Community</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/1922</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60053" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59905" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59878" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59400" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T06:40:10Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60053">
    <title>Glio-LLaMA-Vision: A Robust Vision-Language Model for Molecular Status Prediction and Radiology Report Generation in Adult-type Diffuse Gliomas</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60053</link>
    <description>Title: Glio-LLaMA-Vision: A Robust Vision-Language Model for Molecular Status Prediction and Radiology Report Generation in Adult-type Diffuse Gliomas
Author(s): Park, Yae Won; Kang, Myeongkyun; Chang, Jong Hee; Park, Sang Hyun; Ahn, Sung Soo
Abstract: BACKGROUND: To establish a robust vision-language model (“Glio-LLaMA-Vision”) for molecular status prediction and radiology report generation (RRG) in adult-type diffuse gliomas.
METHODS: Multiparametric MRI data (T1, T2, FLAIR, and postcontrast T1-weighted images) and paired radiology reports (in English) from 1,001 patients with adult-type diffuse gliomas (144 oligodendrogliomas, 157 IDH-mutant astrocytomas, and 700 IDH-wildtype glioblastomas) diagnosed according to the 2021 WHO classification were included in the institutional training set. A vision-language model, Glio-LLaMA-Vision, was developed from LLaMA 3.1 pre-trained on 2.79 million biomedical image-text pairs from PubMed Central and further optimized via fine-tuning on the institutional training set. Performance was validated in 100 patients from an institutional validation set and 80 patients from another tertiary institution, each with paired MRI and radiology reports, and in 170 and 477 patients with MRI from the TCGA and UCSF datasets, respectively.
RESULTS: For IDH mutation status prediction, Glio-LLaMA-Vision achieved an area under the curve of 0.89 (95% confidence interval 0.81-0.95), with an accuracy of 86.0%, sensitivity of 84.0%, and specificity of 88.0%. For radiology report generation, the BLEU-1, ROUGE-L, and METEOR scores were 0.49, 0.42, and 0.24, respectively, and the majority (91.3%) of generated reports were considered clinically acceptable.
CONCLUSION: Glio-LLaMA-Vision shows promising performance in molecular status prediction and RRG in adult-type diffuse gliomas and demonstrates potential for clinical assistance.</description>
    <dc:date>2025-11-18T15:00:00Z</dc:date>
  </item>
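  <!--
    A minimal, hypothetical sketch (not the authors' evaluation code) of how the
    metrics reported in the abstract above (AUC, accuracy, sensitivity, and
    specificity) are typically computed for binary IDH mutation-status
    prediction, assuming 0/1 ground-truth labels and predicted probabilities:

      import numpy as np
      from sklearn.metrics import roc_auc_score, confusion_matrix

      def idh_metrics(y_true, y_prob, threshold=0.5):
          """y_true: 0/1 IDH status; y_prob: predicted probability of class 1."""
          y_pred = (np.asarray(y_prob) >= threshold).astype(int)
          # Confusion-matrix cells, with IDH-mutant treated as the positive class.
          tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
          return {
              "auc": roc_auc_score(y_true, y_prob),
              "accuracy": (tp + tn) / (tp + tn + fp + fn),
              "sensitivity": tp / (tp + fn),  # true-positive rate
              "specificity": tn / (tn + fp),  # true-negative rate
          }
  -->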
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59905">
    <title>Logical Anomaly Detection with Text-based Logic via Component-Aware Contrastive Language-Image Training</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59905</link>
    <description>Title: Logical Anomaly Detection with Text-based Logic via Component-Aware Contrastive Language-Image Training
Author(s): Lee, Seung-eon; Kim, Soopil; An, Sion; Lee, Sang-Chul; Park, Sang Hyun
Abstract: AI-based automatic visual inspection systems have been extensively researched to streamline the labor-intensive anomaly detection processes of various industrial products. Despite significant advancements, detecting logical anomalies remains challenging due to the multitude of rules governing how multiple components are assembled into a normal product. Existing methods have relied solely on image information for anomaly detection, resulting in limited accuracy because they fail to account for these diverse, complex rules. Humans, in contrast, detect anomalies by comparing the image with pre-defined logic that can be clearly expressed in natural language. Inspired by this human decision process, we propose a logical anomaly detection model that leverages text-based logic, much like human reasoning. With user-defined rules (i.e., positive rules) and logically distinct negative rules, we train the model using component-aware contrastive learning, which increases the similarity between images and positive rules while decreasing the similarity with negative rules. However, accurately comparing textual and visual features is challenging because a single image contains multiple components, each governed by different rules. To address this, we developed a zero-shot related-region detection technique that guides the model's focus to the components relevant to each rule. We evaluated the proposed model on three public datasets and achieved state-of-the-art results on a few-shot logical anomaly detection task. Our findings highlight the potential of integrating vision-language models to enhance logical anomaly detection and of utilizing text-based logic in complex industrial settings.</description>
    <dc:date>2025-08-06T15:00:00Z</dc:date>
  </item>
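  <!--
    A hypothetical sketch (illustrative, not the paper's implementation) of the
    component-aware contrastive objective described in the abstract above: the
    image embedding is pulled toward embeddings of user-defined positive rule
    texts and pushed away from logically contradictory negative rule texts.
    Names and tensor shapes are assumptions.

      import torch
      import torch.nn.functional as F

      def rule_contrastive_loss(img_emb, pos_rule_embs, neg_rule_embs, tau=0.07):
          """img_emb: (D,); pos_rule_embs: (P, D); neg_rule_embs: (N, D)."""
          img = F.normalize(img_emb, dim=-1)
          pos = F.normalize(pos_rule_embs, dim=-1)
          neg = F.normalize(neg_rule_embs, dim=-1)
          pos_sim = (pos @ img) / tau  # (P,) similarity to each positive rule
          neg_sim = (neg @ img) / tau  # (N,) similarity to each negative rule
          # InfoNCE style: each positive rule competes against all negative rules.
          logits = torch.cat(
              [pos_sim.unsqueeze(1),
               neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)], dim=1)
          return (torch.logsumexp(logits, dim=1) - pos_sim).mean()
  -->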
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59878">
    <title>ARTIFICIAL INTELLIGENCE DEVICE AND OPERATION METHOD THEREOF</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59878</link>
    <description>Title: ARTIFICIAL INTELLIGENCE DEVICE AND OPERATION METHOD THEREOF
Author(s): 정지욱; 김수필; 전혜정; 치콘테 필립; 박상현; 김재홍; 안시온
Abstract: An artificial intelligence device according to an embodiment of the present disclosure may comprise a memory and a processor configured to: train a binary classifier that infers whether a patch is a positive patch or a negative patch, using positive patches indicating normality and negative patches indicating abnormality obtained from a normal sample representing a non-defective product and from an unlabeled sample; when the reliability of the output produced for a patch of a new unlabeled sample input to the trained binary classifier is greater than or equal to a threshold reliability, determine the input patch to be a positive patch; and store the determined positive patch in the memory.</description>
  </item>
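  <!--
    A hypothetical sketch (illustrative only, not the patented implementation)
    of the claimed selection step: patches from a new unlabeled sample are fed
    to the trained binary classifier, and a patch is kept as positive only when
    the output reliability reaches the threshold. Names are assumptions.

      import torch

      def select_positive_patches(classifier, patches, threshold=0.95):
          """patches: (B, C, H, W); returns the high-reliability positive patches."""
          classifier.eval()
          with torch.no_grad():
              # Assumes the classifier emits one logit per patch.
              reliability = torch.sigmoid(classifier(patches)).squeeze(-1)  # (B,)
          keep = reliability >= threshold
          return patches[keep]  # these would be stored in memory per the claim
  -->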
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59400">
    <title>MOSInversion: Knowledge distillation-based incremental learning in organ segmentation using DeepInversion</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59400</link>
    <description>Title: MOSInversion: Knowledge distillation-based incremental learning in organ segmentation using DeepInversion
Author(s): Kim, Jihyeon; Lee, Gyeongmin; Shin, Seung Yeon; Kim, Soopil; Park, Sang Hyun
Abstract: Despite recent advancements in multi-organ segmentation (MOS) of medical images, existing models are limited in their ability to extend to unseen classes. Incremental learning has been proposed to enable models to learn new classes progressively, possibly using multiple datasets from different institutions. In this setting, models easily suffer performance degradation on previously learned classes, i.e., catastrophic forgetting. Although many methods have been proposed to mitigate this issue, applying them to medical imaging tasks such as multi-organ segmentation is difficult, either because of the large memory requirements of 3D medical data such as CT scans or because a generator must additionally be trained for image synthesis. In this paper, we propose an incremental learning framework that leverages diverse synthetic images to retain the knowledge learned from previously seen data. We design MOSInversion to generate the synthetic images using a pre-trained model from the previous step. MOSInversion generates diverse images conditioned on segmentation masks, allowing us to manipulate the shape, location, and size of organs. We evaluate our proposed method on three abdominal CT datasets (FLARE21, MSD, and KiTS19) and achieve state-of-the-art accuracy.</description>
    <dc:date>2025-11-30T15:00:00Z</dc:date>
  </item>
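  <!--
    A hypothetical sketch (not the paper's code) of DeepInversion-style
    synthesis from a frozen previous-step segmentation model, as described in
    the abstract above: a noise volume is optimized so the frozen model
    predicts a chosen organ-mask layout, letting the mask control organ shape,
    location, and size. DeepInversion's batch-norm statistics regularizer is
    omitted for brevity; names and shapes are assumptions.

      import torch
      import torch.nn.functional as F

      def invert_from_mask(frozen_model, target_mask, steps=200, lr=0.1):
          """target_mask: (1, K, D, H, W) one-hot organ layout to realize."""
          x = torch.randn(1, 1, *target_mask.shape[2:], requires_grad=True)
          opt = torch.optim.Adam([x], lr=lr)
          for _ in range(steps):
              opt.zero_grad()
              logits = frozen_model(x)  # (1, K, D, H, W) class logits
              loss = F.cross_entropy(logits, target_mask.argmax(dim=1))
              loss = loss + 1e-4 * x.pow(2).mean()  # simple image prior
              loss.backward()
              opt.step()
          return x.detach()  # synthetic volume used to distill old-class knowledge
  -->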
</rdf:RDF>

