<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/127</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 15:33:53 GMT</pubDate>
    <dc:date>2026-04-04T15:33:53Z</dc:date>
    <item>
      <title>Expert-level differentiation of incomplete Kawasaki disease and pneumonia from echocardiography via multiple large receptive attention mechanisms</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58607</link>
      <description>Title: Expert-level differentiation of incomplete Kawasaki disease and pneumonia from echocardiography via multiple large receptive attention mechanisms
Author(s): Lee, Haeyun; Lee, Kyungsu; Lee, Moon Hwan; Kim, Sewoong; Eun, Yongsoon; Eun, Lucy Youngmin; Hwang, Jae Youn
Abstract: Background: Incomplete Kawasaki disease (KD) is challenging to diagnose due to its lack of classic clinical features, yet it has a higher incidence of coronary artery lesions, making early detection crucial. Echocardiography plays a vital role in identifying these lesions, but differentiating incomplete KD from other febrile illnesses, such as COVID-19, is difficult. Algorithms capable of achieving expert-level performance are needed to aid diagnosis, particularly in the absence of pediatric cardiologists. Methods: To address this need, we developed two novel deep learning models: the Multiple Receptive Attention Network (MRANet) and the Multiple Large Receptive Attention Network (MLRANet). These models incorporate multiple receptive attention layers and multiple large receptive attention layers to enhance their ability to identify KD-related coronary artery abnormalities on echocardiography. The models were trained and tested on 203 echocardiographic datasets and compared with advanced deep learning models to assess diagnostic performance. Results: Both MRANet and MLRANet outperformed existing deep learning models, achieving diagnostic accuracy comparable to that of experienced pediatric cardiologists. Notably, MLRANet demonstrated the highest sensitivity (93.48%) and specificity (66.15%), exceeding expert-level performance in detecting coronary artery abnormalities. Furthermore, MLRANet effectively distinguished incomplete KD from pneumonia, with diagnostic results aligned with those of the KD specialists. Conclusions: MLRANet has proven to be a valuable tool for computer-aided diagnosis of incomplete KD, offering accurate and reliable detection of coronary artery abnormalities without requiring specialist input. These findings suggest that MLRANet can facilitate timely and precise incomplete KD diagnosis, improving patient outcomes and addressing the shortage of pediatric cardiologists worldwide. © 2025 Elsevier Ltd</description>
      <pubDate>Sun, 31 Aug 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58607</guid>
      <dc:date>2025-08-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>SoN: Selective Optimal Network for smartphone-based indoor localization in real-time</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58156</link>
      <description>Title: SoN: Selective Optimal Network for smartphone-based indoor localization in real-time
Author(s): Lee, Kyungsu; Lee, Haeyun; Hwang, Jae Youn
Abstract: Deep learning-based scene recognition algorithms have been developed for real-time application in indoor localization systems. However, owing to the slow computation resulting from the deep structure of convolutional neural networks, deep learning-based algorithms are of limited use in real-time applications despite their high accuracy in classification tasks. To significantly reduce the computation time of these algorithms while slightly improving their accuracy, we propose a path-selective deep learning network, denoted the Selective Optimal Network (SoN). The SoN selectively uses depth-variable networks depending on a new indicator, denoted the classification complexity of a source image. The SoN reduces the prediction time by selecting the optimal depth of the baseline networks for each input sample. The network was evaluated using two public datasets and two custom datasets for indoor localization and scene classification, respectively. The experimental results indicated that, compared to other deep learning models, the SoN exhibited improved accuracy and enhanced processing speed by up to 78.59%. Additionally, the SoN was applied to a smartphone-based indoor positioning system in real time. The results indicated that the SoN shows excellent performance for rapid and accurate classification in real-time applications of indoor localization systems. © 2025</description>
      <pubDate>Wed, 30 Apr 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58156</guid>
      <dc:date>2025-04-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Predicting Obstructive Sleep Apnea Based on Computed Tomography Scans Using Deep Learning Models</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57331</link>
      <description>Title: Predicting Obstructive Sleep Apnea Based on Computed Tomography Scans Using Deep Learning Models
Author(s): Kim, Jeong-Whun; Lee, Kyungsu; Kim, Hyun Jik; Park, Hae Chan; Hwang, Jae Youn; Park, Seok-Won; Kong, Hyoun-Joong; Kim, Jin Youp
Abstract: Rationale: The incidence of clinically undiagnosed obstructive sleep apnea (OSA) is high among the general population because of limited access to polysomnography. Computed tomography (CT) of craniofacial regions obtained for other purposes can be beneficial in predicting OSA and its severity. Objectives: To predict OSA and its severity based on paranasal CT using a three-dimensional deep learning algorithm. Methods: One internal dataset (N = 798) and two external datasets (N = 135 and N = 85) were used in this study. In the internal dataset, 92 normal participants and 159 with mild, 201 with moderate, and 346 with severe OSA were enrolled to derive the deep learning model. A multimodal deep learning model was elicited from the connection between a three-dimensional convolutional neural network-based part treating unstructured data (CT images) and a multilayer perceptron-based part treating structured data (age, sex, and body mass index) to predict OSA and its severity. Measurements and Main Results: In a four-class classification for predicting the severity of OSA, the AirwayNet-MM-H model (multimodal model with airway-highlighting preprocessing algorithm) showed an average accuracy of 87.6% (95% confidence interval [CI], 86.8-88.6%) in the internal dataset and 84.0% (95% CI, 83.0-85.1%) and 86.3% (95% CI, 85.3-87.3%) in the two external datasets, respectively. In the two-class classification for predicting significant OSA (moderate to severe OSA), the area under the receiver operating characteristic curve, accuracy, sensitivity, specificity, and F1 score were 0.910 (95% CI, 0.899-0.922), 91.0% (95% CI, 90.1-91.9%), 89.9% (95% CI, 88.8-90.9%), 93.5% (95% CI, 92.7-94.3%), and 93.2% (95% CI, 92.5-93.9%), respectively, in the internal dataset. 
Furthermore, the AirwayNet-MM-H model outperformed the other six state-of-the-art deep learning models in terms of accuracy for both four- and two-class classifications and area under the receiver operating characteristic curve for two-class classification (P &lt; 0.001). Conclusions: A novel deep learning model, comprising a multimodal deep learning model and an airway-highlighting preprocessing algorithm for CT images obtained for other purposes, can provide precise outcomes for OSA diagnosis. Copyright © 2024 by the American Thoracic Society.</description>
      <pubDate>Sun, 30 Jun 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57331</guid>
      <dc:date>2024-06-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Machine Learning-Enhanced Skull-Universal Acoustic Hologram for Efficient Transcranial Ultrasound Neuromodulation Across Varied Rodent Skulls</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57327</link>
      <description>Title: Machine Learning-Enhanced Skull-Universal Acoustic Hologram for Efficient Transcranial Ultrasound Neuromodulation Across Varied Rodent Skulls
Author(s): Lee, Moon Hwan; Lee, Kyungsu; Yoo, Youngseung; Cho, HyungJoon; Chung, Euiheon; Hwang, Jae Youn
Abstract: Ultrasound neuromodulation (UNM) has gained significant interest in brain science due to its non-invasive nature, precision, and deep brain stimulation capabilities. However, the skull poses challenges along the acoustic path, leading to beam distortion and necessitating effective acoustic aberration correction. Acoustic holograms used with single-element ultrasound transducers offer a promising solution by enabling both aberration correction and multi-focal stimulation. A major limitation, however, is that hologram lenses designed for specific skulls may not perform well on other skulls, requiring multiple custom lenses for scaled studies. To address this, we introduce the Skull-Universal Acoustic Hologram (SUAH), which enables efficient transcranial UNM across various skull types. Our hologram generation framework integrates a physics-based acoustic hologram, differentiable acoustic simulation in heterogeneous media, and a gradient accumulation technique. SUAH, trained on a range of rodent skull shapes, demonstrated remarkable generalizability and robustness, even outperforming the Skull-Specific Acoustic Hologram (SSAH). Through comprehensive analyses, we showed that SUAH performs exceptionally well, even when trained on smaller datasets, significantly outperforming training based on individual skulls. In conclusion, SUAH shows promise as a scalable, versatile, and accurate tool for ultrasound neuromodulation, representing a significant advancement over conventional single-skull hologram lenses. Its ability to adapt to different skull types without the need for multiple custom lenses has the potential to greatly facilitate research in ultrasound neuromodulation. © IEEE.</description>
      <pubDate>Tue, 31 Dec 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57327</guid>
      <dc:date>2024-12-31T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

