
Full metadata record

DC Field Value Language
dc.contributor.author Ullah, Ihsan -
dc.contributor.author Ali, Farman -
dc.contributor.author Shah, Babar -
dc.contributor.author El-Sappagh, Shaker -
dc.contributor.author Abuhmed, Tamer -
dc.contributor.author Park, Sang Hyun -
dc.date.accessioned 2023-01-26T15:40:16Z -
dc.date.available 2023-01-26T15:40:16Z -
dc.date.created 2023-01-26 -
dc.date.issued 2023-01 -
dc.identifier.issn 2045-2322 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/17507 -
dc.description.abstract Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder–decoder convolutional neural network (CNN). The first network in the dual encoder–decoder structure effectively utilizes a pre-trained VGG19 as an encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network’s representation power, which enables it to perform dynamic channel-wise feature calibration. The calibrated features are passed into the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder–decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that the proposed technique outperforms existing multi-class and single-class segmentation methods. © 2023, The Author(s). -
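The abstract describes a two-stage pipeline: a VGG19-encoded segmentation network with SE recalibration whose predicted mask is fused with the input image and refined by a second encoder–decoder. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the module names, channel sizes, the placeholder second stage (the paper uses recurrent residual blocks and attention gates), and the choice of framework are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19


class SEBlock(nn.Module):
    """Channel-wise squeeze-and-excitation recalibration."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))       # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)         # excite: rescale each channel


class SimpleDecoder(nn.Module):
    """Minimal upsampling decoder producing a single-channel mask."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        return self.up(x)


class DualEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder1 = vgg19(weights="IMAGENET1K_V1").features  # pre-trained VGG19 encoder
        self.se = SEBlock(512)
        self.decoder1 = SimpleDecoder(512)
        # Placeholder second encoder-decoder; the paper's version uses recurrent
        # residual blocks and attention gates, which are omitted here for brevity.
        self.stage2 = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        feats = self.se(self.encoder1(x))          # VGG19 features + SE recalibration
        mask1 = torch.sigmoid(self.decoder1(feats))
        fused = torch.cat([x, mask1], dim=1)       # integrate coarse mask with input image
        return self.stage2(fused)                  # refined segmentation logits


if __name__ == "__main__":
    model = DualEncoderDecoder()
    out = model(torch.randn(1, 3, 224, 224))       # grayscale X-ray replicated to 3 channels
    print(out.shape)                               # torch.Size([1, 1, 224, 224])
```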
dc.language English -
dc.publisher Nature Publishing Group -
dc.title A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images -
dc.type Article -
dc.identifier.doi 10.1038/s41598-023-27815-w -
dc.identifier.scopusid 2-s2.0-85146348903 -
dc.identifier.bibliographicCitation Scientific Reports, v.13, no.1 -
dc.description.isOpenAccess TRUE -
dc.subject.keywordPlus COMPUTER-AIDED DIAGNOSIS -
dc.subject.keywordPlus CONVOLUTIONAL NEURAL-NETWORKS -
dc.subject.keywordPlus LUNG SEGMENTATION -
dc.subject.keywordPlus AUTOMATED SEGMENTATION -
dc.subject.keywordPlus RADIOGRAPHS -
dc.subject.keywordPlus REGIONS -
dc.subject.keywordPlus FIELD -
dc.subject.keywordPlus SHAPE -
dc.subject.keywordPlus IDENTIFICATION -
dc.citation.number 1 -
dc.citation.title Scientific Reports -
dc.citation.volume 13 -

