Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge
- Kuijf, Hugo J.; Biesbroek, J. Matthijs; de Bresser, Jeroen; Heinen, Rutger; Andermatt, Simon; Bento, Mariana; Berseth, Matt; Belyaev, Mikhail; Cardoso, M. Jorge; Casamitjana, Adria; Collins, D. Louis; Dadar, Mahsa; Georgiou, Achilleas; Ghafoorian, Mohsen; Jin, Dakai; Khademi, April; Knight, Jesse; Li, Hongwei; Llado, Xavier; Luna, Miguel; Mahmood, Qaiser; McKinley, Richard; Mehrtash, Alireza; Ourselin, Sebastien; Park, Bo-Yong; Park, Hyunjin; Park, Sang Hyun; Pezold, Simon; Puybareau, Elodie; Rittner, Leticia; Sudre, Carole H.; Valverde, Sergi; Vilaplana, Veronica; Wiest, Roland; Xu, Yongchao; Xu, Ziyue; Zeng, Guodong; Zhang, Jianguo; Zheng, Guoyan; Chen, Christopher; van der Flier, Wiesje; Barkhof, Frederik; Viergever, Max A.; Biessels, Geert Jan
- DGIST Authors
- Park, Sang Hyun
- Issue Date
- 2019
- IEEE Transactions on Medical Imaging, 38(11), 2556-2568
- Article Type
- Author Keywords
- Image segmentation; Three-dimensional displays; Manuals; White matter; Biomedical imaging; Radiology; Magnetic resonance imaging (MRI); brain; evaluation and performance; segmentation
- SMALL VESSEL DISEASE; VALIDATION
- Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of the performance of such methods is lacking. We organized a scientific challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the other methods, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation. © 2019 IEEE.
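The first and third ranking metrics are standard overlap and volume measures. As an illustration only (not the challenge's official evaluation code), the Dice similarity coefficient and the absolute log-transformed volume difference can be sketched for binary NumPy masks; the function names and toy masks below are assumptions for the example:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total > 0 else 1.0

def abs_log_volume_difference(pred, truth):
    """Absolute difference of the log-transformed lesion volumes
    (here counted in voxels; real volumes would use voxel size)."""
    return abs(np.log(pred.sum()) - np.log(truth.sum()))

# Toy example: two overlapping 2-D masks.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels, overlap 4
print(dice(a, b))                       # 2*4 / (4+6) = 0.8
print(abs_log_volume_difference(a, b))  # |ln 4 - ln 6| ≈ 0.405
```

The modified Hausdorff distance (95th percentile) and the lesion-wise sensitivity/F1 additionally require surface-distance and connected-component computations, e.g. via SciPy.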
- Institute of Electrical and Electronics Engineers
- Related Researcher
Park, Sang Hyun
Medical Image & Signal Processing Lab
Computer vision, artificial intelligence, medical image processing
- Department of Robotics Engineering > Medical Image & Signal Processing Lab > 1. Journal Articles
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.