Background: The automated segmentation of brain glioma regions in magnetic resonance (MR) images plays an important role in the early diagnosis, intraoperative navigation, radiotherapy planning, and prognosis assessment of brain tumors. Segmenting gliomas and their intratumoral structures is very challenging because the location, size, shape, edema extent, and boundary of gliomas are heterogeneous, and because multimodal glioma MR images (such as T1, T2, fluid-attenuated inversion recovery (FLAIR), and T1c images) are collected from multiple imaging centers.
Methods: This paper presents a multimodal, multi-scale, double-pathway 3D residual convolutional neural network (CNN) for automatic glioma segmentation. First, a robust gray-level normalization method is proposed to address the multicenter problem, in which different imaging protocols produce very different intensity ranges. Second, a multi-scale, double-pathway network based on the DeepMedic toolkit is trained with different combinations of multimodal MR images for glioma segmentation. Finally, a fully connected conditional random field (CRF) is applied as a post-processing step to refine the segmentation results by removing isolated false segmentations and filling holes.
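The abstract does not specify the exact form of the robust gray-level normalization, so the following is only an illustrative sketch of one common approach to the multicenter intensity problem: percentile-based clipping and rescaling, computed over brain (nonzero) voxels of each volume. The function name and percentile values are assumptions, not the paper's method.

```python
import numpy as np

def normalize_volume(vol, low_pct=1.0, high_pct=99.0):
    """Illustrative robust gray-level normalization for one MR volume.

    Clips intensities to the [low_pct, high_pct] percentiles (computed
    over nonzero/brain voxels, so background does not skew the range)
    and rescales to [0, 1], giving volumes acquired with different
    protocols a comparable intensity range.
    """
    brain = vol[vol > 0]  # ignore zero-valued background voxels
    lo, hi = np.percentile(brain, [low_pct, high_pct])
    clipped = np.clip(vol, lo, hi)  # suppress outlier intensities
    return (clipped - lo) / (hi - lo + 1e-8)

# Two synthetic "volumes" with very different intensity ranges,
# standing in for scans from two imaging centers.
rng = np.random.default_rng(0)
vol_a = rng.uniform(0, 400, size=(8, 8, 8))
vol_b = rng.uniform(0, 4000, size=(8, 8, 8))
norm_a = normalize_volume(vol_a)
norm_b = normalize_volume(vol_b)
```

After this step both volumes lie in [0, 1], so a network trained on one center's intensity scale can be applied to the other's.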
Results: Experiments on the Multimodal Brain Tumor Segmentation (BraTS) 2017 and 2019 challenge data show that our method achieves good performance in delineating the whole tumor, with a Dice coefficient, sensitivity, and positive predictive value (PPV) of 0.88, 0.89, and 0.88, respectively. For the segmentation of the tumor core and the enhancing area, the sensitivity reached 0.80.
Conclusions: Experiments show that our method can accurately segment gliomas and intratumoral structures from multimodal MR images, which is of great significance for clinical neurosurgery.