Fully-Automated Identification of Brain Abnormality From Whole Body FDG PET Imaging by Deep-Learning Based Brain Extraction

Background: The whole brain is often covered in [18F]fluorodeoxyglucose positron emission tomography ([18F]FDG-PET) in oncology patients, but the covered brain abnormality is typically screened by visual interpretation without quantitative analysis in clinical practice. In this study, we aimed to develop a fully automated quantitative interpretation pipeline for the brain volume from an oncology PET image. Methods: We retrospectively collected five hundred oncologic [18F]FDG-PET scans for training and validation of the automated brain extractor. We trained the model for extracting the brain volume with two manually drawn bounding boxes on maximal intensity projection (MIP) images. ResNet-50, a convolutional neural network (CNN), was used for model training. The brain volume was automatically extracted using the CNN model and spatially normalized. As an application of this automated analytic method, we enrolled twenty-four subjects with small cell lung cancer (SCLC) and performed a voxelwise two-sample T-test for automatic detection of metastatic lesions. Results: The deep learning-based brain extractor successfully identified the existence of the whole-brain volume, with an accuracy of 98% for the validation set. The performance of extracting the brain, measured by the intersection-over-union (IOU) of the 3-D bounding boxes, was 72.9±12.5% for the validation set. As an example of the application to automatically identify brain abnormality, this approach successfully identified the metastatic lesions in three of the four SCLC patients with brain metastasis. Conclusion: Based on the deep-learning based model, the brain volume was successfully extracted from whole-body FDG PET. We suggest that this fully automated approach could be used for quantitative analysis of the brain metabolic pattern to identify abnormalities during clinical interpretation of oncologic PET studies.

tumors, quantitative information on the covered brain extracted from the oncology FDG-PET study could be utilized to identify brain abnormality as well as unexpected metastasis. More specifically, with the aid of automated analytic methods, such as statistical parametric mapping (SPM) [11,12], the information from brain PET images might improve sensitivity for detecting incidental brain disorders by measuring regional metabolic abnormalities [13], such as local-onset seizures [14] or Alzheimer's disease (AD) [15], as well as brain metastasis [16].
In this study, we aimed to develop a fully automatic quantitative analysis pipeline for the brain volume from a given oncology PET image. To achieve this goal, deep learning models were exploited to detect the location of the brain and to identify whether a given PET study included the whole brain. The detected brain was cropped and spatially normalized to the template brain. The automatically extracted and normalized brain volume could then be used to perform statistical analyses including SPM. As an example, we applied this model to identify brain metastasis from whole-body FDG PET imaging.
For the quantitative assessment of the extracted brain as an independent test, FDG PET images acquired from small-cell lung cancer (SCLC) patients were retrospectively collected. The scans were acquired from January 2014 to December 2017 at the same institute. To test whether our automated brain analysis pipeline identifies brain metastasis in SCLC patients, groups were defined according to the presence of brain metastasis. Four patients had brain metastasis confirmed by brain MRI at baseline and follow-up (age: 66.8 ± 6.5; M:F = 4:0). Twenty PET scans without brain metastasis according to the baseline brain MRI were regarded as controls (age: 71.2 ± 6.1; M:F = 17:3).

Image acquisition
As a routine protocol for FDG PET, after fasting for more than 4 hours, patients were intravenously injected with 5.18 MBq/kg of FDG. After 1 hour, PET images were acquired from the skull base to the proximal thigh using dedicated PET/CT scanners (Biograph mCT 40 or mCT 64, Siemens, Erlangen, Germany) at 1 minute per bed position. A Gaussian filter (FWHM 5 mm) was applied to reduce noise, and images were reconstructed using an ordered-subset expectation maximization algorithm (2 iterations and 21 subsets).
Deep learning model and training data for the brain extraction

We devised an automatic brain extractor with two objectives: 1) to evaluate whether a scan included the entire brain and 2) to establish a 3-dimensional bounding box enclosing the brain volume. To achieve these goals, we implemented a CNN-based deep-learning model according to the following procedure. A brief outline of the study is shown in Fig. 1.
For training of the model, a maximum intensity projection (MIP) image was generated for each PET scan. For each of the 500 MIP images, 2-dimensional bounding boxes were manually drawn on the anterior and lateral views. Coordinates from the two bounding boxes were merged to obtain the coordinates of a 3-dimensional bounding box for each PET image. Images that did not contain the full extent of the brain were classified separately as "not containing entire brain".
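The MIP generation step can be sketched as follows; the axis ordering of the PET volume (and the function name) is an assumption for illustration only:

```python
import numpy as np

def mip_views(volume):
    """Compute anterior and lateral maximum intensity projections.

    `volume` is assumed to be a 3-D SUV array ordered (z, y, x):
    z = axial position, y = anterior-posterior, x = left-right.
    """
    anterior = volume.max(axis=1)  # project along y -> (z, x) anterior view
    lateral = volume.max(axis=2)   # project along x -> (z, y) lateral view
    return anterior, lateral
```

Each view collapses one spatial axis by taking the maximum SUV along it, which is what makes high-uptake structures such as the brain visible in a single 2-D image.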
The two MIP images, anterior and lateral views, were changed to square matrices by zero-padding and resized to 224 x 224 using bilinear interpolation. The pixel values represented the standardized uptake value (SUV). To serve as inputs to a CNN model, pixel values were divided by 30, as most voxels of a PET volume have an SUV below 30 except for urine, and then multiplied by 255 to fall approximately in the range of 0 to 255.
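A minimal sketch of this preprocessing, assuming a 2-D MIP array of SUVs as input; the function name and the pure-NumPy bilinear resize are illustrative, not the authors' implementation:

```python
import numpy as np

def preprocess_mip(mip, size=224, suv_cap=30.0):
    """Zero-pad a MIP image to a square, resize to `size` x `size` with
    bilinear interpolation, and rescale SUVs to roughly 0-255."""
    h, w = mip.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=np.float64)
    square[:h, :w] = mip  # zero-padding to a square matrix

    # Bilinear resize: linear interpolation along columns, then rows.
    rows = np.linspace(0, side - 1, size)
    cols = np.linspace(0, side - 1, size)
    tmp = np.array([np.interp(cols, np.arange(side), square[r])
                    for r in range(side)])          # (side, size)
    resized = np.array([np.interp(rows, np.arange(side), tmp[:, c])
                        for c in range(size)]).T    # (size, size)

    # Divide by 30 (approximate SUV maximum outside urine), then scale to 255.
    return resized / suv_cap * 255.0
```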
We utilized ResNet-50 [17,18], a convolutional neural network pre-trained on the ImageNet database [19], as the learning model. ResNet-50 was implemented to preprocess the input data and predict the coordinates of 3-D bounding boxes from the MIP images. The pre-trained ResNet-50 extracted a feature vector from each of the two MIP views, and the extracted features were concatenated. An additional fully-connected layer with 4096 dimensions was connected to the concatenated feature vectors and then finally connected to two outputs. One output represented the coordinates of the bounding box of the brain as a 6-dimensional vector (coordinates for the 3 axes, plus the width, length, and depth of the bounding box along the 3 axes). The other output was a one-dimensional vector representing whether a given PET volume included the entire brain. Image augmentation was applied to the training dataset: MIP images were randomly augmented by multiplying voxel values, changing contrast, and scaling and translating the images.
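At the level of tensor shapes, the two-branch head described above can be sketched as follows. The 2048-dimensional pooled feature size is the standard ResNet-50 output; the random weights are placeholders for illustration, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two ResNet-50 backbone outputs (2048-dim pooled features),
# one per MIP view; random values for illustration only.
feat_anterior = rng.standard_normal(2048)
feat_lateral = rng.standard_normal(2048)

concat = np.concatenate([feat_anterior, feat_lateral])   # 4096-dim

W_fc = rng.standard_normal((4096, concat.size)) * 0.01   # shared FC layer
hidden = np.maximum(W_fc @ concat, 0.0)                  # ReLU activation

W_box = rng.standard_normal((6, 4096)) * 0.01
bbox = W_box @ hidden        # 6-dim: 3 coordinates + width, length, depth

W_cls = rng.standard_normal((1, 4096)) * 0.01
p_whole_brain = 1.0 / (1.0 + np.exp(-(W_cls @ hidden)))  # sigmoid output
```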
We performed internal validation with a randomly selected 10% of the data as a validation set. The loss function was defined by two terms: 1) the binary cross entropy of the output representing whether a given PET volume included the entire brain and 2) the mean squared error of the 6-dimensional vector representing the coordinates of the bounding box (Fig. 1). The weights of the two terms were set to alpha = 10 and beta = 1 in the summed loss function. We measured the intersection over union (IOU) of the predicted and labeled bounding boxes. From the predicted coordinates of the bounding boxes, we extracted brain images from the whole-body PET and spatially normalized them to the template space, as described below.
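A sketch of the combined loss and the IOU metric, assuming boxes are encoded as (corner x, y, z, width, length, depth). Note that the assignment of alpha to the classification term is an assumption, since the text does not specify which weight applies to which term:

```python
import numpy as np

def combined_loss(p_pred, y_true, box_pred, box_true, alpha=10.0, beta=1.0):
    """alpha * binary cross entropy (whole-brain classification)
       + beta * mean squared error (bounding-box regression)."""
    eps = 1e-7
    p = np.clip(p_pred, eps, 1.0 - eps)
    bce = -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
    mse = np.mean((np.asarray(box_pred) - np.asarray(box_true)) ** 2)
    return alpha * bce + beta * mse

def iou_3d(box_a, box_b):
    """IOU of axis-aligned 3-D boxes encoded as (x, y, z, w, l, d)."""
    a_min = np.array(box_a[:3]); a_max = a_min + np.array(box_a[3:])
    b_min = np.array(box_b[:3]); b_max = b_min + np.array(box_b[3:])
    # Overlap per axis, clipped at zero when the boxes do not intersect.
    inter = np.prod(np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None))
    union = np.prod(box_a[3:]) + np.prod(box_b[3:]) - inter
    return inter / union
```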
Processing of the extracted brain

The trained model was applied to whole-body PET images to extract the brain if the model predicted that the image contained the whole brain volume. FDG PET volumes were resliced to a voxel size of 2 x 2 x 2 mm³. We segmented the brain using the coordinates of the 3-D bounding boxes predicted by the model.
Padding of 10 voxels was applied along each axis to determine the brain volume. The extracted brain volumes were spatially normalized onto the Montreal Neurological Institute (McGill University, Montreal, Quebec, Canada) standard template. The spatial normalization was performed by symmetric normalization (SyN) with the cross-correlation loss function implemented in the DIPY package [20]. More specifically, a given extracted brain volume was first linearly registered to the template PET image with an affine transform, and the warping was then performed by the symmetric diffeomorphic registration algorithm. The spatially normalized PET volume was saved for further quantitative imaging analysis.
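The bounding-box cropping with 10-voxel padding can be sketched as follows; the box encoding and the clamping to the volume boundary are assumptions for illustration (the SyN registration itself is delegated to DIPY in the pipeline and is not reproduced here):

```python
import numpy as np

def crop_brain(volume, box, pad=10):
    """Crop the predicted brain bounding box (x, y, z, w, l, d) from a
    resliced PET volume, padding each axis by `pad` voxels and clamping
    the result to the volume boundaries."""
    x, y, z, w, l, d = box
    x0 = max(int(x) - pad, 0)
    y0 = max(int(y) - pad, 0)
    z0 = max(int(z) - pad, 0)
    x1 = min(int(x + w) + pad, volume.shape[0])
    y1 = min(int(y + l) + pad, volume.shape[1])
    z1 = min(int(z + d) + pad, volume.shape[2])
    return volume[x0:x1, y0:y1, z0:z1]
```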
Quantitative analysis of the extracted brain

The extracted and spatially normalized brain volume was analyzed with the quantitative software SPM12 (Institute of Neurology, University College London, London, U.K.) implemented in Matlab 2019b (The MathWorks, Inc., Natick, MA, U.S.). The normalized brain images were smoothed by convolution with an isotropic Gaussian kernel with a 10-mm full width at half maximum to increase the signal-to-noise ratio.
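The kernel width is specified as an FWHM in millimeters, so applying it on the 2-mm voxel grid described above requires the standard FWHM-to-sigma conversion:

```python
import numpy as np

# For a Gaussian kernel, FWHM = 2 * sqrt(2 * ln 2) * sigma (~2.355 * sigma).
fwhm_mm = 10.0
voxel_mm = 2.0   # volumes were resliced to 2 x 2 x 2 mm
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
sigma_vox = sigma_mm / voxel_mm

# The smoothing itself can then be applied with, e.g.,
# scipy.ndimage.gaussian_filter(volume, sigma=sigma_vox).
```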
For the twenty-four subjects with SCLC, we performed a voxelwise two-sample T-test comparing each of the four normalized brain volumes with metastatic lesions against the whole set of images from the twenty control subjects. Uncorrected P < 0.001 was applied to identify patient-wise metabolically abnormal regions.
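With a single scan in the patient group, the pooled-variance two-sample T-statistic reduces to a closed form; a vectorized sketch of that special case (not the SPM implementation, which additionally models the design matrix) could look like:

```python
import numpy as np

def voxelwise_t(patient, controls):
    """Voxelwise two-sample T comparing one patient volume against a stack
    of control volumes of shape (n_controls, ...).

    With one scan in the patient group, the pooled variance equals the
    control-group variance, so the statistic reduces to
    t = (x - mean_c) / (s_c * sqrt(1 + 1/n)), with n - 1 degrees of freedom.
    """
    n = controls.shape[0]
    mean_c = controls.mean(axis=0)
    s_c = controls.std(axis=0, ddof=1)   # unbiased sample SD per voxel
    return (patient - mean_c) / (s_c * np.sqrt(1.0 + 1.0 / n))
```

The resulting T-map is then thresholded voxelwise (here, uncorrected P < 0.001 with the appropriate degrees of freedom).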
For each of the four comparisons, we also constructed a map of T-statistics and extracted the peak T values. As a proof-of-concept study, we investigated whether the statistical analysis successfully revealed the metastatic lesions previously confirmed by brain MRI.

Results
Extraction of the brain volume

The deep learning-based brain extractor successfully identified the existence of the whole-brain volume, with an accuracy of 98% for the internal validation set. The performance of extracting the brain, measured by the IOU of the 3-D bounding boxes, was 72.9±12.5% for the validation set. Using the predicted coordinates, all brains were successfully cropped and automatically normalized into the template space.
Representative images used for internal validation of the model are shown in Fig. 2. In both the "torso" PET covering up to the mid-thigh and the "total-body" PET covering the whole height of the body, the extractor successfully located the brain (Fig. 2a, b). The extractor was also capable of identifying the brain when an artifact caused by the radiopharmaceutical injection was projected onto the brain in the MIP image (Fig. 2c). When the brain volume was not fully included, the extractor classified the image as "not containing entire brain" (Fig. 2d).
Identification of brain metastasis from whole-body FDG PET using the fully-automated brain analysis pipeline

The fully-automated brain extraction and quantitative analysis was applied to the patients with SCLC as a proof-of-concept study. The voxelwise T-test successfully identified the metastatic brain lesions in three of the four subjects in the case group (P < 0.001). In all three successful cases, the analysis revealed hypometabolic lesions due to edematous change around the lesion (Fig. 3). In the remaining, unsuccessful case, the statistical analysis showed diffuse hypometabolism in the frontoparietal lobe instead of a focal metabolic defect at the metastatic site (Fig. 3).

Discussion
With the increased utilization of FDG-PET in neurologic disorders, a large body of literature has suggested methods for the quantitative analysis of brain FDG-PET images [12,21,22]. However, most subjects undergoing FDG-PET, especially oncologic patients, have not benefited from this progress because of the lack of a handy, automated method of quantification. This work aims to achieve the first step of this automation through deep learning-based extraction of the brain volume from the oncologic PET scan, followed by a scout quantitative analysis of the extracted brains. Fully-automated brain extraction that provides quantitative information in oncologic PET can be integrated into a system that warns of metastatic lesions or major brain diseases that might be overlooked in visual reading.
The key step of automated quantitative brain imaging analysis from oncologic PET images was the extraction of the brain volumes. For this purpose, we implemented a CNN-based deep learning model; CNNs have been successful in solving a variety of problems in the field of image processing, including image classification, object detection, and segmentation [23,24], and a vast portion of this success extends to medical image processing [25,26]. In most of the internal validation scans, the automatic brain extractor based on ResNet-50 successfully identified the coverage of the full brain in the whole-body scan and located and extracted the brain volume, even in the presence of artifacts projected onto the MIP image.
The extracted brain PET volume can be analyzed by many conventional quantitative approaches. In this study, for the fully-automated process, we employed a spatial normalization process based on the SyN algorithm implemented in the DIPY package. Notably, the spatial normalization after the brain extraction was fully automated. The spatially normalized brain can be further analyzed by quantitative software packages including SPM and 3-dimensional stereotactic surface projection (3D SSP) [27]. In this work, as a proof-of-concept study, we applied SPM to identify metastatic lesions. This illustrates the implications of the automatic brain extraction we performed, which could potentially aid in the identification of unexpected metastasis during visual interpretation of oncologic PET studies. Moreover, this method could be used to identify overlooked brain abnormalities such as dementia as well as tumorous lesions in the brain.
In the quantitative analysis, as a proof-of-concept study, age matching was not performed between the metastatic subjects and the control group, which yielded rather nonspecific decreased metabolism along the cerebral cortex in an elderly subject. This might have resulted from the physiologic decrease in gray matter volume that accompanies normal aging [28,29]. Adjustment for patient factors (e.g., age and underlying disease) would be crucial to detect a localized metabolic disorder apart from the diffuse change in metabolism resulting from systemic conditions.

Conclusions
Based on the deep learning-based model, we successfully developed a fully automated brain analysis method for oncologic FDG PET. The model could identify the existence of the brain volume, locate the contour of the brain in the PET image, and perform the spatial normalization to the template. The quantitative analysis showed the feasibility of identifying metastatic brain lesions. We suggest that the model could be used to support FDG PET interpretation and analysis by finding unexpected brain abnormalities, including metastasis as well as brain disorders.

Consent for publication
Not applicable.
Availability of data and material

Not applicable.

Figure 1

Brief outline of the automatic brain extraction. We trained the model with two manually drawn bounding boxes on maximal intensity projection (MIP) images. ResNet-50, a convolutional neural network (CNN), was used as the learning model. Internal validation of the model was performed. Finally, the brain volume was extracted and spatially normalized to the template space.

Figure 2

Representative results of the automatic brain extractor. (a, b) In both the "torso" PET covering up to the mid-thigh and the "total-body" PET covering the whole height of the body, the extractor successfully located the brain. (c) The extractor was also capable of identifying the brain when an artifact caused by the radiopharmaceutical injection was projected onto the brain in the MIP image. (d) When the brain volume was not fully included, the extractor classified the image as "not containing entire brain".

Figure 3
Quantitative analysis of the extracted brain. The voxelwise T-test successfully identified the metastatic brain lesions in three of the four subjects in the case group (uncorrected P < 0.001). The graphics on the left show the brain regions with hypometabolism compared to the control group; the image on the right shows the corresponding FDG-PET image. (a, b, c) In all three successful cases, the analysis revealed hypometabolic lesions due to edematous change around the lesion. (d) In the remaining, unsuccessful case, the statistical analysis showed diffuse hypometabolism in the frontoparietal lobe instead of a focal metabolic defect at the metastatic site.