A Hybrid CNN-GLCM Classifier For Detection And Grade Classification Of Brain Tumor

A supervised CNN Deep net classifier is proposed for the detection, classification, and diagnosis of meningioma brain tumor using a deep learning approach. The proposed method includes preprocessing, classification, and segmentation of this primary brain tumor of adults. The proposed CNN Deep net classifier extracts features internally from the enhanced image and classifies the images as normal or abnormal (tumor). The tumor region is segmented by global thresholding together with an area morphological function. This fully automated classification and segmentation of brain tumor preserves spatial invariance and inheritance. Furthermore, based on its feature attributes, the proposed CNN Deep net classifier grades the detected tumor image as either benign (low grade) or malignant (high grade). The proposed CNN Deep net classification approach with its grading system is evaluated both quantitatively and qualitatively. Quantitative measures such as sensitivity, specificity, accuracy, Dice similarity coefficient, precision, and F-score show a segmentation accuracy of 99.4% and a classification rate of 99.5% with respect to ground truth images.

A tumor is a volume of irregular and abnormal cells that affects the function of nearby healthy cells in the human body. Meningioma is the most commonly occurring tumor and arises in the dura mater of the meninges, the outer tissues of the brain. Meningiomas are mostly extra-axial neoplasms, and only a few are intracranial tumors. Tumors are generally classified as benign (noncancerous) or malignant (cancerous) (Bakhshi et al. 2019). A benign tumor does not spread to other parts of the body and does not generate new tumors. A malignant tumor grows as a large mass and crowds out healthy cells by taking nourishment from body tissues. Malignant tumors grow quickly, and their rate of recurrence is high even after surgery. These tumors spread to other parts of the body through the lymphatic and blood vessels (Menze, Reyes, and Van Leemput 2019). It is observed that 70 percent of these tumors are benign and are categorized as Grade I. A Grade I benign tumor has a low rate of cancerous cell multiplication, specified as the mitotic rate per high-power field (HPF); benign tumors always have fewer than 4 mitoses per HPF. Atypical meningioma (Grade II) accounts for 22-26 percent of the total tumor occurrence rate, with a growth rate of up to 15 mitoses per HPF. Anaplastic meningioma (Grade III) is dangerous and occurs in fewer numbers, but its mitotic rate is higher than 18 mitoses per HPF and patients are at a high risk of death (Amin et al. 2019). In general, the diagnosis of brain tumor begins with magnetic resonance imaging (MRI), as it provides detailed information on both hard and soft tissues, along with the fat and fluid substances of the brain, through electromagnetic fields. The NCIS and WHO have reported that around 13,000 people are affected by this tumor every year. The death rate increases progressively every year because of late diagnosis, and the manual detection and classification of tumors is one of the greatest challenges pathologists face.
According to the reports published in Cancer.Net and by the WHO (Alqazzaz et al. 2019), automated architectures for the detection and classification of tumors are an emerging research area in the field of medicine (Bousselham et al. 2019). This has prompted a number of researchers to develop cost-effective and more precise automated algorithms for the detection, classification, and diagnosis of tumors. Sun et al. presented a 3D CNN architecture and deep learning-based framework for brain tumor segmentation, together with survival prediction using multimodal MRI scans. Decision tree-based classification with cross validation reduced model bias, and 4524 features were extracted. A random forest approach was used for survival prediction and achieved 61% accuracy on short, mid, and long survivals (Sheela et al. 2020).

Contribution
Machine learning approaches require a large number of features for effective classification of brain tumors. Different ML approaches perform feature extraction separately and then classify normal and abnormal brain MRI images. Such external feature extraction and classification approaches are not fully automatic: they require labeling of classes for classification. Moreover, conventional methods fail to achieve spatial inheritance and invariance. To overcome these drawbacks, a fully automated classification approach, CNN Deep net, is proposed. The architecture consists of five convolutional layers with ReLU activations for nonlinearity and max-pooling layers for feature reduction, followed by two fully connected layers and a classification layer. Through data augmentation, the proposed work satisfies the requirement for a large volume of data. Combined, fully automated feature extraction and supervised classification are achieved through the convolutional and classification layers. Hyperparameter tuning techniques such as normalization, regularization, and dropout improve the accuracy of the proposed CNN Deep net classifier. Furthermore, a novel hybrid GLCM-CNN classification approach is proposed to diagnose the detected brain tumor as low grade (benign) or high grade (malignant). Figure 1 shows a brain tumor MRI image. A block diagram of the proposed CNN Deep net is shown in Fig. 2.

Materials
The performance of the proposed CNN Deep Net is evaluated on the MR brain images of the open-access BRATS dataset. The dataset includes ground truth images obtained from an expert radiologist. This paper uses 600 brain images, comprising 340 normal brain images and 260 abnormal brain images, grouped into training and testing sets. The training dataset contains 90 normal images and 75 abnormal images.

Methods
The meningioma tumor prediction and classification in MR images is carried out using the proposed CNN Deep net. All dataset images of 512 × 512-pixel resolution are resized to 256 × 256 pixels, achieving uniform dimensionality with a scaling range of [1, 1.2]. Each preprocessed brain MR image is classified as either a normal or an abnormal (tumor) image by the proposed CNN Deep net classifier. The tumor regions are segmented using a global thresholding approach fused with the connected component method. A combined GLCM and CNN classifier is proposed in this paper for the diagnosis of the segmented tumor regions.

Preprocessing
The preprocessing stage resizes the source brain images for uniform dimensionality. These resized images are then flipped (up and down), rotated, and skewed by data augmentation. Data augmentation thus helps the classifier obtain highly accurate and precise classification results. The data augmentation output of the proposed CNN Deep net is listed in Table 1, and Fig. 3 shows the augmented results of a brain MR image.
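The flip and rotation augmentations described above can be sketched with plain NumPy array operations (skewing additionally needs an interpolating transform, e.g. from an image library). The function name and dummy image below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of a 2-D image array:
    up-down flip, left-right flip, and 90/180/270-degree rotations."""
    return {
        "flip_ud": np.flipud(image),
        "flip_lr": np.fliplr(image),
        "rot90": np.rot90(image, 1),
        "rot180": np.rot90(image, 2),
        "rot270": np.rot90(image, 3),
    }

# Example: a 256 x 256 dummy stand-in for a resized MRI slice
img = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
variants = augment(img)
print(len(variants), variants["rot90"].shape)
```

Each variant is a view or copy of the same array, so a 600-image dataset expands severalfold at negligible cost before training.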

Feature extraction and Classification
On the basis of the neural network's parameter-sharing property, the parameters or features are reduced in number, and the network learns through indirect interactions in the deep learning approach. An end-to-end learning technique is proposed, in which the deep learning architecture combines feature extraction and classification, so the classifier learns automatically. Lower-level and higher-level features are obtained from the shallow and deeper layers of activations, respectively; together these represent the features of the training and test images. This supervised deep learning representation uses both linear and non-linear transformations across the multiple layers of the neural network, making predictions more accurate.
A convolutional neural network, a high-speed analysis tool, is used in the proposed methodology for the classification of 2D images. The learned image features make the deep learning model powerful enough to train the full network for classification. The CNN structure comprises convolutional, max-pooling, and fully connected layers. The internal architecture of the CNN classifier is depicted in Fig. 4.

Convolutional Filters
The convolutional layer captures both higher- and lower-level features, such as edges, smoothing, and sharpening. The convolution of an input image with a feature detector, called a kernel filter, is carried out across the image by the sliding-window method. The depth of the filter equals that of the input image, and the proposed method uses a 3 × 3 kernel filter. An element-wise multiplication is performed and summed to give feature maps, the outputs of the convolutional layer. Each value of a feature map measures the match between the input image patch and the filter; hence the feature map is a 2D matrix. Feature maps are also known as activation maps or convolved features.
The convolution operation is given in Eq. (1):

(I ∗ J)(τ) = ∫ I(t) J(τ − t) dt (1)

The symbol ∗ indicates the convolution process, and the equation represents the overlap of the filter (J) area with the input image (I) at a shift τ.
Applying tight bounds to the integral, the equation can be rewritten in discrete form as

(I ∗ J)(τ) = Σ_t I(t) J(τ − t) (2)

Equation (2) corresponds to a single entry in 1D. To compute complete convolution tensors, multidimensional kernel tensors are required; for a 2D image I and kernel K, the convolution is given by

S(x, y) = (I ∗ K)(x, y) = Σ_m Σ_n I(m, n) K(x − m, y − n) (3)

The convolution slides the kernel over the input image as in Eq. (3), and each output is a scalar value. This process is repeated for every (x, y) pixel, and the results are stored in a convolutional matrix called a feature map. The main purpose of the convolutional layer is to reduce the size of the input image for fast subsequent processing. The performance of the CNN architecture can be improved by using consecutive convolutional layers with different filters: more convolutional layers and filters yield a greater number of learned features for the image, and spatial preservation is thereby achieved effectively. In short, the feature map of a convolutional layer extracts the features that are important and essential for classification. The spatial relationship between pixels (spatial invariance), hierarchical feature learning, and scalability are preserved in the convolutional layer. The convolved features or activation maps are passed through a ReLU function to increase the non-linearity of the network:

f(x) = max(0, x) (4)

where f(x) is the ReLU activation function and its output ranges from 0 to ∞. Because ReLU-based recognition depends on transitions between adjacent pixels, images are processed correctly even when rotated, tilted, or shifted. Thus, the CNN architecture obtains its high performance through supervised feature extraction, in which the learning process occurs on its own, unlike in other artificial neural networks. Figure 5 shows the features of a brain MRI image obtained from different convolutional layers of the proposed CNN Deep net.
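As a concrete illustration, the following minimal NumPy sketch implements a single-channel 2-D sliding-window convolution (stride 1, no padding) followed by ReLU. The 3 × 3 edge-detection kernel is a common textbook example, not one of the paper's learned filters:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a k x k kernel over a 2-D image (stride 1, no padding) and
    return the feature map: element-wise products summed per window.
    Like most CNN frameworks, this is cross-correlation; flipping the
    kernel would give the textbook convolution."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + k, x:x + k] * kernel)
    return out

def relu(x):
    """f(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)  # a 3x3 edge detector
image = np.random.rand(8, 8)
fmap = relu(conv2d(image, edge_kernel))
print(fmap.shape)  # (6, 6): 8 - 3 + 1 in each dimension
```

Real frameworks vectorize this loop and stack one such feature map per kernel to form the layer's output volume.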

Pooling layers
The pooling layers down-sample the extracted features: the many pixel values of a convolved feature are reduced to a level suitable for further processing in the neural network. The pooling layers of a CNN architecture comprise max, min, and average pooling, which function as their names imply; an example is shown in Fig. 6. Max pooling returns the maximum value among the features in a region, while average pooling returns their average. Max pooling provides robustness to spatial distortion and a size reduction of 75%. Hence, this paper uses max pooling with a 2 × 2 filter mask, which shows better performance in image classification. The max-pooling filter is applied to the result of each convolutional layer. The depth of the feature map remains the same after pooling, with non-overlapping regions. The proposed method thereby reduces the chance of overfitting and cuts the number of parameters to 25% of the original number obtained from the input MRI brain image.
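The non-overlapping 2 × 2 max pooling described above can be sketched as follows (assuming a single-channel feature map; in the full network the same operation is applied to each depth slice independently):

```python
import numpy as np

def max_pool_2x2(fmap):
    """Non-overlapping 2x2 max pooling: keep the largest activation in
    each 2x2 block, halving height and width (75% fewer values)."""
    h, w = fmap.shape
    h2, w2 = h // 2, w // 2
    return fmap[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 0],
                 [7, 2, 9, 8],
                 [1, 4, 3, 5]], dtype=float)
pooled = max_pool_2x2(fmap)
print(pooled)  # [[6. 4.]
               #  [7. 9.]]
```

The 4 × 4 map shrinks to 2 × 2: exactly one value survives per block, which is the 75% reduction cited in the text.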

Fully connected layer
The pooled convolved features must be flattened sequentially into a single column. This flattened single column achieves spatial invariance and acts as the input vector that is forwarded to the hidden layers of the fully connected stage for classification. High-level features are combined with the attributes, so the prediction of categories or classes becomes even better. The fully connected layer learns all the possible non-linear functions as the 3D feature volume is fed forward to produce a 1D feature vector. At this stage, the volume is deep enough, with an increased number of kernels, pooling layers, and convolutional layers. Batch gradient descent computes the error function of the neural network over the entire training set at a time, whereas stochastic (mini-batch) gradient descent involves only a mini-batch of the training set in each update. The proposed architecture therefore uses the stochastic gradient descent approach and thereby achieves high speed. Thus, a multilayer perceptron feed-forward neural network with pooling layers performs the classification through weight-adjusting operations.
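To make the batch-versus-mini-batch distinction concrete, here is a minimal mini-batch SGD sketch on a toy least-squares problem; the data, learning rate, and batch size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # toy features (stand-in for flattened maps)
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr, batch = 0.1, 32                       # mini-batch SGD: one update per batch,
for epoch in range(50):                   # not one per full pass over the data
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # d/dw of MSE on the batch
        w -= lr * grad
print(np.round(w, 2))  # close to [1.5, -2.0, 0.5]
```

Because each update touches only 32 samples instead of all 200, many more weight adjustments happen per epoch, which is the speed advantage the text refers to.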

Proposed CNN Deep Net
An efficient deep learning architecture is proposed for the classification of brain tumors. The proposed CNN Deep Net distinguishes normal and tumor-affected brain images in the MR image dataset. As the proposed method is supervised, feature extraction and classification are inherently built in, giving the process more trainable features. In the proposed CNN Deep net, the convolution process is linear and continuous. The convolutional module convolves the preprocessed image with 64 convolutional kernel filters of size 3 × 3 with a single stride. This convolution produces 256 × 256 × 64 feature maps, which are stacked along their depth to create an output volume. The convolutional layer yields the different features dictated by each kernel, and no padding is required for the proposed architecture. The framework of the proposed method is shown in Fig. 7.
In this paper, a large number of features and parameters are obtained from the flattened layer's single column vector. The proposed CNN Deep net consists of two fully connected hidden layers for the two classes and an output classification layer with SoftMax activation. An output neuron value of 0 represents the non-tumor case, and a value of 1 represents the tumor case. Table 2 shows the layers of the proposed CNN Deep net with their output shapes and learnable parameters; the analysis report is obtained from a deep learning network analysis tool, and the activation sizes, numbers of extracted features, and learnable parameters with weights and biases are tabulated. The total number of learnable parameters of the proposed CNN Deep net is 5,66,24,136 (about 56.6 million).

Segmentation
The classified meningioma brain tumor image is segmented: dilated and eroded images are obtained by global opening and closing functions, respectively, and the two images are subtracted to obtain the threshold image. To improve the segmentation, an area morphological function is applied to the threshold image to eliminate stray pixels. Labeling, an important step of image analysis, is then carried out through the connected component method, which groups pixels based on their connectivity. Labeling is applied to the threshold image to segregate the tumor regions of the brain from the normal regions: the MRI image is scanned pixel by pixel to group pixels of the same intensity, and 8-point connectivity is assumed for the analysis. Experimental segmentation results show good performance, with sharp boundary detection of tumor pixels; the boundary detection of the proposed method is highly effective in comparison to conventional region-growing algorithms.
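The thresholding and 8-connectivity labeling steps can be sketched as follows. This is a pure NumPy/BFS illustration: the morphological area filtering and the opening/closing subtraction are omitted, and the threshold value and toy image are assumptions for demonstration:

```python
import numpy as np
from collections import deque

def global_threshold(image, t):
    """Binary mask: 1 where intensity exceeds the global threshold t."""
    return (image > t).astype(np.uint8)

def label_8_connected(mask):
    """Label connected components of a binary mask with 8-connectivity,
    scanning pixel by pixel and flood-filling each unlabeled foreground pixel."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels, current

img = np.zeros((8, 8))
img[1:3, 1:3] = 0.9      # one bright "tumor-like" blob
img[5:7, 5:7] = 0.8      # a second, separate blob
labels, n = label_8_connected(global_threshold(img, 0.5))
print(n)  # 2 separate regions
```

Each labeled region can then be filtered by area or passed on for feature extraction, as the pipeline above describes.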
The segmented tumor region using proposed method is shown in Fig. 8.

Multi-perceptron CNN-based diagnosis system
The CNN architecture diagnoses low grade (benign) and high grade (malignant) tumors from the segmented tumor regions by extracting their features. The grade classification is based on the grey level co-occurrence matrix (GLCM): the strong features of the tumor region obtained from the GLCM, fused with the CNN deep features, form a feature vector, and grade classification is performed on this vector through the CNN classification layer. The proposed diagnosis system for grading uses a single neuron at the output layer; a classification result of 0 from the CNN indicates a low-grade case and 1 a high-grade case. Figure 9 illustrates the diagnosis system.

GLCM-CNN classifier
Texture properties of the segmented tumor region are computed using GLCM features. These are spatial features obtained from the co-occurrence matrix of the preprocessed brain image at a 90-degree orientation. The GLCM features suited to the proposed methodology are contrast, entropy, mean, and homogeneity. In this proposed diagnosis, 260 brain MRI images, with 156 low grade (benign) and 104 high grade (malignant) cases, are used for training. A total of 1040 features are extracted, as 4 features are computed for each image. The resulting feature vectors are applied to the fully connected classification layer of the CNN classifier to separate low-grade and high-grade tumors. The GLCM features used in the proposed approach are stated in Eqs. (5)-(8):

Contrast = Σ_i Σ_j (i − j)² P(i, j) (5)

Entropy = − Σ_i Σ_j P(i, j) log P(i, j) (6)

Mean = Σ_i Σ_j i · P(i, j) (7)

Homogeneity = Σ_i Σ_j P(i, j) / (1 + |i − j|) (8)

where P(i, j) is the normalized GLCM constructed at an orientation of 90 degrees; i and j index its rows and columns, and the sums run over all rows and columns of the matrix. Figures 10a and 10b show the proposed GLCM-CNN classifier results for low-grade and high-grade brain MRI images.
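Assuming a co-occurrence matrix of vertically adjacent pixel pairs (distance 1, 90°) normalized to probabilities, Eqs. (5)-(8) can be computed as in this sketch; the quantized toy image and the log base are illustrative choices:

```python
import numpy as np

def glcm_90(image, levels):
    """Grey-level co-occurrence matrix for vertically adjacent pixel
    pairs (distance 1, 90-degree orientation), normalized to probabilities."""
    P = np.zeros((levels, levels))
    for y in range(image.shape[0] - 1):
        for x in range(image.shape[1]):
            P[image[y, x], image[y + 1, x]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast, entropy, mean, and homogeneity of a normalized GLCM,
    following Eqs. (5)-(8)."""
    i, j = np.indices(P.shape)
    nz = P > 0                            # skip zero entries in the entropy log
    return {
        "contrast": np.sum((i - j) ** 2 * P),              # Eq. (5)
        "entropy": -np.sum(P[nz] * np.log2(P[nz])),        # Eq. (6)
        "mean": np.sum(i * P),                             # Eq. (7)
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),  # Eq. (8)
    }

img = np.array([[0, 0, 1, 1],            # a tiny 4-level quantized image
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = glcm_features(glcm_90(img, levels=4))
print(sorted(feats))
```

The four numbers per image, stacked over all 260 images, give the 1040-entry feature pool described above.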

Results and Discussion
In this article, the proposed CNN Deep net detection and classification methodology is applied to a set of brain MR images from BRATS, an open-access dataset. From the dataset, 600 brain images of 512 × 512 pixels are used, including 340 normal brain images and 260 tumor-affected brain images. MATLAB R2020b is used as the simulation tool for detection, classification, and segmentation. Table 3 shows the performance metrics of the proposed CNN Deep net classifier on the BRATS brain MR image dataset. The validation accuracy, validation loss, and training loss at different epochs are obtained from the simulation, and the average accuracy of the proposed Deep net is tabulated.
The performance of the proposed CNN Deep net depends on its accuracy and loss values across epochs; accuracy and loss curves are therefore plotted for the training and test datasets. Figure 11 shows the performance analysis of the proposed CNN Deep net. The performance is improved by hyperparameter tuning, including batch normalization, regularization, and 50% dropout with a 0.01 learning rate.

Performance evaluation metrics
Both subjective and objective comparisons of classification and segmentation are expressed in terms of sensitivity, specificity, accuracy, precision, F-score, DSI, etc. The proposed CNN Deep net is analyzed by means of its classification rate, the ratio of the number of correctly classified objects to the total objects involved in the method. In the proposed method, the classification rate over the 340 normal MRI images and 260 tumor-affected MRI images considered for classification is 99.5%. Thus, an average classification rate of 99.5% and a validation accuracy of 99.4% are achieved by the proposed method.
Further, the following performance evaluation metrics are considered for the proposed CNN Deep net brain tumor detection system. The correlation between the correctly classified pixels is captured by sensitivity and specificity, while accuracy defines the rate of correctly detected pixels in the segmented brain image. Precision states how well the tumor-free healthy pixels are detected. The Dice Similarity Index is used to analyze how closely the correctly detected pixels match the ground truth image. All these parameters are measured in percentages and vary between 0 and 100. The evaluation metrics are derived from a confusion matrix containing TP and TN, the correctly detected tumor and non-tumor pixels, and FP and FN, the incorrectly detected tumor and non-tumor pixels. Table 4 lists the Sensitivity (Sen), Specificity (Spe), Accuracy (Acc), Precision (Pr), F-score, and Dice Similarity Index (DSI) values. In Table 5, the classification rate of the well-known LeNet CNN architecture with its pooling layers is compared with that of the proposed CNN Deep net: the proposed work, designed with max-pooling layers, achieves a higher classification rate. On the basis of the centroid index of the segmented region, the morphological parameters circumference (C), perimeter (P), area (A), and eccentricity (Ecc) are calculated, and their values are tabulated in Table 6.
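All of the listed metrics follow directly from the four confusion-matrix counts; a small sketch with hypothetical pixel counts (not the paper's actual values):

```python
def metrics(tp, tn, fp, fn):
    """Pixel-wise evaluation metrics derived from the confusion matrix."""
    sen = tp / (tp + fn)                   # sensitivity (recall)
    spe = tn / (tn + fp)                   # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    pre = tp / (tp + fp)                   # precision
    f1 = 2 * pre * sen / (pre + sen)       # F-score
    dsi = 2 * tp / (2 * tp + fp + fn)      # Dice similarity index
    return {"Sen": sen, "Spe": spe, "Acc": acc, "Pr": pre, "F": f1, "DSI": dsi}

# Hypothetical pixel counts for one segmented image vs. its ground truth
m = metrics(tp=940, tn=9000, fp=30, fn=30)
print({k: round(v * 100, 1) for k, v in m.items()})
```

Expressed as percentages, each value lies between 0 and 100, matching the ranges quoted in the text.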
In this paper, 156 low-grade cases and 104 high-grade cases, a total of 260 tumor-affected MRI brain images, are used for diagnosis. The features extracted from the segmented regions are tabulated in Table 7, which shows the GLCM features of low-grade and high-grade tumor images. A diagnosis rate of 99.3% is obtained, as the proposed system classifies the low-grade cases correctly with a ratio of 155:156; similarly, it produces a 98.07% diagnosis rate when classifying the high-grade cases. Thus, the proposed method obtains an average diagnosis rate of 98.6%. The diagnosis results of the proposed method are tabulated in Table 8, and Table 9 compares the proposed simulation results with those of other conventional methods on the same dataset images. It is inferred from these comparisons that CNN with minimal usage of the GPU boosts the performance of the deep learning architecture for medical image processing.

Discussion
It is observed that metrics such as sensitivity, specificity, accuracy, precision, and F-score of the proposed CNN Deep Net show better performance when compared with other machine learning methods on the same dataset. Table 10 compares the proposed CNN Deep Net architecture with pre-trained CNN architectures. The proposed method is also applied to different image datasets to test its feasibility. The qualitative analysis of the proposed method on MR brain, breast, and cervix images is given in Fig. 12: Fig. 12A shows the source images, Fig. 12B the ground truth images, Fig. 12C the thresholding segmentation results, Fig. 12D the contour segmentation results, and Fig. 12E the proposed segmentation results. Figure 13 shows the performance chart, and Fig. 14 shows the performance comparison of the proposed CNN Deep Net classifier with various conventional classifiers.

Conclusion
In this paper, the detection and classification of meningioma brain tumor by a CNN Deep net is proposed. The CNN Deep net classifier is designed with five convolutional layers and max-pooling layers. The classifier, a multi-neuron feed-forward neural network, extracts deep features from the input brain MRI images. The proposed methodology uses a multilayer perceptron architecture for the detection and classification of brain tumors with grade diagnosis. A global threshold segmentation approach, with dilation and erosion, locates and segments the tumor-affected region. Further, a novel diagnosis system using the GLCM-CNN classifier is proposed and achieves a high classification rate and accuracy. The open-access BRATS dataset is used for the analysis and evaluation of the proposed method. The proposed CNN Deep net classifier achieves a classification rate of 99.5% with better specificity (98.6%) and sensitivity (97.2%). The Deep net achieves a segmentation accuracy of 99.4%, and the hybrid diagnosis system achieves a diagnosis rate of 98.6%, sensitivity of 97.3%, and specificity of 98.9%.