Brain Tumor Segmentation and Prediction using Fuzzy Neighborhood Learning Approach for 3D MRI Images

Segmentation of brain tumors is a daunting process comprising the delineation of heterogeneous cancerous tissues and diffuse tumor types in anatomical representations of the brain. Deep learning techniques have recently made important strides in brain tumor segmentation. However, owing to the irregularity of tumors, most deep learning-based segmentation techniques cannot be applied directly to tumor detection. Although recent studies can address this irregularity while retaining permutation invariance, many approaches struggle to capture valuable high-dimensional local features at finer resolution. Inspired by fuzzy learning methods and an analysis of the shortcomings of existing methods, an automated fuzzy neighborhood learning-based 3D segmentation technique is proposed for the detection of cerebrum tumors in 3D images. In this technique, the fuzzy neighborhood function is deeply integrated into the proposed network architecture. The technique has been evaluated on the BRATS 2013 dataset. The simulation results show that the proposed brain tumor detection technique is superior to other methods in the diagnosis of brain tumors, with a Dice coefficient of 0.85 and a Jaccard index of 0.74.


INTRODUCTION
Nowadays, with the rapid growth of technology and access to medical records, more reliable machine learning methods are required. In recent years, many authors have focused on the analysis of healthcare records using artificial intelligence algorithms [1][2].
A cerebrum tumor is an abnormal growth of brain cells [3]. The impact of a tumor varies with factors such as its type, its size, and the way it spreads and grows. Magnetic resonance imaging (MRI) is an imaging method used by pathologists to analyze body tissues. Since MRI provides a large amount of useful information on the soft tissues of the body, these images can be used to detect cerebrum tumors and to study the activity of the brain [4][5]. MRI also allows different tissue contrasts to be compared, making it an effective tool for imaging many structures of interest. Given the varied nature and appearance of cerebrum tumors, no single MRI modality is sufficient for brain tumor segmentation.
These days, combinations of different MRI modalities are typically used for brain tumor detection.
While surgical treatment is the standard course of action for most tumors, hormonal therapy can be used to shrink brain tumors and to decrease or inhibit the progression and spread of tumors that cannot be safely removed. Before any surgical treatment is scheduled, segmenting the tumor is vital to the safety of healthy tissues. Segmentation of brain tumors requires diagnosing, delineating, and distinguishing various tumor tissues from normal brain tissues: grey matter, white matter, and cerebrospinal fluid [6]. It offers useful data for the clinical protocol of diagnosis, tumor progression monitoring, treatment planning, and patient outcome prediction. However, accurate segmentation of tumors in multi-modal three-dimensional images remains a challenging task, as these tumors have varied appearances and may become visible anywhere in the brain, in any shape and size, and their boundaries are often uneven, fuzzy, and difficult to separate from normal tissues [7].
Manual segmentation requires competent specialists to interactively outline tumor subregions, slice by slice, in multi-modal 3D MR images; it is a costly and time-consuming activity that is vulnerable to inter- and intra-expert variability. It is, therefore, necessary to implement automated and reliable segmentation techniques for the early detection of brain tumors.
Traditional methods such as clustering and feature-extraction filtering have been used in cerebrum tumor segmentation systems. Machine learning methods based on handcrafted features then became the dominant methodology for a prolonged period. For fully automatic segmentation of brain tumors, generative methods require prior knowledge of the appearance of healthy and unhealthy tissues and also depend on atlases [8][9]. Discriminative strategies, on the other hand, rely less on prior knowledge and assign a class label to every pixel of the image based on a set of extracted features.
Deep learning belongs to a family of machine learning algorithms that have been developed since the 1980s [10]. It attracted significant attention in the 2000s owing to the availability of large-scale data for the problems of interest. Whereas conventional approaches use handcrafted features, deep learning algorithms effectively learn symbolic and nuanced data-driven features. Therefore, instead of continuously attempting to obtain more effective handcrafted features, which requires technical expertise, these methods concentrate on the construction of productive architectures.
In comparison to 2.5D approaches, point-wise image features are extracted directly from the cerebrum images without losing spatial data, yielding better tumor segmentation. A 3D representation conserves spatial data through dynamic transformations without loss of information [11][12][13]. The method in [12] applies transformed occupancy grids, but such a parametric representation is difficult to use for point-wise classification. PointNet [14] and several improved PointNet-based architectures [15][16][17] have therefore been proposed to consume point data directly. The method in [15] perceives the local features of adjacent grids. However, these non-convolutional methods handle points separately at the grid level to maintain permutation invariance, which disregards the geometric relationships between points [16][17]. In contrast to 3D CNN-based and non-convolutional methods, other approaches [18][19][20][21] use Graph Convolutional Networks (GCN) to combine tumor points with spatial information. The proposed method, on the other hand, uses spatial information from various perspectives by deeply incorporating the learning of the fuzzy neighborhood function into the network architecture. It addresses the fine-grained local feature missing problem by analyzing the fuzzy neighborhood characteristics of each point.
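The fuzzy neighborhood idea can be illustrated with a minimal sketch: each point's nearest neighbors receive a graded (fuzzy) membership based on distance, rather than a crisp in/out decision. The Gaussian membership function, the value of k, and the toy coordinates below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_neighborhood_weights(points, query_idx, k=3, sigma=1.0):
    """Fuzzy membership of a point's k nearest neighbors.

    Each neighbor receives a Gaussian membership value in (0, 1]
    based on its distance to the query point; closer neighbors
    get higher membership. The Gaussian form is an assumption.
    """
    query = points[query_idx]
    dists = np.linalg.norm(points - query, axis=1)
    neighbors = np.argsort(dists)[1:k + 1]  # skip the point itself
    memberships = np.exp(-(dists[neighbors] ** 2) / (2 * sigma ** 2))
    return neighbors, memberships

# Toy 3D point set (illustrative coordinates).
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [3.0, 3.0, 3.0],
                   [0.5, 0.5, 0.0]])
nbrs, w = fuzzy_neighborhood_weights(points, query_idx=0, k=3)
```

The memberships decrease smoothly with distance, so each point carries a graded description of its local surroundings instead of a hard neighborhood cutoff.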
Recently, state-of-the-art methods [22][23][24] have utilized handcrafted fuzzy features for tumor segmentation. These strategies amount to appending additional customized elements, represented by a fuzzy set, to a crisp input. Because these handcrafted fuzzy elements are calculated from the crisp input, their performance is ultimately restricted. The method in [23] derives, for every pixel, a membership value from the neighbor pixels within a distance of 2 pixels, which is equivalent to applying a 5×5 kernel convolution to the input signal. Instead of using fuzzy preprocessing to feed smoothed information into cascaded neural networks, the proposed approach deeply incorporates fuzzy neighborhood learning at the level of the network architecture to boost the efficiency of the model. The contributions of this paper are as follows: 1) A deep neural model that incorporates fuzzy neighborhood learning is formulated for brain tumor segmentation.
2) The developed scheme deeply learns the fuzzy neighborhood characteristics of each point. The proposed model addresses the fine-grained local feature missing problem while maintaining permutation invariance.
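The 5×5 equivalence noted above (a pixel membership derived from neighbors within a distance of 2 pixels equals a 5×5 kernel convolution of the input) can be verified with a short sketch; the uniform weighting below is an illustrative assumption.

```python
import numpy as np

def neighborhood_membership(img, r=2):
    """Mean value over the (2r+1) x (2r+1) neighborhood of each pixel.

    For r=2 this is exactly a 5x5 uniform-kernel convolution with
    zero padding: each output pixel averages the 25 pixels within
    Chebyshev distance 2. Uniform weights are an assumed choice.
    """
    k = 2 * r + 1
    padded = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):          # slide the window over all offsets
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Single bright pixel: its membership spreads over the 5x5 window.
img = np.zeros((7, 7))
img[3, 3] = 1.0
m = neighborhood_membership(img)
```

Each pixel within 2 of the bright pixel gets membership 1/25, and everything farther away gets 0, which is exactly the 5×5 convolution behavior described in [23].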

PROPOSED METHODOLOGY
The overview of the proposed methodology is given in figure 1. It has three components: 1) symmetric information aggregation, which addresses permutation invariance via an approximation function; 2) fuzzy neighborhood learning; and 3) integration of point-wise, fuzzy neighborhood, and global features. The key components are defined in the following sections.

Aggregation of Symmetric Information
The block diagram of the fuzzy neighborhood learning-based 3D segmentation is given in figure 2. To satisfy the criterion of permutation invariance, the concept of applying symmetric functions to integrate information across points was introduced, and its usefulness has been demonstrated [14]. PointNet [14] aggregates global information over all input points, while other approaches aggregate data over spatially proximate points [17][18][21] or grids [15]. As a consequence, the fuzzy neighborhood function of a point pi is capable of collecting local features at fine granularity.
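The symmetric aggregation idea can be sketched as follows: a shared per-point transformation followed by an element-wise max over points produces the same output for any ordering of the input, which is what makes the aggregation permutation invariant. The one-layer transformation and toy sizes are assumptions for illustration, not the paper's network.

```python
import numpy as np

def shared_mlp(points, W, b):
    """Shared per-point transformation (one linear layer + ReLU),
    applied identically to every point."""
    return np.maximum(points @ W + b, 0.0)

def global_feature(points, W, b):
    """Symmetric aggregation: element-wise max over the point axis.
    max() is order-independent, so any permutation of the input
    points yields the same global feature vector."""
    return shared_mlp(points, W, b).max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))   # 6 points, 3 coordinates each
W = rng.normal(size=(3, 8))     # toy shared weights (illustrative)
b = rng.normal(size=8)

g1 = global_feature(pts, W, b)
g2 = global_feature(pts[::-1], W, b)  # same points, reversed order
```

Reversing (or arbitrarily permuting) the point order leaves the aggregated feature unchanged, which is the permutation-invariance property the text refers to.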

Point-wise, Fuzzy Neighborhood and Global Feature Integration
Once the fuzzy neighborhood features and the global attributes are determined, they are concatenated with the point-wise features to create a new combined feature vector f′:

f′ = f ⊕ fNB ⊕ fglobal (2)

where ⊕ is the concatenation operation. The fuzzy 3D segmentation algorithm using fuzzy neighborhood learning is explained below.
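A minimal sketch of the concatenation f′ = f ⊕ fNB ⊕ fglobal; the feature dimensions below are chosen arbitrarily for illustration, and the single global vector is broadcast to every point before concatenation.

```python
import numpy as np

n_points = 5
f = np.random.rand(n_points, 64)      # point-wise features
f_nb = np.random.rand(n_points, 64)   # fuzzy neighborhood features
f_global = np.random.rand(128)        # one global feature vector

# Broadcast the global vector to every point, then concatenate
# along the feature axis: f' = f (+) f_NB (+) f_global.
f_prime = np.concatenate(
    [f, f_nb, np.broadcast_to(f_global, (n_points, 128))], axis=1)
```

Each point thus carries its own features, its fuzzy neighborhood features, and a copy of the global context in a single vector, ready for the point-wise segmentation head.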

SIMULATION RESULTS
The performance of the fuzzy neighborhood learning-based 3D segmentation method must be compared with other existing segmentation techniques to establish its efficiency. The method has been executed on the BRATS 2013 dataset, which contains 30 real glioma patients and 50 synthetic glioma patients. In this section, both quantitative and qualitative analyses are presented. The simulated results are presented in figure 3 for qualitative analysis, and the three-dimensional rendering of the segmented outcomes is given in figure 4.
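The two reported overlap metrics can be computed directly from binary segmentation masks; the small example masks below are illustrative only.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index: |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy prediction / ground-truth masks (illustrative).
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
```

As a sanity check, the two metrics satisfy J = D / (2 − D) for any pair of masks; a Dice of 0.85 corresponds to a Jaccard of 0.85/1.15 ≈ 0.74, consistent with the values reported in this paper.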

CONCLUSION
In this paper, a fuzzy neighborhood learning-based 3D segmentation technique has been proposed for the recognition of brain tumors in 3D images. In this technique, the fuzzy neighborhood learning function is deeply integrated into the proposed network architecture and captures the local features of each point. The method takes the features directly as input and addresses the fine-grained local feature missing problem. Since the fuzzy neighborhood learning-based 3D segmentation method effectively estimates neighborhood features, investigating the balance between local features and global features in the learning pipeline could further boost performance. The performance of the fuzzy neighborhood learning-based 3D segmentation method has been compared with other existing brain tumor segmentation techniques. This paper shows that the method gives better segmentation outcomes, with a Dice coefficient of 0.85 and a Jaccard index of 0.74 for the complete tumor.
The Dice coefficient improved by 0.018 and the Jaccard index by 0.035 compared with the existing segmentation technique.