Fused Second-Order MR Image Feature Framework for Region of Interest Delineation

Healthcare infrastructure relies on technology-driven solutions such as CAD systems to improve the overall efficiency of its procedures and processes. Image segmentation is one of the most critical phases of such systems, since the accuracy of this phase largely determines the efficacy of the later phases. Extensive research is underway to develop segmentation techniques that achieve the highest possible accuracy, with some work directed towards information fusion-based approaches within the machine learning paradigm. This research paper proposes a fused second-order statistical image feature framework for Region of Interest delineation. It is a feature fusion-based segmentation approach (ACM-FT) that fuses texture-driven feature maps from GLCM, GLRLM and Gabor filters. The proposed approach is compared with the Active Contour Model with a classical edge detection method (ACM-ED) and the Active Contour Model without edges (ACM-WE) using the Overlap Index (OI) and Jaccard's Similarity Coefficient (JSI). The proposed approach achieves average accuracies of 92.17% and 93.19% for JSI and OI, respectively, demonstrating significant improvements.


Introduction
The emergence of transformative healthcare technologies has revolutionized the manner in which healthcare services are provisioned. Most of the changes have been brought about to improve the operational efficiency of the system and are driven by Computer Aided Detection and Diagnostic (CAD) systems. Recent advances in imaging technologies have made it possible for medical practitioners to use advanced and hybrid imaging techniques such as PET, USG, X-rays, CT scan, fMRI and SPECT, among others, for advanced computer-aided diagnostics.
Enhancements in these techniques enable medical practitioners to gather detailed information about body organs and physiology. These techniques typically make use of internal, external or both sources of energy [1,2]. However, a significant implication of these developments is the overwhelming amount of information that medical practitioners now possess and need to process. Design, development and optimization of CAD systems have gathered immense research attention in the recent past, which can be attributed to the fact that they play a pivotal role in integrating the capabilities of humans and computers for the greater benefit of mankind.
Recent research in this area extensively focuses on brain tumour segmentation and classification for several reasons. The most profound of these is that neuroscience research is ever-evolving, with many sub-realms of the field still unexplored. Moreover, it has a significant impact on overall socio-economic conditions. With respect to brain tumour examination and classification, MRI remains the most widely used imaging technique because of its ability to provide high soft-tissue contrast and higher-resolution images [3].
In addition, post-segmentation processes in healthcare systems such as classification, volume estimation and robotic surgery require segmentation of acceptable accuracy as a prerequisite. This makes it essential for iterative research in this field to strive for the highest accuracy at the segmentation level, and this research work is scoped within that context. This paper proposes an information fusion-based approach for medical images. The proposed approach fuses texture-driven feature maps from GLRLM, GLCM and Gabor filters to exploit the high-, middle- and low-frequency image details.
Information fusion is a well-established research field based on the premise that diverse pattern characteristics can be identified by extracting different features and then using them cumulatively for classification and recognition applications [4,5]. The advantages of these methods are multi-faceted: they allow the selection of effective discriminant information coming from different features, and they also allow the elimination of redundancy. Although the potential of information fusion for image processing applications has been explored, its actualization is not yet established.
This research paper is organized as follows: Section 2 elaborates on the underlying concepts providing a background of the proposed approach. Section 3 provides details of the proposed contour energy and its formulation. Section 4 provides details about the results of the implementation. Lastly, Section 5 synopsizes the contributions of this research, also providing insights on scope for future work.

Background
In a conventional clinical setting, pathological delineation is done manually by experts. A well-trained and experienced professional goes through the affected region, slice by slice, to make an assessment. As is evident, this is a time-consuming process, and its performance is limited by inter-observer and intra-observer errors [6]. Semi-automatic and automatic tumor segmentation techniques are being developed because of the human-level limitations associated with this approach. Moreover, segmentation of pathologies is not a straightforward task. For instance, in the case of brain tumors, the granularity of image details associated with an organ or tissue of interest is diverse, signifying the need for a well-established technique for pathological delineation. In addition, segmentation tasks are difficult due to the presence of various types of noise, most of which are induced during image acquisition [7], [8], [9], [10].
Existing literature suggests that a large number of segmentation methodologies have been proposed over time, such as intensity-based methods [11,12], region growing-based methods [13,14], surface-based methods [15,16], deformable methods [17,18] and hybrid methods [19,20]. The use of deep learning techniques for medical image analysis has lately garnered substantial research attention [21].
A well-known limitation of the original snake is its sensitivity to initialization: to avoid local minima and reach the desired boundary, the initial contour should lie as close to the desired boundary as possible. Taking this limitation into consideration, several deformable contour methods have been proposed as improvements to the original snake [22-34]. There is sufficient evidence in the existing literature [35,36] to prove the equivalence of contour energy minimization to contour length minimization weighted by an edge detection function in Riemannian space.
Recent works in this domain also provide detailed reviews and comparisons of level set methods [35-37]. Niessen et al. [38] propose a novel geodesic active contour method for performing segmentation of multiple objects. The object shape is used as a priori knowledge for the formulation of the level set in some research articles [39-41] to allow deformation within an admissible range. The method proposed by Cohen and Kimmel [42] pre-selects two true boundary end points for an object and attempts to locate the global energy minimum of a contour between them. Some research articles [43-46] point towards the use of region-based image features, in a standalone manner or in conjunction with edge-based features, for constructing the energy minimization term.
Assuming that an image only contains an object and background, Chan and Vese [44] use region-based image features for performing image segmentation. The variants of original snakes (topology snake [25], balloon snake [22], gradient vector flow snake [24] and distance snake [23]), level set methods (original level set [37], area and length active contour [43] and geodesic active contour [35]), and constrained optimization [46] are profoundly used in applications that require medical image segmentation. Besides this, many survey papers are available in the domain of deformable contour methods [38,[47][48][49][50][51] as well.

Formulation of Proposed Contour Energy
Initially proposed by [10], the Active Contour Model (ACM) is an optimization technique wherein a parameterized curve evolves in space; it is also referred to as a 'snake' in the existing literature. The space here is the spatial image domain. Upon initialization, the curve is driven by two energies, internal and external, both derived from within the image. The internal energy ensures the smoothness of the snake during the course of its evolution and, at a point C(s) of the curve, is given by

E_int(C(s)) = ||C'(s)||^2 + ||C''(s)||^2,    (1)

where ||C'(s)|| and ||C''(s)|| are the Euclidean norms of the first and second derivatives of the curve. To calculate the energy of the whole parameterized curve C, the pointwise energy in equation 1 is summed along the curve:

E_int(C) = ∫_C ( ||C'(s)||^2 + ||C''(s)||^2 ) ds.    (2)

To be able to tune both terms in equation 2 for a contour with initial and final points A and B respectively, equation 2 can be written as

E_int(C) = ∫_A^B ( α ||C'(s)||^2 + β ||C''(s)||^2 ) ds,    (3)

where α and β are the tuning parameters for the contour's behaviour. The external energy component of the contour behaviour function describes the matching between the deformable curve and image objects, which in this case include the tumour boundary. The external energy is therefore given by

E_ext(C(s)) = H(C(s)),    (4)

where H is a potential attraction field towards the edge of the Region of Interest (ROI). The basic aim is thus to define an attraction field and a potential energy on that field which the proposed approach minimizes to reach the edge of the object (ROI).
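As a minimal numerical sketch (not the authors' implementation), the internal energy of a discretized closed contour can be evaluated with finite differences; the weight values alpha and beta below are illustrative:

```python
import numpy as np

def internal_energy(contour, alpha=0.11, beta=0.16):
    """Discrete snake internal energy: sum of alpha*|C'|^2 + beta*|C''|^2
    over the contour points, with derivatives as cyclic finite differences.

    contour: (N, 2) array of (x, y) points on a closed curve.
    alpha, beta: elasticity and rigidity weights (illustrative values).
    """
    d1 = np.roll(contour, -1, axis=0) - contour                                   # C'(s)
    d2 = np.roll(contour, -1, axis=0) - 2 * contour + np.roll(contour, 1, axis=0)  # C''(s)
    return float(np.sum(alpha * np.sum(d1**2, axis=1) + beta * np.sum(d2**2, axis=1)))

# A densely sampled circle has modest energy; scaling it up increases both terms.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
```

Because both terms are quadratic in the point coordinates, scaling the contour scales the energy quadratically, which is why α and β must be re-tuned per region of interest.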
This energy needs to be considered for the whole contour C rather than at a single location, so equation 4 is integrated along the curve:

E_ext(C) = ∫_C H(C(s)) ds.    (5)

Similarly, for a contour running from a point 1 to a point M, equation 5 may be rewritten as

E_ext(C) = ∫_1^M H(C(s)) ds,  with  H(C(s)) = -||∇F(C(s))||^2,    (6)

where ∇F is the image gradient, given by

∇F = (∂F/∂x, ∂F/∂y).    (7)

Since the total contour energy needs to be minimized, the final term E_external is taken as

E_external = -∫_1^M ||∇F(C(s))||^2 ds.    (8)

The complete energy function for the contour can then be expressed in terms of the internal and external energies as

E(C) = ∫_1^M ( α ||C'(s)||^2 + β ||C''(s)||^2 - ||∇F(C(s))||^2 ) ds.    (9)

As per equation 9, the contour stops evolving when the energy is minimum, which occurs at the maximum value of the gradient (the boundary of an object). To avoid falling into a local minimum while attempting to reach the desired boundary, the initial contour should lie as close to the desired boundary as possible. Hints of stopping the evolving curve near the desired boundary using deformable contour methods can be found in the existing literature [22,26].
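To make the gradient-based external energy concrete, the following sketch (an assumption about discretization, not the paper's code) samples the negative squared gradient magnitude along a contour, so that points sitting on an edge yield a lower energy:

```python
import numpy as np

def external_energy(image, contour):
    """Edge-based external energy: negative squared gradient magnitude
    sampled (nearest-neighbour) along the contour, so minima sit on edges."""
    gy, gx = np.gradient(image.astype(float))   # np.gradient returns d/drow first
    grad_sq = gx**2 + gy**2
    rows = np.clip(np.round(contour[:, 1]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(contour[:, 0]).astype(int), 0, image.shape[1] - 1)
    return float(-np.sum(grad_sq[rows, cols]))

# Synthetic image with a vertical step edge at column 5
img = np.zeros((10, 10)); img[:, 5:] = 1.0
on_edge  = np.array([[5.0, r] for r in range(10)])   # points on the edge
off_edge = np.array([[1.0, r] for r in range(10)])   # points in flat background
```

A contour lying on the step edge receives a strictly lower (more negative) energy than one in the flat region, which is exactly the minimum the evolution seeks.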
In the proposed work, the boundary of detected pathologies is obtained by letting an initial contour evolve with the aid of a new external force field obtained by exploring image textures. The texture of brain MR images can intuitively be regarded as stochastic. To characterize this texture, it is imperative that the most appropriate image features be extracted and used in an ensemble framework. The external force field in this work has been obtained by exploring Gaussian-modulated sinusoids and second-order image co-occurrence features. The rationale behind this methodology is that these textural feature extractors together capture the low-, mid- and high-frequency image details. GLRLM and GLCM are the second-order statistics computed from within the image.
GLCM models the spatial relationship between pairs of pixels in an image, while GLRLM models various intensity run length-based statistics. GLCM has been extensively explored in various application fields for the extraction of textural details of an image. Iqbal et al. [52] explore the potential of GLCM for the classification of crops from remotely sensed images, using different textural feature extractors including GLCM with neural networks. The work presented in [54] performs classification of benign and malignant tumors using GLCM. Similarly, the works [55,56] illustrate the potential of GLCM in other application areas such as autonomous cleaning robots.
GLRLM has similarly been utilized for textural image feature extraction. In the biomedical field, GLRLM has been explored for medical image analysis, pathology detection, tumor classification and other image analysis tasks [57-60]. Both of these models extract high-frequency details from an image. The low- and mid-frequency details of the image are pulled out by employing Gabor filters, whose efficacy for textural feature extraction is attested by the existing literature. When Gabor filters are used as filter banks, their information retrieval capabilities extend across scales and orientations. These filters have been used for the detection and segmentation of brain pathologies and for feature extraction of biometric modalities at acceptable rates [61-64].
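A bank of such filters over scales and orientations can be sketched as follows; this builds only the real (even) component of the Gabor kernel, and the parameter names (theta, omega0, sigma) are illustrative rather than taken from the paper:

```python
import numpy as np

def gabor_kernel(size=21, theta=0.0, omega0=0.5, sigma=4.0):
    """Real (even) 2D Gabor kernel: a Gaussian envelope modulating an
    oriented cosine. theta: orientation, omega0: radial frequency,
    sigma: spatial spread of the Gaussian window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate rotated by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(omega0 * xr)

# A small bank over 4 orientations and 2 frequencies, as the text describes
bank = [gabor_kernel(theta=t, omega0=w)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for w in (0.3, 0.6)]
```

Convolving an image with each kernel and aggregating the responses yields one texture map per scale/orientation pair; low frequencies capture coarse intensity variation, high frequencies capture fine detail.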
Based on these two models, the overall problem can be formulated by initially defining the total energy of the contour as

E(V(s)) = E_int(V(s)) + E_ext(V(s)),

where E_int is the internal force field and E_ext is the external force field, both defined parametrically for V(s). Both force fields can be tuned to achieve the desired segmentation of the brain tumors. The internal energy is governed by two terms, α and β, which respectively control the rigidity and smoothness of the evolving contour over the area of the brain tumor:

E_int(V(s)) = α ||V'(s)||^2 + β ||V''(s)||^2.

The selection of α and β is significant and may vary from one type of region of interest to another. However, it is the external force field that is of prime importance, as it determines where an evolving contour will finally stop. In this case, the external force field should be a precise function of where the tumor boundary is desired. The classical way of approaching this problem is to calculate the image gradient of the original brain tumor image,

∇F = (∂F/∂x, ∂F/∂y).

The edge indicator function then provides an external force field whose energy is minimized exactly at the edges of the image, where the gradient is maximum. Therefore, the minimum of

E_ext(V(s)) = -||∇F(V(s))||^2

needs to be computed. However, this may not be a viable option for two potential reasons: (1) the evolving contour may fall into a false minimum caused by false edges estimated by the edge indicator, and (2) the edge indicator may not always give the exact tumor boundary where contour convergence is intended.
In both of these cases, there is a high likelihood that the brain tumor boundary is not determined and, consequently, that the tumor region is not accurately segmented.
In order to find the exact tumor boundary from each MR slice, multiple instances of information from the corresponding slice are required so as to perform a majority voting for the existence of the tumor boundary. This approach explores the fusion of various textural feature maps coming from GLCM, GLRLM and Gabor filters. The potential of information fusion for achieving better performance in image processing applications can be found in the existing literature [65,68]. However, there is no well-established technique for quantifying every type of texture in an image; hence, texture quantification techniques have been continuously refined [69-71].
Consider an image f(x, y) of size R_x × C_y with intensity levels L ∈ {1, …, G}. The probability density P of the intensity of a pixel p located spatially at (x, y) can be estimated using the intensities of its neighbouring pixels, which include N_4(p), N_D(p), N_8(p) and so on. Let N(x, y) be the set of pixel values in the neighbourhood of p; then the value of a random pixel q at (x_i, y_i) can be expressed in terms of this neighbourhood. For a GLCM, a value in the matrix is a count of the number of times a pixel with one of the intensity levels in {1, …, G} is followed by a pixel with a given value from L, with predefined d and θ indicating the inter-pixel distance and the angle of computation. Based on these probability densities, the second-order statistical features used in this work are first calculated, where P_ij denotes an element of the co-occurrence matrix.
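The co-occurrence counting and two representative second-order statistics (contrast and energy, from the standard Haralick set) can be sketched as follows; this pure-numpy version assumes a horizontal displacement d = 1, θ = 0°:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Co-occurrence counts: how often intensity i is followed by intensity j
    at displacement (dx, dy); normalised so entries are probabilities P_ij."""
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            m[image[r, c], image[r + dy, c + dx]] += 1
    return m / m.sum()

def contrast_energy(P):
    """Contrast = sum P_ij (i-j)^2, Energy = sum P_ij^2."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2)), float(np.sum(P ** 2))

# Tiny 4-level image: four uniform 2x2 blocks
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
```

Blocky images like this one concentrate mass on the GLCM diagonal (low contrast, high energy); noisy textures spread it off-diagonal.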
Similarly, a set of image features is estimated in which pixel run lengths are emphasized; these features tend to extract high-frequency image information. From the GLRLM matrices p(i, j), the following are calculated:

Long Run Emphasis (LRE) = (1/T_p) Σ_{i=1}^{G} Σ_{j=1}^{N_R} j^2 p(i, j)

Gray Level Distribution (GLD) = (1/T_p) Σ_{i=1}^{G} ( Σ_{j=1}^{N_R} p(i, j) )^2

Run Length Distribution (RLD) = (1/T_p) Σ_{j=1}^{N_R} ( Σ_{i=1}^{G} p(i, j) )^2

Run Percentage (RP) = T_p / N_p

where G is the number of grey levels, N_R is the number of run lengths in the matrix, T_p is the total number of runs, and N_p is the number of pixels in the image. The image information pertaining to low-frequency intensity changes is captured by employing Gabor filters, whose even and odd components combine into a 2D complex Gabor filter function:

G(x, y) = exp( -(x^2 + y^2) / (2σ^2) ) · exp( j ω_0 (x cos θ + y sin θ) ).

From this expression, it is evident that the filter can be tuned using θ, the orientation of the window; ω_0, the frequency of the envelope; and σ, the spatial spread of the Gaussian window. Denoting the GLCM-driven feature map as I_glcm(x, y), the GLRLM-driven map as I_glrlm(x, y) and the Gabor-driven map as I_gabor(x, y), an external energy term is defined as the average of the three maps:

I_fused(x, y) = ( I_glcm(x, y) + I_glrlm(x, y) + I_gabor(x, y) ) / 3.

This I_fused term is plugged into the final contour energy to be minimized, yielding

E(V(s)) = ∫ ( α ||V'(s)||^2 + β ||V''(s)||^2 - I_fused(V(s)) ) ds.

The minimization problem now comprises an external force field that comes from textural image feature fusion. This fused feature map reduces the possibility of the contour getting trapped in a false or local minimum; in addition, the tumor boundary is well defined and prominent, as the texture details capture most of the local and global gradients in the image.
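The run-length bookkeeping behind LRE and RP can be sketched as below (a horizontal-runs-only illustration, not the paper's implementation); the fused map itself is then just the pixelwise average of the three normalised feature maps:

```python
import numpy as np

def glrlm(image, levels):
    """Horizontal run-length matrix p(i, j): row i is the grey level,
    column j-1 counts runs of length j."""
    p = np.zeros((levels, image.shape[1]))
    for row in image:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                p[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        p[run_val, run_len - 1] += 1      # close the final run of the row
    return p

def lre_rp(p, n_pixels):
    """Long Run Emphasis and Run Percentage from the run-length matrix."""
    T = p.sum()                           # total number of runs T_p
    j = np.arange(1, p.shape[1] + 1)      # run lengths 1..N_R
    return float(np.sum(p * j**2) / T), float(T / n_pixels)

# Fused external map would then be, for hypothetical normalised maps:
# fused = (I_glcm + I_glrlm + I_gabor) / 3.0

rimg = np.array([[0, 0, 0, 1],
                 [2, 2, 3, 3]])           # runs: len 3, 1, 2, 2  ->  T_p = 4
```

Coarse textures produce long runs (high LRE, low RP); fine or noisy textures produce many short runs (low LRE, RP near 1).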
The score plot shown in Fig. 3 demonstrates the capability of the principal components to separate GLCM features of the craniopharyngioma and high-grade glioma cases. This underlines the fact that the top eigenvalued components retain significantly high variance, establishing discrimination between the two large feature sets. Fig. 4 shows the JSI and OI values at different window sizes; the standard deviation and mean values indicate that a change in window size is reflected only slightly in the JSI and OI values. Fig. 9 shows the texture maps pertaining to GLCM, GLRLM and Gabor filters; a subjective evaluation suggests that these maps present different textural details. Fig. 8(a) shows a laryngeal squamous cell carcinoma (LSCC) MR image with the contour initialized. The α and β chosen for this case are 0.11 and 0.16, respectively. Since the weights α and β combine with the first and second derivative terms respectively, it is important that the two are tuned to balance the contour's elasticity and rigidity. The images in Fig. 8 show the curve evolution at intermediate iterations, and Fig. 8(e) is the final evolved curve on this LSCC MR image. Table 2 shows the segmentation accuracy of the three methods using JSI and Table 3 shows the segmentation accuracy of the same three methods using OI.
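For reference, the two evaluation measures reduce to simple set operations on binary masks. A minimal sketch follows; the paper does not give the OI formula, so the Dice-style definition used here is an assumption:

```python
import numpy as np

def jsi(a, b):
    """Jaccard Similarity Index: |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return float((a & b).sum() / (a | b).sum())

def overlap_index(a, b):
    """Overlap Index, here taken as the Dice-style 2|A ∩ B| / (|A| + |B|)
    (an assumption -- the exact OI definition is not stated in the text)."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2 * (a & b).sum() / (a.sum() + b.sum()))

# Two 4-pixel masks overlapping in 2 pixels
m1 = np.zeros((4, 4)); m1[0, :] = 1
m2 = np.zeros((4, 4)); m2[0, 2:] = 1; m2[1, 2:] = 1
```

With this pair, the intersection has 2 pixels and the union 6, so JSI is lower than OI, matching the general pattern in the reported tables.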
The segmentation results for the [72], [73] and [74] datasets are also reported. To validate the efficacy of the proposed method in the presence of noise, we considered the [74] dataset and corrupted the images with Gaussian noise of different mean and variance values. Fig. 11 shows one such image degradation process; in this example, we choose µ = 0 and σ² = 0.01, as shown in Fig. 11(b) and Fig. 11(f), while Fig. 11(c) and Fig. 11(g) show the corresponding curve evolution, compared against the [75] method. The extent of segmentation for the case study of adenoma cases is assessed using shape-based features calculated from the regions obtained by the proposed and manual segmentation, as shown in Fig. 7. The features used for comparison are area (Ar), perimeter (P) and the ratio of the lengths of the longest to the shortest axis (ALR). Since these are computed from pixel counts, the data are normalized using z-score normalization to bring the features to a common scale. The features indicate a high degree of similarity between the two segmented regions of adenomas. It is observed that the use of fused texture in the framework of the ACM (ACM-FT) captures the boundary of the tumor more closely than the use of a simple edge as the external force field for the ACM, referred to as ACM-ED in this work.
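The z-score normalization applied to the shape features is the standard column-wise transform; a minimal sketch (feature values below are illustrative, not the study's data):

```python
import numpy as np

def zscore(features):
    """Column-wise z-score: (x - mean) / std, bringing pixel-count based
    shape features (e.g. area, perimeter, ALR) to a common scale."""
    features = np.asarray(features, dtype=float)
    return (features - features.mean(axis=0)) / features.std(axis=0)

# Rows = segmented regions, columns = (area, perimeter, ALR), made-up values
shape_feats = np.array([[1200.0, 140.0, 1.4],
                        [1350.0, 155.0, 1.6],
                        [1100.0, 130.0, 1.3]])
z = zscore(shape_feats)
```

After the transform every column has zero mean and unit standard deviation, so a large-magnitude feature like area no longer dominates a small one like ALR in the similarity comparison.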
Pathological segmentation is a critical process preceding many other processes, including volume quantification, which is a key component of dose estimation and other oncological decisions. To assess the area extracted by the proposed methodology, we used a large number of irregular shapes resembling tumor boundaries and calculated their area using the widely used Cavalieri and ABC/2 methods; Fig. 6 depicts the variation across the three methods. For better tumor resection, the volume should be estimated accurately, which ultimately depends on the accuracy of the area estimation from each MR image slice.
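The Cavalieri principle referenced above reduces, for a stack of segmented slices, to summing per-slice areas and multiplying by the slice spacing. A minimal sketch, assuming isotropic in-plane pixels of unit area:

```python
import numpy as np

def cavalieri_volume(masks, slice_thickness):
    """Cavalieri estimate: sum of per-slice segmented areas (pixel counts
    here, i.e. unit pixel area assumed) times the slice thickness."""
    areas = [np.count_nonzero(m) for m in masks]
    return float(sum(areas) * slice_thickness)

# Two slices, each with a 5x5 segmented region, spaced 2 units apart
slices = [np.pad(np.ones((5, 5)), 2), np.pad(np.ones((5, 5)), 2)]
vol = cavalieri_volume(slices, slice_thickness=2.0)
```

Any error in the per-slice area propagates linearly into the volume, which is why the paper treats segmentation accuracy as a prerequisite for volume quantification.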

Conclusion
This research paper proposes a fused textural feature-based image map as an external force field for the ACM. The work focused initially on feature extraction and feature dimensionality reduction via principal component analysis; the resulting features were then used in the framework of the ACM for delineating pathological regions. The Overlap Index and Jaccard's Similarity Coefficient were used to evaluate the performance of the proposed method against the ground truth, and both measures were computed for ACM-ED, ACM-WE and ACM-FT.
The proposed segmentation approach demonstrates an accuracy of 92.18 ± 5.32 using Jaccard's Similarity Coefficient, a significant improvement over the 81.76 ± 9.20 achieved by ACM-ED. The accuracy determined using the Overlap Index was 93.19 ± 3.62 for ACM-FT, compared to 78.66 ± 7.3 for ACM-ED. Accuracies as high as 98.51% (OI) and 96.8% (JSI) are obtained. The presented work can be extended to 3D tumor volume reconstruction and can help in the quantitative evaluation of tumor progression before and after surgery.

Declarations
• Data Availability: The dataset used for the work was collected from medicare and the other datasets used for validation were taken from published articles.
• Animal Research (Ethics): No data related to animals has been used for this work.
• Consent to Participate (Ethics): Not applicable for the scope of this work.
• Consent to Publish (Ethics): Not applicable for the scope of this work.
• Plant Reproducibility: Not applicable for the scope of this work.
• Clinical Trials Registration: Not applicable for the scope of this work.
• Author Contribution: The first author is the primary contributor.