Improved OD and OC Segmentation using Deep Learning for the Progression Recognition of Glaucoma

Glaucoma is one of the most hazardous eye diseases and continues to affect a large portion of the general population. Accordingly, early identification of glaucoma is essential to prevent permanent vision loss. The cup-to-disc ratio (CDR) is a key indicator for glaucoma recognition, yet accurate segmentation of the optic disc and optic cup remains an open problem. Most segmentation-based glaucoma recognition methods depend on handcrafted features, which limits overall recognition performance. To resolve this issue, an efficient deep-learning-based optic cup and disc segmentation technique using a multi-label segmentation Au-Net has been developed in this paper. The proposed method focuses on the optic cup-to-disc ratio for the recognition of glaucoma, providing the basis for a capable, robust, and accurate system for glaucoma analysis. The system has been simulated on the DRISHTI dataset. The experimental results indicate that the proposed strategy compares well with state-of-the-art methodologies, achieving 99% accuracy, 88% sensitivity, and 95.5% specificity on the DRISHTI-GS1 dataset.


Introduction
Glaucoma is among the most common causes of visual impairment; it was anticipated to affect about 80 million individuals by 2020 [1]. It is a chronic eye disease that leads to vision loss through progressive damage to the optic nerve. It is also known as the "silent thief of sight", since symptoms typically appear only at an advanced stage. The impact of the disease can be gauged from the projection that the population affected by glaucoma may grow from 80 million in 2020 to 111.8 million in 2040 [2]. Individuals between 40 and 64 years of age are encouraged to complete an eye test once every 2 to 4 years [3]. Because these eye tests are performed by human experts, there is a possibility of human error. Computer-Aided Diagnostic frameworks can help reduce such flaws and may assist health-care specialists in an unbiased screening of patients [4][5].
Even though glaucoma cannot be cured, its progression can be slowed by proper eye treatment. Early recognition of glaucoma requires good-quality fundus images. Different parameters such as the CDR [6], disc measurement [7], the ISNT rule [8], and peripapillary atrophy [9] are used to recognize glaucoma. The CDR is one of the parameters most commonly used by experts [10]. It is determined by taking the ratio of the Vertical Cup Diameter (VCD) to the Vertical Disc Diameter (VDD), as indicated in Fig. 1 (a: fundus image, b: normal, c: glaucoma). Manual assessment of the CDR is tedious and expensive, and it is not suitable for large-scale screening. Extracting the region of interest (ROI) reduces the time needed to measure the optic cup (OC) and optic disc (OD) regions [11].
This paper is organized as follows: Section 2 briefly explains the proposed methodology using the multi-label segmentation Au-Net; Section 3 presents the simulation outcomes and analysis; and finally, the conclusion of this work is given in Section 4.

Methodology
The block diagram of the proposed methodology used to recognize glaucoma is portrayed in Fig. 2. The multi-label segmentation Au-Net is a one-stage framework [13] used to segment the OD and OC from retinal images. In this network, a multi-scale layer assembles an image pyramid to provide multi-level inputs. Four different streams are utilized. The first stream acts as a classifier on the fundus images directly. The second stream is a segmentation-guided structure; it localizes the OD region in the retinal image and embeds the OD segmentation representation to recognize glaucoma. The third stream operates on the local OD region to predict the glaucoma likelihood from that region. The fourth stream concentrates on the OD region under a polar transformation; it models the OD and OC geometry and increases segmentation efficiency. Finally, the outputs of all the streams are fused to yield the result, and a multi-label loss function is used to drive the joint segmentation of the OD and OC. The proposed network creates the OD and OC segmentation maps and then determines the CDR from the segmented OD and OC to find the risk level of glaucoma. In the multi-label segmentation Au-Net, the OD region is first localized using an automatic OD detection technique, after which the input image is converted to the polar coordinate system. This image is then given to the Au-Net to predict the multi-label probability maps for the OD and OC regions. Finally, the inverse polar transformation recovers the segmentation map back to the Cartesian coordinate system.
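The final CDR step described above can be sketched as follows. This is a minimal illustration, assuming binary OD and OC masks have already been produced by the segmentation network; the mask shapes and function names are illustrative, not part of the original method.

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent (in pixels) of the non-zero region of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio: VCD / VDD."""
    vdd = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / vdd if vdd else 0.0

# Toy masks: a 40-pixel-tall disc containing a 20-pixel-tall cup.
disc = np.zeros((64, 64)); disc[10:50, 10:50] = 1.0
cup = np.zeros((64, 64)); cup[20:40, 20:40] = 1.0
print(vertical_cdr(disc, cup))  # 0.5
```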

Multi-level Au-Net
The Au-Net [14] has been developed from the U-Net. The multi-label segmentation Au-Net comprises an encoder and a decoder whose operation is the same as in U-Net; together they generate the feature maps. Skip connections transfer feature maps from the encoder path and concatenate them with the up-sampled decoder feature maps. Multi-scale input is used to increase segmentation efficiency [15,16]. The multi-label segmentation Au-Net uses average pooling layers to down-sample the image and build a multi-scale input for the encoder path.
The final decoder output is given to a classifier network. This classifier uses a convolutional layer with sigmoid activation to yield the probability map. Here, the multi-label segmentation network gives a K-channel probability map.
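The multi-scale input construction can be sketched with plain average pooling, as a minimal illustration of how the image pyramid for the encoder path is built (the function names and pyramid depth here are illustrative assumptions):

```python
import numpy as np

def average_pool2d(img, k):
    """Down-sample a 2-D image by factor k with non-overlapping average pooling."""
    h, w = img.shape
    hc, wc = h - h % k, w - w % k   # crop so the image tiles evenly
    img = img[:hc, :wc]
    return img.reshape(hc // k, k, wc // k, k).mean(axis=(1, 3))

# Build a three-level pyramid at scales 1, 1/2, and 1/4.
img = np.arange(16.0).reshape(4, 4)
pyramid = [img] + [average_pool2d(img, 2 ** s) for s in (1, 2)]
print([p.shape for p in pyramid])  # [(4, 4), (2, 2), (1, 1)]
```

Each pyramid level would feed the encoder stage of matching resolution.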

Side-output Layer
The Au-Net employs side-output layers that act as early classifiers producing local prediction maps [17]. Let W represent the coefficients of the regular convolutional layers, and let w = (w^(1), ..., w^(M)) represent the weights of the M side-output layers. The objective of the side-output layers is

L_side(W, w) = Σ_{m=1}^{M} α_m · L_s^(m)(W, w^(m)),

where α_m is the weight of the m-th side-output loss (α_m = 0.25) and L_s^(m)(·,·) is the multi-label loss function. The merits of the side-output layer are: a) the early layers receive direct supervision through their associated losses, and b) unifying the multi-scale outputs can provide better performance.
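The weighted-sum objective above can be sketched directly; the per-scale loss values below are hypothetical placeholders for the multi-label losses L_s^(m) that each side-output prediction map would produce:

```python
def side_output_loss(per_scale_losses, alphas):
    """Alpha-weighted sum of the per-scale side-output losses."""
    assert len(per_scale_losses) == len(alphas)
    return sum(a * l for a, l in zip(alphas, per_scale_losses))

losses = [0.40, 0.32, 0.28, 0.24]              # hypothetical L_s^(m) values
total = side_output_loss(losses, alphas=[0.25] * 4)
print(round(total, 2))  # 0.31
```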

Multi-label Loss Function
In the proposed multi-label network, the pixel segmentation problem is conceived as a multi-label task. Conventional segmentation approaches work in a multi-class setting, in which each pixel is assigned a single exclusive label. In contrast, the multi-label formulation uses a separate binary classifier for every class and can assign each pixel several binary labels. For OD and OC segmentation in particular, the OD region overlays the OC pixels, which implies that a pixel labeled as cup is usually also labeled as disc. Furthermore, in glaucoma cases, the disc pixels outside the cup form a ring-shaped area, making the disc label highly unbalanced against the background label in a multi-class setting. As a result, the multi-label approach, which treats OD and OC as two separate overlapping regions, is better suited to dealing with these problems. The multi-label loss function in the Au-Net is based on the Dice coefficient and is described as:

L_s = 1 - (1/K) Σ_{k=1}^{K} ( 2 Σ_i p_{k,i} g_{k,i} ) / ( Σ_i p_{k,i}² + Σ_i g_{k,i}² ),   (3)

where K is the number of label channels, p_{k,i} is the predicted probability of pixel i for label k, and g_{k,i} is the corresponding binary ground truth.
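A minimal numpy sketch of such a Dice-style multi-label loss follows, assuming a (K, H, W) layout where channels may overlap (every cup pixel is also a disc pixel); the channel-averaging and epsilon smoothing are implementation choices, not details given in the text:

```python
import numpy as np

def multi_label_dice_loss(pred, target, eps=1e-7):
    """Dice-style loss over K independent, possibly overlapping binary channels.

    pred, target: arrays of shape (K, H, W).
    """
    k = pred.shape[0]
    p = pred.reshape(k, -1)
    g = target.reshape(k, -1)
    dice = (2.0 * (p * g).sum(axis=1)) / (
        np.square(p).sum(axis=1) + np.square(g).sum(axis=1) + eps)
    return float(1.0 - dice.mean())

# Toy overlapping masks: channel 0 = disc, channel 1 = cup inside the disc.
target = np.zeros((2, 8, 8))
target[0, 1:6, 1:6] = 1.0
target[1, 2:4, 2:4] = 1.0
print(round(multi_label_dice_loss(target, target), 6))  # 0.0 for a perfect prediction
```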

Polar Transformation
In the multi-label segmentation Au-Net, a pixel-wise polar transformation is utilized. A Cartesian point (u, v), taken relative to the disc center, maps to the polar point (θ, r) through

u = r cos θ,  v = r sin θ,

and the inverse transformation is given as

r = √(u² + v²),  θ = tan⁻¹(v/u).

The polar image's height and width are determined by the polar radius R and the angular discretization 2π/s, where s denotes the stride. The transformation has the following properties. Geometric constraint: the OC region must lie inside the OD region. In the Cartesian coordinate system this constraint is difficult to enforce, but under the polar transformation the OC, OD, and background regions are separated into an ordered layer structure [18,19].
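The transformation can be sketched in numpy by sampling the image on a (θ, r) grid around the disc center. This is a simplified nearest-neighbour version (the actual network would use interpolation, and the function names here are illustrative):

```python
import numpy as np

def to_polar(img, center, radius, height, width):
    """Pixel-wise polar transform: rows index radius r, columns index angle theta."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    rs = np.linspace(0.0, radius, height)
    tt, rr = np.meshgrid(thetas, rs)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

# A toy circular "disc" becomes a horizontal band in the polar image,
# illustrating how OC, OD and background stack into ordered layers.
disc = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
disc[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = 1.0
polar = to_polar(disc, center=(32, 32), radius=30, height=32, width=64)
print(polar.shape)  # (32, 64)
```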

Analogous Augmentation:
The polar transformation is equivalent to data augmentation, since it is a pixel-wise projection: increasing the transformation radius acts like a scaling factor. As a result, data augmentation for deep learning can be achieved effectively.

Cup Proportion Balancing:
The distribution of OC and background pixels in the fundus image is severely skewed: the optic cup is always much smaller than the ROI, typically occupying about 4% of the total area. This highly unbalanced ratio quickly leads to bias and overfitting in deep model training. The polar transformation flattens the image around the OD center, and the interpolation enlarges the cup area and raises the OC proportion. In this paper, the ratio of the cup region over the ROI improves to 23.4%, which helps prevent overfitting during model training and further boosts OC segmentation.
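The imbalance can be illustrated with a toy mask: a circular cup covering roughly 4% of a square ROI (the radius below is chosen purely for illustration and is not taken from the paper):

```python
import numpy as np

def cup_ratio(mask):
    """Fraction of pixels belonging to the cup in a binary mask."""
    return float(mask.mean())

# A radius-11 circle in a 100x100 ROI covers roughly pi * 11^2 / 10000 ~ 4%.
roi_cup = np.zeros((100, 100))
yy, xx = np.ogrid[:100, :100]
roi_cup[(yy - 50) ** 2 + (xx - 50) ** 2 <= 11 ** 2] = 1.0
print(f"cup/ROI ratio: {cup_ratio(roi_cup):.3f}")
```

After the polar transformation the cup occupies a full-width band of rows, which is what lifts this fraction toward the reported 23.4%.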

Random Forest Glaucoma Classification from Morphometric Attributes
The third stage is a Random Forest classifier that collects the 19 morphometric attributes (MAs) and categorizes them into two possible classes: healthy or glaucoma. The algorithm works as an ensemble of decision trees: it collects the predictions from all the trees and combines them to find the best solution. Each tree depends on a random feature subset, as described in Fig. 3.

Fig. 3: Random Forest Classification
The Gini index is a measure of the predictive strength of variables in classification and regression, based on the impurity-reduction procedure. It can be used to rank the significance of attributes for a classification problem. It is non-parametric and does not assume that the data belong to a particular type of distribution. For a binary split, the Gini index of a node n is determined as follows:

Gini(n) = 1 - Σ_j p_j²,

where p_j represents the relative frequency of class j in node n. To divide a binary node in the best possible way, the reduction in the Gini index must be maximized. A lower Gini index implies that a particular predictor plays a significant role in categorizing the data into the two classes.
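The Gini computation above can be sketched in a few lines; the node counts below are hypothetical:

```python
def gini_index(class_counts):
    """Gini impurity of a node: 1 - sum_j p_j**2."""
    total = sum(class_counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in class_counts)

# Hypothetical binary node with 30 healthy and 10 glaucoma samples.
print(round(gini_index([30, 10]), 4))  # 0.375
```

A pure node gives 0; a perfectly mixed binary node gives the maximum of 0.5.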

Segmentation Evaluation
Here, the performance of OD and OC segmentation using the multi-label Au-Net has been evaluated on the public DRISHTI-GS1 dataset [20,21]. The performance metrics are Sensitivity, Specificity, and Accuracy. They are discussed as follows.

Sensitivity
It is calculated as the proportion of correct positive predictions among the total number of actual positives. The best sensitivity is 1, whereas the worst is 0. It is expressed as follows [22]:

Sensitivity = TP / (TP + FN)   (7)

Specificity
Specificity reflects the true negative rate. It is the ratio of correct negative predictions to the total number of actual negatives. The best specificity is 1, whereas the worst is 0. It is expressed as follows:

Specificity = TN / (TN + FP)   (8)

Accuracy
Accuracy is the ratio of the number of correct predictions to the total number of predictions. It is expressed as follows [22]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (9)

In the above equations (7, 8, 9), TP and TN represent the numbers of true positives and true negatives respectively, and FP and FN represent the numbers of false positives and false negatives respectively. A merit of the proposed methodology is the polar transformation, which solves the class-imbalance issues occurring during OC and OD segmentation. This balanced region avoids overfitting and helps improve segmentation performance.
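The three metrics can be computed together from the confusion counts; the counts below are hypothetical and chosen only to illustrate the formulas:

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # eq. (7)
    specificity = tn / (tn + fp)                 # eq. (8)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # eq. (9)
    return sensitivity, specificity, accuracy

# Hypothetical confusion counts for a screening run.
sen, spe, acc = classification_metrics(tp=44, tn=192, fp=8, fn=6)
print(round(sen, 2), round(spe, 2), round(acc, 2))  # 0.88 0.96 0.94
```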

Conclusion
In this paper, we have discussed and analyzed the performance of the multi-label segmentation Au-Net against other existing methodologies. The multi-label segmentation Au-Net integrates four deep streams at different levels to recognize glaucoma from retinal images. The design of this Au-Net comprises three fundamental parts. The first is a multi-scale U-shaped convolutional network from which significant features are obtained. The side-output layers then provide deep supervision. Finally, the multi-label loss function is used to segment the OD and OC regions jointly. For accurate segmentation, it additionally utilizes a polar transformation to convert the input image into a polar-transformed image. The experiments have been performed on the DRISHTI-GS1 glaucoma dataset and yield effective performance. The proposed method can be used to diagnose eye disease in automated retinal image processing systems.