CNN with Multiple Inputs for Automatic Glaucoma Assessment Using Fundus Images

DOI: https://doi.org/10.21203/rs.3.rs-610059/v1

Abstract

In ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early screening of fundus images helps detect sight-threatening conditions such as glaucoma, cystoid macular edema, or proliferative diabetic retinopathy before severe complications develop. Artificial intelligence has proven beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds as follows. First, the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm decomposes each Region of Interest (ROI) into components (BIMFs plus a residue). The VGG19 CNN architecture then extracts features from the decomposed BEMD components, and the features of each ROI are fused into a bag of features. Because these feature vectors are very long, Principal Component Analysis (PCA) is applied to reduce their dimensionality. The resulting bags of features are the inputs of a classifier based on the Support Vector Machine (SVM). The models were trained on two public datasets, ACRIMA and REFUGE, and tested on held-out parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. The model trained on REFUGE obtains overall accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively, while the model trained on ACRIMA obtains 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same datasets. These results across diverse datasets demonstrate the efficiency and robustness of the proposed approach, and a comparison with recent work in the literature shows a significant improvement by our proposal.
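To make the described pipeline concrete, the Python sketch below outlines the processing chain end to end. It is illustrative rather than the authors' implementation: decompose is a hypothetical Gaussian multiscale stand-in for the BEMD step (a true BEMD routine would replace it), the VGG19 backbone uses off-the-shelf ImageNet weights from Keras, ROIs are assumed to be 224x224 grayscale arrays, and the PCA size (n_pca) and SVM kernel are assumed values rather than the paper's settings.

import numpy as np
from scipy.ndimage import gaussian_filter
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for BEMD: a Gaussian multiscale decomposition that,
# like BEMD, splits an ROI into detail layers ("IMF"-like) plus a residue.
def decompose(roi_gray, n_layers=3, sigma0=1.0):
    layers, current = [], roi_gray.astype(np.float32)
    for k in range(n_layers):
        smoothed = gaussian_filter(current, sigma=sigma0 * 2 ** k)
        layers.append(current - smoothed)  # detail layer
        current = smoothed
    layers.append(current)                 # residue
    return layers

# Fixed feature extractor: ImageNet-pretrained VGG19 with global average
# pooling, yielding one 512-D vector per input image.
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")

def extract_features(component):
    # Rescale to [0, 255] and replicate to 3 channels, since the
    # pretrained VGG19 expects 8-bit RGB statistics.
    c = component - component.min()
    if c.max() > 0:
        c = c * (255.0 / c.max())
    x = np.repeat(c[..., None], 3, axis=-1)           # (H, W, 3)
    x = preprocess_input(x[None].astype(np.float32))  # (1, H, W, 3)
    return backbone.predict(x, verbose=0)[0]

def bag_of_features(roi_gray):
    # Fuse (concatenate) the features of every component of one ROI.
    return np.concatenate([extract_features(c) for c in decompose(roi_gray)])

# PCA for dimensionality reduction followed by an SVM classifier.
# n_pca is an assumed value and must not exceed the number of samples.
def train(rois_train, y_train, n_pca=64):
    X = np.stack([bag_of_features(r) for r in rois_train])
    clf = make_pipeline(StandardScaler(), PCA(n_components=n_pca),
                        SVC(kernel="rbf"))
    clf.fit(X, y_train)
    return clf

At test time, the same bag_of_features transform is applied to each test ROI, and the fitted pipeline's predict method returns the normal/glaucoma labels.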

Full Text

This preprint is available for download as a PDF.