Automatic Choroid Vascularity Index Calculation in Optical Coherence Tomography Images with a Low-Contrast Sclerochoroidal Junction Using Deep Learning

Choroidal vascularity index (CVI) is a new biomarker defined for retinal optical coherence tomography (OCT) images for measuring and evaluating the choroidal vascular structure. CVI is the ratio of the choroidal luminal area (LA) to the total choroidal area (TCA). The automatic calculation of this index is important for ophthalmologists but has not yet been explored. In this study, we propose a fully automated method based on deep learning for calculating CVI in three main steps: 1- segmentation of the choroidal boundary, 2- detection of the choroidal luminal vessels, and 3- computation of the CVI. The proposed method is evaluated in complex situations such as the presence of diabetic retinopathy and pachychoroid spectrum. In pachychoroid spectrum, the choroid is thickened and the boundary between the choroid and the sclera (sclerochoroidal junction) is blurred, which makes segmentation more challenging. The proposed method is designed based on the U-Net model, and a new loss function is proposed to overcome the segmentation problems. The vascular LA is then calculated using Niblack's local thresholding method, and the CVI value is finally computed. The experimental results for the segmentation stage, with the best-performing model and the proposed loss function, showed dice coefficients of 0.941 and 0.936 in diabetic retinopathy and pachychoroid spectrum patients, respectively. The unsigned boundary localization errors in the presence of diabetic retinopathy were 0.0020 and 0.0138 pixels for the BM boundary and the sclerochoroidal junction, respectively. Similarly, the unsigned errors in the presence of pachychoroid spectrum were 0.0072 and 0.0254 pixels for the BM boundary and the sclerochoroidal junction. The performance of the proposed method for calculating CVI was evaluated; the Bland-Altman plot indicated acceptable agreement between the values allocated by experts and the proposed method in the presence of diabetic retinopathy and pachychoroid spectrum.


Introduction
The choroid is a vascularized tissue located between the retina (the innermost layer of the eye) and the sclera (the outermost layer of the eye). It has the highest blood flow of all tissues in the human body. This layer is responsible for supplying blood to the outer parts of the retina and the optic nerve (1). Choroidal changes occur primarily or secondarily in many ocular diseases. The inspection of choroidal changes, combined with information from the retina, leads to a better understanding of the pathogenesis of diverse diseases and how to monitor responses to treatments (2).
Despite the valuable clinical information that has been gained through choroidal thickness (CT), it only represents the overall choroidal structure, providing no distinctions between the stromal and luminal vascular components (2).
In work related to the segmentation of the choroidal layer, patch-based approaches using CNNs were presented for the segmentation of the RPE and choroidoscleral interface (CSI) boundaries (20)(21)(22). In other studies, a combination of deep learning and graph cut algorithms was used for choroidal boundary segmentation (23,24). Mao et al. suggested a skip connection attention (SCA) block integrated into U-shaped architectures, such as U-Net and the context encoder network (CE-Net), for automated choroid layer segmentation (25). They showed that SCA embedded into CE-Net performs better for choroid layer segmentation. Xu et al. applied an automated method for PED segmentation in polypoidal choroidal vasculopathy using a deep neural network (26). Zheng et al. carried out choroid layer segmentation to obtain several evaluation parameters in swept-source OCT images from a healthy population; a residual U-Net was used to segment the choroidal boundaries (27). Table 1 summarizes the recent deep learning works for choroid layer segmentation and compares them from different aspects, such as dataset, device, proposed method, loss function, metrics, and results. As discussed above, the second task for automatic calculation of CVI is the detection of choroidal luminal vessels.
Histopathological assessment is the gold standard for investigating the choroidal vascular area. However, it is not a practical clinical tool, since it depends on autopsy or biopsy samples and suffers from post-fixation shrinkage and distortion, resulting in a lack of repeatability and reliability. On the other hand, it is almost impossible to label the images manually due to the complex structure and the substantial amount of time required (4,29).
The proposed method is designed to fulfill the three steps mentioned above with fully automated algorithms. For this purpose, we adopted Niblack thresholding to develop software that is clinically acceptable and can be used by clinicians, with results comparable to the most well-known works in this area.
A fully automated method with freely available code is proposed for the first time to calculate the CVI value in diabetic retinopathy and pachychoroid spectrum using deep learning methods.
The proposed modified U-Net can segment the choroid and BM boundaries in challenging cases such as low contrast images with thickened choroidal areas.
The proposed loss function (a weighted combination of dice loss (DL), weighted categorical cross-entropy (WCCE), and Tversky loss) is shown to be able to overcome highly imbalanced data (small foreground vs. large background).
The remainder of this paper is organized as follows: Section 2 presents the proposed method in detail, Section 3 describes experimental results, and Section 4 discusses the results. The source code of this method is publicly available at https://github.com/TaherehMahmoudi/OCT-CVI-DeepLearning.

OCT data and manual annotation
The data set used in this study was collected in Farabi Hospital, Tehran University of Medical Sciences, Tehran, Iran. EDI-OCT images were obtained using the RTVue XR 100 Avanti instrument (Optovue, Inc., Fremont, CA, USA). Patients with diabetic retinopathy were positioned appropriately, and equally spaced high-resolution EDI-OCT B-scans at 8 mm × 12 mm long raster patterns were captured. In patients with pachychoroid spectrum, the image acquisition protocol was based on EDI-OCT HD line 8 mm × 12 mm long scans. The characteristics of the datasets are summarized in Table 2.
For manual demarcation of choroidal boundaries as ground truth, raw OCT images were imported into ImageJ software (http://imagej.nih.gov/ij/), which is provided in the public domain by the National Institutes of Health, Bethesda, MD, USA. Choroidal borders of all images were delineated with the polygonal selection tool in the software toolbar. The RPE-Bruch's membrane complex (BM) and the sclerochoroidal border were selected as the upper and lower margins of the choroid, respectively. The edge of the optic nerve head and the most temporal border of the image were selected as the nasal and temporal margins of the choroidal area. All manual segmentations were conducted by a skilled grader (EKH) and verified by another independent grader (HRE). In case of any disputes, the outlines were segmented by consensus.

Manual calculation of CVI
According to the method introduced by Sonoda et al. (30), the total choroidal area was manually selected from the optic nerve to the temporal side of the image. The selected area was added as a region of interest (ROI) with the ROI manager tool. CVI was calculated in selected images from both diabetic and pachychoroid spectrum groups by randomly selecting three sample choroidal vessels with lumens larger than 100 µm using the oval selection tool in the toolbar. The average reflectivity of these areas was determined by the software. The average brightness was set as the minimum value to minimize the noise in the OCT image.
Then, the image was converted to 8 bits and adjusted by Niblack's auto local threshold. The binarized image was reconverted into an RGB image, and the luminal area was determined using the color threshold tool. The light pixels were defined as the choroidal stroma or interstitial area, and the dark pixels were defined as the luminal area. The total choroidal area (TCA), luminal area (LA), and stromal area (SA) were automatically calculated (30).
Herein, we refer to the ratio of LA to TCA as the choroidal vascularity index (CVI). The CVI is calculated separately for each image.
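As a minimal illustration of this ratio, the per-image CVI can be computed directly from the binary choroid and lumen masks; the function and argument names below are hypothetical and not taken from the released code:

```python
import numpy as np

def compute_cvi(choroid_mask, lumen_mask):
    """CVI = luminal area (LA) / total choroidal area (TCA).

    choroid_mask : boolean array, True inside the segmented choroid (TCA).
    lumen_mask   : boolean array, True on dark (luminal) pixels.
    """
    tca = np.count_nonzero(choroid_mask)
    # Count only luminal pixels that fall inside the choroid region.
    la = np.count_nonzero(lumen_mask & choroid_mask)
    return la / tca if tca else 0.0
```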

Fully automated Calculation of CVI Using Deep Learning
Fully automated calculations of CVI include three main steps: segmenting the choroidal layer, detecting the choroidal luminal vessels, and computing the CVI. These steps are elaborated in the following sections.

Automatic segmentation of the choroidal layer
In the proposed method, manual segmentation of the choroidal area was utilized to construct the ground truth masks.
The choroid area is of interest in this work; in a simple approach, two-class segmentation would divide the image into the choroid area (foreground) and the rest of the image (background), which is the most prevalent technique used in similar works. However, we found that the backgrounds above and below the choroid area have different textures because they correspond to different anatomical structures. Accordingly, we also considered a three-class approach to account for the intrinsic differences of the ocular structures. The three-class approach yielded better results after being trained on the same data, and we used it as our final segmentation approach in the subsequent stages and CVI calculation.
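One way to construct such a three-class label map from the two annotated boundary curves is sketched below; this is an illustrative reconstruction under the assumption that each column of the B-scan carries one row index per boundary, and the actual mask-generation code may differ:

```python
import numpy as np

def three_class_mask(height, width, bm, junction):
    """Build a three-class label map from two boundary curves.

    bm[x] and junction[x] give, per image column, the row index of the
    BM boundary and the sclerochoroidal junction (bm < junction).
    Class 0: region above BM; class 1: choroid; class 2: region below
    the sclerochoroidal junction.
    """
    rows = np.arange(height)[:, None]                 # shape (height, 1)
    mask = np.ones((height, width), dtype=np.uint8)   # default: choroid (class 1)
    mask[rows < np.asarray(bm)[None, :]] = 0          # above BM
    mask[rows >= np.asarray(junction)[None, :]] = 2   # below the junction
    return mask
```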
All original images (Figure 1-a) were cropped to set the most nasal side of the macula on the margin of the optic disc (Figure 1-a1), thus eliminating the unwanted areas of the image. Two distinct models (two-class and three-class approaches) were trained separately on the cropped images.
In the two-class segmentation approach, all pixels of the choroidal area (target class) were replaced by white pixels (Figure 1-b2).
In the three-class segmentation approach, the cropped image was divided into three regions (Figure 1-c2).
The block diagram of the proposed approach is depicted in Figure 1.

Modified Loss Function
One of the main issues in the segmentation of a relatively thin layer from a whole image is handling the unbalanced data. Standard losses such as cross-entropy loss and dice loss usually fail on such occasions, and some new substitutes have already been introduced to solve the problem.
For instance, the dice coefficient treats false negatives (FNs) and false positives (FPs) equally, which on imbalanced data tends to yield higher recall at the cost of precision. Tversky loss was proposed (32) to overcome this issue by weighing FNs more heavily than FPs.
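A minimal NumPy sketch of the Tversky loss described above (the study trains in Keras on tensors; this standalone version only illustrates the asymmetric FN/FP weighting, and the alpha/beta values shown are the common defaults from (32), not necessarily those used in this study):

```python
import numpy as np

def tversky_loss(y_true, y_pred, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss on soft predictions in [0, 1].

    With beta > alpha, false negatives are penalized more heavily than
    false positives, raising recall on a small foreground class.
    alpha = beta = 0.5 recovers the dice loss.
    """
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    tp = np.sum(y_true * y_pred)            # soft true positives
    fp = np.sum((1.0 - y_true) * y_pred)    # soft false positives
    fn = np.sum(y_true * (1.0 - y_pred))    # soft false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```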
On the other hand, WCCE is introduced to replace the most famous loss function in classification problems, categorical cross entropy (CCE). WCCE weighs the foreground more heavily than the background (choroid area vs. the whole image) and can be useful for imbalanced data. To design the weights of WCCE in (j = 3)-class segmentation, we used the following equation, where w_j is the weight vector in WCCE, f_j is the ratio of the area corresponding to class j to the whole area, and β is a hyperparameter (β∈[0,1)) chosen empirically as 0.8 in this study.
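The paper's exact weight equation is not reproduced in this text; as a stand-in consistent with the quantities defined above (class frequency f_j and damping hyperparameter β), the sketch below uses inverse-frequency weighting, w_j ∝ 1 / f_j^β, normalized so the weights sum to the number of classes. This formula is an assumption for illustration only:

```python
import numpy as np

def wcce_weights(masks, beta=0.8):
    """Per-class weights for weighted categorical cross-entropy.

    masks : one-hot array of shape (..., num_classes).
    NOTE: the weighting formula here (w_j proportional to 1/f_j**beta)
    is an assumed stand-in, not the equation from the paper.
    """
    # f_j: fraction of all pixels belonging to class j
    f = masks.reshape(-1, masks.shape[-1]).mean(axis=0)
    w = 1.0 / np.power(f, beta)        # rare classes get larger weights
    return w * len(f) / w.sum()        # normalize: weights sum to num_classes
```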
Finally, considering that the desired segmented region (choroid area) has an interconnected structure and that the anatomical background does not yield disjoint sections, total variation (TV) loss is expected to model these characteristics.
In order to choose among the mentioned loss functions, we investigated the results of using each individual loss function as well as their combinations. For evaluating the accuracy of the segmentation, the predicted masks of the choroid layer are compared with the manual masks, and the dice similarity coefficient (DSC) is calculated as DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the predicted and ground truth masks, respectively. The average DSC over all B-scans of the test dataset is reported in Section 3. Furthermore, the mean absolute error between the predicted boundaries and the manually annotated boundaries is calculated and reported.
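The DSC defined above can be computed directly from a pair of binary masks, for instance:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```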
The following steps were performed to measure CVI in pachychoroid spectrum and diabetic retinopathy subjects:
1- Calculation of the BM boundary and sclerochoroidal junction based on the choroidal layer mask from the U-Net architecture, as elaborated above.
2- Noise reduction using the non-local means algorithm with a filter strength of 10 (34).
3- Vascular LA detection using Niblack's auto local threshold method in Python. The selected parameters for Niblack's method are summarized in Table 3 to make the provided code reproducible. The parameters were empirically adjusted to resemble the gold-standard values as closely as possible on the training dataset.
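To make the thresholding step concrete, the following pure-NumPy sketch implements Niblack's local threshold (T = local mean + k · local std) and marks pixels darker than T as luminal area. The window size and k below are illustrative placeholders, not the parameters from Table 3; in practice a library routine such as scikit-image's `threshold_niblack`, preceded by non-local means denoising (e.g., OpenCV's `fastNlMeansDenoising` with h = 10), would be used instead of this naive loop:

```python
import numpy as np

def niblack_lumen_mask(img, window=15, k=-0.2):
    """Niblack local thresholding: T(x, y) = mean + k * std over a
    window x window neighborhood. Returns a boolean mask where dark
    pixels (candidate luminal area) are True. Parameters are
    illustrative, not the study's Table 3 values."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    lumen = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            t = patch.mean() + k * patch.std()
            lumen[y, x] = img[y, x] < t  # dark pixel => luminal area
    return lumen
```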

Results
The software environment used in this study consists of Keras in Python 3. The boundary positioning errors for the pachychoroid spectrum dataset are reported in Table 4. These errors are normalized by dividing them by the input image size of the model. The experimental results with different batch sizes, input image sizes, and train-test split ratios are reported in the appendix.

Choroidal boundary segmentation in the diabetic retinopathy dataset
Similar to the pachychoroid spectrum dataset, the segmentation results are shown in Figure 4, and the boundary positioning errors are reported in Table 5. The experimental results with different batch sizes, input image sizes, and train-test split ratios are again reported in the appendix. In addition, the DSC for each proposed combination of loss functions is shown in Figure 5 to provide a visual comparison between different loss functions. The results indicate that the best performance is exhibited by the three-class segmentation with the best-chosen combination of loss functions (DL, WCCE, and Tversky losses). Sample results using the best-selected model are demonstrated in Figure 6 for diabetic retinopathy and pachychoroid spectrum data.

Vascular LA segmentation CVI measurement
The results of vascular LA segmentation using Niblack's method are shown in Figure 7. Automated and manual CVI for pachychoroid spectrum and diabetic retinopathy were measured, and the Bland-Altman limits of agreement between manual and automated measurements were calculated for both datasets. Bland-Altman plots were used to graphically represent the agreement between manual and automated methods in CVI measurements, as shown in Figure 8. Limits of agreement between the automated and manual methods ranged from -0.168 to 0.120 and from -0.080 to 0.095 for pachychoroid spectrum and diabetic retinopathy data, respectively.
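The limits of agreement quoted above follow the standard Bland-Altman construction, mean difference ± 1.96 × SD of the paired differences; a minimal sketch (function name and sample values are hypothetical):

```python
import numpy as np

def bland_altman_limits(manual, automated):
    """95% Bland-Altman limits of agreement between two raters:
    bias ± 1.96 * SD of the paired differences."""
    diff = np.asarray(automated, float) - np.asarray(manual, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)  # sample standard deviation
    return bias - spread, bias + spread
```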

Discussion
The recently introduced metric, CVI, was developed in response to the need for choroidal vasculature assessments that are more reliable and precise than previous measures such as CT and choroidal vessel diameter, which have limitations. CVI displayed less variability than CT and was influenced by fewer physiologic factors in previous studies, showing that it is a generally stable biomarker for studying choroidal modifications (4-6).
The need for automatic calculation of CVI is undeniable given the high number of OCT scans; moreover, there is no fully automatic method with freely available code to address this issue. In the current study, a U-Net architecture was used to segment the RPE boundary and the sclerochoroidal junction. A tailored set of loss functions combining four classic losses (dice loss (DL), weighted categorical cross-entropy (WCCE), total variation (TV), and Tversky) was examined to manage the imbalanced data involved in segmenting small regions of an image. Subsequently, the task was inspected using two-class and three-class segmentation approaches. At first, we carried out a two-class approach: one class was the choroid area, and the rest of the image was the other class. As the regions above and below the choroid have different textures, we adopted a three-class approach as the final approach.
Experimental results showed that the three-class approach performed better at segmenting the choroid area, and we used this network for the final segmentation to obtain the CVI. Experimental results with three-class segmentation using the combined loss (DL, WCCE, and Tversky) showed dice coefficients of 0.941 and 0.936 in diabetic retinopathy and pachychoroid spectrum patients, respectively (Figure 5).
In the presence of diabetic retinopathy, the unsigned boundary localization errors are 0.0020 and 0.0138 pixels for the BM boundary and sclerochoroidal junction, respectively. However, in the presence of the pachychoroid spectrum, the unsigned errors are 0.0072 and 0.0254 pixels for BM and sclerochoroidal junction, respectively (Tables 4 and 5).
Then, Niblack's auto local threshold is adopted to calculate the vascular LA in accordance with clinically approved publications. The CVI value is then calculated based on this stage, and the Bland-Altman plot indicated an acceptable agreement between manually allocated values and the proposed method for CVI calculation.
The results of our study showed that the designed deep learning algorithm can detect RPE with higher accuracy than sclerochoroidal junction due to the higher intensity of RPE signal in OCT images compared to the sclerochoroidal junction.
Previous studies were limited to normal OCT scans, whereas the challenge of segmenting blurred boundaries is addressed in this work. There is only one available code, called OCT-tool (35), that can be used to segment the choroidal layer. Using this code, the BM boundary of each test image is detected automatically; however, the choroid boundary must be delineated manually by adding control points, and adding enough control points to fit the choroid layer takes a long time. In the current study, this layer is detected automatically with high accuracy, which saves considerable time.
Due to the uncertainty of the sclerochoroidal junction, choroidal segmentation is difficult in patients with a thick choroid, such as patients with pachychoroid spectrum. Therefore, in this study, in addition to diabetic patients, a group of patients with pachychoroid spectrum was included. Moreover, we considered an ROI extending from the optic nerve to the temporal side of the image to get a more precise estimate of CVI. While Agrawal et al. (6) demonstrated that subfoveal CVI can accurately reflect the CVI of the entire macular area in healthy individuals, this assertion does not hold in pachychoroid disease or in ethnic groups with thicker choroids.
The limitations of our study include the following. This study was performed on just a small number of OCT scans and included only two disease entities: diabetic retinopathy and pachychoroid spectrum. At best, the accuracy of our algorithm is in the range of 10 microns around the sclerochoroidal junction in both groups of patients with diabetic retinopathy and pachychoroid spectrum. Although this accuracy seems acceptable in patients in the current study, it may not be satisfactory in patients with a very thin choroid, such as patients with age-related macular degeneration. In conclusion, our limited experience showed that a deep learning-based algorithm for the automatic calculation of CVI in OCT images of patients suffering from diabetic retinopathy and pachychoroid spectrum can yield good performance in this regard. Larger-scale studies on different retinochoroidal diseases with more OCT scans captured with different devices can improve automated measurements of CVI, which is a useful biomarker for diagnosis and prognostication.

Declarations
Competing interest statement The authors declare no competing interests.

Data and code Availability
The data and code are publicly available at https://github.com/TaherehMahmoudi/OCT-CVI-DeepLearning.

Figure 1
Figure 1: Block diagram of the proposed method. The unwanted area of the original image is cropped in (a1). Two approaches (2-class (b1,2) and 3-class (c1,2)) are used for segmentation of the choroid area using U-Net (d). After denoising of the obtained image (e), vascular LA detection is carried out using Niblack's auto local threshold (f). CVI is finally calculated as the ratio of the LA to the TCA.
Figure 6: Illustrations of four samples with final segmentation using the three-class approach with a combination of three loss functions (DL + WCCE + Tversky). The first row belongs to the diabetic retinopathy dataset, and the second row shows results from the pachychoroid spectrum dataset.
Figure 8: Bland-Altman plot for manual and automated measurement of CVI in a) pachychoroid spectrum and b) diabetic retinopathy data.

Supplementary Files
This is a list of supplementary files associated with this preprint. Click to download. Appendix.docx