DOI: https://doi.org/10.21203/rs.2.15340/v1
Background: In the pathological examination of pancreaticoduodenectomy specimens for pancreatic head adenocarcinoma, a resection is classified as R0 when the resection margins contain no cancer cells within 1 mm, and as R1 when cancer cells are present within 1 mm of the margins. Pathological examination of resection margins is complicated and depends to some extent on the subjective experience of the physician. This study aims to design a computer-aided diagnosis (CAD) system based on texture features of preoperative computed tomography (CT) images to evaluate whether a resection margin is R0 or R1.
Methods: This study retrospectively analyzed 86 patients diagnosed with pancreatic head adenocarcinoma by preoperative abdominal CT examination. These patients underwent pancreaticoduodenectomy, and their resection margins were pathologically diagnosed as R0 or R1. The CAD system consists of five stages: (i) delineate and segment regions of interest (ROIs); (ii) fit the ROIs to rectangular regions by solving discrete Laplace equations with Dirichlet boundary conditions; (iii) enhance the textures of the ROIs by combining wavelet transform and fractional differentiation; (iv) extract texture features by combining wavelet transform and statistical analysis methods; (v) reduce features using principal component analysis (PCA) and classify using a support vector machine (SVM), with a linear kernel and leave-one-out cross-training and testing to reduce overfitting. The Mann-Whitney U-test is used to explore associations between texture features and histopathological characteristics.
Results: The developed CAD system achieved an AUC (area under the receiver operating characteristic curve) of 0.8614 and an accuracy of 84.88%. With p ≤ 0.01 in the Mann-Whitney U-test, two run-length matrix features, derived from the diagonal subbands of the wavelet decomposition, showed statistically significant differences between R0 and R1.
Conclusions: The results indicate that the developed CAD system is effective for discriminating R0 from R1. Texture features can potentially enhance physicians' diagnostic ability.
Pancreaticoduodenectomy is the main treatment for pancreatic head adenocarcinoma. Preoperative assessment of cancer resection and the extent of excision helps in choosing optimal therapies for patients, so evaluating the resection margin of pancreaticoduodenectomy is very important. In the pathological examination of pancreaticoduodenectomy specimens for pancreatic head adenocarcinoma, a resection is classified as R0 when the resection margins contain no cancer cells within 1 mm, and as R1 when cancer cells are present within 1 mm of the margins. Pathological examination of resection margins is complicated and depends to some extent on the subjective experience of the physician. This study aims to design a computer-aided diagnosis (CAD) system based on texture features of preoperative computed tomography (CT) images to evaluate whether a resection margin is R0 or R1.
Intratumoral heterogeneity is generally considered a typical finding of malignancy. It reflects variations in tumor-cell differentiation, extracellular matrix, cellularity, and angiogenesis [1]. In recent years, non-invasive techniques based on texture analysis have been widely used to quantify tumor heterogeneity by evaluating spatial variations of gray-level in images, and have thus been applied to lesion-related aided diagnosis, efficacy evaluation, and prognosis [2]. This approach is termed radiomics [3–4]. CT is a commonly used examination for the diagnosis of pancreatic head adenocarcinoma. To the best of our knowledge, there is currently no radiomic research on the differential diagnosis of R0 and R1 using texture features. However, there have been similar texture feature-based radiomic studies of pancreatic cancer on portal-venous phase CT images. In 2017, Cassinotto C et al. [5] used a Laplacian of Gaussian (LoG) filter and histogram to extract texture features to evaluate pathologic tumor aggressiveness and predict disease-free survival in patients with resectable pancreatic adenocarcinoma; Eilaghi A et al. [6] used the gray-level co-occurrence matrix (GLCM) to extract texture features and assess whether CT-derived texture features predict survival in patients undergoing resection for pancreatic ductal adenocarcinoma; Chakraborty J et al. [7] used the histogram, GLCM, gray-level run-length matrix (GLRLM), angle co-occurrence matrix (ACM), etc. to extract texture features to predict 2-year survival of pancreatic ductal adenocarcinoma (PDAC). In 2018, Canellas R et al. [8] used a LoG filter and histogram to extract texture features to assess whether CT texture analysis and CT features are predictive of pancreatic neuroendocrine tumor grade based on the World Health Organization classification, and to identify features related to disease progression after surgery; Qiu JJ et al. [9] used the histogram, GLCM, wavelet transform, and their combinations to extract texture features of non-enhanced CT images to explore the feasibility of discriminating pancreatic cancer from normal pancreas. In 2019, Cheng SH et al. [10] used a LoG filter and histogram to extract texture features to determine whether CT texture analysis measurements of the tumor are independently associated with progression-free survival and overall survival in patients with unresectable PDAC. These studies used texture features to establish regression, neural network, support vector machine, Bayesian, and other models for classification and prediction.
We evaluated whether an operation achieved R0 or R1 resection based on the surgical margins in portal-venous phase CT images, and investigated differences in histopathological characteristics between R0 and R1 using statistical significance tests of texture features. This study was approved by the Ethics Committee of West China Hospital of Sichuan University (trial registration: NCT02928081).
In an R0 or R1 resection margin, the region of interest (ROI) is an irregular strip-shaped area whose structure contains complex internal details such as capillary distribution, cancer cell tissue, and pancreatic cell tissue. Statistical texture analysis methods are appropriate for this, and multi-resolution texture analysis methods perform well in extracting detail features. However, both types of methods are limited by irregular strip-shaped and small ROIs. We developed a CAD system that relieves these limitations and classifies R0 and R1. Figure 1 illustrates the framework of the CAD system, which consists of five stages.
Stage 1: obtain ROIs by preprocessing patients’ CT images.
Stage 2: fit the ROIs to rectangular regions by solving discrete Laplace equations with Dirichlet boundary conditions.
Stage 3: enhance the textures of the rectangular ROIs by combining wavelet transform and fractional differentiation.
Stage 4: extract texture features from the enhanced ROIs by combining wavelet transform and statistical analysis methods.
Stage 5: reduce features using principal component analysis (PCA) and perform classification using support vector machine (SVM) (use a linear kernel function and leave-one-out cross-training and testing to reduce overfitting).
This study retrospectively analyzed 86 patients diagnosed with pancreatic head adenocarcinoma by preoperative abdominal CT examination at West China Hospital from October 2015 to March 2018. These patients underwent pancreaticoduodenectomy, and postoperative pathological examinations confirmed pancreatic head adenocarcinoma. The cases (34 R0 and 52 R1) were screened based on the NCCN guidelines for diagnostic criteria and standard surgical procedures.
Abdominal plain and enhanced scans were performed using a 64-slice spiral CT scanner (GE, USA). The collimator was set to 0.625 mm, FOV to 350 mm × 350 mm, tube voltage to 120 kV, tube current–time product to 160 mAs, and slice thickness to 1.250 mm. In enhanced scanning, iopamidol was injected via the cubital vein at a flow rate of 3 ml/s and a dose of 90–100 ml, with a delay of 25–30 s for portal-venous phase scanning. We used portal-venous phase CT images as the objects of radiomic analysis. CT images were exported as 8-bit grayscale images (gray-level range [0, 255]).
The steps of delineating and segmenting are as follows: 1) choose three portal-venous phase CT images from each case, located at the top, middle, and bottom of the tumor (Figure 2 [11] illustrates the locations); 2) delineate resection margins around the portal veins on the chosen images, as shown in Figure 3; to ensure signal authenticity, the delineated resection margins exclude stent edges and metal artifacts; 3) segment the delineated areas into ROIs using region growing.
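Region growing itself is not detailed in the text; a minimal sketch of the idea, assuming a fixed gray-level tolerance and 4-connectivity (both illustrative choices not specified by the authors), could look like:

```python
from collections import deque

import numpy as np


def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected pixels whose
    gray-level differs from the seed value by at most `tol`."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(int(image[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask


# Toy example: a bright strip (like a delineated margin) on a dark background.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, :] = 200
roi = region_grow(img, seed=(2, 2), tol=10)
```

In practice the seed would come from the physician's delineation, and the tolerance would be tuned to the margin's gray-level range.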
Two physicians, each with 10 years of experience in abdominal CT diagnosis, delineated all resection margins. The first physician delineated the resection margins and repeated the delineations after 2 weeks to control for measurement deviation. The second physician delineated the resection margins once, to assess whether his delineations were consistent with those of the first physician.
As can be seen from Figure 3, ROIs are irregular strip-shaped regions, whereas an image is a two-dimensional signal organized in rows and columns. We fitted the strip-shaped ROIs to rectangular ROIs by solving discrete Laplace equations with Dirichlet boundary conditions; the fitting method is abbreviated as LD. LD has been successfully applied to signal fitting [12–14] and can recover missing information in an image well. The discrete Laplace equation is defined in Eq. (1).
f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y) = 0    (1)
Eq. (1) shows that a linear equation can be established based on the 4-neighborhood of a point to be fitted. The region to be fitted is called a mask. If a pixel lies on an edge of the mask, then at least one of its neighbors (on the Dirichlet boundary) is known. A set of linear equations can be established along the Dirichlet boundary (along the edges of the mask), and the values of the pixels to be fitted are obtained by solving it. The solving procedure is then extended into the interior of the mask. Figure 4 shows a mask to be fitted and its boundaries. Figure 5 illustrates two fitting examples.
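A minimal numerical sketch of this fitting idea: masked pixels iteratively take the mean of their 4 neighbors (a Jacobi-style relaxation of the discrete Laplace equation), with known pixels serving as Dirichlet boundary values. The iterative solver and iteration count are illustrative assumptions; the paper does not specify how it solves the linear system:

```python
import numpy as np


def laplace_fill(image, mask, iters=500):
    """Fill masked pixels by iteratively enforcing the discrete Laplace
    equation (each masked pixel becomes the mean of its 4 neighbours),
    with known pixels acting as Dirichlet boundary values. Assumes the
    mask lies in the interior of the image."""
    f = image.astype(float).copy()
    f[mask] = f[~mask].mean()  # rough initial guess inside the mask
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = avg[mask]  # only masked pixels are updated
    return f


# Toy example: a linear gray-level ramp with an unknown 2x2 interior block.
# Linear functions are harmonic, so the fill should recover the ramp exactly.
img = np.tile(np.array([0.0, 10.0, 20.0, 30.0]), (4, 1))
mask = np.zeros_like(img, dtype=bool)
mask[1:3, 1:3] = True
filled = laplace_fill(img, mask)
```

A direct sparse solve (one equation per masked pixel) would be equivalent and faster for large masks; the relaxation form is just the shortest way to show the mechanics.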
In histopathology, an ROI of R1 contains cancer cells, parts of its tissue are more compact, and its capillary distribution is sparser; an ROI of R0 contains no cancer cells, only pancreatic tissue, and its capillary distribution is more abundant [15–16]. However, these differences are qualitative details that are difficult to observe visually in CT images. Multi-resolution analysis methods are advantageous in local time-frequency analysis and are appropriate for deriving detail characteristics, while statistical analysis methods can usually derive representative mathematical descriptors; it can be inferred that both are appropriate here. They are two types of texture analysis methods frequently used in radiomics, including recent radiomic studies related to pancreatic cancer [4–10]. Furthermore, to improve the performance of these texture analysis methods, the CAD system enhances the textures of the fitted ROIs before extracting texture features. The main purpose of texture enhancement is to highlight high-frequency contour information (detail information, that is, regions where gray-levels vary more widely or change more quickly) while preserving low-frequency smoothing information as much as possible. Traditional enhancement methods such as histogram equalization, integer-order differentials, and frequency-domain enhancement filters increase contrast or highlight contours, but they lose much low-frequency texture information and tend to over-sharpen contours. In recent years, applying fractional differentials in medical image processing has compensated for this heavy loss of low-frequency information, making them an effective method for texture enhancement [17–19]. Our design takes these three factors into account.
We designed a texture enhancement method based on the Grünwald-Letnikov (G-L) fractional differential definition and the wavelet transform [20–21]. The enhancement method is abbreviated as WF and is illustrated in Figure 6.
Step 1: Decompose an ROI into 4 components using the wavelet transform [22]: H (horizontal), V (vertical), and D (diagonal), which represent the high-frequency components, and A (approximate), which represents the low-frequency component. The approximate component can be decomposed again.
Step 2: Convolve each high-frequency component (including all high-frequency components in decompositions of all levels) with a fractional differential operator M.
Step 3: Perform wavelet inverse transform based on the convolution results of Step 2 and the approximate component in the last-level decomposition.
The inverse wavelet transform reconstructs the ROI, yielding the enhanced ROI. In the WF method, the fractional differential operator M is constructed as follows:
1) Discretize the G-L definition: Eq. (2) is the v-order G-L fractional derivative of f(x) on [a, t], where Γ(·) is the Gamma function. Discretize the continuous interval [a, t] uniformly with unit interval h, where n = [(t − a)/h]. Since Γ(n) = (n − 1)! = Γ(n + 1)/n, Eq. (3) can be derived.
(d^v/dt^v) f(t) = lim_{h→0} h^(−v) Σ_{m=0}^{n} (−1)^m [Γ(v+1) / (m! Γ(v−m+1))] f(t − mh),  n = [(t − a)/h]    (2)
(d^v/dt^v) f(t) ≈ h^(−v) Σ_{m=0}^{n−1} [Γ(m − v) / (Γ(−v) Γ(m+1))] f(t − mh)    (3)
2) Expand Eq. (3): with h = 1 (unit interval), Eq. (4) can be derived.
(d^v/dt^v) f(t) ≈ f(t) + (−v) f(t−1) + [(−v)(−v+1)/2] f(t−2) + [(−v)(−v+1)(−v+2)/6] f(t−3) + …    (4)
We constructed a fractional differential operator named M based on the expanded coefficients of Eq. (4). Figure 7 demonstrates the operator M, which performs fractional differential operations in eight symmetric directions within a 5×5 neighborhood. The value c at the center position is an adjustable parameter called the compensation parameter. In experiments, the order v and the parameter c can be adjusted appropriately. Figure 5 illustrates two enhancement examples using the WF method.
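The expansion coefficients and one plausible construction of the operator M can be sketched as follows. The exact mask layout is not given in the text, so the placement below (the first three expansion coefficients along each of the eight directions, with the center holding the eight zeroth-order terms plus the compensation parameter c) is an assumption for illustration:

```python
import math

import numpy as np


def gl_coeffs(v, n):
    """First n Gruenwald-Letnikov expansion coefficients:
    a_0 = 1, a_m = a_{m-1} * (m - 1 - v) / m
    (equivalently (-1)^m * Gamma(v+1) / (m! * Gamma(v-m+1)))."""
    a = [1.0]
    for m in range(1, n):
        a.append(a[-1] * (m - 1 - v) / m)
    return a


def build_operator(v, c=0.0):
    """5x5 fractional differential mask acting in 8 symmetric directions.
    Assumed layout (not specified in the paper): each direction carries
    a_1 at distance 1 and a_2 at distance 2 from the centre; the centre
    holds 8 * a_0 plus the compensation parameter c."""
    a0, a1, a2 = gl_coeffs(v, 3)
    M = np.zeros((5, 5))
    for dr, dc in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        M[2 + dr, 2 + dc] = a1          # distance-1 ring
        M[2 + 2 * dr, 2 + 2 * dc] = a2  # distance-2 ring
    M[2, 2] = 8 * a0 + c
    return M


M = build_operator(v=0.5, c=0.0)
```

For v = 0.5 the first coefficients are 1, −0.5, −0.125, so the mask damps nothing at the center while weighting neighbors negatively, which is what sharpens high-frequency detail.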
Deep learning algorithms have made significant progress in image pattern recognition. However, these algorithms are limited by problems such as small samples and small targets [23–24]. Moreover, deep learning algorithms lack pertinence in the quantitative analysis of ROIs, which requires analyzing relations between quantitative data and clinical outcomes, or between quantitative data and histopathological characteristics. Therefore, ROI-based radiomics remains the main approach of CAD systems based on medical images.
As described above, extracting texture features with multi-resolution analysis methods and statistical analysis methods is recommended; indeed, several similar studies used these two types of methods [5–10]. We repeated these texture analysis methods in our experiments to compare them with the method proposed in this paper, which combines the two types of analysis in order to better describe detail characteristics.
We used a texture analysis method that combines wavelet transform and statistical methods (histogram, co-occurrence matrix, and run-length matrix). Reverse biorthogonal wavelets (rbio) are compactly supported biorthogonal spline wavelets for which symmetry and exact reconstruction are possible with FIR (finite impulse response) filters. An rbio wavelet was used in this research. The steps of feature extraction are as follows:
Step 1: Perform a wavelet transform on a fitted and enhanced ROI; each decomposition yields 4 components, and each component is uniquely expressed by a coefficient matrix.
Step 2: Convert each high-frequency component to a grayscale image, called a subband image.
In the coefficient matrix of a high-frequency component, elements with larger absolute values usually represent singular points (meaning fast, large changes). First, the absolute values of the coefficient matrix are calculated. Then, its elements are linearly discretized into the gray-level range [0, 255] according to the minimum and maximum values of the matrix. The calculations are shown in Eqs. (5)-(7), where C is the coefficient matrix and D is the discretized matrix (subband image).
B_ij = |C_ij|    (5)
S_ij = (B_ij − min(B)) / (max(B) − min(B))    (6)
D_ij = round(255 · S_ij)    (7)
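A sketch of this discretization (absolute value of the coefficients, then linear rescaling to [0, 255]), assuming the coefficient matrix is not constant so the min–max range is nonzero:

```python
import numpy as np


def subband_to_image(C):
    """Discretise a wavelet coefficient matrix C into an 8-bit subband
    image: take absolute values, then linearly rescale to [0, 255]."""
    B = np.abs(C)                      # larger magnitude = stronger change
    lo, hi = B.min(), B.max()
    scaled = (B - lo) / (hi - lo)      # linear min-max normalisation
    return np.round(255.0 * scaled).astype(np.uint8)


# Toy coefficient matrix: the strongest coefficient maps to 255.
C = np.array([[-4.0, 0.0], [2.0, 1.0]])
D = subband_to_image(C)
```

Note that the sign of a coefficient is discarded; only the oscillation magnitude survives into the subband image, which is what the run-length and histogram statistics then operate on.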
Step 3: Extract features from the subband images using histogram, co-occurrence matrix, and run-length matrix. Considering that subband images are small, the pixel gray-levels are rescaled before the statistical methods are applied.
Reducing features can usually improve classification performance. We used principal component analysis (PCA) for feature reduction and limited the number of features to reduce overfitting. Empirically, it is appropriate for the number of features to be 1/5 to 1/10 of the number of samples, and a linear classifier allows more features.
The support vector machine (SVM) [25] is widely used due to its outstanding performance in small-sample pattern recognition problems. To reduce overfitting, we used a linear kernel and leave-one-out cross-training and testing. A linear-kernel SVM allows more features without easily overfitting. In the vast majority of cases, especially in small-sample classification problems, the model evaluated by the leave-one-out method is close to the model trained on the full training data; thus, leave-one-out evaluation results are often considered more accurate [26].
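A sketch of this evaluation pipeline on synthetic data (the study's actual feature matrix is not available here; the class sizes mirror the 34 R0 / 52 R1 split, and the feature count, PCA dimension, and separation are illustrative). PCA is refitted inside every training fold so that the held-out sample never leaks into the reduction:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def loo_accuracy(X, y, n_components):
    """Leave-one-out evaluation of a PCA + linear-kernel SVM pipeline.
    The whole pipeline is refitted on each training fold."""
    correct = 0
    for train, test in LeaveOneOut().split(X):
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=n_components),
                              SVC(kernel="linear"))
        model.fit(X[train], y[train])
        correct += int(model.predict(X[test])[0] == y[test][0])
    return correct / len(y)


# Synthetic stand-in for the 86-sample texture-feature matrix.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (34, 20)), rng.normal(2, 1, (52, 20))])
y = np.array([0] * 34 + [1] * 52)  # 0 = R0, 1 = R1
acc = loo_accuracy(X, y, n_components=8)
```

With 86 samples, the 1/5 to 1/10 rule of thumb quoted above puts the reduced feature count somewhere between roughly 9 and 17; the 8 components here are simply a round number in that neighborhood.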
For comparison, we also applied several texture analysis methods from similar studies, together with the same PCA-based feature reduction and linear SVM-based classification. These methods are shown in Table 1.
Considering that ROIs are small, the experiments performed a 1-level wavelet decomposition and set the distances of the co-occurrence matrix to 1 and 2. Feature values of the 4 directions (0°, 45°, 90°, and 135°) of a co-occurrence matrix were averaged, as were those of the run-length matrix. The wavelet transform must be performed on full rows and columns, so before applying the WT and WT-HCR methods we filled the ROIs into valid matrices by interpolation: linear interpolation first, then nearest-neighbor interpolation for the remaining missing values. Reference [7] used multiple texture analysis methods and obtained its best performance with the ACM methods; we therefore used the ACM-D and ACM-M methods separately. The LD-WF method used a reverse biorthogonal wavelet, rbio2.8, selected through multiple experiments. Figure 8 illustrates two examples of decomposing ROIs using rbio2.8.
Binary classification results can be expressed with a confusion matrix; R1 is the positive class and R0 the negative class. Table 2 shows the experimental results. The LD-WF method achieves the best classification performance, with an accuracy of 84.88% and an AUC of 0.8641, followed by the LOG-GH method and the CTM method. Although the accuracy of CTM is lower than that of LOG-GH, its AUC is higher. The ROC (receiver operating characteristic) curve and AUC (area under the ROC curve) are powerful indicators for evaluating a binary classifier; they illustrate its diagnostic ability as the discrimination threshold changes.
Figure 9 illustrates the ROC curves of these methods. The classifier based on the LD-WF method approaches the upper-left corner faster, followed by the classifier based on the CTM method and the classifier based on the LOG-GH method.
To investigate the discriminative power of texture features between R0 and R1, we performed Mann-Whitney U-tests on the texture features extracted by the LD-WF method. Table 3 shows the features with p ≤ 0.05, which usually indicates statistically significant differences between the two types of samples (R0 and R1). It demonstrates that the middle and bottom ROIs present more differences in texture features, and that the diagonal subband image expresses more characteristic differences in details. The p-values of F4 and F6 are ≤ 0.01, indicating extremely significant differences between the two types of samples.
Radiomics uses computational methods such as computer vision and machine learning for digital medical image processing, deeply mining the tissue-level and molecular-level heterogeneity data contained in medical images such as CT images [2, 27–28]. In CT imaging, X-rays penetrating different media undergo different attenuations, forming different gray-levels; grayscale patterns in CT images should therefore reflect pathological changes in the body. An R0 resection margin does not contain pancreatic head adenocarcinoma cells, whereas an R1 resection margin does. Histopathologically, an R1 resection margin contains a large amount of normal pancreatic tissue and some tumor tissue, and its capillary distribution is sparser than that of an R0 resection margin; an R0 resection margin contains only normal pancreatic tissue and its capillary distribution is more abundant. Thus, characteristics of internal details can better characterize the two types of samples. Analogous to the wavelet transform, LOG-GH is also a multi-scale analysis method, and both types of methods are suitable for characterizing detail characteristics. The classification results show that the multi-resolution and multi-scale analysis methods perform better.
In addition, it is necessary to address issues such as the irregular strip shape of ROIs and the atypical manifestation of details (macroscopically difficult to distinguish). The CAD system in this research used the LD-WF method to process the ROIs, fitting them and enhancing their textures, and then combined wavelet transform and statistical methods to extract descriptors of detail characteristics. The experimental results indicated that such processing markedly improved classification performance.
We expect that some texture features should be able to reflect these differences. Three features were selected in ascending order of p-values; Table 3 shows them in bold: F4, F6, and F9. To test whether their values were larger or smaller in R1, right-tailed hypothesis tests based on the Wilcoxon rank-sum test were performed on F4, F6, and F9, with the alternative hypothesis that the median of the R1 samples is greater than the median of the R0 samples. Table 4 demonstrates the results of these right-tailed hypothesis tests.
Table 4 shows that the F4-values of R1 are larger than those of R0 at significance level p = 0.001, the F6-values of R1 are larger than those of R0 at p = 0.001, and the F9-values of R1 are larger than those of R0 at p = 0.011. In a wavelet transform, each coefficient accounts for an oscillation at a certain scale and frequency. The three features are discussed as follows:
As for feature F4: 1) the subband image expresses the component in which gray-levels change more and faster in the diagonal direction; 2) high gray-level run emphasis (HGRE) of the run-length matrix measures the distribution of higher gray-level values, with a higher value indicating a greater concentration of high gray-level values in an image; 3) in the diagonal component, a higher gray-level means a larger oscillation; 4) the test result for F4 in Table 4 indicates that points with larger oscillations appear more continuously in ROIs of R1 than in ROIs of R0; this should be associated with the fact that the ROIs of R1 contain normal pancreatic tissue and cancer tissue, while the ROIs of R0 contain only normal pancreatic tissue.
As for feature F6, it is similar to F4. Short-run high gray-level emphasis (SRHGE) supplements HGRE, indicating that points with larger oscillations (fine texture) appear more continuously.
As for feature F9: 1) the meaning of the diagonal subband image has been explained above; 2) the cubic moment of the histogram measures skewness, and higher skewness means a greater degree of asymmetry; 3) the test result for F9 in Table 4 indicates that the degree of asymmetry in R1 is greater than that in R0; this should again be associated with the fact that the ROIs of R1 contain normal pancreatic tissue and cancer tissue, while the ROIs of R0 contain only normal pancreatic tissue; because R0 has only normal pancreatic tissue, the structural changes in the diagonal component are relatively more uniform and more symmetric.
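The run-length features discussed above can be computed as sketched below, assuming horizontal runs and 0-based gray levels for brevity (standard GLRLM definitions index gray levels from 1 and average over four directions, so absolute values differ):

```python
import numpy as np


def run_length_matrix(img, levels):
    """Gray-level run-length matrix along the horizontal direction:
    rlm[i, j-1] counts runs of gray level i with length j."""
    rlm = np.zeros((levels, img.shape[1]))
    for row in img:
        run_val, run_len = row[0], 1
        for px in row[1:]:
            if px == run_val:
                run_len += 1
            else:
                rlm[run_val, run_len - 1] += 1
                run_val, run_len = px, 1
        rlm[run_val, run_len - 1] += 1  # close the last run of the row
    return rlm


def hgre_srhge(rlm):
    """High gray-level run emphasis and short-run high gray-level
    emphasis, both normalised by the total number of runs."""
    i = np.arange(rlm.shape[0])[:, None]          # gray levels (0-based)
    j = np.arange(1, rlm.shape[1] + 1)[None, :]   # run lengths
    n_runs = rlm.sum()
    hgre = (rlm * i ** 2).sum() / n_runs
    srhge = (rlm * i ** 2 / j ** 2).sum() / n_runs
    return hgre, srhge


# Toy subband image: two high-gray-level runs and one low-gray-level run.
img = np.array([[0, 0, 3], [3, 3, 3]])
rlm = run_length_matrix(img, levels=4)
hgre, srhge = hgre_srhge(rlm)
```

HGRE weights every run by the square of its gray level, while SRHGE additionally divides by the squared run length, so SRHGE is dominated by short bright runs, matching the "fine texture" interpretation used for F6.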
As analyzed above, it can be inferred that there are associations between histopathological characteristics and texture features for R0 and R1. The texture features with statistical differences are markers associated with discriminating between R0 and R1, and they may potentially enhance physicians' ability to differentially diagnose R0 and R1. This is valuable for future radiomic studies of efficacy evaluation and prognosis. In addition, the classification results indicate that the CAD system can play an important guiding role in the differential diagnosis of R0 and R1.
This study has some limitations. First, it was a retrospective study in a single institution; the patient population and imaging methods were basically homogeneous and selection bias may exist, making it difficult to generalize the results to other institutions. Second, although the ROIs were fitted to rectangles, the pixel count of an ROI is still small, so better fitting methods are worth exploring. Third, sensitivity still needs to be improved.
By analyzing the histopathological characteristics of the two types of resection margins and some deficiencies of the ROIs, we designed a CAD system based on portal-venous CT images to identify whether a surgery achieved R0 or R1 resection. The experimental results indicate that the developed CAD system is effective for discriminating R0 from R1. The statistical differences in texture features between R0 and R1 elucidate that the histopathological characteristics of R0 and R1 can be represented by texture features of preoperative CT images, implying that texture features can potentially enhance physicians' diagnostic abilities. This research also contributes to future radiomics-based studies of R0 and R1 for efficacy assessment and prognosis.
The study was approved by the Ethics Committee of West China Hospital of Sichuan University (ClinicalTrials.gov trial registration number: NCT02928081).
Not applicable.
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request, pending approval of the institution and the trial/study investigators who contributed to the dataset.
The authors declare that they have no competing interests.
This research work is supported by the Key Research Plan for State Commission of Science Technology of China under Grant No. 2018YFC0807501.
(1) Substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data: (a) substantial contributions to conception and design: Bei Hui, Jia-Jun Qiu, and Jin-Heng Liu; (b) acquisition of data: Jin-Heng Liu, Neng-Wen Ke; (c) analysis and interpretation of data: Jia-Jun Qiu.
(2) Drafting the article or revising it critically for important intellectual content: Bei Hui, Jia-Jun Qiu
(3) Final approval of the version to be published: all
(4) Agreement to be accountable for all aspects of the work: Bei Hui
Not applicable.
[1]. Marusyk A, Polyak K. Tumor heterogeneity: causes and consequences. Biochimica et Biophysica Acta - Reviews on Cancer 2010; 1805(1): 105-117
[2]. Aerts HJ, Velazquez ER, Leijenaar RT, Parmar C, Grossmann P, Carvalho S, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature communications 2014; 5: 4006
[3]. Lambin P, Rios-Velazquez E, Leijenaar R, Carvalho S, Van Stiphout RG, Granton P, et al. Radiomics: extracting more information from medical images using advanced feature analysis. European journal of cancer 2012; 48(4): 441-446
[4]. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data. Radiology 2015; 278(2): 563-577
[5]. Cassinotto C, Chong J, Zogopoulos G, Reinhold C, Chiche L, Lafourcade JP, et al. Resectable pancreatic adenocarcinoma: role of CT quantitative imaging biomarkers for predicting pathology and patient outcomes. European journal of radiology 2017; 90: 152-158
[6]. Eilaghi A, Baig S, Zhang Y, Zhang J, Karanicolas P, Gallinger S, et al. CT texture features are associated with overall survival in pancreatic ductal adenocarcinoma–a quantitative analysis. BMC medical imaging 2017; 17(1): 38
[7]. Chakraborty J, Langdon-Embry L, Cunanan KM, Escalon JG, Allen PG, Lowery MA, et al. Preliminary study of tumor heterogeneity in imaging predicts two year survival in pancreatic cancer patients. PloS one 2017; 12(12): e0188022
[8]. Canellas R, Burk KS, Parakh A, Sahani DV. Prediction of pancreatic neuroendocrine tumor grade based on CT features and texture analysis. American Journal of Roentgenology 2018; 210(2): 341-346
[9]. Qiu JJ, Wu Y, Hui B, Huang ZX, Chen J. Texture Analysis of Computed Tomography Images in the Classification of Pancreatic Cancer and Normal Pancreas: A Feasibility Study. Journal of Medical Imaging and Health Informatics 2018; 8(8): 1539-1545
[10]. Cheng SH, Cheng YJ, Jin ZY, Xue HD. Unresectable pancreatic ductal adenocarcinoma: Role of CT quantitative imaging biomarkers for predicting outcomes of patients treated with chemotherapy. European Journal of Radiology 2019; 113: 188-197
[11]. Verbeke CS, Menon KV. Redefining resection margin status in pancreatic cancer. Hpb 2009; 11(4): 282-289
[12]. Leahy C, O’Brien A, Dainty C. Illumination correction of retinal images using Laplace interpolation. Applied optics 2012; 51(35): 8383-8389
[13]. Shi Z, Osher S, Zhu W. Weighted graph laplacian and image inpainting. Journal Of Scientific Computing 2016; 577
[14]. Hoeltgen L, Kleefeld A, Harris I, Breuß M. Theoretical foundation of the weighted laplace inpainting problem. Applications of Mathematics 2019; 64(3): 281-300
[15]. Freelove R, Walling AD. Pancreatic cancer: diagnosis and management. Am Fam Physician 2006; 73(3): 485-92
[16]. Kamisawa T, Wood LD, Itoi T, Takaori K. Pancreatic cancer. The Lancet 2016; 388(10039): 73-85
[17]. Jalab HA, Ibrahim RW. Texture enhancement for medical images based on fractional differential masks. Discrete dynamics in nature and society 2013; 2013
[18]. Li B, Xie W. Adaptive fractional differential approach and its application to medical image enhancement. Computers & Electrical Engineering 2015; 45: 324-335
[19]. Wang L, Peng J, Cheng X, Dai E. Ct and MRI Image Diagnosis of Cystic Renal Cell Carcinoma Based on a Fractional-order Differential Texture Enhancement Algorithm. Journal of Medical Imaging and Health Informatics 2019; 9(5): 917-923
[20]. Oliveira EC, Machado JA. A review of definitions for fractional derivatives and integral. Mathematical Problems in Engineering 2014; Article ID 238459
[21]. Qiu JJ, Wu Y, Hui B, Liu YB. Fractional differential algorithm based on wavelet transform applied on texture enhancement of liver tumor in CT image. Journal of Computer Applications 2019; 39(4): 1196-1200
[22]. Mallat SG. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis & Machine Intelligence 1989; (7): 674-693
[23]. Litjens G , Kooi T , Bejnordi BE , Setio AAA. A survey on deep learning in medical image analysis. Medical Image Analysis 2017; 42:60-88
[24]. Zhao ZQ , Zheng P , Xu ST , Wu X. Object Detection with Deep Learning: A Review. IEEE transactions on neural networks and learning systems 2019; arXiv preprint arXiv:1807.05511
[25]. Cortes C, Vapnik V. Support-vector networks. Machine learning 1995; 20(3): 273-297
[26]. Ng AY. Preventing "Overfitting" of Cross-Validation Data. Fourteenth International Conference on Machine Learning. Nashville, 1997: 245-253
[27]. Davnall F, Yip CSP, Ljungqvist G, Selmi M, Ng F, Sanghera B, et al. Assessment of tumor heterogeneity: an emerging imaging tool for clinical practice?. Insights into imaging 2012; 3(6): 573-589
[28]. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, De Jong EE, Van Timmeren J, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nature Reviews Clinical Oncology 2017; 14(12): 749
Table 1. Description of experimental methods and their references

| Abbreviation | Description | References |
|---|---|---|
| GH | Gray-level histogram. Feature names: mean; standard deviation; smoothness; cubic moment; uniformity; entropy; fourth moment | [4], [7], [9] |
| GLCM | Gray-level co-occurrence matrix. Feature names: autocorrelation; cluster prominence; cluster shade; contrast; correlation; difference entropy; difference variance; dissimilarity; energy; entropy; homogeneity (inverse difference moment); information measure of correlation 1; information measure of correlation 2; inverse difference (homogeneity in MATLAB); maximum probability; sum average; sum entropy; sum of squares (variance); sum variance; Renyi entropy; Tsallis entropy | [4], [6], [7], [9] |
| GLRLM | Gray-level run-length matrix. Feature names: short run emphasis; long run emphasis; gray-level non-uniformity; run length non-uniformity; run percentage; low gray-level run emphasis; high gray-level run emphasis; short run low gray-level emphasis; short run high gray-level emphasis; long run low gray-level emphasis; long run high gray-level emphasis | [4], [7] |
| WT | Wavelet transform. Feature names: mean; variance; energy | [4], [9] |
| WT-HCR | Wavelet transform combining GH, GLCM, and GLRLM. Feature names: refer to GH, GLCM, and GLRLM | |
| LOG-GH | Laplacian of Gaussian filter combining histogram. Feature names: refer to GH | [5], [8], [10] |
| ACM-D | Angle co-occurrence matrix: direction gradient matrix based on the Sobel operator combining co-occurrence matrix. Feature names: refer to GLCM | [7] |
| ACM-M | Angle co-occurrence matrix: magnitude gradient matrix based on the Sobel operator combining co-occurrence matrix. Feature names: refer to GLCM | [7] |
| CTM | Combined texture method (all texture features, including GH, GLCM, GLRLM, WT, WT-HCR, LOG-GH, ACM-D, and ACM-M) | |
| LD-WF | The method designed in this study. Feature names: refer to GH, GLCM (five representative features are used: contrast; correlation; energy; homogeneity; entropy), and GLRLM | |
Table 2. Classification results

| Method | TP | TN | FN | FP | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|---|---|---|---|
| GH | 15 | 31 | 19 | 21 | 53.49% | 44.12% | 59.62% | 0.4842 |
| GLCM | 21 | 27 | 13 | 25 | 55.81% | 61.76% | 51.92% | 0.6010 |
| GLRLM | 20 | 24 | 14 | 28 | 51.16% | 58.82% | 46.15% | 0.4938 |
| WT | 20 | 35 | 14 | 17 | 63.95% | 58.82% | 67.31% | 0.6711 |
| WT-HCR | 22 | 31 | 12 | 21 | 61.63% | 64.71% | 59.62% | 0.6309 |
| LOG-GH | 21 | 41 | 13 | 11 | 72.09% | 61.76% | 78.85% | 0.6861 |
| ACM-D | 22 | 21 | 12 | 31 | 50.00% | 64.71% | 40.38% | 0.4531 |
| ACM-M | 21 | 30 | 13 | 22 | 59.30% | 61.76% | 57.69% | 0.6267 |
| CTM | 23 | 35 | 11 | 17 | 67.44% | 67.65% | 67.31% | 0.7130 |
| LD-WF | 26 | 47 | 8 | 5 | 84.88% | 76.47% | 90.38% | 0.8641 |
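The derived rates in Table 2 follow directly from the confusion-matrix counts. A minimal sketch of the computation, assuming R1 margins are treated as the positive class (34 R1 vs. 52 R0 cases):

```python
def margin_metrics(tp, tn, fn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    total = tp + tn + fn + fp
    return {
        "accuracy": (tp + tn) / total,    # fraction of all cases classified correctly
        "sensitivity": tp / (tp + fn),    # true-positive rate (R1 margins detected)
        "specificity": tn / (tn + fp),    # true-negative rate (R0 margins detected)
    }

# LD-WF row of Table 2
ldwf = margin_metrics(tp=26, tn=47, fn=8, fp=5)
```

Reproducing the LD-WF row gives 84.88% accuracy, 76.47% sensitivity, and 90.38% specificity, matching Table 2; the AUC additionally requires the classifier's per-case decision values, which the table does not list.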
Table 3. Mann-Whitney U-test results

| Number | Feature name | Texture matrix | Subband | Slice | p |
|---|---|---|---|---|---|
| F1 | run length non-uniformity | run-length matrix | horizontal | top | 0.045 |
| F2 | energy | co-occurrence matrix (d=1) | diagonal | middle | 0.032 |
| F3 | energy | co-occurrence matrix (d=2) | diagonal | middle | 0.032 |
| F4 | high gray-level run emphasis | run-length matrix | diagonal | middle | 0.002 |
| F5 | short run low gray-level emphasis | run-length matrix | diagonal | middle | 0.045 |
| F6 | short run high gray-level emphasis | run-length matrix | diagonal | middle | 0.002 |
| F7 | standard deviation | histogram | diagonal | bottom | 0.026 |
| F8 | smoothness | histogram | diagonal | bottom | 0.026 |
| F9 | cubic moment | histogram | diagonal | bottom | 0.021 |
| F10 | fourth moment | histogram | diagonal | bottom | 0.036 |
| F11 | correlation | co-occurrence matrix (d=2) | diagonal | bottom | 0.029 |
| F12 | long run emphasis | run-length matrix | diagonal | bottom | 0.026 |
| F13 | long run low gray-level emphasis | run-length matrix | diagonal | bottom | 0.025 |
| F14 | long run high gray-level emphasis | run-length matrix | diagonal | bottom | 0.026 |
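The Mann-Whitney U-test behind Table 3 compares each feature's values across the R0 and R1 groups without assuming normality. A minimal sketch using the normal approximation (reasonable at sample sizes like 34 vs. 52; no tie correction, and the example feature values below are hypothetical):

```python
import math
from itertools import product

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U-test via the normal approximation (no tie correction)."""
    n1, n2 = len(x), len(y)
    # U counts, over all pairs, how often an x-value exceeds a y-value (ties count 1/2)
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in product(x, y))
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p_two_sided = math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
    return u, p_two_sided

# Hypothetical feature values for two margin groups
u, p = mann_whitney_u([5.1, 6.0, 7.2, 8.3], [1.0, 2.4, 3.1, 4.5])
```

Under the same approximation, a right-tailed p-value (as reported in Table 4) would instead be `0.5 * math.erfc(z / math.sqrt(2.0))`.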
Table 4. Results of right-tailed hypothesis tests

| Feature name | p |
|---|---|
| F4: high gray-level run emphasis (HGRE), run-length matrix, diagonal subband image, middle slice | 0.00 |
| F6: short run high gray-level emphasis (SRHGE), run-length matrix, diagonal subband image, middle slice | 0.001 |
| F9: cubic moment, histogram, diagonal subband image, bottom slice | 0.011 |