Kernel-Based Fuzzy Local Information Clustering Algorithm Self-integrating Non-Local Information

The application of fuzzy clustering algorithms to image segmentation is an active research topic. Existing fuzzy clustering algorithms suffer from three problems: (1) the parameters of spatial information constraints cannot be selected adaptively; (2) images corrupted by strong noise cannot be segmented effectively; (3) it is difficult to balance noise removal and detail preservation. In optimization-model-based fuzzy clustering, the choice of distance metric is crucial: the Euclidean distance is sensitive to outliers and noise, making satisfactory segmentation results difficult to obtain and degrading clustering performance. This paper proposes a kernel-based fuzzy local information clustering algorithm integrating non-local information (KFLNLI). The algorithm adopts a self-integration method to introduce local and non-local image information, which addresses the common problems of current clustering algorithms. First, the self-integration method solves the problem of selecting spatial constraint parameters: the weight coefficients are computed through continuous self-learning iterations. Second, the distance metric uses a Gaussian kernel-induced distance to further enhance robustness against noise and adaptivity to different images. Finally, both local and non-local information are introduced to achieve a segmentation that eliminates most of the noise while retaining the original details of the image. Experimental results show that the algorithm outperforms existing state-of-the-art fuzzy clustering algorithms in the presence of strong noise.


Introduction
Cluster analysis, or clustering for short, is the process of subdividing data objects into subsets, each subset forming a cluster. Items in the same cluster have high similarity, while objects in different clusters are relatively divergent. Clustering has become one of the important techniques of data analysis and data mining, widely used in text classification, biometric recognition, image segmentation and other fields [1]. Among these applications, clustering-based image segmentation is very popular, and a large number of algorithms and applications have been proposed in medicine [2], remote sensing [3], and intelligent transportation [4]. Popular clustering algorithms include hierarchical clustering [5], spectral clustering [6], fuzzy clustering [7], affinity propagation clustering [8], density peak clustering [9], and so on. Among them, using fuzzy clustering theory to solve the ill-posed image segmentation problem has become a hot topic. In this context, this paper proposes a fuzzy clustering algorithm based on objective function optimization.
The image itself has uncertainty and complexity, and it is more or less disturbed by noise during acquisition and transmission, which degrades the segmentation performance of fuzzy clustering algorithms. Therefore, many scholars have studied how to segment images corrupted by noise of various types and intensities, that is, how to improve the robustness of clustering algorithms to noisy images and further improve the accuracy and efficiency of image segmentation.
The traditional fuzzy C-means (FCM) clustering algorithm [10,11] was first proposed by Dunn and later extended by Bezdek. FCM has been widely used in image segmentation, and it segments noise-free images well. However, the FCM algorithm has two problems: first, it is sensitive to noise [12]; second, the lack of local spatial information of the image leads to over-segmentation [13]. To solve these problems, scholars have proposed many improved fuzzy clustering algorithms, following two main strategies: one uses regularization methods to realize self-optimization of FCM, and the other incorporates local spatial information into the objective function of FCM to improve image segmentation [14]. Yang and Tsai et al. [15] extended FCM to the kernel space and obtained the kernel-based fuzzy C-means clustering algorithm (KFCM). However, FCM and KFCM do not consider the spatial neighborhood information of the current pixel, resulting in poor clustering of noisy images. Therefore, Ahmed et al. [16] proposed fuzzy C-means clustering with spatial constraints (FCM_S) by modifying the objective function of FCM to add the neighborhood information of each pixel. Although FCM_S can obtain a better segmentation effect, the neighborhood information must be recomputed in each iteration, which reduces its segmentation efficiency. In response, Chen and Zhang [17] replaced the neighborhood term with the neighborhood mean or median, proposed the robust FCM_S1 and FCM_S2 algorithms, which greatly shorten the execution time, and introduced the kernelized variants KFCM_S, KFCM_S1 and KFCM_S2 of FCM_S, FCM_S1 and FCM_S2. Subsequently, to further speed up execution, Szilágyi et al.
[18] proposed the enhanced FCM (EnFCM), which generates a filtered image from the original image and its neighborhood mean in advance and clusters based on its grayscale histogram. All of the above algorithms introduce a spatial parameter that is specified manually according to the situation. However, because the result is highly dependent on this parameter, and the type and intensity of the noise cannot be known in advance, it is difficult to determine the optimal parameter. In addition, Krinidis and Chatzis [19] proposed fuzzy local information C-means clustering (FLICM) with a certain adaptive ability, which integrates spatial information into a local fuzzy factor to balance sensitivity to noise and preservation of details. Gong, Zhou, and Ma extended FLICM to the kernel space and proposed the KWFLICM algorithm, which greatly improved robustness on noisy images [20]. Recently, Zhang, Sun [21] and others improved the FLICM algorithm and proposed an algorithm with local and non-local information (FLICMLNLI), which considers self-similarity and back projection to improve robustness. This algorithm still cannot balance anti-noise ability and detail retention, and its anti-noise ability leaves room for improvement. Aiming at the imperfections of existing fuzzy clustering algorithms [22][23][24][25][26][27], this paper further improves on this algorithm in order to enhance its anti-noise robustness and segmentation accuracy.
Based on the FLICMLNLI algorithm, this paper introduces a Gaussian kernel distance metric and uses iterative self-learning to assign weights to the current pixel and its neighborhood information, solving the problems of the FLICMLNLI algorithm and further improving its robustness. The innovations of this article are threefold: (1) it abandons the traditional assumption that a smaller distance between pixels always implies a stronger correlation, adopting a more reasonable correlation model to compute the correlation between pixels and using it to optimize the calculation of the fuzzy factor; (2) to solve the problem that existing algorithms cannot automatically select spatial parameters, the weights of the current pixel information and its local information in the distance measure are obtained through continuous iterative self-learning; (3) the distance metric is extended to the kernel space to further enhance the anti-noise robustness of the algorithm.
A series of experiments compares the proposed algorithm with other algorithms. The proposed algorithm better balances noise removal and detail retention, and its anti-noise robustness is greatly enhanced.
The rest of this article is organized as follows: the second section briefly describes the methods used in this article; the third section introduces the kernel function and analyzes related algorithms; the fourth section details the correlation calculation method, gives the self-learning weighted non-local information kernel distance metric, embeds it into the objective function, and derives the improved algorithm; the fifth section illustrates the superiority of the algorithm through a series of image tests; finally, the sixth section gives the main conclusions.

Methods
The aim, design and setting of the study.
The purpose of this paper is to further improve the performance of the fuzzy clustering algorithm, given that existing fuzzy algorithms are imperfect, in order to achieve a better image segmentation effect. The problems in image segmentation fall roughly into two categories: problems between pairs of pixels, and problems between pixels and cluster centers.
(1) A more reasonable correlation model is used to describe the relationship between pixels; (2) the self-integration method exploits the local and non-local information of the image, and the Euclidean distance is extended to the kernel space to construct a new distance measure; (3) (1) and (2) are fused to construct a new fuzzy factor.
Combined with the above three parts, the objective function of the optimization algorithm is constructed, and the design frame diagram is shown in Figure 1.
A certain number of parameters are introduced in the construction of the fuzzy clustering algorithm. The given number of image classes c varies between images. The other parameter values involved in this article are shown in Table 1. The optimization algorithm proposed in this paper introduces a new parameter σ; its selection is discussed in the Parameter selection section below.

Experimental data source
The compared algorithms are cited in the text and listed in the references. The images shown in the article are all taken from the BSDS300 and BSDS500 datasets and the MedPix database.
Brief description of the experiment process

First, after an extensive literature review, the optimization model of this paper was obtained by innovating on the basis of the FLICMLNLI algorithm, and the cluster center, membership matrix and weight calculation formulas were derived. Then the experimental code was written in MATLAB 2018, and suitable images were selected from the datasets for testing. Finally, experimental conclusions were drawn from the segmentation results and performance indices, and the experiments were analyzed and summarized.

Preliminary knowledge
Introduction to kernel method

The optimization algorithm in this paper extends the Euclidean distance to the kernel space, so this section introduces the kernel method and related kernel algorithms in detail. Kernel techniques have been widely used in image segmentation. To understand the kernel method precisely, the kernel function must first be defined: suppose H is the feature space and Φ is a mapping from the input space to H; then K is called a kernel function if it computes the inner product of Φ(x) and Φ(y):

K(x, y) = Φ(x) · Φ(y),  (1)

where Φ(x) · Φ(y) denotes the inner product of the images of x and y in the feature space. The kernel-induced distance then satisfies

||Φ(x) − Φ(y)||² = K(x, x) + K(y, y) − 2K(x, y).  (2)

Wang [28] elaborated some basic properties of the kernel function and gave a rigorous proof. These properties allow many new kernel functions to be constructed from basic ones. The Gaussian function is one of the most widely used kernel functions and is defined as

K(x, y) = exp(−||x − y||² / σ²),  (3)

where x and y are the ith and jth samples of the data set, and σ is the scale parameter of the Gaussian kernel. From (3), K(x, x) = 1 and K(y, y) = 1, so (2) simplifies to

||Φ(x) − Φ(y)||² = 2(1 − K(x, y)).  (4)

KFCM and related algorithms

Yang and Cai [29] combined fuzzy clustering with the Gaussian kernel function to obtain KFCM. The core of KFCM is to map the original samples to a high-dimensional feature space via a nonlinear function and then cluster the high-dimensional samples using the Gaussian kernel. The optimization model of the KFCM algorithm can be expressed as

J = 2 ∑_{i=1}^{n} ∑_{j=1}^{c} u_ij^m (1 − K(x_i, v_j)).  (5)

KFCM makes linearly inseparable samples linearly separable in the high-dimensional feature space, which overcomes the shortcoming of FCM on linearly inseparable samples. Ahmed et al. [30] proposed the FCM_S method by introducing local spatial constraints into FCM, which can obtain satisfactory results and reduce the influence of noise on the segmentation results.
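The Gaussian kernel and the simplified kernel-induced distance 2(1 − K(x, y)) described above can be sketched in a few lines. This is a minimal illustration; the function names are ours, not the paper's:

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian (RBF) kernel K(x, y) = exp(-|x - y|^2 / sigma^2)."""
    return np.exp(-np.abs(x - y) ** 2 / sigma ** 2)

def kernel_distance_sq(x, y, sigma):
    """Kernel-induced squared distance ||Phi(x) - Phi(y)||^2.
    Since K(x, x) = K(y, y) = 1, it simplifies to 2 * (1 - K(x, y))."""
    return 2.0 * (1.0 - gaussian_kernel(x, y, sigma))
```

Note that the kernel-induced distance is bounded in [0, 2), which is one source of the robustness to outliers: a pixel very far from a cluster center contributes a saturated, rather than unbounded, distance.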
The Gaussian kernel-induced distance was then used in place of the Euclidean distance to propose KFCM_S. The optimization model of the KFCM_S algorithm can be described as

J = ∑_{i=1}^{n} ∑_{j=1}^{c} u_ij^m (1 − K(x_i, v_j)) + (α / N_R) ∑_{i=1}^{n} ∑_{j=1}^{c} u_ij^m ∑_{r ∈ N_i} (1 − K(x_r, v_j)),  (6)

where N_i is the neighborhood of pixel i, N_R is its cardinality, and α controls the strength of the neighborhood term. Krinidis et al. proposed the FLICM algorithm by defining a constraint relationship between the membership degrees of the central pixel and its neighboring pixels, which effectively alleviates the sensitivity of traditional fuzzy clustering to noise and outliers.
Suppose X = {x_i | i = 1, 2, · · · , n} represents a grayscale image to be segmented, where x_i is the intensity of the ith pixel, i is the pixel index, and n is the total number of pixels. The optimization model of the FLICM algorithm is defined as

J = ∑_{i=1}^{n} ∑_{j=1}^{c} [ u_ij^m ||x_i − v_j||² + G_ij ],  (7)

where j is the category index, c is the number of clusters, u_ij is the fuzzy membership function indicating the degree to which the ith pixel belongs to the jth category, m is the fuzziness degree of the algorithm, v_j is the cluster center of the jth cluster, and ||x_i − v_j||² is the Euclidean distance between the ith pixel and the jth cluster center. The fuzzy factor is

G_ij = ∑_{i′ ∈ N_i, i′ ≠ i} (1 / (d_ii′ + 1)) (1 − u_i′j)^m ||x_i′ − v_j||²,  (8)

where N_i is the neighborhood of pixel i, i′ is the neighborhood pixel index, and d_ii′ is the spatial distance between i and i′. Gong et al. extended it to the kernel space and proposed the KWFLICM algorithm, whose fuzzy factor is defined as

G′_ij = ∑_{r ∈ N_i, r ≠ i} w_ri (1 − u_rj)^m (1 − K(x_r, v_j)),  (9)

where w_ri is the weight of the pixel x_r relative to the central pixel x_i in the neighborhood N_i, computed as w_ri = w_sc × w_gc. Here w_sc describes the spatial constraint between the central pixel and its neighboring pixels and depends on the Euclidean distance d_ri between pixels x_i and x_r, while w_gc is the intensity constraint, which depends on C_r, the local variation coefficient, and C̄_r, its average value in the local window. The detailed calculation process is given by Gong et al.
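The FLICM fuzzy factor above can be computed pixel by pixel. The following is a minimal single-pixel sketch, assuming a 2-D grayscale array `img` and a membership array `u` of shape (H, W, c); the function name is ours:

```python
import numpy as np

def flicm_fuzzy_factor(img, u, v, i_row, i_col, j, m=2, win=1):
    """Fuzzy factor G_ij of FLICM for pixel (i_row, i_col) and cluster j:
    G_ij = sum over neighbors r of (1/(d_ir + 1)) * (1 - u_rj)^m * (x_r - v_j)^2,
    where d_ir is the spatial distance between pixel i and neighbor r."""
    H, W = img.shape
    g = 0.0
    for dr in range(-win, win + 1):
        for dc in range(-win, win + 1):
            if dr == 0 and dc == 0:
                continue  # the central pixel itself is excluded
            r, c = i_row + dr, i_col + dc
            if 0 <= r < H and 0 <= c < W:
                d_ir = np.hypot(dr, dc)  # spatial distance to the neighbor
                g += (1.0 / (d_ir + 1.0)) * (1.0 - u[r, c, j]) ** m \
                     * (img[r, c] - v[j]) ** 2
    return g
```

The factor vanishes when the neighborhood already agrees with cluster j (intensities equal to v_j), and grows when neighbors disagree, which is what drags noisy pixels toward the label of their neighborhood.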

Robust fuzzy algorithm for fusion of local and non-local information
Zhang, Sun et al. considered both self-similarity and back projection to improve robustness: self-similarity makes full use of non-local information, while back projection retains the original information. On this basis, the FLICMLNLI algorithm was proposed. In the FLICMLNLI algorithm, the cluster center v_j becomes a vector (v_j, v_{N_j}), and the fuzzy factor is corrected accordingly. The corrected fuzzy factor ensures that the central pixel and similar pixels (not all pixels) in the search window are classified into the same category. By minimizing the objective function, the update formulas of the fuzzy membership matrix and of the cluster centers are obtained. Compared with FLICM, FLICMLNLI considers more similar pixels to significantly increase noise insensitivity, while the original information is used to preserve image details.
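Minimizing an FLICM-style objective yields the standard closed-form membership update u_ij = 1 / ∑_k ((d_ij² + G_ij) / (d_ik² + G_ik))^(1/(m−1)). A vectorized sketch, with generic distance and fuzzy-factor arrays standing in for the exact FLICMLNLI terms:

```python
import numpy as np

def update_membership(dist_sq, G, m=2):
    """Membership update for an FLICM-style objective.
    dist_sq and G have shape (n_pixels, c); returns u of the same shape
    with each row summing to 1."""
    D = dist_sq + G                         # combined per-cluster cost
    ratio = D[:, :, None] / D[:, None, :]   # (n, c, c): D_ij / D_ik
    return 1.0 / np.sum(ratio ** (1.0 / (m - 1)), axis=2)
```

With m = 2 this reduces to the familiar inverse-cost normalization: the lower the combined cost of cluster j for a pixel, the higher its membership.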

Proposed algorithm
The proposed algorithm introduces local and non-local information simultaneously and extends the FLICMLNLI algorithm to the kernel space. The pixels in the image and their local and non-local information are illustrated in Figure 2.

Correlation analysis
A smaller distance between pixels does not always imply a higher correlation. To address this, a new pixel correlation model is adopted that considers adjacent image patches with different weights. This more reasonable correlation model is detailed in Algorithm 1.

Algorithm 1: Correlation model calculation process
Input: the image I and two parameters α, γ.
Output: relevance between the central pixel p and the pixels in the search window.
Step 1: for any pixel p in the image, construct the image patch X_p.
Step 2: compute the difference between corresponding patches in different directions: d_p(q) = (1/|N_p|) · |X_p − X_q|, where N_p is the set of neighboring pixels with cardinality |N_p|.
Step 3: compute the weights in different directions.
Step 4: compute the weighted distance in different directions, where ⊗ is the dot product of two vectors.
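Algorithm 1 can be sketched as below. The extracted text omits the exact weight formula of Step 3, so an exponential decay parameterized by α and γ is assumed here purely for illustration; the helper names are ours:

```python
import numpy as np

def patch_differences(img, p, q_list, patch=1):
    """Step 2: mean absolute difference d_p(q) between the patch at p
    and the patch at each q (the directions in the search window)."""
    def get_patch(center):
        r, c = center
        return img[r - patch:r + patch + 1, c - patch:c + patch + 1]
    Xp = get_patch(p)
    return np.array([np.mean(np.abs(Xp - get_patch(q))) for q in q_list])

def correlation(img, p, q_list, alpha=10.0, gamma=2.0, patch=1):
    """Steps 3-4 (hypothetical weighting, since the paper's exact formula
    is omitted): weights decay exponentially with the patch difference,
    and the relevance is the weighted patch distance per direction."""
    d = patch_differences(img, p, q_list, patch)
    w = np.exp(-d ** gamma / alpha)   # assumed form, parameterized by alpha, gamma
    w = w / w.sum()                   # normalize the directional weights
    return w * d                      # elementwise weighted distance
```

On a uniform region every patch difference is zero, so the relevance is zero in all directions; near an edge the weights suppress directions whose patches differ strongly from the central patch.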

Optimization algorithm modeling
Combining the maximum-entropy fuzzy clustering and sample attribute weighting methods proposed by Xinmin Tao [31] and Vikas Singh [32], the optimization model of the proposed algorithm is constructed, subject to the usual membership constraints. The distance is extended to the kernel space, and different weight values are assigned to the current pixel and its neighborhood information; the fuzzy factor is optimized accordingly. Using the Lagrange multiplier rule, the Lagrange function is formed, and minimization proceeds by alternate optimization as follows:
• Minimize J with respect to ω: taking the partial derivative with respect to ω_1 and setting it to zero gives the update for ω_1; similarly, the partial derivative with respect to ω_2 gives the update for ω_2. Under the constraint ω_1 + ω_2 = 1, the entropy regularization term ∑_{r=1}^{2} ω_r ln ω_r attains its extremum.
• Minimize J with respect to v_j: setting ∂J/∂v_j = 0 and applying the iterative method gives the cluster center update.
• Minimize J with respect to v^N_j: setting ∂J/∂v^N_j = 0 and applying the iterative method gives the update of the non-local cluster center.
• Minimize J with respect to u_ij: setting ∂J/∂u_ij = 0 and applying the iterative method gives the membership update.
The calculation process of the algorithm is shown in Figure 4.
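The closed form of an entropy-regularized weight update with ω_1 + ω_2 = 1 is a softmax over the per-term energies. A generic sketch, where `energies` stands in for the data terms of the objective (not the paper's exact expressions):

```python
import numpy as np

def update_weights(energies, lam=1.0):
    """Entropy-regularized weight update: minimizing
    sum_r w_r * E_r + lam * sum_r w_r * ln(w_r), subject to sum_r w_r = 1,
    gives the closed form w_r = exp(-E_r/lam) / sum_s exp(-E_s/lam)."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-(e - e.min()) / lam)  # shift by the minimum for numerical stability
    return w / w.sum()
```

The self-learning behavior follows directly: the information source (current pixel vs. neighborhood) that currently yields the lower energy receives the higher weight at the next iteration, with λ controlling how sharply the weights concentrate.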
Compute the improved distance between pixels and cluster centers, and calculate the corrected fuzzy factor G̃_ij(t + 1) with Eq. (21).
(1) The objective function can be described as L(U). Here ω_1 and ω_2 are constants. If u* is the minimum point of L(U), then ∂L(u*)/∂u_ij = 0, that is,

∂L(u*)/∂u_ij = m u_ij^{m−1} (d²_ij + G̃_ij) − λ_i = 0,

from which u_ij can be obtained. The Hessian matrix of L(u) at u = u* has diagonal elements greater than 0 and off-diagonal elements equal to 0, so it is positive definite and u* is a local minimum point of L(U).
(2) The objective function can be described as L(V). Here ω_1 and ω_2 are constants. If v* is the minimum point of L(V), the Hessian matrix of L(v) at v = v* likewise has positive diagonal elements and zero off-diagonal elements, so it is positive definite and v* is a local minimum point of L(V).
(3) The objective function can be described as L(V^N). The Hessian matrix of L(v^N) at v^N = v^{N*} has positive diagonal elements and zero off-diagonal elements, so it is positive definite and v^{N*}_j is a local minimum point of L(V^N).

The experimental process

Evaluation index
The segmentation effect of a clustering algorithm can be judged intuitively from the image, but it also needs to be assessed with quantitative indicators. This article selects several evaluation indicators that illustrate the performance of the optimization algorithm from different angles.
• Partition coefficient V_PC and partition entropy V_PE [33]:

V_PC = (1/n) ∑_{i=1}^{n} ∑_{j=1}^{c} u_ij²,   V_PE = −(1/n) ∑_{i=1}^{n} ∑_{j=1}^{c} u_ij ln u_ij.

The segmentation result with the least fuzziness is the best, so the larger V_PC and the smaller V_PE of the segmented image, the better the segmentation effect.
• Accuracy (ACC.), Sensitivity (Sen.), Specificity (Spe.) and Precision (Pre.).
[34] Here TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives:

ACC = (TP + TN) / (TP + TN + FP + FN),  Sen = TP / (TP + FN),  Spe = TN / (TN + FP),  Pre = TP / (TP + FP).

Accuracy is the ratio of correctly classified pixels to all pixels; sensitivity and specificity reveal the probability of correctly classifying positive and negative pixels. The values of these four measures lie between 0 and 1, and the greater the accuracy, sensitivity, specificity and precision, the better the segmentation effect.
• Misclassification error (ME) [35] is a quantitative evaluation index that objectively describes the effectiveness and robustness of the algorithm.
Here A_i represents the set of pixels belonging to the ith category in the ideal segmented image, B_j the set of pixels belonging to the jth category in the segmented image produced by the algorithm, and c the number of classes. The smaller the ME value, the closer the segmentation result is to the ideal segmentation and the better the segmentation performance, and vice versa.
• The Jaccard score (JS) [37] can also be used to compare the performance of the algorithms.

JS = |X_i ∩ Y_i| / |X_i ∪ Y_i|,

where X_i denotes the pixels belonging to the manually segmented image and Y_i the pixels belonging to the experimentally segmented image. The larger the JS, the better the segmentation effect.
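Two of the indices above, ACC and JS, can be computed in a few lines over label arrays. A minimal sketch; the helper names are ours:

```python
import numpy as np

def accuracy(seg, gt):
    """ACC: fraction of pixels assigned the correct label."""
    return np.mean(seg == gt)

def jaccard(seg, gt, label):
    """Jaccard score JS = |X ∩ Y| / |X ∪ Y| for one class label."""
    X, Y = (seg == label), (gt == label)
    inter = np.logical_and(X, Y).sum()
    union = np.logical_or(X, Y).sum()
    return inter / union if union else 1.0
```

For multi-class segmentations the per-class Jaccard scores are typically reported individually or averaged across the c classes.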

Parameter selection
This section focuses on the newly introduced parameter σ. The performance of the proposed algorithm depends strongly on the selection of this parameter. Taking a simple synthetic image as an example, the segmentation effects for different σ values are shown in Figure 5. Two representative performance indicators (ACC. and ME) are selected; their curves as functions of σ under different types of noise are shown in Figure 6. The first row in the figure is a synthetic image contaminated by Gaussian noise with a normalized variance of 0.5; the second row, by speckle noise with a normalized variance of 0.3; the third row, by Rician noise with a standard deviation of 100. The segmentation results differ greatly under different σ values, so the selection of this parameter has a great influence on the results and choosing an appropriate value is very important. The curves in Figure 6 further show that when σ takes an appropriate value the indices reach their best values and small changes in σ have little effect on the segmentation; that is, an optimal σ value needs to be found.

Results and Discussion
The first row in Figure 7 shows the results of each algorithm on a synthetic image (c = 4) polluted by Gaussian noise with a normalized variance of 0.3, with the parameter of the proposed algorithm σ = 300; the second row, a natural image polluted by Gaussian noise with a normalized variance of 0.1, with σ = 100; the third row, a medical image contaminated by Gaussian noise with a normalized variance of 0.1, with σ = 300. The index comparison of the algorithms is shown in Table 2.
The first row in Figure 8 shows the results of each algorithm on a synthetic image (c = 3) contaminated by 50% salt & pepper noise, with σ = 300; the second row, a medical image with 30% salt & pepper noise, with σ = 300. The index comparison is shown in Table 3.
The first row in Figure 10 shows the results of each algorithm on a natural image contaminated by speckle noise with a normalized variance of 0.1, with σ = 300; the second row, a sensing image contaminated by speckle noise with a normalized variance of 0.1, with σ = 300. The index comparison is shown in Table 5.
By varying the noise parameters, the curve of the performance index (ACC.) of each algorithm as a function of noise level can be obtained. The ACC curves under different noises (Gaussian noise, salt & pepper noise, speckle noise, and Rician noise) are shown in Figure 12.
Several state-of-the-art algorithms are selected for comparison with the algorithm proposed in this article: the KWFLICM algorithm [38], the APFCM algorithm [39], the TFLICM algorithm [40], and the FALRCM algorithm and its fast variant [41]. The comparison results are shown in Figure 13, and the index comparison of the algorithms in Table 7.
In Figure 13, the first row compares the segmentation results of each algorithm on a medical image contaminated by Gaussian noise with a normalized variance of 0.1; the second row, a natural image contaminated by salt & pepper noise with an intensity of 30%; the third row, a remote sensing image contaminated by mixed noise (Gaussian noise with a normalized variance of 0.05 and salt & pepper noise with an intensity of 10%); the fourth row, a medical image contaminated by speckle noise with a normalized variance of 0.3; the fifth row, a medical image contaminated by Rician noise with a standard deviation of 80.
In this section, by comparing the segmented images and the segmentation performance indicators across different types of images, we find that the KFLNLI algorithm has good noise robustness. Compared with other algorithms with strong anti-noise performance, the proposed algorithm retains most of the image details, is closer to the ideal segmentation, and achieves better index values than the other algorithms.
On the basis of existing fuzzy clustering algorithms, the algorithm proposed in this paper further improves anti-noise performance and balances noise removal with detail preservation. However, the better performance comes at the expense of time cost. In addition, in this work the newly introduced parameter σ was determined through a large number of experiments and is not yet selected automatically.

Conclusion
The optimization algorithm proposed in this paper improves on existing algorithms by optimizing the distance measurement and the fuzzy factor calculation in the clustering algorithm, so that the resulting algorithm has good anti-noise robustness and retains the details of the original image. Extending the distance measure to the kernel space greatly increases the anti-noise performance of the algorithm; assigning different weights to the current pixel information and its neighborhood information when calculating the distance ensures the accuracy and rationality of the calculation; and the self-integration method iteratively obtains the corresponding weights, solving the problem that current algorithms cannot automatically select the spatial parameters. The experimental results show that the optimization algorithm achieves a better segmentation effect on every type of image tested, with significantly enhanced anti-noise robustness.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate
(1) This material has not been published in whole or in part elsewhere; (2) The manuscript is not currently being considered for publication in another journal; (3) All authors have been personally and actively involved in substantive work.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
CW provided the principle and implementation method of the algorithm, and completed the algorithm design. XG and QS are responsible for writing algorithm programs. XT provides the framework and layout of the entire article.

Authors' information
His research direction is signal and information processing technology in modern communication.