Research on the Robustness of Geometric Algebra Based Adaptive Filtering Algorithms in Non-Gaussian Environments

In this paper, two new geometric algebra (GA) based adaptive filtering algorithms for non-Gaussian environments are proposed. They are derived, with the help of GA theory, from robust algorithms based on the minimum error entropy (MEE) criterion and on the joint criterion combining MEE with the mean square error (MSE). Experiments validate the effectiveness and superiority of the resulting GA-MEE and GA-MSEMEE algorithms in α-stable noise environments. At the same time, the GA-MSEMEE algorithm converges faster than GA-MEE.


Introduction
Adaptive filtering algorithms play an important role in signal processing, with applications such as adaptive beamforming [1], acoustic echo cancellation [2][3][4], noise cancellation [5] and channel equalization [6]. Among the design criteria, the mean square error (MSE) has been the typical criterion for adaptive filtering algorithms. Owing to its simple structure and rapid convergence, the LMS algorithm based on the MSE, proposed by Widrow and Hoff, has been applied in many fields [7][8][9]. Nevertheless, the performance of the LMS algorithm is not optimal: one problem is that the algorithm is sensitive to the scaling of the input signal; the other is the trade-off between step size and steady-state error. Subsequently, the NLMS algorithm was put forward to solve these problems by normalizing by the power of the input signal [10]. However, when signals are disturbed by abnormal values such as impulsive noise, the performance of LMS-type algorithms degrades seriously. Therefore, several robust criteria have been proposed and successfully applied to adaptive filtering to deal with adaptive signal problems under impulsive noise, such as adaptive wireless channel tracking [11] and blind source separation [12]. Typical robust criteria include the maximum correntropy criterion (MCC) [13][14][15], minimum error entropy (MEE) [16][17][18], the generalized MCC [19] and the minimum kernel risk-sensitive loss criterion [20]. They are insensitive to large outliers, which improves performance under impulsive noise. In this regard, J. C. Príncipe and his team proposed using the Renyi entropy of the error signal instead of the MSE; according to [21], minimum error entropy can achieve a better error distribution. Although the MEE criterion can obtain high accuracy, it does not take the mean of the error into account, whereas the characteristics of MSE are just the opposite. In this regard, B. Chen et al. [21] put forward a joint criterion that builds a connection between MSE and MEE by adding a weight.
In addition, recent studies have shown that the MEE criterion is superior to the MCC criterion and can be used for adaptive filtering [18] and Kalman filtering [17]. Therefore, G. Wang et al. [22] proposed recursive MEE, which has strong robustness under impulsive noise. However, the current adaptive filtering algorithms based on the MEE criterion and on the joint MSE-MEE criterion can only be used for one-dimensional signal processing. It is worth noting that, combined with geometric algebra, these algorithms can be extended to higher dimensions, so that the correlations among the dimensions can be considered when analyzing problems and the performance of the algorithms can be effectively improved.
Geometric algebra (GA) provides an effective computing framework for multi-dimensional signal processing [23,24]. GA has a wide range of applications, such as image processing, multi-dimensional signal processing [25][26][27][28] and computer vision [29,30]. Within this framework, some scholars have proposed GA adaptive filtering algorithms. Lopes et al. [31] devised the GA-LMS algorithm, analyzed its feasibility, and used it for point-cloud registration through rotation estimation. After that, Al-Nuaimi et al. [32] further exploited the potential of the algorithm, applying it to 6-DOF alignment. Recently, W. Wang et al. [33] conducted an instantaneous performance analysis of GA-LMS. However, the LMS algorithm extended to GA space still has limitations, such as poor performance under non-Gaussian noise. The α-stable distribution fits actual data very well and is consistent with multichannel interference in wireless networks and backscatter echoes in radar systems, so using the α-stable distribution to simulate non-Gaussian noise is of general significance. W. Wang et al. [34] derived the GA-MCC algorithm and analyzed its performance under α-stable noise. The results show that GA-MCC has good robustness, but there is still room for improvement in its convergence rate. Owing to the superiority of the MEE criterion over the MCC criterion, the GA-MEE and GA-MSEMEE algorithms are proposed in this paper to improve the validity of existing GA adaptive filtering algorithms and to expand their scope of application.
Our contributions are as follows. Firstly, according to GA theory, the multi-dimensional problem is transformed into a mathematical description represented by multivectors. Secondly, the algorithms based on the MEE criterion and on its joint criterion with MSE are derived in GA space; with the help of GA theory, the original MEE algorithm and its joint-criterion counterpart can thereby be used for higher-dimensional signal processing. Finally, experiments validate the effectiveness and robustness of the GA-MEE and GA-MSEMEE algorithms.

Basic theory
Geometric algebra was proposed by the British mathematician William Kingdon Clifford and provides a unified computational framework for multi-dimensional signal processing [35][36][37]. This algebraic approach contains all geometric operators and permits the specification of constructions in a coordinate-free manner [38]. Compared with its particular cases, such as vector and matrix algebras, complex numbers and quaternions, geometric algebra can deal with higher-dimensional signals.
Assuming that an orthonormal basis of ℝⁿ is {e_1, e_2, ⋯, e_n}, the basis of Gⁿ can be generated by multiplying the n basis elements (together with the scalar 1) via the geometric product. The geometric product of two distinct basis elements is non-commutative; its defining property is

e_i e_j = −e_j e_i, i, j = 1, …, n, i ≠ j, and hence e_i e_{ij} = e_i e_i e_j = e_j.

Given n = p + q, the operation rule of the orthonormal basis is

e_i² = +1 for 1 ≤ i ≤ p, e_i² = −1 for p < i ≤ p + q.

Thus, the basis of Gⁿ is

{1, e_i, e_i e_j, ⋯, e_1 e_2 ⋯ e_n}.

The core product in GA space is the geometric product. For two vectors a and b it decomposes as

ab = a · b + a ∧ b,

in which a · b denotes the inner product, which is commutative, and a ∧ b denotes the outer product, which is not commutative. According to these properties, the following expressions can be obtained:

a · b = (ab + ba)/2, a ∧ b = (ab − ba)/2.

Suppose A is a general multivector in Gⁿ; in terms of the basic elements of Gⁿ it can be written as

A = ⟨A⟩_0 + ⟨A⟩_1 + ⟨A⟩_2 + ⋯ + ⟨A⟩_n,

which is made up of its s-vector parts ⟨·⟩_s.
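To make these rules concrete, here is a minimal sketch (ours, not from the paper) of the geometric product in G², representing a multivector by its coefficients on the basis {1, e₁, e₂, e₁e₂}; the multiplication table below follows directly from e_i² = 1 and e₁e₂ = −e₂e₁:

```python
import numpy as np

def gp(a, b):
    """Geometric product in G_2; a, b are coefficient arrays on {1, e1, e2, e12}."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part (e12 e12 = -1)
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    ])

e1 = np.array([0., 1., 0., 0.])
e2 = np.array([0., 0., 1., 0.])

# Non-commutativity: e1 e2 = -e2 e1
assert np.allclose(gp(e1, e2), -gp(e2, e1))

# For vectors a, b: a.b = (ab + ba)/2 (commutative part), a^b = (ab - ba)/2
a = 2*e1 + 1*e2
b = -1*e1 + 3*e2
inner = 0.5 * (gp(a, b) + gp(b, a))
outer = 0.5 * (gp(a, b) - gp(b, a))
assert np.allclose(gp(a, b), inner + outer)   # ab = a.b + a^b
```

The same coefficient-table construction extends to Gⁿ (2ⁿ basis elements), which is how multivector-valued filter operations are carried out in practice.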
Actually, any multivector can be decomposed according to [39]:

A = a_0 + Σ_i a_i e_i + Σ_{i<j} a_{ij} e_i e_j + ⋯ + a_{12⋯n} e_1 e_2 ⋯ e_n.

In the operations of geometric algebra, the main properties used are as follows: (1) Scalar product: A ∗ B = ⟨AB⟩_0; (2) Cyclic reordering: ⟨AB ⋯ C⟩ = ⟨B ⋯ CA⟩; (3) Clifford reverse: the reverse Ã of a multivector reverses the order of all geometric products, so that (AB)~ = B̃ Ã.

The MEE criterion

The Renyi entropy of the error signal e_k is defined as

H_α(e_k) = (1/(1 − α)) log V_α(e_k), V_α(e_k) = E[p^{α−1}(e_k)],

where α > 0 is the order of entropy, V_α(e_k) is the information potential, and when α → 1, Renyi entropy is equivalent to Shannon entropy. In addition, to keep the orientation consistent with the LMS algorithm (minimization), α < 1 is selected. In this case, minimizing the error entropy can be converted into minimizing the information potential.
Hence, for the traditional minimum error entropy (MEE) algorithm, its core expressions are:
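A sketch of the commonly used sliding-window forms from the MEE literature (our notation, assuming a window of length L and a Gaussian kernel κ_σ; not reproduced verbatim from this paper):

```latex
\hat V_\alpha(e_k) \;=\; \frac{1}{L^{\alpha}}
  \sum_{j=k-L+1}^{k}\Bigl[\sum_{i=k-L+1}^{k}\kappa_\sigma(e_j-e_i)\Bigr]^{\alpha-1},
\qquad
w_{k+1} \;=\; w_k - \mu\,\frac{\partial \hat V_\alpha(e_k)}{\partial w_k}.
```

Since α < 1, the update performs gradient descent on the information potential, which by the discussion above is equivalent to minimizing the error entropy.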

The MSE-MEE algorithm
The mean square error criterion has good sensitivity, and the minimum error entropy yields a good error distribution, especially in the presence of higher-order statistics. Therefore, based on these two methods, a new performance index is proposed which combines the advantages of each to achieve both sensitivity and validity of the error distribution. The core expressions of the LMS algorithm are

e(n) = d(n) − wᵀ(n)u(n), w(n + 1) = w(n) + μ e(n)u(n),

while the MSE-MEE algorithm mixes the squared error of the LMS with the information potential of the MEE. The MSE-MEE cost function weights these two terms by a mixing parameter η, with η ∈ [0, 1].
Then the corresponding gradient algorithm is:
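The sketch below assumes the convex-combination form J(w) = η·(1/N)Σe² + (1−η)·V̂_α(e) suggested by the description above (η = 1 recovers pure MSE, η = 0 pure MEE); the weighting, names and data are ours, not the paper's formulas. It computes the mixed cost and its analytic gradient and checks the gradient against central finite differences:

```python
import numpy as np

def cost_and_grad(w, U, d, sigma=1.0, alpha=0.5, eta=0.5):
    """Mixed MSE-MEE cost J = eta*MSE + (1-eta)*V_alpha and its gradient.
    U: (N, M) input regressors, d: (N,) desired signal (assumed sketch form)."""
    N = len(d)
    e = d - U @ w
    # MSE part
    J_mse = np.mean(e**2)
    g_mse = -2.0 / N * (U.T @ e)
    # Information potential V_alpha with a Gaussian-kernel Parzen estimate
    diff = e[:, None] - e[None, :]                 # e_j - e_i
    K = np.exp(-diff**2 / (2.0 * sigma**2))
    S = K.sum(axis=1)                              # sum_i kappa(e_j - e_i)
    V = N**(-alpha) * np.sum(S**(alpha - 1.0))
    # dV/dw = (a-1) N^-a sum_j S_j^(a-2) sum_i K_ji (e_j-e_i)/s^2 (u_j-u_i)
    Udiff = U[:, None, :] - U[None, :, :]          # u_j - u_i
    inner = (K * diff)[:, :, None] * Udiff / sigma**2
    g_mee = (alpha - 1.0) * N**(-alpha) * np.einsum('j,jim->m',
                                                    S**(alpha - 2.0), inner)
    return eta * J_mse + (1 - eta) * V, eta * g_mse + (1 - eta) * g_mee

rng = np.random.default_rng(0)
U = rng.normal(size=(40, 3))
w_true = np.array([0.5, -0.3, 0.8])
d = U @ w_true + 0.05 * rng.normal(size=40)

w = np.zeros(3)
J, g = cost_and_grad(w, U, d)
# Analytic gradient matches central finite differences
num = np.array([(cost_and_grad(w + h, U, d)[0] - cost_and_grad(w - h, U, d)[0]) / 2e-6
                for h in 1e-6 * np.eye(3)])
assert np.allclose(g, num, atol=1e-5)
# A small step along -g decreases the mixed cost
assert cost_and_grad(w - 1e-3 * g / np.linalg.norm(g), U, d)[0] < J
```

The gradient descent on this mixed cost is exactly the "corresponding gradient algorithm": the MSE term drives fast initial convergence while the entropy term shapes the error distribution.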

Problem formulation of adaptive filtering
Regarding the linear filtering model, its formulation involves the input signal of length L, u(n) = [u_n, u_{n−1}, ⋯, u_{n−L+1}]ᵀ, the system vector to be estimated, wᵒ, and the desired signal d(n):

d(n) = uᵀ(n) wᵒ + v(n),

where v(n) is the additive noise. In this research, we make the following assumptions. A1) The multivector valued components of the input signal u(n) are zero-mean white Gaussian processes with variance σ²_s. A2) The multivector valued components of the additive noise are described by α-stable processes. The α-stable distribution is a four-parameter family, represented by S(α, β, γ, σ), in which α denotes the characteristic index, which describes the tail of the distribution; β denotes the skewness; γ denotes the dispersion coefficient; and σ denotes the location of the distribution.
A3) The noise v_n, the system vector wᵒ, the input signal u(n) and the weight error vector Δw_n are uncorrelated.
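Noise satisfying A2 can be simulated directly. The sketch below draws symmetric α-stable samples with the Chambers-Mallows-Stuck method (β = 0 case, valid for α ≠ 1); the scaling by γ^(1/α) assumes the dispersion convention γ = (scale)^α, which is our assumption rather than a formula from the paper:

```python
import numpy as np

def sas_noise(alpha, gamma, size, rng):
    """Symmetric alpha-stable samples, S(alpha, 0, gamma, 0), via the
    Chambers-Mallows-Stuck method (alpha != 1)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    W = rng.exponential(1.0, size)                  # unit-mean exponential
    X = (np.sin(alpha * V) / np.cos(V)**(1.0 / alpha)
         * (np.cos((1.0 - alpha) * V) / W)**((1.0 - alpha) / alpha))
    return gamma**(1.0 / alpha) * X                 # assumes gamma = scale^alpha

rng = np.random.default_rng(1)
v = sas_noise(1.5, 1.0, 100_000, rng)   # the S(1.5, 0, 1, 0) noise used later
assert np.all(np.isfinite(v))

# alpha = 2 reduces to a Gaussian: X = 2 sin(V) sqrt(W), variance 2
g = sas_noise(2.0, 1.0, 200_000, rng)
assert abs(np.std(g) - np.sqrt(2.0)) < 0.05
```

For α = 1.5 the samples exhibit the heavy tails that degrade LMS-type algorithms: occasional draws are orders of magnitude larger than the bulk.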

The proposed GA-MEE algorithm
In this part, we derive the GA-MEE algorithm with the help of GA theory [34]. In traditional algorithms, the cost function of MEE is expressed by the information potential. When α ∈ (0, 1), minimizing the error entropy is equivalent to minimizing this cost function. The GA-MEE cost function can be obtained by rewriting formula (9) in GA form.
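Formula (9) itself is not reproduced in this excerpt; one plausible GA form of the cost, under the assumption that the error multivector is e_j = d(j) − uᵀ(j)w and that the error magnitude |e| = √⟨e ẽ⟩₀ is induced by the Clifford reverse, would read:

```latex
J_{\mathrm{GA\text{-}MEE}}(w) \;=\; \hat V_\alpha(e_k)
  \;=\; \frac{1}{L^{\alpha}} \sum_{j=k-L+1}^{k}
        \Bigl[\sum_{i=k-L+1}^{k} \kappa_\sigma\bigl(|e_j - e_i|\bigr)\Bigr]^{\alpha-1},
\qquad |e| = \sqrt{\langle e\,\tilde e\rangle_0}.
```

This is a sketch of the construction, not the paper's equation: the scalar error in the window estimator is replaced by the magnitude of the multivector error.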

The proposed GA-MSEMEE algorithm
In the same way, according to GA theory, the GA-MSEMEE cost function can be obtained by rewriting formula (11) in GA form.
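Assuming a convex combination of an MSE-type term and the information potential with weight η ∈ [0, 1] (again a sketch, not the paper's formula (11)), the GA form would be:

```latex
J_{\mathrm{GA\text{-}MSEMEE}}(w) \;=\;
  \eta\,\frac{1}{L}\sum_{j=k-L+1}^{k}\langle e_j\,\tilde e_j\rangle_0
  \;+\; (1-\eta)\,\hat V_\alpha(e_k),
```

where ⟨e ẽ⟩₀ is the squared multivector-error magnitude, so η = 1 recovers a GA-LMS-type cost and η = 0 a GA-MEE-type cost.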

Results and discussion
This section carries out some experiments analyzing the performance of the two new algorithms in a non-Gaussian environment. First of all, in order to know how to select appropriate adjustable parameters for the GA-MEE and GA-MSEMEE algorithms, this part analyzes in detail the influence of these parameters (the kernel width σ, the order of entropy α and the weight coefficient η) on the MSD learning curves. Secondly, the GA-MEE and GA-MSEMEE algorithms are compared with other GA based algorithms to verify their superiority. Finally, the algorithms are applied to multi-dimensional signal denoising in a non-Gaussian environment to verify their validity in processing multi-dimensional signals. All MSD learning curves and experimental data are averaged over 50 independent runs. In this paper, the initial weight vector ω₀ is a 5 × 1 multivector, and the length of the sliding window is L = 8. The input signal and noise are as described in A1 and A2; the α-stable noise distribution is S(1.5, 0, 1, 0) in the experiments. In addition, we use the generalized signal-to-noise ratio, GSNR = 10 log(σ²_s/γ_v), to describe the relationship between the input signal and noise, where σ²_s is the variance of the input signal multivector and γ_v is the dispersion coefficient of the noise.

GA-MEE algorithm

To further intuitively analyze the effect of kernel width and order of entropy on the GA-MEE algorithm, the steady-state MSD is plotted in Fig.1 as a function of the kernel width σ and the order of entropy α.
The tendency of the steady-state values with respect to kernel width and order of entropy is clearly highlighted in Fig.1. It can be obtained from Table 1 and the 3-dimensional diagram that the steady-state MSD is smaller for larger values of both σ and α. Different parameter σ: the value of parameter α is set to 0.6, and the value of parameter σ is set to 50, 60, 70, 90 and 100, respectively. Fig.3 shows the instantaneous MSDs of GA-MEE under various σ. It can be seen from Fig.3 that as the kernel width increases, the steady-state MSD decreases and the convergence rate increases; however, when σ exceeds a certain value, the convergence rate gradually decreases. The selection of σ should therefore balance steady-state MSD against convergence rate. In this group of experiments, the convergence rate is best when σ = 70.
Different parameter α: the value of parameter σ is set to 70, and the value of parameter α is set to 0.3, 0.5, 0.6, 0.7 and 0.8, respectively. Fig.4 demonstrates the instantaneous MSDs of GA-MEE under various α. The steady-state MSD increases with the order of entropy α, and the convergence rate decreases obviously, so the selection of α should also balance steady-state MSD against convergence rate.
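The two figures of merit used throughout these experiments can be computed as follows (a small helper sketch; the function names and array layout are ours):

```python
import numpy as np

def gsnr_db(sigma_s2, gamma_v):
    """Generalized SNR: 10*log10(signal variance / noise dispersion)."""
    return 10.0 * np.log10(sigma_s2 / gamma_v)

def msd_db(W, w_o):
    """MSD learning curve in dB, averaged over independent runs.
    W: (runs, iters, M) weight trajectories; w_o: (M,) true system."""
    dev = np.sum(np.abs(W - w_o)**2, axis=-1)   # ||w_n - w_o||^2 per run/iteration
    return 10.0 * np.log10(dev.mean(axis=0))    # average over runs, then to dB

assert gsnr_db(1.0, 1.0) == 0.0                 # sigma_s^2 = gamma_v -> 0 dB
assert np.isclose(gsnr_db(10.0, 1.0), 10.0)
```

Averaging the squared deviation over the 50 runs before taking the logarithm, as above, is the usual convention for MSD learning curves.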

GA-MSEMEE algorithm
From the experimental part of GA-MEE, it is concluded that the larger the parameter α, the slower the convergence rate. In order to study the influence of the parameters on GA-MSEMEE and its superiority in convergence rate, this section selects different parameters σ and η and analyzes the performance of GA-MSEMEE when α = 0.8; the results are listed in Table 2. To further intuitively analyze the effect of kernel width and weight coefficient on the GA-MSEMEE algorithm, the steady-state MSD is plotted in Fig.5 as a function of the kernel width σ and the weight coefficient η. It can be obtained from Table 2 and the 3-dimensional diagram that the steady-state value is smaller as σ becomes larger; numerically, however, the influence of the weight coefficient η on the MSD is not obvious. Different parameter σ: the value of parameter η is set to 8.5 × 10⁻⁶, and the value of parameter σ is set to 50, 60, 70, 90 and 100, respectively. Fig.7 shows the instantaneous MSDs of GA-MSEMEE under various σ. It is concluded from Fig.7 that as the kernel width becomes larger, the steady-state MSD and convergence rate decrease gradually. Considering the above two indicators comprehensively, GA-MSEMEE has better performance when σ = 70 in this group of experiments.
Different parameter η: the value of parameter σ is set to 70. Since the values of parameter η in Table 2 are similar, it is difficult to see their impact on the MSDs of GA-MSEMEE. Thus, we set the parameters η at larger intervals: 7 × 10⁻⁶, 7 × 10⁻⁵, 7 × 10⁻⁴, 8 × 10⁻⁴ and 9 × 10⁻⁴. Fig.8 shows the instantaneous MSDs of GA-MSEMEE under different η. As η increases tenfold, the convergence rate becomes faster, the steady-state MSD gradually increases, and the robustness of the algorithm worsens. Therefore, the selection of the weight coefficient should comprehensively weigh these three aspects of performance. In this group of experiments, GA-MSEMEE has the best performance when η = 7 × 10⁻⁵.
As can be seen from Fig.9, compared with GA-MCC, the GA-MEE has better steady-state MSD and convergence rate, but its convergence rate slows down significantly with the decrease of GSNR. Compared with GA based LMS-type algorithms, the GA-MEE has better steady-state MSD and robustness, but GA-MEE needs more iterations to converge. The improved GA-MSEMEE algorithm solves this problem to a certain extent. The GA-MSEMEE always maintains superior convergence rate, good steady-state MSD and robustness under different GSNR.

Application and multi-dimensional signal analysis
In this part, the two new algorithms are applied to signal denoising. In order to test their superiority in a non-Gaussian environment, we performed the following experiments. As shown in Fig.10, the GA-LMS, GA-NLMS and GA-MCC algorithms all need an adaptation process at the beginning of denoising, which the proposed algorithms do not. Fig.11 shows the average 4-dimensional signal recovery errors of the different algorithms under different GSNRs. The recovery error of the 4-dimensional signal is described by ‖u′ − u‖₂², the squared norm of the difference between the denoised signal and the clean signal.
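The recovery-error metric is just the squared ℓ₂ norm of the residual between the denoised and clean multivector signals (a trivial sketch, with made-up sample values):

```python
import numpy as np

def recovery_error(u_hat, u):
    """||u' - u||_2^2 between the denoised and the clean multivector signal."""
    return float(np.sum(np.abs(u_hat - u)**2))

u = np.array([[1.0, 0.0, 2.0, -1.0]])       # one 4-dimensional (multivector) sample
u_hat = np.array([[1.1, -0.1, 2.0, -1.0]])  # a hypothetical denoised estimate
assert np.isclose(recovery_error(u_hat, u), 0.02)
```

Averaging this quantity over the 50 independent runs gives the curves plotted against GSNR.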
What is more, it is worth noting that the two new algorithms can be applied to higher-dimensional signal processing. Fig.12 demonstrates the denoising results of an 8-dimensional signal with GA-MEE and GA-MSEMEE at GSNR = 0 dB.

Computational Complexity
The running time of the different algorithms for 4-dimensional and 8-dimensional signal denoising is shown in Table 3. The experiments are carried out in MATLAB with an Intel(R) Core(TM) i7-6500U 2.50 GHz CPU and 4 GB of memory. Table 3 shows that the algorithms proposed in this paper have higher computational complexity. The reason for the higher computational complexity of the GA-MEE algorithm is that it involves the calculation of the minimum error entropy, which includes exponential operations on different error signals. The computational complexity of GA-MSEMEE is the highest, mainly because the GA-MSEMEE algorithm fuses MSE and MEE through the weight coefficient.

Conclusions
Two new GA based algorithms, GA-MEE and GA-MSEMEE, are proposed; they are derived from the MEE criterion and the joint criterion, respectively, combined with GA theory. GA-MEE and GA-MSEMEE show strong robustness and high precision for higher-dimensional signal processing in non-Gaussian environments. However, although GA-MEE is more robust than the other algorithms, its convergence rate and sensitivity are low. GA-MSEMEE effectively compensates for this shortcoming of GA-MEE. The experiments demonstrate that GA-MSEMEE achieves a good balance between robustness and convergence rate.