Fractional-order Learning Algorithm for PID Neural Network Decoupling Control Based on Sparrow Search Algorithm

Control of multi-variable systems with strong coupling has long been a significant issue in industry. To accurately eliminate the coupling between system variables and improve the control effect, decoupling control techniques are investigated. In this paper, a decoupling control scheme based on a fractional-order proportion-integration-differentiation neural network and the sparrow search algorithm (SSA-FPIDNN) is proposed, where sparrow search

weights of PIDNN and undesirable accuracy of the gradient descent method [10][11][12]. Carefully selected initial weights may help the PIDNN find the global optimum; otherwise, only locally optimal results can be obtained for control systems [13][14][15]. In other words, suitable initial weights are essential for improving control performance. Shu proposed a network weight correction strategy with an additional momentum term that generates a disturbance to escape local minima [16], but the resulting controller is unstable. Hao proposed a PIDNN controller based on an adaptive scaling factor, which can adaptively change the scaling factor during training [17]. This method improves the control performance of the PIDNN, but the selection of the initial scaling factor is constrained. The gradient descent method in the PIDNN is integer-order: it ignores the influence of past information when calculating the weights at the current sampling time, which leads to poor calculation accuracy [18].
In order to eliminate the coupling between system variables and improve control accuracy, this paper presents a modified PIDNN controller for nonlinear multi-variable systems with strong coupling. The sparrow search algorithm (SSA) is employed to find optimal initial weights of the PIDNN, while a fractional-order weight correction algorithm is used to modify the gradient descent method and improve control performance. The proposed algorithm combines the advantages of an optimal search algorithm with fractional-order characteristics, yielding better performance in tracking speed, stability, and global optimality. The main contributions are as follows:
- The SSA is introduced into the PIDNN to solve the local minimum problem and explore optimal initial weights, thereby enhancing the response speed of the PIDNN.
- A fractional-order algorithm is employed to modify the weights of the PIDNN, which are otherwise updated in integer-order form. Past information is included when updating the weights, so the decoupling control performance of the PIDNN can be improved.
The rest of the paper is organized as follows: Section 2 gives a detailed introduction to the PIDNN. Section 3 introduces the SSA into the PIDNN to find optimal initial weights. Section 4 employs a fractional-order algorithm to modify the weights of the PIDNN. Section 5 presents two examples that verify the effectiveness of the proposed algorithm. Finally, conclusions are given in Section 6.

PID neural network
The neural network has been proven to be a powerful control tool for multivariable and nonlinear systems [19]. However, the structure and function of a traditional neural network are difficult to relate to the dynamic indicators of a control system. Although the multi-layer feedforward neural network is widely used, its input-output characteristics are static, and dynamic components must be added when designing a controller. The PIDNN integrates proportional, integral, and derivative functions into its hidden-layer neurons to obtain dynamic characteristics; therefore, the PIDNN is more suitable for system control [20]. Fig. 1 depicts the structure of a PIDNN [7], where the controlled object is a coupling system with m inputs and n outputs. The PIDNN is a three-layer feedforward network composed of several parallel sub-networks; r_s (s = 1, 2, ..., n) represent the desired signals of the variables, y_s represent the outputs of the controlled object, and v_o (o = 1, 2, ..., m) represent the neuron states. The input layer of each sub-network contains two neurons, which process the desired signal and the output, respectively. The hidden layer contains three neurons, namely a unit proportional, an integral, and a differential element. The output layer carries only one unit proportional neuron to complete the synthesis and output of the entire PIDNN control law. The output of the PIDNN is the input of the controlled object.
(1) Input layer
The inputs of the input-layer neurons are:

net_{s1}(p) = r_s(p),  net_{s2}(p) = y_s(p)

where p is the sampling time.
The input layer of the PIDNN includes 2n identical neurons, and their state functions are unit proportional functions, so the neurons' states are equal to their inputs:

x_{si}(p) = net_{si}(p)

Then, the outputs of the input layer are:

q_{si}(p) = x_{si}(p)

where i is the serial number of the neuron in the sub-network input layer (i = 1, 2) and s is the sequence number of the sub-network (s = 1, 2, ..., n).
(2) Hidden layer
The hidden layer of each sub-network contains three neurons: one proportional, one integral, and one differential neuron (n of each across the network). The inputs of the hidden layer are:

net'_{sh}(p) = Σ_{i=1}^{2} ω_{sih} · q_{si}(p)

The states of the three neuron types are updated by:

x'_{s1}(p) = net'_{s1}(p)
x'_{s2}(p) = x'_{s2}(p − 1) + net'_{s2}(p)
x'_{s3}(p) = net'_{s3}(p) − net'_{s3}(p − 1)

Then, the outputs of the hidden layer are:

q'_{sh}(p) = x'_{sh}(p)

where ω_{sih} is the connection weight from the input layer to the hidden layer and h is the neuron number in the sub-network hidden layer (h = 1, 2, 3).
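The three hidden-layer neuron types can be sketched in code as follows. Note that the clipping of each state to [−1, 1] is an assumption of this illustration (a common choice in PIDNN implementations), not something stated above:

```python
def clip(x, lo=-1.0, hi=1.0):
    # Saturate a neuron state to [-1, 1] (illustrative assumption).
    return max(lo, min(hi, x))

def proportional(net_p):
    # Proportional neuron: x(p) = net(p)
    return clip(net_p)

def integral(net_p, x_prev):
    # Integral neuron: x(p) = x(p-1) + net(p)
    return clip(x_prev + net_p)

def differential(net_p, net_prev):
    # Differential neuron: x(p) = net(p) - net(p-1)
    return clip(net_p - net_prev)
```

These three state functions are what give the PIDNN its dynamic (PID-like) behaviour, in contrast to the static activations of an ordinary feedforward network.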

(3) Output layer
The output layer contains m neurons, and the output is an m-dimensional vector. The input of each output neuron is the weighted sum of the outputs of all neurons in the hidden layer.
The states of the output neurons are:

x''_o(p) = net''_o(p) = Σ_{s=1}^{n} Σ_{h=1}^{3} ω_{sho} · q'_{sh}(p)

Then, the outputs of the output layer, which are also the outputs of the PIDNN, are:

v_o(p) = x''_o(p)

where ω_{sho} is the weight from the hidden layer to the output layer and o is the output-layer neuron number (o = 1, 2, ..., m).
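Putting the three layers together, the forward pass of one sub-network with a single output neuron can be sketched as below; the weight shapes, the [−1, 1] clipping, and the class interface are illustrative assumptions of this sketch:

```python
import numpy as np

class PIDNNSubnet:
    # Sketch of one PIDNN sub-network forward pass: 2 input neurons,
    # 3 hidden neurons (P, I, D), 1 output neuron.
    def __init__(self, w_ih, w_ho):
        self.w_ih = np.asarray(w_ih)   # (2, 3) input -> hidden weights
        self.w_ho = np.asarray(w_ho)   # (3,)  hidden -> output weights
        self.i_state = 0.0             # integral neuron memory
        self.prev_net = np.zeros(3)    # previous hidden-layer inputs

    def step(self, r, y):
        # Input layer: unit-gain neurons fed with desired signal r and plant output y.
        q_in = np.clip([r, y], -1.0, 1.0)
        net = q_in @ self.w_ih                      # hidden-layer inputs
        self.i_state += net[1]                      # integral accumulation
        x = np.array([net[0],                       # proportional state
                      self.i_state,                 # integral state
                      net[2] - self.prev_net[2]])   # differential state
        self.prev_net = net
        q_hid = np.clip(x, -1.0, 1.0)
        return float(q_hid @ self.w_ho)             # control signal v
```

Calling `step(r, y)` once per sampling time reproduces the layer-by-layer computation described above for a single sub-network.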
(4) Weight adjustment algorithm
PIDNN weight adjustment adopts the gradient descent method, and its cost function is defined as:

J = (1/l) Σ_{p=1}^{l} Σ_{s=1}^{n} [r_s(p) − y_s(p)]²

where l is the number of sampling points. Defining the learning rate as η, the connection weights of each layer are updated by:

ω(p + 1) = ω(p) − η · ∂J/∂ω

The PIDNN embeds PID rules into the neural network, which gives the network dynamic characteristics so that it can deal with coupling systems.
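The cost evaluation and the integer-order update can be sketched as follows; the exact normalization constant of the cost is an assumption of this illustration:

```python
import numpy as np

def pidnn_cost(r, y):
    # Averaged squared tracking error over l sampling points
    # (the exact normalization constant is an assumption here).
    r, y = np.asarray(r, float), np.asarray(y, float)
    return np.sum((r - y) ** 2) / r.shape[0]

def gd_step(w, grad, eta=0.1):
    # Integer-order gradient descent: only the current gradient is used,
    # with no memory of past gradients.
    return w - eta * grad
```

The absence of any history term in `gd_step` is precisely the limitation the fractional-order correction of Section 4 addresses.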
However, because the convergence direction of the PIDNN is random at the start of training, a heavy computational budget is required to find the right direction, which slows down convergence. In addition, it increases the probability of the weights falling into a local optimum [20]. To solve this problem, the SSA is introduced into the PIDNN to optimize the initial weights and improve the control performance of the PIDNN.

PID neural network based on sparrow search algorithm
The SSA is a novel evolutionary search algorithm with the advantage of fast convergence, in which the sparrow population is divided into producers and scroungers. Producers provide foraging directions for the remaining sparrows while searching for food; therefore, producers with higher fitness get food first and have a larger search range. Within this search range, any sparrow can become a producer once it finds food. Scroungers obtain food by following the producers, and they may also compete with the producers for food [21]. The algorithm flow is shown in Fig. 2. The positions of the producers are updated by:

P^{t+1}_{µ,ρ} = P^t_{µ,ρ} · exp(−µ/(a · iter_max)),  if R_2 < ST
P^{t+1}_{µ,ρ} = P^t_{µ,ρ} + Q · L,                   if R_2 ≥ ST

where t represents the iteration number, P^t_{µ,ρ} is the position of the µth sparrow in the ρth dimension at iteration t, a ∈ (0, 1] and Q are randomly generated numbers (Q obeys a normal distribution), iter_max is the maximum number of iterations, R_2 ∈ [0, 1] and ST ∈ [0.5, 1.0] denote the early warning value and the safety threshold, respectively, and L is a matrix whose elements are all 1. If R_2 < ST, the surrounding environment is safe; otherwise, danger has appeared and the sparrows must fly to safe places.
The positions of the scroungers are updated by:

P^{t+1}_{µ,ρ} = Q · exp((P^t_worst − P^t_{µ,ρ}) / µ²),          if µ > z/2
P^{t+1}_{µ,ρ} = P^{t+1}_B + |P^t_{µ,ρ} − P^{t+1}_B| · C† · L,   otherwise

where z is the number of sparrows, P_B represents the best position found by the producers, P_worst denotes the global worst position at iteration t, C is a matrix with each entry randomly assigned 1 or −1, and C† = C^T (C C^T)^{−1}.
The positions of the sparrows that perceive danger are updated by:

P^{t+1}_{µ,ρ} = P^t_best + β · |P^t_{µ,ρ} − P^t_best|,                      if f_µ > f_g
P^{t+1}_{µ,ρ} = P^t_{µ,ρ} + G · |P^t_{µ,ρ} − P^t_worst| / (f_µ − f_w + ε),  if f_µ = f_g

where β represents the step size, P_best is the global best position at iteration t, G ∈ [−1, 1] is a randomly generated number, ε denotes a small constant that avoids division by zero, f_µ represents the fitness of the µth sparrow, and f_g and f_w represent the global best and worst fitness at iteration t, respectively.
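The three position updates can be sketched as below, following standard SSA formulations; the handling of random numbers and matrix shapes (e.g. replacing C†·L with an elementwise random sign) is a simplification of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_producer(P, mu, a, iter_max, R2, ST):
    # Producer: exploit locally when safe (R2 < ST), otherwise take a
    # normally distributed step.  mu is a 0-based sparrow index.
    if R2 < ST:
        return P * np.exp(-(mu + 1) / (a * iter_max))
    return P + rng.normal() * np.ones_like(P)

def update_scrounger(P, P_best_producer, P_worst, mu, z):
    # Scrounger: poorly ranked sparrows (mu > z/2) fly elsewhere to forage;
    # the rest search around the best producer's position.
    if mu > z / 2:
        return rng.normal() * np.exp((P_worst - P) / (mu + 1) ** 2)
    C = rng.choice([-1.0, 1.0], size=P.shape)  # random +/-1 direction
    return P_best_producer + np.abs(P - P_best_producer) * C

def update_vigilant(P, P_best, P_worst, f_mu, f_g, f_w, beta, G, eps=1e-8):
    # Danger-aware sparrow: move toward the global best position, or step
    # away from the worst position when already at the best fitness.
    if f_mu > f_g:
        return P_best + beta * np.abs(P - P_best)
    return P + G * (np.abs(P - P_worst) / (f_mu - f_w + eps))
```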
In the SSA, the position of each sparrow represents a candidate solution of the optimization problem. The number of iterations is set to length1, and a fitness function evaluating the control performance is defined. The SSA then searches for the positions with the best fitness, which are taken as the optimal initial weights of the PIDNN.
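How an SSA-style search yields initial weights can be illustrated with a toy stand-in fitness. A real fitness would run the closed-loop system and accumulate tracking error; the quadratic below, the simplified producer/scrounger moves, and all names are hypothetical:

```python
import numpy as np

def fitness(weights):
    # Toy stand-in: distance to a hypothetical "good" weight vector.
    target = np.array([0.1, -0.3, 0.5])
    return np.sum((weights - target) ** 2)

def ssa_search(n_sparrows=20, dim=3, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(n_sparrows, dim))
    best, best_fit = None, np.inf
    for _ in range(iters):
        fits = np.array([fitness(p) for p in pop])
        order = np.argsort(fits)
        if fits[order[0]] < best_fit:
            best, best_fit = pop[order[0]].copy(), fits[order[0]]
        # Crude producer/scrounger split: the better half exploits locally,
        # the worse half jumps toward the current best (a simplification
        # of the full SSA update rules above).
        half = n_sparrows // 2
        pop[order[:half]] += 0.1 * rng.normal(size=(half, dim))
        pop[order[half:]] = best + 0.5 * rng.normal(size=(n_sparrows - half, dim))
    return best, best_fit

w0, f0 = ssa_search()  # w0 would seed the PIDNN's initial weights
```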
The SSA-PIDNN overcomes the adverse effects caused by random initial weights. However, the gradient descent method in (15) ignores the impact of the weights at past moments on the current calculation. To improve the control accuracy and tracking response, a fractional-order weight correction technique is introduced to cope with this problem.
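The idea of this correction can be sketched with the Grünwald–Letnikov coefficient recursion; applying those coefficients to a stored gradient history as below is an illustrative simplification, not necessarily the paper's exact update:

```python
def gl_coefficients(alpha, K):
    # Grünwald-Letnikov coefficients c_j = (-1)^j * C(alpha, j),
    # generated by the recursion c_j = (1 - (alpha + 1)/j) * c_{j-1}, c_0 = 1.
    c = [1.0]
    for j in range(1, K + 1):
        c.append((1.0 - (alpha + 1.0) / j) * c[-1])
    return c

def fractional_step(w, grad_history, alpha=0.9, eta=0.1):
    # Fractional-order gradient step (sketch): instead of the current
    # gradient alone, weight the stored past gradients (newest first in the
    # sum) by the GL coefficients, so past information shapes the update.
    c = gl_coefficients(alpha, len(grad_history) - 1)
    frac_grad = sum(cj * g for cj, g in zip(c, reversed(grad_history)))
    return w - eta * frac_grad
```

Note that for alpha = 1 the coefficients reduce to (1, −1, 0, ...), recovering an ordinary first difference, which is consistent with the integer-order method being a special case.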

SSA-PIDNN by the fractional-order learning algorithm
A valuable property of fractional calculus is that it considers the influence of past information on the current variables, so more accurate results can be obtained. One of the most widely used definitions of fractional calculus is the discrete fractional differential operator proposed by Grünwald and Letnikov:

D^α f(t) ≈ p_1^{−α} Σ_{j=0}^{[t/p_1]} (−1)^j C(α, j) f(t − j · p_1)

where D is the differential operator, p_1 is the sampling time, α is the fractional order, and f(·) is a function of time t. The binomial coefficient is

C(α, j) = Γ(α + 1) / (Γ(j + 1) · Γ(α − j + 1))

where Γ(·) is the Gamma function, and the coefficients c_j = (−1)^j C(α, j) satisfy the recursion

c_0 = 1,  c_j = (1 − (α + 1)/j) · c_{j−1}

From (20), the weights in (15) can be modified by the fractional-order gradient descent algorithm. The structure of the fractional-order PIDNN (FPIDNN) is the same as that of the integer-order PIDNN, but the FPIDNN adjusts its weights with the fractional-order gradient descent algorithm. The weights between the layers are determined as follows:

(1) Hidden layer to output layer
The connection weights from the hidden layer to the output layer are updated by the fractional-order rule, where η_1 is the learning rate from the hidden layer to the output layer.

(2) Input layer to hidden layer
The connection weights from the input layer to the hidden layer are updated by the fractional-order rule, where η_2 is the learning rate from the input layer to the hidden layer.

At each sampling time p, the training procedure updates ω_{sih}(p) by (27), computes x'_{sh}(p) by (9), forms ω_{sho}(p) according to (25), and computes the control output by (12).

Examples

Example 1: Consider a coupling system with 3 inputs and 3 outputs as the controlled object of the PIDNN, with output vector y = [y_1, y_2, y_3]^T.

Example 2: The controlled object of the greenhouse system can be regarded as a system with 3 inputs and 2 outputs. In this paper, the simplified greenhouse system model is the same as that in [22].

From Table 1 and Figs. 3-8, one can find that:
(1) As shown in Table 1, all of the algorithms perform decoupling control effectively, but the proposed SSA-FPIDNN tracks the target outputs in the shortest time.
(2) The traditional PIDNN in Fig. 3 and Fig. 6 has decoupling capability.
However, due to the local-optimum and integer-order accuracy problems, the output has a certain error with respect to the tracking signal, and the response is relatively slow.
(3) The SSA-PIDNN in Fig. 4 and Fig. 7 has a faster response speed than the PIDNN. It solves the problem of PIDNN initial weight selection and achieves better tracking performance.
(4) Compared with the PIDNN and SSA-PIDNN, the SSA-FPIDNN in Fig. 5 and Fig. 8 has a strong decoupling control capability, so the output of the controlled object can track the ideal signal quickly with a small steady-state error.

Conclusions
In this study, an SSA-FPIDNN is proposed for the decoupling control of multi-variable systems with strong coupling. It exploits the SSA to obtain globally optimal initial weights of the FPIDNN, which resolves the difficulties caused by random initial weights, and then decouples the variables with the help of the FPIDNN.
The proposed method achieves more reliable operation than the PIDNN and SSA-PIDNN because the fractional-order gradient descent algorithm has a long-memory property. Combining it with the SSA in the weight adjustment process further enhances the control performance. This work provides a reliable method for the industrial control of multi-variable systems with strong coupling.