Radial basis function neural network with extreme learning machine algorithm for solving ordinary differential equations

We present a novel numerical method for solving ordinary differential equations using a radial basis function (RBF) network trained with the extreme learning machine algorithm. A single-hidden-layer RBF link neural network model is developed for the proposed method. The weights from the hidden layer to the output layer are calculated efficiently by the extreme learning machine algorithm. Experimental comparison with several existing methods shows that the proposed method performs better.


Introduction
Differential equations (DEs) play an important role in many fields of science and engineering. Problems arising in physics, economics, biology, chemistry, population dynamics, resource modeling, and elsewhere can be described by DE models, which makes DEs a fundamental tool for understanding nature and exploring the laws of motion of the material world. However, in many cases an analytical solution of a DE does not exist or is difficult to obtain, so the numerical solution of DEs has become an important research direction. Many numerical methods exist for solving DEs, such as the finite difference method, the finite element method, the finite volume method, and Runge-Kutta methods. The computational complexity of these traditional methods increases rapidly with the number of sampling points, whereas methods based on artificial neural networks (ANNs) can effectively avoid this problem. In addition, traditional methods yield numerical solutions only at a finite set of points, and repeated calculation is needed to obtain the solution at other points, whereas an ANN-based model produces a closed analytic form that can be evaluated at any point.
In the past few decades, researchers have devoted themselves to the study of various machine intelligence methods, especially ANN-based models, for solving DEs. In 1990, Lee and Kang presented a Hopfield neural network model for the solution of DEs. In 2006, Malek and Beidokhti presented a hybrid neural network for solving higher-order differential equations. In 2016, Mall and Chakraverty introduced a Legendre functional link neural network for the solution of DEs. In 2017, Mall and Chakraverty used Chebyshev polynomials as activation functions to construct approximate solutions of DEs.
RBF neural networks (RBFNNs) have the advantages of simple training and fast convergence and can overcome the local minimum problem. They are widely used in function approximation, speech recognition, pattern recognition, image processing, and other fields. In 1991, Park and Sandberg proved that an RBFNN with one hidden layer can be effectively used for universal approximation. In 2012, Lin, Chen and Sze proposed a radial basis function method for solving the Helmholtz problem. In 2017, Qu solved the Langevin equation by applying a cosine radial basis function network. The rest of this paper is organized as follows: Section 2 introduces the proposed RBFNN. Section 3 describes the problem to be solved. Section 4 describes how the parameters of the RBFNN are obtained with the extreme learning machine algorithm. Section 5 presents numerical results obtained with the RBFNN model. The last section gives conclusions.

Radial basis functional link neural network model
A single-layer radial basis functional link neural network model has been considered for the present problem. Figure 1 depicts the three-layer structure of the RBFNN. The input layer has a single node, whose input is the independent variable of the ordinary differential equation. The general form of the nonlinear Gaussian basis function is

φ_i(x) = exp(−(x − c_i)² / r²), i = 1, 2, …, n,

where r is the width parameter, which determines the shape and scope of the Gaussian basis function and is chosen empirically, c_i is the center of the i-th Gaussian basis function, and x is the input of the network. The choice of these two parameters strongly influences the constructed model and ultimately affects the accuracy of the approximate solution. There are usually two ways to select the centers: the first is to select them from the sample points, and the second is a self-organizing selection method, such as clustering the samples or gradient-based training. When solving differential equations, the sample points are values of the independent variable x, sampled uniformly in the solution region; hence, with either selection method, the centers end up uniformly distributed over the solution region. The closer the input x is to the center c_i, the larger the output of the corresponding hidden-layer node. Finally, the output of the network is calculated by

z(x, b) = Σ_{i=1}^{n} b_i φ_i(x),

where b_i, i = 1, 2, …, n, is the weight from the i-th hidden node to the output layer. The RBFs serve as the basis of the hidden-layer neurons, forming the hidden-layer space, so that input vectors are mapped directly into the hidden space (i.e., without weighted connections). Once the centers of the RBFs are determined, this mapping relationship is fixed.
The mapping from the hidden-layer space to the output-layer space is linear: the output of the network is a linear weighted sum of the hidden-neuron outputs, and these weights are the adjustable parameters of the network. Thus, in general, the mapping of the network from input to output is nonlinear, while the network is linear in its adjustable parameters.
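As a minimal sketch of this forward mapping (the function name and the specific Gaussian parameterisation exp(−(x − c_i)²/r²) are our own illustration, assuming centers sampled uniformly in the solution region as described above):

```python
import numpy as np

def rbf_network_output(x, centers, r, b):
    """z(x, b) = sum_i b_i * exp(-(x - c_i)^2 / r^2):
    a fixed nonlinear hidden layer followed by a linear output layer."""
    x = np.atleast_1d(x).astype(float)
    # hidden-layer outputs, shape (num_points, num_centers)
    phi = np.exp(-(x[:, None] - centers[None, :])**2 / r**2)
    return phi @ b  # linear weighted sum of hidden outputs

centers = np.linspace(0.0, 1.0, 10)  # centers sampled uniformly in [0, 1]
b = np.ones(10)                      # output weights (the adjustable parameters)
print(rbf_network_output(0.5, centers, 0.8, b))
```

Note that the hidden layer involves no trainable weights: once the centers and width are fixed, only the linear output weights b remain to be determined.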

Description of the problem
The general form of the ordinary differential equations considered is

G(x, y(x), y'(x), …, y^(k)(x)) = 0, x ∈ [a, b],

where y^(i)(x), i = 1, 2, …, k, denotes the i-th derivative of y. The trial solution can be written as

y_T(x, b) = A(x) + F(x, z(x, b)).

The first part of the trial solution, A(x), ensures that the trial solution satisfies the initial or boundary conditions of the ordinary differential equation and contains no adjustable parameters. The second part contains the adjustable parameters through the network output z(x, b). The closer the error function is to zero, the better the parameters are.
In Sect. 5, Examples 1, 2, and 3 show the results of the RBFNN method for solving first-order ordinary differential equations. The general form of a first-order ordinary differential equation is

y'(x) = f(x, y(x)), x ∈ [a, b].

When the initial value condition is y(a) = A, the trial solution can be written as

y_T(x, b) = A + (x − a) z(x, b).

Fig. 1 The structure of RBFNN

In Sect. 5, Examples 4 and 5 solve systems of first-order ordinary differential equations. The general form of a first-order system is

y_i'(x) = f_i(x, y_1(x), …, y_k(x)), i = 1, 2, …, k.

When the initial value conditions are y_i(a) = A_i, the trial solutions can be written as

y_{T,i}(x, b_i) = A_i + (x − a) z_i(x, b_i), i = 1, 2, …, k.

Examples 6 and 7 are two second-order ordinary differential equations; the general form of a second-order ordinary differential equation is

y''(x) = f(x, y(x), y'(x)), x ∈ [a, b].

Second-order ordinary differential equations can be divided into initial value problems and boundary value problems. When the boundary value conditions are y(a) = A and y(b) = B, the trial solution can be written as

y_T(x, b) = A (b − x)/(b − a) + B (x − a)/(b − a) + (x − a)(b − x) z(x, b).

When the initial value conditions are y(a) = A and y'(a) = A', the trial solution can be written as

y_T(x, b) = A + A'(x − a) + (x − a)² z(x, b).

Extreme learning machine algorithm with proposed RBFNN model

We minimize the error function by adjusting the parameters b. For a point x in the solution region, when the numerical solution equals the exact solution, the error at this point is zero. Eq. (12) is then obtained by substituting the trial solution y_T(x, b) and its derivatives at this point into the equation to be solved.
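The construction of the first-order initial-value trial solution y_T(x, b) = A + (x − a) z(x, b) can be sketched as follows (function names are illustrative; z(x, b) is the Gaussian RBF network output from Sect. 2):

```python
import numpy as np

def trial_solution(x, a, A, centers, r, b):
    """Trial solution y_T(x) = A + (x - a) * z(x, b) for the IVP y(a) = A.
    The constant term enforces the initial condition; the network term
    vanishes at x = a, so y_T(a) = A holds for ANY weights b."""
    x = np.atleast_1d(x).astype(float)
    phi = np.exp(-(x[:, None] - centers[None, :])**2 / r**2)
    return A + (x - a) * (phi @ b)

centers = np.linspace(0.0, 1.0, 8)
b = np.linspace(-1.0, 1.0, 8)  # arbitrary weights, just for illustration
print(trial_solution(0.0, 0.0, 1.0, centers, 0.8, b))  # equals A = 1.0 at x = a
```

This is why no adjustable parameter appears in the first part of the trial solution: the initial condition is satisfied by construction, and training only has to fit the differential equation itself.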
In Eq. (12), only b is unknown. Putting the terms containing b on the left-hand side of the equation and the remaining terms on the right-hand side yields a system of linear equations. Let us take a system of two first-order ordinary differential equations as an example. Selecting m sampling points and n hidden-layer neurons, the trial solutions can be written according to (8); by substituting these trial solutions (14) and the m sampling points into (13), a system of linear equations in b is obtained.
Let H_11, H_12, H_21, H_22 be matrices of size m × n, b_1, b_2 matrices of size n × 1, and T_1, T_2 matrices of size m × 1. Then, (15) can be written in the matrix form Hb = T. The solution of the linear system Hb = T gives the most appropriate parameter values. When H is invertible, b = H⁻¹T. In many cases H is not invertible; then b = H†T, where H† is the Moore-Penrose generalized inverse of H. Moreover, b = H†T is the unique minimum-norm least-squares solution of Hb = T.
The steps of using the extreme learning machine algorithm with the proposed RBFNN model to solve ODEs are as follows:
• Step 1 Determine the sampling points x_1, x_2, …, x_m;
• Step 2 Calculate the network outputs z(x_i, b), i = 1, 2, …, m, at the sampling points;
• Step 3 Write the expression of the trial solution y_T(x, b) according to the initial value or boundary value condition;
• Step 4 Substitute the trial solution y_T(x, b) and its derivatives at each sampling point into the equation to be solved;
• Step 5 Transform the resulting equations into the matrix form Hb = T;
• Step 6 Solve the equations in Step 5 by b = H†T;
• Step 7 Substitute the parameter values obtained in Step 6 into the trial function. The numerical solution at any point in the domain can then be obtained by substituting the independent variable into the trial function.
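The steps above can be sketched end-to-end on a toy problem. The test equation y' = −y, y(0) = 1 on [0, 1] (exact solution e^{−x}) is our own illustration, not one of the paper's examples:

```python
import numpy as np

m, n, r = 30, 20, 0.8
x = np.linspace(0.0, 1.0, m)          # Step 1: sampling points
c = np.linspace(0.0, 1.0, n)          # centers, uniform in the domain

phi  = np.exp(-(x[:, None] - c[None, :])**2 / r**2)   # Step 2: hidden outputs
dphi = -2.0 * (x[:, None] - c[None, :]) / r**2 * phi  # their x-derivatives

# Steps 3-4: trial solution y_T = 1 + x z(x); substituting into y' + y = 0
# gives  sum_i b_i [ (1 + x) phi_i + x dphi_i ] = -1  at every sampling point.
H = (1.0 + x)[:, None] * phi + x[:, None] * dphi      # Step 5: Hb = T
T = -np.ones(m)
b = np.linalg.pinv(H) @ T                             # Step 6: b = H^+ T

y_num = 1.0 + x * (phi @ b)                           # Step 7: evaluate y_T
print(np.max(np.abs(y_num - np.exp(-x))))             # maximum absolute error
```

No iteration is involved: one pseudoinverse solve yields the output weights, which is the efficiency advantage the extreme learning machine approach claims over gradient-trained networks.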

Numerical examples
In this section, some examples are given to verify the effectiveness of the proposed method. To compare the proposed method with other methods conveniently, two error measures are used in this paper. Suppose the sampling points are x_1, x_2, …, x_m and the exact solution of the ordinary differential equation at x_i is y(x_i). The average mean-squared error is

MSE = (1/m) Σ_{i=1}^{m} (y_T(x_i, b) − y(x_i))²,

and the maximum absolute error is

max_i |y_T(x_i, b) − y(x_i)|.

We illustrate the effectiveness of the proposed method from three perspectives. The first is to compare the approximate solutions at the sampling points (training points) and at test points with the exact solutions, i.e., y_T(x_j, b) − y(x_j). The second is to compare the approximate solution obtained by our method with those obtained by other classical methods. The third is to compare the solutions obtained using different numbers of hidden-layer neurons and different numbers of sampling points. In this paper, the centers of the RBF activation functions are obtained by uniform sampling in the solution region, and we take the width parameter r = 0.8.

Example 1 Consider a first-order ordinary differential equation with the analytical solution y(x) = e^{−x²/2}/(1 + x + x³) + x², x ∈ [0, 1]. The RBFNN trial solution in this case is y_T(x, b) = 1 + x z(x, b). Table 1 compares the proposed algorithm with the RBF Net (Rizaner et al. 2018) that minimizes the error with the gradient descent method; the basis function of the network in RBF Net is also the radial basis function. When the number of hidden neurons and the activation function are the same, the solution accuracy of the proposed algorithm is higher. Table 2 compares the approximate solutions obtained by the proposed method with those of other methods; it is clearly seen from the data in Table 2 that the proposed method outperforms all the other methods. Figure 2 shows the exact solution and approximations of Example 1.
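Both error measures defined above are straightforward to compute; a minimal sketch with hypothetical values (the arrays are illustrative, not data from the paper):

```python
import numpy as np

def average_mse(y_approx, y_exact):
    """Average mean-squared error over the sampling points."""
    return float(np.mean((np.asarray(y_approx) - np.asarray(y_exact))**2))

def max_abs_error(y_approx, y_exact):
    """Maximum absolute error over the sampling points."""
    return float(np.max(np.abs(np.asarray(y_approx) - np.asarray(y_exact))))

y_exact  = [1.0, 2.0, 3.0]   # hypothetical exact values y(x_i)
y_approx = [1.0, 2.1, 2.9]   # hypothetical trial-solution values y_T(x_i, b)
print(average_mse(y_approx, y_exact))
print(max_abs_error(y_approx, y_exact))
```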
When the number of hidden-layer neurons is 9, the accuracy of RBF Net is the highest, with an error of 6.80 × 10⁻¹⁰. With 9 hidden-layer neurons, the accuracy of our network is higher, with an error of 7.56 × 10⁻¹⁴.

Example 2 Next, we look at a first-order ordinary differential equation on x ∈ [0, 3]. From Table 3, we can see clearly that our algorithm is more accurate than the algorithm in RBF Net (Rizaner et al. 2018) when the network structure is the same. As can be seen from Table 4, with 21 RBF basis functions the average mean-squared error of the solution obtained by the extreme learning machine method is 5.85 × 10⁻¹², while in RBF Net (Rizaner et al. 2018) the average mean-squared error is 1.44 × 10⁻⁶. When the number of hidden-layer basis functions is increased to 90, the average mean-squared error can be reduced to 9.95 × 10⁻¹³ using the proposed method; increasing the number of hidden-layer neurons beyond 90 does not improve the accuracy effectively. Figure 3 shows the exact solution and approximations of Example 2.
Example 3 Consider a further first-order ordinary differential equation on the interval [0, 2]. The number of hidden-layer neurons is 20, and 30 points are sampled equidistantly on [0, 2]. The maximum absolute error of the numerical solution is 2.20 × 10⁻⁶; Figure 4 shows the maximum absolute error. By comparison, the maximum absolute error of the BeNN method (Sun et al. 2018) is 2.7 × 10⁻³, and the maximum absolute error in [30] is 1.9 × 10⁻².
Example 5 One more problem is given by a system of first-order ordinary differential equations. Using 40 hidden neurons and 40 points sampled equidistantly from the interval [0, 2], the maximum absolute error of the numerical solution is 2.15 × 10⁻⁶. Figure 6a, b compares the exact solution with the approximate solution obtained by the proposed method, and Figure 6c, d shows the maximum absolute error.
Example 6 Next, consider a second-order ordinary differential boundary value problem.
The analytical solution is y(x) = ((cos 1 − 2)/sin 1) sin x − cos x + 2, x ∈ [0, 1]. In this case, the RBFNN trial solution can be written as y_T(x, b) = (1 − x) + x(1 − x) z(x, b). Take 50 equidistant points from the interval [0, 1] as sampling points. The maximum absolute error of the solution obtained by the proposed method is 8.55 × 10⁻⁸. For the unsupervised version of the kernel least mean square (KLMS) algorithm (Yazdi et al. 2011), the maximum absolute error with 50 points from [0, 1] is 3.5 × 10⁻², and the numerical solution obtained by the constructed neural networks (CNN) (Tsoulos et al. 2009) has an error of 5 × 10⁻¹. It can be seen that the proposed method obtains more accurate numerical results with the same sampling points. By increasing the number of sampling points to 100, the maximum absolute error of the proposed method can be reduced to 4.36 × 10⁻⁹. Figure 7 shows the maximum absolute error of Example 6.
Example 7 Last, consider the following second-order ordinary differential boundary value problem.
The corresponding exact solution is y(x) = x⁴ + x, x ∈ [0, 1]. In this case, the RBFNN trial solution can be written as y_T(x, b) = 2x + x(1 − x) z(x, b). Using 10 hidden neurons and 20 points sampled equidistantly from the interval [0, 1], the maximum absolute error of the numerical solution is 2.30 × 10⁻⁷. Figure 8 shows the maximum absolute error of Example 7.
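The boundary-value trial form can be sketched on an equation consistent with this exact solution. We assume, for illustration only, the simplest such equation, y'' = 12x² with y(0) = 0 and y(1) = 2 (this may differ from the paper's actual Example 7); the trial solution 2x + x(1 − x) z enforces both boundary values:

```python
import numpy as np

m, n, r = 20, 10, 0.8
x = np.linspace(0.0, 1.0, m)
c = np.linspace(0.0, 1.0, n)

d     = x[:, None] - c[None, :]
phi   = np.exp(-d**2 / r**2)
dphi  = -2.0 * d / r**2 * phi
d2phi = (-2.0 / r**2 + 4.0 * d**2 / r**4) * phi   # second derivatives

# Trial solution y_T = 2x + x(1-x) z(x) gives
#   y_T'' = (x - x^2) z'' + 2(1 - 2x) z' - 2 z,
# so the residual y_T'' - 12x^2 = 0 is linear in the weights b.
H = (x - x**2)[:, None] * d2phi + 2.0 * (1.0 - 2.0 * x)[:, None] * dphi - 2.0 * phi
T = 12.0 * x**2
b = np.linalg.pinv(H) @ T

y_num = 2.0 * x + (x - x**2) * (phi @ b)
print(np.max(np.abs(y_num - (x**4 + x))))   # max abs error vs x^4 + x
```

Because the boundary terms 2x and x(1 − x) contain no adjustable parameters, y_num(0) = 0 and y_num(1) = 2 hold exactly, whatever the solve produces for b.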

Conclusion
Traditional methods for the numerical solution of differential equations can only find solutions at certain discrete points, while existing neural-network-based methods often require a large number of iterations to determine the network parameters, resulting in low solving efficiency. In this paper, a radial basis function (RBF) network with the extreme learning machine algorithm is proposed for solving ordinary differential equations (ODEs). A single-hidden-layer neural network is used for the solution of ordinary differential equations, with the radial basis function as the activation function of the hidden layer. By substituting the trial solutions at the training points into the differential equation, a set of linear equations in the network parameters is obtained, which the extreme learning machine algorithm can solve directly. Experiments show that the RBFNN model can be used to solve ordinary differential equations with good results. The advantages of this method are mainly the following two: 1. The numerical solution obtained by this method is an expression that can be used to obtain the value at any point in the solution area; 2. The calculation of the neuron weights is simple, efficient, and requires no iteration.
Funding This study was funded by the Natural Science Foundation of Hunan Province, China under Grants 2022JJ30673.

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Human and animal rights This article does not contain any studies with animals performed by any of the authors.
Informed consent Informed consent was obtained from all individual participants included in the study.