Solving localized wave solutions of the derivative nonlinear Schrödinger equation using an improved PINN method

Solving the derivative nonlinear Schrödinger equation (DNLS) has attracted considerable attention in both theoretical analysis and physical applications. Building on the physics-informed neural network (PINN), which was proposed to uncover the dynamical behavior of nonlinear partial differential equations directly from spatiotemporal data, we present an improved PINN method with neuron-wise locally adaptive activation functions to derive localized wave solutions of the DNLS in complex space. To compare the performance of the two methods, we reveal the dynamical behavior of, and carry out an error analysis for, localized wave solutions of the DNLS, including the one-rational soliton solution, genuine rational soliton solutions and the rogue wave solution, and exhibit vivid diagrams and detailed analysis. The numerical results demonstrate that the improved method converges faster and simulates more accurately. On the basis of the improved method, we consider the effects of the number of initial points sampled, residual collocation points sampled, network layers and neurons per hidden layer on the second order genuine rational soliton solution dynamics of the DNLS, and we also analyze how different initial values of the scalable parameters in the locally adaptive activation function affect the simulation of the two-order rogue wave solution.


Introduction
The derivative nonlinear Schrödinger equation (DNLS)

$iq_t + q_{xx} + i(q^2 q^*)_x = 0$, (1.1)

plays a significant role both in integrable system theory and in many physical applications, especially in space plasma physics and nonlinear optics [1,2]. Here, q = q(x, t) is a complex-valued solution, the superscript "*" denotes complex conjugation, and the subscripts x and t denote the partial derivatives with respect to x and t, respectively. In recent decades, many scholars have devoted considerable effort to various mathematical and physical problems of the DNLS. Mio et al. derived the DNLS for Alfvén waves in plasma; it describes well the propagation of small-amplitude nonlinear Alfvén waves in a low-β plasma, propagating strictly parallel or at a small angle to the ambient magnetic field [3,4]. The results show that large-amplitude magnetohydrodynamic waves propagating at an arbitrary angle to the ambient magnetic field in a high-β plasma are also modeled by the DNLS. In nonlinear optics, the modified nonlinear Schrödinger equation, which is gauge equivalent to the DNLS, arises in the theory of ultrashort femtosecond nonlinear pulses in optical fibers [5]. When the spectral width of the pulse is comparable to the carrier frequency, the self-steepening effect of the pulse should be considered. In addition, the filamentation of lower-hybrid waves can be modeled by the DNLS, which governs the asymptotic state of the filamentation and admits moving solitary envelope solutions for the electric field [6]. Ichikawa and co-workers obtained the peculiar structure of spiky modulation of amplitude and phase, which arises from the derivative nonlinear coupling term [7]. To date, abundant solutions and the integrability of the DNLS have been derived through different methods. Kaup and Newell proved the integrability of the DNLS in the sense of the inverse scattering method in 1978 [1].
Nakamura and Chen constructed the first N-soliton formula of the DNLS with the help of the Hirota bilinear transformation method [8]. Furthermore, based on the Darboux transformation technique, Huang and Chen established the determinant form of the N-soliton formula [4]. Kamchatnov and co-workers not only proposed a method for finding periodic solutions of several integrable evolution equations and applied it to the DNLS, but also dealt with the formation of solitons on the sharp front of an optical pulse in an optical fiber according to the DNLS [9,10]. The Cauchy problem of the DNLS has been discussed by Hayashi and Ozawa [11]. Compact N-soliton formulae with both asymptotically vanishing and non-vanishing amplitudes were obtained by iterating the Bäcklund transformation of the DNLS [12]. In addition, the high-order solitons, high-order rogue waves and rational solutions of the DNLS have been given explicitly with the help of two kinds of generalized Darboux transformations that rely on certain limit techniques [13]. Recently, more abundant solutions and new physical phenomena of the DNLS have been revealed by various methods [14][15][16][17][18][19].
In recent years, owing to the explosive growth of available data and computing resources, neural networks (NNs) have been successfully applied in diverse fields, such as recommendation systems, speech recognition, mathematical physics, computer vision and pattern recognition [20][21][22][23][24]. In particular, the physics-informed neural network (PINN) method has proved particularly suitable for solving and inverting equations governed by mathematical-physical systems on the basis of NNs, and it has been found that such high-dimensional network tasks can be completed with smaller datasets [25,26]. The PINN method can not only accurately solve forward problems, where approximate solutions of the governing equations are obtained, but also precisely handle highly ill-posed inverse problems, where parameters involved in the governing equations are inferred from the training data. Motivated by the abundant solutions and integrability of integrable systems [27][28][29], we have simulated the one- and two-order rogue wave solutions of the integrable nonlinear Schrödinger equation by employing a deep learning method with physical constraints [30]. However, slow convergence leads to longer training times and higher hardware requirements, so it is essential to accelerate the convergence of the network without sacrificing performance. Meanwhile, the original PINN method cannot accurately reconstruct the complex solutions of some complicated equations. It is therefore crucial to design a more efficient and adaptable deep learning algorithm that not only improves the accuracy of the simulated solutions but also reduces the training cost.
As is well known, a significant feature of NNs is the activation function, which determines the activation of specific neurons and the stability of network performance during training. There is only a rule of thumb for the choice of activation function, which depends entirely on the problem at hand. In the PINN algorithm, many activation functions, such as sigmoid, tanh and sin, are used to solve various problems; refer to [25,31] for details. Recently, a variety of approaches to activation functions have been proposed to optimize convergence performance and raise training speed. Dushkoff and Ptucha proposed multiple activation functions per neuron, in which each individual neuron chooses between multiple activation functions [32]. Li et al. proposed a tunable activation function in which only one hidden layer is used [33]. The authors of [34] focused on learning activation functions in convolutional NNs by combining basic activation functions in a data-driven way. Jagtap and collaborators employed adaptive activation functions for regression in PINNs to approximate smooth and discontinuous functions as well as solutions of linear and nonlinear partial differential equations, introducing a scalable parameter in the activation function that can be optimized to achieve the best network performance, as it dynamically changes the topology of the loss function involved in the optimization process [35]. The adaptive activation function has better learning capability than a traditional fixed activation, since it greatly improves the convergence rate, especially during early training, as well as the solution accuracy.
In particular, Jagtap et al. presented two different kinds of locally adaptive activation functions, namely layer-wise and neuron-wise locally adaptive activation functions [36]. Compared with global adaptive activation functions [35], the locally adaptive ones further improve the training speed and performance of NNs. Furthermore, to speed up the training process further, a slope recovery term based on the activation slope has been added to the loss function for both layer-wise and neuron-wise locally adaptive activation functions to improve network performance. Recently, we have focused on studying abundant solutions of integrable equations [22,26,30,31] because they possess strong integrability properties, such as Painlevé integrability, Lax integrability and Liouville integrability [37][38][39]. Significantly, the DNLS has been shown to satisfy important integrability properties, and many types of localized wave solutions have been obtained by various effective methods [1][2][3][4][5]. We extend the PINN based on locally adaptive activation functions with a slope recovery term, proposed by Jagtap and cooperators [36], to solve nonlinear integrable equations in complex space, and construct localized wave solutions of the integrable DNLS consisting of rational soliton solutions and the rogue wave solution. Meanwhile, we also demonstrate the corresponding results obtained with the PINN, which are convenient for comparative analysis. A detailed performance comparison between the improved PINN method with locally adaptive activation functions and the PINN method is given.
This paper is organized as follows. In Section 2, we briefly introduce the original PINN method and the improved PINN method with locally adaptive activation functions, and also discuss the training data, loss function, optimization method and operating environment. In Section 3, the one-rational soliton solution and the first order genuine rational soliton solution of the DNLS are obtained by the two distinct PINN approaches. Section 4 provides the second order genuine rational soliton solution and the two-order rogue wave solution of the DNLS; the relative L2 errors in simulating the second order genuine rational soliton solution with different numbers of initial points sampled, residual collocation points sampled, network layers and neurons per hidden layer are also given in detail. Moreover, the effects of the initial values of the scalable parameters on the two-order rogue wave solution are shown. Conclusions are given in the last section.

Methodology
Here, we will consider general (1+1)-dimensional nonlinear time-dependent integrable equations in complex space, where each contains a first-order time derivative as well as other partial-derivative terms, such as nonlinear or dispersive terms:

$q_t + N(q, q_x, q_{xx}, q_{xxx}, \cdots) = 0$, (2.1)

where q is a complex-valued function of x and t to be determined later, and N is a nonlinear functional of the solution q and its derivatives of arbitrary order with respect to x. Due to the complexity of the structure of the solution q(x, t) of Eq. (2.1), we decompose q(x, t) into its real part u(x, t) and imaginary part v(x, t), i.e. q = u + iv, where u(x, t) and v(x, t) are real-valued functions. Substituting into Eq. (2.1), we have

$u_t + N_u(u, v, u_x, v_x, u_{xx}, v_{xx}, \cdots) = 0$, (2.2)
$v_t + N_v(u, v, u_x, v_x, u_{xx}, v_{xx}, \cdots) = 0$, (2.3)

where N_u and N_v are nonlinear functionals of the corresponding solutions and their derivatives of arbitrary order with respect to x, respectively. In this section, we will briefly introduce the original PINN method and its improved version, respectively.

The PINN method.
Here, we first construct a simple multilayer feedforward neural network of depth D, which contains an input layer, D − 1 hidden layers and an output layer. Without loss of generality, we assume that there are N_d neurons in the d-th hidden layer. The d-th hidden layer receives the post-activation output $x^{d-1} \in \mathbb{R}^{N_{d-1}}$ of the previous layer as its input, and the specific affine transformation is of the form

$\mathcal{L}_d(x^{d-1}) := W^d x^{d-1} + b^d$, (2.4)

where the network weights $W^d \in \mathbb{R}^{N_d \times N_{d-1}}$ and the bias term $b^d \in \mathbb{R}^{N_d}$ to be learned are initialized using special strategies, such as Xavier initialization or He initialization [40,41]. The nonlinear activation function σ(·) is applied component-wise to the affine output $\mathcal{L}_d$ of the present layer. In addition, this nonlinear activation is not applied in the output layer for regression problems, or equivalently, the identity activation is used in the output layer. Therefore, the neural network can be represented as

$q(x^0; \Theta) = (\mathcal{L}_D \circ \sigma \circ \mathcal{L}_{D-1} \circ \cdots \circ \sigma \circ \mathcal{L}_1)(x^0)$, (2.5)

where the operator "∘" is the composition operator, $\Theta = \{W^d, b^d\}_{d=1}^{D} \in \mathcal{P}$ represents the learnable parameters to be optimized later in the network, $\mathcal{P}$ is the parameter space, and q and $x^0 = x$ are the output and input of the network, respectively. The universal approximation property of neural networks and the idea of physical constraints play key roles in the PINN method. Thus, based on the PINN method [25], we can approximate the latent complex-valued solution q(x, t) of nonlinear integrable equations using a neural network, into which the underlying laws of physics described by the governing equations are embedded. With the aid of the automatic differentiation (AD) mechanism in deep learning [42], we can automatically and conveniently obtain the derivatives of the solution with respect to its inputs, i.e., the time and space coordinates.
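To make the network construction above concrete, the following sketch builds the composition (2.5) in plain NumPy: Xavier-initialized affine maps, tanh on the hidden layers and the identity activation on the output layer. The layer sizes are illustrative (the paper itself uses TensorFlow 1.15; this is only a minimal stand-in).

```python
import numpy as np

def init_params(layers, seed=0):
    # Xavier (Glorot) initialization for each affine map L_d(x) = W^d x + b^d
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        std = np.sqrt(2.0 / (n_in + n_out))
        params.append((rng.normal(0.0, std, (n_out, n_in)), np.zeros(n_out)))
    return params

def forward(params, x):
    # tanh applied component-wise on hidden layers; identity on the output layer
    a = x
    for W, b in params[:-1]:
        a = np.tanh(a @ W.T + b)
    W, b = params[-1]
    return a @ W.T + b

# Input (x, t) -> output (u, v): a 2 -> 40 -> 40 -> 2 network, for illustration
params = init_params([2, 40, 40, 2])
q = forward(params, np.zeros((5, 2)))   # batch of 5 spacetime points
```

In the actual PINN the same forward pass is written with TensorFlow tensors so that AD can differentiate the outputs u, v with respect to the inputs x, t.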
Compared with traditional numerical differentiation methods, AD is mesh-free and does not suffer from common errors such as truncation and round-off errors. To a certain extent, the AD technique enables us to open the black box of the neural network. In addition, the physical constraints can be regarded as a regularization mechanism that allows us to accurately recover the solution using a relatively simple feedforward network and a remarkably small amount of data. Moreover, the underlying physical laws introduce partial interpretability into the neural network.
Specifically, we define the residual networks f_u(x, t) and f_v(x, t), given by the left-hand sides of Eqs. (2.2) and (2.3), respectively:

$f_u := u_t + N_u(u, v, u_x, v_x, u_{xx}, v_{xx}, \cdots)$, (2.6)
$f_v := v_t + N_v(u, v, u_x, v_x, u_{xx}, v_{xx}, \cdots)$. (2.7)

Then the solution q(x, t) will be trained to satisfy the two physical constraint conditions (2.6) and (2.7), which play a vital role as regularization and are embedded into the mean-squared objective function, that is, the loss function

$Loss = Loss_u + Loss_v + Loss_{f_u} + Loss_{f_v}$, (2.8)

where

$Loss_u = \frac{1}{N_q}\sum_{j=1}^{N_q} |u(x^j, t^j) - u^j|^2, \quad Loss_v = \frac{1}{N_q}\sum_{j=1}^{N_q} |v(x^j, t^j) - v^j|^2$, (2.9)

$Loss_{f_u} = \frac{1}{N_f}\sum_{j=1}^{N_f} |f_u(x_f^j, t_f^j)|^2, \quad Loss_{f_v} = \frac{1}{N_f}\sum_{j=1}^{N_f} |f_v(x_f^j, t_f^j)|^2$. (2.10)

Here $\{x^j, t^j, u^j, v^j\}_{j=1}^{N_q}$ denote the initial and boundary value data of q(x, t), and $\{x_f^j, t_f^j\}_{j=1}^{N_f}$ are the collocation points for f_u(x, t) and f_v(x, t). The loss function (2.8) consists of the initial-boundary value data and the structure imposed by Eqs. (2.6) and (2.7) at a finite set of collocation points. Specifically, the first and second terms on the right-hand side of Eq. (2.8) attempt to fit the solution data, while the third and fourth terms learn to recover the true solution space.
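To illustrate the real/imaginary decomposition of the residual, the sketch below evaluates f = iq_t + q_xx + i(q²q*)_x for an exact plane-wave solution of the DNLS, q = A exp(i(kx − ωt)) with ω = k² + A²k, and checks that both real and imaginary parts f_u, f_v vanish up to discretization error. The finite-difference grid is our own illustrative choice; the PINN itself obtains these derivatives by automatic differentiation, not finite differences.

```python
import numpy as np

# Plane-wave solution of the DNLS: q = A exp(i(kx - wt)) with w = k**2 + A**2 * k
A, k = 1.0, 0.5
w = k**2 + A**2 * k
x = np.linspace(-5.0, 5.0, 401)
t = np.linspace(-1.0, 1.0, 201)
dx, dt = x[1] - x[0], t[1] - t[0]
X, T = np.meshgrid(x, t, indexing="ij")
q = A * np.exp(1j * (k * X - w * T))

# Central differences stand in for automatic differentiation here
q_t  = np.gradient(q, dt, axis=1)
q_xx = np.gradient(np.gradient(q, dx, axis=0), dx, axis=0)
nl_x = np.gradient(q * q * np.conj(q), dx, axis=0)

# f = i q_t + q_xx + i (q^2 q*)_x ; f_u, f_v are its real and imaginary parts
f = 1j * q_t + q_xx + 1j * nl_x
f_u, f_v = f.real, f.imag

# Interior residual (edges trimmed, where one-sided differences lose accuracy)
interior = np.abs(f[2:-2, 2:-2])
```

For an exact solution the interior residual is at the level of the truncation error of the finite differences, which is exactly what the collocation terms (2.10) drive toward zero during training.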

The improved PINN method.
The original PINN method cannot accurately reconstruct complex solutions of some complicated nonlinear integrable equations. Thus, we present an improved PINN method (IPINN) in which a locally adaptive activation function technique is introduced into the original PINN method. It changes the slope of the activation function adaptively, resulting in non-vanishing gradients and faster training of the network. There are several kinds of locally adaptive activation functions, for example, layer-wise and neuron-wise. In this paper, we only consider the neuron-wise version due to accuracy and performance requirements. Specifically, we first define such an activation function as

$\sigma\big(n\, a_i^d\, (\mathcal{L}_d(x^{d-1}))_i\big), \quad d = 1, \ldots, D-1, \quad i = 1, \ldots, N_d,$

where $n \geq 1$ is a scaling factor and $\{a_i^d\}$ are $\sum_{d=1}^{D-1} N_d$ additional parameters to be optimized. Note that for each problem set there is a critical scaling factor $n_c$, above which the optimization algorithm becomes sensitive. The neuron-wise activation acts as a vector activation function in each hidden layer, where each neuron has its own slope of the activation function.
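A minimal sketch of the neuron-wise adaptive activation: each neuron gets its own trainable slope $a_i^d$ multiplied by a fixed scaling factor n. With the initialization $n\,a_i^d = 1$ used later in the paper, the adaptive activation starts out identical to the fixed tanh activation.

```python
import numpy as np

def adaptive_tanh(z, a, n=10):
    # Neuron-wise activation sigma(n * a_i^d * z): each neuron i of layer d
    # carries its own trainable slope a_i^d; n is a fixed scaling factor.
    return np.tanh(n * a * z)

z = np.linspace(-2.0, 2.0, 7)      # pre-activation values of one hidden layer
a = np.full(7, 0.1)                # initialization with n * a_i^d = 1 (n = 10)
out = adaptive_tanh(z, a, n=10)    # coincides with tanh(z) at initialization
```

During training the optimizer updates a alongside the weights and biases, so different neurons may end up with different effective slopes.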
Based on Eq. (2.5), the new neural network with neuron-wise locally adaptive activation functions can be represented as

$q(x^0; \tilde{\Theta}) = (\mathcal{L}_D \circ \sigma \circ n a^{D-1} \odot \mathcal{L}_{D-1} \circ \cdots \circ \sigma \circ n a^{1} \odot \mathcal{L}_{1})(x^0)$, (2.11)

where the slopes $a^d = (a_1^d, \ldots, a_{N_d}^d)$ act component-wise on the affine outputs, i.e., the i-th neuron of layer d outputs $\sigma(n a_i^d (\mathcal{L}_d(x^{d-1}))_i)$, $\forall i = 1, 2, \cdots, N_d$; here $\tilde{\Theta} = \{W^d, b^d, a_i^d\} \in \tilde{\mathcal{P}}$, and $\tilde{\mathcal{P}}$ is the parameter space. In this method, the scalable parameters are initialized such that $n a_i^d = 1$, $\forall n \geq 1$. The resulting optimization algorithm will attempt to find the optimized parameters, including the weights, biases and additional activation coefficients, by minimizing the new loss function defined as

$Loss = Loss_u + Loss_v + Loss_{f_u} + Loss_{f_v} + Loss_S$, (2.12)

where $Loss_u$, $Loss_v$, $Loss_{f_u}$ and $Loss_{f_v}$ are defined by Eqs. (2.9)-(2.10). The last term, the slope recovery term $Loss_S$ in the loss function (2.12), is defined as

$Loss_S = \dfrac{1}{\frac{1}{D-1}\sum_{d=1}^{D-1} \exp\left(\frac{\sum_{i=1}^{N_d} a_i^d}{N_d}\right)}$. (2.13)

This term $Loss_S$ forces the neural network to increase the activation slope values quickly, which ensures a non-vanishing gradient of the loss function and improves the training speed of the network. Compared with the PINN method in Section 2.1, the improved method induces a new gradient dynamics, which results in better convergence points and a faster convergence rate. Jagtap et al. stated that a gradient descent algorithm such as stochastic gradient descent (SGD) minimizing the loss function (2.12) does not converge to a sub-optimal critical point or a sub-optimal local minimum for the neuron-wise locally adaptive activation function, given appropriate initialization and learning rates [36].
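The slope recovery term can be sketched directly; the normalization below follows our reading of the neuron-wise form in [36] (the reciprocal of the layer-averaged exponential of the mean slope), so treat the exact constants as an assumption. The key property is that larger average slopes make the term smaller, which is what pushes the slopes up during training.

```python
import numpy as np

def slope_recovery(a_list):
    # Loss_S = 1 / mean_d( exp( mean_i(a_i^d) ) ) over the D-1 hidden layers;
    # minimizing Loss_S pushes the average slopes a_i^d upward.
    return 1.0 / np.mean([np.exp(np.mean(a_d)) for a_d in a_list])

a_small = [np.full(40, 0.1)] * 9   # slopes at initialization (9 hidden layers)
a_large = [np.full(40, 0.5)] * 9   # slopes after some growth during training
assert slope_recovery(a_large) < slope_recovery(a_small)
```

Adding this scalar to the data and residual losses couples the slope parameters to the same optimizer that updates the weights and biases.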
In both methods, all loss functions are optimized using the L-BFGS algorithm, a full-batch gradient-based optimization algorithm built on a quasi-Newton method [43]. In particular, the scalable parameters in the adaptive activation function are generally initialized as n = 10, $a_i^d = 0.1$, unless otherwise specified. In addition, we select relatively simple multilayer perceptrons (i.e., feedforward neural networks) with Xavier initialization and the tanh activation function. All the code in this article is based on Python 3.8 and TensorFlow 1.15, and all numerical experiments reported here were run on a DELL Precision 7920 Tower computer with a 2.10 GHz 8-core Xeon Silver 4110 processor and 64 GB of memory.
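As a toy illustration of the full-batch quasi-Newton training loop, the sketch below minimizes a tiny least-squares loss with SciPy's L-BFGS-B routine; the two-parameter line fit stands in for the network loss (2.8)/(2.12), and is our own example rather than the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Full-batch data for a toy regression problem standing in for the PINN loss
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs - 1.0

def loss(theta):
    # Mean-squared error of a linear model w*x + b, analogous to Loss in (2.8)
    w, b = theta
    r = w * xs + b - ys
    return np.mean(r**2)

# L-BFGS plays the role of the quasi-Newton optimizer used in both methods
res = minimize(loss, x0=np.zeros(2), method="L-BFGS-B")
```

In the actual experiments the same optimizer is driven through TensorFlow's SciPy interface, with gradients supplied by automatic differentiation instead of finite differences.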
One-rational soliton solution and first order genuine rational soliton solution of the DNLS

In this section, the two neural network methods introduced in the previous section are used to obtain simulated solutions of the DNLS, and the dynamical behavior, error analysis and related plots of the one-rational soliton solution and the first order genuine rational soliton solution of the DNLS are presented in detail. We consider the DNLS along with Dirichlet boundary conditions,

$iq_t + q_{xx} + i(q^2 q^*)_x = 0$, $x \in [x_0, x_1]$, $t \in [t_0, t_1]$, with initial condition $q(x, t_0) = q_0(x)$ and Dirichlet data prescribed at $x = x_0$ and $x = x_1$, (3.1)

where $x_0$, $x_1$ represent the lower and upper boundaries of x, and $t_0$, $t_1$ represent the initial and final times, respectively. The initial condition $q_0(x)$ is an arbitrary complex-valued function. The rational soliton solutions of the DNLS have been obtained by generalized Darboux transformations [13]. In this part, we employ two different approaches, the PINN and the IPINN, to simulate two different forms of rational soliton solutions, and compare them with the known exact solutions of the DNLS to verify that the numerical solutions q(x, t) obtained by the neural network models are effective. From Ref. [13], we can derive the one-rational soliton solution and the first order genuine rational soliton solution of the DNLS. The one-rational soliton solution (3.2) involves arbitrary constants a and c (with $i^2 = -1$). The velocity of this one-rational soliton solution is $a^2/4$, its center moves along the line $a^2 x - 4t + a^2 c = 0$, and the amplitude of |q(x, t)| is $16/a^2$. On the other hand, the first order genuine rational soliton solution (3.3) of the DNLS is nothing but a rational traveling wave solution on a non-vanishing background. In the next two sections, we use the PINN method and the improved PINN method to simulate these two solutions, respectively, with the necessary comparisons and analyses exhibited in detail.
One-rational soliton solution.

In this section, based on a neural network structure with nine hidden layers and 40 neurons per layer, we numerically construct the one-rational soliton solution of the DNLS via the PINN method and the improved PINN method. One can obtain the exact one-rational soliton solution of Eq. (3.1) by taking a = 1, c = 1 in Eq. (3.2). We employ a traditional finite difference scheme on even grids in MATLAB to simulate Eq. (3.1) with the initial data (3.4) to acquire the training data. In particular, the scalable parameters are initialized as n = 5, $a_i^d = 0.2$. Specifically, dividing the spatial region [−2.0, 0.0] into 513 points and the temporal region [−0.1, 0.1] into 401 points, the one-rational soliton solution q(x, t) is discretized into 401 snapshots accordingly. We generate a smaller training dataset containing initial-boundary data by randomly extracting $N_q = 100$ points from the original dataset, together with $N_f = 10000$ collocation points generated by the Latin hypercube sampling (LHS) method [44]. Given this dataset of initial and boundary points, the latent one-rational soliton solution q(x, t) has been successfully learned by tuning all learnable parameters of the neural network and minimizing the loss functions (2.8) and (2.12). The PINN model achieves a relative L2 error of 4.345103e−02 in about 1314.1089 seconds with 15395 iterations, whereas the IPINN achieves a relative L2 error of 1.998304e−02 in about 1358.9031 seconds with 10966 iterations.
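A minimal sketch of the collocation-point generation: a basic Latin hypercube sampler (one stratified sample per equal-probability bin in each dimension, with independently shuffled bin orderings, in the spirit of pyDOE's `lhs`). The domain bounds below are the training region quoted in the text; the implementation itself is our own illustrative stand-in.

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    # Basic LHS: split each dimension into n equal bins, draw one point per
    # bin, then shuffle the bin order independently per dimension.
    rng = np.random.default_rng(seed)
    samples = np.empty((n, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        edges = np.linspace(0.0, 1.0, n + 1)
        pts = edges[:-1] + rng.random(n) * (1.0 / n)   # one point per bin
        rng.shuffle(pts)
        samples[:, j] = lo + (hi - lo) * pts
    return samples

# 10000 collocation points over the (x, t) training domain used in this section
pts = latin_hypercube(10000, [(-2.0, 0.0), (-0.1, 0.1)])
```

Unlike uniform random sampling, every one-dimensional bin contains exactly one collocation point, which spreads the residual constraints evenly over the spacetime domain.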
In Figures 1 and 2, the density plots, the sectional drawings of the latent one-rational soliton solution q(x, t) and the iteration curves under the PINN and IPINN structures are plotted, respectively. Panels (a) of Fig. 1 and Fig. 2 compare the exact solution with the predicted spatiotemporal solutions of the two methods. In particular, the bottom panels of (a) in Fig. 1 and Fig. 2 compare the exact and predicted solutions at times t = −0.05, 0, 0.05. Obviously, the bottom panel of (a) in Figure 2 shows that the predicted solution of the DNLS agrees more closely with the exact solution than the bottom panel of (a) in Figure 1; in other words, the simulation effect of the IPINN is better than that of the PINN. It is not hard to see that the training loss curve in panel (b) of Fig. 2, which reveals the relation between iteration number and loss function, is smoother and more stable than the curve in panel (b) of Fig. 1. In this test case, the IPINN with the slope recovery term performs better than the PINN in terms of convergence speed and solution accuracy.

First order genuine rational soliton solution.
In this section, we numerically construct the first order genuine rational soliton solution of Eq. (3.1).
With the same data generation and sampling method as in Section 3.1, we numerically simulate the first order genuine rational soliton solution of the DNLS (1.1) by using the PINN and the IPINN, respectively. The training dataset, composed of initial-boundary data and collocation points, is produced by randomly subsampling $N_q = 100$ points from the original dataset and selecting $N_f = 10000$ collocation points generated by LHS. After training on the first order genuine rational soliton solution, the PINN achieves a relative L2 error of 5.598548e−03 in about 349.5862 seconds with 3305 iterations, while the improved PINN achieves a relative L2 error of 4.969464e−03 in about 1103.1358 seconds with 10384 iterations. Apparently, when simulating the first order genuine rational soliton solution, the IPINN requires more iterations and a longer training time, but yields a smaller L2 error than the PINN. Fig. 3 shows the density plots, profiles and loss curve of the first order genuine rational soliton solution obtained with the PINN. Figure 4 illustrates the density diagrams, profiles at different instants, error dynamics diagrams, three-dimensional motion and loss curve of the first order genuine rational soliton solution based on the IPINN. From panels (a) of Fig. 3 and Fig. 4, we can clearly see that both methods simulate the first order genuine rational soliton solution accurately. However, comparing panel (b) of Fig. 3 with panel (d) of Fig. 4, we clearly observe that the loss curve of the IPINN decreases faster and more smoothly, while the loss curve of the PINN fluctuates greatly when the number of iterations is about 1500, and the burr phenomenon is remarkable throughout the whole PINN training process.
Furthermore, the loss curve in Figure 4 shows that the ideal effect has already been achieved after 2000 iterations of IPINN training, so we can control the number of iterations to save training cost in some specific cases. At t = −0.40, 0, 0.40, we show the profiles at these three moments in the bottom rows of (a) in Fig. 3 and Fig. 4, and find that the first order genuine rational solution has the character of a soliton, since its amplitude does not change with time. Panel (b) of Fig. 4 exhibits the error dynamics, i.e. the difference between the exact and predicted solutions, for the first order genuine rational soliton solution. In Fig. 4, the corresponding three-dimensional plot of the first order genuine rational soliton solution is also shown; evidently, this solution resembles a single-soliton solution on the |q| = 1 plane-wave background.
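For reference, the relative L2 error quoted throughout the paper can be computed in a few lines; the complex test field and the 0.5% multiplicative perturbation below are hypothetical inputs, used only to show the formula $\|q_{pred} - q_{exact}\|_2 / \|q_{exact}\|_2$ in action.

```python
import numpy as np

def relative_l2(q_pred, q_exact):
    # Relative L2 error: ||q_pred - q_exact||_2 / ||q_exact||_2,
    # with the norm taken over the flattened (complex) field
    return np.linalg.norm(q_pred - q_exact) / np.linalg.norm(q_exact)

exact = np.exp(1j * np.linspace(0.0, 1.0, 100))   # hypothetical exact field
pred = exact * (1 + 5e-3)                          # 0.5% multiplicative error
err = relative_l2(pred, exact)
```

For a purely multiplicative perturbation of size 5e-3 the relative L2 error is exactly 5e-3, which makes the metric easy to sanity-check.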

Second order genuine rational soliton solution and two-order rogue wave solution of the DNLS
In this section, we will use the two methods described in Section 2, namely the PINN and the IPINN, to construct the second order genuine rational soliton solution and the two-order rogue wave solution of the DNLS, respectively. The detailed results and analysis are given in the following two parts.

Second order genuine rational soliton solution.
In this section, based on the Dirichlet boundary conditions of Eq. (3.1), we will numerically predict the second order genuine rational soliton solution of the DNLS by using the PINN method and the improved PINN method, separately. The second order genuine rational soliton solution (4.1) of the DNLS, which involves the constant $\alpha = \pm\frac{5}{12}$, has been derived in Ref. [13]. The "ridge" of this soliton (4.1) approximately lies on the line x = 3t. As t → ±∞, the second order genuine rational soliton solution (4.1) approaches the first order genuine rational soliton solution (3.3) along its "ridge".
Next, we obtain the initial and boundary value dataset by the same data discretization method as in Section 3.1. By randomly subsampling $N_q = 200$ points from the original dataset and selecting $N_f = 20000$ collocation points, a training dataset composed of initial-boundary data and collocation points is generated with the help of LHS. This dataset is then fed into the two neural network models, built from the two different neural network algorithms, to simulate the second order genuine rational soliton solution. After training, the PINN model achieves a relative L2 error of 3.680510e−02 in about 705.3579 seconds with 6167 iterations, while the IPINN achieves a relative L2 error of 4.295123e−02 in about 874.1350 seconds with 6142 iterations.
The PINN experiment results are summarized in Fig. 5, where we simulate the solution q(x, t) and present the density plots, profiles and iteration curves of the second order genuine rational soliton solution. From (b) of Figure 5, it can be clearly observed that the loss curve declines very slowly, with a particularly large fluctuation after 6500 iterations, which indicates that the PINN has slow convergence and poor stability of the loss function. Fig. 6 displays the training results obtained with the improved PINN method: the density diagrams, profiles at different instants, error dynamics diagrams, three-dimensional motion and loss curve of the second order genuine rational soliton solution q(x, t) are illustrated. The top panel of (a) of Fig. 6 gives the density map of the hidden solution q(x, t), and combining (b) of Fig. 6 with the bottom panel of (a) of Fig. 6, we can see that the relative L2 error is relatively large around t = 0.20. From (d) of Fig. 6, in contrast with the first order genuine rational soliton solution obtained with the improved PINN method in Section 3.2, the loss curve of the second order genuine rational soliton solution is relatively stable and the iterative process is relatively long, which is completely different from the sharp drop of the loss curve and the smaller number of effective iterations for the first order genuine rational soliton solution in (d) of Fig. 4. In a word, the results of the two neural network methods show that both the PINN and the IPINN can simulate the second order genuine rational soliton solution accurately, with similar training times, relative errors and iteration numbers, but the iterative process of the IPINN is more stable and its training performance is better. There is no doubt that the IPINN is more reliable for training higher-order solutions of the DNLS.
In addition, according to the IPINN model, we obtain the following two tables. Based on the same initial and boundary values of the second order genuine rational soliton solution, with $N_q = 200$ and $N_f = 20000$, we employ the control variable method, often used in physics, to study the effects of the number of network layers and the number of neurons per layer on the second order genuine rational soliton solution dynamics of the DNLS. The relative L2 errors for different numbers of network layers and neurons per layer are given in Table 1. From the data in Table 1, we can see that when the number of network layers is fixed, the relative L2 error generally decreases as the number of neurons per layer increases. Owing to randomness, a few individual results deviate from this trend, but on the whole the conclusion holds. Similarly, when the number of neurons per layer is fixed, the deeper the network, the smaller the relative error. To sum up, the number of network layers and the number of neurons per layer jointly determine the relative L2 error, and when the number of layers is at least 6 and the number of neurons per layer is at least 30, the overall relative error is small. For the same original dataset, Table 2 shows the relative L2 errors of a nine-layer network with 40 neurons per layer for different numbers of sampling points $N_q$ in the initial-boundary data and different numbers of collocation points $N_f$ generated by the Latin hypercube sampling method. From Table 2, we can see that the influence of $N_q$ and $N_f$ on the relative L2 error of the network is not so obvious.
After careful observation, when $N_f = 20000$ the overall relative L2 error is small regardless of the value of $N_q$, which also explains why the neural network model can simulate accurate numerical solutions from a smaller initial dataset.

[Table 1 near here; only a fragment survives extraction. Relative L2 errors by network depth (rows) across increasing numbers of neurons per layer (columns):
(unlabeled row): 5.945028e-01, 5.162760e-02, 6.452197e-02, 1.540266e-01, 7.040157e-02
9 layers: 2.476185e-01, 1.089718e-01, 4.295123e-02, 1.045272e-01, 1.788330e-02
12 layers: 3.268944e-01, 5.060934e-02, 6.087790e-02, 5.869449e-02, 1.037406e-01]

Two-order rogue wave solution.

Recently, the study of rogue waves has become one of the hot topics in many areas, including optics, plasma physics, ocean dynamics, machine learning, Bose-Einstein condensates and even finance [30,[45][46][47][48][49][50]]. In addition to a peak amplitude more than twice that of the background wave, rogue waves are characterized by instability and unpredictability. Therefore, the study and application of rogue waves play a momentous role in real life; in particular, avoiding the damage that ocean rogue waves cause to ships is of great practical significance. At present, Marcucci et al. have investigated a computational machine in which nonlinear waves replace the internal layers of neural networks, discussed learning conditions, and demonstrated functional interpolation, data interpolation, datasets and Boolean operations. When the nonlinear Schrödinger equation is considered, the use of highly nonlinear regimes means that solitons, rogue waves and shock waves play a leading role in training and computation [47]. Moreover, the dynamical behaviors and error analysis of the one- and two-order rogue waves of the nonlinear Schrödinger equation have been revealed by a deep learning neural network with physical constraints for the first time [30]. The rogue wave solutions of the DNLS were derived via Darboux transformation in [51], and the high-order rogue wave solutions were obtained by generalized Darboux transformation [13].
However, to the best of our knowledge, machine learning with a neural network model has not been exploited to simulate the rogue wave solutions of the DNLS. In this section, we construct the two-order rogue wave solution of the DNLS by employing the PINN and the IPINN, respectively. Some vital comparisons are given to better describe the advantages of the PINN and the IPINN.
On the basis of the Dirichlet boundary conditions in Eq. (3.1), we numerically train the two-order rogue wave solution of the DNLS by employing the PINN method and the improved PINN method, respectively. The two-order rogue wave solution of the DNLS has been derived in Ref. [13]. Similar to the discretization method in Section 3.1, we randomly sample N q = 300 points from the original initial-boundary value condition dataset, and select N f = 20000 collocation points generated by the LHS method. Thus, the training dataset of initial-boundary value data and collocation points is formed. After training with the two methods, the neural network model of the PINN achieves a relative L 2 error of 8.412217e-02 in about 1188.4475 seconds with 9470 iterations. Moreover, introducing the IPINN, the structure attains a relative L 2 error of 7.262528e-02 in about 2924.0589 seconds with 18394 iterations. It can be seen from the above results that, under the same experimental conditions, the relative L 2 error of the IPINN method is smaller than that of the PINN for simulating the rogue wave solution, but the improved PINN method requires longer training time and more iterations. Next, we give the specific numerical results and correlation analysis.
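As an illustration of the sampling step, the N f = 20000 collocation points can be drawn by Latin hypercube sampling, here sketched with `scipy.stats.qmc`; the (x, t) domain bounds are placeholders for illustration, not the computational domain actually used in the paper:

```python
import numpy as np
from scipy.stats import qmc

# Assumed (x, t) bounds for illustration only.
lb = np.array([-5.0, -2.0])   # lower bounds of (x, t)
ub = np.array([ 5.0,  2.0])   # upper bounds of (x, t)
N_f = 20000                   # number of collocation points

# Latin hypercube sample on the unit square, scaled to the domain.
sampler = qmc.LatinHypercube(d=2, seed=0)
X_f = lb + (ub - lb) * sampler.random(N_f)   # shape (N_f, 2): columns x, t
```

The PDE residual of the network is then evaluated and penalized at the rows of `X_f`, while the N q initial-boundary points supply the data term of the loss.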
The density plots, the sectional drawings and the error density plots of the two-order rogue wave solution obtained with the PINN method are exhibited in Fig. 7. In the bottom panel of (a) in Fig. 7, one can observe that the wave peak of the two-order rogue wave is well simulated, but the simulation on both sides of the wave peak is poor, which can also be verified from the error diagram in panel (b) of Fig. 7. On the other hand, for the neural network model which applies the improved PINN method, its density plots, sectional drawings, error density plots, three-dimensional diagram and loss function curve are shown in detail in Fig. 8. Similarly, we find that the wave peak simulation in (a) of Fig. 8 is not as good as that in Fig. 7, but the simulation on both sides of the wave peak is better, which is just the opposite of the situation simulated by the PINN. On the whole, the simulation quality of the IPINN is higher, and it has more research value. Panel (b) of Fig. 8 shows that there is a small error at the middle peak, where the error is the difference between the exact solution and the predicted solution. The 3D plots and the loss function curve are shown in (c) and (d) of Fig. 8, respectively; it can be seen that the loss value fluctuates greatly when the number of iterations is around 2500, and then decreases slowly from 1 to 0.1. Furthermore, the initialization of the scalable parameters can be done in various ways as long as the value does not cause divergence of the loss. In this work, the scalable parameters are initialized such that na_i^m = 1, ∀n ≥ 1. Although an increase in the scaling factor speeds up the convergence rate, at the same time the parameter a_i^m becomes more sensitive.
In order to better understand the influence of the initialization of the scalable parameters on the improved PINN algorithm, we present four different initialization conditions of the scalable parameters for obtaining the two-order rogue wave solution by employing the improved PINN method in Table 3. From Table 3, we can clearly observe that when we amplify the scaling factor n in the initialization of the scalable parameters, the number of iterations and the training time increase, but the relative L 2 error does not decrease monotonically. When the hyper-parameter n = 10, the relative L 2 error is the smallest and the training effect is the best in Table 3. This also explains why we generally choose the initialization of the scalable parameters as n = 10, a_i^m = 0.1 in the IPINN with the locally adaptive activation function in this paper.
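A minimal sketch of the neuron-wise locally adaptive activation and the initialization na_i^m = 1 discussed above (NumPy, with illustrative names; in the actual IPINN the per-neuron parameters a are trainable, which this sketch omits):

```python
import numpy as np

# Neuron-wise locally adaptive activation: tanh(n * a * z), where z is the
# pre-activation of a layer, a holds one slope parameter per neuron, and n
# is a fixed scaling factor shared by the layer.
def adaptive_tanh(z, a, n=10):
    # With the initialization n * a = 1 (e.g. n = 10, a = 0.1), this
    # reduces to the plain tanh activation at the start of training.
    return np.tanh(n * a * z)
```

During training, a is updated by gradient descent together with the network weights, so each neuron can adapt the slope of its activation; the factor n amplifies the gradients of a, which accelerates convergence but makes a more sensitive, consistent with the behavior reported in Table 3.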
The rogue wave is a kind of wave that comes and goes without a trace; research on seeking and simulating its solutions can provide a significant theoretical basis for the prediction and utilization of rogue waves. Compared with the nonlinear Schrödinger equation, the form of the rogue wave solution of the DNLS is more complex. We have successfully utilized the PINN to simulate the rogue wave solutions of the nonlinear Schrödinger equation in Ref. [30]. In this section, a large number of experiments and analyses have been carried out, and finally the two-order rogue wave solution of the DNLS has been simulated. Under the same experimental conditions and environment, the PINN is better at simulating the wave crest, while the IPINN has a better comprehensive effect on the wave crest and both sides of the crest. Apparently, the IPINN has more advantages in the overall effect, especially in the simulation of more complex rogue wave solutions.

Conclusion
Compared with traditional numerical methods, the PINN method has no mesh-size limits and gives full play to the advantages of computer science. Moreover, owing to the physical constraints, the neural network is trained with remarkably few data, converges quickly, and has better physical interpretability. These numerical methods showcase a series of results for various problems in the interdisciplinary field of applied mathematics and computational science, and open a new path for using machine learning to simulate unknown solutions and to discover the corresponding parametric equations in scientific computing. They also provide a theoretical and practical basis for dealing with some high-dimensional scientific problems that could not be solved before.
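The physical constraint here means penalizing the residual of Eq. (1.1), f = iq_t + q_xx + i(q^2 q^*)_x, at the collocation points. As a numerical sanity check (not the paper's code), one can verify that a plane wave q = A exp(i(kx - wt)) with the dispersion relation w = k^2 + A^2 k solves the DNLS exactly; A, k and the grid below are illustrative choices, and finite differences stand in for the automatic differentiation a PINN would use:

```python
import numpy as np

# Plane-wave solution of the DNLS and its dispersion relation.
A, k = 1.0, 2.0
w = k**2 + A**2 * k                      # w = k^2 + A^2 k

q = lambda x, t: A * np.exp(1j * (k * x - w * t))

x = np.linspace(-1.0, 1.0, 401)
t = 0.3
dx = x[1] - x[0]
dt = 1e-4

# Central finite differences for q_t, q_xx and (|q|^2 q)_x.
q_t = (q(x, t + dt) - q(x, t - dt)) / (2 * dt)
q_xx = (q(x + dx, t) - 2 * q(x, t) + q(x - dx, t)) / dx**2
cubic = lambda y: np.abs(q(y, t))**2 * q(y, t)   # q^2 q^* = |q|^2 q
cubic_x = (cubic(x + dx) - cubic(x - dx)) / (2 * dx)

# Residual of Eq. (1.1); it vanishes up to O(dx^2) discretization error.
residual = 1j * q_t + q_xx + 1j * cubic_x
max_res = np.max(np.abs(residual))
```

A PINN enforces exactly this residual, split into real and imaginary parts, as an extra term in the loss alongside the data misfit on the initial-boundary points.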
In this paper, based on the PINN method, an improved PINN method which contains the locally adaptive activation function with scalable parameters is introduced to solve the classical integrable DNLS. The improved PINN method achieves a better performance of the neural network through such learnable parameters in the activation function. Specifically, we apply two data-driven algorithms, the PINN and the IPINN, to deduce the localized wave solutions, which consist of the one-rational soliton solution, the genuine rational soliton solutions and the rogue wave solution of the DNLS. In all these cases, compared with the original PINN method, it is shown that the loss function decays faster in the case of the improved PINN method, and correspondingly the relative L 2 error in the simulation of the solution is shown to be similar or even smaller in the proposed approach. We outline how different types of localized wave solutions are generated through different choices of initial and boundary value data. Remarkably, these numerical results show that the improved PINN method with the locally adaptive activation function is more powerful than the PINN method in exactly recovering the different dynamical behaviors of the DNLS.
The improved PINN approach is a promising and powerful method to increase the efficiency, robustness and accuracy of the neural-network-based approximation of nonlinear functions as well as abundant localized wave solutions of integrable equations. Furthermore, more general nonlinear integrable equations, such as the Hirota equation, which has received wide attention in integrable systems, are not investigated in our work. Owing to the ability of the improved PINN to accelerate the convergence rate and improve the network performance, more complex integrable equations could also be considered, such as the Kaup-Newell systems, the Sasa-Satsuma equation, the Camassa-Holm equation and so on. How to combine machine learning with integrable system theory more deeply and build a significant integrable deep learning algorithm is an urgent problem to be solved. These new problems and challenges will be considered in future research.