Non-intrusive Nonlinear Model Reduction via Machine Learning Approximations to Low-dimensional Operators

Although projection-based reduced-order models (ROMs) for parameterized nonlinear dynamical systems have demonstrated exciting results across a range of applications, their broad adoption has been limited by their intrusivity: implementing such a reduced-order model typically requires significant modifications to the underlying simulation code. To address this, we propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner. Specifically, the approach approximates the low-dimensional operators associated with projection-based reduced-order models using modern machine-learning regression techniques. The only requirement of the simulation code is the ability to export the velocity given the state and parameters, as this functionality is used to train the approximated low-dimensional operators. In addition to enabling non-intrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving up to $1000\times$ reduction in run time. We demonstrate the effectiveness of the proposed technique on two types of PDEs.


Introduction
Modern computational architectures have enabled the detailed numerical simulation of incredibly complex physical and engineering systems at a vast range of scales in both space and time [Lee et al., 2013]. Even with improved high-performance computing, the iterative numerical solutions required for design and optimization may quickly become intractable; the computational demands for real-time feedback control are even more challenging [Brunton and Noack, 2015]. Fortunately, many high-dimensional dynamical systems, such as a discretized simulation of a fluid flow, are characterized by energetic coherent structures that evolve on an attractor or manifold of lower intrinsic dimension [Aubry et al., 1988, Berkooz et al., 1993, Holmes et al., 1996]. This observed low-dimensionality has formed the basis of reduced-order modeling, which now plays a central role in modern design, optimization, and control of complex systems. Despite the growing importance of model reduction, the challenges of nonlinearity, identifying an effective low-dimensional basis, and multiscale dynamics have limited its widespread use across the engineering and natural sciences [Brunton and Kutz, 2018]. A leading method for reduced-order modeling involves the Galerkin projection of known governing equations onto a mathematical or empirical basis, although this approach is intrusive and challenging to implement. The leading alternative involves black-box system identification, which is purely data-driven but generally yields uninterpretable models. In this work, we investigate and compare several emerging techniques from machine learning, i.e., applied data-driven optimization, for non-intrusive reduced-order modeling.
Intrusive model reduction methods, based on a working and decomposable numerical simulation of the governing equations, provide the most general and widely used set of techniques. Foremost in this arsenal is the Galerkin projection of the governing equations onto a low-dimensional linear subspace, usually spanned by orthogonal modes, such as Fourier, or data-driven modes from proper orthogonal decomposition (POD) [Aubry et al., 1988, Berkooz et al., 1993, Holmes et al., 1996, Noack et al., 2003]. These approaches benefit from a strong connection to the underlying physics, the ability to include constraints to enforce preservation of the underlying dynamic structure, and adaptivity [Amsallem et al., 2009, Carlberg et al., 2013]. In addition, there are several extensions around hyper-reduction to improve efficiency for systems with parametric dependence or higher-order nonlinearities, based on the empirical interpolation method (EIM) [Barrault et al., 2004] and the discrete empirical interpolation method (DEIM) [Chaturantabut and Sorensen, 2010]. However, these models are limited to the dynamic terms of the governing equations, and the linear projection basis may not be optimal for capturing the dynamics. The main drawback of intrusive methods is that they are challenging to implement, requiring considerable human effort and knowledge to manipulate the governing equations and simulation code.
In contrast to intrusive model reduction, data-driven techniques are becoming increasingly prevalent, catalyzed by the increasing availability of measurement data, the lack of governing equations for many modern systems, and emerging methods in machine learning and data-driven optimization. Collectively, these data-driven techniques form the field of system identification [Ljung, 1999, Juang, 1994, Nelles, 2013]. Many techniques in system identification use regression to identify linear models, such as the eigensystem realization algorithm (ERA) [Juang and Pappa, 1985] and dynamic mode decomposition (DMD) [Schmid, 2010, Tu et al., 2014, Kutz et al., 2016, Bai et al., 2020]; recently, both techniques have been connected to nonlinear systems via the Koopman operator [Mezić, 2005, Rowley et al., 2009, Brunton et al., 2017]. Another classic technique in nonlinear system identification is the NARMAX algorithm [Kukreja and Brenner, 2007, Billings, 2013, Semeraro et al., 2016]. Recently, powerful techniques in machine learning are re-defining what is possible in system identification, leveraging increasingly large datasets and more powerful optimization and regression techniques. Deep learning, based on multi-layer neural networks, is increasingly used to model fluid dynamics and obtain closure models [Milano and Koumoutsakos, 2002, Ling et al., 2016, Zhang and Duraisamy, 2015, Singh et al., 2017, Duraisamy et al., 2018, Wang et al., 2017, Kutz, 2017]. More generally, machine learning is having a considerable impact in the modeling of dynamical systems and physics [Bongard and Lipson, 2007, Schmidt and Lipson, 2009, Brunton et al., 2016, Raissi and Karniadakis, 2018], for example relevant work in cluster-based reduced order models [Kaiser et al., 2014], long short-term memory (LSTM) networks [Vlachas et al., 2018], and Gaussian process regression [Karniadakis, 2017, Wan and Sapsis, 2017]. Of particular interest are techniques based on the principle of parsimony that seek the simplest models, with the fewest terms necessary to describe the observed dynamics [Bongard and Lipson, 2007, Schmidt and Lipson, 2009, Brunton et al., 2016, Rudy et al., 2017, Schaeffer, 2017]. For example, the sparse identification of nonlinear dynamics (SINDy) algorithm [Brunton et al., 2016] uses sparse regression to identify the few active terms in a differential equation. The resulting models balance model complexity with descriptive power, avoiding overfitting and remaining both interpretable and generalizable.
Data-driven techniques in system identification and machine learning are currently being employed for advanced, non-intrusive, and reduced-order modeling, removing the need for full-order model equations or a modifiable numerical code. The SINDy modeling framework has been extended to enforce known constraints, such as energy-conserving quadratic nonlinearities in incompressible flows. This so-called Galerkin regression also enables higher-order nonlinearities than are possible with standard Galerkin projection, providing effective closure for the dynamics of truncated modes. Swischuk et al. [2018] develop parametric models between input parameters and POD coefficients for physics and engineering problems, comparing many leading methods in machine learning. Peherstorfer and Willcox [2016] develop time-dependent reduced order models for partial differential equations by non-intrusively inferring the reduced-order model operator output as a function of initial conditions, inputs, trajectories of states, and full-order model outputs, without requiring access to the full-order model. They prove that for dynamics that are linear in the state or are characterized by low-order polynomial nonlinearities, the ROM converges to the projection of the FOM onto a low-order subspace. Carlberg et al. [2019] propose autoencoder-based dimensionality reduction to learn dynamics for recovering missing CFD data. Other non-intrusive ROMs have been developed based on neural networks [Wang et al., 2016, Kani and Elsheikh, 2017, Xie et al., 2019, Hesthaven and Ubbiali, 2018, Wang et al., 2018], radial basis functions [Xiao et al., 2015], and kernel methods [Vaerenbergh, 2010, Wirtz et al., 2015]. Hybrid methods are also promising, for example by modeling the error of a Galerkin model to produce closures [Xie et al., 2018].
Despite the considerable recent progress in leveraging machine learning for non-intrusive reduced-order modeling, current approaches still have a number of shortcomings. First, many methods are limited to low-order polynomial nonlinearities. Second, the above approaches learn the low-dimensional operators from full-system trajectories, which is expensive and limits the predictive accuracy, as the ROM trajectory is likely to quickly deviate from the full-system trajectory. Current methods also give limited consideration to stability, and the interplay between regression methods and time integration has not been explored. Finally, current approaches do not provide a framework for model selection, as each method is likely to be best suited to a different problem class.
In this work, we continue to develop and explore machine learning techniques for non-intrusive model reduction, addressing many of the shortcomings described above. In particular, we develop and compare numerous leading techniques in machine learning to produce accurate and efficient reduced order models. We focus primarily on the class of Galerkin projection models, although these approaches may be extended to more general models. To study the dynamical system in the latent space, we map from the low-dimensional state at the current time instance to the next, in explicit or implicit time integration schemes. We investigate a variety of regression models for the mapping, including k-nearest neighbors [Altman, 1992], support vector regression [Smola and Schölkopf, 2004], random forests [Breiman, 2001], boosted trees [Hastie et al., 2009], the vectorial kernel orthogonal greedy algorithm (VKOGA) [Wirtz and Haasdonk, 2013, Wirtz et al., 2015], and SINDy [Brunton et al., 2016]. In the following, we explicitly consider stability and explore the connection between various regression methods and their suitability for different time integration schemes; for example, non-differentiable machine learning models are only amenable to explicit time integration. We also explore modeling procedures of varying complexity on two examples, the 1D Burgers' equation and the 2D convection-diffusion equation, using both explicit and implicit time integrators.
Problem formulation

General nonlinear dynamical systems

This work considers parameterized nonlinear dynamical systems characterized by the system of nonlinear ODEs

$$\dot{x} = f(x, t; \mu), \qquad x(0; \mu) = x_0(\mu), \tag{1}$$

where $t \in [0, T]$ denotes time with $T \in \mathbb{R}_+$ representing the final time, $x \in \mathbb{R}^N$ denotes the state, $\mu \in \mathcal{D} \subseteq \mathbb{R}^p$ denotes the system parameters, $x_0 : \mathcal{D} \to \mathbb{R}^N$ is the parameterized initial condition, and $f : \mathbb{R}^N \times \mathbb{R}_+ \times \mathcal{D} \to \mathbb{R}^N$ denotes the velocity.

Galerkin projection
We assume that a low-dimensional trial-basis matrix $V \in \mathbb{R}^{N \times n}$ (with $n \ll N$) has been computed, e.g., via proper orthogonal decomposition (POD), such that the solution can be approximated as

$$x(t; \mu) \approx \tilde{x}(t; \mu) = \bar{x}(\mu) + V\hat{x}(t; \mu),$$

with $\hat{x} \in \mathbb{R}^n$ denoting the reduced state and $\bar{x}(\mu)$ a reference state. Then, the Galerkin ROM can be expressed as

$$\dot{\hat{x}} = V^T f(\bar{x}(\mu) + V\hat{x}, t; \mu), \qquad \hat{x}(0; \mu) = V^T\big(x_0(\mu) - \bar{x}(\mu)\big). \tag{2}$$

Critically, note that the ROM is defined by the low-dimensional mapping

$$f_r : (\hat{x}, t; \mu) \mapsto V^T f(\bar{x}(\mu) + V\hat{x}, t; \mu).$$

The reduced velocity $f_r$ is thus simply a function that maps the reduced state and inputs to a low-dimensional vector, so we can rewrite Equation (2) as $\dot{\hat{x}} = f_r(\hat{x}, t; \mu)$. However, this approach is intrusive to implement in computational-mechanics codes, as it requires querying the full-order model code to compute $f(\bar{x}(\mu) + V\hat{x}, t; \mu)$ for every instance of $\hat{x}$ and $\mu$ encountered during the ROM simulation; without hyper-reduction (e.g., DEIM or gappy POD), it is also computationally expensive, as computing $V^T f$ given $f$ incurs $O(Nn)$ flops. Even if hyper-reduction is present, however, the approach remains intrusive, as sampled entries of the velocity must be computed for every instance of $\hat{x}$ and $\mu$ encountered during the ROM simulation.
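To make this construction concrete, the following minimal sketch (Python/NumPy; the function names and snapshot workflow are illustrative, not taken from the paper's code) builds a POD basis and wraps a generic full-order velocity into the reduced velocity $f_r$:

```python
import numpy as np

def pod_basis(snapshots, n):
    """Compute an n-dimensional POD trial basis from a snapshot matrix.

    snapshots: (N, n_snap) array of full-order states.
    Returns (V, x_ref): orthonormal basis V and reference state x_ref (mean).
    """
    x_ref = snapshots.mean(axis=1)
    U, _, _ = np.linalg.svd(snapshots - x_ref[:, None], full_matrices=False)
    return U[:, :n], x_ref

def make_reduced_velocity(f, V, x_ref):
    """Wrap a full-order velocity f(x, t, mu) into f_r(xhat, t, mu) = V^T f(...)."""
    def f_r(xhat, t, mu):
        x = x_ref + V @ xhat          # lift reduced state to the full space
        return V.T @ f(x, t, mu)      # project the velocity: O(Nn) cost
    return f_r
```

This wrapper makes the intrusiveness explicit: every evaluation of $f_r$ calls the full-order velocity $f$, which is precisely the dependence the regression-based approximation proposed below removes.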

Regression-based reduced operator approximation
This section outlines learning the low-dimensional operator through regression methods. We begin by describing the general formulation of the framework in Section 3.1; Section 3.2 then discusses the proposed regression models and their computational complexity. In Section 3.3, we derive the boundedness and stability of the approximated discrete-time dynamical system.

Mathematical formulation
The objective is to develop a data-driven, non-intrusive approximation to the reduced velocity $f_r$ that applies to reduced-order models of different dynamical systems. In particular, we aim to devise an approximate mapping $\hat{f}_r$ such that $\hat{f}_r(\hat{x}, t; \mu) \approx f_r(\hat{x}, t; \mu)$. As the domain and codomain of $\hat{f}_r$ exhibit no specialized structure, we can consider general high-dimensional regression techniques (e.g., kernel regression, random-forest regression).
Similar to Equation (6), the surrogate reduced-order model is given by $\dot{\hat{x}} = \hat{f}_r(\hat{x}, t; \mu)$ with the initial condition $\hat{x}(0; \mu) = V^T\big(x_0(\mu) - \bar{x}(\mu)\big)$. We assume that the sequence of reduced states generated by the reduced velocity $f_r$ can be studied through the Markovian discrete-time dynamical system

$$\hat{x}^{j+1} = g(\hat{x}^j; \mu)$$

for a discrete-time velocity $g : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^n$. If such a discrete-time velocity exists and we can compute the operator, the whole time sequence of the reduced states can be estimated from this recurrence by specifying the initial (reduced) state. By considering the mapping between the current reduced state and the next, we cast the problem into a set of regression models.

Surrogate ROM
In this work, we collect the reduced states from the Galerkin projection as in Section 2.2 and approximate the operator $g$ by regressing each component of the reduced velocity. To be specific, we construct individual regressions $\hat{g}_i \approx g_i$ for $i = 1, \cdots, n$, where the $i$th equation of $g$ is

$$\hat{x}_i^{j+1} = g_i(\hat{x}^j; \mu), \tag{12}$$

with $\hat{x} = [\hat{x}_1, \cdots, \hat{x}_n]$ and $g = [g_1, \cdots, g_n]$. Equation (12) shows that we can map the reduced state $\hat{x}^j$ at time step $j$ to each component $\hat{x}_i^{j+1}$ of the state at the next time step, for $j = 0, \cdots, N_t - 1$.
In the offline stage, we obtain training data from the existing Galerkin projection model. By querying both features ($\hat{x}^j$) and responses ($\hat{x}_i^{j+1}$) at every time instance, we generate a data set $\mathcal{T}_{\text{train},i} = \{(\hat{x}^j, \hat{x}_i^{j+1})\}_j$ for model training and cross-validation.
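A minimal sketch of this offline data collection (names are illustrative; in the experiments below, time and parameters are also included as features):

```python
import numpy as np

def collect_training_pairs(trajectories):
    """Build regression data from Galerkin ROM trajectories.

    trajectories: list of (Xhat, mu), where Xhat has shape (N_t + 1, n) and
    holds reduced states xhat^0, ..., xhat^{N_t}; mu is the parameter vector.
    Returns stacked features (xhat^j, mu) and responses xhat^{j+1}.
    """
    feats, resp = [], []
    for Xhat, mu in trajectories:
        for j in range(Xhat.shape[0] - 1):
            feats.append(np.concatenate([Xhat[j], mu]))  # current reduced state + parameters
            resp.append(Xhat[j + 1])                     # next reduced state
    return np.array(feats), np.array(resp)
```

Each column `resp[:, i]` then serves as the response for the $i$th regression model $\hat{g}_i$.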
We consider a variety of conventional statistical models for regressing the feature-response pairs, including support vector regression (SVR) with kernel functions (Gaussian, polynomial) [Smola and Schölkopf, 2004], tree-based methods (boosted decision trees [Hastie et al., 2009], random forests [Breiman, 2001]), and k-nearest neighbors [Altman, 1992]. Three types of kernel functions are explored in SVR. Specifically, we refer to the SVR models using the 2nd- and 3rd-order polynomial kernels as SVR2 and SVR3, respectively; when the Gaussian radial basis function (RBF) kernel is used, we refer to the model as SVRrbf. In addition, we investigate the vectorial kernel orthogonal greedy algorithm (VKOGA) and the sparse identification of nonlinear dynamics (SINDy) as advanced regression models. More details of VKOGA and SINDy can be found in Appendix A.1, where we also compare the computational complexity of all the proposed regression models. For all the candidate models, we employ cross-validation to optimize the hyperparameters for model selection, aiming for balanced bias and variance. We also explore the training and validation error as the number of samples varies in Appendix A.2, and we select a fixed sample size for the performance comparison between models.
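One possible realization of this model-selection loop uses scikit-learn; the hyperparameter grids below are illustrative assumptions, not the paper's exact settings:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor

def fit_component_models(feats, resp_i):
    """Cross-validate several candidate regressors for one component g_i."""
    candidates = {
        "SVR2": GridSearchCV(SVR(kernel="poly", degree=2), {"C": [1, 10, 100]}),
        "SVR3": GridSearchCV(SVR(kernel="poly", degree=3), {"C": [1, 10, 100]}),
        "SVRrbf": GridSearchCV(SVR(kernel="rbf"), {"gamma": ["scale", 0.1, 1.0]}),
        "RF": GridSearchCV(RandomForestRegressor(), {"n_estimators": [15, 40]}),
        "Boost": GridSearchCV(GradientBoostingRegressor(), {"n_estimators": [40, 80]}),
        "kNN": GridSearchCV(KNeighborsRegressor(), {"n_neighbors": [4, 6, 8]}),
    }
    return {name: search.fit(feats, resp_i) for name, search in candidates.items()}
```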
For time integration along the trajectory of the dynamical system, we investigate the appropriate time step for each regression model. We implement 4th-order Runge-Kutta as the explicit method, and backward Euler, solved with either Newton-Raphson or fixed-point iteration, as the implicit method. We report the numerical results, with respect to the Galerkin reduced-order model (ROM) and the full-order model (FOM) over time, in Section 4.
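For the explicit case, a sketch of one step (assuming the fitted regressors are assembled into a continuous-time approximate reduced velocity `f_hat`):

```python
def rk4_step(f_hat, xhat, t, dt, mu):
    """One explicit 4th-order Runge-Kutta step of the surrogate ROM
    d(xhat)/dt = f_hat(xhat, t; mu)."""
    k1 = f_hat(xhat, t, mu)
    k2 = f_hat(xhat + 0.5 * dt * k1, t + 0.5 * dt, mu)
    k3 = f_hat(xhat + 0.5 * dt * k2, t + 0.5 * dt, mu)
    k4 = f_hat(xhat + dt * k3, t + dt, mu)
    return xhat + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

An implicit backward Euler step instead solves $\hat{x}^{j+1} = \hat{x}^j + \Delta t\, \hat{f}_r(\hat{x}^{j+1}, t^{j+1}; \mu)$ for $\hat{x}^{j+1}$; Newton-Raphson requires the regressor to be differentiable, which is why non-differentiable models (e.g., trees, kNN) are paired with explicit schemes.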

Error analysis
Assuming the considered regression model generates bounded error in the reduced space, we examine the boundedness of the surrogate model's error in the time evolution of the states along the trajectory. Let $J_t = [0, T]$ denote the time domain, $J_\mu \subset \mathbb{R}^p$ be a compact parameter space, $x : J_t \to \mathbb{R}^N$ denote the state variable, and $f(x, t; \mu)$ be the vector field. Let $f_r(\hat{y}, t; \mu) = V^T f(V\hat{y}, t; \mu)$ represent the Galerkin reduced vector field. Let $\hat{x}(t)$ denote the approximate reduced state constructed by a machine-learning method and $\hat{f}_r(\hat{x}, t; \mu)$ be the corresponding approximate vector field.
The error of the reduced model can be defined as $e(t) := V\hat{x}(t) - x(t)$. Let $P := VV^*$ denote the orthogonal projector onto $\mathrm{range}(V)$, and decompose the error as $e_o(t) := (I - P)e(t)$, the component orthogonal to $\mathrm{range}(V)$, and $e_i(t) := Pe(t)$, the component parallel to $\mathrm{range}(V)$. Let $\check{x} := Px$ denote the direct projection of $x$. Thus, we have $e_o(t) = \check{x}(t) - x(t)$, the best-approximation error incurred by the subspace alone. However, since the system evolves in time, the further approximations in the projection-based reduced model result in an additional error $e_i(t) = V\hat{x}(t) - \check{x}(t)$. Although $e_i(t)$ and $e_o(t)$ are orthogonal to each other, they are not independent.
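In summary, the decomposition reads (a reconstruction consistent with the definitions above, with $P = VV^*$):

```latex
\begin{aligned}
e(t)   &:= V\hat{x}(t) - x(t),\\
e_o(t) &:= (I - P)\,e(t) = \check{x}(t) - x(t)
          &&\text{(projection error, orthogonal to } \operatorname{range}(V)\text{)},\\
e_i(t) &:= P\,e(t) = V\hat{x}(t) - \check{x}(t)
          &&\text{(in-plane error, in } \operatorname{range}(V)\text{)}.
\end{aligned}
```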
Lemma. Consider the dynamical system of Equation (1), suppose $f$ is Lipschitz continuous in its first argument with constant $K$, and suppose the regression error is uniformly bounded, $\sup \|\hat{f}_r - f_r\| \le C$. Then

$$\|e\|_\infty \le e^{KT}\big(\|e_i(0)\| + \|e_o\|_\infty\big) + \frac{C}{K}\big(e^{KT} - 1\big). \tag{15}$$

Proof: To simplify notation, we fix $\mu$ and do not denote it explicitly in the vector fields.
Substituting Equations (1) and (9) into the definition of $e(t)$ gives an evolution relation (16) for the error; left-multiplying (16) by $P$ and recognizing that $PV = V$ gives an evolution equation for $e_i(t)$. Expanding $e_i(t + h)$ with this equation and applying the triangle inequality yields a bound on $\|e_i(t + h)\|$ in terms of $\|e_i(t)\|$. Rearranging this inequality, applying the Lipschitz condition, and noting that the $O(h)$ remainder can be bounded uniformly, independently of $e_i(t)$, the mean value theorem with $h \to 0$ gives the differential inequality

$$\frac{d}{dt}\|e_i(t)\| \le K\|e_i(t)\| + K\|e_o\|_\infty + C.$$

Rewriting this inequality in integral form and simplifying the integral on the right-hand side gives, for any $t \in J_t$,

$$\|e_i(t)\| \le e^{Kt}\|e_i(0)\| + \Big(\|e_o\|_\infty + \frac{C}{K}\Big)\big(e^{Kt} - 1\big).$$

It follows that $\|e_i\|_\infty \le e^{KT}\|e_i(0)\| + \big(\|e_o\|_\infty + \frac{C}{K}\big)\big(e^{KT} - 1\big)$. Combining this inequality with $\|e\|_\infty \le \|e_i\|_\infty + \|e_o\|_\infty$, one obtains Equation (15).

Remark:
The above lemma provides a bound for $e_i(t)$ in terms of $\|e_o\|_\infty$ and $\|e_i(0)\|$. We have $e_i(0) = 0$ when the initial condition of the reduced model is given by $\hat{x}(0) = V^* x_0$ for Equation (9). In this situation, Equation (15) becomes

$$\|e\|_\infty \le e^{KT}\|e_o\|_\infty + \frac{C}{K}\big(e^{KT} - 1\big).$$

Numerical experiments
To assess the proposed non-intrusive ROM, we consider two parameterized PDEs: (i) the 1D inviscid Burgers' equation and (ii) the 2D convection-diffusion equation. We implement the explicit 4th-order Runge-Kutta solver, and backward Euler, solved with either Newton-Raphson or fixed-point iteration, as the implicit method.

1D Burgers' equation
The first experiment employs a 1D parameterized inviscid Burgers' equation. The input parameters $\mu = (a, b)$ lie in the space $[1.5, 2] \times [0.02, 0.025]$; for the online test, the parameters are fixed to $\mu = (1.8, 0.0232)$. The FOM solves the problem using a conservative finite-volume formulation and backward Euler in time.
The 1D domain is discretized using a grid with 501 nodes, corresponding to $x = i \times (100/500)$, $i = 0, \cdots, 500$. The solution $u(x, t)$ is computed in the time interval $t \in [0, 25]$ using a uniform time-step size $\Delta t$, chosen for each time integrator based on a convergence study.

Data collection
We first verify the choice of an appropriate time step $\Delta t$ for each time integrator. We collect the solutions for an increasing number of time steps $N_t = [25, 50, 100, 200, 400, 800, 1600, 3200, 6400]$ using both the Runge-Kutta and backward Euler integrators. Throughout the paper, we select the time step at 99% of the asymptotic rate of convergence. The verification results in Figure 1 show that $N_t = 200$ is a reasonable number of time steps for 4th-order Runge-Kutta and $N_t = 800$ for the backward Euler method. During the offline stage, we run four full simulations corresponding to the corner parameters of the space $[1.5, 2] \times [0.02, 0.025]$. Then, we sample the training data from a Latin hypercube: $N_{\text{training}}$ and $N_{\text{validation}}$ instances of the state, time, and parameters are generated following the criterion that the minimum distances between the data points are maximized. For this study, the default size of the training set is $N_{\text{training}} = 1000$ and the default size of the validation set is $N_{\text{validation}} = 500$. The reduced vector field $f_r$ is computed for each input pair $(\hat{x}, t; \mu)$. Note that both the training and validation stages involve pure machine learning; only the test stage evaluates the proposed ROM. The parameters are fixed to $\mu = (1.8, 0.0232)$ for testing purposes.
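A sketch of this sampling step (via SciPy's quasi-Monte Carlo module; the bounds shown are illustrative, and in the experiments the reduced-state coordinates are sampled alongside time and parameters):

```python
from scipy.stats import qmc

def lhs_samples(l_bounds, u_bounds, n, seed=0):
    """Latin-hypercube samples in a box. The paper's maximin criterion
    (maximized minimum pairwise distance) can be approximated with the
    `optimization` option available in recent SciPy versions."""
    sampler = qmc.LatinHypercube(d=len(l_bounds), seed=seed)
    return qmc.scale(sampler.random(n), l_bounds, u_bounds)

# Illustrative: sample (t, a, b) over [0, 25] x [1.5, 2] x [0.02, 0.025].
train = lhs_samples([0.0, 1.5, 0.02], [25.0, 2.0, 0.025], n=1000)
validation = lhs_samples([0.0, 1.5, 0.02], [25.0, 2.0, 0.025], n=500, seed=1)
```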

Model validation
We use SVR with kernel functions (2nd- and 3rd-order polynomial and radial basis function), kNN, random forest, boosting, VKOGA, and SINDy as regression models to approximate the reduced velocity. In particular, for each regression method, we vary the model hyperparameters and plot the relative training error and validation error, where the relative error is defined as

$$\text{relative error} = \frac{\|\hat{f}_r - f_r\|_2}{\|f_r\|_2}, \tag{18}$$

evaluated over the training or validation set. We then plot the learning curve of each regression method and compare the performance of each model on training and validation data over a varying number of training instances in Appendix A.2. By properly choosing the hyperparameters and the number of training instances, our regression models can effectively balance bias and variance.

Simulation of the surrogate ROM
We can now solve the problem using the surrogate model along the trajectory of the dynamical system. After applying time integration to the regression-based ROM, we compute the relative error of the proposed models as a function of time. We investigate backward Euler (solved with either Newton-Raphson or fixed-point iteration) as the implicit method and 4th-order Runge-Kutta as the explicit method. Let $x(t)$, $\tilde{x}_b(t)$, and $\tilde{x}(t)$ denote the solutions of the FOM, the Galerkin ROM, and the non-intrusive ROM, respectively. We define the relative errors with respect to the FOM and the Galerkin ROM as

$$e_{\text{FOM}}(t) = \frac{\|\tilde{x}(t) - x(t)\|_2}{\|x(t)\|_2}, \qquad e_{\text{ROM}}(t) = \frac{\|\tilde{x}(t) - \tilde{x}_b(t)\|_2}{\|\tilde{x}_b(t)\|_2},$$

and the corresponding averaged relative errors $\bar{e}_{\text{FOM}}$ and $\bar{e}_{\text{ROM}}$ as their means over the entire time domain. Let $t_{\text{FOM}}$, $t_{\text{ROM}}$, and $\tau$ denote the running times of the FOM, the Galerkin ROM, and the non-intrusive ROM, respectively, and define the relative running times with respect to the FOM and the Galerkin ROM as $\tau / t_{\text{FOM}}$ and $\tau / t_{\text{ROM}}$.

The following are the simulation results from the backward Euler method with $N_t = 800$. Figure 2 plots the state-space error with respect to the FOM and ROM using the backward Euler integrator. As the validation results predict, SVR3 and SINDy perform better than the other models, achieving a relative ROM error below 1e-4 over the entire time domain, while the relative error with respect to the FOM remains well bounded. Figure 3 plots the Pareto frontier of error as a function of relative running time using the backward Euler integrator. For differentiable models, the relative time is calculated using the less expensive approach, i.e., Newton's method. By comparison, SINDy requires much less relative time than SVR3 at a comparable level of relative error with respect to both the FOM and the ROM.

We next examine simulation results from the 4th-order Runge-Kutta method with $N_t = 200$. Figure 4 shows the state-space error with respect to the FOM and ROM using the Runge-Kutta integrator. SVR2, SVR3, and SINDy have comparable performance, with a larger ROM error than with the backward Euler solver. We notice that the random forest model begins to diverge after $t > 10$ in the explicit scheme; this is consistent with its performance in the model evaluation in Appendix A.1. Figure 5 plots the Pareto frontier of error with respect to relative running time using the Runge-Kutta integrator. VKOGA has the smallest relative time in both the FOM and ROM comparisons; SINDy requires slightly more running time but outperforms VKOGA in accuracy. Table 2 summarizes the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the Runge-Kutta solver. For a fair comparison, all the ROM and FOM solutions are computed at the verified Runge-Kutta time step. As shown in the table, the SVR-based models, e.g., SVR2 and SVR3, yield the smallest relative errors; however, their computational cost exceeds that of the FOM. Note that the non-intrusive ROM (VKOGA) can speed up the solver by 6.9× relative to the FOM and 3.3× relative to the Galerkin ROM at relative errors of 0.0353 and 0.0164, respectively.
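These metrics are straightforward to compute from stored trajectories; a small sketch (states stored as columns over time, names illustrative):

```python
import numpy as np

def relative_errors(X_fom, X_gal, X_sur):
    """Time-wise and time-averaged relative errors of the surrogate ROM
    with respect to the FOM and the Galerkin ROM."""
    e_fom = np.linalg.norm(X_sur - X_fom, axis=0) / np.linalg.norm(X_fom, axis=0)
    e_rom = np.linalg.norm(X_sur - X_gal, axis=0) / np.linalg.norm(X_gal, axis=0)
    return e_fom, e_rom, e_fom.mean(), e_rom.mean()
```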

2D convection-diffusion equation

Data collection
Similar to the 1D case, we first verify the appropriate time step $\Delta t$ for solving the ODE. We collect the solutions for a sequence of numbers of time steps $N_t = [25, 50, 100, 200, 400, 800, 1600, 3200, 6400]$ for (i) the explicit Runge-Kutta and (ii) the implicit backward Euler integrator. The verification results in Figure 7 show that $N_t = 200$ is a reasonable number of time steps for Runge-Kutta and $N_t = 800$ for the backward Euler method. During the offline stage, we run four full simulations corresponding to the corner parameters of the space $[9, 10]^2$. Then, the training data are sampled from a Latin hypercube to better cover the parameter space: $N_{\text{training}}$ and $N_{\text{validation}}$ instances of the state, time, and parameters are generated following the criterion that the minimum distances between the data points are maximized. We use the default sizes of the training set, $N_{\text{training}} = 1000$, and of the validation set, $N_{\text{validation}} = 500$. The reduced vector field $f_r$ is then computed for each input pair $(\hat{x}, t; \mu)$. In the training and validation stages, we regress the reduced vector field $f_r$ on the inputs $(\hat{x}, t; \mu)$; in the test stage, we evaluate the ROM with the parameters fixed to $(\mu_1, \mu_2) = (9.5, 9.5)$.

Model validation
We report the performance of SVR (2nd- and 3rd-order polynomial and RBF kernels), kNN, random forest, boosting, VKOGA, and SINDy as regression models to approximate the reduced velocity. In particular, as in Section 4.1.2, for each regression method, we vary the model hyperparameters and plot the relative training and validation errors, with the relative error defined by Equation (18). Similarly, we plot the learning curve of each regression method and compare the performance of each model on training and validation data over a varying number of training instances in Appendix A.2. We aim to balance bias and variance in each regression model by properly choosing hyperparameters and the number of training instances.

Simulation of the surrogate ROM
We can now solve this 2D problem using the surrogate model along the trajectory of the dynamical system. After applying time integration to the regression-based ROM, we compute the relative error of the proposed models as a function of time. As in Section 4.1.3, we investigate both the backward Euler and Runge-Kutta integrators.

The following are the simulation results from the backward Euler method with $N_t = 800$. Figure 8 plots the state-space error with respect to the FOM and ROM using the backward Euler integrator. VKOGA outperforms the other models, achieving a relative ROM error below 6e-2 over the entire time domain, and its accuracy is closest to the Galerkin ROM. Figure 9 plots the Pareto frontier of error as a function of relative running time using the backward Euler integrator; VKOGA performs best in terms of both accuracy and time efficiency when compared with the Galerkin ROM. Table 3 reports the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the backward Euler integrator for the 2D convection-diffusion equation. For a fair comparison, all the ROM and FOM solutions are computed at the verified backward Euler time step. Note that the non-intrusive ROM, e.g., VKOGA with Newton's method, can improve the solve time by three orders of magnitude over the FOM and by 111.2× compared to the Galerkin ROM, at relative errors of 0.0059 and 0.0041, respectively.

We next examine the simulation results from the 4th-order Runge-Kutta method with $N_t = 200$. Figure 10 shows the state-space error with respect to the FOM and ROM using the Runge-Kutta integrator. SVR2, SVR3, VKOGA, and SINDy have comparable performance and result in a smaller ROM error than with the backward Euler solver. We notice that the random forest, boosting, and kNN models begin to diverge in the second half of the time domain. Figure 11 plots the Pareto frontier of error as a function of relative running time using the Runge-Kutta integrator; VKOGA and SINDy outperform the other models in terms of both accuracy and time cost, with relative errors compared to the Galerkin ROM and FOM below 1e-2. Table 4 presents the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the Runge-Kutta integrator for the 2D convection-diffusion equation. For a fair comparison, all the ROM and FOM solutions are computed at the verified Runge-Kutta time step. Note that the non-intrusive ROM is significantly more computationally efficient than the Galerkin ROM: VKOGA using Runge-Kutta can speed up the solver by 3406.9× relative to the FOM and 468.3× relative to the Galerkin ROM at relative errors of 0.0083 and 0.0069, and SINDy using Runge-Kutta can accelerate the solver by 2524.1× over the FOM and 347.0× compared to the Galerkin ROM at relative errors of 0.0077 and 0.0066, respectively.

Conclusions
In this study, we demonstrate the effectiveness of a data-driven model reduction method for solving two types of parameterized PDEs non-intrusively, in both explicit and implicit time integration schemes. The approach avoids the cost and intrusiveness of querying the full-order model and reduced-order model in the simulation by approximating the low-dimensional operators using regression methods. In the offline stage, we train the regression models using state-of-the-art machine-learning techniques and dynamics-specific learning methods in a reduced-order architecture; in the online stage, the surrogate ROM then solves problems efficiently (orders of magnitude in speedup) using the learned reduced operators. The proposed approach speeds up the full-order model simulation by one order of magnitude for the 1D Burgers' equation and three orders of magnitude for the 2D convection-diffusion equation.
Among the machine learning models we evaluate, VKOGA and SINDy distinguish themselves from the other models, delivering a superior cost-versus-error trade-off.
This work suggests a number of important extensions and future directions. First, it will be interesting to investigate the effectiveness of the approach on different types of PDEs and more complex nonlinear dynamical systems, e.g., the Euler equations. Moreover, for nonlinear Lagrangian dynamical systems, structure-preserving approximations should be developed for all reduced Lagrangian ingredients in the model reduction. Rather than applying a Galerkin projection to obtain the ROM, one can alternatively employ least-squares Petrov-Galerkin (LSPG) projection [Carlberg et al., 2011, 2013, 2017], which requires a regression method predicting non-negative values. The growing intersection of dynamical systems and data science is driving innovations in the prediction of complex systems [Brunton and Kutz, 2019]. With a deeper understanding of neural networks, it may be possible to generalize the non-intrusive approach to another level of accuracy. Furthermore, physics-informed models [Raissi et al., 2019] may accelerate learning from data and add interpretability to existing approaches. This framework can also be combined with distributed systems to enable parallelism for extreme-scale high-performance computing.

A Regression models and tuning A.1 Special regression models
In this section, we detail two non-traditional machine learning models, VKOGA and SINDy, in addition to the SVR, kNN, random forest, and boosting models.
A.1.1 Vectorial kernel orthogonal greedy algorithm (VKOGA)

Developed from kernel-based methods, VKOGA assumes that a common subspace can be used for every component of the vector-valued response. This significantly reduces the training and evaluation time when the number of kernels in the resulting regression model is large. VKOGA constructs the regression model

$$h(\hat{x}) = \sum_{m=1}^{n_{\text{VKOGA}}} \alpha_m K(\hat{x}_m, \hat{x}),$$

where $K : \mathbb{R}^{N_x} \times \mathbb{R}^{N_x} \to \mathbb{R}$ denotes a kernel function and the $\alpha_m \in \mathbb{R}^{N_x}$ are vector-valued coefficients, so that the individual regression models $h_i$, $i = 1, \cdots, N_x$, share the same kernel centers. VKOGA uses a greedy algorithm to compute the kernel functions $K(\hat{x}_m, \cdot)$. The greedy algorithm determines kernel centers from $\Omega = \{\hat{x}^1, \cdots, \hat{x}^{n_{\text{train}}}\}$. Let the initial set be $\Omega_0 = \emptyset$; then, at stage $m$, choose

$$\hat{x}_m = \arg\max_{\hat{x} \in \Omega} \|\phi_{m-1}(\hat{x})\|,$$

where $\phi_{m-1}(\hat{x})$ denotes the orthogonal remainder of $K(\hat{x}, \cdot)$ with respect to the reproducing kernel Hilbert space spanned by $K(\hat{x}_1, \cdot), \cdots, K(\hat{x}_{m-1}, \cdot)$. The kernel centers are then updated as $\Omega_m = \Omega_{m-1} \cup \{\hat{x}_m\}$. Finally, the coefficients $\alpha_m$, $m = 1, \cdots, n_{\text{VKOGA}}$, are determined by a least-squares fit to the training data. In the numerical experiments, we apply the Gaussian RBF kernel, and the hyperparameter is the number of kernel functions used.
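A simplified sketch in the same spirit (an f-greedy variant with a Gaussian kernel and shared centers; illustrative, not the reference VKOGA implementation):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||X_i - Y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def greedy_kernel_fit(X, Y, n_centers, sigma=1.0):
    """Greedily pick centers where the current residual is largest, then
    refit vector-valued coefficients by least squares (centers shared by
    all output components)."""
    centers = [int(np.argmax(np.linalg.norm(Y, axis=1)))]
    for _ in range(n_centers - 1):
        G = gaussian_kernel(X, X[centers], sigma)
        alpha, *_ = np.linalg.lstsq(G, Y, rcond=None)
        residual = np.linalg.norm(Y - G @ alpha, axis=1)
        residual[centers] = 0.0                    # do not re-select a center
        centers.append(int(np.argmax(residual)))
    G = gaussian_kernel(X, X[centers], sigma)
    alpha, *_ = np.linalg.lstsq(G, Y, rcond=None)
    return X[centers], alpha                       # predict: K(X_new, centers) @ alpha
```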

A.1.2 Sparse identification of nonlinear dynamics (SINDy)
The sparse identification of nonlinear dynamics (SINDy) method [Brunton et al., 2016] identifies nonlinear dynamical systems from measurement data, seeking to approximate the vector field $f$ by a generalized linear model. This architecture avoids overfitting by identifying a parsimonious model, which is more interpretable than black-box machine learning. Although SINDy was originally devised for continuous-time dynamics, it can be extended to discrete-time systems. In particular, SINDy yields a model of the form

$$\hat{f}_i(\hat{x}) = \sum_{k=1}^{n_{\text{SINDy}}} \theta_{i,k}\, p_k(\hat{x}),$$

where $p_k : \mathbb{R}^{N_x} \to \mathbb{R}$, $k = 1, \cdots, n_{\text{SINDy}}$, denotes a "library" of candidate basis functions. The relevant terms that are active in the dynamics are found using sparse regression that penalizes the number of active functions from the library $P(\hat{x}) = [p_1(\hat{x}), \cdots, p_{n_{\text{SINDy}}}(\hat{x})]$:

$$\theta_i = \arg\min_{\theta}\; \big\|\dot{\hat{x}}_i - P(\hat{x})\,\theta\big\|_2^2 + \alpha\,\|\theta\|_1,$$

where $\|\cdot\|_1$ promotes sparsity in the coefficient vector $\theta_i$ and the parameter $\alpha$ balances low model complexity with accuracy. The least absolute shrinkage and selection operator (LASSO) [Tibshirani, 1996] or sequentially thresholded least squares (STLS) [Brunton et al., 2016] can be used to determine a sparse set of coefficients $(\theta_1, \cdots, \theta_{n_{\text{SINDy}}})$. In the numerical experiments, we consider linear and quadratic functions in the library, i.e., $p_k \in \{x_1, \cdots, x_{N_x}, x_1 x_1, x_1 x_2, \cdots, x_{N_x} x_{N_x}\}$. The hyperparameters in this approach consist of the prescribed basis functions.
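A compact sketch of the STLS solver and the quadratic library used here (the threshold and iteration count are illustrative):

```python
import numpy as np

def quadratic_library(X):
    """Candidate basis [x_1, ..., x_n, all quadratic products x_i x_j, i <= j]."""
    n = X.shape[1]
    quads = [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack([X] + quads)

def stls(Theta, dXdt, threshold=0.05, n_iters=10):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, refit on the surviving terms.
    Theta: (m, p) library matrix; dXdt: (m, n) velocity samples."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for i in range(dXdt.shape[1]):        # refit each component separately
            big = ~small[:, i]
            if big.any():
                Xi[big, i], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, i], rcond=None)
    return Xi  # sparse coefficient matrix: dXdt ~= Theta @ Xi
```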
Table A.1 summarizes the computational complexities of all the considered regression models. Note that the relevant hyperparameter varies across models: $N_{\text{trees}}$ denotes the number of decision trees in the random forest, $N_{\text{learners}}$ the number of weak learners in the boosting method, $N_{\text{functions}}$ the number of kernel functions in VKOGA, and $N_{\text{bases}}$ the number of bases in SINDy with 2nd-order polynomials.

A.2 Cross-validation and hyperparameter tuning
For each regression model proposed in Section 3.2, we use cross-validation to prevent overfitting and select hyperparameters. Figure A.2(d) shows that within 40 trees, the validation error of the random forest model is still relatively large; moreover, there is a large gap between the training and validation errors, which implies the random forest model may suffer from overfitting on the given data. We select $N_{\text{trees}} = 15$ in this study. As shown in Figure A.2(e), the validation error of boosting approaches its minimum at $N_{\text{learners}} = 40$, and thus we select 40 weak learners. Figure A.2(f) shows that choosing the number of neighbors $K \approx 4$ minimizes the validation error; thus, $K = 4$ is a good hyperparameter to balance bias and variance. In Figure A.2(g) we observe that the more kernel functions we use, the smaller the training and validation errors we obtain. In this study, we select 500 kernel functions in VKOGA.
We examine the performance of the models as the size of the training sample increases in Figure A.2. Figure A.2(a) shows that with 200 training instances, the SVR2 model has a very small validation error, while the SVR3 model requires a larger training set. For SVRrbf, the more training instances we have, the smaller the validation error we obtain; this is because the resolution of the radial basis function is determined by the density of the input parameter space. Figure A.2(d) shows that the more data are input, the smaller the relative error: intuitively, more data give higher density in the input parameter space, and thus each leaf of a decision tree provides finer coverage of the test data. Similarly, for the kNN model in Figure A.2(f), close neighbors give a better representation of the test data; as a result, more data can effectively reduce the variance and mitigate the overfitting issue. Figure A.2(e) shows that with approximately 1000 training samples, the training error and validation error begin to approach 2e-3, which implies that a relatively large number of samples is required to constrain the variance of the boosting method. Figure A.2(g) shows that with 1000 training instances, the training and validation errors begin to converge for VKOGA, while SINDy (2nd-order polynomials) requires only a small number of training instances to achieve a very accurate prediction, as seen in Figure A.2(h). In summary, we select the following hyperparameters for the 1D Burgers' implementation: $\epsilon = 10^{-4}$ for SVR2, $\epsilon = 10^{-5}$ for SVR3, $\epsilon = 10^{-3}$ for SVRrbf, $N_{\text{trees}} = 15$ for random forest, $N_{\text{learners}} = 40$ for boosting, $K = 6$ for k-nearest neighbors, and 500 kernel functions for VKOGA. Figure A.3 summarizes the overall performance of the above models for the 1D Burgers' and 2D convection-diffusion equations; all the proposed models are validated using the selected hyperparameters. We notice that SVR with the 3rd-order polynomial kernel outperforms the other models for the 1D Burgers' equation, which can be related to the quadratic nonlinearity in the discretized PDE. VKOGA performs better than RF, boosting, and kNN, with a relative error of less than 1e-4, while SINDy reaches a relative error of 3e-8. For the 2D convection-diffusion problem, the SVR models again perform better than RF, boosting, and kNN, and SVR3 outperforms the other two kernel functions. VKOGA has the best accuracy, with a relative error of approximately 1e-5, while SINDy reaches a relative error of 1e-3. Note that, for consistency, we select fixed training and validation sample sizes for all the considered regression models; however, as illustrated in Figure A.2(h), SINDy can reach the same level of accuracy with a small number of training and validation samples.