NONLINEAR MODEL REDUCTION VIA DISCRETE EMPIRICAL INTERPOLATION


SIAM J. SCI. COMPUT. Vol. 32, No. 5, pp. 2737–2764. © 2010 Society for Industrial and Applied Mathematics

SAIFON CHATURANTABUT AND DANNY C. SORENSEN

Abstract. A dimension reduction method called discrete empirical interpolation is proposed and shown to dramatically reduce the computational complexity of the popular proper orthogonal decomposition (POD) method for constructing reduced-order models for time dependent and/or parametrized nonlinear partial differential equations (PDEs). In the presence of a general nonlinearity, the standard POD-Galerkin technique reduces dimension in the sense that far fewer variables are present, but the complexity of evaluating the nonlinear term remains that of the original problem. The original empirical interpolation method (EIM) is a modification of POD that reduces the complexity of evaluating the nonlinear term of the reduced model to a cost proportional to the number of reduced variables obtained by POD. We propose a discrete empirical interpolation method (DEIM), a variant that is suitable for reducing the dimension of systems of ordinary differential equations (ODEs) of a certain type. As presented here, it is applicable to ODEs arising from finite difference discretization of time dependent PDEs and/or parametrically dependent steady state problems. However, the approach extends to arbitrary systems of nonlinear ODEs with minor modification. Our contribution is a greatly simplified description of the EIM in a finite-dimensional setting that possesses an error bound on the quality of approximation. An application of DEIM to a finite difference discretization of the one-dimensional FitzHugh–Nagumo equations is shown to reduce the dimension from 1024 to order 5 variables with negligible error over a long-time integration that fully captures nonlinear limit cycle behavior. We also demonstrate applicability in higher spatial dimensions with similar state space dimension reduction and accuracy results.

Key words. nonlinear model reduction, proper orthogonal decomposition, empirical interpolation methods, nonlinear partial differential equations

AMS subject classifications. 65L, 65M

DOI. 10.1137/090766498

1. Introduction. Model order reduction (MOR) seeks to reduce the computational complexity and computational time of large-scale dynamical systems by approximations of much lower dimension that can produce nearly the same input/output response characteristics. The method proposed here is concerned with dimension reduction of high-dimensional nonlinear ordinary differential equation (ODE) systems. Our approach applies to virtually any system of ODEs; however, systems arising from discretization of partial differential equations (PDEs) are primary examples. Dimension reduction of discretized time dependent and/or parametrized nonlinear PDEs is of great value in reducing computational times in many applications, including the neuron modeling and steady state flow problems presented here as illustrations. These discrete systems often must become very high dimensional to achieve the desired accuracy in the numerical solutions. We introduce a discrete empirical interpolation method (DEIM) to greatly improve the dimension reduction efficiency of proper orthogonal decomposition (POD) with Galerkin projection, a popular approach for constructing reduced-order models of these discrete systems.

Received by the editors July 8, 2009; accepted for publication (in revised form) May 3, 2010; published electronically September 7, 2010. This work was supported in part by NSF grant CCF and AFOSR grant FA. A brief synopsis of some of the results in this paper appeared in Proceedings of the 48th IEEE Conference on Decision and Control and the 28th Chinese Control Conference (CDC/CCC 2009).
Department of Computational and Applied Mathematics, MS-134, Rice University, 6100 Main Street, Houston, TX (sc3@rice.edu, sorensen@rice.edu).

The POD-Galerkin approach has provided reduced-order models of systems in numerous applications such as compressible flow [38], fluid dynamics [3], aerodynamics [9], and optimal control []. However, effective dimension reduction for the POD-Galerkin approach is usually limited to problems with linear or bilinear terms as in [,, 8, 33]. In fact, this limitation occurs when Galerkin projection is applied with any type of reduced basis, as noted, for example, in [8] for the case of the reduced basis proposed in [3]. Its success is limited to problems of linear elliptic and parabolic PDEs with affine parameters or low-order polynomial nonlinearities [3, 5, 6, 4, 9]. When a general nonlinearity is present, the cost of evaluating the projected nonlinear function still depends on the dimension of the original system, resulting in simulation times that hardly improve over the original system. In the finite element (FE) context, this inefficiency arises from the high computational complexity of repeatedly calculating the inner products required to evaluate the weak form of the nonlinearities, as discussed in [7, 9, 8]. In particular, in [8], Nguyen and Peraire discuss the limitations of such approaches and give a number of examples of equations involving nonpolynomial nonlinearities. Specifically, they study linear elliptic equations with nonaffine parameter dependence, nonlinear elliptic equations, and nonlinear time dependent convection-diffusion equations. They demonstrate for these examples that the standard POD-Galerkin approach does not admit the sort of precomputation that is possible with polynomial nonlinearities. They propose a reduced basis method with a best-points interpolation method (BPIM; see [7]) for selecting interpolation points.
Several approaches have been proposed to address the problem of reducing the complexity of evaluating the nonlinear term of the POD reduced model in the context of finite difference (FD) and finite volume (FV) discretization as well as differential-algebraic equations (e.g., in circuit simulation). Missing point estimation (MPE) was originally proposed in [] to improve the complexity of the POD-Galerkin reduced system from FV discretization, essentially, by solving only a subset of equations of the original model. A reduced system is obtained by first extracting certain equations corresponding to specially chosen spatial grid points and then projecting the extracted system onto the space spanned by the restricted POD with components/rows corresponding to only these selected grid points. This procedure can be viewed as performing the Galerkin projection onto the truncated POD basis via a specially constructed inner product, as defined in [5], which evaluates only at selected grid points instead of computing the usual L² inner product. Two heuristic methods for selecting these spatial grid points are introduced in the thesis [] (also in subsequent publications; see, e.g., [, 4, 3]) by aiming to minimize aliasing effects in using only partial spatial points. This was shown to be equivalent to a criterion for preserving the orthogonality of the restricted POD basis vectors, which is further translated into a criterion for controlling condition number growth. These grid point selection procedures were later improved by incorporating a greedy algorithm from [4]. The applications of the MPE method are primarily in the context of a linear time varying system arising from FV discretization of a nonlinear computational fluid dynamic model for a glass melting furnace [,, 4, 3]. It has also been used in modeling heat transfer in electrical circuits [4] and in subsurface flow simulation [].
Alternatively, techniques for approximating a nonlinear function can be used in conjunction with the POD-Galerkin projection method to overcome this computational inefficiency. There are a number of examples that use MOR approaches with nonlinear approximation based on precomputation of coefficients defining multilinear forms of polynomial nonlinearities followed by POD-Galerkin projection [4, 5, 3, 6, 6, ]. One of these approaches is found in the trajectory piecewise-linear (TPWL) approximation [34, 37], which is based on approximating a nonlinear function by a weighted sum of linearized models at selected points along a state trajectory. These linearization points are selected using prior knowledge from a training trajectory (or its approximation) of the full-order nonlinear system [36]. The TPWL approach has been successfully applied to several practical nonlinear systems, especially in circuit simulation [35, 36, 37, 4, 8]. However, there are still many nonlinear functions that may not be approximated well by low degree piecewise polynomials unless very many constituent polynomials are used. The DEIM approach proposed here approximates a nonlinear function by combining projection with interpolation. DEIM constructs specially selected interpolation indices that specify an interpolation-based projection to provide a nearly L² optimal subspace approximation to the nonlinear term without the expense of orthogonal projection. This approach is a discrete variant of the empirical interpolation method (EIM) introduced by Barrault, Maday, Nguyen, and Patera [7], which was originally posed in an empirically derived finite-dimensional function space. We were motivated to develop this DEIM variant to apply to arbitrary systems of ODEs regardless of their origin. For illustration purposes, we shall focus on FD discretized systems of time dependent and/or parametrized nonlinear PDEs. The procedure presented in this paper can also be applied to general nonlinear ODEs, including an FV discretized system and the system of coefficients from FE discretization.
Our DEIM approach is closely related to MPE in the sense that both methods employ a small selected set of spatial grid points to avoid evaluating, at every time step, the expensive L² inner products required for the nonlinearities. However, the fundamental procedures for constructing a reduced system and the algorithms for selecting a set of spatial grid points are different. While MPE focuses on reducing the number of equations and using a restricted inner product on the POD basis vectors, DEIM focuses on approximating each nonlinear function so that a certain coefficient matrix can be precomputed and, as a result, the complexity of evaluating the nonlinear term becomes proportional to the small number of selected spatial indices. Hence, the reduced system from the MPE procedure considers only a POD basis for the state variables, whereas the one from the DEIM procedure considers both a POD basis for the state variables and a POD basis related to each nonlinear term. The POD-DEIM approach is also closely related to the approach called interpolation of function snapshots suggested in [4] as an alternative to MPE for constructing a reduced system for a nonlinear circuit model. The main steps of both approaches are the same: the nonlinear approximation is computed by using some selected spatial points, and then Galerkin projection is applied to the system. However, a key difference is that in [4] the basis matrices used for spanning the unknowns (state variables) and the nonlinear function in the reduced system are obtained from a least-squares solution of the snapshot matrices, in such a way that the unknown coefficients of the resulting reduced system retain the original interpretation of state variables, instead of using basis matrices from SVD truncation as done in our POD-DEIM approach. No concrete algorithm was proposed in [4] for selecting indices (besides the ones used in MPE).
However, it was suggested in [4] to select them to minimize an upper bound of the approximation error, which is an idea similar to the one leading to our error bound for the DEIM approximation (see (3.8) and (3.9) in section 3.2).

More recently, Galbally et al. [7] applied the techniques of gappy POD, EIM, and BPIM to develop an approach to uncertainty quantification in a nonlinear combustion problem governed by an advection-diffusion-reaction PDE. The nonlinear term involved an exponential nonlinearity of Arrhenius type. In [7], there is a detailed explanation of why POD-Galerkin does not reduce the complexity of evaluating the nonlinear term. They also developed a masked projection framework, very similar to the projection methodology developed in this paper, that shows the similarity of the gappy POD, EIM, and BPIM approaches. Our discussion is organized as follows. Section 2 describes the problem setup. Dimension reduction via POD is reviewed in section 2.1, followed by a discussion of the fundamental complexity issue in section 2.2. The DEIM approximation is introduced in section 3. The key to complexity reduction is to replace the orthogonal projection of POD with the interpolation projection of DEIM in the same POD basis. An algorithm for selecting the interpolation indices used in the DEIM approximation is presented in section 3.1. Section 3.2 provides an error bound on this interpolatory approximation indicating that it is nearly as good as orthogonal projection. Section 3.3 illustrates the validity of this error bound and the high quality of the DEIM approximations with selected numerical examples. In section 3.4, we explain how to apply the DEIM approximation to nonlinear terms in POD reduced-order models of FD discretized systems, and then we extend this to general nonlinear ODEs in section 3.5. Finally, in section 4, we give computational evidence of DEIM effectiveness in two specific problem settings. DEIM applied to a 1024 variable discretization of the FitzHugh–Nagumo equations produced a 5 variable reduced-order model which was able to capture limit cycle behavior over a long-time integration.
Similar effectiveness is demonstrated for a two-dimensional steady state parametrically dependent flow problem. Throughout the discussion, we shall refer to a reduced-order system obtained directly from POD-Galerkin projection as the POD reduced system and the one obtained from the POD-Galerkin approach with the DEIM approximation as the POD-DEIM reduced system. The numerical examples given here were selected because they are simple and yet still present a challenge to model reduction. The purpose is to illustrate the main ideas without the complexity of the equations potentially obscuring them. In two recent papers, we have demonstrated the effectiveness of DEIM in two very different and far more complex applications. One is in neural modeling, where we reduce numerous examples of full Hodgkin–Huxley models of realistic spiking neurons []. The other is in two-phase miscible flow in porous media with varying Peclet number, both with and without chemistry at the interface of the different fluids [].

2. Problem formulation. The method we are about to develop is really a method for reducing the dimension of general large-scale ODE systems regardless of their origin. However, a considerable source of such systems is the semidiscretization of time dependent or parameter dependent PDEs. Thus, we shall develop this method in the context of FD discretized systems arising from two types of nonlinear PDEs which are used for our numerical examples in section 4. One is time dependent, and the other is a parametrized steady state problem. We explain how to handle general nonlinearities in section 3.5. An FD discretization of a scalar nonlinear PDE in one spatial variable results in a system of nonlinear ODEs of the form

(2.1) (d/dt) y(t) = A y(t) + F(y(t)),

with appropriate initial conditions. Here t ∈ [0, T] denotes time, y(t) = [y_1(t), ..., y_n(t)]^T ∈ R^n, A ∈ R^{n×n} is a constant matrix, and F is a nonlinear function evaluated at y(t) componentwise, i.e., F(y(t)) = [F(y_1(t)), ..., F(y_n(t))]^T, with F : I → R for I ⊆ R. The matrix A is the discrete approximation of the linear spatial differential operator, and F is a nonlinear function of a scalar variable. Steady nonlinear PDEs (in several spatial dimensions) might similarly give rise to a corresponding FD discretized system of the form

(2.2) A y(μ) + F(y(μ)) = 0,

with the corresponding Jacobian

(2.3) J(y(μ)) := A + J_F(y(μ)),

where y(μ) = [y_1(μ), ..., y_n(μ)]^T ∈ R^n, with A and F defined as for (2.1). Note that, from (2.3), the Jacobian of the nonlinear function is a diagonal matrix given by

(2.4) J_F(y(μ)) = diag{F′(y_1(μ)), ..., F′(y_n(μ))} ∈ R^{n×n},

where F′ denotes the first derivative of F. The parameter μ ∈ D ⊆ R^d, d = 1, 2, ..., generally represents the system's configuration in terms of its geometry, material properties, etc. To simplify exposition, we have considered time dependence and parametric dependence separately. Note, however, that the two may be merged to address time dependent parametrized systems. For example, if one wished to allow for a variety of initial conditions in a time dependent problem, including them as parameters would be a possibility. The dimension n of (2.1) and (2.2) reflects the number of spatial grid points used in the FD discretization. As noted, the dimension n can become extremely large when high accuracy is required. Hence, solving these systems becomes computationally intensive or possibly infeasible. Approximate models with much smaller dimensions are needed to recover efficiency. Projection-based techniques are commonly used for constructing a reduced-order system. They construct a reduced-order system of order k ≪ n that approximates the original system from a subspace spanned by a reduced basis of dimension k in R^n.
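As a concrete instance of the full-order form (2.1), the following is a minimal NumPy sketch; the grid size, the centered-difference Laplacian with Dirichlet boundary conditions, and the cubic nonlinearity are illustrative choices of ours, not the paper's test problems.

```python
import numpy as np

n = 100                    # number of interior grid points (illustrative)
h = 1.0 / (n + 1)          # uniform spacing on (0, 1)

# A: second-order centered-difference 1-D Laplacian with homogeneous
# Dirichlet boundary conditions (discrete linear spatial operator)
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

def F(y):
    # componentwise scalar nonlinearity F : R -> R applied entrywise
    # (a cubic stand-in, not the FitzHugh-Nagumo right-hand side)
    return y - y**3

def rhs(y):
    # full-order right-hand side of (2.1): d/dt y = A y + F(y)
    return A @ y + F(y)
```

Any stiff ODE integrator applied to `rhs` then solves the full-order system; the point of the rest of the paper is that this costs O(n) work per nonlinear evaluation.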
We use Galerkin projection as the means for dimension reduction. In particular, let V_k ∈ R^{n×k} be a matrix whose orthonormal columns are the vectors in the reduced basis. Then, by replacing y(t) in (2.1) by V_k ỹ(t), ỹ(t) ∈ R^k, and projecting the system (2.1) onto V_k, the reduced system of (2.1) is of the form

(2.5) (d/dt) ỹ(t) = V_k^T A V_k ỹ(t) + V_k^T F(V_k ỹ(t)),

with à := V_k^T A V_k. Similarly, the reduced-order system of (2.2) is of the form

(2.6) V_k^T A V_k ỹ(μ) + V_k^T F(V_k ỹ(μ)) = 0,

with corresponding Jacobian

(2.7) J̃(ỹ(μ)) := Ã + V_k^T J_F(V_k ỹ(μ)) V_k,

where à = V_k^T A V_k ∈ R^{k×k}. The choice of the reduced basis clearly affects the quality of the approximation. Techniques for constructing a reduced basis use a common observation that, for a particular system, the solution space is often attracted to a low-dimensional manifold. POD constructs a set of global basis functions from a singular value decomposition (SVD) of snapshots, which are discrete samples of trajectories associated with a particular set of boundary conditions and inputs. It is expected that the samples will be on or near the attractive manifold. Once the reduced model has been constructed from this reduced basis, it may be used to obtain approximate solutions for a variety of initial conditions and parameter settings, provided the set of samples is rich enough. This empirically derived basis is clearly dependent on the sampling procedure. Among the various techniques for obtaining a reduced basis, POD constructs a reduced basis that is optimal in the sense that a certain approximation error concerning the snapshots is minimized. Thus, the space spanned by the POD basis often gives an excellent low-dimensional approximation. The POD approach is therefore used here as a starting point.

2.1. Proper orthogonal decomposition (POD). POD is a method for constructing a low-dimensional approximate representation of a subspace in Hilbert space. It is essentially the same as the SVD in a finite-dimensional or Euclidean space. It efficiently extracts the basis elements that contain characteristics of the space of expected solutions of the PDE. The POD basis in Euclidean space may be specified formally as follows. Given a set of snapshots {y_1, ..., y_{n_s}} ⊂ R^n (recall snapshots are samples of trajectories), let Y = span{y_1, ..., y_{n_s}} ⊆ R^n and r = dim(Y). A POD basis of dimension k < r is a set of orthonormal vectors {φ_i}_{i=1}^k ⊂ R^n whose linear span best approximates the space Y.
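The POD construction just described (an SVD of the snapshot matrix) together with the Galerkin reduction (2.5) can be sketched in a few lines of NumPy. The dimensions, the random snapshots, and the tanh nonlinearity are illustrative assumptions of ours; the sketch also checks the optimality identity, stated below, that the total squared snapshot projection error equals the sum of the discarded squared singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_s, k = 80, 40, 6                      # illustrative dimensions

# synthetic full-order operator, nonlinearity, and snapshot matrix
# (columns of Y are snapshots; in practice they come from simulating (2.1))
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
F = np.tanh                                # stand-in componentwise nonlinearity
Y = rng.standard_normal((n, n_s))

# POD basis: leading k left singular vectors of the snapshot matrix
V, S, Wt = np.linalg.svd(Y, full_matrices=False)
Vk = V[:, :k]

# optimality: total squared projection error of the snapshots equals
# the sum of the squared discarded singular values
err2 = np.linalg.norm(Y - Vk @ (Vk.T @ Y), 'fro')**2
tail2 = np.sum(S[k:]**2)

# POD-Galerkin reduced system (2.5): the linear part A_tilde is
# precomputed once (k x k), but the nonlinear term must still lift
# to R^n, evaluate F there, and project back -- the cost issue that
# section 2.2 discusses and DEIM removes
A_tilde = Vk.T @ A @ Vk

def reduced_rhs(y_tilde):
    return A_tilde @ y_tilde + Vk.T @ F(Vk @ y_tilde)
```

Note that `reduced_rhs` has only k unknowns, yet each call still performs O(nk) matrix-vector work plus n scalar evaluations of F.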
The basis set {φ_i}_{i=1}^k solves the minimization problem

(2.8) min_{{φ_i}_{i=1}^k} ∑_{j=1}^{n_s} ‖ y_j − ∑_{i=1}^{k} (y_j^T φ_i) φ_i ‖_2^2 subject to φ_i^T φ_j = δ_ij, i, j = 1, ..., k,

where δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j. It is well known that the solution to (2.8) is provided by the set of the left singular vectors of the snapshot matrix Y = [y_1, ..., y_{n_s}] ∈ R^{n×n_s}. In particular, suppose the SVD of Y is

Y = V Σ W^T,

where V = [v_1, ..., v_r] ∈ R^{n×r} and W = [w_1, ..., w_r] ∈ R^{n_s×r} are orthogonal and Σ = diag(σ_1, ..., σ_r) ∈ R^{r×r}, with σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0. The rank of Y is r ≤ min(n, n_s). Then the POD basis, i.e., the optimal solution of (2.8), is {v_i}_{i=1}^k. The minimum 2-norm error from approximating the snapshots using the POD basis is then given by

∑_{j=1}^{n_s} ‖ y_j − ∑_{i=1}^{k} (y_j^T v_i) v_i ‖_2^2 = ∑_{i=k+1}^{r} σ_i^2.

We refer the reader to [3] for more details on the POD basis in general Hilbert space. The choice of the snapshot ensemble is a crucial factor in constructing a POD basis. This choice can greatly affect the approximation of the original solution space,

but it is a separate issue and will not be discussed here. POD is a popular approach because it works well in many applications and often provides an excellent reduced basis. However, as discussed in the introduction, when POD is used in conjunction with Galerkin projection, effective dimension reduction is usually limited to linear terms or low-order polynomial nonlinearities. Systems with general nonlinearities need additional treatment, which will be presented in section 3.

2.2. A problem with complexity of the POD-Galerkin approach. This section illustrates the computational inefficiency that occurs in solving the reduced-order system directly obtained from the POD-Galerkin approach. Equation (2.5) has the nonlinear term

(2.9) Ñ(ỹ) := V_k^T F(V_k ỹ(t)),

where V_k^T is k×n and F(V_k ỹ(t)) is n×1. Ñ(ỹ) has a computational complexity that depends on n, the dimension of the original full-order system (2.1). It requires on the order of nk flops for the matrix-vector multiplications, and it also requires a full evaluation of the nonlinear function F at the n-dimensional vector V_k ỹ(t). In particular, suppose the complexity for evaluating the nonlinear function F with q components is O(α(q)), where α is some function of q. Then the complexity for computing (2.9) is roughly O(α(n) + 4nk). As a result, solving this system might still be as costly as solving the original system. Here, the 4nk flops are a result of the two matrix-vector products required to form the argument of F and then to form the projection. We count both the multiplications and additions as flops. The same inefficiency occurs when solving the reduced-order system (2.6) for the steady nonlinear PDEs by Newton iteration. At each iteration, besides the nonlinear term of the form (2.9), the Jacobian of the nonlinear term (2.7) must also be computed with a computational cost that still depends on the full-order dimension n:

(2.10) J̃_F(ỹ(μ)) := V_k^T J_F(V_k ỹ(μ)) V_k,

where V_k^T is k×n, J_F(V_k ỹ(μ)) is n×n, and V_k is n×k. The computational complexity for computing (2.10) is roughly O(α(n) + n²k + nk² + nk) if we treat J_F as dense. The n²k term becomes O(nk) if J_F is sparse or diagonal.

3. Discrete empirical interpolation method (DEIM). An effective way to overcome the difficulty described in section 2.2 is to approximate the nonlinear function in (2.5) or (2.6) by projecting it onto a subspace that approximates the space generated by the nonlinear function and that is spanned by a basis of dimension m ≪ n. This section considers the nonlinear functions F(V_k ỹ(t)) and F(V_k ỹ(μ)) of the reduced-order systems (2.5) and (2.6), respectively, represented by f(τ), where τ = t or μ. The approximation from projecting f(τ) onto the subspace spanned by the basis {u_1, ..., u_m} ⊂ R^n is of the form

(3.1) f(τ) ≈ U c(τ),

where U = [u_1, ..., u_m] ∈ R^{n×m} and c(τ) is the corresponding coefficient vector. To determine c(τ), we select m distinguished rows from the overdetermined system f(τ) = U c(τ). In particular, consider the matrix

(3.2) P = [e_{℘_1}, ..., e_{℘_m}] ∈ R^{n×m},

where e_{℘_i} = [0, ..., 0, 1, 0, ..., 0]^T ∈ R^n is the ℘_i-th column of the identity matrix I_n ∈ R^{n×n}, for i = 1, ..., m. Suppose P^T U is nonsingular. Then the coefficient vector c(τ) can be determined uniquely from

(3.3) P^T f(τ) = (P^T U) c(τ),

and the final approximation from (3.1) becomes

(3.4) f(τ) ≈ U c(τ) = U (P^T U)^{-1} P^T f(τ).

To obtain the approximation (3.4), we must specify
1. the projection basis {u_1, ..., u_m} and
2. the interpolation indices {℘_1, ..., ℘_m} used in (3.2).
The projection basis {u_1, ..., u_m} for the nonlinear function f is constructed by applying POD to the nonlinear snapshots obtained from the original full-order system. These nonlinear snapshots are the sets {F(y(t_1)), ..., F(y(t_{n_s}))} and {F(y(μ_1)), ..., F(y(μ_{n_s}))} obtained from (2.1) and (2.2), respectively. Note that these values are needed to generate the trajectory snapshots Y and hence represent no additional cost other than the SVD required to obtain U. The interpolation indices ℘_1, ..., ℘_m, used for determining the coefficient vector c(τ) in the approximation (3.1), are selected inductively from the basis {u_1, ..., u_m} by the DEIM algorithm introduced in the next section.

3.1. DEIM: Algorithm for interpolation indices. DEIM is a discrete variant of the empirical interpolation method (EIM) proposed in [7] for constructing an approximation of a nonaffine parametrized function with spatial variable defined in a continuous bounded domain Ω. The DEIM algorithm treats the continuous domain Ω as a finite set of discrete points in Ω. The DEIM algorithm selects an index corresponding to one of these discrete spatial points at each iteration to limit growth of an error bound. This provides a derivation of a global error bound as presented in section 3.2. For general systems of nonlinear ODEs that are not FD approximations to PDEs, this spatial connotation of indices will no longer exist. However, the formal procedure remains unchanged.

Algorithm 1. DEIM
INPUT: {u_l}_{l=1}^m ⊂ R^n linearly independent
OUTPUT: ℘ = [℘_1, ..., ℘_m]^T ∈ R^m
1: [|ρ|, ℘_1] = max{|u_1|}
2: U = [u_1], P = [e_{℘_1}], ℘ = [℘_1]
3: for l = 2 to m do
4:   Solve (P^T U) c = P^T u_l for c
5:   r = u_l − U c
6:   [|ρ|, ℘_l] = max{|r|}
7:   U ← [U u_l], P ← [P e_{℘_l}], ℘ ← [℘; ℘_l]
8: end for
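The greedy selection above translates directly into a few lines of NumPy; the following is an illustrative implementation (the function name, the random orthonormal test basis, and the usage example are ours, not from the paper).

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection (as in Algorithm 1) for a basis
    matrix U (n x m) with linearly independent columns."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]          # line 1: largest entry of u_1
    for l in range(1, m):                            # lines 3-8
        # line 4: interpolation coefficients of u_l at the current indices
        c = np.linalg.solve(U[idx, :l], U[idx, l])
        r = U[:, l] - U[:, :l] @ c                   # line 5: residual (zero at idx)
        idx.append(int(np.argmax(np.abs(r))))        # line 6: largest residual entry
    return np.array(idx)

# usage: build the DEIM approximation f_hat = U (P^T U)^{-1} P^T f,
# which reads only the m selected entries of f
rng = np.random.default_rng(1)
n, m = 50, 8
U = np.linalg.qr(rng.standard_normal((n, m)))[0]     # orthonormal test basis
p = deim_indices(U)
f = rng.standard_normal(n)
f_hat = U @ np.linalg.solve(U[p, :], f[p])
```

Note that `np.argmax` returns the smallest index in case of a tie, matching the convention stated for the MATLAB `max` below, and that row-selection with `U[p, :]` plays the role of multiplication by P^T.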

The notation max in Algorithm 1 is the same as the function max in MATLAB. Thus, [|ρ|, ℘_l] = max{|r|} implies |ρ| = |r_{℘_l}| = max_{i=1,...,n} {|r_i|}, with the smallest index taken in case of a tie. Note that we define ρ = r_{℘_l} in each iteration l = 1, ..., m. From Algorithm 1, the DEIM procedure constructs a set of indices inductively on the input basis. The order of the input basis {u_l}_{l=1}^m according to the dominant singular values is important, and an error analysis indicates that the POD basis is a suitable choice for this algorithm. The process starts by selecting the first interpolation index ℘_1 ∈ {1, ..., n} corresponding to the entry of the first input basis vector u_1 with largest magnitude. The remaining interpolation indices, ℘_l for l = 2, ..., m, are selected so that each of them corresponds to the entry with the largest magnitude of the residual r = u_l − U c from line 5 of Algorithm 1. The term r can be viewed as the residual or the error between the input basis vector u_l and its approximation U c from interpolating the basis {u_1, ..., u_{l−1}} at the indices ℘_1, ..., ℘_{l−1} in line 4 of Algorithm 1. Hence, r_{℘_i} = 0 for i = 1, ..., l−1. However, the linear independence of the input basis {u_l}_{l=1}^m guarantees that, in each iteration, r is a nonzero vector and hence ρ is also nonzero. We shall demonstrate in Lemma 3.2 that ρ ≠ 0 at each step implies that P^T U is always nonsingular and hence that the DEIM procedure is well defined. This also implies that the interpolation indices {℘_i}_{i=1}^m are hierarchical and nonrepeated. Figure 3.1 illustrates the selection procedure in Algorithm 1 for DEIM interpolation indices. To summarize, the DEIM approximation is given formally as follows.

Definition 3.1. Let f : D → R^n be a nonlinear vector-valued function with D ⊆ R^d for some positive integer d. Let {u_l}_{l=1}^m ⊂ R^n be a linearly independent set for m ∈ {1, ..., n}.
For τ ∈ D, the DEIM approximation of order m for f(τ) in the space spanned by {u_l}_{l=1}^m is given by

(3.5) f̂(τ) := U (P^T U)^{-1} P^T f(τ),

where U = [u_1, ..., u_m] ∈ R^{n×m} and P = [e_{℘_1}, ..., e_{℘_m}] ∈ R^{n×m}, with {℘_1, ..., ℘_m} being the output of Algorithm 1 with input basis {u_i}_{i=1}^m. Notice that f̂ in (3.5) is indeed an interpolation approximation for the original function f, since f̂ is exact at the interpolation indices; i.e., for τ ∈ D,

P^T f̂(τ) = P^T ( U (P^T U)^{-1} P^T f(τ) ) = (P^T U)(P^T U)^{-1} P^T f(τ) = P^T f(τ).

Notice also that the DEIM approximation is uniquely determined by the projection basis {u_i}_{i=1}^m. This basis not only specifies the projection subspace used in the approximation, but also determines the interpolation indices used for computing the coefficients of the approximation. Hence, the choice of projection basis can greatly affect the accuracy of the approximation (3.5), as shown also in the error bound (3.8) of the DEIM approximation in the next section. As noted, POD, introduced in section 2.1, is an effective method for constructing this projection basis, since it provides an optimal global basis that captures the dynamics of the space generated from snapshots of the nonlinear function. The selection of the interpolation points is basis dependent. However, once the set of DEIM interpolation indices {℘_l}_{l=1}^m is determined from {u_i}_{i=1}^m, the DEIM approximation is independent of the choice of basis spanning the space Range(U). In particular, let {q_l}_{l=1}^m be any basis for Range(U). Then

(3.6) U (P^T U)^{-1} P^T f(τ) = Q (P^T Q)^{-1} P^T f(τ),

where Q = [q_1, ..., q_m] ∈ R^{n×m}. To verify (3.6), note that Range(U) = Range(Q), so that U = Q R for some nonsingular matrix R ∈ R^{m×m}. This substitution gives

U (P^T U)^{-1} P^T f(τ) = (Q R)((P^T Q) R)^{-1} P^T f(τ) = Q (P^T Q)^{-1} P^T f(τ).

[Figure 3.1: six panels, DEIM #1–#6, each plotting u = u_l, the interpolant Uc, the residual r = u_l − Uc, the current point, and the previously selected points.]
Fig. 3.1. Illustration of the selection process of indices in Algorithm 1 for the DEIM approximation. The input basis vectors are the first six eigenvectors of the discrete Laplacian. In the plots, u = u_l, Uc, and r = u_l − Uc are defined as in iteration l of Algorithm 1.

3.2. Error bound for DEIM. This section provides an error bound for the DEIM approximation. The bound is obtained recursively by limiting the local growth of a certain magnification factor of the best 2-norm approximation error. This error bound is given formally as follows.

Lemma 3.2. Let f ∈ R^n be an arbitrary vector. Let {u_l}_{l=1}^m ⊂ R^n be a given orthonormal set of vectors. From Definition 3.1, the DEIM approximation of order m ≤ n for f in the space spanned by {u_l}_{l=1}^m is

(3.7) f̂ = U (P^T U)^{-1} P^T f,

where U = [u_1, ..., u_m] ∈ R^{n×m} and P = [e_{℘_1}, ..., e_{℘_m}] ∈ R^{n×m}, with {℘_1, ..., ℘_m} being the output of Algorithm 1 with input basis {u_i}_{i=1}^m. An error bound for f̂ is then given by

(3.8) ‖f − f̂‖_2 ≤ C E(f),

where

(3.9) C = ‖(P^T U)^{-1}‖_2 and E(f) = ‖(I − U U^T) f‖_2

is the error of the best 2-norm approximation for f from the space Range(U). The constant C is bounded by

(3.10) C ≤ (1 + √(2n))^{m−1} |e_{℘_1}^T u_1|^{−1} = (1 + √(2n))^{m−1} ‖u_1‖_∞^{−1}.

Proof. This proof provides motivation for the DEIM selection process in terms of minimizing the local error growth for the approximation. Consider the DEIM approximation f̂ given by (3.7). We wish to determine a bound for the error f − f̂ in terms of the optimal 2-norm approximation for f from Range(U). This best approximation is given by

(3.11) f* = U U^T f,

which minimizes the error ‖f − f*‖_2 over Range(U). Consider

(3.12) f = (f − f*) + f* = w + f*,

where w = f − f* = (I − U U^T) f. Define the projector ℙ = U (P^T U)^{-1} P^T. From (3.7) and (3.12),

(3.13) f̂ = ℙ f = ℙ (w + f*) = ℙ w + ℙ f* = ℙ w + f*,

since f* ∈ Range(U) is left unchanged by ℙ. Equations (3.12) and (3.13) imply f − f̂ = (I − ℙ) w and

(3.14) ‖f − f̂‖_2 = ‖(I − ℙ) w‖_2 ≤ ‖I − ℙ‖_2 ‖w‖_2.

Note that

(3.15) ‖I − ℙ‖_2 = ‖ℙ‖_2 = ‖U (P^T U)^{-1} P^T‖_2 ≤ ‖U‖_2 ‖(P^T U)^{-1}‖_2 ‖P^T‖_2 = ‖(P^T U)^{-1}‖_2.

The first equality in (3.15) follows from the fact that ‖I − ℙ‖_2 = ‖ℙ‖_2 for any projector ℙ ≠ 0 or I (see [39]). The last equality in (3.15) follows from the fact that ‖U‖_2 = ‖P^T‖_2 = 1, since each of the matrices U and P has orthonormal columns.

Note that E_*(f) := ||w||_2 is the minimum 2-norm error for f, attained by f* in (3.11). From (3.15), the bound for the error in (3.14) becomes

(3.16)   ||f − f̂||_2 ≤ ||(P^T U)^{-1}||_2 E_*(f),

and ||(P^T U)^{-1}||_2 is the magnification factor needed to express the DEIM error in terms of the optimal approximation error. This establishes the error bound (3.8).

An upper bound for ||f − f̂||_2 is now available by giving a bound for the matrix norm. The matrix norm ||(P^T U)^{-1}||_2 depends on the DEIM selection of indices ℘_1, ..., ℘_m through the matrix P. We now show that each iteration of the DEIM algorithm aims to select an index that limits the stepwise growth of ||(P^T U)^{-1}||_2 and hence limits the size of the bound for the error ||f − f̂||_2. To simplify notation, for l = 2, ..., m, we denote the relevant quantities at iteration l by

(3.17)   Ū = [u_1, ..., u_{l−1}] ∈ R^{n×(l−1)},  P̄ = [e_{℘_1}, ..., e_{℘_{l−1}}] ∈ R^{n×(l−1)},
         u = u_l ∈ R^n,  p = e_{℘_l} ∈ R^n,
         U = [Ū, u] ∈ R^{n×l},  P = [P̄, p] ∈ R^{n×l}.

Put M = P^T U, and consider the matrix norm ||M^{-1}||_2 from Algorithm 1. At the initial step of Algorithm 1, P = e_{℘_1} and U = u_1. Thus,

(3.18)   M^{-1} = (e_{℘_1}^T u_1)^{-1},  ||M^{-1}||_2 = |e_{℘_1}^T u_1|^{−1} = ||u_1||_∞^{−1}.

That is, for m = 1, C = ||M^{-1}||_2 = ||u_1||_∞^{−1}. Note that the choice of the first interpolation index minimizes the matrix norm ||M^{-1}||_2 and hence minimizes the error bound (3.8). Now consider a general step l with the matrices defined in (3.17). With M = P^T U, we can write M = [ M̄ , P̄^T u ; p^T Ū , p^T u ], where M̄ = P̄^T Ū, and M can be factored in the form

(3.19)   M = [ M̄ , P̄^T u ; p^T Ū , p^T u ] = [ M̄ , 0 ; a^T , ρ ] [ I , c ; 0 , 1 ],

where a^T = p^T Ū, c = M̄^{-1} P̄^T u, and ρ = p^T u − a^T c = p^T (u − Ū M̄^{-1} P̄^T u). Note that |ρ| = ||r||_∞, where r is defined in step 5 of Algorithm 1. Now, the inverse of M is

(3.20)   M^{-1} = [ I , −c ; 0 , 1 ] [ M̄^{-1} , 0 ; −ρ^{−1} a^T M̄^{-1} , ρ^{−1} ]
(3.21)         = [ I , −c ; 0 , 1 ] [ I , 0 ; −ρ^{−1} a^T , ρ^{−1} ] [ M̄^{-1} , 0 ; 0 , 1 ]
(3.22)         = { [ I , 0 ; 0 , 0 ] + ρ^{−1} [ c ; −1 ] [ a^T , −1 ] } [ M̄^{-1} , 0 ; 0 , 1 ].

A bound for the 2-norm of M^{-1} is then given by

(3.23)   ||M^{-1}||_2 ≤ { ||[ I , 0 ; 0 , 0 ]||_2 + |ρ|^{−1} ||[ c ; −1 ][ a^T , −1 ]||_2 } ||[ M̄^{-1} , 0 ; 0 , 1 ]||_2.

DISCRETE EMPIRICAL INTERPOLATION METHOD

Now observe that, since the matrix [Ū, u] has orthonormal columns and [ c ; −1 ][ a^T , −1 ] has rank one,

(3.24)   ||[ c ; −1 ][ a^T , −1 ]||_2 = ||[ c ; −1 ]||_2 ||[ a^T , −1 ]||_2 = ||[Ū, u][ c ; −1 ]||_2 ||[ a^T , −1 ]||_2
(3.25)   = ||Ūc − u||_2 ||[ a^T , −1 ]||_2 ≤ √(1 + ||a||_2^2) ||Ūc − u||_2 ≤ √2 ||Ūc − u||_2
(3.26)   ≤ √2 · √n ||Ūc − u||_∞ = √(2n) |ρ|,

where ||a||_2 = ||Ū^T p||_2 ≤ 1. Substituting this into (3.23) gives

(3.27)   ||M^{-1}||_2 ≤ [1 + √(2n)] ||M̄^{-1}||_2 ≤ (1 + √(2n))^{m−1} ||u_1||_∞^{−1},

with the last inequality obtained by recursively applying this stepwise bound over the m steps. Since the DEIM procedure selects the index ℘_l that maximizes |ρ|, it minimizes the reciprocal |ρ|^{−1}, which controls the increment in the bound of ||M^{-1}||_2 at iteration l, as shown in (3.23). Therefore, the selection process for the interpolation index in each iteration of DEIM (line 6 of Algorithm 1) can be explained in terms of limiting growth of the error bound for the approximation f̂.

This error bound from Lemma 3.2 applies to any nonlinear vector-valued function f(τ) approximated by DEIM. However, the bound in (3.10) is not useful as an a priori estimate, since it is very pessimistic and grows far more rapidly than the actually observed values of ||(P^T U)^{-1}||_2. In practice, we simply compute this norm (the matrix is typically small) and use it to obtain an a posteriori estimate.

For a given dimension m of the DEIM approximation, the constant C does not depend on f, and hence it applies to the approximation f̂(τ) of f(τ) from Definition 3.1 for any τ ∈ D. However, the best approximation error E_* = E_*(f(τ)) is dependent upon f(τ) and changes with each new value of τ. This would be quite expensive to compute, so an easily computable estimate is highly desirable. A reasonable estimate is available from the SVD of the nonlinear snapshot matrix F̂ = [f_1, f_2, ..., f_{n_s}]. Let F = Range(F̂), and let F̂ = Û Σ̂ Ŵ^T be its SVD, where Û = [U, Ũ] and U represents the leading m columns of the orthogonal matrix Û. Partition Σ̂ = [ Σ , 0 ; 0 , Σ̃ ] to conform with the partitioning of Û. The singular values are ordered as usual, with σ_1 ≥ σ_2 ≥ ... ≥ σ_m ≥ σ_{m+1} ≥ ... ≥ 0. The diagonal matrix Σ has the leading m singular values on its diagonal. The orthogonal matrix Ŵ = [W, W̃] is partitioned accordingly.
Any vector f ∈ F may be written in the form

    f = F̂ĝ = UΣg + ŨΣ̃g̃,

where g = W^T ĝ and g̃ = W̃^T ĝ. Thus

    ||f − f*||_2 = ||(I − UU^T)f||_2 = ||ŨΣ̃g̃||_2 ≤ σ_{m+1} ||g̃||_2.

For vectors f nearly in F, we have f = F̂ĝ + w, with w^T F̂ĝ = 0, and thus the estimate

(3.28)   E_* = E_*(f) ≈ σ_{m+1}

is reasonable so long as ||w||_2 is small (||w||_2 = O(σ_{m+1}) ideally). The POD approach (and hence the resulting DEIM approach) is most successful when the trajectories are attracted to a low-dimensional subspace (or manifold). Hence, the vectors f(τ) should nearly lie in F, and this approximation will then serve for all of them.

To illustrate the error bound for the DEIM approximation, we shall present numerical results for some examples of nonlinear parametrized functions defined on one-dimensional (1-D) and two-dimensional (2-D) sets of discrete spatial points. These experiments show that the approximate error bound using σ_{m+1} in place of E_* is quite reasonable in practice.

3.3. Numerical examples of the DEIM error bound. This section demonstrates the accuracy and efficiency of the approximation from DEIM as well as its error bound given in section 3.2. The examples here use the POD basis in the DEIM approximation. The POD basis is constructed from a set of snapshots corresponding to a selected set of elements of D. In particular, we define

(3.29)   D_s = {μ_1^s, ..., μ_{n_s}^s} ⊂ D,

used for constructing a set of snapshots given by

(3.30)   F = {f(μ_1^s), ..., f(μ_{n_s}^s)},

which is used for computing the POD basis {u_l}_{l=1}^m for the DEIM approximation. To evaluate the accuracy, we apply the DEIM approximation f̂ in (3.5) to the elements of the set

(3.31)   D̄ = {μ̄_1, ..., μ̄_{n̄}} ⊂ D,

which is different from and larger than the set D_s used for the snapshots. Then we consider the average error of the DEIM approximation f̂ over the elements of D̄, given by

(3.32)   Ē(f) = (1/n̄) Σ_{i=1}^{n̄} ||f(μ̄_i) − f̂(μ̄_i)||_2,

and the average POD error for the POD approximation f* from (3.11) over the elements of D̄, given by

(3.33)   Ē_*(f) = (1/n̄) Σ_{i=1}^{n̄} ||f(μ̄_i) − f*(μ̄_i)||_2 = (1/n̄) Σ_{i=1}^{n̄} E_*(f(μ̄_i)).

From Lemma 3.2, the average error bound is then given by

(3.34)   Ē(f) ≤ C Ē_*(f),

with the corresponding approximation using (3.28):

(3.35)   Ē(f) ⪅ C σ_{m+1}.

This estimate is purely heuristic.
Although it does seem to provide a reasonable qualitative estimate of the expected error, this quantity is clearly not a rigorous bound, and we are not optimistic that any such rigorous bound can be obtained mathematically.
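To make the preceding discussion concrete, the following NumPy sketch (ours, not from the paper) runs the greedy point selection of Algorithm 1 on an orthonormal basis, computes the a posteriori constant C = ||(P^T U)^{-1}||_2, and checks both the interpolation property and the bound of Lemma 3.2 for a random vector; all variable names are illustrative.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection (a sketch of Algorithm 1): at step l the
    new index is placed where the residual r = u_l - U_{l-1} c of the current
    interpolatory fit is largest in magnitude (the |rho| of the proof)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[idx, :l], U[idx, l])  # fit u_l on current points
        r = U[:, l] - U[:, :l] @ c                  # residual vector
        idx.append(int(np.argmax(np.abs(r))))       # maximize |rho|
    return np.array(idx)

rng = np.random.default_rng(0)
n, m = 200, 8
U, _ = np.linalg.qr(rng.standard_normal((n, m)))    # orthonormal input basis
p = deim_indices(U)
assert len(set(p.tolist())) == m                    # indices are distinct

C = np.linalg.norm(np.linalg.inv(U[p, :]), 2)       # a posteriori constant

f = rng.standard_normal(n)
fhat = U @ np.linalg.solve(U[p, :], f[p])           # DEIM approximation of f
E_star = np.linalg.norm(f - U @ (U.T @ f))          # best 2-norm error E_*(f)
assert np.allclose(fhat[p], f[p])                   # exact interpolation
assert np.linalg.norm(f - fhat) <= C * E_star + 1e-10   # Lemma 3.2
```

The two assertions at the end are exactly the interpolation property and the bound (3.8); they hold for any set of m distinct rows that makes P^T U nonsingular, not only for the DEIM-selected one, but the greedy choice keeps C small.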

3.3.1. A nonlinear parametrized function with spatial points in one dimension. Consider a nonlinear parametrized function s: Ω × D → R defined by

(3.36)   s(x; μ) = (1 − x) cos(3πμ(x + 1)) e^{−(1+x)μ},

where x ∈ Ω = [−1, 1] and μ ∈ D = [1, π]. This nonlinear function is from an example in [7]. Let x = [x_1, ..., x_n]^T ∈ R^n, with x_i equidistantly spaced points in Ω for i = 1, ..., n. Define f: D → R^n by

(3.37)   f(μ) = [s(x_1; μ), ..., s(x_n; μ)]^T ∈ R^n  for μ ∈ D.

This example uses n_s snapshots f(μ_j^s) to construct the POD basis {u_l}_{l=1}^m, with μ_1^s, ..., μ_{n_s}^s selected as equally spaced points in [1, π]. Figure 3.2 shows the singular values of these snapshots and the corresponding first six POD basis vectors, with the first six spatial points selected by the DEIM algorithm using this POD basis as input. Figure 3.3 compares the DEIM approximations of reduced dimension with the original function of full dimension n at different values of μ ∈ D. This demonstrates that DEIM gives a good approximation at arbitrary values μ ∈ D. Figure 3.4 illustrates the average errors defined in (3.32) and (3.33), with the average error bound and its approximation computed from the right-hand sides of (3.34) and (3.35), respectively, with μ̄_1, ..., μ̄_{n̄} ∈ D selected uniformly over D.

[Fig. 3.2. Singular values and the corresponding first six POD basis vectors, with DEIM points, for the snapshots from (3.37). Plot panels omitted.]

3.3.2. A nonlinear parametrized function with spatial points in two dimensions. Consider a nonlinear parametrized function s: Ω × D → R defined by

(3.38)   s(x, y; μ) = 1 / √((x − μ_1)^2 + (y − μ_2)^2 + 0.1^2),

where (x, y) ∈ Ω = [0.1, 0.9]^2 ⊂ R^2 and μ = (μ_1, μ_2) ∈ D = [−1, −0.01]^2 ⊂ R^2. This example is modified from one given in [9]. Let (x_i, y_j) be uniform grid points in Ω for i = 1, ..., n_x and j = 1, ..., n_y. Define s: D → R^{n_x×n_y} by

(3.39)   s(μ) = [s(x_i, y_j; μ)] ∈ R^{n_x×n_y}  for μ ∈ D, i = 1, ..., n_x, and j = 1, ..., n_y.
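The 1-D experiment can be reproduced in outline as follows. This is a sketch: the grid size, snapshot count, reduced dimension, and test set below are our illustrative choices, not necessarily those behind the published figures.

```python
import numpy as np

def s(x, mu):
    # the nonlinear parametrized function (3.36)
    return (1 - x) * np.cos(3 * np.pi * mu * (x + 1)) * np.exp(-(1 + x) * mu)

n, m = 100, 10
x = np.linspace(-1.0, 1.0, n)
snaps = np.column_stack([s(x, mu) for mu in np.linspace(1.0, np.pi, 51)])

Upod, sig, _ = np.linalg.svd(snaps, full_matrices=False)
Um = Upod[:, :m]                                  # POD basis of dimension m

idx = [int(np.argmax(np.abs(Um[:, 0])))]          # DEIM point selection
for l in range(1, m):
    c = np.linalg.solve(Um[idx, :l], Um[idx, l])
    idx.append(int(np.argmax(np.abs(Um[:, l] - Um[:, :l] @ c))))

C = np.linalg.norm(np.linalg.inv(Um[idx, :]), 2)  # a posteriori constant

mus_test = np.linspace(1.0, np.pi, 101)           # test set Dbar
deim_err, pod_err = [], []
for mu in mus_test:
    f = s(x, mu)
    deim_err.append(np.linalg.norm(f - Um @ np.linalg.solve(Um[idx, :], f[idx])))
    pod_err.append(np.linalg.norm(f - Um @ (Um.T @ f)))

# average bound (3.34); C * sig[m] plays the role of the heuristic (3.35)
assert np.mean(deim_err) <= C * np.mean(pod_err) + 1e-8
```

The final assertion is the average bound (3.34), which is guaranteed by Lemma 3.2; comparing np.mean(deim_err) against C * sig[m] gives the heuristic estimate (3.35).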
In this example, the full dimension is n = n_x n_y (with n_x = n_y). Note that we can define a corresponding

[Fig. 3.3. The approximate functions from DEIM compared with the original functions (3.37) at several values of μ ∈ D. Plot panels omitted.]

[Fig. 3.4. Comparison of average errors of the POD and DEIM approximations for (3.37) with the average error bounds and their approximations given in (3.34) and (3.35), respectively. Plot omitted.]

vector-valued function f: D → R^n for this problem by reshaping the matrix s(μ) into a vector of length n = n_x n_y. The snapshots, constructed from uniformly selected parameter pairs μ^s = (μ_1^s, μ_2^s) in the parameter domain D, are used for constructing the POD basis. A different set of 65 pairs of parameters μ̄ is used for testing (error and CPU time). Figure 3.5 shows the singular values of these snapshots and the corresponding first six POD basis vectors. Figure 3.6 illustrates the distribution of the first spatial points selected by the DEIM algorithm using this POD basis as input. Notice that most of the selected points cluster close to the origin, where the function s increases sharply. Figure 3.7 shows that the DEIM approximation of dimension 6 reproduces the original function very well at an arbitrarily selected value μ ∈ D. Figure 3.8 gives the average errors, with the bounds from the last section, and the corresponding average CPU times for different dimensions of the POD and DEIM approximations. The average errors of the POD and DEIM approximations are computed from (3.32) and (3.33), respectively. The average error bounds and their approximations are computed from the right-hand sides of (3.34)

and (3.35), respectively. This example uses μ̄_1, ..., μ̄_{n̄} ∈ D selected uniformly over D and n̄ = 65. The CPU times are averaged over the same set D̄.

[Fig. 3.5. Singular values and the first six corresponding POD basis vectors of the snapshots of the nonlinear function (3.39). Plot panels omitted.]

[Fig. 3.6. First points selected by DEIM for the nonlinear function (3.39). Plot omitted.]

3.4. Application of DEIM to nonlinear FD discretized systems. The DEIM approximation (3.4) developed in the previous section may now be used to approximate the nonlinear term in (2.9) and the Jacobian in (2.10) with nonlinear approximations having computational complexity proportional to the number of reduced variables obtained with POD. In the case of nonlinear time dependent PDEs, from (3.4), set τ = t and f(t) = F(V_k ỹ(t)); then the nonlinear function in (2.5) approximated by DEIM can be written as

(3.40)   F(V_k ỹ(t)) ≈ U(P^T U)^{-1} P^T F(V_k ỹ(t))
(3.41)              = U(P^T U)^{-1} F(P^T V_k ỹ(t)).
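The point of (3.40)–(3.41) in code: the small matrix V_k^T U (P^T U)^{-1} is formed once offline, and online only the m interpolated components of F are ever evaluated. A self-contained sketch, in which random orthonormal bases, an arbitrary index set, and an arbitrary componentwise nonlinearity stand in for the actual POD/DEIM quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 500, 12, 12

Vk, _ = np.linalg.qr(rng.standard_normal((n, k)))   # state (POD) basis, stand-in
U, _ = np.linalg.qr(rng.standard_normal((n, m)))    # DEIM basis for F, stand-in
idx = rng.choice(n, size=m, replace=False)          # stand-in interpolation indices

F = lambda y: y * np.exp(-y ** 2)                   # componentwise nonlinearity

# Offline: precompute the k x m matrix Vk^T U (P^T U)^{-1} and the DEIM rows of Vk.
E = Vk.T @ U @ np.linalg.inv(U[idx, :])
PV = Vk[idx, :]                                     # m x k

# Online: cost is O(alpha(m) + 4km), independent of n.
yt = rng.standard_normal(k)
N_deim = E @ F(PV @ yt)

# The same quantity computed through the full n-dimensional space, for comparison:
N_full = Vk.T @ (U @ np.linalg.solve(U[idx, :], F(Vk @ yt)[idx]))
assert np.allclose(N_deim, N_full)
```

Because F acts componentwise, P^T F(V_k ỹ) = F(P^T V_k ỹ), which is why the two computations agree; for a general F this identity fails, and the scheme of the next section is needed.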

[Fig. 3.7. The original nonlinear function (3.39) compared with the POD and DEIM approximations of dimension 6 at parameter μ = (−0.5, −0.5). Plot panels omitted.]

[Fig. 3.8. Left: average errors of the POD and DEIM approximations for (3.39) with the average error bounds given in (3.34) and their approximations given in (3.35). Right: average CPU time for evaluating the POD and DEIM approximations. Plots omitted.]

The last equality in (3.41) follows from the fact that the function F evaluates componentwise at its input vector. The nonlinear term in (2.9) can thus be approximated by

(3.42)   Ñ(ỹ) ≈ [V_k^T U(P^T U)^{-1}] F(P^T V_k ỹ(t)),
                (precomputed: k×m)   (m×1)

Note that the term V_k^T U(P^T U)^{-1} in (3.42) does not depend on t, and therefore it can be precomputed before solving the system of ODEs. Note also that P^T V_k ỹ(t) ∈ R^m in (3.42) can be obtained by extracting the rows ℘_1, ..., ℘_m of V_k and then multiplying against ỹ, which requires mk operations. Therefore, if α(m) denotes the cost of evaluating m components of F, the complexity for computing this approximation of the nonlinear term becomes roughly O(α(m) + 4km), which is independent of the dimension n of the full-order system. Similarly, in the case of steady parametrized nonlinear PDEs, from (3.4), set τ = μ and f(μ) = F(V_k ỹ(μ)). Then the nonlinear function in (2.6) approximated by DEIM can be written as

(3.43)   F(V_k ỹ(μ)) ≈ U(P^T U)^{-1} F(P^T V_k ỹ(μ)),

and the approximation for the Jacobian of the nonlinear term (2.10) is of the form

(3.44)   J̃_F(ỹ(μ)) ≈ [V_k^T U(P^T U)^{-1}] [J_F(P^T V_k ỹ(μ))] [P^T V_k],
                     (precomputed: k×m)  (m×m)            (m×k)

where J_F(P^T V_k ỹ(μ)) = J_F(y^r(μ)) = diag{F′(y_1^r(μ)), ..., F′(y_m^r(μ))}, and y^r(μ) = P^T V_k ỹ(μ), which can be computed with complexity independent of n as noted earlier. Therefore, the computational complexity of the approximation in (3.44) is roughly O(α(m) + mk + γmk + mk^2), where γ is the average number of nonzero entries per row of the Jacobian.

The approximations from DEIM are now in the form of (3.42) and (3.44), which recover the computational efficiency of (2.9) and (2.10), respectively. Note that the nonlinear approximations from DEIM in (3.42) and (3.43) are obtained by exploiting the special structure of the nonlinear function F, namely that it is evaluated componentwise at y. The next section provides a completely general scheme.

3.5. Interpolation of general nonlinear functions. The very simple case of F(y) = [F_1(y_1), ..., F_n(y_n)]^T has been discussed for purposes of illustration and is indeed important in its own right. However, DEIM extends easily to general nonlinear functions. MATLAB notation is used here to explain this generalization:

(3.45)   [F(y)]_i = F_i(y) = F_i(y_{j_1^i}, y_{j_2^i}, y_{j_3^i}, ..., y_{j_{n_i}^i}) = F_i(y(j^i)),

where F_i: Y_i → R, Y_i ⊂ R^{n_i}, and the integer vector j^i = [j_1^i, j_2^i, j_3^i, ..., j_{n_i}^i]^T denotes the indices of the subset of components of y required to evaluate the ith component of F(y), for i = 1, ..., n.

The nonlinear function of the reduced-order system obtained from the POD-Galerkin method by projecting onto the space spanned by the columns of V_k ∈ R^{n×k} is of the form F(V_k ỹ), where the components of ỹ ∈ R^k are the reduced variables.
Recall that the DEIM approximation of order m for F(V_k ỹ) is given by

(3.46)   F(V_k ỹ) ≈ U(P^T U)^{-1} P^T F(V_k ỹ),

where U ∈ R^{n×m} is the projection basis for the nonlinear function F, P = [e_{℘_1}, ..., e_{℘_m}] ∈ R^{n×m}, and ℘_1, ..., ℘_m are the interpolation indices from the DEIM point selection algorithm. In the simple case when F is evaluated componentwise at y, we have P^T F(V_k ỹ) = F(P^T V_k ỹ), where P^T V_k can be obtained by extracting the rows of V_k corresponding to ℘_1, ..., ℘_m, and hence its computational complexity is independent of n. However, this is clearly not applicable to a general nonlinear vector-valued function.

An efficient method for computing P^T F(V_k ỹ) in the DEIM approximation (3.46) of a general nonlinear function is possible using a certain sparse matrix data structure. Notice that, since y_j ≈ V_k(j, :)ỹ, an approximation to F(y) is provided by

(3.47)   F(V_k ỹ) = [F_1(V_k(j^1, :)ỹ), ..., F_n(V_k(j^n, :)ỹ)]^T ∈ R^n,

and thus

(3.48)   P^T F(V_k ỹ) = [F_{℘_1}(V_k(j^{℘_1}, :)ỹ), ..., F_{℘_m}(V_k(j^{℘_m}, :)ỹ)]^T ∈ R^m.
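In code, (3.48) can be evaluated without ever forming V_k ỹ ∈ R^n, since each selected component touches only a few variables. The sketch below uses a CSR-style pointer/index pair in the spirit of the irstart/jrow arrays of the text (1-based pointers as in MATLAB); the two component functions and index sets here are made up for illustration:

```python
import numpy as np

def eval_selected(F_comp, Vk, yt, irstart, jrow):
    """Evaluate the m selected components of F at Vk @ yt using 1-based
    CSR-style pointer (irstart) and index (jrow) arrays."""
    m = len(irstart) - 1
    out = np.empty(m)
    for i in range(m):
        ji = jrow[irstart[i] - 1 : irstart[i + 1] - 1]   # indices j^i (1-based)
        out[i] = F_comp[i](Vk[ji - 1, :] @ yt)           # F_i(Vk(j^i,:) ytilde)
    return out

# Tiny illustration: two selected components, each depending on two variables.
Vk = np.arange(12.0).reshape(6, 2)       # stand-in basis, n = 6, k = 2
yt = np.array([1.0, -1.0])
irstart = np.array([1, 3, 5])            # irstart(m+1) = ntilde + 1 = 5
jrow = np.array([1, 2, 3, 6])            # j^1 = (1, 2), j^2 = (3, 6)
F_comp = [lambda z: z[0] * z[1],         # first component depends on y_1, y_2
          lambda z: z[0] + z[1] ** 2]    # second component depends on y_3, y_6

vals = eval_selected(F_comp, Vk, yt, irstart, jrow)
y = Vk @ yt                              # full-space check
assert np.allclose(vals, [y[0] * y[1], y[2] + y[5] ** 2])
```

Only the rows of V_k listed in jrow are touched, so the online cost scales with the total number of dependencies ñ rather than with n.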

The complexity of evaluating each component i, i = 1, ..., m, of (3.48),

(3.49)   F̃_i(ỹ) := F_{℘_i}(V_k(j^{℘_i}, :)ỹ),

is n_i k flops plus the complexity of evaluating the nonlinear scalar-valued function F_{℘_i} of the n_i variables indexed by j^{℘_i} (where n_i now denotes the length of j^{℘_i}).

The sparse evaluation procedure may be implemented using a compressed sparse row data structure, as used in sparse matrix factorizations. Two integer arrays are needed: irstart is a vector of length m + 1 containing pointers into the vector jrow, which is of length ñ = Σ_{i=1}^m n_i. The successive n_i entries of jrow starting at jrow(irstart(i)) indicate the dependence of the ith selected component of F(y) on the corresponding variables of y. In particular, irstart(i) contains the location of the start of the ith row, with irstart(m + 1) = ñ + 1; i.e., irstart(1) = 1, and irstart(i) = 1 + Σ_{j=1}^{i−1} n_j for i = 2, ..., m + 1. jrow contains the indices of the components of y required to compute the ith selected function in locations irstart(i) to irstart(i + 1) − 1, for i = 1, ..., m; i.e.,

    jrow = [ j_1^{℘_1}, ..., j_{n_1}^{℘_1}, j_1^{℘_2}, ..., j_{n_2}^{℘_2}, ..., j_1^{℘_m}, ..., j_{n_m}^{℘_m} ]^T ∈ Z_+^{ñ}.

Given V_k and ỹ, the following demonstrates how to compute the approximation F̃_i(ỹ) in (3.49), for i = 1, ..., m, from the vectors irstart and jrow:

    for i = 1 : m
        j = jrow(irstart(i) : irstart(i + 1) − 1)
        F̃_i(ỹ) = F_{℘_i}(V_k(j, :) ỹ)
    end

Typically, the Jacobians of large-scale problems are sparse, and this scheme will be very efficient. However, if the Jacobian is dense (or nearly so), the complexity would be on the order of mn, where m is the number of interpolation points.

The next section discusses the computational complexity of constructing and solving the reduced-order systems. It also illustrates, in terms of complexity as well as computation time, that solving the POD reduced system can be more expensive than solving the original full-order system.

3.6. Computational complexity.
Recall that the POD-DEIM reduced system for the unsteady nonlinear problem (2.1) is

(3.50)   (d/dt) ỹ(t) = Ã ỹ(t) + B F(V̄ ỹ(t)),

and the approximation for the steady state problem (2.2) is given by

(3.51)   Ã ỹ(μ) + B F(V̄ ỹ(μ)) = 0,

where à = V_k^T A V_k ∈ R^{k×k} and B = V_k^T U Ū^{-1} ∈ R^{k×m}, with Ū = P^T U and V̄ = P^T V_k. This section summarizes the computational complexity for constructing and solving the POD-DEIM reduced system compared to both the original full-order system and the POD reduced system.
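The offline/online split of (3.50)–(3.51), together with a Jacobian in the form of (3.44), can be sketched end to end. This is an illustrative NumPy rendering with stand-in data; for a componentwise F, the reduced Jacobian is à + B diag(F′(V̄ỹ)) V̄, which the sketch checks against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, m = 300, 8, 8
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stand-in linear operator
Vk, _ = np.linalg.qr(rng.standard_normal((n, k)))     # POD basis (stand-in)
U, _ = np.linalg.qr(rng.standard_normal((n, m)))      # DEIM basis (stand-in)
idx = rng.choice(n, size=m, replace=False)            # interpolation indices

F = np.tanh                                           # componentwise nonlinearity
dF = lambda y: 1.0 / np.cosh(y) ** 2                  # its derivative F'

# Offline: every n-dimensional quantity is folded into a small dense matrix.
At = Vk.T @ A @ Vk                         # k x k  (Atilde)
B = Vk.T @ U @ np.linalg.inv(U[idx, :])    # k x m
Vb = Vk[idx, :]                            # m x k  (rows of Vk at DEIM points)

def rhs(yt):
    # d/dt ytilde = At ytilde + B F(Vb ytilde): no O(n) work online
    return At @ yt + B @ F(Vb @ yt)

def jac(yt):
    # At + B diag(F'(y^r)) Vb, the DEIM Jacobian approximation
    return At + B @ (dF(Vb @ yt)[:, None] * Vb)

yt = rng.standard_normal(k)
v = rng.standard_normal(k)
h = 1e-6
fd = (rhs(yt + h * v) - rhs(yt - h * v)) / (2 * h)    # directional derivative
assert np.allclose(jac(yt) @ v, fd, atol=1e-6)
```

In a Newton iteration for (3.51), jac would supply the k×k system matrix; nothing online depends on n once At, B, and Vb are precomputed.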


More information

Rank Determination for Low-Rank Data Completion

Rank Determination for Low-Rank Data Completion Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The

More information

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T. Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

MET Workshop: Exercises

MET Workshop: Exercises MET Workshop: Exercises Alex Blumenthal and Anthony Quas May 7, 206 Notation. R d is endowed with the standard inner product (, ) and Euclidean norm. M d d (R) denotes the space of n n real matrices. When

More information

Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations

Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations Krylov-Subspace Based Model Reduction of Nonlinear Circuit Models Using Bilinear and Quadratic-Linear Approximations Peter Benner and Tobias Breiten Abstract We discuss Krylov-subspace based model reduction

More information

12. Cholesky factorization

12. Cholesky factorization L. Vandenberghe ECE133A (Winter 2018) 12. Cholesky factorization positive definite matrices examples Cholesky factorization complex positive definite matrices kernel methods 12-1 Definitions a symmetric

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 5: Numerical Linear Algebra Cho-Jui Hsieh UC Davis April 20, 2017 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

Rapid Modeling of Nonlinearities in Heat Transfer. Jillian Chodak Free. Doctor of Philosophy In Mechanical Engineering

Rapid Modeling of Nonlinearities in Heat Transfer. Jillian Chodak Free. Doctor of Philosophy In Mechanical Engineering Rapid Modeling of Nonlinearities in Heat Transfer Jillian Chodak Free Dissertation submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements

More information

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k I. REVIEW OF LINEAR ALGEBRA A. Equivalence Definition A1. If A and B are two m x n matrices, then A is equivalent to B if we can obtain B from A by a finite sequence of elementary row or elementary column

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Proper Orthogonal Decomposition. POD for PDE Constrained Optimization. Stefan Volkwein

Proper Orthogonal Decomposition. POD for PDE Constrained Optimization. Stefan Volkwein Proper Orthogonal Decomposition for PDE Constrained Optimization Institute of Mathematics and Statistics, University of Constance Joined work with F. Diwoky, M. Hinze, D. Hömberg, M. Kahlbacher, E. Kammann,

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Reduced-dimension Models in Nonlinear Finite Element Dynamics of Continuous Media

Reduced-dimension Models in Nonlinear Finite Element Dynamics of Continuous Media Reduced-dimension Models in Nonlinear Finite Element Dynamics of Continuous Media Petr Krysl, Sanjay Lall, and Jerrold E. Marsden, California Institute of Technology, Pasadena, CA 91125. pkrysl@cs.caltech.edu,

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2 HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information

Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems. Stefan Volkwein

Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems. Stefan Volkwein Proper Orthogonal Decomposition (POD) for Nonlinear Dynamical Systems Institute for Mathematics and Scientific Computing, Austria DISC Summerschool 5 Outline of the talk POD and singular value decomposition

More information

IV. Matrix Approximation using Least-Squares

IV. Matrix Approximation using Least-Squares IV. Matrix Approximation using Least-Squares The SVD and Matrix Approximation We begin with the following fundamental question. Let A be an M N matrix with rank R. What is the closest matrix to A that

More information

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm

Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find

More information

The QR Decomposition

The QR Decomposition The QR Decomposition We have seen one major decomposition of a matrix which is A = LU (and its variants) or more generally PA = LU for a permutation matrix P. This was valid for a square matrix and aided

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

L26: Advanced dimensionality reduction

L26: Advanced dimensionality reduction L26: Advanced dimensionality reduction The snapshot CA approach Oriented rincipal Components Analysis Non-linear dimensionality reduction (manifold learning) ISOMA Locally Linear Embedding CSCE 666 attern

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

DEIM-BASED PGD FOR PARAMETRIC NONLINEAR MODEL ORDER REDUCTION

DEIM-BASED PGD FOR PARAMETRIC NONLINEAR MODEL ORDER REDUCTION VI International Conference on Adaptive Modeling and Simulation ADMOS 213 J. P. Moitinho de Almeida, P. Díez, C. Tiago and N. Parés (Eds) DEIM-BASED PGD FOR PARAMETRIC NONLINEAR MODEL ORDER REDUCTION JOSE

More information

Course Summary Math 211

Course Summary Math 211 Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.

More information

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools:

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: CS 322 Final Exam Friday 18 May 2007 150 minutes Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: (A) Runge-Kutta 4/5 Method (B) Condition

More information

The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation

The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation Y. Maday (LJLL, I.U.F., Brown Univ.) Olga Mula (CEA and LJLL) G.

More information

Lecture 6. Numerical methods. Approximation of functions

Lecture 6. Numerical methods. Approximation of functions Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 6 Singular Value Decomposition In Chapter 5, we derived a number of algorithms for computing the eigenvalues and eigenvectors of matrices A R n n. Having developed this machinery, we complete our

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

Lecture 9: Numerical Linear Algebra Primer (February 11st)

Lecture 9: Numerical Linear Algebra Primer (February 11st) 10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Empirical Interpolation of Nonlinear Parametrized Evolution Operators

Empirical Interpolation of Nonlinear Parametrized Evolution Operators MÜNSTER Empirical Interpolation of Nonlinear Parametrized Evolution Operators Martin Drohmann, Bernard Haasdonk, Mario Ohlberger 03/12/2010 MÜNSTER 2/20 > Motivation: Reduced Basis Method RB Scenario:

More information

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices

Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

7. Dimension and Structure.

7. Dimension and Structure. 7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information