Downloaded 05/27/14. Redistribution subject to SIAM license or copyright.


SIAM J. SCI. COMPUT. Vol. 35, No. 5, pp. B1010-B1033. © 2013 Society for Industrial and Applied Mathematics

MODEL REDUCTION OF DESCRIPTOR SYSTEMS BY INTERPOLATORY PROJECTION METHODS

SERKAN GUGERCIN, TATJANA STYKEL, AND SARAH WYATT

Abstract. In this paper, we investigate an interpolatory projection framework for model reduction of descriptor systems. With a simple numerical example, we first illustrate that directly applying the subspace conditions from the standard state space setting to descriptor systems generically leads to unbounded H2 or H∞ errors due to the mismatch of the polynomial parts of the full and reduced-order transfer functions. We then develop modified interpolatory subspace conditions based on the deflating subspaces that guarantee a bounded error. For the special cases of index-1 and index-2 descriptor systems, we also show how to avoid computing these deflating subspaces explicitly while still enforcing interpolation. The question of how to choose interpolation points optimally arises naturally, as in the standard state space setting. We answer this question in the framework of the H2-norm by extending the iterative rational Krylov algorithm to descriptor systems. Several numerical examples are used to illustrate the theoretical discussion.

Key words. interpolatory model reduction, differential-algebraic equations, H2 approximation

AMS subject classifications. 41A05, 93A15, 93C05, 37M99

1. Introduction. We discuss interpolatory model reduction of differential-algebraic equations (DAEs), or descriptor systems, given by

(1.1)  E ẋ(t) = A x(t) + B u(t),  y(t) = C x(t) + D u(t),

where x(t) ∈ R^n, u(t) ∈ R^m, and y(t) ∈ R^p are the states, inputs, and outputs, respectively, E ∈ R^{n×n} is a singular matrix, A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and D ∈ R^{p×m}.
Taking the Laplace transform of system (1.1) with zero initial condition x(0) = 0, we obtain ŷ(s) = G(s)û(s), where û(s) and ŷ(s) denote the Laplace transforms of u(t) and y(t), respectively, and G(s) = C(sE − A)^{−1}B + D is the transfer function of (1.1). Following the standard abuse of notation, we will denote both the dynamical system and its transfer function by G.

Systems of the form (1.1) with extremely large state space dimension n arise in various applications such as electrical circuit simulation, multibody dynamics, and semidiscretized partial differential equations. Simulation and control in these large-scale settings impose a huge computational burden, so efficient model utilization becomes crucial, and model reduction offers a remedy. The goal of model reduction is to replace the original dynamics in (1.1) by a model of the same form but with much smaller state space dimension such that the reduced model is a high-fidelity approximation to the original one. Hence, we seek a reduced-order model

Submitted to the journal's Computational Methods in Science and Engineering section January 22, 2013; accepted for publication (in revised form) June 11, 2013; published electronically September 26, 2013.

Department of Mathematics, Virginia Tech, Blacksburg, VA (gugercin@math.vt.edu). This author's work was supported in part by NSF grant DMS.

Institut für Mathematik, Universität Augsburg, Universitätsstraße 14, Augsburg, Germany (stykel@math.uni-augsburg.de). This author's work was supported in part by the Research Network FROPT: Model Reduction Based Optimal Control for Field-Flow Fractionation, funded by the German Federal Ministry of Education and Science (BMBF), grant 05M10WAB.

Department of Mathematics, Indian River State College, Fort Pierce, FL (swyatt@irsc.edu).

(1.2)  Ẽ x̃̇(t) = Ã x̃(t) + B̃ u(t),  ỹ(t) = C̃ x̃(t) + D̃ u(t),

where Ẽ, Ã ∈ R^{r×r}, B̃ ∈ R^{r×m}, C̃ ∈ R^{p×r}, and D̃ ∈ R^{p×m} such that r ≪ n, and the error y − ỹ is small with respect to a specific norm over a wide range of inputs u(t) with bounded energy. In the frequency domain, this means that the transfer function of (1.2), given by G̃(s) = C̃(sẼ − Ã)^{−1}B̃ + D̃, approximates G(s) well, i.e., the error G(s) − G̃(s) is small in a certain system norm.

The reduced-order model (1.2) can be obtained via projection as follows. We first construct two n×r matrices V and W, approximate the full-order state x(t) by V x̃(t), and then enforce the Petrov-Galerkin condition

W^T (E V x̃̇(t) − A V x̃(t) − B u(t)) = 0,  ỹ(t) = C V x̃(t) + D u(t).

As a result, we obtain the reduced-order model (1.2) with the system matrices

(1.3)  Ẽ = W^T E V,  Ã = W^T A V,  B̃ = W^T B,  C̃ = C V,  D̃ = D.

The projection matrices V and W determine the subspaces of interest and can be computed in many different ways. In this paper, we consider projection-based interpolatory model reduction methods, where the choice of V and W enforces certain tangential interpolation of the original transfer function. These methods will be presented in more detail in section 2. Projection-based interpolation with multiple interpolation points was initially proposed in [9, 35, 34]. Grimme [13] later developed a numerically efficient framework using the rational Krylov subspace method of Ruhe [27]. The tangential rational interpolation framework we use here is due to the recent work [11].

One might assume that extending interpolatory model reduction from standard state space systems with E = I to descriptor systems with singular E is as simple as replacing I by E. In section 2, we present an example showing that this naive approach may lead to a poor approximation with an unbounded error G(s) − G̃(s) even though the classical interpolatory subspace conditions are satisfied.
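The projection step (1.3) is purely mechanical once V and W are available. A minimal sketch, with dense NumPy standing in for the sparse arithmetic one would use at scale, and random placeholder matrices that are not from any example in the paper:

```python
import numpy as np

def petrov_galerkin(E, A, B, C, D, V, W):
    """Reduced-order matrices (1.3): W^T E V, W^T A V, W^T B, C V, D."""
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V, D

# tiny full-order model and arbitrary n-by-r bases (placeholders)
rng = np.random.default_rng(0)
n, r, m, p = 8, 3, 2, 2
E = np.eye(n)
A = -np.eye(n) - 0.1 * rng.standard_normal((n, n))
B, C, D = rng.standard_normal((n, m)), rng.standard_normal((p, n)), np.zeros((p, m))
V, W = rng.standard_normal((n, r)), rng.standard_normal((n, r))
Et, At, Bt, Ct, Dt = petrov_galerkin(E, A, B, C, D, V, W)
print(Et.shape, At.shape, Bt.shape, Ct.shape)  # (3, 3) (3, 3) (3, 2) (2, 3)
```

The quality of the reduced model is decided entirely by the choice of V and W; the sections below are about making that choice well.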
In section 3, we modify these conditions in order to enforce a bounded error. The theoretical result takes advantage of the spectral projectors. Then, using the new subspace conditions, we extend in section 4 the optimal H2 model reduction method of [18] to descriptor systems. Sections 3 and 4 make explicit use of deflating subspaces, which could be numerically demanding for general problems. Thus, for the special cases of index-1 and index-2 descriptor systems, we show in sections 5 and 6, respectively, how to apply interpolatory model reduction without explicitly computing the deflating subspaces. The theoretical discussion is supported by several numerical examples. In particular, in section 5.2 we present an example where the balanced truncation approach [29] is prone to failing due to problems in solving the generalized Lyapunov equations, while (optimal) interpolatory model reduction can be applied effectively.

2. Model reduction by tangential rational interpolation. The goal of model reduction by tangential interpolation is to construct a reduced-order model (1.2) such that its transfer function G̃(s) interpolates the original one, G(s), at selected points in the complex plane along selected directions. We will use the notation of [3] to define this problem more precisely: given G(s) = C(sE − A)^{−1}B + D, the

left interpolation points {μ_i}_{i=1}^q ⊂ C together with the left tangential directions {c_i}_{i=1}^q ⊂ C^p, and the right interpolation points {σ_j}_{j=1}^r ⊂ C together with the right tangential directions {b_j}_{j=1}^r ⊂ C^m, we seek a reduced-order model G̃(s) = C̃(sẼ − Ã)^{−1}B̃ + D̃ that is a tangential interpolant to G(s), i.e.,

(2.1)  c_i^T G̃(μ_i) = c_i^T G(μ_i), i = 1, ..., q,  and  G̃(σ_j) b_j = G(σ_j) b_j, j = 1, ..., r.

Throughout the paper, we will assume q = r, meaning that the same number of left and right interpolation points are used. In addition to interpolating G(s), one might also ask for matching the higher-order derivatives of G(s) along the tangential directions. This scenario will be handled as well.

By combining the projection-based reduced-order modeling technique with the interpolation framework, we want to find n×r matrices W and V such that the reduced-order model (1.2), (1.3) satisfies the tangential interpolation conditions (2.1). This approach is called projection-based interpolatory model reduction. How to enforce the interpolation conditions via projection is shown in the following theorem, where the lth derivative of G(s) with respect to s evaluated at s = σ is denoted by G^{(l)}(σ).

Theorem 2.1 ([3, 11]). Let σ, μ ∈ C be such that sE − A and sẼ − Ã are both invertible for s = σ, μ, and let b ∈ C^m and c ∈ C^p be fixed nontrivial vectors.
1. If
(2.2)  ((σE − A)^{−1}E)^{j−1}(σE − A)^{−1}Bb ∈ Im(V),  j = 1, ..., N,
then G^{(l)}(σ)b = G̃^{(l)}(σ)b for l = 0, 1, ..., N − 1.
2. If
(2.3)  ((μE − A)^{−T}E^T)^{j−1}(μE − A)^{−T}C^T c ∈ Im(W),  j = 1, ..., M,
then c^T G^{(l)}(μ) = c^T G̃^{(l)}(μ) for l = 0, 1, ..., M − 1.
3. If both (2.2) and (2.3) hold, and if σ = μ, then c^T G^{(l)}(σ)b = c^T G̃^{(l)}(σ)b for l = 0, 1, ..., M + N − 1.

One can see that to solve the rational tangential interpolation problem via projection, all one has to do is construct the matrices V and W as in Theorem 2.1. The dominant cost is solving sparse linear systems.
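Part 1 of Theorem 2.1 can be checked numerically. The sketch below uses assumed random data with E nonsingular for simplicity and a one-sided projection W = V; it builds the subspace in (2.2) for a single point σ with N = 2 and verifies that the value and first derivative along b are matched:

```python
import numpy as np

def derivative_basis(E, A, B, b, sigma, N):
    # Columns spanning the subspace in (2.2):
    # ((sigma*E - A)^{-1} E)^{j-1} (sigma*E - A)^{-1} B b,  j = 1, ..., N
    K = np.linalg.inv(sigma * E - A)      # dense stand-in for a sparse LU factorization
    cols = [K @ (B @ b)]
    for _ in range(N - 1):
        cols.append(K @ (E @ cols[-1]))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
n, sigma, N = 7, 1.0, 2
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C, b = rng.standard_normal((n, 1)), rng.standard_normal((1, n)), np.ones(1)

V = derivative_basis(E, A, B, b, sigma, N)
W = V                                     # one-sided projection: part 1 needs only V
Et, At, Bt, Ct = W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

def G(s, E, A, B, C):
    return C @ np.linalg.solve(s * E - A, B)

def dG(s, E, A, B, C):                    # G'(s) = -C (sE-A)^{-1} E (sE-A)^{-1} B
    X = np.linalg.solve(s * E - A, B)
    return -C @ np.linalg.solve(s * E - A, E @ X)

# value and first derivative along b match at sigma (l = 0, ..., N-1)
assert np.allclose(G(sigma, E, A, B, C) @ b, G(sigma, Et, At, Bt, Ct) @ b)
assert np.allclose(dG(sigma, E, A, B, C) @ b, dG(sigma, Et, At, Bt, Ct) @ b)
```

Note that the matching holds for any W, as the theorem states, as long as the shifted pencils stay invertible.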
We also note that in Theorem 2.1 the interpolated values G^{(l)}(σ)b and c^T G^{(l)}(μ) are never computed explicitly. This is crucial, since that computation is known to be poorly conditioned [10]. To illustrate the result of Theorem 2.1 for the special case of Hermite bi-tangential interpolation, we take the same right and left interpolation points {σ_i}_{i=1}^r, left tangential directions {c_i}_{i=1}^r, and right tangential directions {b_i}_{i=1}^r. Then for the projection matrices

(2.4)  V = [(σ_1E − A)^{−1}Bb_1, ..., (σ_rE − A)^{−1}Bb_r],
(2.5)  W = [(σ_1E − A)^{−T}C^Tc_1, ..., (σ_rE − A)^{−T}C^Tc_r],

the reduced-order model G̃(s) = C̃(sẼ − Ã)^{−1}B̃ + D̃ as in (1.3) satisfies

(2.6)  G(σ_i)b_i = G̃(σ_i)b_i,  c_i^T G(σ_i) = c_i^T G̃(σ_i),  c_i^T G′(σ_i)b_i = c_i^T G̃′(σ_i)b_i

for i = 1, ..., r, provided σ_iE − A and σ_iẼ − Ã are both nonsingular. Here, G′(σ_i) and G̃′(σ_i) denote the first derivatives of G(s) and G̃(s), respectively, at s = σ_i.
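Conditions (2.4)-(2.6) translate directly into code: one linear solve per interpolation point and direction. A small dense sketch with invented data (E is kept nonsingular here, so the standard theory applies) verifying all three conditions in (2.6):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 10, 2, 2
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))

sigmas = [0.5, 1.0, 2.0]                         # interpolation points (assumed)
bs = [rng.standard_normal(m) for _ in sigmas]    # right tangential directions
cs = [rng.standard_normal(p) for _ in sigmas]    # left tangential directions

# (2.4)-(2.5): one shifted solve per point and direction
V = np.column_stack([np.linalg.solve(s * E - A, B @ b) for s, b in zip(sigmas, bs)])
W = np.column_stack([np.linalg.solve((s * E - A).T, C.T @ c) for s, c in zip(sigmas, cs)])
Et, At, Bt, Ct = W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

def G(s, E, A, B, C):
    return C @ np.linalg.solve(s * E - A, B)

def dG(s, E, A, B, C):                           # G'(s) = -C (sE-A)^{-1} E (sE-A)^{-1} B
    X = np.linalg.solve(s * E - A, B)
    return -C @ np.linalg.solve(s * E - A, E @ X)

for s, b, c in zip(sigmas, bs, cs):              # conditions (2.6)
    assert np.allclose(G(s, E, A, B, C) @ b, G(s, Et, At, Bt, Ct) @ b)
    assert np.allclose(c @ G(s, E, A, B, C), c @ G(s, Et, At, Bt, Ct))
    assert np.allclose(c @ dG(s, E, A, B, C) @ b, c @ dG(s, Et, At, Bt, Ct) @ b)
```

The same construction is what fails for singular E in the example that follows, which is precisely the point of the paper.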

Fig. 2.1. RLC circuit.

Note that Theorem 2.1 does not distinguish between the singular-E case and the standard state space case with E = I. In other words, the interpolation conditions hold as long as the matrices σ_iE − A and σ_iẼ − Ã are invertible. This is the precise reason why one might simply assume that extending model reduction from G(s) = C(sI − A)^{−1}B + D to G(s) = C(sE − A)^{−1}B + D is as simple as replacing I by E; see, for example, [26]. However, as the following example shows, this is not the case.

Example 2.1. Consider the RLC circuit shown in Figure 2.1, which is modeled by an index-2 SISO descriptor system (1.1); see section 3 for a definition of index. For q = 500, we obtain a descriptor system of large order n, which we approximate with a model (1.2) of order r = 20 using Hermite interpolation. The interpolation points were carefully chosen as the mirror images of the dominant poles of the transfer function G(s). Since such interpolation points are known to be good points for model reduction [14, 16], one would expect the interpolant to be a good approximation as well. However, the situation is exactly the opposite. Let ı = √−1 and let ω ∈ R denote the frequency in rad/sec. For the frequency range ω ∈ [10^8, 10^16], Figure 2.2 shows the amplitude Bode plots of the frequency responses G(ıω) and G̃(ıω) (upper plot) and the absolute error |G(ıω) − G̃(ıω)| (lower plot). One can see that the error grows unbounded as the frequency ω increases; hence the approximation is extremely poor, with unbounded H2 and H∞ error norms, even though it satisfies Hermite interpolation at carefully selected, effective interpolation points.

The reason is simple. Even though E is singular, Ẽ = W^T E V will generically be nonsingular for r ≤ rank(E). Then the transfer function G̃(s) of the reduced-order model (1.2) is proper, i.e., lim_{s→∞} G̃(s) < ∞, although G(s) might be improper.
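The mechanism behind the unbounded error is already visible in a scalar sketch. With invented parameter values (not the ones used in Example 2.1), the transfer function below has polynomial part sC_1, so its distance from any proper approximation, here simply its own strictly proper part, grows like ωC_1:

```python
import numpy as np

C1, C2, L1, G1 = 1e-6, 1e-6, 1e-3, 1.0       # assumed values, for illustration only
def G(s):
    # strictly proper part + polynomial part s*C1 (cf. Example 2.1 with q = 2)
    return s * C2 * G1 / (s**2 * C2 * L1 * G1 + s * C2 + G1) + s * C1

def G_proper(s):                              # a proper approximant: bounded as s -> infinity
    return s * C2 * G1 / (s**2 * C2 * L1 * G1 + s * C2 + G1)

for w in [1e6, 1e9, 1e12]:
    err = abs(G(1j * w) - G_proper(1j * w))   # equals w*C1: grows without bound
    print(f"w = {w:.0e}  |G - G_proper| = {err:.1e}")
```

Whatever strictly proper part the reduced model carries, the gap |G(ıω) − G̃(ıω)| is eventually dominated by the unmatched polynomial term, which is exactly what the lower plot of Figure 2.2 shows.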
In this case, G(s) can be additively decomposed as

(2.7)  G(s) = G_sp(s) + P(s),

where G_sp(s) and P(s) denote, respectively, the strictly proper part and the polynomial part of G(s). Hence, special care needs to be taken in order to match the polynomial part of G(s). Consider again the RLC circuit in Example 2.1. For q = 2, the system matrices E, A, B, and C, with B^T = C and D = 0, are assembled from the capacitances C_1, C_2, the inductance L_1, and the conductance G_1, and the transfer function

Fig. 2.2. Example 2.1: amplitude plots of G(ıω) and G̃(ıω) (upper); the absolute error |G(ıω) − G̃(ıω)| (lower).

G(s) = C(sE − A)^{−1}B = sC_2G_1 / (s²C_2L_1G_1 + sC_2 + G_1) + sC_1

has the strictly proper part G_sp(s) = sC_2G_1/(s²C_2L_1G_1 + sC_2 + G_1) and the polynomial part P(s) = sC_1.

We note that the polynomial part of G̃(s) has to match that of G(s) exactly. Otherwise, regardless of how good the interpolation points are, the error will always grow unbounded as s → ∞. For the very special case of descriptor systems with proper transfer functions, and only for interpolation around s = ∞, a solution to the above interpolation problem is offered in [7]. For descriptor systems of index 1, where the polynomial part of G(s) is a constant matrix, a remedy is also suggested in [3] by an appropriate choice of D̃. However, the general case has remained unsolved. We will tackle the problem where (1.1) is a descriptor system of higher index, its transfer function G(s) may have a higher-order polynomial part, and interpolation is at arbitrary points in the complex plane. Moreover, we will show how to choose interpolation points and tangential directions optimally for interpolatory model reduction of descriptor systems.

3. Interpolatory projection methods for descriptor systems. As stated above, in order to have bounded H∞ and H2 errors, the polynomial part of G̃(s) has to match the polynomial part of G(s) = G_sp(s) + P(s) exactly. We enforce the reduced-order model G̃(s) to have the decomposition G̃(s) = G̃_sp(s) + P̃(s) with P̃(s) = P(s). This implies that the error transfer function G_err(s) = G(s) − G̃(s) = G_sp(s) − G̃_sp(s) contains no polynomial part, i.e., it is strictly proper, meaning lim_{s→∞} G_err(s) = 0. Hence, if G̃_sp(s) interpolates G_sp(s), we will be able to enforce that G̃(s) interpolates G(s).

We now introduce the spectral projectors P_l and P_r, which will play a vital role in interpolatory model reduction. Let the pencil λE − A be transformed into the Weierstrass canonical form

(3.1)  E = S [ I_nf  0 ; 0  N ] T^{−1},  A = S [ J  0 ; 0  I_{n∞} ] T^{−1},

where S and T are nonsingular and N is nilpotent, i.e., N^ν = 0 for some ν > 0; see [12]. The smallest ν such that N^{ν−1} ≠ 0 and N^ν = 0 is called the index of the pencil λE − A and also of the descriptor system (1.1). The spectral projectors P_l and P_r onto the left and right deflating subspaces of λE − A corresponding to the finite eigenvalues, along the left and right deflating subspaces corresponding to the eigenvalue at infinity, are defined as

(3.2)  P_l = S [ I_nf  0 ; 0  0 ] S^{−1},  P_r = T [ I_nf  0 ; 0  0 ] T^{−1}.

See also [1, 2] for model reduction of DAEs using the concept of tractability index.

The following theorem provides projection matrices W and V satisfying new subspace conditions such that the reduced-order model G̃(s) obtained by projection as in (1.3) not only satisfies the interpolation conditions but also matches the polynomial part of G(s).

Theorem 3.1. Given a full-order model G(s) = C(sE − A)^{−1}B + D, let P_l and P_r be the spectral projectors onto the left and right deflating subspaces of the pencil λE − A corresponding to the finite eigenvalues. Let the columns of W_∞ and V_∞ span the left and right deflating subspaces of λE − A corresponding to the eigenvalue at infinity. Let σ, μ ∈ C be interpolation points such that sE − A and sẼ − Ã are nonsingular for s = σ, μ, and let b ∈ C^m and c ∈ C^p. Define V_f and W_f such that

(3.3)  Im(V_f) = span{ ((σE − A)^{−1}E)^{j−1}(σE − A)^{−1}P_l Bb, j = 1, ..., N },
(3.4)  Im(W_f) = span{ ((μE − A)^{−T}E^T)^{j−1}(μE − A)^{−T}P_r^T C^T c, j = 1, ..., M }.

Then with the choice of W = [W_f, W_∞] and V = [V_f, V_∞], the reduced-order model G̃(s) = C̃(sẼ − Ã)^{−1}B̃ + D̃ obtained via projection as in (1.3) satisfies
1. P̃(s) = P(s),
2. G^{(l)}(σ)b = G̃^{(l)}(σ)b for l = 0, 1, ..., N − 1, and
3.
c^T G^{(l)}(μ) = c^T G̃^{(l)}(μ) for l = 0, 1, ..., M − 1.
If σ = μ, we have, additionally, c^T G^{(l)}(σ)b = c^T G̃^{(l)}(σ)b for l = 0, ..., M + N − 1.

Proof. Let λE − A be in the Weierstrass canonical form (3.1), and let T = [T_1, T_2] and S^{−T} = [S_1, S_2] be partitioned according to E and A. Then the matrices W_∞ and V_∞ take the form

W_∞ = S_2 R_S = (I − P_l^T) W_∞,  V_∞ = T_2 R_T = (I − P_r) V_∞

with nonsingular R_S and R_T. Furthermore, the strictly proper and polynomial parts of G(s) in (2.7) are given by

G_sp(s) = C T_1 (sI_nf − J)^{−1} S_1^T B  and  P(s) = C T_2 (sN − I_{n∞})^{−1} S_2^T B + D,

respectively. It follows from (3.1) and (3.2) that

E P_r = P_l E,  A P_r = P_l A,  (sE − A)^{−1} P_l = P_r (sE − A)^{−1},

and, hence,

(3.5)  W_f = P_l^T W_f  and  V_f = P_r V_f.

Then the system matrices of the reduced-order model have the form

Ẽ = W^T E V = [ W_f^T E V_f  W_f^T E V_∞ ; W_∞^T E V_f  W_∞^T E V_∞ ] = [ W_f^T E V_f  0 ; 0  W_∞^T E V_∞ ],
Ã = W^T A V = [ W_f^T A V_f  W_f^T A V_∞ ; W_∞^T A V_f  W_∞^T A V_∞ ] = [ W_f^T A V_f  0 ; 0  W_∞^T A V_∞ ],
B̃ = W^T B = [ W_f^T B ; W_∞^T B ],  C̃ = C V = [C V_f, C V_∞],  D̃ = D.

Thus, the strictly proper and polynomial parts of G̃(s) are given by

G̃_sp(s) = C V_f (s W_f^T E V_f − W_f^T A V_f)^{−1} W_f^T B,
P̃(s) = C V_∞ (s W_∞^T E V_∞ − W_∞^T A V_∞)^{−1} W_∞^T B + D = C T_2 (sN − I)^{−1} S_2^T B + D = P(s).

One can see that the polynomial parts of G(s) and G̃(s) coincide, and the proof of the interpolation result reduces to proving the interpolation conditions for the strictly proper parts of G(s) and G̃(s). To prove these, we first note that (3.1) and (3.2) imply

C P_r (σE − A)^{−1} P_l B = C T_1 (σI_nf − J)^{−1} S_1^T B = G_sp(σ).

Furthermore, it follows from the relations (3.5) that

C P_r V_f = C V_f,  W_f^T P_l B = W_f^T B.

Due to the definitions of V_f and W_f in (3.3) and (3.4), respectively, Theorem 2.1 gives

G̃_sp(σ)b = C P_r V_f (σ W_f^T E V_f − W_f^T A V_f)^{−1} W_f^T P_l B b = G_sp(σ)b,
c^T G̃_sp(μ) = c^T C P_r V_f (μ W_f^T E V_f − W_f^T A V_f)^{−1} W_f^T P_l B = c^T G_sp(μ).

Since both parts 1 and 2 of Theorem 2.1 hold, we also have c^T G̃_sp′(σ)b = c^T G_sp′(σ)b for σ = μ. The other interpolatory relations for the derivatives of the transfer function can be proved analogously.

Next, we illustrate that even though Theorem 3.1 has a structure very similar to that of Theorem 2.1, replacing B and C in (2.2) and (2.3) by P_l B and C P_r,

Fig. 3.1. Amplitude plots of G(ıω) and G̃(ıω) (upper); the absolute error |G(ıω) − G̃(ıω)| (lower).

respectively, and including bases for the deflating subspaces corresponding to the eigenvalue at infinity in the reduction subspaces makes a big difference in the resulting reduced-order model. Toward this goal, we revisit Example 2.1. We reduce the same full-order model using the same interpolation points but imposing the subspace conditions of Theorem 3.1 instead. Figure 3.1 depicts the resulting amplitude plots of G(ıω) and G̃(ıω) (upper plot) and of the error |G(ıω) − G̃(ıω)| (lower plot) when the new subspace conditions of Theorem 3.1 are used. Unlike in Example 2.1, where the error grew unbounded, for the new reduced-order model the maximum error is below 10^{−2}, and the error decays to zero as ω approaches ∞, since the polynomial part is captured exactly.

In some applications, the deflating subspaces of λE − A corresponding to the eigenvalue at infinity may have large dimension n_∞. However, the order of the system can still be reduced if it contains states that are uncontrollable and unobservable at infinity. Such states can be removed from the system without changing its transfer function, hence preserving the interpolation conditions as in Theorem 3.1. In this case the projection matrices W_∞ and V_∞ can be determined as proposed in [29] by solving the projected discrete-time Lyapunov equations

(3.6)  A X A^T − E X E^T = (I − P_l) B B^T (I − P_l)^T,  X = (I − P_r) X (I − P_r)^T,
(3.7)  A^T Y A − E^T Y E = (I − P_r)^T C^T C (I − P_r),  Y = (I − P_l)^T Y (I − P_l).

Let X_C and Y_C be the Cholesky factors of X = X_C X_C^T and Y = Y_C Y_C^T, respectively, and let Y_C^T A X_C = [U_1, U_0] diag(Σ, 0) [V_1, V_0]^T be a singular value decomposition, where [U_1, U_0] and [V_1, V_0] are orthogonal and Σ is nonsingular.
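The projector identities used throughout this section can be verified on a synthetic pencil assembled directly from the Weierstrass form (3.1) with random S and T (a sketch, unrelated to the circuit example):

```python
import numpy as np

rng = np.random.default_rng(3)
nf, ninf = 4, 2
n = nf + ninf
S = rng.standard_normal((n, n)) + 2.0 * np.eye(n)
T = rng.standard_normal((n, n)) + 2.0 * np.eye(n)
J = -np.eye(nf) + 0.1 * rng.standard_normal((nf, nf))
N = np.diag(np.ones(ninf - 1), 1)     # nilpotent: N^2 = 0, so the index is nu = 2

def blkdiag(X, Y):
    return np.block([[X, np.zeros((X.shape[0], Y.shape[1]))],
                     [np.zeros((Y.shape[0], X.shape[1])), Y]])

E = S @ blkdiag(np.eye(nf), N) @ np.linalg.inv(T)          # (3.1)
A = S @ blkdiag(J, np.eye(ninf)) @ np.linalg.inv(T)

d = np.diag([1.0] * nf + [0.0] * ninf)
Pl = S @ d @ np.linalg.inv(S)                              # (3.2)
Pr = T @ d @ np.linalg.inv(T)

assert np.allclose(Pl @ Pl, Pl) and np.allclose(Pr @ Pr, Pr)   # projectors
assert np.allclose(E @ Pr, Pl @ E) and np.allclose(A @ Pr, Pl @ A)
assert np.allclose(np.linalg.matrix_power(N, 2), 0)
```

The relations E P_r = P_l E and A P_r = P_l A checked here are exactly the ones that make the reduced pencil block-diagonal in the proof of Theorem 3.1.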
Then the projection matrices W_∞ and V_∞ can be taken as W_∞ = Y_C U_1 and V_∞ = X_C V_1. Note that the Cholesky factors X_C and Y_C can be computed directly using the generalized Smith method [30]. In this method, it is required to solve only ν − 1 linear systems,

where ν is the index of the pencil λE − A or, equivalently, the nilpotency index of N in (3.1). The computation of the projectors P_l and P_r is, in general, a difficult problem. However, for some structured problems arising in circuit simulation, multibody systems, and computational fluid dynamics, these projectors can be constructed in explicit form, which significantly reduces the computational complexity of the method; see [30] for details.

4. Interpolatory optimal H2 model reduction for descriptor systems. The choice of interpolation points and tangential directions is the central issue in interpolatory model reduction. This choice determines whether the reduced-order model is of high fidelity. Until recently, the selection of interpolation points was largely ad hoc and required several model reduction attempts to arrive at a reasonable approximation. However, Gugercin, Antoulas, and Beattie [18] introduced an interpolatory model reduction method for generating a reduced model G̃ of order r which is an optimal H2 approximation to the original system G in the sense that it minimizes the H2 error, i.e.,

(4.1)  ‖G − G̃‖_H2 = min_{dim(G̃_r)=r} ‖G − G̃_r‖_H2,

where

(4.2)  ‖G‖_H2 := ( (1/(2π)) ∫_{−∞}^{+∞} ‖G(ıω)‖_F² dω )^{1/2}

and ‖·‖_F denotes the Frobenius matrix norm. Since this is a nonconvex optimization problem, the computation of a global minimizer is a very difficult task. Hence, one instead tries to find high-fidelity reduced models that satisfy first-order necessary optimality conditions. There exist, in general, two approaches for solving this problem: the Lyapunov-based optimal H2 methods presented in [19, 21, 28, 32, 33, 36] and the interpolation-based optimal H2 methods considered in [4, 6, 8, 15, 17, 18, 22, 25, 31].
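For intuition, in the standard state space case (E = I, A stable) the H2-norm in (4.2) can be computed from a controllability Gramian, ‖G‖²_H2 = trace(CPC^T) with AP + PA^T + BB^T = 0. A small sketch using a dense Kronecker solve (adequate only for tiny n; a hypothetical helper, not the authors' code):

```python
import numpy as np

def h2_norm(A, B, C):
    """||G||_H2 for G(s) = C (sI - A)^{-1} B with A stable (standard case, E = I).
    Solves the Lyapunov equation A P + P A^T + B B^T = 0 by vectorization:
    (I kron A + A kron I) vec(P) = -vec(B B^T)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    rhs = -(B @ B.T).reshape(-1, order="F")
    P = np.linalg.solve(K, rhs).reshape((n, n), order="F")
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# check against the analytic value for G(s) = 1/(s + a): ||G||_H2 = 1/sqrt(2a)
a = 3.0
val = h2_norm(np.array([[-a]]), np.array([[1.0]]), np.array([[1.0]]))
print(val, 1.0 / np.sqrt(2 * a))   # the two values agree
```

At scale one would of course use a dedicated Lyapunov solver rather than the Kronecker formulation; this is precisely the cost that the interpolatory methods below avoid.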
While the Lyapunov-based approaches require solving a series of Lyapunov equations, which becomes costly and sometimes intractable in large-scale settings, the interpolatory approaches only require solving a series of sparse linear systems and have proved to be numerically very effective. Moreover, as shown in [18], both frameworks are theoretically equivalent, which further motivates the use of interpolatory model reduction techniques for optimal H2 approximation. For SISO systems, interpolation-based H2 optimality conditions were originally developed by Meier and Luenberger [25]. Based on these conditions, an effective algorithm for interpolatory optimal H2 approximation, called the iterative rational Krylov algorithm (IRKA), was introduced in [15, 17]. This algorithm has recently been extended to MIMO systems using the tangential interpolation framework; see [8, 18, 31] for more details. The model reduction methods mentioned above, however, only deal with systems G(s) = C(sE − A)^{−1}B + D with a nonsingular matrix E. In this section, we extend IRKA to descriptor systems. First, we establish the interpolatory H2 optimality conditions in the new setting.

Theorem 4.1. Let G(s) = G_sp(s) + P(s) be decomposed into strictly proper and polynomial parts, and let G̃(s) = G̃_sp(s) + P̃(s) have an rth-order strictly proper part G̃_sp(s) = C̃_sp(sẼ_sp − Ã_sp)^{−1}B̃_sp.

1. If G̃(s) minimizes the H2 error ‖G − G̃‖_H2 over all reduced-order models with an rth-order strictly proper part, then P̃(s) = P(s) and G̃_sp(s) minimizes the H2 error ‖G_sp − G̃_sp‖_H2.
2. Suppose that the pencil sẼ_sp − Ã_sp has distinct eigenvalues {λ̃_i}_{i=1}^r. Let y_i and z_i denote the left and right eigenvectors associated with λ̃_i so that Ã_sp z_i = λ̃_i Ẽ_sp z_i, y_i^* Ã_sp = λ̃_i y_i^* Ẽ_sp, and y_i^* Ẽ_sp z_j = δ_ij. Then for c̃_i = C̃_sp z_i and b̃_i^T = y_i^* B̃_sp, we have

(4.3)  G(−λ̃_i)b̃_i = G̃(−λ̃_i)b̃_i,  c̃_i^T G(−λ̃_i) = c̃_i^T G̃(−λ̃_i),  c̃_i^T G′(−λ̃_i)b̃_i = c̃_i^T G̃′(−λ̃_i)b̃_i

for i = 1, ..., r.

Proof. 1. The polynomial parts of G(s) and G̃(s) must coincide, since otherwise the H2-norm of the error G(s) − G̃(s) would be unbounded. It then readily follows that G̃_sp(s) minimizes ‖G_sp − G̃_sp‖_H2, since G(s) − G̃(s) = G_sp(s) − G̃_sp(s).
2. Since P̃(s) = P(s), the H2 optimal model reduction problem for G(s) reduces to the H2 optimal problem for the strictly proper transfer function G_sp(s). Hence, the H2 optimality conditions of [18] require that G̃_sp(s) be a bi-tangential Hermite interpolant to G_sp(s), with {−λ̃_i}_{i=1}^r being the interpolation points and {c̃_i}_{i=1}^r and {b̃_i}_{i=1}^r being the corresponding left and right tangential directions, respectively. The interpolation conditions (4.3) then hold since P̃(s) = P(s).

Unfortunately, the H2 optimal interpolation points and associated tangential directions are not known a priori, since they depend on the reduced-order model to be computed. To overcome this difficulty, the iterative algorithm IRKA was developed in [15, 17], based on successive substitution. In IRKA, the interpolation points are corrected iteratively by choosing the mirror images of the poles of the current reduced-order model as the next interpolation points. The tangential directions are corrected in a similar way; see [3, 18] for details.
The situation for descriptor systems is similar, except that the optimal interpolation points and the corresponding tangential directions depend on the strictly proper part of the reduced-order model to be computed. Moreover, we need to make sure that the final reduced model has the same polynomial part as the original one. Hence, we modify IRKA to meet these challenges. In particular, at each iteration step we correct not the poles and tangential directions of the intermediate reduced-order model but those of its strictly proper part. As in Theorem 3.1, the spectral projectors P_l and P_r are used to construct the required interpolatory subspaces. A sketch of the resulting model reduction method is given in Algorithm 4.1.

Note that until step 4 of Algorithm 4.1, the polynomial part is not included, since the interpolation parameters result from the strictly proper part G̃_sp(s). In a sense, step 3 runs the optimal H2 iteration on G_sp(s). Hence, at the end of step 3, we have constructed an optimal H2 interpolant to G_sp(s). In step 5, however, we append the interpolatory subspaces with V_∞ and W_∞ (which can be computed as described at the end of section 3) so that the final reduced-order model in step 6 has the same polynomial part as G(s); consequently, the final reduced-order model G̃(s) satisfies the optimality conditions of Theorem 4.1. One can see this from step 3c: upon convergence, the interpolation points are the mirror images of the poles of G̃_sp(s), and

Algorithm 4.1. Interpolatory H2 optimal model reduction method for descriptor systems.
1. Make an initial selection of the interpolation points {σ_i}_{i=1}^r and the tangential directions {b_i}_{i=1}^r and {c_i}_{i=1}^r.
2. V_f = [(σ_1E − A)^{−1}P_lBb_1, ..., (σ_rE − A)^{−1}P_lBb_r],
   W_f = [(σ_1E − A)^{−T}P_r^TC^Tc_1, ..., (σ_rE − A)^{−T}P_r^TC^Tc_r].
3. while (not converged)
   a. Ã_sp = W_f^TAV_f, Ẽ_sp = W_f^TEV_f, B̃_sp = W_f^TB, and C̃_sp = CV_f.
   b. Compute Ã_sp z_i = λ̃_i Ẽ_sp z_i and y_i^* Ã_sp = λ̃_i y_i^* Ẽ_sp with y_i^* Ẽ_sp z_j = δ_ij, where y_i and z_i are the left and right eigenvectors associated with λ̃_i.
   c. σ_i ← −λ̃_i, b_i^T ← y_i^* B̃_sp, and c_i ← C̃_sp z_i for i = 1, ..., r.
   d. V_f = [(σ_1E − A)^{−1}P_lBb_1, ..., (σ_rE − A)^{−1}P_lBb_r],
      W_f = [(σ_1E − A)^{−T}P_r^TC^Tc_1, ..., (σ_rE − A)^{−T}P_r^TC^Tc_r].
   end while
4. Compute W_∞ and V_∞ such that Im(W_∞) = Im(I − P_l^T) and Im(V_∞) = Im(I − P_r).
5. Set V = [V_f, V_∞] and W = [W_f, W_∞].
6. Ẽ = W^TEV, Ã = W^TAV, B̃ = W^TB, C̃ = CV, D̃ = D.

the tangential directions are the residue directions from G̃_sp(s), as the optimality conditions require. Since Algorithm 4.1 uses the projected quantities P_lB and CP_r, theoretically iterating on a strictly proper dynamical system, the convergence behavior of this algorithm follows the same pattern as that of IRKA, which has been observed to converge rapidly in numerous numerical applications.

Summarizing, we have shown so far how to reduce descriptor systems such that the transfer function of the reduced descriptor system is a tangential interpolant to the original one and matches its polynomial part, preventing unbounded H∞ and H2 error norms. However, this model reduction approach involves the explicit computation of the spectral projectors or of the corresponding deflating subspaces, which could be numerically infeasible for general large-scale problems.
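The fixed-point character of steps 2-3 of Algorithm 4.1 is easiest to see in the standard state space case, where P_l = P_r = I and the polynomial part vanishes, so the iteration collapses to the original SISO IRKA of [15, 17]. A compact sketch on an assumed symmetric test system (not the authors' implementation):

```python
import numpy as np

def irka_siso(E, A, B, C, sigmas, maxit=50, tol=1e-8):
    # Steps 2-3 of Algorithm 4.1 in the standard case P_l = P_r = I (SISO).
    s = np.array(sigmas, dtype=float)
    for _ in range(maxit):
        V = np.column_stack([np.linalg.solve(si * E - A, B).ravel() for si in s])
        W = np.column_stack([np.linalg.solve((si * E - A).T, C.T).ravel() for si in s])
        lam = np.linalg.eigvals(np.linalg.solve(W.T @ E @ V, W.T @ A @ V))
        s_new = np.sort(-lam.real)        # step 3c: mirror images of the reduced poles
        if np.max(np.abs(s_new - s)) <= tol * np.max(np.abs(s)):
            break
        s = s_new
    # rebuild the bases for the final shifts so interpolation holds at the returned points
    V = np.column_stack([np.linalg.solve(si * E - A, B).ravel() for si in s])
    W = np.column_stack([np.linalg.solve((si * E - A).T, C.T).ravel() for si in s])
    return s, V, W

n = 6
A = -np.diag(np.arange(1.0, n + 1))       # small symmetric stable system (assumed)
E, B, C = np.eye(n), np.ones((n, 1)), np.ones((1, n))
s, V, W = irka_siso(E, A, B, C, sigmas=[0.5, 4.0])

Et, At, Bt, Ct = W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V
G = lambda z, E_, A_, B_, C_: (C_ @ np.linalg.solve(z * E_ - A_, B_))[0, 0]
errs = [abs(G(si, E, A, B, C) - G(si, Et, At, Bt, Ct)) for si in s]
assert max(errs) < 1e-6                   # Hermite interpolation at the final shifts
```

The descriptor version differs only in using P_lB and C P_r inside the solves (steps 2 and 3d) and appending V_∞ and W_∞ afterwards (steps 4-6) so that the polynomial part is matched.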
In the next two sections, we show that for certain important classes of descriptor systems the same goal can be achieved without explicitly forming the spectral projectors.

5. Semiexplicit descriptor systems of index 1. We consider the semiexplicit descriptor system

(5.1)  E_11 ẋ_1(t) + E_12 ẋ_2(t) = A_11 x_1(t) + A_12 x_2(t) + B_1 u(t),
       0 = A_21 x_1(t) + A_22 x_2(t) + B_2 u(t),
       y(t) = C_1 x_1(t) + C_2 x_2(t) + D u(t),

where the state is x(t) = [x_1^T(t), x_2^T(t)]^T ∈ R^n with x_1(t) ∈ R^{n_1}, x_2(t) ∈ R^{n_2}, and n_1 + n_2 = n, the input is u(t) ∈ R^m, the output is y(t) ∈ R^p, and E_11, A_11 ∈ R^{n_1×n_1}, E_12, A_12 ∈ R^{n_1×n_2}, A_21 ∈ R^{n_2×n_1}, A_22 ∈ R^{n_2×n_2}, B_1 ∈ R^{n_1×m}, B_2 ∈ R^{n_2×m}, C_1 ∈ R^{p×n_1}, C_2 ∈ R^{p×n_2}, D ∈ R^{p×m}. We assume that A_22 and E_11 − E_12A_22^{−1}A_21 are both nonsingular. In this case system (5.1) is of index 1. We now compute the polynomial part of this system.

Proposition 5.1. Let G(s) be the transfer function of the descriptor system (5.1), where A_22 and E_11 − E_12A_22^{−1}A_21 are both nonsingular. Then the polynomial part of G(s) is a constant matrix given by

P(s) = C_1M_1B_2 + C_2M_2B_2 + D,

where

(5.2)  M_1 = (E_11 − E_12A_22^{−1}A_21)^{−1}E_12A_22^{−1},
(5.3)  M_2 = −A_22^{−1}A_21(E_11 − E_12A_22^{−1}A_21)^{−1}E_12A_22^{−1} − A_22^{−1}.

Proof. Consider

(sE − A)^{−1}B = [ sE_11 − A_11  sE_12 − A_12 ; −A_21  −A_22 ]^{−1} [ B_1 ; B_2 ] = [ F_1(s) ; F_2(s) ].

This leads to

(5.4)  (sE_11 − A_11)F_1(s) + (sE_12 − A_12)F_2(s) = B_1,
(5.5)  −A_21F_1(s) − A_22F_2(s) = B_2.

Solving (5.5) for F_2(s) gives F_2(s) = −A_22^{−1}(B_2 + A_21F_1(s)), and, thus,

F_1(s) = ((sE_11 − A_11) − (sE_12 − A_12)A_22^{−1}A_21)^{−1}(B_1 + (sE_12 − A_12)A_22^{−1}B_2),

implying that

lim_{s→∞} F_1(s) = (E_11 − E_12A_22^{−1}A_21)^{−1}E_12A_22^{−1}B_2.

Taking (5.5) into account, we have

lim_{s→∞} F_2(s) = [−A_22^{−1}A_21(E_11 − E_12A_22^{−1}A_21)^{−1}E_12A_22^{−1} − A_22^{−1}]B_2.

Finally, note that P(s) = lim_{s→∞} G(s) = lim_{s→∞}(C_1F_1(s) + C_2F_2(s) + D), which leads to the desired conclusion.

We are now ready to state the interpolation result for the descriptor system (5.1). This result was briefly hinted at in the recent survey [3]. Here, we present it with a formal proof together with the formula for P(s) developed in Proposition 5.1. As our main focus is H2-based model reduction, we list the interpolation conditions only for bi-tangential Hermite interpolation. The extension to higher-order derivative interpolation is straightforward, as shown in the earlier sections.

Lemma 5.2. Let G(s) be the transfer function of the semiexplicit descriptor system (5.1). For given r distinct interpolation points {σ_i}_{i=1}^r, left tangential directions {c_i}_{i=1}^r, and right tangential directions {b_i}_{i=1}^r, let V ∈ C^{n×r} and W ∈ C^{n×r} be given by

(5.6)  V = [(σ_1E − A)^{−1}Bb_1, ..., (σ_rE − A)^{−1}Bb_r],
(5.7)  W = [(σ_1E − A)^{−T}C^Tc_1, ..., (σ_rE − A)^{−T}C^Tc_r].

Furthermore, let B and C be the matrices composed of the tangential directions as

(5.8)  B = [b_1, ..., b_r]  and  C = [c_1, ..., c_r].
Define the reduced-order system matrices as

(5.9)  $\widetilde E = W^TEV$, $\widetilde A = W^TAV + \mathbb{C}^T(\widetilde D - D)\mathbb{B}$, $\widetilde B = W^TB - \mathbb{C}^T(\widetilde D - D)$, $\widetilde C = CV - (\widetilde D - D)\mathbb{B}$, $\widetilde D = C_1M_1B_2 + C_2M_2B_2 + D$.

Then, provided $\widetilde E$ is nonsingular, the polynomial parts of $\widetilde G(s) = \widetilde C(s\widetilde E - \widetilde A)^{-1}\widetilde B + \widetilde D$ and $G(s)$ match, and $\widetilde G(s)$ satisfies the bi-tangential Hermite interpolation conditions
$$G(\sigma_i)b_i = \widetilde G(\sigma_i)b_i,\qquad c_i^TG(\sigma_i) = c_i^T\widetilde G(\sigma_i),\qquad c_i^TG'(\sigma_i)b_i = c_i^T\widetilde G'(\sigma_i)b_i$$
for $i = 1,\ldots,r$, provided $\sigma_iE - A$ and $\sigma_i\widetilde E - \widetilde A$ are both nonsingular.

Proof. Since $\widetilde E$ is nonsingular, $\lim_{s\to\infty}\widetilde G(s) = \widetilde D$. But by Proposition 5.1, we have $\widetilde D = \lim_{s\to\infty}G(s)$, ensuring that the polynomial parts of $G(s)$ and $\widetilde G(s)$ coincide. The interpolation property is a result of [5, 24], where it is shown that the appropriate shifting of the reduced-order quantities by a nonzero feedthrough term, as done in (5.9), attains the original bi-tangential interpolation conditions hidden in $V$ and $W$ of (5.6) and (5.7), respectively.

This result leads to Algorithm 5.1, which achieves bi-tangential Hermite interpolation of the semiexplicit descriptor system (5.1) without explicitly forming the spectral projectors.

Algorithm 5.1. Interpolatory model reduction for semiexplicit descriptor systems of index 1.
1. Make an initial selection of the interpolation points $\{\sigma_i\}_{i=1}^r$ and the tangential directions $\{b_i\}_{i=1}^r$ and $\{c_i\}_{i=1}^r$.
2. $V = [(\sigma_1E-A)^{-1}Bb_1,\ \ldots,\ (\sigma_rE-A)^{-1}Bb_r]$,
   $W = [(\sigma_1E-A)^{-T}C^Tc_1,\ \ldots,\ (\sigma_rE-A)^{-T}C^Tc_r]$.
3. Define $\widetilde D = C_1M_1B_2 + C_2M_2B_2 + D$, where $M_1$ and $M_2$ are defined in (5.2) and (5.3), respectively.
4. Define $\mathbb{B} = [b_1,\ldots,b_r]$ and $\mathbb{C} = [c_1,\ldots,c_r]$.
5. $\widetilde E = W^TEV$, $\widetilde A = W^TAV + \mathbb{C}^T(\widetilde D - D)\mathbb{B}$, $\widetilde B = W^TB - \mathbb{C}^T(\widetilde D - D)$, $\widetilde C = CV - (\widetilde D - D)\mathbb{B}$.

We want to emphasize that the assumption in Lemma 5.2 that $\widetilde E$ is nonsingular is not restrictive; this will generically be the case. If $W$ and $V$ are full-rank $n\times r$ matrices, the rank of the $r\times r$ matrix $\widetilde E = W^TEV$ will generically be $r$ as long as $\mathrm{rank}(E) > r$. The fact that $\widetilde E$ will be full rank is indeed the precise reason why we cannot simply apply Theorem 2.1 to descriptor systems.

5.1. Optimal $\mathcal{H}_2$ model reduction for semiexplicit descriptor systems.
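Algorithm 5.1 can be sketched compactly in NumPy. The system, shifts, and tangential directions below are randomly generated illustrative assumptions, not data from the paper; the sketch builds $V$, $W$, the matched feedthrough $\widetilde D$, the shifted reduced matrices of (5.9), and then checks the bi-tangential Hermite conditions of Lemma 5.2:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m, p, r = 6, 3, 2, 2, 2

# Random semiexplicit index-1 system (5.1)
E11, E12 = rng.standard_normal((n1, n1)), rng.standard_normal((n1, n2))
A11, A12 = rng.standard_normal((n1, n1)), rng.standard_normal((n1, n2))
A21, A22 = rng.standard_normal((n2, n1)), rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, m)), rng.standard_normal((n2, m))
C1, C2 = rng.standard_normal((p, n1)), rng.standard_normal((p, n2))
D = rng.standard_normal((p, m))
E = np.block([[E11, E12], [np.zeros((n2, n1)), np.zeros((n2, n2))]])
A = np.block([[A11, A12], [A21, A22]])
B, C = np.vstack([B1, B2]), np.hstack([C1, C2])

G = lambda s: C @ np.linalg.solve(s * E - A, B) + D
dG = lambda s: -C @ np.linalg.solve(s * E - A, E @ np.linalg.solve(s * E - A, B))

# Steps 1-2: shifts (assumed), directions, and the bases V, W
sigma = np.array([0.7, 1.9])
bt = rng.standard_normal((m, r))             # columns are b_i
ct = rng.standard_normal((p, r))             # columns are c_i
V = np.column_stack([np.linalg.solve(s * E - A, B @ bt[:, i])
                     for i, s in enumerate(sigma)])
W = np.column_stack([np.linalg.solve((s * E - A).T, C.T @ ct[:, i])
                     for i, s in enumerate(sigma)])

# Steps 3-5: matched feedthrough and the shifted reduced matrices (5.9)
S = E11 - E12 @ np.linalg.solve(A22, A21)
M1 = np.linalg.solve(S, E12) @ np.linalg.inv(A22)
M2 = -np.linalg.solve(A22, A21 @ M1) - np.linalg.inv(A22)
Dt = C1 @ M1 @ B2 + C2 @ M2 @ B2 + D
Dd = Dt - D                                   # the shift (Dtilde - D)
Et = W.T @ E @ V
At = W.T @ A @ V + ct.T @ Dd @ bt
Bt = W.T @ B - ct.T @ Dd
Ct = C @ V - Dd @ bt

Gt = lambda s: Ct @ np.linalg.solve(s * Et - At, Bt) + Dt
dGt = lambda s: -Ct @ np.linalg.solve(s * Et - At,
                                      Et @ np.linalg.solve(s * Et - At, Bt))

for i, s in enumerate(sigma):   # value and Hermite residuals, near round-off
    print(np.linalg.norm(Gt(s) @ bt[:, i] - G(s) @ bt[:, i]),
          np.linalg.norm(ct[:, i] @ Gt(s) - ct[:, i] @ G(s)),
          abs(ct[:, i] @ dGt(s) @ bt[:, i] - ct[:, i] @ dG(s) @ bt[:, i]))
```

With dense solves replaced by sparse factorizations, the same skeleton applies at large scale.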
Lemma 5.2 provides the theoretical basis for an IRKA-based iteration for $\mathcal{H}_2$ model reduction of semiexplicit descriptor systems. One naive approach would be the following: given system (5.1), simply apply IRKA of [18] to obtain an intermediate reduced-order model $\widehat G(s) = \widehat C(s\widehat E - \widehat A)^{-1}\widehat B + \widehat D$. Of course, this will generically be an ODE and will not necessarily match the behavior of $G(s)$ around $s = \infty$. Thus, applying Lemma 5.2, we obtain the final reduced model $\widetilde G(s) = \widetilde C(s\widetilde E - \widetilde A)^{-1}\widetilde B + \widetilde D$ with

(5.10)  $\widetilde E = \widehat E$, $\widetilde A = \widehat A + \mathbb{C}^T(\widetilde D - D)\mathbb{B}$, $\widetilde B = \widehat B - \mathbb{C}^T(\widetilde D - D)$, $\widetilde C = \widehat C - (\widetilde D - D)\mathbb{B}$,

where $\widetilde D$ is defined as in Lemma 5.2. While this shifting of the intermediate matrices by the $\widetilde D$-term guarantees that the polynomial parts of $G(s)$ and $\widetilde G(s)$ match, the $\mathcal{H}_2$ optimality conditions will no longer be satisfied. The reason is as follows. Recall that $\mathcal{H}_2$ optimality requires bi-tangential Hermite interpolation at the mirror images of the reduced-order poles. The intermediate model $\widehat G(s)$ satisfies these conditions, but since it does not match the polynomial part, the resulting $\mathcal{H}_2$ error is unbounded.

By constructing $\widetilde G(s)$ as in (5.10), we enforce the matching of the polynomial part, and $\widetilde G(s)$ still interpolates $G(s)$ at the same interpolation points as $\widehat G(s)$, i.e., at the mirror images of the poles of $\widehat G(s)$. However, due to (5.10), the poles of $\widehat G(s)$ and $\widetilde G(s)$ are different; thus $\widetilde G(s)$ will no longer satisfy the necessary conditions for $\mathcal{H}_2$ optimality. In order to achieve both the mirror-image interpolation conditions and the polynomial part matching, the $\widetilde D$-term modification must be included throughout the iteration, not just at the end. This results in Algorithm 5.2.

Algorithm 5.2. IRKA for semiexplicit descriptor systems of index 1.
1. Make an initial shift selection $\{\sigma_i\}_{i=1}^r$ and initial tangential directions $\{b_i\}_{i=1}^r$ and $\{c_i\}_{i=1}^r$.
2. $V = [(\sigma_1E-A)^{-1}Bb_1,\ \ldots,\ (\sigma_rE-A)^{-1}Bb_r]$,
   $W = [(\sigma_1E-A)^{-T}C^Tc_1,\ \ldots,\ (\sigma_rE-A)^{-T}C^Tc_r]$.
3. Define $\widetilde D = C_1M_1B_2 + C_2M_2B_2 + D$, where $M_1$ and $M_2$ are defined in (5.2) and (5.3), respectively.
4. Define $\mathbb{B} = [b_1,\ldots,b_r]$ and $\mathbb{C} = [c_1,\ldots,c_r]$.
5. while (not converged)
   a. $\widetilde E = W^TEV$, $\widetilde A = W^TAV + \mathbb{C}^T(\widetilde D-D)\mathbb{B}$, $\widetilde B = W^TB - \mathbb{C}^T(\widetilde D-D)$, $\widetilde C = CV - (\widetilde D-D)\mathbb{B}$.
   b. Compute $Y^*\widetilde AZ = \mathrm{diag}(\lambda_1,\ldots,\lambda_r)$ and $Y^*\widetilde EZ = I_r$, where the columns of $Z = [z_1,\ldots,z_r]$ and $Y = [y_1,\ldots,y_r]$ are, respectively, the right and left eigenvectors of $\lambda\widetilde E - \widetilde A$.
   c. $\sigma_i \leftarrow -\lambda_i$, $b_i^T \leftarrow y_i^*\widetilde B$, and $c_i \leftarrow \widetilde Cz_i$ for $i = 1,\ldots,r$.
   d. $V = [(\sigma_1E-A)^{-1}Bb_1,\ \ldots,\ (\sigma_rE-A)^{-1}Bb_r]$,
      $W = [(\sigma_1E-A)^{-T}C^Tc_1,\ \ldots,\ (\sigma_rE-A)^{-T}C^Tc_r]$.
   end while
6. $\widetilde E = W^TEV$, $\widetilde A = W^TAV + \mathbb{C}^T(\widetilde D-D)\mathbb{B}$, $\widetilde B = W^TB - \mathbb{C}^T(\widetilde D-D)$, $\widetilde C = CV - (\widetilde D-D)\mathbb{B}$.

The next result is a restatement of the above discussion.

Corollary 5.3. Let $G(s)$ be the transfer function of the semiexplicit descriptor system (5.1) and let $\widetilde G(s) = \widetilde C(s\widetilde E - \widetilde A)^{-1}\widetilde B + \widetilde D$ be obtained by Algorithm 5.2. Then $\widetilde G(s)$ satisfies the first-order necessary conditions of the $\mathcal{H}_2$ optimal model reduction problem.

5.2. Supersonic inlet flow example. Consider the Euler equations modeling the unsteady flow through a supersonic diffuser, as described in [23].
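The iteration of Algorithm 5.2 can be sketched in a few lines of NumPy. The example below runs it on a small randomly generated semiexplicit system built to have stable finite dynamics (all data are illustrative assumptions). For brevity it uses a fixed iteration count and keeps complex arithmetic throughout; a production code would realify the bases, monitor shift convergence, and guard against ill-conditioned solves:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m, p, r = 8, 3, 2, 2, 2

# Random semiexplicit index-1 system with a strongly stable A11 block
Q = rng.standard_normal((n1, n1))
A11 = -(Q @ Q.T + np.eye(n1))
E11 = np.eye(n1); E12 = 0.1 * rng.standard_normal((n1, n2))
A12 = 0.1 * rng.standard_normal((n1, n2)); A21 = 0.1 * rng.standard_normal((n2, n1))
A22 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1, B2 = rng.standard_normal((n1, m)), rng.standard_normal((n2, m))
C1, C2 = rng.standard_normal((p, n1)), rng.standard_normal((p, n2))
D = rng.standard_normal((p, m))
E = np.block([[E11, E12], [np.zeros((n2, n1)), np.zeros((n2, n2))]])
A = np.block([[A11, A12], [A21, A22]])
B, C = np.vstack([B1, B2]), np.hstack([C1, C2])
G = lambda s: C @ np.linalg.solve(s * E - A, B) + D

# Matched feedthrough (steps 3-4); constant throughout the iteration
S = E11 - E12 @ np.linalg.solve(A22, A21)
M1 = np.linalg.solve(S, E12) @ np.linalg.inv(A22)
M2 = -np.linalg.solve(A22, A21 @ M1) - np.linalg.inv(A22)
Dt = C1 @ M1 @ B2 + C2 @ M2 @ B2 + D
Dd = Dt - D

def build(sigma, bt, ct):
    # Step 5a: bases and D-shifted reduced matrices (plain transpose, per (5.9))
    V = np.column_stack([np.linalg.solve(s * E - A, B @ bt[:, i])
                         for i, s in enumerate(sigma)])
    W = np.column_stack([np.linalg.solve((s * E - A).T, C.T @ ct[:, i])
                         for i, s in enumerate(sigma)])
    return (W.T @ E @ V, W.T @ A @ V + ct.T @ Dd @ bt,
            W.T @ B - ct.T @ Dd, C @ V - Dd @ bt)

sigma = np.array([1.0 + 0j, 2.0 + 0j])        # initial shifts (assumed)
bt = rng.standard_normal((m, r)).astype(complex)
ct = rng.standard_normal((p, r)).astype(complex)

for _ in range(30):                           # step 5 with a fixed count
    Et, At, Bt, Ct = build(sigma, bt, ct)
    lam, Z = np.linalg.eig(np.linalg.solve(Et, At))   # reduced poles
    Yt = np.linalg.inv(Et @ Z)                        # rows y_i^T, so Y^T Et Z = I
    sigma, bt, ct = -lam, (Yt @ Bt).T, Ct @ Z         # steps 5b-5c

Et, At, Bt, Ct = build(sigma, bt, ct)         # step 6
Gt = lambda s: Ct @ np.linalg.solve(s * Et - At, Bt) + Dt
# interpolation at the final shifts holds exactly by construction
print(np.abs(Gt(sigma[0]) @ bt[:, 0] - G(sigma[0]) @ bt[:, 0]).max())
```

Upon convergence the final shifts are the mirror images of the reduced poles, so the printed residual also certifies the $\mathcal{H}_2$ first-order conditions of Corollary 5.3 up to the convergence tolerance.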
Linearization around a steady-state solution and spatial discretization using a finite volume method lead to a semiexplicit descriptor system (5.1) of dimension $n$. For simplicity, we focus on the single-input single-output subsystem dynamics corresponding to the bleed actuation mass flow as the input and the average Mach number as the output. It is important to emphasize that applying balanced truncation to this model is far from trivial because of the difficulty of solving the Lyapunov equations. Instead, we apply the proposed method in Algorithm 5.2 to obtain an $\mathcal{H}_2$-optimal reduced model of order $r = 11$, where the only cost is sparse linear solves and the need for computing the spectral projectors is removed. As pointed out in [23], the frequencies of practical interest are the low-frequency components. Figure 5.1 shows the amplitude and phase plots of $G(\imath\omega)$ and $\widetilde G(\imath\omega)$ for $\omega\in[0,25]$, illustrating a very accurate match of the original model. The resulting model reduction errors are

Fig. 5.1. Supersonic inlet flow model: amplitude and phase Bode plots of $G(s)$ and $\widetilde G(s)$.

(5.11)  $\|G - \widetilde G\|_{\mathcal{H}_\infty}/\|G\|_{\mathcal{H}_\infty} =$  and  $\|G_{\mathrm{sp}} - \widetilde G_{\mathrm{sp}}\|_{\mathcal{H}_\infty}/\|G_{\mathrm{sp}}\|_{\mathcal{H}_\infty} =$

We also ran IRKA in its original form for ODEs, as introduced in [18]. The resulting reduced model is an ODE that does not match $\mathbf{P}(s)$, the polynomial part of $G(s)$, and thus leads to an unbounded $\mathcal{H}_2$ error. To remedy this situation, we explicitly computed $\mathbf{P}(s)$, which is a constant matrix for this example, and constructed a reduced model as in (5.10), enforcing the matching of the polynomial parts. As discussed above, this reduced model no longer satisfies the $\mathcal{H}_2$ optimality conditions. However, it still produced an effective approximation, as illustrated by the relative errors

(5.12)  $\|G - H\|_{\mathcal{H}_\infty}/\|G\|_{\mathcal{H}_\infty} =$  and  $\|G_{\mathrm{sp}} - H_{\mathrm{sp}}\|_{\mathcal{H}_\infty}/\|G_{\mathrm{sp}}\|_{\mathcal{H}_\infty} =$

where $H$ denotes the new reduced model and $H_{\mathrm{sp}}$ is its strictly proper part. These numbers are not surprising. For this example, the modification in (5.10) to enforce the matching of the polynomial part perturbed the strictly proper part of the reduced model only slightly. Thus the $\mathcal{H}_\infty$ errors in (5.11) and (5.12) are close to each other; indeed, the deviation is on the order of $10^{-5}$, close to the value of the polynomial part.

6. Stokes-type descriptor systems of index 2. In this section, we consider a Stokes-type descriptor system of the form

(6.1)  $E_{11}\dot{x}_1(t) = A_{11}x_1(t) + A_{12}x_2(t) + B_1u(t)$,
       $0 = A_{21}x_1(t) + B_2u(t)$,
       $y(t) = C_1x_1(t) + C_2x_2(t) + Du(t)$,

where the state is $x(t) = [x_1^T(t),\,x_2^T(t)]^T\in\mathbb{R}^n$ with $x_1(t)\in\mathbb{R}^{n_1}$, $x_2(t)\in\mathbb{R}^{n_2}$, and $n_1+n_2=n$; the input is $u(t)\in\mathbb{R}^m$; the output is $y(t)\in\mathbb{R}^p$; and $E_{11},A_{11}\in\mathbb{R}^{n_1\times n_1}$, $A_{12}\in\mathbb{R}^{n_1\times n_2}$, $A_{21}\in\mathbb{R}^{n_2\times n_1}$, $B_1\in\mathbb{R}^{n_1\times m}$, $B_2\in\mathbb{R}^{n_2\times m}$, $C_1\in\mathbb{R}^{p\times n_1}$, $C_2\in\mathbb{R}^{p\times n_2}$, and $D\in\mathbb{R}^{p\times m}$. We assume that $E_{11}$ is nonsingular, $A_{12}$ and $A_{21}^T$ both have full column rank, and $A_{21}E_{11}^{-1}A_{12}$ is nonsingular. In this case, system (6.1) is of index 2. In [20], the authors showed how to apply ADI-based balanced truncation to systems of the form (6.1) without explicit projector computation. Here, we extend this analysis to interpolatory model reduction and show how to reduce (6.1) optimally in the $\mathcal{H}_2$-norm without computing the deflating subspaces. Unlike [20], $E_{11}$ is not assumed to be symmetric and positive definite, and $A_{21}$ is not assumed to be equal to $A_{12}^T$.

First, consider system (6.1) with $B_2 = 0$; the case $B_2\neq 0$ follows similarly. Following the exposition of [20], consider the projectors
$$\Pi_l = I - A_{12}(A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}E_{11}^{-1},\qquad \Pi_r = I - E_{11}^{-1}A_{12}(A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}.$$
Then the descriptor system (6.1) can be decoupled into a system

(6.2)  $\Pi_lE_{11}\Pi_r\dot{x}_1(t) = \Pi_lA_{11}\Pi_rx_1(t) + \Pi_lB_1u(t)$,
       $y(t) = \mathcal{C}\Pi_rx_1(t) + \mathcal{D}u(t)$

with
$$\mathcal{C} = C_1 - C_2(A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}E_{11}^{-1}A_{11},\qquad \mathcal{D} = D - C_2(A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}E_{11}^{-1}B_1,$$
and an algebraic equation
$$x_2(t) = -(A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}E_{11}^{-1}A_{11}x_1(t) - (A_{21}E_{11}^{-1}A_{12})^{-1}A_{21}E_{11}^{-1}B_1u(t).$$
By decomposing $\Pi_l$ and $\Pi_r$ as

(6.3)  $\Pi_l = \Theta_{l,1}\Theta_{l,2}^T$, $\Pi_r = \Theta_{r,1}\Theta_{r,2}^T$

with $\Theta_{l,j},\Theta_{r,j}\in\mathbb{R}^{n_1\times(n_1-n_2)}$ such that

(6.4)  $\Theta_{l,2}^T\Theta_{l,1} = I$, $\Theta_{r,2}^T\Theta_{r,1} = I$,

and defining $\tilde x_1(t) = \Theta_{r,2}^Tx_1(t)$, system (6.2) becomes

(6.5)  $\Theta_{l,2}^TE_{11}\Theta_{r,1}\dot{\tilde x}_1(t) = \Theta_{l,2}^TA_{11}\Theta_{r,1}\tilde x_1(t) + \Theta_{l,2}^TB_1u(t)$,
       $y(t) = \mathcal{C}\Theta_{r,1}\tilde x_1(t) + \mathcal{D}u(t)$.

Then the reduction of the descriptor system (6.1) is equivalent to the reduction of system (6.2) or (6.5).
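The algebraic identities behind this decoupling are easy to confirm numerically. In the sketch below (random illustrative matrices, not from the paper's examples), $\Pi_l$ and $\Pi_r$ are oblique projectors, $\Pi_l$ annihilates $A_{12}$, $A_{21}$ annihilates $\Pi_r$, and $\Pi_lE_{11} = E_{11}\Pi_r$:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 7, 2
E11 = rng.standard_normal((n1, n1))
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))

Sinv = np.linalg.inv(A21 @ np.linalg.solve(E11, A12))   # (A21 E11^-1 A12)^-1
Pl = np.eye(n1) - A12 @ Sinv @ A21 @ np.linalg.inv(E11)
Pr = np.eye(n1) - np.linalg.solve(E11, A12) @ Sinv @ A21

# All five residuals sit at round-off level:
for name, M in [("Pl^2 - Pl", Pl @ Pl - Pl), ("Pr^2 - Pr", Pr @ Pr - Pr),
                ("Pl A12", Pl @ A12), ("A21 Pr", A21 @ Pr),
                ("Pl E11 - E11 Pr", Pl @ E11 - E11 @ Pr)]:
    print(name, np.linalg.norm(M))
```

The last identity, $\Pi_lE_{11} = E_{11}\Pi_r$, is what makes the matrix $\Theta_{l,2}^TE_{11}\Theta_{r,1}$ in (6.5) a well-defined nonsingular restriction of $E_{11}$.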
The beauty of this equivalence lies in the observation that the matrix $\Theta_{l,2}^TE_{11}\Theta_{r,1}$ is nonsingular. Therefore, standard model reduction procedures for ODEs can be applied to system (6.5), and the resulting reduced-order model will approximate the descriptor system (6.1). It is important to emphasize that even though (6.2) and (6.5) are equivalent to (6.1), the ultimate goal of this section is to develop an interpolatory model reduction method that does not require the explicit computation of either the projectors $\Pi_l$, $\Pi_r$ or the basis matrices $\Theta_{l,2}$, $\Theta_{r,1}$. For this purpose, define the matrices

(6.6)  $\mathcal{E} = \Pi_lE_{11}\Pi_r$, $\mathcal{A} = \Pi_lA_{11}\Pi_r$, $\mathcal{B} = \Pi_lB_1$, together with the output matrix $\mathcal{C}\Pi_r$.

In an interpolation setting, the matrix of interest will be $\sigma\mathcal{E} - \mathcal{A}$ with $\sigma\in\mathbb{C}$. Fortunately, several key properties of $\mathcal{E} + \tau\mathcal{A}$, $\tau\in\mathbb{C}$, were already established in [20]. We restate these results in terms of $\sigma\mathcal{E} - \mathcal{A}$ instead of $\mathcal{E} + \tau\mathcal{A}$.

Lemma 6.1. Let $\Theta_{l,2}$ and $\Theta_{r,1}$ be the matrices defined in (6.3) and let $\sigma\in\mathbb{C}$ be such that $\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1}$ is nonsingular. The matrix defined as

(6.7)  $(\sigma\mathcal{E} - \mathcal{A})^I := \Theta_{r,1}(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^T$

satisfies $(\sigma\mathcal{E} - \mathcal{A})^I(\sigma\mathcal{E} - \mathcal{A}) = \Pi_r$ and $(\sigma\mathcal{E} - \mathcal{A})(\sigma\mathcal{E} - \mathcal{A})^I = \Pi_l$. Similarly, the matrix defined as

(6.8)  $(\sigma\mathcal{E}^T - \mathcal{A}^T)^I := \Theta_{l,2}(\sigma\Theta_{r,1}^TE_{11}^T\Theta_{l,2} - \Theta_{r,1}^TA_{11}^T\Theta_{l,2})^{-1}\Theta_{r,1}^T$

satisfies $(\sigma\mathcal{E}^T - \mathcal{A}^T)^I(\sigma\mathcal{E}^T - \mathcal{A}^T) = \Pi_l^T$ and $(\sigma\mathcal{E}^T - \mathcal{A}^T)(\sigma\mathcal{E}^T - \mathcal{A}^T)^I = \Pi_r^T$.

Proof. Following an argument similar to that in [20], the first equality follows directly from (6.3) and (6.7). Indeed, we have
$$(\sigma\mathcal{E} - \mathcal{A})^I(\sigma\mathcal{E} - \mathcal{A}) = \Theta_{r,1}(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^T\Pi_l(\sigma E_{11} - A_{11})\Pi_r$$
$$= \Theta_{r,1}(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^T(\sigma E_{11} - A_{11})\Theta_{r,1}\Theta_{r,2}^T = \Theta_{r,1}\Theta_{r,2}^T = \Pi_r.$$
The remaining equalities follow similarly.

At first glance, the definition of the generalized inverses in (6.7) and (6.8) may seem irrelevant to model reduction of the descriptor system (6.1). Recall, however, that reducing (6.1) is equivalent to reducing system (6.2), and an interpolatory projection method for (6.2) would require inverting $\sigma\mathcal{E} - \mathcal{A}$ and $\sigma\mathcal{E}^T - \mathcal{A}^T$. These inverses do not exist. As a result, definitions (6.7) and (6.8) become pivotal for achieving interpolatory model reduction of (6.2) and, thereby, of (6.1), as shown in the next theorem.

Theorem 6.2. Let $s = \sigma,\mu\in\mathbb{C}$ be such that the matrices $s\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1}$ and $sW^TE_{11}V - W^TA_{11}V$ are invertible. Define the reduced-order model

(6.9)  $\widetilde G(s) = \mathcal{C}V(sW^TE_{11}V - W^TA_{11}V)^{-1}W^TB_1 + \mathcal{D}.$
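Lemma 6.1 can be verified directly. A convenient fact is that any full-rank factorization of a projector, $\Pi = \Theta_1\Theta_2^T$, automatically satisfies the biorthogonality (6.4), since $\Pi^2 = \Pi$ forces $\Theta_2^T\Theta_1 = I$; the factors can therefore be taken from a truncated SVD. A small sketch with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, sig = 7, 2, 1.3

E11 = rng.standard_normal((n1, n1)); A11 = rng.standard_normal((n1, n1))
A12 = rng.standard_normal((n1, n2)); A21 = rng.standard_normal((n2, n1))
Sinv = np.linalg.inv(A21 @ np.linalg.solve(E11, A12))
Pl = np.eye(n1) - A12 @ Sinv @ A21 @ np.linalg.inv(E11)
Pr = np.eye(n1) - np.linalg.solve(E11, A12) @ Sinv @ A21

def rank_factors(P, k):
    # P = T1 @ T2.T with T2.T @ T1 = I, taken from a truncated SVD of the projector
    U, s, Vh = np.linalg.svd(P)
    return U[:, :k] * s[:k], Vh[:k, :].T

k = n1 - n2
Tl1, Tl2 = rank_factors(Pl, k)
Tr1, Tr2 = rank_factors(Pr, k)

Ecal = Pl @ E11 @ Pr; Acal = Pl @ A11 @ Pr            # (6.6); both singular
Ginv = Tr1 @ np.linalg.solve(sig * Tl2.T @ E11 @ Tr1
                             - Tl2.T @ A11 @ Tr1, Tl2.T)   # (6.7)

print(np.linalg.norm(Ginv @ (sig * Ecal - Acal) - Pr))     # ~ round-off
print(np.linalg.norm((sig * Ecal - Acal) @ Ginv - Pl))     # ~ round-off
```

So $(\sigma\mathcal{E}-\mathcal{A})^I$ acts as a two-sided inverse up to the projectors $\Pi_r$ and $\Pi_l$, exactly as the lemma states.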
Let $b\in\mathbb{C}^m$ and $c\in\mathbb{C}^p$ be fixed nontrivial vectors.
1. If $(\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b\in\mathrm{Im}(V)\subseteq\mathrm{Im}(\Theta_{r,1})$ and $(\mu\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc\in\mathrm{Im}(W)\subseteq\mathrm{Im}(\Theta_{l,2})$, then $\widetilde G(\sigma)b = G(\sigma)b$ and $c^T\widetilde G(\mu) = c^TG(\mu)$.
2. If, in addition, $\sigma = \mu$, then $c^T\widetilde G'(\sigma)b = c^TG'(\sigma)b$.

Remark 6.1. Before presenting the proof, we want to emphasize that this interpolation result differs from the usual interpolation framework of Theorem 2.1, where the projection matrices $V$ and $W$ are constructed using $A$, $E$, $B$, and $C$ and

then the projection is applied to these same quantities. In Theorem 6.2, however, the projection matrices $V$ and $W$ are constructed using the system matrices of (6.2), namely, $\mathcal{A}$, $\mathcal{E}$, $\mathcal{B}$, and $\mathcal{C}$, but the projection (model reduction) is then applied to the system matrices of (6.1), namely, $E_{11}$, $A_{11}$, $B_1$, and $\mathcal{C}$. The proof serves to fill in this important gap.

Proof. Since systems (6.1) and (6.5) are equivalent, they have the same transfer function, given by
$$G(s) = \mathcal{C}\Theta_{r,1}(s\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^TB_1 + \mathcal{D}.$$
Since $\Theta_{l,2}^TE_{11}\Theta_{r,1}$ in (6.5) is nonsingular, we may apply Theorem 2.1. Define $\widetilde V$ and $\widetilde W$ such that

(6.10)  $V = \Theta_{r,1}\widetilde V$ and $W = \Theta_{l,2}\widetilde W$.

Plugging these matrices into (6.9), we obtain
$$\widetilde G(s) = \mathcal{C}\Theta_{r,1}\widetilde V(s\widetilde W^T\Theta_{l,2}^TE_{11}\Theta_{r,1}\widetilde V - \widetilde W^T\Theta_{l,2}^TA_{11}\Theta_{r,1}\widetilde V)^{-1}\widetilde W^T\Theta_{l,2}^TB_1 + \mathcal{D}.$$
Moreover, it follows from (6.4) that $\widetilde V = \Theta_{r,2}^TV$ and $\widetilde W = \Theta_{l,1}^TW$. To prove the first claim in part 1, we note that (6.3) implies

(6.11)  $\Theta_{l,2}^T\mathcal{B} = \Theta_{l,2}^T\Pi_lB_1 = \Theta_{l,2}^T\Theta_{l,1}\Theta_{l,2}^TB_1 = \Theta_{l,2}^TB_1.$

Since $(\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b\in\mathrm{Im}(V)$, there exists $q\in\mathbb{C}^r$ such that $(\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b = Vq$. Using (6.7), (6.10), and (6.11), this equation can be written as
$$\Theta_{r,1}(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^TB_1b = \Theta_{r,1}\widetilde Vq.$$
Left multiplication by $\Theta_{r,2}^T$ gives
$$(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^TB_1b = \widetilde Vq.$$
Hence, $(\sigma\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1})^{-1}\Theta_{l,2}^TB_1b\in\mathrm{Im}(\widetilde V)$. It then follows from Theorem 2.1 that $\widetilde G(\sigma)b = G(\sigma)b$. The equality $c^T\widetilde G(\mu) = c^TG(\mu)$ can be obtained similarly. The proof of part 2 follows from part 3 of Theorem 2.1.
It should be noted that the conditions $\mathrm{Im}(V)\subseteq\mathrm{Im}(\Theta_{r,1})$ and $\mathrm{Im}(W)\subseteq\mathrm{Im}(\Theta_{l,2})$ in part 1 of Theorem 6.2 are automatically fulfilled if, for given interpolation points $\{\sigma_i\}_{i=1}^r$, $\{\mu_i\}_{i=1}^r$ and tangential directions $\{b_i\}_{i=1}^r$, $\{c_i\}_{i=1}^r$, we choose
$$\mathrm{Im}(V) = \mathrm{span}\{(\sigma_1\mathcal{E} - \mathcal{A})^I\mathcal{B}b_1,\ \ldots,\ (\sigma_r\mathcal{E} - \mathcal{A})^I\mathcal{B}b_r\},$$
$$\mathrm{Im}(W) = \mathrm{span}\{(\mu_1\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc_1,\ \ldots,\ (\mu_r\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc_r\}.$$

6.1. Computational issues related to the reduction of index-2 descriptor systems. Even though Theorem 6.2 shows how to enforce interpolation for the descriptor system (6.1), the spectral projectors are still implicitly hidden in the definitions of $(\sigma\mathcal{E} - \mathcal{A})^I$ and $(\sigma\mathcal{E}^T - \mathcal{A}^T)^I$. It was shown in [20] how to compute the matrix-vector product $(\mathcal{E} + \tau\mathcal{A})^If$ for a given vector $f$ without explicitly forming $(\mathcal{E} + \tau\mathcal{A})^I$. This approach can also be used in interpolatory model reduction, where the quantities of interest are $(\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b$ and $(\mu\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc$. The proof of the following result is analogous to those in [20] and is therefore omitted.

Lemma 6.3. Let $s = \sigma,\mu$ be such that $s\Theta_{l,2}^TE_{11}\Theta_{r,1} - \Theta_{l,2}^TA_{11}\Theta_{r,1}$ is invertible. Then the vector

(6.12)  $v = (\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b$

solves

(6.13)  $\begin{bmatrix}\sigma E_{11}-A_{11} & -A_{12}\\ -A_{21} & 0\end{bmatrix}\begin{bmatrix}v\\ z\end{bmatrix} = \begin{bmatrix}B_1b\\ 0\end{bmatrix},$

and the vector

(6.14)  $w = (\mu\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc$

solves

(6.15)  $\begin{bmatrix}\mu E_{11}^T-A_{11}^T & -A_{21}^T\\ -A_{12}^T & 0\end{bmatrix}\begin{bmatrix}w\\ q\end{bmatrix} = \begin{bmatrix}\mathcal{C}^Tc\\ 0\end{bmatrix}.$

From the computational perspective of implementing Theorem 6.2, the importance of this result is clear. To achieve interpolation, Theorem 6.2 relies on computing the quantities $(\sigma\mathcal{E} - \mathcal{A})^I\mathcal{B}b$ and $(\mu\mathcal{E}^T - \mathcal{A}^T)^I\mathcal{C}^Tc$, both of which involve the basis matrices $\Theta_{l,2}$ and $\Theta_{r,1}$. Lemma 6.3 shows that computing these basis matrices is unnecessary; only the linear systems (6.13) and (6.15) need to be solved. This observation leads to Algorithm 6.1 for interpolatory model reduction of Stokes-type descriptor systems of index 2.

Algorithm 6.1. Interpolatory model reduction for Stokes-type descriptor systems of index 2.
1. Make an initial selection of the interpolation points $\{\sigma_i\}_{i=1}^r$ and the tangential directions $\{b_i\}_{i=1}^r$ and $\{c_i\}_{i=1}^r$.
2. For $i = 1,\ldots,r$, solve
$$\begin{bmatrix}\sigma_iE_{11}-A_{11} & -A_{12}\\ -A_{21} & 0\end{bmatrix}\begin{bmatrix}v_i\\ z\end{bmatrix} = \begin{bmatrix}B_1b_i\\ 0\end{bmatrix},\qquad \begin{bmatrix}\sigma_iE_{11}^T-A_{11}^T & -A_{21}^T\\ -A_{12}^T & 0\end{bmatrix}\begin{bmatrix}w_i\\ q\end{bmatrix} = \begin{bmatrix}\mathcal{C}^Tc_i\\ 0\end{bmatrix}.$$
3. $V = [v_1,\ldots,v_r]$, $W = [w_1,\ldots,w_r]$.
4. $\widetilde E = W^TE_{11}V$, $\widetilde A = W^TA_{11}V$, $\widetilde B = W^TB_1$, $\widetilde C = \mathcal{C}V$, $\widetilde D = \mathcal{D}$.

Once a computationally effective bi-tangential Hermite interpolation framework is established for index-2 descriptor systems, extending it to optimal $\mathcal{H}_2$ model reduction via IRKA is straightforward; the resulting method is given in Algorithm 6.2. It follows from the structure of this algorithm that, upon convergence, $\widetilde G(s) = \widetilde C(s\widetilde E - \widetilde A)^{-1}\widetilde B + \widetilde D$ satisfies the first-order conditions for $\mathcal{H}_2$ optimality.

Remark 6.2. As shown in [20], the general case $B_2\neq 0$ can be handled similarly to the case $B_2 = 0$. First note that the state $x_1(t)$ can be decomposed as $x_1(t) = x_0(t) + x_g(t)$, where $x_g(t) = -E_{11}^{-1}A_{12}(A_{21}E_{11}^{-1}A_{12})^{-1}B_2u(t)$ and $x_0(t)$ satisfies $A_{21}x_0(t) = 0$.
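Algorithm 6.1 is sketched below in NumPy on a small randomly generated Stokes-type system with $B_2 = 0$ (illustrative data; dense solves stand in for the sparse saddle-point solves one would use at scale). The reduced model obtained from the two augmented solves tangentially interpolates the full DAE transfer function without any projector ever being formed:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, m, p, r = 8, 2, 2, 2, 2

E11 = rng.standard_normal((n1, n1)); A11 = rng.standard_normal((n1, n1))
A12 = rng.standard_normal((n1, n2)); A21 = rng.standard_normal((n2, n1))
B1 = rng.standard_normal((n1, m));   C1 = rng.standard_normal((p, n1))
C2 = rng.standard_normal((p, n2));   D = rng.standard_normal((p, m))

# Full index-2 DAE (6.1) with B2 = 0, used only for reference evaluations of G(s)
E = np.block([[E11, np.zeros((n1, n2))], [np.zeros((n2, n1)), np.zeros((n2, n2))]])
A = np.block([[A11, A12], [A21, np.zeros((n2, n2))]])
B = np.vstack([B1, np.zeros((n2, m))]); C = np.hstack([C1, C2])
G = lambda s: C @ np.linalg.solve(s * E - A, B) + D

# Modified output matrices of (6.2)
Sinv = np.linalg.inv(A21 @ np.linalg.solve(E11, A12))
Ccal = C1 - C2 @ Sinv @ A21 @ np.linalg.solve(E11, A11)
Dcal = D - C2 @ Sinv @ A21 @ np.linalg.solve(E11, B1)

sigma = np.array([0.9, 2.1])                 # interpolation points (assumed)
bt = rng.standard_normal((m, r)); ct = rng.standard_normal((p, r))

# Step 2: saddle-point solves (6.13) and (6.15); the v/w blocks are the columns
Vcols, Wcols, Z2 = [], [], np.zeros((n2, n2))
for i, s in enumerate(sigma):
    K = np.block([[s * E11 - A11, -A12], [-A21, Z2]])
    v = np.linalg.solve(K, np.concatenate([B1 @ bt[:, i], np.zeros(n2)]))[:n1]
    Kt = np.block([[s * E11.T - A11.T, -A21.T], [-A12.T, Z2]])
    w = np.linalg.solve(Kt, np.concatenate([Ccal.T @ ct[:, i], np.zeros(n2)]))[:n1]
    Vcols.append(v); Wcols.append(w)
V, W = np.column_stack(Vcols), np.column_stack(Wcols)

# Steps 3-4: projected reduced model
Et, At = W.T @ E11 @ V, W.T @ A11 @ V
Bt, Ct, Dt = W.T @ B1, Ccal @ V, Dcal
Gt = lambda s: Ct @ np.linalg.solve(s * Et - At, Bt) + Dt

for i, s in enumerate(sigma):   # tangential interpolation of the full DAE
    print(np.linalg.norm(Gt(s) @ bt[:, i] - G(s) @ bt[:, i]),
          np.linalg.norm(ct[:, i] @ Gt(s) - ct[:, i] @ G(s)))
```

Note that each saddle-point matrix is exactly $\sigma_iE - A$ of the full pencil, so the only work per interpolation point is one sparse solve with the system and one with its transpose.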
After some algebraic manipulations, this leads to

(6.16)  $\Pi_lE_{11}\Pi_r\dot{x}_0(t) = \Pi_lA_{11}\Pi_rx_0(t) + \Pi_l\mathbf{B}u(t)$,
        $y(t) = \mathcal{C}\Pi_rx_0(t) + \mathcal{D}u(t) - C_2(A_{21}E_{11}^{-1}A_{12})^{-1}B_2u(t)$,

with a suitably modified input matrix $\mathbf{B}$.


More information

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules

State Regulator. Advanced Control. design of controllers using pole placement and LQ design rules Advanced Control State Regulator Scope design of controllers using pole placement and LQ design rules Keywords pole placement, optimal control, LQ regulator, weighting matrixes Prerequisites Contact state

More information

Computing Transfer Function Dominant Poles of Large Second-Order Systems

Computing Transfer Function Dominant Poles of Large Second-Order Systems Computing Transfer Function Dominant Poles of Large Second-Order Systems Joost Rommes Mathematical Institute Utrecht University rommes@math.uu.nl http://www.math.uu.nl/people/rommes joint work with Nelson

More information

An Interpolation-Based Approach to the Optimal H Model Reduction

An Interpolation-Based Approach to the Optimal H Model Reduction An Interpolation-Based Approach to the Optimal H Model Reduction Garret M. Flagg Dissertation submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment

More information

Large-scale and infinite dimensional dynamical systems approximation

Large-scale and infinite dimensional dynamical systems approximation Large-scale and infinite dimensional dynamical systems approximation Igor PONTES DUFF PEREIRA Doctorant 3ème année - ONERA/DCSD Directeur de thèse: Charles POUSSOT-VASSAL Co-encadrant: Cédric SEREN 1 What

More information

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7)

ẋ n = f n (x 1,...,x n,u 1,...,u m ) (5) y 1 = g 1 (x 1,...,x n,u 1,...,u m ) (6) y p = g p (x 1,...,x n,u 1,...,u m ) (7) EEE582 Topical Outline A.A. Rodriguez Fall 2007 GWC 352, 965-3712 The following represents a detailed topical outline of the course. It attempts to highlight most of the key concepts to be covered and

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction

CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction CME 345: MODEL REDUCTION - Projection-Based Model Order Reduction Projection-Based Model Order Reduction Charbel Farhat and David Amsallem Stanford University cfarhat@stanford.edu 1 / 38 Outline 1 Solution

More information

21 Linear State-Space Representations

21 Linear State-Space Representations ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may

More information

Chris Beattie and Serkan Gugercin

Chris Beattie and Serkan Gugercin Krylov-based model reduction of second-order systems with proportional damping Chris Beattie and Serkan Gugercin Department of Mathematics, Virginia Tech, USA 44th IEEE Conference on Decision and Control

More information

Math 396. An application of Gram-Schmidt to prove connectedness

Math 396. An application of Gram-Schmidt to prove connectedness Math 396. An application of Gram-Schmidt to prove connectedness 1. Motivation and background Let V be an n-dimensional vector space over R, and define GL(V ) to be the set of invertible linear maps V V

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Control Systems Design

Control Systems Design ELEC4410 Control Systems Design Lecture 13: Stability Julio H. Braslavsky julio@ee.newcastle.edu.au School of Electrical Engineering and Computer Science Lecture 13: Stability p.1/20 Outline Input-Output

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS

MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS MODEL REDUCTION BY A CROSS-GRAMIAN APPROACH FOR DATA-SPARSE SYSTEMS Ulrike Baur joint work with Peter Benner Mathematics in Industry and Technology Faculty of Mathematics Chemnitz University of Technology

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

Linear Systems. Linear systems?!? (Roughly) Systems which obey properties of superposition Input u(t) output

Linear Systems. Linear systems?!? (Roughly) Systems which obey properties of superposition Input u(t) output Linear Systems Linear systems?!? (Roughly) Systems which obey properties of superposition Input u(t) output Our interest is in dynamic systems Dynamic system means a system with memory of course including

More information

Control Systems. Frequency domain analysis. L. Lanari

Control Systems. Frequency domain analysis. L. Lanari Control Systems m i l e r p r a in r e v y n is o Frequency domain analysis L. Lanari outline introduce the Laplace unilateral transform define its properties show its advantages in turning ODEs to algebraic

More information

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini Adaptive rational Krylov subspaces for large-scale dynamical systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria@dm.unibo.it joint work with Vladimir Druskin, Schlumberger Doll

More information

Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents

Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents Max Planck Institute Magdeburg Preprints Petar Mlinarić Sara Grundel Peter Benner Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents MAX PLANCK INSTITUT

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form

Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form Passive Interconnect Macromodeling Via Balanced Truncation of Linear Systems in Descriptor Form Boyuan Yan, Sheldon X.-D. Tan,PuLiu and Bruce McGaughy Department of Electrical Engineering, University of

More information

H 2 optimal model approximation by structured time-delay reduced order models

H 2 optimal model approximation by structured time-delay reduced order models H 2 optimal model approximation by structured time-delay reduced order models I. Pontes Duff, Charles Poussot-Vassal, C. Seren 27th September, GT MOSAR and GT SAR meeting 200 150 100 50 0 50 100 150 200

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005 Seville, Spain, December 12-15, 2005 WeC10.4 An iterative SVD-Krylov based method for model reduction

More information

Model Order Reduction via Matlab Parallel Computing Toolbox. Istanbul Technical University

Model Order Reduction via Matlab Parallel Computing Toolbox. Istanbul Technical University Model Order Reduction via Matlab Parallel Computing Toolbox E. Fatih Yetkin & Hasan Dağ Istanbul Technical University Computational Science & Engineering Department September 21, 2009 E. Fatih Yetkin (Istanbul

More information

CONTROL DESIGN FOR SET POINT TRACKING

CONTROL DESIGN FOR SET POINT TRACKING Chapter 5 CONTROL DESIGN FOR SET POINT TRACKING In this chapter, we extend the pole placement, observer-based output feedback design to solve tracking problems. By tracking we mean that the output is commanded

More information

8 A pseudo-spectral solution to the Stokes Problem

8 A pseudo-spectral solution to the Stokes Problem 8 A pseudo-spectral solution to the Stokes Problem 8.1 The Method 8.1.1 Generalities We are interested in setting up a pseudo-spectral method for the following Stokes Problem u σu p = f in Ω u = 0 in Ω,

More information

Peter C. Müller. Introduction. - and the x 2 - subsystems are called the slow and the fast subsystem, correspondingly, cf. (Dai, 1989).

Peter C. Müller. Introduction. - and the x 2 - subsystems are called the slow and the fast subsystem, correspondingly, cf. (Dai, 1989). Peter C. Müller mueller@srm.uni-wuppertal.de Safety Control Engineering University of Wuppertal D-497 Wupperta. Germany Modified Lyapunov Equations for LI Descriptor Systems or linear time-invariant (LI)

More information

Gramians based model reduction for hybrid switched systems

Gramians based model reduction for hybrid switched systems Gramians based model reduction for hybrid switched systems Y. Chahlaoui Younes.Chahlaoui@manchester.ac.uk Centre for Interdisciplinary Computational and Dynamical Analysis (CICADA) School of Mathematics

More information

Problem Set 5 Solutions 1

Problem Set 5 Solutions 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Problem Set 5 Solutions The problem set deals with Hankel

More information

Controllability, Observability, Full State Feedback, Observer Based Control

Controllability, Observability, Full State Feedback, Observer Based Control Multivariable Control Lecture 4 Controllability, Observability, Full State Feedback, Observer Based Control John T. Wen September 13, 24 Ref: 3.2-3.4 of Text Controllability ẋ = Ax + Bu; x() = x. At time

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

arxiv: v1 [cs.sy] 17 Nov 2015

arxiv: v1 [cs.sy] 17 Nov 2015 Optimal H model approximation based on multiple input/output delays systems I. Pontes Duff a, C. Poussot-Vassal b, C. Seren b a ISAE SUPAERO & ONERA b ONERA - The French Aerospace Lab, F-31055 Toulouse,

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

CSE 206A: Lattice Algorithms and Applications Winter The dual lattice. Instructor: Daniele Micciancio

CSE 206A: Lattice Algorithms and Applications Winter The dual lattice. Instructor: Daniele Micciancio CSE 206A: Lattice Algorithms and Applications Winter 2016 The dual lattice Instructor: Daniele Micciancio UCSD CSE 1 Dual Lattice and Dual Basis Definition 1 The dual of a lattice Λ is the set ˆΛ of all

More information

Structure preserving Krylov-subspace methods for Lyapunov equations

Structure preserving Krylov-subspace methods for Lyapunov equations Structure preserving Krylov-subspace methods for Lyapunov equations Matthias Bollhöfer, André Eppler Institute Computational Mathematics TU Braunschweig MoRePas Workshop, Münster September 17, 2009 System

More information

Stability and Inertia Theorems for Generalized Lyapunov Equations

Stability and Inertia Theorems for Generalized Lyapunov Equations Published in Linear Algebra and its Applications, 355(1-3, 2002, pp. 297-314. Stability and Inertia Theorems for Generalized Lyapunov Equations Tatjana Stykel Abstract We study generalized Lyapunov equations

More information

Subdiagonal pivot structures and associated canonical forms under state isometries

Subdiagonal pivot structures and associated canonical forms under state isometries Preprints of the 15th IFAC Symposium on System Identification Saint-Malo, France, July 6-8, 29 Subdiagonal pivot structures and associated canonical forms under state isometries Bernard Hanzon Martine

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Parametric Model Order Reduction for Linear Control Systems

Parametric Model Order Reduction for Linear Control Systems Parametric Model Order Reduction for Linear Control Systems Peter Benner HRZZ Project Control of Dynamical Systems (ConDys) second project meeting Zagreb, 2 3 November 2017 Outline 1. Introduction 2. PMOR

More information

The Cyclic Decomposition of a Nilpotent Operator

The Cyclic Decomposition of a Nilpotent Operator The Cyclic Decomposition of a Nilpotent Operator 1 Introduction. J.H. Shapiro Suppose T is a linear transformation on a vector space V. Recall Exercise #3 of Chapter 8 of our text, which we restate here

More information

6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski. Solutions to Problem Set 1 1. Massachusetts Institute of Technology

6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski. Solutions to Problem Set 1 1. Massachusetts Institute of Technology Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Solutions to Problem Set 1 1 Problem 1.1T Consider the

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

3 Gramians and Balanced Realizations

3 Gramians and Balanced Realizations 3 Gramians and Balanced Realizations In this lecture, we use an optimization approach to find suitable realizations for truncation and singular perturbation of G. It turns out that the recommended realizations

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

Domain decomposition on different levels of the Jacobi-Davidson method

Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES by HEONJONG YOO A thesis submitted to the Graduate School-New Brunswick Rutgers, The State University of New Jersey In partial fulfillment of the

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information