Realization-independent H2-approximation
Christopher Beattie and Serkan Gugercin

Abstract

The Iterative Rational Krylov Algorithm (IRKA) of [11] is an effective tool for tackling the H2-optimal model reduction problem. However, so far it has relied on a first-order state-space realization of the model-to-be-reduced. In this paper, by exploiting the Loewner-matrix approach for interpolation, we develop a new formulation of IRKA that requires only transfer function evaluations, without access to any particular realization. This, in turn, extends IRKA to H2 approximation of irrational, infinite-dimensional dynamical systems. We also introduce a residue-correction step in IRKA that adjusts the vector residues to minimize the H2 error at the end of each cycle, using a new set of necessary and sufficient conditions for H2 optimality. This new step further improves the convergence speed and performance of IRKA. Three numerical examples illustrate the effectiveness of the proposed methods.

I. INTRODUCTION

Dynamical systems are the basic framework for modeling and control of a wide variety of complex systems having scientific interest and industrial value. Examples include signal propagation and interference in electric circuits, heat transfer and temperature control in various media, and the behavior of micro-electro-mechanical systems. For a collection of such examples, we refer the reader to [1] and [16]. Direct numerical simulation of the associated models has been one of the few available means for studying the complex underlying physical phenomena. However, the ever-present need for improved accuracy drives the inclusion of ever more detail in the modeling stage, leading inevitably to ever larger-scale, ever more complex dynamical systems. Simulations in such large-scale settings often place unmanageably large demands on computational resources, which is the main motivation for model reduction.
Simply stated, the goal is to produce a relatively low-order system approximation whose input/output behavior is very close to that of the original. We consider stable multiple-input/multiple-output (MIMO) linear dynamical systems described via transfer functions, H(s), that are assumed to be meromorphic functions with poles in the open left half-plane (hence analytic in the open right half-plane). We have particular interest in cases where the order of H(s) (the total number of poles) is very large, perhaps on the order of 10^5 or more. The methods we develop here require only the ability to evaluate H(s) at selected points s ∈ C. Notably, we do not require access to any particular realization of H(s). We assume H(s) to be an H2-function, where H2 denotes the set of p × m matrix-valued functions, H(s), with components, h_ij(s), that are analytic for s in the open right half-plane, Re(s) > 0, and such that for each fixed Re(s) = x > 0, h_ij(x + ıy) is square integrable as a function of y ∈ (−∞, ∞), in such a way that sup_{x>0} ∫ |h_ij(x + ıy)|² dy < ∞.

[This work was supported in part by the NSF Grant DMS. C. Beattie and S. Gugercin are affiliated with the Department of Mathematics, Virginia Tech, Blacksburg, VA, USA. {beattie, gugercin}@math.vt.edu]

H2 is a Hilbert space. Indeed, if G(s) and H(s) are H2-functions, then the H2 inner product can be defined as

⟨G, H⟩_{H2} := (1/2π) ∫_{−∞}^{∞} trace( G(ıω) (H(ıω))̄^T ) dω,   (1)

with an associated norm defined as

‖H‖_{H2} := ( (1/2π) ∫_{−∞}^{+∞} ‖H(ıω)‖_F² dω )^{1/2},   (2)

where ‖M‖_F² = ⟨M, M⟩_F and ⟨M, N⟩_F = trace( M N̄^T ) define the Frobenius norm and Frobenius inner product, respectively. Notice that if G(s) and H(s) represent real dynamical systems, then ⟨G, H⟩_{H2} = ⟨H, G⟩_{H2} and ⟨G, H⟩_{H2} itself must be real. Our goal is to find, for a given order r, a reduced model described by

H_r(s) := C_r (s E_r − A_r)^{−1} B_r   (3)

with A_r, E_r ∈ R^{r×r}, B_r ∈ R^{r×m}, and C_r ∈ R^{p×r}, such that H_r(s) is an optimal H2 approximation to H(s):

‖H − H_r‖_{H2} = min over stable G_r of order r of ‖H − G_r‖_{H2}.   (4)

Since finding global minimizers can be very difficult, the more modest goal of finding local minimizers is more usual. Optimal approximation in this sense has been investigated extensively; see, for instance, [26], [23], [14], [15], [25], [27] for Lyapunov-based methods and [10], [11], [20], [4], [6], [8], [17], [3], [24] for interpolation-based methods. These methods, however, require access to a standard first-order realization for H(s), a drawback we will overcome in this paper. Although the Lyapunov and interpolation frameworks are theoretically equivalent [11], we focus on the latter as it presents significant computational advantages. Interpolatory optimality conditions for systems with a single scalar-valued input and a single scalar-valued output (SISO systems) were introduced by Meier and Luenberger [20] and have recently been extended to MIMO systems by [11], [6], [24]. In [11] we introduced an efficient interpolatory model reduction algorithm, the Iterative Rational Krylov Algorithm (IRKA), that generates a reduced model satisfying first-order optimality conditions. This method
has proved remarkably effective in many diverse problems. For many small-order problems used as model reduction benchmarks, it will typically find the global optimum among numerous local minimizers (though this behavior is not guaranteed). For modest-order examples, IRKA consistently yields a better reduced order approximation than does balanced truncation. This comparison is significant since balanced truncation will yield reduced order models having small and often near-optimal error [], [5]; generally, balanced truncation is viewed as the gold standard for model reduction. More importantly, due to the numerical efficiency of IRKA, it has been successfully applied to systems having hundreds of thousands of degrees of freedom; see, for example, [5] for an application in energy-efficient building design where IRKA is applied to construct a 30th-order optimal reduced model for a system with dimension in the hundreds of thousands.

The purpose of this paper is threefold. First, we develop an implementation of IRKA that requires only transfer function evaluations, without access to any particular realization. This allows, in turn, extending optimal H2 model reduction to irrational, infinite-dimensional dynamical systems. Second, by incorporating an easy-to-compute least-squares minimization step in the implementation of IRKA, we further improve the convergence speed of IRKA, especially for cases where the input and output dimensions are large. Finally, this least-squares minimization step leads to a new set of necessary and sufficient conditions for a constrained H2 optimization problem.

We restrict ourselves to reduced models having simple poles, which we label λ_1, λ_2, ..., λ_r. The corresponding residue of H_r(s) at each λ_i is matrix-valued and rank one: res[H_r(s), λ_i] = c_i b_i^T, for nontrivial c_i and b_i.
H_r(s) can be represented directly in terms of its poles and residues:

H_r(s) = Σ_{k=1}^{r} c_k b_k^T / (s − λ_k).   (5)

For systems of this form we have

Theorem 1: [4] Suppose that H_r(s) has the form given in (5) and that both H_r(s) and H(s) are transfer functions associated with real dynamical systems. Then

⟨H, H_r⟩_{H2} = Σ_{k=1}^{r} trace( res[H(−s) H_r(s)^T, λ_k] ) = Σ_{k=1}^{r} c_k^T H(−λ_k) b_k   (6)

and

‖H − H_r‖²_{H2} = ‖H‖²_{H2} − 2 Σ_{k=1}^{r} c_k^T H(−λ_k) b_k + Σ_{k,l=1}^{r} (c_k^T c_l)(b_l^T b_k) / (−λ_k − λ_l).   (7)

For H_r(s) as in (5), one can also write the first-order optimality conditions directly in terms of the poles and residues of H_r(s).

Theorem 2: [11] Let H_r(s) in (5) be the best rth-order approximation of H(s) with respect to the H2 norm. Then

H(−λ_k) b_k = H_r(−λ_k) b_k,   c_k^T H(−λ_k) = c_k^T H_r(−λ_k),   (8)

and c_k^T H′(−λ_k) b_k = c_k^T H_r′(−λ_k) b_k, for each k = 1, 2, ..., r.

Theorem 2 states that if H_r(s) is a solution to the optimal-H2 approximation problem (4), then it is a bitangential Hermite interpolant to H(s) at the mirror images of its own (reduced-order) poles, along directions determined by its (vector) residues. The rest of the paper is organized as follows. In Section II, we introduce TF-IRKA, an optimal H2 model reduction method which uses only transfer function evaluations and can be applied without access to any specific realization of H(s). In Section III, we incorporate a residue correction step in TF-IRKA, which re-assigns residue directions at each step of TF-IRKA so as to minimize the H2 error for the current reduced-order poles. In both sections, numerical examples illustrate the effectiveness of the proposed methods.

II. OPTIMAL H2 MODEL REDUCTION FROM TRANSFER FUNCTION EVALUATIONS

IRKA [11] has provided an efficient way of constructing reduced order models satisfying the first-order H2-optimality conditions. However, so far, it has been applied only in the special case of first-order realizations, i.e. H(s) = C(sE − A)^{−1}B.
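The pole/residue expressions of Theorem 1 make H2 quantities computable from finitely many samples. As a quick numerical illustration (our own sketch; the helper name and the toy data are assumptions, not part of the paper), the squared H2 norm of H_r itself is exactly the double sum appearing as the last term of (7):

```python
import numpy as np

def h2_norm_sq(lam, c, b):
    """||H_r||_{H2}^2 for H_r(s) = sum_k c_k b_k^T/(s - lam_k), via the
    pole/residue double sum in (7); real poles and residues assumed."""
    r = len(lam)
    return sum((c[k] @ c[l]) * (b[l] @ b[k]) / (-lam[k] - lam[l])
               for k in range(r) for l in range(r))

# Single pole at -1 with unit residue: ||1/(s+1)||^2_{H2} = 1/2.
print(h2_norm_sq([-1.0], [np.array([1.0])], [np.array([1.0])]))  # → 0.5
```

The value can be cross-checked against direct quadrature of the defining integral (2) along s = ıω, which converges to the same number.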
In this section, we will extend optimal H2 model reduction by IRKA to the approximation of any H2 transfer function H(s), without any specific constraint on its structure. H(s) can correspond to an infinite-dimensional (irrational) transfer function, can contain delays, or can correspond to a second-order dynamical system H(s) = C(s²M + sG + K)^{−1}B. The vital point toward this goal is to understand what IRKA [11] performs in its intermediate steps in terms of the transfer function. Recall the first-order conditions for H2-optimality in (8): H_r(s) should be a bitangential Hermite interpolant to H(s) at the mirror images of the poles of H_r(s). For the first-order case H(s) = C(sE − A)^{−1}B, IRKA uses projection-based interpolatory model reduction to construct such an H_r(s) at every step; for details, see [11]. Thus, the goal is to be able to construct an H_r(s) that is a Hermite bitangential interpolant to an H(s) which does not necessarily correspond to a first-order state-space model. The Loewner-matrix framework for interpolation by Mayo and Antoulas [19] is the perfect tool for this goal.

A. Loewner-matrix approach for interpolation [19]

Given H(s), interpolation points {s_1, ..., s_r}, and tangential directions {b_1, ..., b_r} and {c_1, ..., c_r}, we would like to construct a reduced model H_r(s) = C_r(sE_r − A_r)^{−1}B_r
that satisfies, for k = 1, 2, ..., r,

H(s_k) b_k = H_r(s_k) b_k,   c_k^T H(s_k) = c_k^T H_r(s_k),   (9)

and c_k^T H′(s_k) b_k = c_k^T H_r′(s_k) b_k.

The framework of [19] requires only evaluations of H(s) and H′(s) at the points s_k, without any constraint on the structure of H(s): construct

(E_r)_{i,j} := −c_i^T (H(s_i) − H(s_j)) b_j / (s_i − s_j)  if i ≠ j,   and   −c_i^T H′(s_i) b_i  if i = j;   (10)

(A_r)_{i,j} := −c_i^T (s_i H(s_i) − s_j H(s_j)) b_j / (s_i − s_j)  if i ≠ j,   and   −c_i^T [sH(s)]′|_{s=s_i} b_i  if i = j;   (11)

and

C_r = [H(s_1)b_1, ..., H(s_r)b_r],   B_r = [c_1^T H(s_1); ...; c_r^T H(s_r)].   (12)

Then the reduced model H_r(s) = C_r(sE_r − A_r)^{−1}B_r satisfies (9). E_r from (10) and A_r from (11) are called, respectively, the Loewner matrix and the shifted Loewner matrix associated with H(s). The construction in (10)-(12) assumes that A_r − s_i E_r is invertible for i = 1, ..., r. If this is not the case, one uses the singular value decomposition of A_r − s_i E_r to truncate the redundant data; for details, see [19], [2].

B. IRKA using transfer function evaluations

The Loewner-matrix framework of Section II-A gives us the necessary tools to extend IRKA to general H(s) settings. Our algorithm will be an iterative interpolation method, in the end producing a reduced model satisfying the first-order optimality conditions. Let H_r(s) = Σ_{i=1}^{r} c_i b_i^T / (s − λ_i) be the current reduced model. Then the next iterate will be a Hermite bitangential interpolant to H(s) at the interpolation points s_i = −λ_i with the tangential directions b_i and c_i. This step will be achieved using the Loewner-matrix approach outlined above, and the process will be repeated until convergence. A brief sketch of the resulting iteration is given below in Algorithm TF-IRKA. Step 1a) creates an rth-order bitangential Hermite interpolant to H(s) at the current interpolation points and tangent directions.
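Before detailing the remaining steps, the construction (10)-(12) underlying Step 1a) is easy to state in code. The sketch below is our own illustration (the function `loewner_rom` and the random test system in the note are assumptions, not the paper's implementation); real interpolation data is assumed for simplicity:

```python
import numpy as np

def loewner_rom(H, dH, pts, b, c):
    """Build the Loewner realization (10)-(12): H(s) and dH(s) return
    p x m matrices; pts are distinct (real) interpolation points;
    b and c hold the right/left tangent direction vectors."""
    r = len(pts)
    Hs = [H(s) for s in pts]
    Er = np.empty((r, r))
    Ar = np.empty((r, r))
    for i in range(r):
        for j in range(r):
            if i != j:
                Er[i, j] = -c[i] @ (Hs[i] - Hs[j]) @ b[j] / (pts[i] - pts[j])
                Ar[i, j] = -c[i] @ (pts[i]*Hs[i] - pts[j]*Hs[j]) @ b[j] / (pts[i] - pts[j])
            else:
                dHi = dH(pts[i])
                Er[i, i] = -c[i] @ dHi @ b[i]                   # -c_i^T H'(s_i) b_i
                Ar[i, i] = -c[i] @ (Hs[i] + pts[i]*dHi) @ b[i]  # -c_i^T [sH(s)]' b_i
    Cr = np.column_stack([Hs[k] @ b[k] for k in range(r)])
    Br = np.vstack([c[k] @ Hs[k] for k in range(r)])
    return Er, Ar, Br, Cr
```

The resulting H_r(s) = C_r(sE_r − A_r)^{−1}B_r then matches H in the bitangential Hermite sense (9) at each s_k, which is easy to verify numerically for a random rational H.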
Steps 1b) and 1c) determine a pole-residue representation of the current rth-order model, which defines the next set of interpolation points, and potentially also the next set of tangent directions if the residue correction of Step 1d) is omitted. The optional Step 1d) is described in Section III. Notice that upon convergence, the interpolation points are the mirror images of the poles of H_r(s) and the interpolation tangent directions are residue directions of H_r(s); thus the first-order optimality conditions (8) are satisfied.

ALGORITHM TF-IRKA: IRKA using transfer function evaluations
Given: a transfer function H(s); a reduction order r; initial interpolation points {s_1, ..., s_r} and tangent directions {b_1, ..., b_r} and {c_1, ..., c_r} (chosen to be closed under conjugation).
Return: H_r(s) = C_r(sE_r − A_r)^{−1}B_r, a reduced system realization satisfying the H2-optimality conditions (8).
1) repeat until convergence
   a) Construct E_r, A_r, C_r and B_r as in (10)-(12).
   b) Compute A_r x_i = λ_i E_r x_i and y_i* A_r = λ_i y_i* E_r with y_i* E_r x_j = δ_ij, where y_i and x_i are the left and right eigenvectors associated with λ_i.
   c) s_i ← −λ_i, b_i^T ← y_i* B_r, and c_i ← C_r x_i, for i = 1, ..., r.
   d) (optional) Residue correction; see Section III.
2) Construct E_r, A_r, C_r and B_r as in (10)-(12).

As for IRKA, this algorithm is a fixed-point iteration, and a similar convergence analysis can be mustered. These details are left for a separate work. We note that for the special case of generic first-order realizations H(s) = C(sE − A)^{−1}B, TF-IRKA is theoretically equivalent to regular IRKA even though the numerical implementations are different; in other words, TF-IRKA contains IRKA as a special case.

C. A one-dimensional heat equation

This example, taken from [], analyzes the temperature distribution in a semi-infinite rod. Let x ≥ 0 describe the half-line for the rod and let y(x, t) denote the temperature at position x and time t. Then, the temperature distribution is governed by

∂y/∂t − ∂²y/∂x² = 0.
(13)

Following [], we assume that the temperature is controlled at x = 0 and we are interested in the temperature at x = 1. Using the initial condition y(x, 0) = 0 for x ≥ 0 and the boundary condition y(∞, t) = 0 for t ≥ 0, one obtains the transfer function representing the dynamics from y(0, t) to y(1, t) as H(s) = Y(1, s)/Y(0, s) = e^{−√s}, where Y(x, s) denotes the Laplace transform of y(x, t). Note that H(s) = e^{−√s} is an H2-function. Equivalently, the impulse response can be obtained as h(t) = (4πt³)^{−1/2} e^{−1/(4t)}, t ≥ 0. Now, our goal is to construct an optimal H2 reduced-model approximation for H(s) = e^{−√s} directly, without any discretization (in space); that is, we will not discretize the PDE in (13) to obtain a state-space description; we will directly apply TF-IRKA to H(s) = e^{−√s}. We choose r = 6. The main costs of the algorithm are simply evaluating
H(s) = e^{−√s} and H′(s) = −e^{−√s}/(2√s), and solving a 6 × 6 generalized eigenvalue problem; both are trivial computational tasks. Compare this with reducing a large-scale semi-discretized version of the original PDE, for example. Since this example is single-input/single-output, all the tangential directions are simply b_i = c_i = 1. The initial shifts are chosen as 6 real, logarithmically spaced points. Algorithm TF-IRKA then converges to a set of six optimal interpolation points (one complex conjugate pair, with imaginary parts ±3.96ı, and four real points), leading to the sixth-order rational approximant H_r(s) = C_r(sE_r − A_r)^{−1}B_r. We note that since we use evaluations of H(s) = e^{−√s} directly, our 6th-order rational approximation is an exact Hermite interpolant to H(s) = e^{−√s}. This is in direct contrast to methods where H(s) = e^{−√s} is first approximated by a high-order rational function, which is then fed to generic model reduction techniques such as balanced truncation. In those instances, the final reduced order model is no longer an exact interpolant to the original (irrational) transfer function. The (time domain) impulse responses of the original model and the reduced model are depicted in Figure 1, illustrating that the reduced model is virtually indistinguishable from the original one. Santarelli [21] has also approximated this model, using an optimization approach with peak-error objectives. Figure 6.4 on page 764 of [21] depicts the impulse response comparison using the method of [21] with r = 10. Even with r = 10 (recall that we have used r = 6), the reduced model of [21] shows a bigger deviation from the original model for t < 0.5; indeed, the approximation is poor around t = 0. This is expected, since the method of [21] is restrictive in the sense that it pre-assumes a specific structure for the reduced order poles (such as one distinct pole repeated r times, or two distinct poles repeated k and r − k times, etc.). In our case, we do not enforce any conditions on either the reduced order poles or the residues.
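In the SISO case all tangent directions are trivial, so TF-IRKA reduces to iterating the Loewner construction (10)-(12) on the shifts alone. The following minimal sketch is our own illustrative code (the function name, the tolerances, and the rational system used for checking are assumptions, not the paper's implementation):

```python
import numpy as np

def tf_irka_siso(H, dH, shifts, maxit=100, tol=1e-10):
    """SISO TF-IRKA sketch: build the Loewner model (10)-(12) at the
    current shifts, then replace the shifts by the mirror images of the
    reduced-order poles (Step 1c), until a fixed point is reached.
    H and dH must accept numpy arrays of complex points."""
    s = np.sort_complex(np.asarray(shifts, dtype=complex))
    r = len(s)

    def rom(s):
        Hs, dHs = H(s), dH(s)
        Er = np.empty((r, r), dtype=complex)
        Ar = np.empty((r, r), dtype=complex)
        for i in range(r):
            for j in range(r):
                if i != j:
                    Er[i, j] = -(Hs[i] - Hs[j]) / (s[i] - s[j])
                    Ar[i, j] = -(s[i]*Hs[i] - s[j]*Hs[j]) / (s[i] - s[j])
                else:
                    Er[i, i] = -dHs[i]
                    Ar[i, i] = -(Hs[i] + s[i]*dHs[i])
        return Er, Ar, Hs.reshape(-1, 1), Hs.reshape(1, -1)

    for _ in range(maxit):
        Er, Ar, _, _ = rom(s)
        lam = np.linalg.eigvals(np.linalg.solve(Er, Ar))  # poles of H_r
        s_new = np.sort_complex(-lam)                     # mirror images
        done = np.linalg.norm(s_new - s) < tol * np.linalg.norm(s)
        s = s_new
        if done:
            break
    return s, rom(s)   # final shifts and the model built at them (Step 2)
```

For the heat equation above, one would call `tf_irka_siso(lambda s: np.exp(-np.sqrt(s)), lambda s: -np.exp(-np.sqrt(s))/(2*np.sqrt(s)), s0)` with r = 6 logarithmically spaced initial shifts s0; upon convergence, the returned model interpolates e^{−√s} at the mirror images of its own poles, i.e. it satisfies (8).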
We let the reduced order poles vary throughout the iteration, aligning themselves so as to minimize the resulting H2 error.

III. RESIDUE CORRECTION

IRKA typically converges quickly for systems having a modest number of inputs or outputs. This has been discussed and illustrated via several examples in the literature; see, for example, [], [], [9]. Convergence may slow down or become erratic when the number of both inputs and outputs grows large. This could be anticipated, since when m and p are large, the number of decision variables in the minimization problem grows as well; overall there are r(m + p) degrees of freedom in H_r(s).

[Footnote: The vector fitting method [13] also allows construction of reduced models using only transfer function evaluations. However, this approach produces an H_r(s) with a nontrivial D_r term, H_r(s) = C_r(sE_r − A_r)^{−1}B_r + D_r, thus leading to an unbounded H2 error. For example, we have applied the vector fitting method to reduce H(s) = e^{−√s}, obtaining a reduced model having a small, but nonzero, D_r value. We do not consider vector fitting in this paper, as it is more appropriate for an H∞ approximation setting.]

Fig. 1. Impulse responses h(t) and h_r(t) of the original and reduced models.

We consider here the effect of adding a correction step that adjusts only the vector residues at the end of each cycle of Algorithm TF-IRKA. In particular, suppose that the set of reduced order poles, {λ_1, λ_2, ..., λ_r}, at the end of any cycle of Algorithm TF-IRKA consists of r distinct values in the open left half-plane. We view these values as fixed for the time being. Consider the set of rth-order p × m transfer functions

P_r = { H_r(s) = Σ_{k=1}^{r} c_k b_k^T / (s − λ_k) : c_k ∈ C^p, b_k ∈ C^m, and H_r is real }.

The condition that H_r is real implies that we may assume, without loss of generality, that the vector residues {c_k} and {b_k} are closed under conjugation, with the same conjugation symmetry as the set of (fixed) poles {λ_1, λ_2, ..., λ_r}.
We are interested in finding a reduced order system H_r ∈ P_r that has an optimal adjustment of vector residues, such that

‖H − H_r‖_{H2} = min_{Ĥ_r ∈ P_r} ‖H − Ĥ_r‖_{H2}.   (14)

Notice that the objective function (7) is biquadratic with respect to the residue directions; that is, it is quadratic with respect to each of c_k and b_k separately. This observation suggests the use of alternating least squares with respect to the left and right families of residue directions in order to solve (14) within each cycle of Algorithm TF-IRKA. No additional transfer function evaluations are necessary, and the added computational effort is negligible; the added complexity per cycle will not exceed O(r³).
Suppose for the moment that the reduced order poles, {λ_1, λ_2, ..., λ_r}, and the right residue directions, {b_1, b_2, ..., b_r}, are fixed. We assume that only the left residue directions, {c_1, c_2, ..., c_r}, are allowed to vary. For C = [c_1, c_2, ..., c_r] ∈ C^{p×r}, define the linear map M : C^{p×r} → P_r as

M(C) = Σ_k c_k b_k^T / (s − λ_k) = Σ_k (C e_k) b_k^T / (s − λ_k),

and seek C* ∈ C^{p×r} that solves

min_{C ∈ C^{p×r}} ‖H − M(C)‖_{H2}.   (15)

C* may be characterized via the pseudoinverse of M: C* = M†(H). We obtain an expression for M† via a singular value decomposition of M, as follows. Define the r × r Cauchy matrix

𝕄 = [ (b_i^* b_j)/(−λ̄_i − λ_j) ]_{i,j}.   (16)

Since the reduced order poles, {λ_1, λ_2, ..., λ_r}, are in the left half-plane, 𝕄 is Hermitian positive definite, and 𝕄 u_j = σ_j² u_j for unit-norm eigenvectors u_j and values σ_1 ≥ ... ≥ σ_r > 0. Now define Φ_ij = e_i u_j^T. Note that {Φ_ij} is an orthonormal set in C^{p×r} with respect to the Frobenius inner product, ⟨Φ_ij, Φ_kl⟩_F = δ_ik δ_jl; hence {Φ_ij} constitutes an orthonormal basis for C^{p×r}. Furthermore, {M(Φ_ij)}_{ij} is an orthogonal set in H2:

⟨M(Φ_ij), M(Φ_kl)⟩_{H2} = ⟨ Σ_ν (Φ_ij e_ν) b_ν^T/(s − λ_ν), Σ_μ (Φ_kl e_μ) b_μ^T/(s − λ_μ) ⟩_{H2} = δ_ik u_j^* 𝕄 u_l = σ_j² δ_jl δ_ik.

We find that Φ_ij is a right singular vector for M and that

G_ij(s) = (1/σ_j) M(Φ_ij) = (1/σ_j) Σ_k (u_j^T e_k) e_i b_k^T/(s − λ_k)

is a left singular vector for M, both associated with the corresponding singular value σ_j. Thus

M(C) = Σ_{i,j} σ_j G_ij(s) ⟨Φ_ij, C⟩_F,

and the pseudoinverse of M is directly available as

M†(H) = Σ_{i,j} Φ_ij ⟨H, G_ij⟩_{H2}/σ_j,  with  ⟨H, G_ij⟩_{H2} = (1/σ_j) Σ_k (u_j^* e_k) trace( H(−λ_k) b_k e_i^T ) = (1/σ_j) Σ_k (u_j^* e_k) e_i^T H(−λ_k) b_k.

The Cauchy matrix 𝕄 may be extremely poorly conditioned, leading to singular values σ_j of M that are very close to zero. Instead of using the full singular value decomposition of M to solve the best approximation problem (15), one may wish to truncate the SVD of M and solve a regularized version of (15): for a truncation index ρ ≤ r, define the truncated operator

M_ρ(C) = Σ_i Σ_{j=1}^{ρ} σ_j G_ij(s) ⟨Φ_ij, C⟩_F.
Then the solution to the regularized problem

min_{C ∈ C^{p×r}} ‖H − M_ρ(C)‖_{H2}   (17)

is given by C* = M_ρ†(H) = Σ_i Σ_{j=1}^{ρ} Φ_ij ⟨H, G_ij⟩_{H2}/σ_j. Denoting Y = [H(−λ_1)b_1, ..., H(−λ_r)b_r], Σ_ρ = diag(σ_1, ..., σ_ρ), and U_ρ = [u_1, u_2, ..., u_ρ], this assembles as C* = Y U_ρ Σ_ρ^{−2} U_ρ^*. Notice that when ρ = r, then C* = Y 𝕄^{−1}. We summarize these findings in a theorem.

Theorem 3: Let H(s) be an H2-function. For fixed reduced order poles, {λ_1, λ_2, ..., λ_r}, and right residue directions, {b_1, b_2, ..., b_r}, the model H_r(s) = Σ_{k=1}^{r} c_k b_k^T/(s − λ_k) minimizes ‖H − H_r‖_{H2} if and only if H(−λ_i) b_i = H_r(−λ_i) b_i, for i = 1, 2, ..., r. Let C* = [c_1, ..., c_r] denote the left residue directions that minimize ‖H − H_r‖_{H2}. Then C* = Y 𝕄^{−1}, where 𝕄 and Y are as defined above.

Note that once the poles and the right residues are fixed, the optimality conditions become necessary and sufficient. This theorem extends Gaier's result in [7] for SISO systems to the MIMO systems we are considering here. An analogous development may be followed to determine the best right residues, given fixed poles and fixed left residues. Toward that end, suppose that the reduced order poles, {λ_1, λ_2, ..., λ_r}, and the left residue directions, {c_1, c_2, ..., c_r}, are fixed. The right residue directions, {b_1, b_2, ..., b_r}, are allowed to vary: let B = [b_1, b_2, ..., b_r] ∈ C^{m×r} and define the linear map N : C^{m×r} → P_r as

N(B) = Σ_k c_k b_k^T/(s − λ_k) = Σ_k c_k e_k^T B^T/(s − λ_k).
We seek B* ∈ C^{m×r} that solves

min_{B ∈ C^{m×r}} ‖H − N(B)‖_{H2}.   (18)

Analogous to the previous development, B* may be characterized via the pseudoinverse of N: B* = N†(H), which in turn can be represented through a singular value decomposition of N. Define

ℕ = [ (c_i^* c_j)/(−λ̄_i − λ_j) ]_{i,j}   (19)

and note that ℕ is Hermitian positive definite, with ℕ v_j = ς_j² v_j for orthonormal eigenvectors {v_j} and values ς_1 ≥ ... ≥ ς_r > 0. Now define Ψ_ij = e_i v_j^T. Note that {Ψ_ij} is an orthonormal set in C^{m×r} with respect to the Frobenius inner product, ⟨Ψ_ij, Ψ_kl⟩_F = δ_ik δ_jl; hence {Ψ_ij} constitutes an orthonormal basis for C^{m×r}. Furthermore, {N(Ψ_ij)}_{ij} is an orthogonal set in H2:

⟨N(Ψ_ij), N(Ψ_kl)⟩_{H2} = ⟨ Σ_ν c_ν e_ν^T Ψ_ij^T/(s − λ_ν), Σ_μ c_μ e_μ^T Ψ_kl^T/(s − λ_μ) ⟩_{H2} = δ_ik v_l^* ℕ v_j = ς_j² δ_jl δ_ik.

We find that Ψ_ij is a right singular vector for N and that

F_ij(s) = (1/ς_j) N(Ψ_ij) = (1/ς_j) Σ_k (v_j^T e_k) c_k e_i^T/(s − λ_k)

is a left singular vector for N, both associated with the corresponding singular value ς_j. Thus

N(B) = Σ_{i,j} ς_j F_ij(s) ⟨Ψ_ij, B⟩_F,

and the pseudoinverse of N is directly available as

N†(H) = Σ_{i,j} Ψ_ij ⟨H, F_ij⟩_{H2}/ς_j,  with  ⟨H, F_ij⟩_{H2} = (1/ς_j) Σ_k (v_j^* e_k) trace( H(−λ_k) e_i c_k^* ) = (1/ς_j) Σ_k (v_j^* e_k) c_k^* H(−λ_k) e_i.

The Cauchy matrix ℕ may be extremely poorly conditioned, leading to singular values ς_j of N that are very close to zero. Instead of using the full singular value decomposition of N to solve the best approximation problem (18), one may wish to truncate the SVD of N and solve a regularized version of (18): for a truncation index ρ ≤ r, define the truncated operator

N_ρ(B) = Σ_i Σ_{j=1}^{ρ} ς_j F_ij(s) ⟨Ψ_ij, B⟩_F.

Then the solution to the regularized problem

min_{B ∈ C^{m×r}} ‖H − N_ρ(B)‖_{H2}   (20)

is given by B* = N_ρ†(H). Denoting Z = [H(−λ_1)^T c_1, ..., H(−λ_r)^T c_r], Σ_ρ = diag(ς_1, ..., ς_ρ), and V_ρ = [v_1, v_2, ..., v_ρ], this assembles as B* = Z V_ρ Σ_ρ^{−2} V_ρ^*.
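For real data (real poles and real residue directions) and full truncation index ρ = r, the update above collapses to B* = Z ℕ^{−1}. A small sketch of ours (the function name and the random test system are assumptions, not the paper's code) that also exhibits the resulting interpolation property c_i^T H(−λ_i) = c_i^T H_r(−λ_i):

```python
import numpy as np

def correct_right_residues(H, lam, c):
    """Optimal right residues for fixed real poles lam and real left
    directions c (rho = r case): B* = Z N^{-1} with
    N_ij = c_i^T c_j / (-lam_i - lam_j) and Z = [H(-lam_k)^T c_k]."""
    r = len(lam)
    N = np.array([[c[i] @ c[j] / (-lam[i] - lam[j]) for j in range(r)]
                  for i in range(r)])
    Z = np.column_stack([H(-lam[k]).T @ c[k] for k in range(r)])
    return Z @ np.linalg.inv(N)   # column k is the new b_k
```

Since ℕ is the H2 Gram matrix of the functions c_k/(s − λ_k), it is positive definite for distinct stable poles but may be ill-conditioned, which is exactly when the ρ < r truncation becomes the safer choice.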
Notice that when ρ = r, then B* = Z ℕ^{−1}. An analogous result to Theorem 3 holds here as well:

Theorem 4: Let H(s) be an H2-function. For fixed reduced order poles, {λ_1, λ_2, ..., λ_r}, and left residue directions, {c_1, c_2, ..., c_r}, the model H_r(s) = Σ_{k=1}^{r} c_k b_k^T/(s − λ_k) minimizes ‖H − H_r‖_{H2} if and only if c_i^T H(−λ_i) = c_i^T H_r(−λ_i), for i = 1, 2, ..., r. Let B* = [b_1, ..., b_r] denote the right residue directions that minimize ‖H − H_r‖_{H2}. Then B* = Z ℕ^{−1}, where ℕ and Z are as defined above.

In order to solve the residue correction subproblem min_{H_r ∈ P_r} ‖H − H_r‖_{H2}, we alternate making optimal adjustments to the left vector residues and the right vector residues:

RESIDUE CORRECTION [Step 1d) in TF-IRKA]
Given: evaluations of the transfer function, {H(−λ_i)}_{i=1}^{r}; a pole-residue representation of the current reduced order model, H_r(s) = Σ_{k=1}^{r} c_k b_k^T/(s − λ_k); a truncation index ρ ≤ r.
Return: an updated reduced order model H̃_r(s) = Σ_{k=1}^{r} c̃_k b̃_k^T/(s − λ_k), having the same poles as H_r(s), with vector residues adjusted so that ‖H − H̃_r‖_{H2} = min_{Ĥ_r ∈ P_r} ‖H − Ĥ_r‖_{H2}.
1) repeat until convergence
   a) Evaluate Y = [H(−λ_1)b_1, ..., H(−λ_r)b_r].
   b) Evaluate the Cauchy matrix (16); find 𝕄 u_j = σ_j² u_j, Σ_ρ = diag(σ_1, ..., σ_ρ), and U_ρ = [u_1, u_2, ..., u_ρ].
   c) Calculate [c_1, c_2, ..., c_r] = Y U_ρ Σ_ρ^{−2} U_ρ^*.
   d) Evaluate Z = [H(−λ_1)^T c_1, ..., H(−λ_r)^T c_r].
   e) Evaluate the Cauchy matrix (19); find ℕ v_j = ς_j² v_j, Σ_ρ = diag(ς_1, ..., ς_ρ), and V_ρ = [v_1, v_2, ..., v_ρ].
   f) Calculate [b_1, b_2, ..., b_r] = Z V_ρ Σ_ρ^{−2} V_ρ^*.
   g) Equilibrate the magnitudes of the left/right residues: τ_k = sqrt(‖b_k‖/‖c_k‖); c_k ← τ_k c_k; b_k ← b_k/τ_k.
2) Set c̃_k = c_k and b̃_k = b_k.
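For real data and ρ = r, the whole correction step can be sketched in a few lines (our own code; the `sweeps` count and the equilibration factor τ_k = sqrt(‖b_k‖/‖c_k‖), which makes ‖c_k‖ = ‖b_k‖ while leaving the product c_k b_k^T unchanged, are our reading of step 1g)):

```python
import numpy as np

def residue_correction(H, lam, b, c, sweeps=5):
    """Alternating residue correction (rho = r, real data): optimal
    left update C = Y M^{-1}, optimal right update B = Z N^{-1},
    then norm equilibration. Uses only the r evaluations H(-lam_k)."""
    r = len(lam)
    Hm = [H(-lam[k]) for k in range(r)]   # the only H evaluations needed
    cauchy = lambda V: np.array([[V[i] @ V[j] / (-lam[i] - lam[j])
                                  for j in range(r)] for i in range(r)])
    for _ in range(sweeps):
        Y = np.column_stack([Hm[k] @ b[k] for k in range(r)])
        C = Y @ np.linalg.inv(cauchy(b))   # steps a)-c): new left residues
        c = [C[:, k] for k in range(r)]
        Z = np.column_stack([Hm[k].T @ c[k] for k in range(r)])
        B = Z @ np.linalg.inv(cauchy(c))   # steps d)-f): new right residues
        b = [B[:, k] for k in range(r)]
        for k in range(r):                 # step g): equilibrate norms
            tau = np.sqrt(np.linalg.norm(b[k]) / np.linalg.norm(c[k]))
            c[k], b[k] = tau * c[k], b[k] / tau
    return c, b
```

After each sweep, the updated H_r satisfies the right tangential conditions of Theorem 4 exactly; at a fixed point, the conditions of Theorem 3 and Theorem 4 hold simultaneously.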
A. Linearized Shallow Water Equations

The full-order model, taken from [18], represents the 2-D linearized shallow water equations with tidal forcing used in the modeling of St. Louis Bay. It has 58 inputs (corresponding to 29 wind-forecast locations) and 5 outputs (corresponding to surface elevations at 5 measurement stations). A finite element discretization leads to a model of the form H(s) = C(sE − A)^{−1}B. Hence, in this case, the original model is indeed a rational function itself, with degree n = 363; IRKA and TF-IRKA are theoretically the same even though their implementations are different. We reduce the order to r = 10 using IRKA both with and without the residue correction step. Figure 2 shows the comparison for the first choice of 10 logarithmically spaced initial interpolation points. The top plot in Figure 2 depicts the relative H2 error during the iteration. While IRKA with the residue correction converges after only 3 steps, regular IRKA takes 8 steps to converge; hence, the residue correction improves the convergence of IRKA even further. The missing data points for regular IRKA at k = 1 and k = 2 correspond to intermediate unstable models. IRKA, after the third step, successfully corrects these models and converges to a stable system. This has been a common observation for IRKA: even in cases where an intermediate unstable model has developed, IRKA corrects these reduced order poles (or, equivalently, the interpolation points) and leads to a stable reduced model. Interestingly, for this example, IRKA with the residue correction step avoids these unstable intermediate steps. We note that in the end both methods converge to the same reduced model. The bottom plot in Figure 2 shows the relative distance in the 2-norm between the interpolation points from the kth and (k + 1)th steps, further illustrating the faster convergence of IRKA with the residue correction.

Fig. 2. Evolution of the relative H2 error (top) and convergence of the interpolation points (bottom) vs. the iteration index k.
We repeat the same numerical study for r = 10, now using logspace(−5, 0, 10) as the starting interpolation points. The results are shown in Figure 3. The first observation is that both versions of IRKA (with and without the residue correction) converge to the same stable reduced model obtained in the previous case. While regular IRKA takes 6 steps to converge, IRKA with residue correction converges after only 5 steps and avoids the intermediate unstable models, further illustrating the effectiveness of the residue correction algorithm.

Fig. 3. Evolution of the relative H2 error (top) and convergence of the interpolation points (bottom) vs. the iteration index k.

B. International Space Station 1R Module

For the previous model, even though the convergence behavior was different, IRKA with and without the residue correction converged to the same reduced model. In this example, we illustrate that the residue correction step not only can improve convergence but can also lead to a smaller error. To illustrate this, we use one of the benchmark models for model reduction, namely the International Space Station 1R Module, with m = 3 inputs and p = 3 outputs [12]. As in the previous example, the original model is a rational function itself, having degree n = 270, with a corresponding transfer function H(s) = C(sI − A)^{−1}B, where A ∈ R^{270×270}, B ∈ R^{270×3}, and C ∈ R^{3×270}. For r = 10, 12, 14, ..., 30, we construct a degree-r rational approximant using IRKA with and without residue correction, using the same initialization. The top plot in Figure 4 shows the relative H2 error vs. r for both cases. As the plot illustrates, even though for some r values both methods converge to the same reduced model, in most of the cases IRKA with the residue correction converges to a smaller error value. This is better illustrated in the lower plot of Figure 4, where we depict the ratio of the H2 error from IRKA to the H2 error from IRKA with the residue correction.
Except for three r values, where both methods yielded the same error, IRKA with the residue correction outperformed IRKA. In the most pronounced case, the

[Footnote: Only even r values are considered, since the model originates from a large-scale second-order model H(s) = C(s²M + sG + K)^{−1}B.]
gain is almost double, with an error ratio of 0.543, an almost 50% improvement.

Fig. 4. Relative H2 error (top) and H2 error ratios (bottom) as r varies.

IV. CONCLUSION

By incorporating the Loewner-matrix approach in IRKA, we have developed an optimal H2 approximation method, namely TF-IRKA, that uses only transfer function evaluations, without access to any particular realization, thus extending optimal H2 model reduction to irrational, infinite-dimensional dynamical systems, such as systems with delays. Moreover, we have introduced a residue-correction step that adjusts the vector residues at the end of each cycle of TF-IRKA so as to minimize the H2 error for the current pole selection. Several numerical examples have been used to illustrate the effectiveness of the proposed methods.

REFERENCES

[1] A.C. Antoulas. Approximation of Large-Scale Dynamical Systems. SIAM, 2005.
[2] A.C. Antoulas, C.A. Beattie, and S. Gugercin. Interpolatory model reduction of large-scale dynamical systems. In J. Mohammadpour and K. Grigoriadis, editors, Efficient Modeling and Control of Large-Scale Systems. Springer-Verlag, 2010.
[3] C.A. Beattie and S. Gugercin. Krylov-based minimization for optimal H2 model reduction. In Proceedings of the 46th IEEE Conference on Decision and Control, Dec. 2007.
[4] C.A. Beattie and S. Gugercin. A trust region method for optimal H2 model reduction. In Proceedings of the 48th IEEE Conference on Decision and Control, Dec. 2009.
[5] J. Borggaard, E. Cliff, and S. Gugercin. Model reduction for indoor-air behavior in control design for energy-efficient buildings. In Proceedings of the American Control Conference, accepted to appear.
[6] A. Bunse-Gerstner, D. Kubalinska, G. Vossen, and D. Wilczek. H2-norm optimal model reduction for large scale discrete dynamical MIMO systems. Journal of Computational and Applied Mathematics, 233(5):1202-1216, 2010.
[7] D. Gaier. Lectures on Complex Approximation. Birkhäuser, 1987.
[8] S. Gugercin.
An iterative rational Krylov algorithm (IRKA) for optimal H2 model reduction. In Householder Symposium XVI, Seven Springs Mountain Resort, PA, USA, May 2005.
[9] S. Gugercin. Model reduction for large-scale dynamical systems. Lecture at the Advances in Model Order Reduction workshop, University of Manchester, Manchester, UK.
[10] S. Gugercin, A.C. Antoulas, and C.A. Beattie. A rational Krylov iteration for optimal H2 model reduction. In Proceedings of MTNS 2006, 2006.
[11] S. Gugercin, A.C. Antoulas, and C.A. Beattie. H2 model reduction for large-scale linear dynamical systems. SIAM J. Matrix Anal. Appl., 30(2):609-638, 2008.
[12] S. Gugercin, A.C. Antoulas, and N. Bedrossian. Approximation of the International Space Station 1R and 12A models. In Proceedings of the 40th IEEE Conference on Decision and Control, 2001.
[13] B. Gustavsen and A. Semlyen. Rational approximation of frequency domain responses by vector fitting. IEEE Transactions on Power Delivery, 14(3):1052-1061, 1999.
[14] Y. Halevi. Frequency weighted model reduction via optimal projection. IEEE Trans. Automatic Control, 37(10):1537-1542, 1992.
[15] D. Hyland and D. Bernstein. The optimal projection equations for model reduction and the relationships among the methods of Wilson, Skelton, and Moore. IEEE Trans. Automatic Control, 30(12):1201-1211, 1985.
[16] J.G. Korvink and E.B. Rudnyi. Oberwolfach benchmark collection. In P. Benner, V. Mehrmann, and D.C. Sorensen, editors, Dimension Reduction of Large-Scale Systems, volume 45 of Lecture Notes in Computational Science and Engineering. Springer-Verlag, Berlin/Heidelberg, 2005.
[17] D. Kubalinska, A. Bunse-Gerstner, G. Vossen, and D. Wilczek. H2-optimal interpolation based model reduction for large-scale systems. In Proceedings of the 16th International Conference on System Science, Poland, 2007.
[18] T.C. Massey. A Krylov H2 optimal reduced order modeling technique applied to the two-dimensional linearized shallow water equations: A case study. In SIAM Conference on Computational Science and Engineering, Miami, FL, July 2009.
[19] A.J. Mayo and A.C. Antoulas. A framework for the solution of the generalized realization problem. Linear Algebra and Its Applications, 425(2-3):634-662, 2007.
[20] L. Meier III and D. Luenberger. Approximation of linear constant systems. IEEE Trans. Automatic Control, 12(5):585-588, 1967.
[21] K.R. Santarelli. A framework for reduced order modeling with mixed moment matching and peak error objectives. SIAM Journal on Scientific Computing, 32(1), 2010.
[22] D.C. Sorensen and A.C. Antoulas. The Sylvester equation and approximate balanced reduction. Linear Algebra and its Applications, 351:671-700, 2002.
[23] J.T. Spanos, M.H. Milman, and D.L. Mingori. A new algorithm for L2 optimal model reduction. Automatica, 28(5):897-909, 1992.
[24] P. Van Dooren, K.A. Gallivan, and P.-A. Absil. H2-optimal model reduction of MIMO systems. Applied Mathematics Letters, 21(12):1267-1273, 2008.
[25] D.A. Wilson. Optimum solution of model-reduction problem. Proc. Inst. Elec. Eng., 117(6):1161-1165, 1970.
[26] W.Y. Yan and J. Lam. An approximate approach to H2 optimal model reduction. IEEE Trans. Automatic Control, 44(7):1341-1358, 1999.
[27] D. Zigic, L.T. Watson, and C. Beattie. Contragredient transformations applied to the optimal projection equations. Linear Algebra Appl., 188:665-676, 1993.
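As an illustration of the residue-correction idea summarized above — for a fixed set of stable reduced poles, choose the residues that minimize the H2 error — here is a minimal SISO sketch with real poles. It uses the standard identities <1/(s-mu), 1/(s-lam)>_H2 = 1/(-mu-lam) and <H, 1/(s-lam)>_H2 = H(-lam), so the optimal residues solve a small Cauchy-like linear system built from transfer function samples only. Function and variable names here are ours, not the paper's, and this is a sketch under those assumptions rather than the paper's implementation.

```python
import numpy as np

def correct_residues(H, poles):
    """Residues c minimizing ||H - sum_i c_i/(s - lam_i)||_H2 for fixed real, stable poles.

    Normal equations: M c = h, with Gram matrix M_ij = -1/(lam_i + lam_j)
    (the H2 inner product of the pole basis functions) and h_i = H(-lam_i)
    (transfer function evaluations only -- no realization needed).
    """
    poles = np.asarray(poles, dtype=float)
    M = -1.0 / (poles[:, None] + poles[None, :])   # Gram matrix of 1/(s - lam_i)
    h = np.array([H(-lam) for lam in poles])       # reflected-pole samples of H
    return np.linalg.solve(M, h)

# Sanity check: if H itself is a sum of first-order terms over the same
# poles, the exact residues are recovered.
H = lambda s: 2.0 / (s + 1.0) + 3.0 / (s + 4.0)
c = correct_residues(H, [-1.0, -4.0])   # recovers [2.0, 3.0]
```

For MIMO systems the same normal equations appear columnwise, with matrix-valued samples H(-lam_i) replacing the scalars.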