Zentrum für Technomathematik
Fachbereich 3 - Mathematik und Informatik

Necessary optimality conditions for H2-norm optimal model reduction

Georg Vossen, Angelika Bunse-Gerstner, Dorota Kubalińska, Daniel Wilczek

Berichte aus der Technomathematik, Report, Oktober 2007


NECESSARY OPTIMALITY CONDITIONS FOR H2-NORM OPTIMAL MODEL REDUCTION

G. VOSSEN, A. BUNSE-GERSTNER, D. KUBALINSKA AND D. WILCZEK

Abstract. This paper deals with $H_2$-norm optimal model reduction for linear time invariant continuous MIMO systems. We give an overview of several representations of linear systems in state space as well as in Laplace space and discuss the $H_2$-norm for continuous MIMO systems with multiple poles. On this basis, necessary optimality conditions for the $H_2$-norm optimal model reduction problem are developed.

Key words. Model reduction, $H_2$-norm, necessary optimality conditions

AMS subject classifications.

1. Problem formulation. Consider the following linear time invariant (LTI) system

$$\Sigma = \left(\begin{array}{c|c} A & B\\ \hline C & 0\end{array}\right):\quad \dot x(t) = Ax(t) + Bu(t),\quad y(t) = Cx(t), \qquad (1.1)$$

where $x \in \mathbb{C}^n$, $u \in \mathbb{C}^m$ and $y \in \mathbb{C}^p$ are called the state variable, the input variable and the output variable, respectively. The matrices $A \in \mathbb{C}^{n,n}$, $B \in \mathbb{C}^{n,m}$ and $C \in \mathbb{C}^{p,n}$ are constant w.r.t. the time variable $t$. The system $\Sigma$ is referred to as stable if all eigenvalues of $A$ are in the left half complex plane (LHP), i.e., all eigenvalues $\lambda_j$ of $A$ satisfy $\mathrm{Re}(\lambda_j) < 0$. Let us now introduce the reachability matrix $R_n$ and the observability matrix $O_n$ defined by

$$R_n(\Sigma) = [B, AB, \ldots, A^{n-1}B] \in \mathbb{C}^{n,nm}, \qquad O_n(\Sigma) = [C^*, A^*C^*, \ldots, (A^*)^{n-1}C^*]^* \in \mathbb{C}^{pn,n}.$$

A system is called reachable, resp. observable, if $R_n$, resp. $O_n$, has full rank $n$. In this paper we assume that all occurring systems are stable, reachable and observable. The goal of model reduction is to find a reduced system

$$\hat\Sigma:\quad \dot{\hat x}(t) = \hat A\hat x(t) + \hat B u(t), \qquad \hat y(t) = \hat C\hat x(t) \qquad (1.2)$$

with $\hat x \in \mathbb{C}^r$, $\hat A \in \mathbb{C}^{r,r}$, $\hat B \in \mathbb{C}^{r,m}$ and $\hat C \in \mathbb{C}^{p,r}$ with the property that a certain norm of the so-called error system

$$\Sigma - \hat\Sigma = \left(\begin{array}{cc|c} A & 0 & B\\ 0 & \hat A & \hat B\\ \hline C & -\hat C & 0\end{array}\right)$$

is small. In this paper, we are interested in optimal model reduction with respect to the $H_2$-norm of the system, which is defined as follows. The Laplace transform $\mathcal{L}\{f(t)\}(s) = \int_0^\infty f(t)\,e^{-st}\,dt$ applied to the system (1.1) leads to a purely algebraic system of equations in the frequency domain:

$$sX(s) - x(0) = AX(s) + BU(s), \qquad Y(s) = CX(s), \qquad (1.3)$$

where $X(s) = \mathcal{L}\{x(t)\}(s)$, $U(s) = \mathcal{L}\{u(t)\}(s)$ and $Y(s) = \mathcal{L}\{y(t)\}(s)$ denote the Laplace transforms of the state, input and output, respectively.
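As a small numerical aside (not part of the original report), the stability, reachability and observability assumptions above can be checked by forming $R_n$ and $O_n$ and testing their ranks; a minimal Python sketch with made-up example data:

```python
import numpy as np

# Made-up example data for illustration; any (A, B, C) of matching sizes works here.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Reachability matrix R_n = [B, AB, ..., A^{n-1}B] and
# observability matrix O_n = [C; CA; ...; CA^{n-1}] stacked row-wise.
R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("reachable: ", np.linalg.matrix_rank(R) == n)
print("observable:", np.linalg.matrix_rank(O) == n)
print("stable:    ", np.all(np.linalg.eigvals(A).real < 0))
```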

For simplicity, we assume $x(0) = 0$. Solving system (1.3) for $Y$ leads to $Y(s) = H(s)U(s)$, where

$$H(s) = C(sI_n - A)^{-1}B, \qquad s \in \mathbb{C}, \qquad (1.4)$$

with $I_n$ being the $n$-th order identity matrix, is called the transfer function. The $H_2$-norm of the system $\Sigma$ is defined as the $H_2$-norm of its transfer function $H$, given by

$$\|H\|^2_{H_2} = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{trace}\big(H(iw)^* H(iw)\big)\,dw, \qquad (1.5)$$

cf., e.g., [1], where $i$ is the imaginary unit with $i^2 = -1$. Hence, the aim of $H_2$-norm optimal model reduction is

$$\min_{\hat\Sigma} J(\hat\Sigma) := \|\Sigma - \hat\Sigma\|^2_{H_2}, \qquad (1.6)$$

or equivalently,

$$\min_{\hat H} J(\hat H) = \|H - \hat H\|^2_{H_2}. \qquad (1.7)$$

We note that representation (1.1) of the system $\Sigma$ is not unique, as the state basis can be transformed by $x = S\bar x$ with any regular matrix $S \in \mathbb{C}^{n,n}$, which yields

$$\dot{\bar x}(t) = \bar A\bar x(t) + \bar B u(t), \qquad y(t) = \bar C\bar x(t) \qquad (1.8)$$

with $\bar A = S^{-1}AS$, $\bar B = S^{-1}B$ and $\bar C = CS$, where $A$ and $\bar A$ have the same eigenvalues. However, the transfer function for (1.8) is the same as for (1.1). Therefore, it is common to identify the system not with its matrices $A$, $B$ and $C$ but with its transfer function $H$ and hence, also to speak of $H$ as a system.

Definition 1.1 (Real system). A system $H$ is called real if there exist real matrices $A$, $B$ and $C$ such that $H = C(s\,\mathrm{Id} - A)^{-1}B$ holds, where $\mathrm{Id}$ is the identity mapping.

In the following, we will also use the function $H^*$ defined by

$$H^*(s) := H(\bar s)^* = B^*(sI_n - A^*)^{-1}C^*, \qquad s \in \mathbb{C}. \qquad (1.9)$$

2. Properties of a transfer function.

2.1. SISO. We will now introduce different representations of a transfer function $H$. On the one hand, $H$ can be written as a quotient of two polynomials:

$$H(s) = \frac{\sum_{k=0}^{n-1}\alpha_k s^k}{\sum_{k=0}^{n}\beta_k s^k}, \qquad \beta_n \neq 0, \qquad (2.1)$$

with complex coefficients $\alpha_k$, $k = 0,\ldots,n-1$, and $\beta_k$, $k = 0,\ldots,n$. Here, the zeros of the denominator are the poles of the system, resp., the eigenvalues of the matrix $A$ in (1.1).
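As a numerical cross-check of definition (1.5) (an addition, not taken from the report), the $H_2$-norm of a stable system can also be evaluated through the controllability Gramian $P$ solving $AP + PA^* + BB^* = 0$, via the standard identity $\|H\|^2_{H_2} = \mathrm{trace}(CPC^*)$; a minimal Python sketch with made-up data:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Made-up stable example system: x' = A x + B u, y = C x.
A = np.array([[-1.0, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

# Controllability Gramian P solves A P + P A^T + B B^T = 0;
# the Gramian identity then gives ||H||_{H2}^2 = trace(C P C^T).
P = solve_continuous_lyapunov(A, -B @ B.T)
print("||H||_{H2}^2 =", np.trace(C @ P @ C.T))
```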

On the other hand, around each pole $\lambda_j$, $j = 1,\ldots,n$, we can expand $H$ into its Laurent series, i.e.,

$$H(s) = \sum_{l=-n}^{\infty} \gamma_{jl}\,(s-\lambda_j)^l. \qquad (2.2)$$

Here, the Laurent series starts at $l = -n$ as $H$ is a rational function whose denominator polynomial degree is equal to $n$. The coefficients $\gamma_{jl}$ are called the Laurent coefficients of $H$ at $\lambda_j$, and $\gamma_{j,-1}$ is called the residue, denoted by $\mathrm{Res}(H,\lambda_j)$. The order $l_0(j)$ of a pole $\lambda_j$ is defined as the smallest index $l$ such that $\gamma_{jl} \neq 0$, i.e., $\gamma_{jl} = 0$ holds for all $l < l_0(j)$. The case $l_0(j) = -1$ will be referred to as a simple pole; for $l_0(j) < -1$ we use the notation multiple pole. We note that $-n \leq l_0(j)$ holds as $H$ is rational. We will now give a third representation, based on partial fractions, which will be helpful in the following proofs.

2.1.1. Simple poles. In many applications, the system has $n$ different simple poles, i.e., $l_0(j) = -1$ holds for $j = 1,\ldots,n$. Therefore, it is worth investigating this situation in more detail. By expanding the transfer function $H$ into partial fractions, we obtain

$$H(s) = \sum_{j=1}^{n} \frac{\varphi_j}{s-\lambda_j}, \qquad \varphi_j \neq 0,\ j = 1,\ldots,n, \qquad (2.3)$$

where the pairwise different $\lambda_j$ are the poles of the system and $\varphi_j = \mathrm{Res}(H,\lambda_j)$ are the residues at $\lambda_j$, $j = 1,\ldots,n$. Obviously, the matrices $A$, $B$ and $C$ defined by

$$A = \mathrm{diag}[\lambda_1,\ldots,\lambda_n], \qquad B = [1\ \cdots\ 1]^T, \qquad C = [\varphi_1,\ldots,\varphi_n] \qquad (2.4)$$

describe the system (2.3). Hence, the entries $c_j$ of the vector $C$ in the representation (2.4) are the residues of $H$ at the poles $\lambda_j$.

Proposition 2.1. The following three conditions are equivalent: (i) $H$ is real, (ii) $\alpha_k$, $k = 0,\ldots,n-1$, and $\beta_k$, $k = 0,\ldots,n$, in (2.1) are real, (iii) the poles $\lambda_j$, $j = 1,\ldots,n$, and the residues $\varphi_j$, $j = 1,\ldots,n$, in (2.3) appear in conjugate pairs, i.e., if $\lambda_j$ is a pole with residue $\varphi_j$, then $\bar\lambda_j$ is a pole with residue $\bar\varphi_j$.

Proof: Implication (i) $\Rightarrow$ (ii) is obvious. For implication (ii) $\Rightarrow$ (iii), equation (2.1) yields

$$\overline{H(\bar s)} = \overline{\left(\frac{\sum_{k=0}^{n-1}\alpha_k \bar s^k}{\sum_{k=0}^{n}\beta_k \bar s^k}\right)} = \frac{\sum_{k=0}^{n-1}\bar\alpha_k s^k}{\sum_{k=0}^{n}\bar\beta_k s^k} = H(s)$$

as $\alpha_k$ and $\beta_k$ are real. On the other hand, (2.3) implies

$$\overline{H(\bar s)} = \overline{\sum_{j=1}^{n}\frac{\varphi_j}{\bar s-\lambda_j}} = \sum_{j=1}^{n}\frac{\bar\varphi_j}{s-\bar\lambda_j}.$$

Hence, we obtain

$$\sum_{j=1}^{n}\frac{\varphi_j}{s-\lambda_j} = \sum_{j=1}^{n}\frac{\bar\varphi_j}{s-\bar\lambda_j}$$

and comparison of coefficients leads to (iii). For implication (iii) $\Rightarrow$ (i) we will now give a representation with real matrices $A$, $B$ and $C$ if poles and residues of $H$ appear in conjugate pairs. Without loss of generality, we consider the case $n = 2$; the proof for $n > 2$ can be done blockwise. If we have two different real poles with, due to condition (iii), real residues, one can take representation (2.4). For nonreal poles $\lambda_1 = \bar\lambda_2 =: \lambda$ with residues $\varphi_1 = \bar\varphi_2 =: \varphi$ the transfer function is given by

$$H(s) = \frac{\varphi}{s-\lambda} + \frac{\bar\varphi}{s-\bar\lambda}.$$

Consider the system represented by the matrices

$$A = \begin{pmatrix}\mathrm{Re}\,\lambda & \mathrm{Im}\,\lambda\\ -\mathrm{Im}\,\lambda & \mathrm{Re}\,\lambda\end{pmatrix}, \qquad B = \begin{pmatrix}1\\ 1\end{pmatrix}, \qquad C = (\mathrm{Re}\,\varphi - \mathrm{Im}\,\varphi,\ \mathrm{Re}\,\varphi + \mathrm{Im}\,\varphi). \qquad (2.5)$$

Its transfer function is given by

$$C(sI_2 - A)^{-1}B = (\mathrm{Re}\,\varphi - \mathrm{Im}\,\varphi,\ \mathrm{Re}\,\varphi + \mathrm{Im}\,\varphi)\begin{pmatrix}s-\mathrm{Re}\,\lambda & -\mathrm{Im}\,\lambda\\ \mathrm{Im}\,\lambda & s-\mathrm{Re}\,\lambda\end{pmatrix}^{-1}\begin{pmatrix}1\\ 1\end{pmatrix} = \frac{2\big(\mathrm{Re}\,\varphi\,(s-\mathrm{Re}\,\lambda) - \mathrm{Im}\,\varphi\,\mathrm{Im}\,\lambda\big)}{(s-\mathrm{Re}\,\lambda)^2 + (\mathrm{Im}\,\lambda)^2} = \frac{\varphi}{s-\lambda} + \frac{\bar\varphi}{s-\bar\lambda} = H(s).$$

This completes the proof. We note that the well-known reachable (and observable) canonical form also yields a proof for implication (ii) $\Rightarrow$ (i).

The function $H^*$ is given by

$$H^*(s) = \sum_{j=1}^{n}\frac{\bar\varphi_j}{s-\bar\lambda_j} \qquad (2.6)$$

with $\varphi_j$ and $\lambda_j$ from (2.3). Due to Proposition 2.1, we obtain $H^* = H$ for real systems.

2.1.2. Multiple poles. In this case, the transfer function has the following form:

$$H(s) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{\varphi_{jl}}{(s-\lambda_j)^l}, \qquad \sum_{j=1}^{N} n_j = n, \qquad \varphi_{j,n_j} \neq 0,\ j = 1,\ldots,N, \qquad (2.7)$$

where the pairwise different $\lambda_j$, $j = 1,\ldots,N$, are the poles, each of order $n_j$ with corresponding coefficients $\varphi_{jl}$, $l = 1,\ldots,n_j$. For $j \in \{1,\ldots,N\}$, rewrite (2.7) as

$$H(s) = \sum_{k=1,\,k\neq j}^{N}\sum_{l=1}^{n_k}\frac{\varphi_{kl}}{(s-\lambda_k)^l} + \sum_{l=1}^{n_j}\frac{\varphi_{jl}}{(s-\lambda_j)^l}.$$

The first sum is holomorphic around $\lambda_j$. Therefore, it can be expanded into its Taylor series around $\lambda_j$, which coincides with all summands of the Laurent series of $H$ around $\lambda_j$ with nonnegative indices. The second term corresponds to all summands of the Laurent series with negative indices, i.e., $\varphi_{jl} = \gamma_{j,-l}$ for $l = 1,\ldots,n_j$. This term is often called the principal part of the Laurent series around $\lambda_j$, and we will therefore denote the coefficients $\varphi_{jl}$, $l = 1,\ldots,n_j$, as principal coefficients. For a closer investigation of the principal coefficients define the Jordan matrices $J_j$, $j = 1,\ldots,N$,

as the $n_j \times n_j$-dimensional matrix with $\lambda_j$ on the diagonal, ones on the super-diagonal and zeros elsewhere. Due to

$$(sI_{n_j} - J_j)^{-1} = \begin{pmatrix}(s-\lambda_j)^{-1} & \cdots & (s-\lambda_j)^{-n_j}\\ & \ddots & \vdots\\ 0 & & (s-\lambda_j)^{-1}\end{pmatrix}, \qquad (2.8)$$

the following matrices $A$, $B$ and $C$ describe the system (2.7):

$$A = \mathrm{diag}[J_1,\ldots,J_N], \qquad B = [e_{n_1}^T,\ldots,e_{n_N}^T]^T, \qquad C = [\varphi_{1,n_1},\ldots,\varphi_{1,1},\ \ldots,\ \varphi_{N,n_N},\ldots,\varphi_{N,1}]. \qquad (2.9)$$

Here, $e_j$ is the $j$-th unit vector, i.e., $e_j = (0,\ldots,0,1)^T \in \mathbb{C}^j$. Therefore, the entries of the matrix $C$ in representation (2.9) are the principal coefficients $\varphi_{jl}$ in representation (2.7) of $H$.

Proposition 2.2. The following three conditions are equivalent: (i) $H$ is real, (ii) $\alpha_k$, $k = 0,\ldots,n-1$, and $\beta_k$, $k = 0,\ldots,n$, in (2.1) are real, (iii) the poles $\lambda_j$, $j = 1,\ldots,N$, and corresponding principal coefficients $\varphi_{jl}$, $j = 1,\ldots,N$, $l = 1,\ldots,n_j$, in (2.7) appear in conjugate pairs, i.e., if $\lambda_j$ is a pole with coefficients $\varphi_{jl}$, then $\bar\lambda_j$ is a pole with coefficients $\bar\varphi_{jl}$.

Proof: Again, (i) $\Rightarrow$ (ii) is obvious and (ii) $\Rightarrow$ (iii) was already proven in Proposition 2.1. For (iii) $\Rightarrow$ (i), it is without loss of generality sufficient to find real matrices $A$, $B$ and $C$ which represent the following transfer function $H$ with two nonreal poles $\lambda_1 = \bar\lambda_2 =: \lambda$ and corresponding principal coefficients $\varphi_{11} = \bar\varphi_{21} =: \varphi$, resp., $\varphi_{12} = \bar\varphi_{22} =: \psi$:

$$H(s) = \frac{\varphi}{s-\lambda} + \frac{\bar\varphi}{s-\bar\lambda} + \frac{\psi}{(s-\lambda)^2} + \frac{\bar\psi}{(s-\bar\lambda)^2}.$$

Consider the system represented by

$$A = \begin{pmatrix}A_1 & 0\\ I_2 & A_1\end{pmatrix}, \qquad B = \begin{pmatrix}B_1\\ B_2\end{pmatrix}, \qquad C = (C_1,\ C_2),$$

where

$$A_1 = \begin{pmatrix}\mathrm{Re}\,\lambda & \mathrm{Im}\,\lambda\\ -\mathrm{Im}\,\lambda & \mathrm{Re}\,\lambda\end{pmatrix}, \qquad B_1 = \begin{pmatrix}1\\ 1\end{pmatrix}, \qquad B_2 = \begin{pmatrix}0\\ 0\end{pmatrix},$$
$$C_1 = (\mathrm{Re}\,\varphi - \mathrm{Im}\,\varphi,\ \mathrm{Re}\,\varphi + \mathrm{Im}\,\varphi), \qquad C_2 = (\mathrm{Re}\,\psi - \mathrm{Im}\,\psi,\ \mathrm{Re}\,\psi + \mathrm{Im}\,\psi).$$

Its transfer function is

$$C(sI_4 - A)^{-1}B = (C_1,\ C_2)\begin{pmatrix}(sI_2-A_1)^{-1} & 0\\ (sI_2-A_1)^{-2} & (sI_2-A_1)^{-1}\end{pmatrix}\begin{pmatrix}B_1\\ B_2\end{pmatrix} = C_1(sI_2-A_1)^{-1}B_1 + C_2(sI_2-A_1)^{-2}B_1 + C_2(sI_2-A_1)^{-1}B_2$$
$$= \frac{2\big(\mathrm{Re}\,\varphi\,(s-\mathrm{Re}\,\lambda) - \mathrm{Im}\,\varphi\,\mathrm{Im}\,\lambda\big)}{(s-\mathrm{Re}\,\lambda)^2 + (\mathrm{Im}\,\lambda)^2} + \frac{2\big(\mathrm{Re}\,\psi\,\big((s-\mathrm{Re}\,\lambda)^2 - (\mathrm{Im}\,\lambda)^2\big) - 2\,\mathrm{Im}\,\psi\,\mathrm{Im}\,\lambda\,(s-\mathrm{Re}\,\lambda)\big)}{\big((s-\mathrm{Re}\,\lambda)^2 + (\mathrm{Im}\,\lambda)^2\big)^2} = H(s),$$

which completes the proof. The function $H^*$ is given by

$$H^*(s) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{\bar\varphi_{jl}}{(s-\bar\lambda_j)^l}.$$

Analogously to the case of simple poles, Proposition 2.2 yields $H^* = H$ for real systems.

2.2. MIMO. Now, $B$ and $C$ are matrices:

$$B = [B^1\ \cdots\ B^m], \qquad C = [(C^1)^T\ \cdots\ (C^p)^T]^T,$$

with column vectors $B^i = (B^i_1,\ldots,B^i_n)^T \in \mathbb{C}^n$ representing the $i$-th input for $i = 1,\ldots,m$, and row vectors $C^k = (C^k_1,\ldots,C^k_n) \in \mathbb{C}^{1,n}$ representing the $k$-th output for $k = 1,\ldots,p$ of the system. Therefore, the transfer function $H$ is a $(p\times m)$-dimensional matrix with components $H_{ki}$, $i = 1,\ldots,m$, $k = 1,\ldots,p$:

$$H(s) = \begin{pmatrix}H_{11}(s) & \cdots & H_{1m}(s)\\ \vdots & & \vdots\\ H_{p1}(s) & \cdots & H_{pm}(s)\end{pmatrix}, \qquad H_{ki}(s) = C^k(sI_n - A)^{-1}B^i, \qquad (2.10)$$

and each $H_{ki}$ can be seen as a SISO transfer function with input $B^i$ and output $C^k$.

2.2.1. Simple poles. Here, each $H_{ki}$ has the form

$$H_{ki}(s) = \sum_{j=1}^{n}\frac{\varphi^{ki}_j}{s-\lambda_j}, \qquad (2.11)$$

where $\lambda_j$, $j = 1,\ldots,n$, are the poles of $H_{ki}$ (which are the same for each $H_{ki}$) and $\varphi^{ki}_j$, $j = 1,\ldots,n$, are the residues at $\lambda_j$. We note that for $p > 1$ and $m > 1$ it is not possible to give representations $A$, $B$ and $C$ for the system as in (2.4) with $B$ being a constant vector. Hence, the residues now depend on both $B$ and $C$. Consider a representation where $A$ is diagonalized, i.e., $A = \mathrm{diag}[\lambda_1,\ldots,\lambda_n]$. By definition of $H$, each component $H_{ki}$ is given by

$$H_{ki}(s) = \sum_{j=1}^{n}\frac{C^k_j B^i_j}{s-\lambda_j}$$

and comparison of coefficients yields that the residues satisfy

$$\varphi^{ki}_j = C^k_j B^i_j, \qquad i = 1,\ldots,m,\ k = 1,\ldots,p,\ j = 1,\ldots,n. \qquad (2.12)$$

2.2.2. Multiple poles. Here, the entries $H_{ki}$ of $H$ have the following representations:

$$H_{ki}(s) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{\varphi^{ki}_{jl}}{(s-\lambda_j)^l}, \qquad \sum_{j=1}^{N} n_j = n, \qquad (2.13)$$

with $\varphi^{ki}_{jl}$ being the coefficients in the principal part of the Laurent series of $H_{ki}$ around the poles $\lambda_j$ of order $n_j$, $j = 1,\ldots,N$. For a better understanding of the residues we consider a Jordan representation of $A$, i.e., $A = \mathrm{diag}[J_1,\ldots,J_N]$ (cf. the SISO case).
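To illustrate the residue formula (2.12) from the simple-pole case above numerically (an addition, not from the report), one can take a diagonal $A$ with made-up $B$ and $C$ and compare the entries of $C^k(sI-A)^{-1}B^i$ with $\sum_j C^k_j B^i_j/(s-\lambda_j)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([-1.0, -2.0 + 1.0j, -2.0 - 1.0j])      # made-up poles
n, m, p = 3, 2, 2
A = np.diag(lam)
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
C = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))

s = 0.9 + 0.4j                                         # arbitrary test point
H = C @ np.linalg.solve(s * np.eye(n) - A, B)          # H(s) = C (sI - A)^{-1} B

# Residues phi^{ki}_j = C[k, j] * B[j, i], cf. (2.12)
H_pf = np.array([[np.sum(C[k, :] * B[:, i] / (s - lam)) for i in range(m)]
                 for k in range(p)])
print(np.max(np.abs(H - H_pf)))                        # ~ machine precision
```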

We will first investigate the case of one pole $\lambda_1$ of order $n$. Then, we have $A = J_1$ and (2.8) leads to a transfer function with components

$$H_{ki}(s) = \sum_{l=1}^{n} C^k_l \sum_{q=l}^{n} \frac{B^i_q}{(s-\lambda_1)^{q-l+1}} = \sum_{l=1}^{n} \frac{\sum_{q=1}^{n-l+1} C^k_q B^i_{q+l-1}}{(s-\lambda_1)^{l}}. \qquad (2.14)$$

For $N > 1$ poles, we partition $B = [\tilde B_1^T,\ldots,\tilde B_N^T]^T$ with matrices $\tilde B_j \in \mathbb{C}^{n_j,m}$ and $C = [\tilde C_1,\ldots,\tilde C_N]$ with matrices $\tilde C_j \in \mathbb{C}^{p,n_j}$ for $j = 1,\ldots,N$. Due to

$$H(s) = \sum_{j=1}^{N}\tilde C_j(sI_{n_j} - J_j)^{-1}\tilde B_j,$$

equation (2.14) implies that each component of $H$ is given by

$$H_{ki}(s) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{\sum_{q=1}^{n_j-l+1}(\tilde C_j)^k_q(\tilde B_j)^i_{q+l-1}}{(s-\lambda_j)^{l}}.$$

Hence, we obtain the following representation for the principal coefficients:

$$\varphi^{ki}_{jl} = \sum_{q=1}^{n_j-l+1}(\tilde C_j)^k_q(\tilde B_j)^i_{q+l-1}. \qquad (2.15)$$

3. The H2-norm. We will now give explicit formulas for computing the $H_2$-norm of a given system $H$. Due to (1.9), the $H_2$-norm becomes

$$\|H\|^2_{H_2} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{trace}\big(H^*(-iw)H(iw)\big)\,dw = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\mathrm{trace}\big(H^*(-z)H(z)\big)\,dz. \qquad (3.1)$$

3.1. SISO. As in this case $H$ is scalar, (3.1) simplifies to

$$\|H\|^2_{H_2} = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty}H^*(-z)H(z)\,dz. \qquad (3.2)$$

We will calculate this complex integral via the classical Residue Theorem from complex analysis (cf., e.g., [2]), which we state here in a simple version suitable for our purpose.

Theorem 3.1 (Residue Theorem). Let $F:\mathbb{C}\to\mathbb{C}$ be a complex meromorphic function and $\gamma$ be a Jordan curve, i.e., a simple closed curve, in $\mathbb{C}$ such that $F$ has no poles on $\gamma$. Then, the following holds:

$$\frac{1}{2\pi i}\oint_{\gamma}F(z)\,dz = \sum_{\lambda\ \mathrm{pole\ of}\ F\ \mathrm{inside}\ \gamma}\mathrm{Res}(F,\lambda). \qquad (3.3)$$

Here and in the following, a point is said to lie inside a curve if its winding number is equal to one. For example, each point $z$ with $|z| < 1$ is inside the anticlockwise parametrized unit circle, i.e., $\gamma(t) := e^{2\pi i t}$, $t \in [0,1]$. We note that the sum in (3.3) is finite as the function $F$ is meromorphic. The following result will be helpful for the computation of formula (3.2).

Proposition 3.2. Let $F:\mathbb{C}\to\mathbb{C}$ be a rational function of order $o \leq -2$, i.e.,

$$F(z) = \frac{\sum_{k=0}^{N}\alpha_k z^k}{\sum_{k=0}^{M}\beta_k z^k}, \qquad \alpha_N,\ \beta_M \neq 0, \qquad o := N - M \leq -2,$$

with no poles on the imaginary axis. Then the following holds:

$$\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}F(z)\,dz = \sum_{\lambda\ \mathrm{pole\ of}\ F\ \mathrm{in\ LHP}}\mathrm{Res}(F,\lambda).$$

Proof: Consider for a fixed $R > 0$ the Jordan curve $\gamma$ defined by

$$\gamma(t) := \left\{\begin{array}{ll}\gamma_1(t), & t\in[0,1],\\ \gamma_2(t-1), & t\in[1,2],\end{array}\right. \qquad \gamma_1(t) := -iR + 2iRt, \quad \gamma_2(t) := iRe^{\pi i t}, \quad t\in[0,1],$$

as shown in Figure 3.1.

[Fig. 3.1. The curves $\gamma_1$ (dashed line), the segment from $-iR$ to $iR$ on the imaginary axis, and $\gamma_2$ (solid line), the semicircle of radius $R$ through the left half plane.]

Then, we have

$$\int_{-iR}^{iR}F(z)\,dz = \int_{\gamma_1}F(z)\,dz = \int_{\gamma}F(z)\,dz - \int_{\gamma_2}F(z)\,dz. \qquad (3.4)$$

The first integral can be computed via the Residue Theorem: for sufficiently large $R$,

$$\frac{1}{2\pi i}\int_{\gamma}F(z)\,dz = \sum_{\lambda\ \mathrm{pole\ of}\ F\ \mathrm{inside}\ \gamma}\mathrm{Res}(F,\lambda) = \sum_{\lambda\ \mathrm{pole\ of}\ F\ \mathrm{in\ LHP}}\mathrm{Res}(F,\lambda).$$

The second integral in (3.4) is bounded by

$$\left|\int_{\gamma_2}F(z)\,dz\right| \leq \pi R\,\max_{z\in\gamma_2}|F(z)| \leq \pi R\,\max_{|z|=R}|F(z)|.$$

For sufficiently large $R$, there exists a constant $c$ such that $|F(z)| \leq c\,|z|^{N-M}$ holds for $|z| = R$, which implies

$$\left|\int_{\gamma_2}F(z)\,dz\right| \leq c\pi R^{N-M+1} \leq \frac{c\pi}{R} \longrightarrow 0 \quad \text{as } R\to\infty.$$

This completes the proof.

Due to (2.1), $H^*(-z)H(z)$ is a complex rational function of order $o \leq -2$. Since for a stable system $H$ the function $H^*(-z)$ has no poles in the LHP, the function $H^*(-z)H(z)$ has the same poles in the LHP as $H(z)$, and the $H_2$-norm becomes

$$\|H\|^2_{H_2} = \sum_{\lambda\ \mathrm{pole\ of}\ H(z)\ \mathrm{in\ LHP}}\mathrm{Res}\big(H^*(-z)H(z),\ z=\lambda\big). \qquad (3.5)$$

3.1.1. Simple poles. The following result is well-known in complex analysis and can be found, e.g., in [2].

Proposition 3.3. Let $F$ be a meromorphic complex function and $\lambda$ be a simple pole of $F$ with residue $\varphi$. Let furthermore $G$ be a complex function which is holomorphic in $\lambda$. Then, we have

$$\mathrm{Res}(FG,\lambda) = \mathrm{Res}(F,\lambda)\,G(\lambda) = \varphi\,G(\lambda).$$

This brings us to the main result of this paragraph.

Proposition 3.4. The $H_2$-norm of a stable system $H$ with simple poles $\lambda_j$ and corresponding residues $\varphi_j$, $j = 1,\ldots,n$, is given by

$$\|H\|^2_{H_2} = \sum_{j=1}^{n}\varphi_j\,H^*(-\lambda_j) = \sum_{j=1}^{n}\bar\varphi_j\,H(-\bar\lambda_j). \qquad (3.6)$$

If $H$ is real, we have

$$\|H\|^2_{H_2} = \sum_{j=1}^{n}\varphi_j\,H(-\lambda_j). \qquad (3.7)$$

Proof: Using equation (3.5), we directly obtain (3.6), where the second equality arises from the fact that $\overline{\varphi_j H^*(-\lambda_j)} = \bar\varphi_j H(-\bar\lambda_j)$ holds and the norm is real. Furthermore, Proposition 2.1 yields (3.7).

3.1.2. Multiple poles. Due to (3.5), we have to calculate residues of $H^*(-z)H(z)$ in $\lambda_j$, $j = 1,\ldots,N$, which are now poles of order $n_j$, respectively. We can use the following well-known result, cf. [2].

Proposition 3.5. Let $F$ be a meromorphic complex function and $\lambda$ be a pole of order $k$ of $F$. Then, the residue of the function $F$ in $\lambda$ is given by

$$\mathrm{Res}(F,\lambda) = \frac{1}{(k-1)!}\lim_{z\to\lambda}\big((z-\lambda)^k F(z)\big)^{(k-1)}, \qquad (3.8)$$

where the superscript $(k-1)$ denotes the $(k-1)$-th derivative with respect to $z$. In the following, we will calculate the right hand side of equation (3.8) for $F = H$.

Lemma 3.6. For $j = 1,\ldots,N$ and $l = 0,\ldots,n_j-1$ we have

$$\lim_{z\to\lambda_j}\big((z-\lambda_j)^{n_j}H(z)\big)^{(n_j-l-1)} = \varphi_{j,l+1}\,(n_j-l-1)!\ . \qquad (3.9)$$

Proof: The term $\big((z-\lambda_j)^{n_j}H(z)\big)^{(n_j-l-1)}$ can be written as

$$\Big(\sum_{i=1,\,i\neq j}^{N}\sum_{k=1}^{n_i}\varphi_{ik}\,\frac{(z-\lambda_j)^{n_j}}{(z-\lambda_i)^{k}}\Big)^{(n_j-l-1)} + \Big(\sum_{k=1}^{n_j}\varphi_{jk}\,(z-\lambda_j)^{n_j-k}\Big)^{(n_j-l-1)}.$$

Here, the first summand vanishes for $z\to\lambda_j$ since

$$\Big(\frac{(z-\lambda_j)^{n_j}}{(z-\lambda_i)^{k}}\Big)^{(n_j-l-1)}\Big|_{z=\lambda_j} = 0$$

holds, whereas for the second summand we obtain

$$\sum_{k=1}^{n_j}\varphi_{jk}\,\big((z-\lambda_j)^{n_j-k}\big)^{(n_j-l-1)}\Big|_{z=\lambda_j} = \varphi_{j,l+1}\,(n_j-l-1)!\ .$$
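As a numerical sanity check of the simple-pole formula (3.6) (added here, not part of the report), one can compare the residue-based value with the Gramian-based value of $\|H\|^2_{H_2}$ for a small made-up system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Made-up stable SISO system with simple (complex conjugate) poles.
lam = np.array([-1.0 + 2.0j, -1.0 - 2.0j, -3.0])   # poles
phi = np.array([0.5 + 0.5j, 0.5 - 0.5j, 1.0])      # residues
A, B, C = np.diag(lam), np.ones((3, 1)), phi.reshape(1, 3)

def H(s):
    return np.sum(phi / (s - lam))

# Formula (3.6): sum_j conj(phi_j) * H(-conj(lambda_j))
h2_res = np.sum(np.conj(phi) * np.array([H(-np.conj(l)) for l in lam]))

# Gramian-based reference value: trace(C P C^*) with A P + P A^* = -B B^*.
P = solve_continuous_lyapunov(A, -B @ B.conj().T)
h2_gram = np.trace(C @ P @ C.conj().T)
print(h2_res.real, h2_gram.real)                   # the two values agree
```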

Now we come to the main result of this paragraph.

Proposition 3.7. The $H_2$-norm of a stable system $H$ with poles $\lambda_j$ of order $n_j$ and corresponding principal coefficients $\varphi_{jl}$, $j = 1,\ldots,N$, $l = 1,\ldots,n_j$, is given by

$$\|H\|^2_{H_2} = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\varphi_{jl}\,(H^*)^{(l-1)}(-\lambda_j) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\bar\varphi_{jl}\,H^{(l-1)}(-\bar\lambda_j). \qquad (3.10)$$

If $H$ is real, we have

$$\|H\|^2_{H_2} = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\varphi_{jl}\,H^{(l-1)}(-\lambda_j). \qquad (3.11)$$

Proof: Using Proposition 3.5 and Lemma 3.6, we obtain

$$\mathrm{Res}\big(H^*(-z)H(z),\lambda_j\big) = \frac{1}{(n_j-1)!}\lim_{z\to\lambda_j}\Big((z-\lambda_j)^{n_j}H^*(-z)H(z)\Big)^{(n_j-1)}$$
$$= \frac{1}{(n_j-1)!}\lim_{z\to\lambda_j}\sum_{l=0}^{n_j-1}\binom{n_j-1}{l}\big((z-\lambda_j)^{n_j}H(z)\big)^{(n_j-1-l)}\big(H^*(-z)\big)^{(l)}$$
$$= \frac{1}{(n_j-1)!}\sum_{l=0}^{n_j-1}\binom{n_j-1}{l}\Big(\lim_{z\to\lambda_j}\big((z-\lambda_j)^{n_j}H(z)\big)^{(n_j-1-l)}\Big)(-1)^{l}(H^*)^{(l)}(-\lambda_j)$$
$$= \frac{1}{(n_j-1)!}\sum_{l=0}^{n_j-1}\binom{n_j-1}{l}\varphi_{j,l+1}\,(n_j-l-1)!\,(-1)^{l}(H^*)^{(l)}(-\lambda_j) = \sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\varphi_{jl}\,(H^*)^{(l-1)}(-\lambda_j).$$

Equation (3.5) directly yields (3.10), and (3.11) can be obtained from Proposition 2.2.

3.2. MIMO. The $H_2$-norm is given by

$$\|H\|^2_{H_2} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{trace}\big(H(iw)^*H(iw)\big)\,dw = \frac{1}{2\pi}\sum_{i,k}\int_{-\infty}^{\infty}\overline{H_{ki}(iw)}\,H_{ki}(iw)\,dw = \sum_{i,k}\|H_{ki}\|^2_{H_2} \qquad (3.12)$$

and hence, the results from the previous section about SISO systems can be used to compute the $H_2$-norm of a MIMO system.

3.2.1. Simple poles. Due to (3.6), for each $H_{ki}$ the norm is given by

$$\|H_{ki}\|^2_{H_2} = \sum_{j=1}^{n}\varphi^{ki}_j\,H^*_{ki}(-\lambda_j)$$

which, together with (3.12), implies

$$\|H\|^2_{H_2} = \sum_{i,k}\sum_{j=1}^{n}\varphi^{ki}_j\,H^*_{ki}(-\lambda_j) = \sum_{i,k}\sum_{j=1}^{n}\overline{\varphi^{ki}_j}\,H_{ki}(-\bar\lambda_j). \qquad (3.13)$$

3.2.2. Multiple poles. Due to (3.10) and (3.12), the norm is

$$\|H\|^2_{H_2} = \sum_{i,k}\sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\overline{\varphi^{ki}_{jl}}\,H^{(l-1)}_{ki}(-\bar\lambda_j). \qquad (3.14)$$

4. Necessary optimality conditions.

4.1. SISO.

4.1.1. Simple poles. As seen in Section 2.1.1, for simple poles it is possible to find a representation $A$, $B$ and $C$ of a system such that $B = [1\ \cdots\ 1]^T$ holds and the only free parameters are the poles of $\hat\Sigma$ on the diagonal of $\hat A$ and the residues in $\hat C$. Hence, minimization problem (1.7) can be seen as

$$\min_{v}\ J(v) = \|H - \hat H\|^2_{H_2} \qquad (4.1)$$

with the optimization variable

$$v = (\hat\lambda_1,\ldots,\hat\lambda_r,\hat\varphi_1,\ldots,\hat\varphi_r)^T \in \mathbb{C}^{2r}. \qquad (4.2)$$

If we assume that the poles of the original system are different from the poles of the reduced system, the error system $H - \hat H$ has $n + r$ simple poles $\lambda_1,\ldots,\lambda_n,\hat\lambda_1,\ldots,\hat\lambda_r$ with corresponding residues $\varphi_1,\ldots,\varphi_n,-\hat\varphi_1,\ldots,-\hat\varphi_r$. Formula (3.6) implies that the cost functional in (4.1) can be written as

$$J(v) = \sum_{j=1}^{n}\bar\varphi_j\big(H(-\bar\lambda_j) - \hat H(-\bar\lambda_j)\big) + \sum_{j=1}^{r}\bar{\hat\varphi}_j\big(\hat H(-\bar{\hat\lambda}_j) - H(-\bar{\hat\lambda}_j)\big),$$

which yields a nonsmooth optimization problem with the complex optimization vector $v \in \mathbb{C}^{2r}$. We consider three different types of optimization problems.

Case 1:

$$\min_{v}\ J(v) \quad \text{subject to} \quad v \in \mathbb{R}^{2r}, \qquad (4.3)$$

i.e., the optimization vector $v$ shall be real. This is a reasonable task if the original system $H$ is real or even has real poles and residues. In this case, due to (3.7), all conjugation bars can be deleted in $J$ and we obtain

$$J(v) = \sum_{j=1}^{n}\varphi_j\big(H(-\lambda_j) - \hat H(-\lambda_j)\big) + \sum_{j=1}^{r}\hat\varphi_j\big(\hat H(-\hat\lambda_j) - H(-\hat\lambda_j)\big). \qquad (4.4)$$

Furthermore, instead of adding the constraint $v \in \mathbb{R}^{2r}$, one can also directly see the problem as a real unconstrained one. Therefore, we obtain a smooth unconstrained optimization problem with the real optimization vector $v \in \mathbb{R}^{2r}$.

Theorem 4.1. Necessary optimality conditions for problem (4.3) are given by

$$H(-\hat\lambda_j) = \hat H(-\hat\lambda_j), \qquad H'(-\hat\lambda_j) = \hat H'(-\hat\lambda_j), \qquad j = 1,\ldots,r, \qquad (4.5)$$

i.e., interpolation of $\hat H$ and its first derivative with $H$ at $-\hat\lambda_j$, $j = 1,\ldots,r$. This result together with a proof was already given by Meier/Luenberger [4] and Gugercin/Antoulas/Beattie [3]. However, we note that in both references the authors did not mention that their proof is only applicable in case 1, i.e., for real poles and residues, as for complex $v$ one has to consider the cost functional $J$ in (4.1), which is not differentiable w.r.t. the optimization variable. We furthermore note that, nevertheless, in both publications the result is claimed to be correct and used in examples also for nonreal poles and residues. We will now investigate in which cases the result is still true for a complex $v$ and when it should be slightly modified.

Case 2:

$$\min_{v}\ J(v), \qquad (4.6)$$

i.e., there shall be no constraints. The typical approach for such nonsmooth optimization problems with a complex variable $v \in \mathbb{C}^{2r} \simeq \mathbb{R}^{4r}$ is to consider as optimization vector the real variable $\tilde v$ consisting of the real and imaginary parts of the components of $v$:

$$\tilde v = (\mathrm{Re}\,\hat\lambda_1,\mathrm{Im}\,\hat\lambda_1,\ldots,\mathrm{Re}\,\hat\lambda_r,\mathrm{Im}\,\hat\lambda_r,\mathrm{Re}\,\hat\varphi_1,\mathrm{Im}\,\hat\varphi_1,\ldots,\mathrm{Re}\,\hat\varphi_r,\mathrm{Im}\,\hat\varphi_r)^T \in \mathbb{R}^{4r}. \qquad (4.7)$$

Then, $J$ is smooth w.r.t. $\tilde v$ and hence, we obtain a smooth unconstrained optimization problem with optimization variable $\tilde v \in \mathbb{R}^{4r}$. For the statement of the main result in case 2, the following lemma will be helpful.

Lemma 4.2. (i) For $\mu = 1,\ldots,r$ we have:

$$\frac{\partial H(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = \frac{\partial H(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\varphi_\mu} = \frac{\partial H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = \frac{\partial H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\varphi_\mu} = 0,$$
$$\frac{\partial\hat H(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = -i\,\frac{\partial\hat H(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\varphi_\mu} = -\frac{1}{\bar\lambda_j+\hat\lambda_\mu}, \qquad \frac{\partial\hat H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = -i\,\frac{\partial\hat H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\varphi_\mu} = -\frac{1}{\bar{\hat\lambda}_j+\hat\lambda_\mu}.$$

(ii) For $\mu = 1,\ldots,r$ we have:

$$\frac{\partial H(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = \frac{\partial H(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = 0, \qquad \frac{\partial\hat H(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = -i\,\frac{\partial\hat H(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \frac{\hat\varphi_\mu}{(\bar\lambda_j+\hat\lambda_\mu)^2},$$
$$\frac{\partial H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = i\,\frac{\partial H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}0, & j\neq\mu,\\ -H'(-\bar{\hat\lambda}_j), & j=\mu,\end{array}\right.$$
$$\frac{\partial\hat H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}\dfrac{\hat\varphi_\mu}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2}, & j\neq\mu,\\[2mm] -\hat H'(-\bar{\hat\lambda}_j)+\dfrac{\hat\varphi_\mu}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2}, & j=\mu,\end{array}\right. \qquad \frac{\partial\hat H(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}\dfrac{i\,\hat\varphi_\mu}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2}, & j\neq\mu,\\[2mm] i\,\hat H'(-\bar{\hat\lambda}_j)+\dfrac{i\,\hat\varphi_\mu}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2}, & j=\mu.\end{array}\right.$$

Theorem 4.3. Necessary optimality conditions for problem (4.6) are given by

$$H(-\bar{\hat\lambda}_j) = \hat H(-\bar{\hat\lambda}_j), \qquad H'(-\bar{\hat\lambda}_j) = \hat H'(-\bar{\hat\lambda}_j), \qquad j = 1,\ldots,r, \qquad (4.8)$$

i.e., interpolation of $\hat H$ and its first derivative with $H$ at $-\bar{\hat\lambda}_j$, $j = 1,\ldots,r$.
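A quick numerical illustration of conditions (4.8) (an addition, not from the report): given an original system and a candidate reduced system with simple poles, one can evaluate both transfer functions and their derivatives at the mirror images $-\bar{\hat\lambda}_j$ and inspect the interpolation defects; all data below is hypothetical:

```python
import numpy as np

# Original system (made-up simple poles / residues) and a candidate reduced one.
lam  = np.array([-1.0, -2.0 + 1.0j, -2.0 - 1.0j, -5.0])
phi  = np.array([ 1.0,  0.3 - 0.2j,  0.3 + 0.2j,  0.7])
lamr = np.array([-1.1, -4.8])          # reduced poles (hypothetical iterate)
phir = np.array([ 1.05, 0.75])         # reduced residues (hypothetical iterate)

def tf(s, poles, res):                 # H(s) = sum_j res_j / (s - poles_j)
    return np.sum(res / (s - poles))

def dtf(s, poles, res):                # H'(s) = -sum_j res_j / (s - poles_j)^2
    return -np.sum(res / (s - poles) ** 2)

# Interpolation defects at the mirror images -conj(reduced poles), cf. (4.8).
for lj in lamr:
    s = -np.conj(lj)
    print(abs(tf(s, lam, phi) - tf(s, lamr, phir)),
          abs(dtf(s, lam, phi) - dtf(s, lamr, phir)))
# At an H2-optimal reduced model both defects vanish; for this arbitrary
# iterate they are generally nonzero.
```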

Note that $-\bar{\hat\lambda}_j$, $j = 1,\ldots,r$, are called mirror images.

Proof: In view of Lemma 4.2, differentiating $J$ with respect to $\mathrm{Re}\,\hat\varphi_\mu$ yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = \sum_{j=1}^{n}\frac{\bar\varphi_j}{\bar\lambda_j+\hat\lambda_\mu} - \sum_{j=1}^{r}\frac{\bar{\hat\varphi}_j}{\bar{\hat\lambda}_j+\hat\lambda_\mu} + \hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu)$$
$$= -\overline{H(-\bar{\hat\lambda}_\mu)} + \overline{\hat H(-\bar{\hat\lambda}_\mu)} + \hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu) = 2\,\mathrm{Re}\big(\hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu)\big). \qquad (4.9)$$

Analogously, we obtain

$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat\varphi_\mu} = -i\,\overline{H(-\bar{\hat\lambda}_\mu)} + i\,\overline{\hat H(-\bar{\hat\lambda}_\mu)} - i\,\hat H(-\bar{\hat\lambda}_\mu) + i\,H(-\bar{\hat\lambda}_\mu) = 2\,\mathrm{Im}\big(\hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu)\big). \qquad (4.10)$$

In view of equations (4.9) and (4.10), the necessary optimality condition $\partial J/\partial\tilde v = 0$ implies the first condition in (4.8). Differentiation of $J$ with respect to the real, resp., imaginary part of $\hat\lambda_\mu$ yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = -\hat\varphi_\mu\sum_{j=1}^{n}\frac{\bar\varphi_j}{(\bar\lambda_j+\hat\lambda_\mu)^2} + \hat\varphi_\mu\sum_{j=1}^{r}\frac{\bar{\hat\varphi}_j}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2} + \bar{\hat\varphi}_\mu\big(H'(-\bar{\hat\lambda}_\mu) - \hat H'(-\bar{\hat\lambda}_\mu)\big) = 2\,\mathrm{Re}\Big(\bar{\hat\varphi}_\mu\big(H'(-\bar{\hat\lambda}_\mu) - \hat H'(-\bar{\hat\lambda}_\mu)\big)\Big),$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = -i\,\hat\varphi_\mu\sum_{j=1}^{n}\frac{\bar\varphi_j}{(\bar\lambda_j+\hat\lambda_\mu)^2} + i\,\hat\varphi_\mu\sum_{j=1}^{r}\frac{\bar{\hat\varphi}_j}{(\bar{\hat\lambda}_j+\hat\lambda_\mu)^2} + i\,\bar{\hat\varphi}_\mu\big(\hat H'(-\bar{\hat\lambda}_\mu) - H'(-\bar{\hat\lambda}_\mu)\big) = 2\,\mathrm{Im}\Big(\bar{\hat\varphi}_\mu\big(H'(-\bar{\hat\lambda}_\mu) - \hat H'(-\bar{\hat\lambda}_\mu)\big)\Big). \qquad (4.11)$$

In view of (4.11), $\partial J/\partial\tilde v = 0$ implies the second condition in (4.8), which completes the proof.

Remark 4.4. If the resulting system in problem (4.6) for case 2 turns out to be real, the corresponding optimality conditions (4.8) are equivalent to conditions (4.5) for case 1.

Case 3:

$$\min_{v}\ J(v) \quad \text{subject to} \quad \hat H \text{ is real.} \qquad (4.12)$$

This case is reasonable if already the original system $H$ is real, which we will assume in the following. Now, $v$ can be complex but all poles and residues appear in conjugate pairs as shown in Proposition 2.1. For simplicity of notation, we assume that all

components of $v$ are nonreal and hence, that $r = 2R$, $R \in \mathbb{N}$, is even. For all real components one may use the results of case 1. Then, (4.12) is equivalent to

$$\min_{v}\ J(\tilde v) \quad \text{subject to} \quad \hat\lambda_j = \bar{\hat\lambda}_{R+j},\ \ \hat\varphi_j = \bar{\hat\varphi}_{R+j},\ \ j = 1,\ldots,R. \qquad (4.13)$$

This can be seen as a constrained smooth optimization problem using again $\tilde v$ from (4.7) as optimization vector.

Theorem 4.5. Necessary optimality conditions for problem (4.12) are given by

$$H(-\bar{\hat\lambda}_j) = \hat H(-\bar{\hat\lambda}_j), \qquad H'(-\bar{\hat\lambda}_j) = \hat H'(-\bar{\hat\lambda}_j), \qquad j = 1,\ldots,r.$$

Proof: The Lagrangian function in normal form for (4.13) is given by

$$L(\psi,\chi,v) = J(v) + \sum_{j=1}^{R}\mathrm{Re}\,\psi_j\big(\mathrm{Re}\,\hat\lambda_j - \mathrm{Re}\,\hat\lambda_{R+j}\big) + \sum_{j=1}^{R}\mathrm{Im}\,\psi_j\big(\mathrm{Im}\,\hat\lambda_j + \mathrm{Im}\,\hat\lambda_{R+j}\big)$$
$$+ \sum_{j=1}^{R}\mathrm{Re}\,\chi_j\big(\mathrm{Re}\,\hat\varphi_j - \mathrm{Re}\,\hat\varphi_{R+j}\big) + \sum_{j=1}^{R}\mathrm{Im}\,\chi_j\big(\mathrm{Im}\,\hat\varphi_j + \mathrm{Im}\,\hat\varphi_{R+j}\big)$$

with the complex Lagrange multipliers $\psi = (\psi_1,\ldots,\psi_R)$, $\chi = (\chi_1,\ldots,\chi_R) \in \mathbb{C}^R$. In view of (4.9), for $\mu = 1,\ldots,R$ we obtain

$$\frac{\partial L(\psi,\chi,v)}{\partial\,\mathrm{Re}\,\hat\varphi_\mu} = 2\,\mathrm{Re}\big(\hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu)\big) + \mathrm{Re}\,\chi_\mu \qquad (4.14)$$

and

$$\frac{\partial L(\psi,\chi,v)}{\partial\,\mathrm{Re}\,\hat\varphi_{R+\mu}} = 2\,\mathrm{Re}\big(\hat H(-\bar{\hat\lambda}_{R+\mu}) - H(-\bar{\hat\lambda}_{R+\mu})\big) - \mathrm{Re}\,\chi_\mu = 2\,\mathrm{Re}\big(\hat H(-\bar{\hat\lambda}_\mu) - H(-\bar{\hat\lambda}_\mu)\big) - \mathrm{Re}\,\chi_\mu \qquad (4.15)$$

since $\hat\lambda_j = \bar{\hat\lambda}_{R+j}$ holds and $H$ and $\hat H$ are real. Necessary optimality conditions for (4.13) imply that $\partial L(\psi,\chi,v)/\partial\tilde v$ vanishes. Setting equations (4.14) and (4.15) to zero and subtracting them, we obtain $\mathrm{Re}\,\chi_\mu = 0$ for $\mu = 1,\ldots,R$. Analogously, one can see that $\mathrm{Im}\,\chi_\mu = \mathrm{Re}\,\psi_\mu = \mathrm{Im}\,\psi_\mu = 0$ holds, which means that all Lagrange multipliers vanish and we obtain the same necessary conditions as in (4.8), resp. (4.5), since $H$ and $\hat H$ are real.

4.1.2. Multiple poles. Due to (2.9), for any system $H$ it is always possible to find a representation $A$, $B$ and $C$ with the same matrix $B$ and only the poles and the principal coefficients as free variables. Hence, the optimization variable for problem (1.7) in the case of multiple poles is given by

$$v = (\hat\lambda_1,\ldots,\hat\lambda_R,\hat\varphi_{1,1},\ldots,\hat\varphi_{1,r_1},\ldots,\hat\varphi_{R,1},\ldots,\hat\varphi_{R,r_R})^T \in \mathbb{C}^{R+r}, \qquad (4.16)$$

where $R$ is the given and fixed number of poles in the reduced system, each of given and fixed order $r_j$, with corresponding principal coefficients $\hat\varphi_{jl}$, $j = 1,\ldots,R$, $l = 1,\ldots,r_j$, and $r_1 + \ldots + r_R = r$. Assuming again that all poles of the original system are different from the poles of the reduced system, the error system $H - \hat H$ has $N + R$ poles $\lambda_1,\ldots,\lambda_N,\hat\lambda_1,\ldots,\hat\lambda_R$ with corresponding coefficients

$\varphi_{11},\ldots,\varphi_{N,n_N},\hat\varphi_{11},\ldots,\hat\varphi_{R,r_R}$. Due to (3.10), we have the following cost functional:

$$J(v) = \sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\bar\varphi_{jl}\big(H^{(l-1)}(-\bar\lambda_j) - \hat H^{(l-1)}(-\bar\lambda_j)\big) + \sum_{j=1}^{R}\sum_{l=1}^{r_j}\frac{(-1)^{l-1}}{(l-1)!}\,\bar{\hat\varphi}_{jl}\big(\hat H^{(l-1)}(-\bar{\hat\lambda}_j) - H^{(l-1)}(-\bar{\hat\lambda}_j)\big).$$

As for simple poles, we will consider the following three cases.

Case 1: $\min_v J(v)$ subject to $v \in \mathbb{R}^{R+r}$, (4.17)

Case 2: $\min_v J(v)$, (4.18)

Case 3: $\min_v J(v)$ subject to $\hat H$ is real. (4.19)

We begin with the general case 2 and consider again the real optimization variable $\tilde v \in \mathbb{R}^{2(R+r)}$ consisting of the real and imaginary parts of the components of $v$. The following result can be proved analogously to Lemma 4.2 in the case of simple poles.

Lemma 4.6. (i) For $\mu = 1,\ldots,R$, $\nu = 1,\ldots,r_\mu$ we have:

$$\frac{\partial H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\varphi_{\mu\nu}} = \frac{\partial H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\varphi_{\mu\nu}} = \frac{\partial H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\varphi_{\mu\nu}} = \frac{\partial H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\varphi_{\mu\nu}} = 0,$$
$$\frac{\partial\hat H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\varphi_{\mu\nu}} = -i\,\frac{\partial\hat H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\varphi_{\mu\nu}} = \frac{(-1)^{\nu}(\nu+l-1)!}{(\nu-1)!\,(\bar\lambda_j+\hat\lambda_\mu)^{\nu+l}}, \qquad \frac{\partial\hat H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\varphi_{\mu\nu}} = -i\,\frac{\partial\hat H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\varphi_{\mu\nu}} = \frac{(-1)^{\nu}(\nu+l-1)!}{(\nu-1)!\,(\bar{\hat\lambda}_j+\hat\lambda_\mu)^{\nu+l}}.$$

(ii) For $\mu = 1,\ldots,R$ we have:

$$\frac{\partial H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = \frac{\partial H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = 0, \qquad \frac{\partial\hat H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = -i\,\frac{\partial\hat H^{(l)}(-\bar\lambda_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \sum_{k=1}^{r_\mu}\frac{(-1)^{k+1}(k+l)!\,\hat\varphi_{\mu k}}{(k-1)!\,(\bar\lambda_j+\hat\lambda_\mu)^{k+l+1}},$$
$$\frac{\partial H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = i\,\frac{\partial H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}0, & j\neq\mu,\\ -H^{(l+1)}(-\bar{\hat\lambda}_j), & j=\mu,\end{array}\right.$$
$$\frac{\partial\hat H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}\displaystyle\sum_{k=1}^{r_\mu}\frac{(-1)^{k+1}(k+l)!\,\hat\varphi_{\mu k}}{(k-1)!\,(\bar{\hat\lambda}_j+\hat\lambda_\mu)^{k+l+1}}, & j\neq\mu,\\[3mm] \displaystyle -\hat H^{(l+1)}(-\bar{\hat\lambda}_j) + \sum_{k=1}^{r_\mu}\frac{(-1)^{k+1}(k+l)!\,\hat\varphi_{\mu k}}{(k-1)!\,(\bar{\hat\lambda}_j+\hat\lambda_\mu)^{k+l+1}}, & j=\mu,\end{array}\right.$$
$$\frac{\partial\hat H^{(l)}(-\bar{\hat\lambda}_j)}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = \left\{\begin{array}{ll}\displaystyle i\sum_{k=1}^{r_\mu}\frac{(-1)^{k+1}(k+l)!\,\hat\varphi_{\mu k}}{(k-1)!\,(\bar{\hat\lambda}_j+\hat\lambda_\mu)^{k+l+1}}, & j\neq\mu,\\[3mm] \displaystyle i\,\hat H^{(l+1)}(-\bar{\hat\lambda}_j) + i\sum_{k=1}^{r_\mu}\frac{(-1)^{k+1}(k+l)!\,\hat\varphi_{\mu k}}{(k-1)!\,(\bar{\hat\lambda}_j+\hat\lambda_\mu)^{k+l+1}}, & j=\mu.\end{array}\right.$$

Theorem 4.7. Necessary optimality conditions for problem (4.18) are given by

$$H^{(l)}(-\bar{\hat\lambda}_j) = \hat H^{(l)}(-\bar{\hat\lambda}_j), \qquad j = 1,\ldots,R,\ \ l = 0,\ldots,r_j, \qquad (4.20)$$

i.e., interpolation of $\hat H$ and its first $r_j$ derivatives with $H$ at $-\bar{\hat\lambda}_j$, $j = 1,\ldots,R$.

Proof: Similarly to the case of simple poles and using Lemma 4.6, differentiating $J$ with respect to $\tilde v$ yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\varphi_{\mu\nu}} = 2\,\frac{(-1)^{\nu-1}}{(\nu-1)!}\,\mathrm{Re}\Big(\hat H^{(\nu-1)}(-\bar{\hat\lambda}_\mu) - H^{(\nu-1)}(-\bar{\hat\lambda}_\mu)\Big), \qquad (4.21)$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat\varphi_{\mu\nu}} = 2\,\frac{(-1)^{\nu-1}}{(\nu-1)!}\,\mathrm{Im}\Big(\hat H^{(\nu-1)}(-\bar{\hat\lambda}_\mu) - H^{(\nu-1)}(-\bar{\hat\lambda}_\mu)\Big) \qquad (4.22)$$

for $\mu = 1,\ldots,R$ and $\nu = 1,\ldots,r_\mu$, and

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = 2\sum_{l=1}^{r_\mu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Re}\Big(\bar{\hat\varphi}_{\mu l}\big(H^{(l)}(-\bar{\hat\lambda}_\mu) - \hat H^{(l)}(-\bar{\hat\lambda}_\mu)\big)\Big),$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = 2\sum_{l=1}^{r_\mu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Im}\Big(\bar{\hat\varphi}_{\mu l}\big(H^{(l)}(-\bar{\hat\lambda}_\mu) - \hat H^{(l)}(-\bar{\hat\lambda}_\mu)\big)\Big) \qquad (4.23)$$

for $\mu = 1,\ldots,R$. Equations (4.21) and (4.22) give interpolation of $\hat H$ and its first $r_j - 1$ derivatives with $H$, which, together with (4.23), implies interpolation of the $r_j$-th derivatives at each $-\bar{\hat\lambda}_j$, $j = 1,\ldots,R$.

For case 1, one can use the previous proofs. We obtain all derivative formulas w.r.t. $v$ by omitting the $\mathrm{Re}$ notation in all preceding formulas for derivatives w.r.t. the real parts of the components of $\tilde v$. This directly yields the optimality conditions (4.20). Using Proposition 2.2, we obtain the following result.

Theorem 4.8. Necessary optimality conditions for problem (4.17) are given by

$$H^{(l)}(-\hat\lambda_j) = \hat H^{(l)}(-\hat\lambda_j), \qquad j = 1,\ldots,R,\ \ l = 0,\ldots,r_j,$$

i.e., interpolation of $\hat H$ and its first $r_j$ derivatives with $H$ at $-\hat\lambda_j$, $j = 1,\ldots,R$.

Case 3 can be handled as for simple poles. Again, we obtain that the Lagrange multipliers of problem (4.19) vanish and we obtain the following result.

Theorem 4.9. Necessary optimality conditions for problem (4.19) are given by

$$H^{(l)}(-\bar{\hat\lambda}_j) = \hat H^{(l)}(-\bar{\hat\lambda}_j), \qquad j = 1,\ldots,R,\ \ l = 0,\ldots,r_j,$$

i.e., interpolation of $\hat H$ and its first $r_j$ derivatives with $H$ at $-\bar{\hat\lambda}_j$, $j = 1,\ldots,R$.

4.2. MIMO. For MIMO systems, we will only consider the general case (i.e., case 2 for SISO) of optimization problem (1.7) without constraints. Contrary to the SISO case, the residues of a transfer function now depend on both $B$ and $C$ if $A$ is given in eigenvalue decomposition form. Hence, both $\hat B$ and $\hat C$ act as optimization variables in addition to the eigenvalues of $\hat A$.

4.2.1. Simple poles. The optimization vector is now given by

$$v = (\hat\lambda_1,\ldots,\hat\lambda_r,\hat B^1,\ldots,\hat B^m,\hat C^1,\ldots,\hat C^p)^T \in \mathbb{C}^{r+mr+pr}, \qquad (4.24)$$

resp., $\tilde v = (\mathrm{Re}\,v^T,\mathrm{Im}\,v^T)^T \in \mathbb{R}^{2(r+mr+pr)}$. Similarly to the SISO case we consider the error system $H - \hat H$, which has the $n + r$ simple poles $\lambda_1,\ldots,\lambda_n,\hat\lambda_1,\ldots,\hat\lambda_r$ with corresponding residues $\varphi^{ki}_1,\ldots,\varphi^{ki}_n,-\hat\varphi^{ki}_1,\ldots,-\hat\varphi^{ki}_r$ in the $ki$-th component of the error system. In view of (3.13) and (2.12), the optimization problem can be formulated as

$$\min_{v}\ J(\tilde v) = \sum_{i,k}\sum_{j=1}^{n}\overline{\varphi^{ki}_j}\big(H_{ki}(-\bar\lambda_j) - \hat H_{ki}(-\bar\lambda_j)\big) + \sum_{i,k}\sum_{j=1}^{r}\overline{\hat\varphi^{ki}_j}\big(\hat H_{ki}(-\bar{\hat\lambda}_j) - H_{ki}(-\bar{\hat\lambda}_j)\big), \qquad (4.25)$$

which is smooth with respect to $\tilde v$ and unconstrained.

Theorem 4.10. Necessary optimality conditions for problem (4.25) are given by

$$H(-\bar{\hat\lambda}_j)\,\hat B_j^* = \hat H(-\bar{\hat\lambda}_j)\,\hat B_j^*, \qquad \hat C_j^*\,H(-\bar{\hat\lambda}_j) = \hat C_j^*\,\hat H(-\bar{\hat\lambda}_j), \qquad \hat C_j^*\,H'(-\bar{\hat\lambda}_j)\,\hat B_j^* = \hat C_j^*\,\hat H'(-\bar{\hat\lambda}_j)\,\hat B_j^*, \qquad j = 1,\ldots,r, \qquad (4.26)$$

where

$$\hat C_j := j\text{-th column of }\hat C, \qquad \hat B_j := j\text{-th row of }\hat B.$$

In other words, the optimal reduced transfer function $\hat H$ is a left-sided tangential interpolant of the original transfer function $H$ in the directions $\hat C_j$, a right-sided tangential interpolant in the directions $\hat B_j$, and a two-sided tangential interpolant in the directions $\hat C_j$, resp. $\hat B_j$.

Proof: Since $J$ can be written as $J = \sum_{i,k}J_{ki}$ with

$$J_{ki} = \sum_{j=1}^{n}\overline{\varphi^{ki}_j}\big(H_{ki}(-\bar\lambda_j) - \hat H_{ki}(-\bar\lambda_j)\big) + \sum_{j=1}^{r}\overline{\hat\varphi^{ki}_j}\big(\hat H_{ki}(-\bar{\hat\lambda}_j) - H_{ki}(-\bar{\hat\lambda}_j)\big),$$

we can use the derivative formulas from the SISO case for each $J_{ki}$ to calculate the derivatives of $J$. Define the vectors

$$\hat\varphi^{ki} := (\hat\varphi^{ki}_1,\ldots,\hat\varphi^{ki}_r)^T \in \mathbb{C}^r, \qquad \hat\varphi := (\hat\varphi^{11},\ldots,\hat\varphi^{pm}) \in \mathbb{C}^{rpm}.$$

Then, we have

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\varphi} = \Big(\frac{\partial J_{11}}{\partial\,\mathrm{Re}\,\hat\varphi^{11}},\ldots,\frac{\partial J_{pm}}{\partial\,\mathrm{Re}\,\hat\varphi^{pm}}\Big), \qquad \frac{\partial J}{\partial\,\mathrm{Im}\,\hat\varphi} = \Big(\frac{\partial J_{11}}{\partial\,\mathrm{Im}\,\hat\varphi^{11}},\ldots,\frac{\partial J_{pm}}{\partial\,\mathrm{Im}\,\hat\varphi^{pm}}\Big)$$

with entries as in the SISO case given in (4.9) and (4.10). Due to (2.12), the residues satisfy $\hat\varphi^{ki}_j = \hat C^k_j\hat B^i_j$, resp.,

$$\mathrm{Re}\,\hat\varphi^{ki}_j = \mathrm{Re}\,\hat C^k_j\,\mathrm{Re}\,\hat B^i_j - \mathrm{Im}\,\hat C^k_j\,\mathrm{Im}\,\hat B^i_j, \qquad \mathrm{Im}\,\hat\varphi^{ki}_j = \mathrm{Re}\,\hat C^k_j\,\mathrm{Im}\,\hat B^i_j + \mathrm{Im}\,\hat C^k_j\,\mathrm{Re}\,\hat B^i_j.$$

Hence, for $i = 1,\ldots,m$ we obtain

$$\frac{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_j}{\partial\,\mathrm{Re}\,\hat C^\kappa_\mu} = \mathrm{Re}\,\hat B^i_\mu, \qquad \frac{\partial\,\mathrm{Im}\,\hat\varphi^{ki}_j}{\partial\,\mathrm{Re}\,\hat C^\kappa_\mu} = \mathrm{Im}\,\hat B^i_\mu, \qquad \frac{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_j}{\partial\,\mathrm{Im}\,\hat C^\kappa_\mu} = -\mathrm{Im}\,\hat B^i_\mu, \qquad \frac{\partial\,\mathrm{Im}\,\hat\varphi^{ki}_j}{\partial\,\mathrm{Im}\,\hat C^\kappa_\mu} = \mathrm{Re}\,\hat B^i_\mu$$

if $\kappa = k$ and $\mu = j$. For $\kappa \neq k$ or $\mu \neq j$, the derivatives are zero. Then, the chain rule yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat C^\kappa_\mu} = \sum_{i,k}\Big(\frac{\partial J_{ki}}{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_\mu}\frac{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_\mu}{\partial\,\mathrm{Re}\,\hat C^\kappa_\mu} + \frac{\partial J_{ki}}{\partial\,\mathrm{Im}\,\hat\varphi^{ki}_\mu}\frac{\partial\,\mathrm{Im}\,\hat\varphi^{ki}_\mu}{\partial\,\mathrm{Re}\,\hat C^\kappa_\mu}\Big)$$
$$= \sum_{i=1}^{m}2\Big(\mathrm{Re}\big(\hat H_{\kappa i}(-\bar{\hat\lambda}_\mu) - H_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\mathrm{Re}\,\hat B^i_\mu + \mathrm{Im}\big(\hat H_{\kappa i}(-\bar{\hat\lambda}_\mu) - H_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\mathrm{Im}\,\hat B^i_\mu\Big)$$
$$= 2\sum_{i=1}^{m}\mathrm{Re}\Big[\big(\hat H_{\kappa i}(-\bar{\hat\lambda}_\mu) - H_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\overline{\hat B^i_\mu}\Big].$$

Analogously, we obtain

$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat C^\kappa_\mu} = 2\sum_{i=1}^{m}\mathrm{Im}\Big[\big(\hat H_{\kappa i}(-\bar{\hat\lambda}_\mu) - H_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\overline{\hat B^i_\mu}\Big],$$
$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat B^\iota_\mu} = 2\sum_{k=1}^{p}\mathrm{Re}\Big[\overline{\hat C^k_\mu}\big(\hat H_{k\iota}(-\bar{\hat\lambda}_\mu) - H_{k\iota}(-\bar{\hat\lambda}_\mu)\big)\Big], \qquad \frac{\partial J}{\partial\,\mathrm{Im}\,\hat B^\iota_\mu} = 2\sum_{k=1}^{p}\mathrm{Im}\Big[\overline{\hat C^k_\mu}\big(\hat H_{k\iota}(-\bar{\hat\lambda}_\mu) - H_{k\iota}(-\bar{\hat\lambda}_\mu)\big)\Big].$$

Hence, the necessary optimality condition $\partial J/\partial\tilde v = 0$ implies

$$\sum_{i=1}^{m}\hat H_{\kappa i}(-\bar{\hat\lambda}_\mu)\,\overline{\hat B^i_\mu} = \sum_{i=1}^{m}H_{\kappa i}(-\bar{\hat\lambda}_\mu)\,\overline{\hat B^i_\mu}, \qquad \sum_{k=1}^{p}\overline{\hat C^k_\mu}\,\hat H_{k\iota}(-\bar{\hat\lambda}_\mu) = \sum_{k=1}^{p}\overline{\hat C^k_\mu}\,H_{k\iota}(-\bar{\hat\lambda}_\mu), \qquad \kappa = 1,\ldots,p,\ \iota = 1,\ldots,m,$$

for $\mu = 1,\ldots,r$, which is equivalent to the first two conditions in (4.26). Differentiation of $J$ with respect to the real, resp., imaginary part of $\hat\lambda_\mu$ via formulas (4.11) for $J_{ki}$ directly yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = 2\sum_{i,k}\mathrm{Re}\Big(\overline{\hat\varphi^{ki}_\mu}\big(H'_{ki}(-\bar{\hat\lambda}_\mu) - \hat H'_{ki}(-\bar{\hat\lambda}_\mu)\big)\Big), \qquad \frac{\partial J}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = 2\sum_{i,k}\mathrm{Im}\Big(\overline{\hat\varphi^{ki}_\mu}\big(H'_{ki}(-\bar{\hat\lambda}_\mu) - \hat H'_{ki}(-\bar{\hat\lambda}_\mu)\big)\Big)$$

for $\mu = 1,\ldots,r$. Due to these necessary conditions and $\hat\varphi^{ki}_j = \hat C^k_j\hat B^i_j$, we obtain

$$\sum_{i,k}\overline{\hat C^k_\mu}\,\hat H'_{ki}(-\bar{\hat\lambda}_\mu)\,\overline{\hat B^i_\mu} = \sum_{i,k}\overline{\hat C^k_\mu}\,H'_{ki}(-\bar{\hat\lambda}_\mu)\,\overline{\hat B^i_\mu}, \qquad \mu = 1,\ldots,r.$$

This is equivalent to the third condition in (4.26) and hence completes the proof.
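A small numerical illustration of the tangential conditions (4.26) (added here, not part of the report): for made-up MIMO data one can evaluate the left, right and two-sided interpolation defects of a candidate reduced model at the mirror images $-\bar{\hat\lambda}_j$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m, p = 6, 2, 2, 3                      # made-up dimensions

# Original system: diagonal A with stable poles, random B, C.
lam = -rng.uniform(1.0, 5.0, n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Candidate reduced system (hypothetical iterate, not optimal).
lamr = -rng.uniform(1.0, 5.0, r)
Br = rng.standard_normal((r, m))
Cr = rng.standard_normal((p, r))

def tf(s, poles, Bm, Cm):                    # H(s) = C (sI - A)^{-1} B, A diagonal
    return (Cm / (s - poles)) @ Bm

def dtf(s, poles, Bm, Cm):                   # H'(s) = -C (sI - A)^{-2} B
    return -(Cm / (s - poles) ** 2) @ Bm

for j in range(r):
    s = -np.conj(lamr[j])                    # mirror image of the reduced pole
    bj, cj = Br[j, :].conj(), Cr[:, j].conj()
    E = tf(s, lam, B, C) - tf(s, lamr, Br, Cr)
    dE = dtf(s, lam, B, C) - dtf(s, lamr, Br, Cr)
    print(np.linalg.norm(E @ bj),            # right tangential defect
          np.linalg.norm(cj @ E),            # left tangential defect
          abs(cj @ dE @ bj))                 # two-sided defect; all zero at optimality
```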

4.2.2. Multiple poles. Now, the optimization problem is given by

$$\min_{v}\ J(\tilde v) = \sum_{i,k}\sum_{j=1}^{N}\sum_{l=1}^{n_j}\frac{(-1)^{l-1}}{(l-1)!}\,\overline{\varphi^{ki}_{jl}}\big(H^{(l-1)}_{ki}(-\bar\lambda_j) - \hat H^{(l-1)}_{ki}(-\bar\lambda_j)\big) + \sum_{i,k}\sum_{j=1}^{R}\sum_{l=1}^{r_j}\frac{(-1)^{l-1}}{(l-1)!}\,\overline{\hat\varphi^{ki}_{jl}}\big(\hat H^{(l-1)}_{ki}(-\bar{\hat\lambda}_j) - H^{(l-1)}_{ki}(-\bar{\hat\lambda}_j)\big) \qquad (4.27)$$

with the optimization vector

$$v = (\hat\lambda_1,\ldots,\hat\lambda_R,\hat{\tilde B}_1,\ldots,\hat{\tilde B}_R,\hat{\tilde C}_1,\ldots,\hat{\tilde C}_R)^T \in \mathbb{C}^{R+mr+pr}, \qquad (4.28)$$

resp., $\tilde v = (\mathrm{Re}\,v^T,\mathrm{Im}\,v^T)^T \in \mathbb{R}^{2(R+mr+pr)}$, and the principal coefficients $\hat\varphi^{ki}_{jl}$ as in (2.15).

Theorem 4.11. Necessary optimality conditions for problem (4.27) are given by

$$\sum_{q=0}^{r_j-l}\frac{(-1)^q}{q!}\,H^{(q)}(-\bar{\hat\lambda}_j)\,\big((\hat{\tilde B}_j)_{l+q}\big)^{*} = \sum_{q=0}^{r_j-l}\frac{(-1)^q}{q!}\,\hat H^{(q)}(-\bar{\hat\lambda}_j)\,\big((\hat{\tilde B}_j)_{l+q}\big)^{*},$$
$$\sum_{q=0}^{l-1}\frac{(-1)^q}{q!}\,\big((\hat{\tilde C}_j)_{l-q}\big)^{*}\,H^{(q)}(-\bar{\hat\lambda}_j) = \sum_{q=0}^{l-1}\frac{(-1)^q}{q!}\,\big((\hat{\tilde C}_j)_{l-q}\big)^{*}\,\hat H^{(q)}(-\bar{\hat\lambda}_j) \qquad (4.29)$$

for $j = 1,\ldots,R$, $l = 1,\ldots,r_j$, and

$$\sum_{l=1}^{r_j}\frac{(-1)^{l-1}}{(l-1)!}\sum_{q=1}^{r_j-l+1}\big((\hat{\tilde C}_j)_{q}\big)^{*}\,H^{(l)}(-\bar{\hat\lambda}_j)\,\big((\hat{\tilde B}_j)_{q+l-1}\big)^{*} = \sum_{l=1}^{r_j}\frac{(-1)^{l-1}}{(l-1)!}\sum_{q=1}^{r_j-l+1}\big((\hat{\tilde C}_j)_{q}\big)^{*}\,\hat H^{(l)}(-\bar{\hat\lambda}_j)\,\big((\hat{\tilde B}_j)_{q+l-1}\big)^{*} \qquad (4.30)$$

for $j = 1,\ldots,R$, where

$$(\hat{\tilde C}_j)_q := q\text{-th column of the matrix }\hat{\tilde C}_j, \qquad (\hat{\tilde B}_j)_q := q\text{-th row of the matrix }\hat{\tilde B}_j.$$

Proof: Again, we write $J = \sum_{i,k}J_{ki}$ and use the formulas from the SISO case. Due to the principal coefficient representation (2.15), for $i = 1,\ldots,m$ we have

$$\frac{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_{jl}}{\partial\,\mathrm{Re}\,(\hat{\tilde C}_\mu)^\kappa_\nu} = \mathrm{Re}\,(\hat{\tilde B}_\mu)^i_{\nu+l-1}$$

if $\kappa = k$, $\mu = j$, $\nu \leq r_j - l + 1$, resp., for $k = 1,\ldots,p$ we obtain

$$\frac{\partial\,\mathrm{Re}\,\hat\varphi^{ki}_{jl}}{\partial\,\mathrm{Re}\,(\hat{\tilde B}_\mu)^\iota_\nu} = \mathrm{Re}\,(\hat{\tilde C}_\mu)^k_{\nu-l+1}$$

if $\iota = i$, $\mu = j$, $l \leq \nu$, whereas the derivatives for other indices vanish. Similar formulas hold for the derivatives of the imaginary parts of $\hat\varphi$ and also for the derivatives with respect to the imaginary parts of $\hat B$ and $\hat C$. Therefore, the chain rule yields

$$\frac{\partial J}{\partial\,\mathrm{Re}\,(\hat{\tilde C}_\mu)^\kappa_\nu} = 2\sum_{i=1}^{m}\sum_{l=1}^{r_\mu-\nu+1}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Re}\Big[\big(\hat H^{(l-1)}_{\kappa i}(-\bar{\hat\lambda}_\mu) - H^{(l-1)}_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\overline{(\hat{\tilde B}_\mu)^i_{\nu+l-1}}\Big],$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,(\hat{\tilde C}_\mu)^\kappa_\nu} = 2\sum_{i=1}^{m}\sum_{l=1}^{r_\mu-\nu+1}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Im}\Big[\big(\hat H^{(l-1)}_{\kappa i}(-\bar{\hat\lambda}_\mu) - H^{(l-1)}_{\kappa i}(-\bar{\hat\lambda}_\mu)\big)\,\overline{(\hat{\tilde B}_\mu)^i_{\nu+l-1}}\Big],$$
$$\frac{\partial J}{\partial\,\mathrm{Re}\,(\hat{\tilde B}_\mu)^\iota_\nu} = 2\sum_{k=1}^{p}\sum_{l=1}^{\nu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Re}\Big[\overline{(\hat{\tilde C}_\mu)^k_{\nu-l+1}}\,\big(\hat H^{(l-1)}_{k\iota}(-\bar{\hat\lambda}_\mu) - H^{(l-1)}_{k\iota}(-\bar{\hat\lambda}_\mu)\big)\Big],$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,(\hat{\tilde B}_\mu)^\iota_\nu} = 2\sum_{k=1}^{p}\sum_{l=1}^{\nu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Im}\Big[\overline{(\hat{\tilde C}_\mu)^k_{\nu-l+1}}\,\big(\hat H^{(l-1)}_{k\iota}(-\bar{\hat\lambda}_\mu) - H^{(l-1)}_{k\iota}(-\bar{\hat\lambda}_\mu)\big)\Big].$$

As all derivatives have to vanish, we obtain the two conditions in (4.29). Furthermore, from (4.23) we have

$$\frac{\partial J}{\partial\,\mathrm{Re}\,\hat\lambda_\mu} = 2\sum_{i,k}\sum_{l=1}^{r_\mu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Re}\Big(\overline{\hat\varphi^{ki}_{\mu l}}\,\big(H^{(l)}_{ki}(-\bar{\hat\lambda}_\mu) - \hat H^{(l)}_{ki}(-\bar{\hat\lambda}_\mu)\big)\Big),$$
$$\frac{\partial J}{\partial\,\mathrm{Im}\,\hat\lambda_\mu} = 2\sum_{i,k}\sum_{l=1}^{r_\mu}\frac{(-1)^{l-1}}{(l-1)!}\,\mathrm{Im}\Big(\overline{\hat\varphi^{ki}_{\mu l}}\,\big(H^{(l)}_{ki}(-\bar{\hat\lambda}_\mu) - \hat H^{(l)}_{ki}(-\bar{\hat\lambda}_\mu)\big)\Big),$$

which, together with the representation (2.15) for the principal coefficients, yields condition (4.30).

Remark 4.12. Note that in the case of multiple poles we do not have tangential interpolation as necessary conditions; instead, sums of certain directions multiplied with certain derivatives of $H$ and $\hat H$ must coincide. This furthermore implies that, contrary to the SISO case, we cannot use the first two conditions (4.29) to obtain a simple third condition which involves only the $r_j$-th derivatives of the transfer functions.

REFERENCES

[1] A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, Advances in Design and Control 6, SIAM, Philadelphia, 2005.
[2] J.B. Conway, Functions of One Complex Variable, Springer, 1978.
[3] S. Gugercin, C. Beattie and A.C. Antoulas, Rational Krylov Methods for Optimal H2 Model Reduction, ICAM Technical Report, Virginia Tech, 2006; submitted to SIAM Journal on Matrix Analysis and Applications.
[4] L. Meier and D.G. Luenberger, Approximation of Linear Constant Systems, IEEE Trans. Automatic Control, 12 (1967).


More information

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt

Chapter 30 Minimality and Stability of Interconnected Systems 30.1 Introduction: Relating I/O and State-Space Properties We have already seen in Chapt Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

ON THE MATRIX EQUATION XA AX = X P

ON THE MATRIX EQUATION XA AX = X P ON THE MATRIX EQUATION XA AX = X P DIETRICH BURDE Abstract We study the matrix equation XA AX = X p in M n (K) for 1 < p < n It is shown that every matrix solution X is nilpotent and that the generalized

More information

A Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems

A Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems A Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems W Auzinger and H J Stetter Abstract In an earlier paper we had motivated and described am algorithm for the computation

More information

Chapter 6: Residue Theory. Introduction. The Residue Theorem. 6.1 The Residue Theorem. 6.2 Trigonometric Integrals Over (0, 2π) Li, Yongzhao

Chapter 6: Residue Theory. Introduction. The Residue Theorem. 6.1 The Residue Theorem. 6.2 Trigonometric Integrals Over (0, 2π) Li, Yongzhao Outline Chapter 6: Residue Theory Li, Yongzhao State Key Laboratory of Integrated Services Networks, Xidian University June 7, 2009 Introduction The Residue Theorem In the previous chapters, we have seen

More information

Chapter 2 Formulas and Definitions:

Chapter 2 Formulas and Definitions: Chapter 2 Formulas and Definitions: (from 2.1) Definition of Polynomial Function: Let n be a nonnegative integer and let a n,a n 1,...,a 2,a 1,a 0 be real numbers with a n 0. The function given by f (x)

More information

A RAPID INTRODUCTION TO COMPLEX ANALYSIS

A RAPID INTRODUCTION TO COMPLEX ANALYSIS A RAPID INTRODUCTION TO COMPLEX ANALYSIS AKHIL MATHEW ABSTRACT. These notes give a rapid introduction to some of the basic results in complex analysis, assuming familiarity from the reader with Stokes

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

The Residue Theorem. Integration Methods over Closed Curves for Functions with Singularities

The Residue Theorem. Integration Methods over Closed Curves for Functions with Singularities The Residue Theorem Integration Methods over losed urves for Functions with Singularities We have shown that if f(z) is analytic inside and on a closed curve, then f(z)dz = 0. We have also seen examples

More information

On the Characterization Feedback of Positive LTI Continuous Singular Systems of Index 1

On the Characterization Feedback of Positive LTI Continuous Singular Systems of Index 1 Advanced Studies in Theoretical Physics Vol. 8, 2014, no. 27, 1185-1190 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/astp.2014.48245 On the Characterization Feedback of Positive LTI Continuous

More information

ON THE LOCAL VERSION OF THE CHERN CONJECTURE: CMC HYPERSURFACES WITH CONSTANT SCALAR CURVATURE IN S n+1

ON THE LOCAL VERSION OF THE CHERN CONJECTURE: CMC HYPERSURFACES WITH CONSTANT SCALAR CURVATURE IN S n+1 Kragujevac Journal of Mathematics Volume 441 00, Pages 101 111. ON THE LOCAL VERSION OF THE CHERN CONJECTURE: CMC HYPERSURFACES WITH CONSTANT SCALAR CURVATURE IN S n+1 S. C. DE ALMEIDA 1, F. G. B. BRITO,

More information

4 Uniform convergence

4 Uniform convergence 4 Uniform convergence In the last few sections we have seen several functions which have been defined via series or integrals. We now want to develop tools that will allow us to show that these functions

More information

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0).

Theorem 1. ẋ = Ax is globally exponentially stable (GES) iff A is Hurwitz (i.e., max(re(σ(a))) < 0). Linear Systems Notes Lecture Proposition. A M n (R) is positive definite iff all nested minors are greater than or equal to zero. n Proof. ( ): Positive definite iff λ i >. Let det(a) = λj and H = {x D

More information

Course Summary Math 211

Course Summary Math 211 Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

INTRODUCTION TO REAL ANALYTIC GEOMETRY

INTRODUCTION TO REAL ANALYTIC GEOMETRY INTRODUCTION TO REAL ANALYTIC GEOMETRY KRZYSZTOF KURDYKA 1. Analytic functions in several variables 1.1. Summable families. Let (E, ) be a normed space over the field R or C, dim E

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Problem Set 5 Solutions 1

Problem Set 5 Solutions 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.245: MULTIVARIABLE CONTROL SYSTEMS by A. Megretski Problem Set 5 Solutions The problem set deals with Hankel

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Nonconformity and the Consistency Error First Strang Lemma Abstract Error Estimate

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Characteristic Polynomial

Characteristic Polynomial Linear Algebra Massoud Malek Characteristic Polynomial Preleminary Results Let A = (a ij ) be an n n matrix If Au = λu, then λ and u are called the eigenvalue and eigenvector of A, respectively The eigenvalues

More information

Fluid flow dynamical model approximation and control

Fluid flow dynamical model approximation and control Fluid flow dynamical model approximation and control... a case-study on an open cavity flow C. Poussot-Vassal & D. Sipp Journée conjointe GT Contrôle de Décollement & GT MOSAR Frequency response of an

More information

Complex Analysis Qualifying Exam Solutions

Complex Analysis Qualifying Exam Solutions Complex Analysis Qualifying Exam Solutions May, 04 Part.. Let log z be the principal branch of the logarithm defined on G = {z C z (, 0]}. Show that if t > 0, then the equation log z = t has exactly one

More information

Canonical lossless state-space systems: staircase forms and the Schur algorithm

Canonical lossless state-space systems: staircase forms and the Schur algorithm Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 1964 1986 www.elsevier.com/locate/laa An iterative SVD-Krylov based method for model reduction of large-scale dynamical

More information