Optimal $\mathcal{RH}_2$- and $\mathcal{RH}_\infty$-Approximation of Unstable Descriptor Systems
Marcus Köhler

arXiv:13.595v [math.OC] 1 Aug 1

Stability preservation is an important topic in the approximation of systems, e.g. in model reduction. If the original system is stable, we usually want the approximation to be stable as well. But even if an algorithm preserves stability, the resulting system can be unstable in practice because of round-off errors. Our approach is to approximate this unstable reduced system by a stable system. More precisely, we consider the following problem: given an unstable linear time-invariant continuous-time descriptor system with transfer function $G$, find a stable one whose transfer function is the best approximation of $G$ in the spaces $\mathcal{RH}_2$ and $\mathcal{RH}_\infty$, respectively. Explicit optimal solutions are presented under consideration of numerical issues.

1 Introduction

We deal with linear time-invariant descriptor systems described by differential-algebraic equations (see e.g. [KM06], [Dai89] for more details)

  $E\dot{x}(t) = Ax(t) + Bu(t)$,
  $y(t) = Cx(t) + Du(t)$,  (1.1)

with matrices $(E, A, B, C, D) \in \mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times m}\times\mathbb{R}^{p\times n}\times\mathbb{R}^{p\times m}$ and the corresponding transfer function $s \mapsto C(sE - A)^{-1}B + D$. In many applications, such as model reduction (see [ASG01] for an overview and further references) or system identification (see [Lju99]), one wishes to approximate systems such that the transfer functions of the original and the approximated system are as close as possible. An important property of an approximation technique is preservation of stability (cf. e.g. balanced truncation [ASG01], hankel norm approximation [Glo84]). However, not every method has this property in general (Arnoldi method, Lanczos method [ASG01]). Moreover, a theoretically stable approximation can be unstable in practice [BFF98]. This happens in computer implementations because of round-off errors. One way out is restarting the algorithm with
changed parameters, e.g. the interpolation points in the Lanczos method. The disadvantage is an at least doubled computational complexity. Another approach is to modify the algorithm directly, e.g. the partial Padé-via-Lanczos method [BF01].

In this paper our approach to stability preservation is to consider the computed unstable approximation of the original system. Given is an unstable descriptor system $S$ (1.1) with transfer function $G$. Find a stable descriptor system whose transfer function is the best approximation of $G$ in the spaces $\mathcal{RL}_2$ and $\mathcal{RL}_\infty$ of real rational functions lying, on the imaginary axis $i\mathbb{R}$, in the Lebesgue spaces $L_2$ and $L_\infty$, respectively. We call this system an optimal $\mathcal{RH}_2$-approximation and an optimal $\mathcal{RH}_\infty$-approximation of $S$, respectively.

For special cases this problem has already been solved. The characterization of the unique optimal $\mathcal{RH}_2$-approximation of a standard system (i.e. $E = I$ in (1.1)) with $D = 0$ is a simple consequence of the Paley-Wiener theorem. The set of all suboptimal $\mathcal{RH}_\infty$-approximations of standard systems was determined in [GGLD90] with $J$-spectral factorizations to solve model matching problems. However, no optimal ones were explicitly given. An explicit representation of a nonunique optimal $\mathcal{RH}_\infty$-approximation of minimal antistable standard systems was presented in the proof of [Glo84, Theorem 6.1]. This theorem was only used as an auxiliary result for a more general problem, the optimal hankel norm approximation. That representation requires the possibly ill-conditioned balanced minimal realization of $G$. This numerical issue was discussed in [SCL90].

Our approach for approximating an unstable descriptor system $S$ in a numerically reliable way is the following. First, we decompose $S$ into a stable and an antistable part by combining [GL96, Theorem 7.7.] and [KD92, Section 4.1]. We show that it is sufficient to replace the antistable part by its approximation to get an approximation of $S$. This was proposed e.g. in [Fra87, Section 8.3] for standard systems and w.r.t. $\mathcal{RL}_\infty$.
The stable part is the unique optimal $\mathcal{RH}_2$-approximation of $S$. Our representation of the optimal $\mathcal{RH}_\infty$-approximation only requires at most one singular value decomposition. For discrete-time standard systems [Mar] presented an approximation method similar to ours. However, in contrast to our approach they used a balanced minimal realization of the antistable part.

The paper is organized as follows. In Section 2 we summarize some results regarding the theory of matrices and matrix pairs. We introduce the setting and recall some system transformations, e.g. the balanced realization. Section 3 states and solves both approximation problems. Section 4 presents an algorithm solving our problem and some numerical examples.

2 Preliminaries

We define $\mathbb{C}_> := \{s \in \mathbb{C};\ \mathrm{Re}\,s > 0\}$ and analogously $\mathbb{C}_\geq$, $\mathbb{C}_<$, $\mathbb{C}_\leq$. The set $i\mathbb{R}$ is the imaginary axis. For a function $f$ we denote its domain by $D(f)$ and, for $M \subseteq D(f)$, the image of $M$ under $f$ by $f[M]$. The kernel of a matrix $A \in \mathbb{C}^{n\times m}$ is denoted by $\ker A$, the adjoint by $A^*$, the spectrum by $\sigma(A)$ and the Frobenius norm by $\|A\|_F = \sqrt{\mathrm{trace}(A^*A)}$.
The matrix $I_n \in \mathbb{R}^{n\times n}$ is the identity matrix and $0_{n\times m} \in \mathbb{R}^{n\times m}$ the zero matrix. The vector $e_i \in \mathbb{R}^n$ is the $i$th unit vector.

Definition 2.1. We call $(E, A) \in \mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}$ a matrix pair. A scalar $\lambda \in \mathbb{C}$ is called a generalized eigenvalue of $(E, A)$ if $\det(\lambda E - A) = 0$. If there exists $v \neq 0$ such that $Ev = 0$ and $Av \neq 0$, then $\infty$ is called a generalized eigenvalue of $(E, A)$. We denote the set of all eigenvalues of $(E, A)$ by $\sigma(E, A) \subseteq \mathbb{C}\cup\{\infty\}$ and the resolvent set by $\rho(E, A) := \mathbb{C}\setminus\sigma(E, A)$. If $\rho(E, A) = \emptyset$, then $(E, A)$ is called singular, otherwise regular. We denote the set of all regular matrix pairs by $\mathcal{P}_n := \{(E, A) \in \mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n};\ (E, A)\ \text{regular}\}$.

Remark 2.2. If the matrix pair is regular, then the set of eigenvalues is finite. Otherwise every $\lambda \in \mathbb{C}$ is an eigenvalue. Details about the generalized eigenvalue problem can be found in [Ste7].

To prepare decompositions of systems, we combine the results in [GL96, Theorem 7.7.] and [KD92, Section 4.1] to get the following theorem.

Theorem 2.3. Let $(E, A) \in \mathcal{P}_n$ and let $M_1, M_2 \subseteq \mathbb{C}\cup\{\infty\}$ be two disjoint sets such that $\sigma(E, A) = M_1 \cup M_2$. Then there exist orthogonal $U, V \in \mathbb{R}^{n\times n}$ such that

  $UEV = \begin{pmatrix} E_1 & E_2 \\ 0 & E_3 \end{pmatrix}$, $UAV = \begin{pmatrix} A_1 & A_2 \\ 0 & A_3 \end{pmatrix}$,  (2.1)

$\sigma(E_1, A_1) = M_1$ and $\sigma(E_3, A_3) = M_2$. Moreover, there exist $R, L$ of appropriate dimensions such that

  $A_1 R - L A_3 = -A_2$, $E_1 R - L E_3 = -E_2$.  (2.2)

The matrices

  $P := \begin{pmatrix} I & -L \\ 0 & I \end{pmatrix} U$, $Q := V \begin{pmatrix} I & R \\ 0 & I \end{pmatrix}$

satisfy $PEQ = \begin{pmatrix} E_1 & 0 \\ 0 & E_3 \end{pmatrix}$, $PAQ = \begin{pmatrix} A_1 & 0 \\ 0 & A_3 \end{pmatrix}$.

Remark 2.4. We get (2.1) e.g. by applying the generalized Schur decomposition to $(E, A)$. Equation (2.2) is called a generalized or coupled Sylvester equation, see [KD92].

We recall the following known result, which can be used to test the regularity of matrices.

Theorem 2.5 ([ZDG96, Section 2.3]). Let $A \in \mathbb{C}^{n\times n}$, $B \in \mathbb{C}^{n\times m}$, $C \in \mathbb{C}^{m\times n}$, $D \in \mathbb{C}^{m\times m}$ and $M := \begin{pmatrix} A & B \\ C & D \end{pmatrix}$. If $A$ is regular, then $\det M = \det(D - CA^{-1}B)\det A$. If $D$ is regular, then $\det M = \det(A - BD^{-1}C)\det D$. The matrices $D - CA^{-1}B$ and $A - BD^{-1}C$ are called the Schur complement of $A$ and of $D$ in $M$, respectively.
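Theorem 2.3 and Remark 2.4 can be realized numerically with an ordered QZ (generalized Schur) decomposition followed by the coupled Sylvester equation (2.2). The following sketch is our illustration, not the paper's implementation; it assumes a small pencil whose eigenvalues split into finite stable ($M_1$) and antistable ($M_2$) ones, and solves (2.2) by brute-force vectorization:

```python
import numpy as np
from scipy.linalg import ordqz

def decouple(E, A):
    """Block-diagonalize the regular pencil (E, A) as in Theorem 2.3,
    with M1 = finite stable eigenvalues ordered first.
    Returns P, Q, k with P@E@Q = diag(E1, E3), P@A@Q = diag(A1, A3)."""
    # Eigenvalues of det(lambda*E - A) = 0 are the generalized eigenvalues
    # of the pair (A, E); sort='lhp' orders Re(lambda) < 0 first.
    AA, EE, alpha, beta, QQ, ZZ = ordqz(A, E, sort='lhp', output='real')
    U, V = QQ.T, ZZ                       # U@A@V = AA, U@E@V = EE, cf. (2.1)
    k = int(np.sum((alpha / beta).real < 0))          # size of the stable block
    A1, A2, A3 = AA[:k, :k], AA[:k, k:], AA[k:, k:]
    E1, E2, E3 = EE[:k, :k], EE[:k, k:], EE[k:, k:]
    # Coupled Sylvester equation (2.2): A1@R - L@A3 = -A2, E1@R - L@E3 = -E2,
    # solved via column-major vectorization (fine for small n).
    l = E.shape[0] - k
    I_l, I_k = np.eye(l), np.eye(k)
    top = np.hstack([np.kron(I_l, A1), -np.kron(A3.T, I_k)])
    bot = np.hstack([np.kron(I_l, E1), -np.kron(E3.T, I_k)])
    rhs = -np.concatenate([A2.flatten('F'), E2.flatten('F')])
    x = np.linalg.solve(np.vstack([top, bot]), rhs)
    R = x[:k * l].reshape((k, l), order='F')
    L = x[k * l:].reshape((k, l), order='F')
    P = np.block([[I_k, -L], [np.zeros((l, k)), I_l]]) @ U
    Q = V @ np.block([[I_k, R], [np.zeros((l, k)), I_l]])
    return P, Q, k

E = np.array([[2.0, 0.0], [0.0, 1.0]])
A = np.array([[-2.0, 1.0], [0.0, 3.0]])   # sigma(E, A) = {-1, 3}
P, Q, k = decouple(E, A)
```

For large problems one would use the blocked coupled Sylvester solvers referenced in [KD92] instead of the Kronecker-product solve, whose cost grows rapidly with $n$.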
Definition 2.6. For $n, p, m \in \mathbb{N}$ we define the sets of systems (compare (1.1))

  $\mathcal{S}_{n,p,m} := \mathcal{P}_n\times\mathbb{R}^{n\times m}\times\mathbb{R}^{p\times n}\times\mathbb{R}^{p\times m}$,
  $\bar{\mathcal{S}}_{n,p,m} := \{(E, A, B, C, D) \in \mathcal{S}_{n,p,m};\ i\mathbb{R} \subseteq \rho(E, A)\}$,
  $\mathcal{S}^+_{n,p,m} := \{(E, A, B, C, D) \in \mathcal{S}_{n,p,m};\ \mathbb{C}_\geq \subseteq \rho(E, A)\}$,
  $\mathcal{S}^-_{n,p,m} := \{(E, A, B, C, D) \in \mathcal{S}_{n,p,m};\ \mathbb{C}_\leq \subseteq \rho(E, A),\ E\ \text{regular}\}$,
where $\mathcal{S}^+_{n,p,m}$ and $\mathcal{S}^-_{n,p,m}$ are the sets of the stable and the antistable systems, respectively. We call a system $S = (E, A, B, C, D) \in \mathcal{S}_{n,p,m}$ a standard system if $E = I$ and a descriptor system otherwise. For $S_i = (E_i, A_i, B_i, C_i, D_i) \in \mathcal{S}_{n_i,p,m}$, $i \in \{1, 2\}$, and regular $P, Q \in \mathbb{R}^{n_1\times n_1}$ we define the operations

  $S_1 \oplus S_2 := \left( \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix}, \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}, \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}, \begin{pmatrix} C_1 & C_2 \end{pmatrix}, D_1 + D_2 \right)$,
  $P S_1 Q := (P E_1 Q, P A_1 Q, P B_1, C_1 Q, D_1)$, $P S_1 := P S_1 I_{n_1}$, $S_1 Q := I_{n_1} S_1 Q$.

Some of the following definitions and theorems for standard systems could be given for descriptor systems, too. However, we restrict ourselves to the specializations we need here.

Definition 2.7 ([Dai89, Definition -6.1]). Let $S = (E, A, B, C, D) \in \mathcal{S}_{n,p,m}$. Then $G(S)\colon \rho(E, A) \to \mathbb{C}^{p\times m}$, $s \mapsto C(sE - A)^{-1}B + D$, is called the transfer function of $S$. We denote the set of all $p\times m$ transfer functions by $\mathcal{T}_{p,m} := \{G(S);\ S \in \bigcup_{n\in\mathbb{N}}\mathcal{S}_{n,p,m}\}$. A system $\tilde S \in \mathcal{S}_{\tilde n,p,m}$ is called a realization of $G(S)$ if $G(S)(s) = G(\tilde S)(s)$ for all $s \in D(G(S)) \cap D(G(\tilde S))$. We denote the set of all realizations of $G(S)$ by $\mathcal{S}(G(S))$. A realization $S_1 \in \mathcal{S}_{n_1,p,m}$ of $G(S)$ is said to be minimal if every other realization $S_2 \in \mathcal{S}_{n_2,p,m}$ of $G(S)$ fulfils $n_2 \geq n_1$.

Remark 2.8. Transfer functions are meromorphic. Obviously, $G(S_1 \oplus S_2) = G(S_1) + G(S_2)$ and $G(PSQ) = G(S)$ hold.

We want to transform a system $S = (E, A, B, C, D)$ into $\tilde S = (\tilde E, \tilde A, \tilde B, \tilde C, \tilde D)$ with $G(S) = G(\tilde S)$, in particular $\sigma(E, A) = \sigma(\tilde E, \tilde A)$, such that $\tilde E, \tilde A, \tilde B, \tilde C, \tilde D$ have a special structure, e.g. to decompose systems into a stable and an antistable part. Note that all matrices stay real under the presented transformations.

Definition 2.9 ([Dai89, Definition 1-3.1]). Two systems $S_i = (E_i, A_i, B_i, C_i, D_i) \in \mathcal{S}_{n,p,m}$, $i \in \{1, 2\}$, are called restricted system equivalent (for short r.s.e., $S_1 \sim S_2$) if there exist regular $P, Q \in \mathbb{R}^{n\times n}$ such that $PS_1Q = S_2$.

Lemma 2.10. For every $S = (E, A, B, C, D) \in \mathcal{S}_{n,p,m}$ there exist $S_w \sim S$, $k \in \{0, \dots, n\}$ and $\nu \in \mathbb{N}$ such that

  $S_w = \left( \begin{pmatrix} I_k & 0 \\ 0 & N \end{pmatrix}, \begin{pmatrix} J & 0 \\ 0 & I_{n-k} \end{pmatrix}, \begin{pmatrix} B_J \\ B_N \end{pmatrix}, \begin{pmatrix} C_J & C_N \end{pmatrix}, D \right)$

and $N^\nu = 0$. Moreover, $D(G(S)) = \rho(J) = \rho(E, A)$ holds and

  $\forall s \in D(G(S))\colon\ G(S)(s) = C_J(sI - J)^{-1}B_J + D - \sum_{i=0}^{\nu-1} s^i\, C_N N^i B_N$.  (2.3)
Proof. We apply Theorem 2.3 to $(E, A)$ with $M_1 = \sigma(E, A)\setminus\{\infty\}$, $M_2 = \{\infty\}$ and get regular $E_1, A_3, P, Q$ as in the theorem. Choose

  $\tilde P := \begin{pmatrix} E_1^{-1} & 0 \\ 0 & I \end{pmatrix} P$, $\tilde Q := Q \begin{pmatrix} I & 0 \\ 0 & A_3^{-1} \end{pmatrix}$

and set $S_w := \tilde P S \tilde Q$. The matrix $N := E_3 A_3^{-1}$ is nilpotent because $\sigma(E_3, A_3) = \{\infty\}$ and thus $\sigma(N) = \{0\}$. Equation (2.3) follows from $G(S) = G(S_w)$ (Remark 2.8) and $C_N(sN - I)^{-1}B_N = -\sum_{i=0}^{\nu-1} s^i\, C_N N^i B_N$ (Neumann series).

Theorem 2.11 ([ZDG96, Theorem 3.1, Theorem 3.17]). Let $S = (I, A, B, C, D) \in \mathcal{S}_{n,p,m}$. Then

  $S \sim \left( I_n, \begin{pmatrix} A_{11} & 0 & A_{13} & 0 \\ A_{21} & A_{22} & A_{23} & A_{24} \\ 0 & 0 & A_{33} & 0 \\ 0 & 0 & A_{43} & A_{44} \end{pmatrix}, \begin{pmatrix} B_1 \\ B_2 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} C_1 & 0 & C_3 & 0 \end{pmatrix}, D \right)$,

where $A_{ii} \in \mathbb{R}^{n_i\times n_i}$, $n_i \in \{0, \dots, n\}$ for $i \in \{1, 2, 3, 4\}$, and $S_M := (I, A_{11}, B_1, C_1, D)$ is a minimal realization of $G(S)$. This decomposition is known as the Kalman decomposition. All minimal realizations of $G(S)$ are r.s.e. to $S_M$.

Now we introduce the controllability and observability Gramians. Later these matrices play a central role in our problem of computing an optimal $\mathcal{RH}_\infty$-approximation. The next theorem follows easily from a corollary in [Pen98].

Theorem 2.12 ([Pen98], Lyapunov equation). Let $S = (E, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$. Then there exist $X_c, X_o \in \mathbb{R}^{n\times n}$ such that

  $AX_cE^* + EX_cA^* = BB^*$, $A^*X_oE + E^*X_oA = C^*C$.  (2.4)

They are unique and symmetric. We denote them by $\mathcal{C}_S := X_c$ (controllability Gramian of the system $S$) and $\mathcal{O}_S := X_o$ (observability Gramian).

Lemma 2.13. Let $S = (E, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$ and let $P, Q \in \mathbb{R}^{n\times n}$ be regular. Then $\mathcal{C}_S = Q\,\mathcal{C}_{PSQ}\,Q^*$ and $\mathcal{O}_S = P^*\mathcal{O}_{PSQ}\,P$ hold.

Proof. We prove this only for $\mathcal{C}_S$, since the proof for $\mathcal{O}_S$ is analogous. By definition of $\mathcal{C}_{PSQ}$,

  $(PAQ)\mathcal{C}_{PSQ}(PEQ)^* + (PEQ)\mathcal{C}_{PSQ}(PAQ)^* = (PB)(PB)^*$,

i.e. $P\big(A(Q\mathcal{C}_{PSQ}Q^*)E^* + E(Q\mathcal{C}_{PSQ}Q^*)A^* - BB^*\big)P^* = 0$. Due to the regularity of $P$ and $Q$ and the uniqueness of the solution of (2.4) (Theorem 2.12), the assertion is proved.
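For small dense problems the generalized Lyapunov equations (2.4) can be solved by vectorization, since $\mathrm{vec}(AXE^* + EXA^*) = (E\otimes A + A\otimes E)\,\mathrm{vec}(X)$ for real matrices. A minimal sketch (our illustration with hypothetical data, assuming the antistable sign convention of Theorem 2.12; [Pen98] is the reference for serious computation):

```python
import numpy as np

def gramians(E, A, B, C):
    """Solve A Xc E^T + E Xc A^T = B B^T and A^T Xo E + E^T Xo A = C^T C
    (equations (2.4)) by vectorization; valid for small antistable systems."""
    n = A.shape[0]
    Kc = np.kron(E, A) + np.kron(A, E)           # vec(A X E^T + E X A^T)
    Xc = np.linalg.solve(Kc, (B @ B.T).flatten('F')).reshape((n, n), order='F')
    Ko = np.kron(E.T, A.T) + np.kron(A.T, E.T)   # vec(A^T X E + E^T X A)
    Xo = np.linalg.solve(Ko, (C.T @ C).flatten('F')).reshape((n, n), order='F')
    return Xc, Xo

# antistable example: sigma(E, A) = {1, 2} in the open right half plane
E = np.eye(2)
A = np.diag([1.0, 2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Xc, Xo = gramians(E, A, B, C)
```

For this diagonal example $(X_c)_{ij} = 1/(a_i + a_j)$, so $X_c = X_o = \begin{pmatrix} 1/2 & 1/3 \\ 1/3 & 1/4 \end{pmatrix}$, which the residual check confirms.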
6 Theorem.14 [Glo84, Theorem 4.3]. Let S = I, A, B, C, D S n,p,m. Then there exist S b S, r N, h R such that Σ1 Σ C Sb =, O hi Sb =, h / σσ r hi 1 Σ. r If Σ 1 = Σ are diagonal, then S b is known as the balanced realization of GS. To check whether λ C is an eigenvalue of E, A, we need the following easy generalization of a known result for standard systems e.g. [Mac4, Theorem c]. Lemma.15. Let G T p,m have a realization S = E, A, B, C, D S n,p,m. Then for every finite λ σe, A at least one of the following three statements holds: i λ is a pole of G, ii rank λe A B λe A < n, iii rank < n. C Proof. Let S w S be as in Lemma.1. We set S J := I, J, B J, C J, D S k,p,m. By Lemma.1 we know that λ σe, A\ { } λ σj. By [Mac4, Theorem c] one of the following three statements holds: i λ is a pole of GS J, ii rank λi J B J < k, iii λi J rank < k. Equation.3 shows that i i. By the regularity of λn I we have rank λi J BJ λi J B J + n k = rank = rank λe A B λn I B N and analogously rank λi J C J iii. This finishes the proof. + n k = rank λe A C. Thus ii ii and iii Definition.16 [ZDG96, Section 4.3]. We approximate unstable systems with respect to the following spaces and corresponding norms. Let p, m N. RL p m := {G T p,m ; ir DG, G < }, G := sup Giω ; { RH p m := G RL p m ; G analytic on C > }; RL p m := {G T p,m ; ir DG, G < }, G := { RH p m := G RL p m ; G analytic on C > }. We omit the indices, if there is no risk of confusion. ω R 1 π C J Giω F dω; 6
7 Remark.17. These spaces are often introduced as a subset of socalled real rational functions, i.e. entrywise a quotient of polynomials with real coefficients. Actually, every real rational function is a transfer function see [Dai89, Theorem -6.3] and vice versa, since Gs i,j = e i CsE A 1 Be j +D i,j is the Schur complement of se A in Ms := se A Bej e i C D i,j and thus Gs i,j = detgs i,j = detms detse A be elements of one of these spaces with SF = SG, e.g. s 1 s 1 by Theorem.5. Let F and G s 1s. and s s By definition they are equal on the cofinite set DF DG. Therefore we identify them as the same function. With this and by the identity theorem for analytic functions the spaces are indeed normed spaces. Theorem.18 [Fra87, Section.3]. Let G RL be analytic on C < and G + RH. Then G + G + = G + G + holds. The following known result characterizes these spaces in terms of realizations of transfer functions. Theorem.19 [Fra87, Section.3]. Let G T p,m with ir DG. Then the following hold: a G RL I, A, B, C, D SG, b G RH I, A, B, C, D SG S + n,p,m, c G RL I, A, B, C, SG, d G RH I, A, B, C, SG S + n,p,m. 3 Optimal stable approximations 3.1 Problem statement In this paper, we deal with the following problem. Approximation-Problem AP q. Let q {, }. For S S n,p,m find S S+ n,p,m such that n N GS G S q = inf Ŝ n N S+ n,p,m GS GŜ q. In the next subsections we show that this problem is solvable. We also present an explicit solution in Theorem 3. for q = and Theorem 3.16 for q =. We do not consider systems S = E, A, B, C, D S n,p,m where E, A has imaginary eigenvalues. If there is no imaginary pole of GS, then there exists a realization S S n,p,m of GS and we solve AP q for S. Otherwise, for every Ŝ n N S+ n,p,m we have GS GŜ q = and hence AP q is not solvable. 7
First, we show that, to solve AP$_q$ for a descriptor system, we can equivalently solve AP$_q$ for its antistable part. This was proposed e.g. in [Fra87, Section 8.3], but only for standard systems and $q = \infty$. We need Theorem 3.1 in order to use some results in [Glo84] and [SCL90] which are only applicable to antistable standard systems.

Theorem 3.1. Let $S \in \bar{\mathcal{S}}_{n,p,m}$. Then there exist $S_+ = (E_+, A_+, B_+, C_+, D) \in \mathcal{S}^+_{n_+,p,m}$ and $S_- = (E_-, A_-, B_-, C_-, 0) \in \mathcal{S}^-_{n_-,p,m}$ such that $S \sim S_+ \oplus S_-$. Let $q \in \{2, \infty\}$ and $\gamma \geq 0$. Then the following two statements are equivalent:
(i) $\exists \tilde S \in \bigcup_{\tilde n}\mathcal{S}^+_{\tilde n,p,m}\colon\ \|G(S) - G(\tilde S)\|_q \leq \gamma$,
(ii) $\exists \tilde S_- \in \bigcup_{\tilde n}\mathcal{S}^+_{\tilde n,p,m}\colon\ \|G(S_-) - G(\tilde S_-)\|_q \leq \gamma$.
If $\tilde S_-$ satisfies (ii), then $\tilde S_- \oplus S_+$ satisfies (i).

Proof. Theorem 2.3 assures the existence of such systems $S_+$ and $S_-$. Let $\tilde S \in \mathcal{S}^+_{\tilde n,p,m}$ satisfy (i). Then we have

  $\gamma \geq \|G(S) - G(\tilde S)\|_q = \|G(S_-) - (G(\tilde S) - G(S_+))\|_q$.

Thus $\hat S := \tilde S \oplus (E_+, A_+, -B_+, C_+, -D)$ satisfies (ii) (compare Remark 2.8). The fact that $\hat S \in \bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}$ follows easily from the definition of $\oplus$. Conversely, let $\tilde S_- \in \mathcal{S}^+_{\tilde n,p,m}$ fulfil (ii). Then analogously $\tilde S_- \oplus S_+$ satisfies (i).

3.2 Optimal $\mathcal{RH}_2$-approximations

Theorem 3.2. Let $S \in \bar{\mathcal{S}}_{n,p,m}$. We use the notation of Theorem 3.1. Then $S_+$ solves AP$_2$. More precisely, we have

  $\inf_{\hat S \in \bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}} \|G(S) - G(\hat S)\|_2 = \|G(S) - G(S_+)\|_2 = \|G(S_-)\|_2$.

The solution is unique in the following sense: if $\tilde S$ is another solution, then it is also a realization of $G(S_+)$.

Proof. Let $\hat S \in \bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}$ with $G(S) - G(\hat S) \in \mathcal{RL}_2$. Since $G(S_-) \in \mathcal{RL}_2$ (Theorem 2.19) we have $G(S_+) - G(\hat S) \in \mathcal{RH}_2$. By Theorem 2.18 we obtain

  $\|G(S) - G(\hat S)\|_2^2 = \|G(S_-) + G(S_+) - G(\hat S)\|_2^2 = \|G(S_-)\|_2^2 + \|G(S_+) - G(\hat S)\|_2^2 \geq \|G(S_-)\|_2^2$.

The lower bound is attained if and only if $\hat S \in \mathcal{S}(G(S_+))$, e.g. if $\hat S = S_+$.

Remark 3.3. From the proof of Theorem 3.2 it can be seen that the function $G(S_+)$ is even the best approximation in the Hardy space $\mathcal{H}_2$. The reason is that $G(S_-)$ is also orthogonal to $\mathcal{H}_2$ (see the more general statement of Theorem 2.18 in [Fra87, Section 2.3]).
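Theorem 3.2 can be cross-checked numerically on a toy standard system with diagonal $A$ (hypothetical data, so the stable/antistable split is immediate): the error of the candidate $\hat S = S_+$, computed from the defining integral of $\|\cdot\|_2$, matches $\|G(S_-)\|_2 = \sqrt{B_-^*\mathcal{O}_{S_-}B_-}$ obtained from the observability Gramian of the antistable part. A sketch:

```python
import numpy as np

# G(s) = 1/(s+2) + 1/(s-1): stable part S+ and antistable part S-
a_p, a_m = -2.0, 1.0

def G(s):
    return 1.0 / (s - a_p) + 1.0 / (s - a_m)

def Gp(s):                       # G(S+), the claimed optimizer
    return 1.0 / (s - a_p)

# ||G(S-)||_2 via the observability Gramian of S-: 2*a_m*Xo = c^2, cf. (2.4)
Xo = 1.0 / (2.0 * a_m)
err_gram = np.sqrt(1.0 * Xo * 1.0)          # sqrt(B^T Xo B) = sqrt(1/2)

# ||G - G(S+)||_2 from the defining integral on a large frequency grid
w = np.linspace(-2000.0, 2000.0, 2_000_001)
diff = G(1j * w) - Gp(1j * w)
err_quad = np.sqrt(np.sum(np.abs(diff) ** 2) * (w[1] - w[0]) / (2 * np.pi))
```

Both values agree with $\|G(S_-)\|_2 = 1/\sqrt{2}$ up to the truncation of the integral.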
3.3 Optimal $\mathcal{RH}_\infty$-approximations with balanced realization

For a minimal antistable standard system $S$ the problem AP$_\infty$ was already solved in [Glo84]. Combining [Glo84, Theorem 6.1] and its proof, which is based on [Glo84, Theorem 6.3], we obtain the following result. We reformulate the statements in terms of realizations instead of transfer functions. Note also the interchange of the stable and the antistable system, which is easy to show.

Theorem 3.4 ([Glo84]). Let $S \in \mathcal{S}^-_{n,p,m}$ be a minimal realization such that

  $S = \left( I, \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}, \begin{pmatrix} C_1 & C_2 \end{pmatrix}, D \right)$,
  $\mathcal{C}_S = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \sigma_1 I_r \end{pmatrix}$, $\mathcal{O}_S = \begin{pmatrix} \Sigma_2 & 0 \\ 0 & \sigma_1 I_r \end{pmatrix}$, $\sigma_1^2 \notin \sigma(\Sigma_1\Sigma_2)$,

where $\sigma_1 := \sqrt{\max\sigma(\mathcal{C}_S\mathcal{O}_S)}$ (compare Theorem 2.14). Then there exists $U \in \mathbb{R}^{p\times m}$ with $B_2 = C_2^*U$. Set

  $\Gamma := \Sigma_1\Sigma_2 - \sigma_1^2 I$,
  $\tilde A := \Gamma^{-1}\big(\sigma_1^2 A_{11}^* + \Sigma_2 A_{11}\Sigma_1 - \sigma_1 C_1^*UB_1^*\big)$,
  $\tilde B := \Gamma^{-1}\big(\Sigma_2 B_1 - \sigma_1 C_1^*U\big)$,
  $\tilde C := C_1\Sigma_1 - \sigma_1 UB_1^*$,
  $\tilde D := D - \sigma_1 U$.

Then $\tilde S := (I, \tilde A, \tilde B, \tilde C, \tilde D) \in \mathcal{S}^+_{n-r,p,m}$ holds and

  $\inf_{\hat S \in \bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}} \|G(S) - G(\hat S)\|_\infty = \|G(S) - G(\tilde S)\|_\infty = \sigma_1$.

Remark 3.5. The value $\sigma_1$ is the greatest hankel singular value of $G(S)$ (see [Fra87, Section 5.1] for further details). As stated in [Glo84, Theorem 6.1], the function $G(\tilde S)$ is even the best approximation in the more general Hardy space $\mathcal{H}_\infty$.

Because of Theorem 3.1 we are now able to solve AP$_\infty$ also if $S$ is a descriptor system. The disadvantage of algorithms based on Theorem 3.4 is that we need a balanced minimal realization of $G(S)$ to compute an optimal solution.

3.4 Optimal $\mathcal{RH}_\infty$-approximations without balanced realization

Some results of [Glo84] concerning optimal hankel norm approximation were generalized in [SCL90] to nonminimal stable standard systems. We use [SCL90, Theorem 1] and its proof to solve AP$_\infty$ for general $S \in \mathcal{S}^-_{n,p,m}$. Among some modifications, one of our main tasks is to prove that the optimal solution indeed lies in $\bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}$, since [SCL90] only considers the poles of the resulting transfer function.

First, we need to compute the greatest hankel singular value $\sigma_1$ of $G(S)$ with the formula given in Theorem 3.4, which is restricted to minimal standard systems. However, we
avoid the determination of a minimal realization. To the author's knowledge, it was never shown that the nonzero hankel singular values of $G(S)$ equal the square roots of the nonzero eigenvalues of $\mathcal{C}_S\mathcal{O}_S$ independently of the realization $S$ of $G(S)$. This equality was only proved for minimal realizations (cf. e.g. [Fra87, Section 5.1, Theorem 3]).

Lemma 3.6. Let $S = (I, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$ be a minimal realization of $G \in \mathcal{T}_{p,m}$ and let $\tilde S = (\tilde E, \tilde A, \tilde B, \tilde C, \tilde D) \in \mathcal{S}^-_{\tilde n,p,m}$ be a realization of $G$. Then

  $\sigma(\mathcal{C}_S\mathcal{O}_S)\setminus\{0\} = \sigma(\mathcal{C}_{\tilde S}\tilde E^*\mathcal{O}_{\tilde S}\tilde E)\setminus\{0\}$.

In particular, the equation $\sigma_1^2 = \max\sigma(\mathcal{C}_{\tilde S}\tilde E^*\mathcal{O}_{\tilde S}\tilde E)$ holds for all $\tilde S \in \mathcal{S}^-_{\tilde n,p,m} \cap \mathcal{S}(G)$.

Proof. We set $S_I := \tilde E^{-1}\tilde S$. By Theorem 2.11 there exists a regular $T \in \mathbb{R}^{\tilde n\times\tilde n}$ such that

  $S_T := TS_IT^{-1} = \left( I, \begin{pmatrix} A_{11} & 0 & A_{13} & 0 \\ A_{21} & A_{22} & A_{23} & A_{24} \\ 0 & 0 & A_{33} & 0 \\ 0 & 0 & A_{43} & A_{44} \end{pmatrix}, \begin{pmatrix} B_1 \\ B_2 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} C_1 & 0 & C_3 & 0 \end{pmatrix}, D \right)$,

where $S_1 := (I, A_{11}, B_1, C_1, D)$ is a minimal realization of $G$. The condition $\sigma(\tilde E, \tilde A) = \sigma(\tilde E^{-1}\tilde A) \subseteq \mathbb{C}_>$ implies

  $\sigma\begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix} \subseteq \mathbb{C}_>$, $\sigma\begin{pmatrix} A_{11} & A_{13} \\ 0 & A_{33} \end{pmatrix} \subseteq \mathbb{C}_>$, $\sigma(A_{11}) \subseteq \mathbb{C}_>$.

By Theorem 2.12 there exist unique solutions $\begin{pmatrix} X_1 & X_2 \\ X_2^* & X_4 \end{pmatrix}$ and $\begin{pmatrix} Y_1 & Y_2 \\ Y_2^* & Y_4 \end{pmatrix}$ of

  $\begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix}\begin{pmatrix} X_1 & X_2 \\ X_2^* & X_4 \end{pmatrix} + \begin{pmatrix} X_1 & X_2 \\ X_2^* & X_4 \end{pmatrix}\begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix}^* = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}\begin{pmatrix} B_1 \\ B_2 \end{pmatrix}^*$,
  $\begin{pmatrix} A_{11} & A_{13} \\ 0 & A_{33} \end{pmatrix}^*\begin{pmatrix} Y_1 & Y_2 \\ Y_2^* & Y_4 \end{pmatrix} + \begin{pmatrix} Y_1 & Y_2 \\ Y_2^* & Y_4 \end{pmatrix}\begin{pmatrix} A_{11} & A_{13} \\ 0 & A_{33} \end{pmatrix} = \begin{pmatrix} C_1^* \\ C_3^* \end{pmatrix}\begin{pmatrix} C_1 & C_3 \end{pmatrix}$.

Thus $X_1 = \mathcal{C}_{S_1}$, $Y_1 = \mathcal{O}_{S_1}$, and $\mathcal{C}_{S_T}$ and $\mathcal{O}_{S_T}$ are the embeddings of $\begin{pmatrix} X_1 & X_2 \\ X_2^* & X_4 \end{pmatrix}$ into the controllable block rows/columns and of $\begin{pmatrix} Y_1 & Y_2 \\ Y_2^* & Y_4 \end{pmatrix}$ into the observable ones, padded with zeros elsewhere, so that $\sigma(\mathcal{C}_{S_T}\mathcal{O}_{S_T})\setminus\{0\} = \sigma(X_1Y_1)\setminus\{0\}$. By Theorem 2.11 and Lemma 2.13 there exists a regular $P \in \mathbb{R}^{n\times n}$ with $\mathcal{C}_{S_1} = P^{-1}\mathcal{C}_S P^{-*}$,
$\mathcal{O}_{S_1} = P^*\mathcal{O}_S P$. We conclude

  $\sigma(\mathcal{C}_{\tilde S}\tilde E^*\mathcal{O}_{\tilde S}\tilde E)\setminus\{0\} = \sigma(\mathcal{C}_{S_I}\mathcal{O}_{S_I})\setminus\{0\} = \sigma(T^{-1}\mathcal{C}_{S_T}\mathcal{O}_{S_T}T)\setminus\{0\} = \sigma(\mathcal{C}_{S_T}\mathcal{O}_{S_T})\setminus\{0\} = \sigma(X_1Y_1)\setminus\{0\} = \sigma(\mathcal{C}_{S_1}\mathcal{O}_{S_1})\setminus\{0\} = \sigma(\mathcal{C}_S\mathcal{O}_S)\setminus\{0\}$.

The next theorem is one of our main results. Together with Theorem 3.1 it gives suboptimal solutions of AP$_\infty$ and, under an additional regularity condition, an optimal solution. Theorem 3.14 solves AP$_\infty$ if the regularity condition is not fulfilled.

Theorem 3.7. Let $S = (E, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$. Choose $\gamma \geq \sigma_1$ ($\sigma_1$ as in Lemma 3.6) and set

  $R_{S,\gamma} := \mathcal{O}_S E\,\mathcal{C}_S E^* - \gamma^2 I$,
  $E_{S,\gamma} := E^*R_{S,\gamma}$,
  $B_{S,\gamma} := E^*\mathcal{O}_S B$,
  $C_{S,\gamma} := C\,\mathcal{C}_S E^*$,
  $A_{S,\gamma} := C^*C_{S,\gamma} - A^*R_{S,\gamma}$.

If $(E_{S,\gamma}, A_{S,\gamma})$ is regular, e.g. if $\gamma > \sigma_1$, then $S_{S,\gamma} := (E_{S,\gamma}, A_{S,\gamma}, B_{S,\gamma}, C_{S,\gamma}, D) \in \mathcal{S}^+_{n,p,m}$ and $S_{S,\gamma}$ fulfils $\sigma_1 \leq \|G(S) - G(S_{S,\gamma})\|_\infty \leq \gamma$.

Proof. First, let $E = I$. In this and the following proofs we use the equation

  $A_{S,\gamma} = C^*C_{S,\gamma} - A^*R_{S,\gamma} = B_{S,\gamma}B^* - R_{S,\gamma}A^* = \gamma^2 A^* + \mathcal{O}_S A\,\mathcal{C}_S$,  (3.1)

which holds because of

  $C^*C_{S,\gamma} - A^*R_{S,\gamma} = C^*C\,\mathcal{C}_S - A^*\mathcal{O}_S\mathcal{C}_S + \gamma^2 A^* = \gamma^2 A^* + \mathcal{O}_S A\,\mathcal{C}_S = \gamma^2 A^* + \mathcal{O}_S BB^* - \mathcal{O}_S\mathcal{C}_S A^* = B_{S,\gamma}B^* - R_{S,\gamma}A^*$,

using (2.4) in the form $C^*C = A^*\mathcal{O}_S + \mathcal{O}_S A$ and $BB^* = A\mathcal{C}_S + \mathcal{C}_S A^*$. Thus $A_{S,\gamma}$ is indeed the same as in [SCL90, Theorem 1]. To apply [SCL90, Theorem 1], we first construct an antistable system $\bar S$ with a hankel singular value greater than $\gamma$. This method was proposed in [Glo84, Remark 8.4]. Let $\gamma_1 > \gamma$. We set $S_1 := (1, \tfrac12, \sqrt{\gamma_1}, \sqrt{\gamma_1}, 0)$, for which $\mathcal{C}_{S_1} = \mathcal{O}_{S_1} = \gamma_1$, and

  $\bar S := \left( I, \begin{pmatrix} A & 0 \\ 0 & \tfrac12 \end{pmatrix}, \begin{pmatrix} B & 0 \\ 0 & \sqrt{\gamma_1} \end{pmatrix}, \begin{pmatrix} C & 0 \\ 0 & \sqrt{\gamma_1} \end{pmatrix}, \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \right) \in \mathcal{S}^-_{n+1,p+1,m+1}$.

Then we have

  $\mathcal{C}_{\bar S} = \begin{pmatrix} \mathcal{C}_S & 0 \\ 0 & \gamma_1 \end{pmatrix}$, $\mathcal{O}_{\bar S} = \begin{pmatrix} \mathcal{O}_S & 0 \\ 0 & \gamma_1 \end{pmatrix}$.

Thus $\gamma_1$ is the greatest hankel singular value of $G(\bar S)$ and $\sigma_1 \leq \gamma < \gamma_1$. If we apply [SCL90, Theorem 1] to $\bar S$ with $K = 0 \in \mathcal{RH}_\infty$, then the transfer function of

  $\bar S_{\bar S,\gamma} = \left( \begin{pmatrix} E_{S,\gamma} & 0 \\ 0 & E_{S_1,\gamma} \end{pmatrix}, \begin{pmatrix} A_{S,\gamma} & 0 \\ 0 & A_{S_1,\gamma} \end{pmatrix}, \begin{pmatrix} B_{S,\gamma} & 0 \\ 0 & B_{S_1,\gamma} \end{pmatrix}, \begin{pmatrix} C_{S,\gamma} & 0 \\ 0 & C_{S_1,\gamma} \end{pmatrix}, \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} \right)$
is a suboptimal solution, i.e. $\|G(\bar S) - G(\bar S_{\bar S,\gamma})\|_\infty \leq \gamma$. Note that [SCL90, Theorem 1] does not state that for every $K \in \mathcal{RH}_\infty$ with $\|K\|_\infty \leq 1$ the resulting transfer function is a suboptimal solution. However, this was shown in the proof (see also [Glo84, Remark 8.4]). With

  $G(\bar S) - G(\bar S_{\bar S,\gamma}) = \begin{pmatrix} G(S) - G(S_{S,\gamma}) & 0 \\ 0 & G(S_1) - G(S_{S_1,\gamma}) \end{pmatrix}$

we conclude $\|G(S) - G(S_{S,\gamma})\|_\infty \leq \|G(\bar S) - G(\bar S_{\bar S,\gamma})\|_\infty \leq \gamma$.

Now we show that $S_{S,\gamma} \in \mathcal{S}^+_{n,p,m}$. [SCL90, Theorem 1] states that $G(\bar S_{\bar S,\gamma})$ has at most one pole $\lambda$ in $\mathbb{C}_\geq$ (counting multiplicities). The pole of $G(S_{S_1,\gamma})$ is

  $\bar\lambda = \frac{(\gamma^2 + \gamma_1^2)/2}{\gamma_1^2 - \gamma^2} > 0$

because $\gamma_1 > \gamma$. Thus $\lambda = \bar\lambda$ and $G(S_{S,\gamma})$ has no poles in $\mathbb{C}_\geq$.

Let $\lambda \in \mathbb{C}_\geq$. We use Lemma 2.15 to show that $\lambda \notin \sigma(E_{S,\gamma}, A_{S,\gamma})$. Let $v$ satisfy $(\lambda E_{S,\gamma} - A_{S,\gamma})v = 0$ and $C_{S,\gamma}v = 0$. Hence with $A_{S,\gamma} = C^*C_{S,\gamma} - A^*E_{S,\gamma}$ (note $E = I$) we get $(\lambda I + A^*)E_{S,\gamma}v = 0$. Because of $\sigma(-A^*) \subseteq \mathbb{C}_<$ we conclude $E_{S,\gamma}v = 0$. Thus

  $\forall s \in \mathbb{C}\colon\ (sE_{S,\gamma} - A_{S,\gamma})v = (sE_{S,\gamma} + A^*E_{S,\gamma} - C^*C_{S,\gamma})v = 0$.

Since the matrix pair $(E_{S,\gamma}, A_{S,\gamma})$ is regular, we have $v = 0$. Now let $v$ satisfy $(\lambda E_{S,\gamma}^* - A_{S,\gamma}^*)v = 0$ and $B_{S,\gamma}^*v = 0$. Hence with $A_{S,\gamma} = B_{S,\gamma}B^* - E_{S,\gamma}A^*$, $(\lambda I + A)E_{S,\gamma}^*v = 0$ follows. Again we deduce $v = 0$. Therefore $\lambda \notin \sigma(E_{S,\gamma}, A_{S,\gamma})$ and thus $S_{S,\gamma} \in \mathcal{S}^+_{n,p,m}$ follows.

Finally we consider a general regular $E \in \mathbb{R}^{n\times n}$. We have already proved that the assertion holds for $E^{-1}S$. With $\mathcal{C}_{E^{-1}S} = \mathcal{C}_S$ and $\mathcal{O}_{E^{-1}S} = E^*\mathcal{O}_S E$ (Lemma 2.13) we conclude

  $E_{E^{-1}S,\gamma} = E^*\mathcal{O}_S E\,\mathcal{C}_S - \gamma^2 I$, $B_{E^{-1}S,\gamma} = E^*\mathcal{O}_S E\,E^{-1}B = E^*\mathcal{O}_S B$, $C_{E^{-1}S,\gamma} = C\,\mathcal{C}_S$, $A_{E^{-1}S,\gamma} = C^*C_{E^{-1}S,\gamma} - (E^{-1}A)^*E_{E^{-1}S,\gamma}$.

Now we see that $S_{S,\gamma} = S_{E^{-1}S,\gamma}E^*$. Thus the assertion also holds for $S_{S,\gamma} \sim S_{E^{-1}S,\gamma}$. If $\gamma > \sigma_1$, the matrix pair $(E_{S,\gamma}, A_{S,\gamma})$ is regular, since $R_{S,\gamma}$, $E$ and thus $E_{S,\gamma}$ are regular.

We formulate some equivalent conditions for the regularity condition on $(E_{S,\sigma_1}, A_{S,\sigma_1})$ in Theorem 3.7.

Corollary 3.8. The following three statements are equivalent:
(i) $(E_{S,\sigma_1}, A_{S,\sigma_1})$ is regular,
(ii) $\sigma(E_{S,\sigma_1}, A_{S,\sigma_1}) \subseteq \mathbb{C}_< \cup \{\infty\}$,
(iii) $A_{S,\sigma_1}$ is regular.

Proof. The implication (i) $\Rightarrow$ (ii) follows from Theorem 3.7. Statement (ii) implies $0 \notin \sigma(E_{S,\sigma_1}, A_{S,\sigma_1})$ and thus $0\cdot E_{S,\sigma_1} - A_{S,\sigma_1} = -A_{S,\sigma_1}$ is regular, which is (iii).
Finally, (iii) implies $0 \notin \sigma(E_{S,\sigma_1}, A_{S,\sigma_1})$, therefore $0 \in \rho(E_{S,\sigma_1}, A_{S,\sigma_1}) \neq \emptyset$ and thus (i) holds.
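In the scalar case the construction of Theorem 3.7 with $\gamma = \sigma_1$ can be followed by hand. A sketch with hypothetical data $G(s) = 1/(s-1)$, using the sign conventions reconstructed above: $E_{S,\sigma_1} = 0$, but $A_{S,\sigma_1} \neq 0$ is regular, so the pair is regular by Corollary 3.8(iii); the resulting order-1 descriptor system realizes the constant $-1/2$, and the error is all-pass with modulus $\sigma_1$:

```python
import numpy as np

a, b, c, d = 1.0, 1.0, 1.0, 0.0        # G(s) = 1/(s - 1), antistable
Xc = b * b / (2 * a)                    # A Xc + Xc A^T = B B^T
Xo = c * c / (2 * a)                    # A^T Xo + Xo A = C^T C
sigma1 = np.sqrt(Xo * Xc)               # = 1/2

Ehat = Xo * Xc - sigma1 ** 2            # R_{S,sigma1} = 0: E_{S,sigma1} = 0
Bhat = Xo * b
Chat = c * Xc
Ahat = sigma1 ** 2 * a + Xo * a * Xc    # (3.1): gamma^2 A^T + O A C, nonzero

# G_hat(s) = Chat (s*Ehat - Ahat)^{-1} Bhat + d is the constant -1/2
Ghat = -Chat * Bhat / Ahat + d

w = np.linspace(-50.0, 50.0, 10001)
err = np.abs(1.0 / (1j * w - a) + d - Ghat)
# |G(iw) - G_hat| = sigma1 for every w: the error is all-pass
```

This matches the classical Nehari answer for this example: the best stable approximation of $1/(s-1)$ is the constant $-1/2$, with error exactly $\sigma_1 = 1/2$ at every frequency.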
To get optimal solutions if $(E_{S,\sigma_1}, A_{S,\sigma_1})$ is singular, we analyze $S_{S,\gamma}$ in Theorem 3.7 for $\gamma \to \sigma_1^+$. For the proofs we use the realization $S_b \sim S$ of $G(S)$ as in Theorem 2.14 with $h = \sigma_1$, because we will take advantage of the special structure of $S_{S_b,\sigma_1}$. First we investigate the relation between $S_{S,\gamma}$ and $S_{S_b,\gamma}$.

Lemma 3.9. Let $S = (E, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$ and let $P, Q \in \mathbb{R}^{n\times n}$ be regular. Then $S_{PSQ,\gamma} = Q^*S_{S,\gamma}P^*$ holds for all $\gamma \in \mathbb{R}$.

Proof. We set $\tilde S := PSQ$. By Lemma 2.13 the equations $\mathcal{C}_{\tilde S} = Q^{-1}\mathcal{C}_S Q^{-*}$, $\mathcal{O}_{\tilde S} = P^{-*}\mathcal{O}_S P^{-1}$ hold. This implies

  $B_{\tilde S,\gamma} = \tilde E^*\mathcal{O}_{\tilde S}\tilde B = Q^*E^*P^*\,P^{-*}\mathcal{O}_SP^{-1}\,PB = Q^*B_{S,\gamma}$,
  $C_{\tilde S,\gamma} = \tilde C\mathcal{C}_{\tilde S}\tilde E^* = CQ\,Q^{-1}\mathcal{C}_SQ^{-*}\,Q^*E^*P^* = C_{S,\gamma}P^*$,
  $E_{\tilde S,\gamma} = \tilde E^*(\mathcal{O}_{\tilde S}\tilde E\,\mathcal{C}_{\tilde S}\tilde E^* - \gamma^2 I) = Q^*E^*(\mathcal{O}_S E\,\mathcal{C}_S E^* - \gamma^2 I)P^* = Q^*E_{S,\gamma}P^*$,
  $A_{\tilde S,\gamma} = \tilde C^*C_{\tilde S,\gamma} - \tilde A^*R_{\tilde S,\gamma} = Q^*(C^*C_{S,\gamma} - A^*R_{S,\gamma})P^* = Q^*A_{S,\gamma}P^*$.

Lemma 3.10. Let $S_b = (I, A_b, B_b, C_b, D) \sim S$ be as in Theorem 2.14 with $h = \sigma_1$. Let $T_1 \in \mathbb{R}^{n\times n}$ be regular such that

  $T_1(E^{-1}S)T_1^{-1} = S_b = \left( I, \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}, \begin{pmatrix} C_1 & C_2 \end{pmatrix}, D \right)$, $\mathcal{C}_{S_b} = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \sigma_1 I_r \end{pmatrix}$, $\mathcal{O}_{S_b} = \begin{pmatrix} \Sigma_2 & 0 \\ 0 & \sigma_1 I_r \end{pmatrix}$.

Then the following three statements hold:
(a) $B_2B_2^* = C_2^*C_2$,
(b) $\ker B_2^* = \ker C_2$,
(c) $(E_{S,\sigma_1}, A_{S,\sigma_1})$ singular $\Leftrightarrow$ $\ker C_2 \neq \{0\}$ $\Leftrightarrow$ $\ker B_2^* \neq \{0\}$.

Proof. We get $B_2B_2^* = C_2^*C_2$ by subtracting the equations $\sigma_1 A_{22} + \sigma_1 A_{22}^* = B_2B_2^*$ and $\sigma_1 A_{22}^* + \sigma_1 A_{22} = C_2^*C_2$ resulting from (2.4). Therefore

  $v \in \ker B_2^* \Leftrightarrow B_2^*v = 0 \Leftrightarrow v^*B_2B_2^*v = 0 \Leftrightarrow v^*C_2^*C_2v = 0 \Leftrightarrow C_2v = 0 \Leftrightarrow v \in \ker C_2$.

By Lemma 3.9 and with $\Gamma := \Sigma_2\Sigma_1 - \sigma_1^2 I$ we have $S_{S_b,\sigma_1} = T_1^{-*}S_{S,\sigma_1}E^{-*}T_1^*$ and, from the definitions applied to $S_b$ together with (3.1),

  $S_{S_b,\sigma_1} = \left( \begin{pmatrix} \Gamma & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} C_1^*C_1\Sigma_1 - A_{11}^*\Gamma & \sigma_1 C_1^*C_2 \\ C_2^*C_1\Sigma_1 - A_{12}^*\Gamma & \sigma_1 C_2^*C_2 \end{pmatrix}, \begin{pmatrix} \Sigma_2 B_1 \\ \sigma_1 B_2 \end{pmatrix}, \begin{pmatrix} C_1\Sigma_1 & \sigma_1 C_2 \end{pmatrix}, D \right)$.

Therefore $(E_{S,\sigma_1}, A_{S,\sigma_1})$ is singular if and only if $(E_{S_b,\sigma_1}, A_{S_b,\sigma_1})$ is singular.
Let $\ker C_2 \neq \{0\}$. Then there exists $v \neq 0$ with $C_2v = 0$, i.e. for all $s \in \mathbb{C}$

  $(sE_{S_b,\sigma_1} - A_{S_b,\sigma_1})\begin{pmatrix} 0 \\ v \end{pmatrix} = -\begin{pmatrix} \sigma_1 C_1^*C_2v \\ \sigma_1 C_2^*C_2v \end{pmatrix} = 0$.

That means $(E_{S_b,\sigma_1}, A_{S_b,\sigma_1})$ is singular.

Let $\ker C_2 = \{0\}$. Then $C_2^*C_2$ is regular. Let $s \in \mathbb{C}$. The matrix $sE_{S_b,\sigma_1} - A_{S_b,\sigma_1}$ is singular if and only if its Schur complement

  $s\Gamma - (C_1^*C_1\Sigma_1 - A_{11}^*\Gamma) + \sigma_1 C_1^*C_2(\sigma_1 C_2^*C_2)^{-1}(C_2^*C_1\Sigma_1 - A_{12}^*\Gamma) =: (sI - M)\Gamma$

of $-\sigma_1 C_2^*C_2$ in $sE_{S_b,\sigma_1} - A_{S_b,\sigma_1}$ is singular, for a suitable matrix $M$ (Theorem 2.5). This is equivalent to $s \in \sigma(M)$. Hence $(E_{S_b,\sigma_1}, A_{S_b,\sigma_1})$ has a finite number of eigenvalues and consequently is regular.

Similarly to the proof of [SCL90, Theorem 1], we want to eliminate the singular part of the system $S_{S,\sigma_1}$ and show that the resulting system is an optimal solution.

Lemma 3.11. Let $(E_{S,\sigma_1}, A_{S,\sigma_1})$ be singular. Then there exists a regular $T \in \mathbb{R}^{n\times n}$ such that

  $T\,S_{E^{-1}S,\sigma_1}\,T^{-1} = \left( \begin{pmatrix} \tilde E_{11} & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} \tilde A_{11} & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} \tilde B_1 \\ 0 \end{pmatrix}, \begin{pmatrix} \tilde C_1 & 0 \end{pmatrix}, D \right)$,

where $(\tilde E_{11}, \tilde A_{11})$ is regular. (Recall $S_{E^{-1}S,\sigma_1} = S_{S,\sigma_1}E^{-*}$ from the proof of Theorem 3.7.)

Proof. Our proof is based on the proof of [SCL90, Theorem 1]. Let $S_b$, $T_1$ and $\Gamma$ be as in Lemma 3.10. By Lemma 3.10 we have $d := \dim\ker C_2 > 0$. Therefore there exists an orthogonal $W \in \mathbb{R}^{r\times r}$ such that $(\hat C\ \ 0_{p\times d}) = C_2W$ and $\ker\hat C = \{0\}$ (e.g. by a singular value decomposition). We define $(\hat B_2\ \ \hat B_3) := B_2^*W$ with appropriate dimensions. By Lemma 3.10 we have $\hat B_3 = 0$. Now we set $T_2 := \begin{pmatrix} I & 0 \\ 0 & W \end{pmatrix}$, conclude
$C_2^*C_1\Sigma_1 - A_{12}^*\Gamma = \sigma_1 B_2B_1^*$ from (3.1), and compute

  $T_2^*B_{S_b,\sigma_1} = T_2^*\mathcal{O}_{S_b}B_b = \begin{pmatrix} \Sigma_2 B_1 \\ \sigma_1 W^*B_2 \end{pmatrix} = \begin{pmatrix} \Sigma_2 B_1 \\ \sigma_1\hat B_2^* \\ 0 \end{pmatrix} =: \begin{pmatrix} \tilde B_1 \\ 0_{d\times m} \end{pmatrix}$,
  $C_{S_b,\sigma_1}T_2 = C_b\mathcal{C}_{S_b}T_2 = \begin{pmatrix} C_1\Sigma_1 & \sigma_1 C_2W \end{pmatrix} = \begin{pmatrix} C_1\Sigma_1 & \sigma_1\hat C & 0 \end{pmatrix} =: \begin{pmatrix} \tilde C_1 & 0_{p\times d} \end{pmatrix}$,
  $T_2^*E_{S_b,\sigma_1}T_2 = \begin{pmatrix} \Gamma & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} =: \begin{pmatrix} \tilde E_{11} & 0 \\ 0 & 0_{d\times d} \end{pmatrix}$,
  $T_2^*A_{S_b,\sigma_1}T_2 = \begin{pmatrix} C_1^*C_1\Sigma_1 - A_{11}^*\Gamma & \sigma_1 C_1^*\hat C & 0 \\ \sigma_1\hat B_2^*B_1^* & \sigma_1\hat C^*\hat C & 0 \\ 0 & 0 & 0 \end{pmatrix} =: \begin{pmatrix} \tilde A_{11} & 0 \\ 0 & 0_{d\times d} \end{pmatrix}$.

Since $\hat C^*\hat C$ is regular, there exists $s \in \mathbb{C}$ such that the Schur complement of $\sigma_1\hat C^*\hat C$ in $s\tilde E_{11} - \tilde A_{11}$ is regular; Theorem 2.5 then states that $s\tilde E_{11} - \tilde A_{11}$ is regular. Thus $(\tilde E_{11}, \tilde A_{11})$ is regular. With $T := T_2^*T_1^{-*}$ the proof is finished.

Lemma 3.12. We set $\tilde S_1 := (\tilde E_{11}, \tilde A_{11}, \tilde B_1, \tilde C_1, D)$. For every $\omega \in \mathbb{R}$ with $i\omega \in \rho(\tilde E_{11}, \tilde A_{11})$ the inequality $\|G(S)(i\omega) - G(\tilde S_1)(i\omega)\| \leq \sigma_1$ holds. Moreover, $G(\tilde S_1)$ has no poles in $\mathbb{C}_\geq$.

Proof. First we show that $G(S_{S,\gamma})(s) \to G(\tilde S_1)(s)$ as $\gamma \to \sigma_1^+$, for all $s \in \mathbb{C}$ except a finite number. Let $T$ be as in Lemma 3.11. We rewrite $G(S_{S,\gamma})(s)$ for $\gamma > \sigma_1$ and $s \in \mathbb{C}$ as

  $G(S_{S,\gamma})(s) = G\big(T(S_{S,\gamma}E^{-*})T^{-1}\big)(s) = \begin{pmatrix} \tilde C_1 & 0 \end{pmatrix}\Big(T\big(sE_{S,\gamma} - A_{S,\gamma}\big)E^{-*}T^{-1}\Big)^{-1}\begin{pmatrix} \tilde B_1 \\ 0 \end{pmatrix} + D$,

where

  $E_{S,\gamma} = E_{S,\sigma_1} + (\sigma_1^2 - \gamma^2)E^*$, $A_{S,\gamma} = A_{S,\sigma_1} - (\sigma_1^2 - \gamma^2)A^*$.
We set $\begin{pmatrix} M_1 & M_2 \\ M_3 & M_4 \end{pmatrix} := T(A^*E^{-*})T^{-1}$ with appropriately dimensioned $M_i$, so that

  $T(sE_{S,\gamma} - A_{S,\gamma})E^{-*}T^{-1} = \begin{pmatrix} s\tilde E_{11} - \tilde A_{11} + (\sigma_1^2 - \gamma^2)(sI + M_1) & (\sigma_1^2 - \gamma^2)M_2 \\ (\sigma_1^2 - \gamma^2)M_3 & (\sigma_1^2 - \gamma^2)(sI + M_4) \end{pmatrix}$.

In addition, let $s \notin \sigma(-M_4) \cup \sigma(\tilde E_{11}, \tilde A_{11})$. Then

  $L_\gamma := s\tilde E_{11} - \tilde A_{11} + (\sigma_1^2 - \gamma^2)\big(sI + M_1 - M_2(sI + M_4)^{-1}M_3\big)$

is the Schur complement of the block $(\sigma_1^2 - \gamma^2)(sI + M_4)$ in this matrix. By Theorem 2.5 the matrix $L_\gamma$ is regular because $sE_{S,\gamma} - A_{S,\gamma}$ is regular. The functions $\gamma \mapsto L_\gamma$ and thus $\gamma \mapsto G(S_{S,\gamma})(s) = \tilde C_1 L_\gamma^{-1}\tilde B_1 + D$ are continuous on $(\sigma_1, \infty)$. Note that $L_{\sigma_1} = s\tilde E_{11} - \tilde A_{11}$ is regular. We obtain

  $\lim_{\gamma\to\sigma_1^+} G(S_{S,\gamma})(s) = \tilde C_1(s\tilde E_{11} - \tilde A_{11})^{-1}\tilde B_1 + D = G(\tilde S_1)(s)$.

Therefore, for all $i\omega \notin \sigma(-M_4) \cup \sigma(\tilde E_{11}, \tilde A_{11})$,

  $\|G(S)(i\omega) - G(\tilde S_1)(i\omega)\| = \lim_{\gamma\to\sigma_1^+}\|G(S)(i\omega) - G(S_{S,\gamma})(i\omega)\| \leq \lim_{\gamma\to\sigma_1^+}\gamma = \sigma_1$;

by continuity this inequality extends to all $i\omega \in \rho(\tilde E_{11}, \tilde A_{11})$.

Now we show that $G(\tilde S_1)$ has no poles in $\mathbb{C}_\geq$. By [Fra87, Section 2.3] we have $\|G(s)\| \leq \|G\|_\infty$ for all $G \in \mathcal{RH}_\infty$ and $s \in \mathbb{C}_\geq$. Thus we get the upper estimate

  $\|G(S_{S,\gamma})(s)\| \leq \|G(S_{S,\gamma})\|_\infty \leq \|G(S_{S,\gamma}) - G(S)\|_\infty + \|G(S)\|_\infty \leq \gamma + \|G(S)\|_\infty$

for all $s \in \mathbb{C}_\geq$. Hence for all $s \in \mathbb{C}_\geq\setminus(\sigma(\tilde E_{11}, \tilde A_{11}) \cup \sigma(-M_4))$ we have

  $\|G(\tilde S_1)(s)\| = \lim_{\gamma\to\sigma_1^+}\|G(S_{S,\gamma})(s)\| \leq \lim_{\gamma\to\sigma_1^+}\big(\gamma + \|G(S)\|_\infty\big) = \sigma_1 + \|G(S)\|_\infty$.

Thus $G(\tilde S_1)$ has no poles in $\mathbb{C}_\geq$.

Theorem 3.13. The system $\tilde S_1$ given in Lemma 3.12 solves AP$_\infty$.

Proof. We use the notation of the proof of Lemma 3.11. By Theorem 3.7 and Lemma 3.12 it remains to prove that $\sigma(\tilde E_{11}, \tilde A_{11}) \subseteq \mathbb{C}_< \cup \{\infty\}$. Let $s \in \mathbb{C}_\geq$. We apply Lemma 2.15, because $s$ is not a pole of $G(\tilde S_1)$. Let $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ satisfy $\tilde C_1 x = 0$ and $(s\tilde E_{11} - \tilde A_{11})x = 0$.  (3.2)
Hence, by the block structure established in Lemma 3.11,

  $C_{S_b,\sigma_1}T_2\begin{pmatrix} x \\ 0 \end{pmatrix} = 0$ and $T_2^*(sE_{S_b,\sigma_1} - A_{S_b,\sigma_1})T_2\begin{pmatrix} x \\ 0 \end{pmatrix} = 0$.  (3.3)

With $A_{S_b,\sigma_1} = C_b^*C_{S_b,\sigma_1} - A_b^*E_{S_b,\sigma_1}$ and (3.3) we get

  $0 = T_2^*\big(sE_{S_b,\sigma_1} + A_b^*E_{S_b,\sigma_1} - C_b^*C_{S_b,\sigma_1}\big)T_2\begin{pmatrix} x \\ 0 \end{pmatrix} = T_2^*(sI + A_b^*)E_{S_b,\sigma_1}T_2\begin{pmatrix} x \\ 0 \end{pmatrix}$.

Due to $S_b \in \mathcal{S}^-_{n,p,m}$ we have $s \notin \sigma(-A_b^*)$, and $E_{S_b,\sigma_1}T_2\begin{pmatrix} x \\ 0 \end{pmatrix} = 0$ follows. Thus we get $\Gamma x_1 = 0$ and hence $x_1 = 0$ by the regularity of $\Gamma$. Now we deduce from (3.2) that $\sigma_1\hat C^*\hat Cx_2 = 0$, hence $x_2 = 0$ by the regularity of $\hat C^*\hat C$. Thus $x = 0$. Analogously, every $x \in \mathbb{R}^{n-d}$ which fulfils $(s\tilde E_{11}^* - \tilde A_{11}^*)x = 0$ and $\tilde B_1^*x = 0$ equals zero. Thus $s \notin \sigma(\tilde E_{11}, \tilde A_{11})$ and $\sigma(\tilde E_{11}, \tilde A_{11}) \subseteq \mathbb{C}_< \cup \{\infty\}$.

The optimal solution given in the theorem above is still based on the balanced realization. Our second main contribution is the following result, which only needs one singular value decomposition.

Theorem 3.14. Let $S = (E, A, B, C, D) \in \mathcal{S}^-_{n,p,m}$. Suppose the pair $(E_{S,\sigma_1}, A_{S,\sigma_1})$ given in Theorem 3.7 is singular. Then there exist orthogonal $U, V \in \mathbb{R}^{n\times n}$ with

  $UA_{S,\sigma_1}V = \begin{pmatrix} \hat A_{11} & \hat A_{12} \\ 0 & 0 \end{pmatrix}$ and regular $\hat A_{11}$.  (3.4)

All regular $U, V \in \mathbb{R}^{n\times n}$ satisfying (3.4) fulfil

  $UE_{S,\sigma_1}V = \begin{pmatrix} \hat E_{11} & \hat E_{12} \\ 0 & 0 \end{pmatrix}$, $UB_{S,\sigma_1} = \begin{pmatrix} \hat B_1 \\ 0 \end{pmatrix}$, $C_{S,\sigma_1}V = \begin{pmatrix} \hat C_1 & \hat C_2 \end{pmatrix}$.

The system $\tilde S := (\hat E_{11}, \hat A_{11}, \hat B_1, \hat C_1, D)$ solves AP$_\infty$.

Proof. Let $T \in \mathbb{R}^{n\times n}$ be as in Lemma 3.11. Then we obtain

  $T(E_{S,\sigma_1}E^{-*})T^{-1} = \begin{pmatrix} \tilde E_{11} & 0 \\ 0 & 0 \end{pmatrix}$, $T(A_{S,\sigma_1}E^{-*})T^{-1} = \begin{pmatrix} \tilde A_{11} & 0 \\ 0 & 0 \end{pmatrix}$
with $\tilde A_{11} \in \mathbb{R}^{(n-d)\times(n-d)}$ and $\sigma(\tilde E_{11}, \tilde A_{11}) \subseteq \mathbb{C}_< \cup \{\infty\}$ (Theorem 3.13). In particular, $\tilde A_{11}$ is regular. The matrices $A_{S,\sigma_1}$ and $TA_{S,\sigma_1}E^{-*}T^{-1}$ have the same rank $n - d$. By the singular value decomposition there exist orthogonal $U, V \in \mathbb{R}^{n\times n}$ such that (3.4) holds with regular $\hat A_{11} \in \mathbb{R}^{(n-d)\times(n-d)}$.

Let $U, V \in \mathbb{R}^{n\times n}$ be regular and satisfy (3.4). We define

  $UE_{S,\sigma_1}V =: \begin{pmatrix} \hat E_{11} & \hat E_{12} \\ \hat E_{21} & \hat E_{22} \end{pmatrix}$, $UB_{S,\sigma_1} =: \begin{pmatrix} \hat B_1 \\ \hat B_2 \end{pmatrix}$, $C_{S,\sigma_1}V =: \begin{pmatrix} \hat C_1 & \hat C_2 \end{pmatrix}$.

For all $s$ with $s\tilde E_{11} - \tilde A_{11}$ regular we have

  $\big(sUE_{S,\sigma_1}V - UA_{S,\sigma_1}V\big)[\mathbb{R}^n] = UT^{-1}\left( s\begin{pmatrix} \tilde E_{11} & 0 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} \tilde A_{11} & 0 \\ 0 & 0 \end{pmatrix} \right)T\,E^*V\,[\mathbb{R}^n] = UT^{-1}\big[\mathbb{R}^{n-d}\times\{0\}^d\big]$.  (3.5)

We obtain the last equality because it holds for all such $s$, in particular for $s = 0$ (note $0 \in \rho(\tilde E_{11}, \tilde A_{11})$), where the left-hand side is the range of $-UA_{S,\sigma_1}V$, which equals $\mathbb{R}^{n-d}\times\{0\}^d$ by (3.4). That implies $\hat E_{21} = 0$, $\hat E_{22} = 0$ and finally $\sigma(\hat E_{11}, \hat A_{11}) = \sigma(\tilde E_{11}, \tilde A_{11})$, which means $\tilde S \in \mathcal{S}^+_{n-d,p,m}$. Furthermore, we have $\hat B_2 = 0$ because

  $\begin{pmatrix} \hat B_1 \\ \hat B_2 \end{pmatrix}[\mathbb{R}^m] = UB_{S,\sigma_1}[\mathbb{R}^m] \subseteq UT^{-1}\big[\mathbb{R}^{n-d}\times\{0\}^d\big] \stackrel{(3.5)}{=} \mathbb{R}^{n-d}\times\{0\}^d$.

Finally, we show that $G(\tilde S) = G(\tilde S_1)$. Let $s \in \rho(\hat E_{11}, \hat A_{11})$ and $Y_1 := \tilde C_1(s\tilde E_{11} - \tilde A_{11})^{-1}$. A direct computation with $Y := (Y_1\ \ 0)$, using (3.5) and the block structures above, yields $\hat C_1(s\hat E_{11} - \hat A_{11})^{-1}\hat B_1 = \tilde C_1(s\tilde E_{11} - \tilde A_{11})^{-1}\tilde B_1$, i.e. $G(\tilde S)(s) = G(\tilde S_1)(s)$. By Theorem 3.13 we get $\|G(S) - G(\tilde S)\|_\infty = \sigma_1$.

Under the additional condition $E = I$ we can use the less complex Schur decomposition to attain (3.4) in Theorem 3.14.
Lemma 3.15. In the situation of Theorem 3.14, where additionally $E = I$, we obtain regular matrices $U$ and $V := U^*$ such that (3.4) holds by applying the Schur decomposition to $A_{S,\sigma_1}$.

Proof. Let $T \in \mathbb{R}^{n\times n}$ be as in Lemma 3.11. Then we obtain

  $TE_{S,\sigma_1}T^{-1} = \begin{pmatrix} \tilde E_{11} & 0 \\ 0 & 0 \end{pmatrix}$, $TA_{S,\sigma_1}T^{-1} = \begin{pmatrix} \tilde A_{11} & 0 \\ 0 & 0 \end{pmatrix}$

with $\tilde A_{11} \in \mathbb{R}^{(n-d)\times(n-d)}$ and $\sigma(\tilde E_{11}, \tilde A_{11}) \subseteq \mathbb{C}_< \cup \{\infty\}$. In particular, $\tilde A_{11}$ is regular. The matrices $A_{S,\sigma_1}$ and $TA_{S,\sigma_1}T^{-1}$ have the same eigenvalues (counting multiplicities). By the (ordered) Schur decomposition there exists an orthogonal $U \in \mathbb{R}^{n\times n}$ such that

  $UA_{S,\sigma_1}U^* = \begin{pmatrix} \hat A_{11} & \hat A_{12} \\ 0 & \hat A_{22} \end{pmatrix}$,

where $\hat A_{11} \in \mathbb{R}^{(n-d)\times(n-d)}$ is regular and $\sigma(\hat A_{22}) = \{0\}$. This implies $\mathrm{rank}\,\hat A_{11} = \mathrm{rank}\,\tilde A_{11} = \mathrm{rank}\,UA_{S,\sigma_1}U^*$ and thus $\hat A_{22} = 0$.

In summary, we obtain the following result for AP$_\infty$.

Theorem 3.16. Let $S \in \bar{\mathcal{S}}_{n,p,m}$. Then there exist $S_+ \in \mathcal{S}^+_{n_+,p,m}$ and $S_- = (E_-, A_-, B_-, C_-, 0) \in \mathcal{S}^-_{n_-,p,m}$ such that $S \sim S_+ \oplus S_-$. We set $\sigma_1 := \sqrt{\max\sigma\big(E_-^*\mathcal{O}_{S_-}E_-\,\mathcal{C}_{S_-}\big)}$; then

  $\inf_{\hat S \in \bigcup_{\hat n}\mathcal{S}^+_{\hat n,p,m}} \|G(S) - G(\hat S)\|_\infty = \sigma_1$.

We define the matrices

  $R_{S_-,\sigma_1} := \mathcal{O}_{S_-}E_-\mathcal{C}_{S_-}E_-^* - \sigma_1^2 I$, $E_{S_-,\sigma_1} := E_-^*R_{S_-,\sigma_1}$, $B_{S_-,\sigma_1} := E_-^*\mathcal{O}_{S_-}B_-$,
  $C_{S_-,\sigma_1} := C_-\mathcal{C}_{S_-}E_-^*$, $A_{S_-,\sigma_1} := C_-^*C_{S_-,\sigma_1} - A_-^*R_{S_-,\sigma_1}$.

If $(E_{S_-,\sigma_1}, A_{S_-,\sigma_1})$ is regular, then $S_+ \oplus (E_{S_-,\sigma_1}, A_{S_-,\sigma_1}, B_{S_-,\sigma_1}, C_{S_-,\sigma_1}, 0)$ solves AP$_\infty$. If $(E_{S_-,\sigma_1}, A_{S_-,\sigma_1})$ is singular, then there exist orthogonal matrices $U, V \in \mathbb{R}^{n_-\times n_-}$ with

  $UA_{S_-,\sigma_1}V = \begin{pmatrix} \hat A_{11} & \hat A_{12} \\ 0 & 0 \end{pmatrix}$ and regular $\hat A_{11}$.  (3.6)

All regular $U, V$ satisfying (3.6) fulfil

  $UE_{S_-,\sigma_1}V = \begin{pmatrix} \hat E_{11} & \hat E_{12} \\ 0 & 0 \end{pmatrix}$, $UB_{S_-,\sigma_1} = \begin{pmatrix} \hat B_1 \\ 0 \end{pmatrix}$, $C_{S_-,\sigma_1}V = \begin{pmatrix} \hat C_1 & \hat C_2 \end{pmatrix}$,

and the system $S_+ \oplus (\hat E_{11}, \hat A_{11}, \hat B_1, \hat C_1, 0)$ solves AP$_\infty$.
4 Application

Algorithm. We rewrite our results in an algorithmic manner. Note that the solution of AP$_2$ is needed anyway to compute the solution of AP$_\infty$, and thus no extra computation arises.

Input: $S \in \bar{\mathcal{S}}_{n,p,m}$.
Output: $\tilde S_q \in \mathcal{S}^+_{n_q,p,m}$ which solves AP$_q$, for $q \in \{2, \infty\}$.

1. Decompose $S$ into $S_- \in \mathcal{S}^-_{n_-,p,m}$ and $S_+ \in \mathcal{S}^+_{n_+,p,m}$ as in Theorem 3.1 with the help of Theorem 2.3.
2. The system $\tilde S_2 := S_+$ solves AP$_2$.
3. Determine $\tilde{\mathcal{C}} := \mathcal{C}_{S_-}E_-^*$, $\tilde{\mathcal{O}} := E_-^*\mathcal{O}_{S_-}$ and $\sigma_1 = \sqrt{\max\sigma(\tilde{\mathcal{O}}\tilde{\mathcal{C}})}$.
4. Compute $R := \tilde{\mathcal{O}}\tilde{\mathcal{C}} - \sigma_1^2 I$, $\hat E := E_-^*R$, $\hat B := \tilde{\mathcal{O}}B_-$, $\hat C := C_-\tilde{\mathcal{C}}$, $\hat A := C_-^*\hat C - A_-^*R$.
5. If $(\hat E, \hat A)$ is regular (test with Corollary 3.8), then AP$_\infty$ is solved by $\tilde S_\infty := S_+ \oplus (\hat E, \hat A, \hat B, \hat C, 0)$.
6. If $(\hat E, \hat A)$ is singular, then determine orthogonal $U, V \in \mathbb{R}^{n_-\times n_-}$ (SVD) such that $U\hat AV = \begin{pmatrix} \hat A_{11} & \hat A_{12} \\ 0 & 0 \end{pmatrix}$ with regular $\hat A_{11}$. Compute $\hat E_{11}$, $\hat B_1$, $\hat C_1$ from $U\hat EV = \begin{pmatrix} \hat E_{11} & \hat E_{12} \\ 0 & 0 \end{pmatrix}$, $U\hat B = \begin{pmatrix} \hat B_1 \\ 0 \end{pmatrix}$, $\hat CV = \begin{pmatrix} \hat C_1 & \hat C_2 \end{pmatrix}$; then AP$_\infty$ is solved by $\tilde S_\infty := S_+ \oplus (\hat E_{11}, \hat A_{11}, \hat B_1, \hat C_1, 0)$.

For the numerical issues of step 1 we refer to [KD92] and of step 3 to [Pen98]. Both approximation techniques do not increase the order of the system, i.e. $n_q \leq n$. For the optimal $\mathcal{RH}_\infty$-approximation we even have $n_\infty < n$. Another advantage of the $\mathcal{RH}_2$-approximation is its lower computational complexity compared to the $\mathcal{RH}_\infty$-approximation. Moreover, the behaviour of the transfer functions $G(S)$ and $G(\tilde S_2)$ at infinity is the same, i.e. $G(S)(\infty) = G(\tilde S_2)(\infty)$. This does not hold for the optimal $\mathcal{RH}_\infty$-approximation in general. However, the suboptimal approximation $S_+ \oplus S_{S_-,\gamma}$, obtained by choosing $\gamma > \sigma_1$ in Theorem 3.7, has this property: since $E_{S_-,\gamma}$ is regular, we have $G(S_{S_-,\gamma})(\infty) = 0$ and thus

  $G(S)(\infty) = G(S_+)(\infty) = G(S_+)(\infty) + G(S_{S_-,\gamma})(\infty) = G(S_+ \oplus S_{S_-,\gamma})(\infty)$.

Examples. The following benchmarks are performed in Matlab R. We test the building model (Figure 1) and the clamped beam model (Figure 2), see [CD02] for details. Both models are described by stable standard systems with one input and one output, i.e. $m = p = 1$.
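Before turning to the benchmarks, the algorithm can be traced on a toy example (hypothetical scalar data, not one of the benchmark models; suboptimal variant $\gamma = 1.2\,\sigma_1 > \sigma_1$, with the sign conventions as reconstructed in Theorem 3.7):

```python
import numpy as np

# Step 1: S ~ S+ (+) S-  with  G(s) = 1/(s+2) + 1/(s-1)
a_p, a_m, b, c = -2.0, 1.0, 1.0, 1.0

# Step 2: S+ alone is the optimal RH2-approximation.

# Step 3: Gramians and largest hankel singular value of the antistable part
Xc = b * b / (2 * a_m)
Xo = c * c / (2 * a_m)
sigma1 = np.sqrt(Xo * Xc)                    # = 1/2
gamma = 1.2 * sigma1                         # suboptimal, preserves G at infinity

# Step 4: stable replacement for S- (Theorem 3.7 with E = I)
Ehat = Xo * Xc - gamma ** 2                  # = R, nonzero since gamma > sigma1
Ahat = gamma ** 2 * a_m + Xo * a_m * Xc      # (3.1)
Bhat, Chat = Xo * b, c * Xc

# Step 5: the pencil (Ehat, Ahat) is regular and stable
pole = Ahat / Ehat                           # generalized eigenvalue, < 0

def G(s):      return 1.0 / (s - a_p) + c * b / (s - a_m)
def G_apx(s):  return 1.0 / (s - a_p) + Chat * Bhat / (s * Ehat - Ahat)

w = np.linspace(-200.0, 200.0, 400001)
err = np.max(np.abs(G(1j * w) - G_apx(1j * w)))   # sigma1 <= err <= gamma
```

The stable part passes through unchanged (steps 1 and 2), both transfer functions vanish at infinity, and the measured error lies between the lower bound $\sigma_1$ and the chosen $\gamma$.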
We apply the shifted Arnoldi method to reduce the systems. The parameters (interpolation point s ∈ C ∪ {∞} and reduced order k ∈ N) are based on the examples in [ASG1]. The computed reduced systems are unstable, as listed in Table 1, so we apply the algorithm presented above. Since the transfer functions of the original systems, and hence of the reduced systems, vanish at infinity, an optimal RH∞-approximation would result in huge relative errors at high frequencies. That is why we compute suboptimal approximations (i.e. γ > σ1) to match the original transfer functions at infinity. Table 1: Summary of the results. For the building model (original order 48) and the clamped beam model (original order 348), the table lists the reduction method (Arnoldi with interpolation point s, followed by RH2- and RH∞-stabilization with γ = 1.1 σ1), the reduced order, the number of unstable poles, and the maximal real part of the unstable poles. The error between the original systems and the reduced systems of the Arnoldi method does not significantly change after applying our stabilization algorithm, as depicted in Figure 1 and Figure 2. Figure 1: Frequency responses of the reduced building model (left, |G(iω)| in dB over the frequency ω) and the error systems (right, |G(iω) − K(iω)| in dB) for the Arnoldi-reduced system and its RH2- and RH∞-stabilized versions. References [ASG1] A. C. Antoulas, D. C. Sorensen, and S. Gugercin. A survey of model reduction methods for large-scale systems. Contemporary Mathematics, 280:193-219, 2001.
Figure 2: Frequency responses of the reduced clamped beam model (left, |G(iω)| in dB over the frequency ω) and the error systems (right, |G(iω) − K(iω)| in dB) for the Arnoldi-reduced system and its RH2- and RH∞-stabilized versions. [BF1] Zhaojun Bai and Roland W. Freund. A partial Padé-via-Lanczos method for reduced-order modeling. Linear Algebra and its Applications, 332-334, 2001. [BFF98] Zhaojun Bai, P. Feldmann, and R. W. Freund. How to make theoretically passive reduced-order models passive in practice. In Proc. IEEE 1998 Custom Integrated Circuits Conference, pages 207-210, 1998. [CD] Younès Chahlaoui and Paul Van Dooren. A collection of benchmark examples for model reduction of linear time invariant dynamical systems, 2002. [Dai89] L. Dai. Singular Control Systems. Springer-Verlag, New York, 1989. [Fra87] B. A. Francis. A Course in H∞ Control Theory, volume 88 of Lecture Notes in Control and Information Sciences. Springer-Verlag, Berlin Heidelberg, 1987. [GGLD9] M. Green, K. Glover, D. Limebeer, and J. Doyle. A J-spectral factorization approach to H∞ control. SIAM Journal on Control and Optimization, 28:1350-1371, 1990. [GL96] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996. [Glo84] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. International Journal of Control, 39:1115-1193, 1984. [KD9] B. Kågström and P. Van Dooren. A generalized state-space approach for the additive decomposition of a transfer matrix, 1992. [KM6] P. Kunkel and V. Mehrmann. Differential-Algebraic Equations: Analysis and Numerical Solution. EMS Publishing House, 2006.
[Lju99] L. Ljung. System Identification: Theory for the User. Prentice Hall, 1999. [Mac4] U. Mackenroth. Robust Control Systems: Theory and Case Studies. Springer-Verlag, Berlin Heidelberg, 2004. [Mar] Jorge Mari. Modifications of rational transfer matrices to achieve positive realness. Signal Processing, 80(4), 2000. [Pen98] T. Penzl. Numerical solution of generalized Lyapunov equations. Advances in Computational Mathematics, 8:33-48, 1998. [SCL9] M. G. Safonov, R. Y. Chiang, and D. J. N. Limebeer. Optimal Hankel model reduction for nonminimal systems. IEEE Transactions on Automatic Control, 35(4):496-502, 1990. [Ste7] G. W. Stewart. On the sensitivity of the eigenvalue problem Ax = λBx. SIAM Journal on Numerical Analysis, 9:669-686, 1972. [ZDG96] K. Zhou, J. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, 1996.