Model reduction of large-scale dynamical systems

Lecture III: Krylov approximation and rational interpolation
Thanos Antoulas, Rice University and Jacobs University
URL: www.ece.rice.edu/~aca
International School, Monopoli, 7-12 September 2008

Outline

1. Krylov approximation methods
2. The Arnoldi and the Lanczos procedures (the Arnoldi procedure; the Lanczos procedure; an example)
3. Krylov methods and moment matching (remarks)
4. Rational interpolation by Krylov projection (realization by projection; interpolation by projection)
5. Choice of Krylov projection points: Optimal H_2 model reduction
6. Summary: Lectures II and III

1. Krylov approximation methods

Given Σ = (A, B; C, D), expand the transfer function around s_0:

  H(s) = η_0 + η_1 (s - s_0) + η_2 (s - s_0)^2 + η_3 (s - s_0)^3 + ...

The coefficients η_j, j >= 0, are the moments of Σ at s_0. Find Σ̂ = (Â, B̂; Ĉ, D̂), with

  Ĥ(s) = η̂_0 + η̂_1 (s - s_0) + η̂_2 (s - s_0)^2 + η̂_3 (s - s_0)^3 + ...

such that, for appropriate k,

  η_j = η̂_j,  j = 1, 2, ..., k.

Moment matching methods can be implemented in a numerically stable and efficient way.

Krylov approximation methods: special cases

- s_0 = ∞. Moments: the Markov parameters. Problem: (partial) realization. Solution computed through the Lanczos and Arnoldi procedures.
- s_0 = 0. Problem: Padé approximation. Solution computed through the Lanczos and Arnoldi procedures.
- In general, arbitrary s_0 ∈ C. Problem: rational interpolation. Solution computed through rational Lanczos.

Computation of moments is numerically problematic. Key fact for numerical reliability: if (A, B, C, D) is given, moment matching can be achieved without computing moments, through an iterative implementation.

2. The Arnoldi and the Lanczos procedures

The Arnoldi procedure

Given A ∈ R^{n×n} and b ∈ R^n, let R_k(A, b) = [b, Ab, ..., A^{k-1}b] ∈ R^{n×k} be the reachability or Krylov matrix; it is assumed that R_k has full column rank equal to k. Devise an iterative process such that at the k-th step we have

  A V_k = V_k H_k + R_k,  V_k, R_k ∈ R^{n×k},  H_k ∈ R^{k×k},  k = 1, 2, ..., n.

These quantities have to satisfy the following conditions at each step:
- the columns of V_k are orthonormal: V_k^T V_k = I_k;
- span col V_k = span col R_k(A, b);
- the residual R_k satisfies the Galerkin condition: V_k^T R_k = 0.

This problem can be solved by the Arnoldi procedure.

Arnoldi: recursive implementation

Given: A ∈ R^{n×n}, b ∈ R^n. Find: V ∈ R^{n×k}, f ∈ R^n, and H ∈ R^{k×k} such that

  A V = V H + f e_k^T,  H = V^T A V,  V^T V = I_k,  V^T f = 0,

with H in upper Hessenberg form.

1. v_1 = b/||b||, w = A v_1; α_1 = v_1^T w, f_1 = w - v_1 α_1; V_1 = (v_1), H_1 = (α_1).
2. For j = 1, 2, ..., k - 1:
   (a) β_j = ||f_j||, v_{j+1} = f_j/β_j;
   (b) V_{j+1} = (V_j, v_{j+1}),  Ĥ_j = [H_j; β_j e_j^T];
   (c) w = A v_{j+1},  h = V_{j+1}^T w,  f_{j+1} = w - V_{j+1} h;
   (d) H_{j+1} = (Ĥ_j, h).
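
A minimal NumPy sketch of this recursion (classical Gram-Schmidt, no re-orthogonalization or breakdown handling; the function name and interface are illustrative):

```python
import numpy as np

def arnoldi(A, b, k):
    """k-step Arnoldi: V has orthonormal columns, H = V^T A V is upper
    Hessenberg, and A V = V H + f e_k^T with V^T f = 0."""
    n = b.size
    V = np.zeros((n, k))
    H = np.zeros((k, k))
    V[:, 0] = b / np.linalg.norm(b)
    w = A @ V[:, 0]
    H[0, 0] = V[:, 0] @ w
    f = w - H[0, 0] * V[:, 0]
    for j in range(k - 1):
        beta = np.linalg.norm(f)
        H[j + 1, j] = beta                 # subdiagonal entry
        V[:, j + 1] = f / beta
        w = A @ V[:, j + 1]
        h = V[:, :j + 2].T @ w             # classical Gram-Schmidt coefficients
        f = w - V[:, :j + 2] @ h
        H[:j + 2, j + 1] = h
    return V, H, f
```

On exit, A @ V - V @ H vanishes except in its last column, which equals the residual f.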

Properties of Arnoldi

- H_k is obtained by projecting A onto the span of the columns of V_k: H_k = V_k^T A V_k.
- The remainder R_k has rank one and can be written as R_k = r_k e_k^T, where e_k is the k-th unit vector; thus r_k ∈ R^n. This further implies that v_{k+1} = r_k/||r_k||, where v_{k+1} is the (k+1)-st column of V. Consequently, H_k is an upper Hessenberg matrix:

    H_k = [ h_{1,1}  h_{1,2}  h_{1,3}  ...  h_{1,k-1}   h_{1,k} ]
          [ h_{2,1}  h_{2,2}  h_{2,3}  ...  h_{2,k-1}   h_{2,k} ]
          [          h_{3,2}  h_{3,3}  ...  h_{3,k-1}   h_{3,k} ]
          [                      ...        h_{k-1,k-1} h_{k-1,k} ]
          [                                 h_{k,k-1}   h_{k,k} ]

- Let p_k(λ) = det(λ I_k - H_k) be the characteristic polynomial of H_k. This monic polynomial is the solution of the minimization problem

    p_k = arg min ||p(A) b||_2,

  where the minimum is taken over all monic polynomials p of degree k. Since p_k(A) b = A^k b + R_k(A, b) p̃, where p̃_{i+1} is the coefficient of λ^i of the polynomial p_k, it also follows that the coefficients of p_k provide the least squares fit between A^k b and the columns of R_k(A, b).
- There holds

    r_k = p_k(A) b / ||p_{k-1}(A) b||,  h_{k,k-1} = ||p_k(A) b|| / ||p_{k-1}(A) b||.

An alternative way of looking at Arnoldi

Consider a matrix A ∈ R^{n×n}, a starting vector b ∈ R^n, and the corresponding reachability matrix R_n = [b, Ab, ..., A^{n-1}b]. The following relationship holds true:

  A R_n = R_n F,  where  F = [ 0 0 ... 0  -α_0     ]
                             [ 1 0 ... 0  -α_1     ]
                             [ 0 1 ... 0  -α_2     ]
                             [ .      .    .       ]
                             [ 0 0 ... 1  -α_{n-1} ]

and χ_A(s) = s^n + α_{n-1} s^{n-1} + ... + α_1 s + α_0 is the characteristic polynomial of A. Compute the QR factorization of R_n:

  R_n = V U,  V^T V = I_n,  U upper triangular.

It follows that A V U = V U F, that is,

  A V = V (U F U^{-1}) = V Ā,  where Ā := U F U^{-1}.

Since U is upper triangular, so is U^{-1}; furthermore F is upper Hessenberg. Therefore Ā, being the product of an upper triangular times an upper Hessenberg times an upper triangular matrix, is upper Hessenberg. The k-step Arnoldi factorization can now be obtained by considering the first k columns of the above relationship, to wit:

  [A V]_k = [V Ā]_k  ⇒  A [V]_k = [V]_k Ā_{kk} + f e_k^T,

where f is a multiple of the (k+1)-st column of V. Notice that Ā_{kk} is still upper Hessenberg, while the columns of [V]_k provide an orthonormal basis for the space spanned by the first k columns of the reachability matrix R_n.
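
This viewpoint is easy to check numerically for a small example: QR-factorize the Krylov matrix and verify that V^T A V is upper Hessenberg. A sketch under the assumption that n is small enough for R_n to be well conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Reachability (Krylov) matrix and its QR factorization
R = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
V, U = np.linalg.qr(R)

Abar = V.T @ A @ V                     # = U F U^{-1}
# upper Hessenberg: entries below the first subdiagonal vanish (up to roundoff)
print(np.allclose(np.tril(Abar, -2), 0, atol=1e-8))
```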

The symmetric Lanczos procedure

If A = A^T, then the Arnoldi procedure is the same as the symmetric Lanczos procedure. In this case H_k is tridiagonal:

  H_k = [ α_1  β_2                      ]
        [ β_2  α_2  β_3                 ]
        [      β_3  α_3   .             ]
        [            .   α_{k-1}  β_k   ]
        [                β_k      α_k   ]

This matrix shows that the vectors in the Lanczos procedure satisfy a three-term recurrence relationship:

  A v_i = β_{i+1} v_{i+1} + α_i v_i + β_i v_{i-1},  i = 1, 2, ..., k - 1.

Remark. If the remainder r_k = 0, the procedure has terminated, in which case if (λ, x) is an eigenpair of H_k, then (λ, V_k x) is an eigenpair of A (since H_k x = λ x implies A V_k x = V_k H_k x = λ V_k x).
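
A minimal sketch of this three-term recurrence (again without re-orthogonalization; names are illustrative):

```python
import numpy as np

def lanczos(A, b, k):
    """k-step symmetric Lanczos; returns V and the tridiagonal entries
    alpha (diagonal) and beta (off-diagonal; beta[0] is unused)."""
    n = b.size
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j] * V[:, j - 1]   # three-term recurrence
        if j + 1 < k:
            beta[j + 1] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j + 1]
    return V, alpha, beta
```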

Two-sided Lanczos

The two-sided Lanczos procedure. Given A ∈ R^{n×n}, which is not symmetric, and two vectors b, c^T ∈ R^n, devise an iterative process such that at the k-th step there holds:

  A V_k = V_k H_k + R_k,  A^T W_k = W_k H_k^T + S_k,  k = 1, 2, ..., n,

together with:
- biorthogonality: W_k^T V_k = I_k;
- span col V_k = span col R_k(A, b), span col W_k = span col R_k(A^T, c^T);
- Galerkin conditions: V_k^T S_k = 0, W_k^T R_k = 0, k = 1, 2, ..., n.

Remarks. The second condition of the second item above can also be expressed as span row W_k^T = span row O_k(c, A), where O_k is the observability matrix of the pair (c, A). The assumption for the solvability of this problem is

  det O_k(c, A) R_k(A, b) ≠ 0,  k = 1, 2, ..., n.

The associated Lanczos polynomials are defined as p_k(λ) = det(λ I_k - H_k), and the induced inner product is defined as ⟨p(λ), q(λ)⟩ = ⟨p(A^T) c^T, q(A) b⟩ = c p(A) q(A) b.

Two-sided Lanczos: recursive implementation

Given: the triple A ∈ R^{n×n}, b, c^T ∈ R^n. Find: V, W ∈ R^{n×k}, f, g ∈ R^n, and H ∈ R^{k×k} such that

  A V = V H + f e_k^T,  A^T W = W H^T + g e_k^T,

where H = W^T A V, W^T V = I_k, W^T f = 0, V^T g = 0. The projections π_L and π_U above are given by V, W, respectively.

1. β_1 := √|c b|, γ_1 := sgn(c b) β_1; v_1 = b/β_1, w_1 := c^T/γ_1.
2. For j = 1, ..., k, set:
   (a) α_j = w_j^T A v_j;
   (b) r_j = A v_j - α_j v_j - γ_j v_{j-1},  q_j = A^T w_j - α_j w_j - β_j w_{j-1};
   (c) β_{j+1} = √|r_j^T q_j|,  γ_{j+1} = sgn(r_j^T q_j) β_{j+1};
   (d) v_{j+1} = r_j/β_{j+1},  w_{j+1} = q_j/γ_{j+1}.
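
A direct NumPy transcription of this recursion (no look-ahead or breakdown handling; square-root/sign conventions as above):

```python
import numpy as np

def two_sided_lanczos(A, b, c, k):
    """k-step two-sided Lanczos: returns V, W with W.T @ V = I_k;
    H = W.T @ A @ V is tridiagonal. c is treated as a row vector."""
    n = b.size
    V, W = np.zeros((n, k)), np.zeros((n, k))
    beta = np.sqrt(abs(c @ b))
    gamma = np.sign(c @ b) * beta
    V[:, 0], W[:, 0] = b / beta, c / gamma
    for j in range(k - 1):
        alpha = W[:, j] @ A @ V[:, j]
        r = A @ V[:, j] - alpha * V[:, j]
        q = A.T @ W[:, j] - alpha * W[:, j]
        if j > 0:                       # gamma, beta carry over from step j-1
            r -= gamma * V[:, j - 1]
            q -= beta * W[:, j - 1]
        beta = np.sqrt(abs(r @ q))
        gamma = np.sign(r @ q) * beta
        V[:, j + 1], W[:, j + 1] = r / beta, q / gamma
    return V, W
```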

Properties of two-sided Lanczos

- H_k is obtained by projecting A as follows: H_k = W_k^T A V_k.
- The remainders R_k, S_k have rank one and can be written as R_k = r_k e_k^T, S_k = q_k e_k^T. This further implies that v_{k+1}, w_{k+1} are scaled versions of r_k, q_k, respectively. Consequently, H_k is a tridiagonal matrix.
- The generalized Lanczos polynomials p_k(λ) = det(λ I_k - H_k), k = 0, 1, ..., n - 1, p_0 = 1, are orthogonal: ⟨p_i, p_j⟩ = 0 for i ≠ j.
- The columns of V_k, W_k and the Lanczos polynomials satisfy the following three-term recurrences:

    γ_k v_{k+1} = (A - α_k) v_k - β_{k-1} v_{k-1}
    β_k w_{k+1} = (A^T - α_k) w_k - γ_{k-1} w_{k-1}
    γ_k p_{k+1}(λ) = (λ - α_k) p_k(λ) - β_{k-1} p_{k-1}(λ)
    β_k q_{k+1}(λ) = (λ - α_k) q_k(λ) - γ_{k-1} q_{k-1}(λ)

Example: symmetric Lanczos

Consider a symmetric matrix A with a starting vector b (the numerical entries of A, b and of the resulting iterates did not survive transcription). Four Lanczos steps produce V_k, H_k, R_k satisfying

  A V_k = V_k H_k + R_k,  V_k^T R_k = 0,  H_k = V_k^T A V_k,  k = 1, 2, 3, 4,

with each H_k tridiagonal.

3. Krylov methods and moment matching

Arnoldi and moment matching

The Arnoldi factorization can be used for model reduction as follows. Recall the QR factorization of the reachability matrix R_k ∈ R^{n×k}; a projection V V^T can then be attached to this factorization:

  R_k = V U  ⇒  V = R_k U^{-1},

where V ∈ R^{n×k}, V^T V = I_k, and U is upper triangular. The reduced-order system is

  Σ̄ = (Ā, B̄, C̄),  where Ā = V^T A V, B̄ = V^T B, C̄ = C V.

Theorem. Σ̄ as defined above satisfies the equality of the Markov parameters: η̂_i = η_i, i = 1, ..., k. Furthermore, Ā is in Hessenberg form, and B̄ is a multiple of the unit vector e_1.

Proof. First notice that, since U is upper triangular, v_1 = B/||B||, and since V^T R_k = U it follows that B̄ = V^T B = u_1 = ||B|| e_1. Moreover V V^T B = B, hence Ā B̄ = V^T A V V^T B = V^T A B; in general, since V V^T is a projection along the columns of R_k, we have V V^T R_k = R_k; moreover R̄_k = V^T R_k; hence

  (η̂_1 ... η̂_k) = C̄ R̄_k = C V V^T R_k = C R_k = (η_1 ... η_k).

Finally, the upper triangularity of U implies that Ā is in Hessenberg form.

Remark. Similarly, one can show that reduction by means of the two-sided Lanczos procedure preserves 2k Markov parameters.
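
A quick numerical confirmation of the theorem, reusing arnoldi() from the sketch above (random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)

V, H, _ = arnoldi(A, B, k)           # arnoldi() from the sketch above
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

for i in range(k):                   # eta_{i+1} = C A^i B
    eta = C @ np.linalg.matrix_power(A, i) @ B
    eta_hat = Cr @ np.linalg.matrix_power(Ar, i) @ Br
    print(i + 1, np.isclose(eta, eta_hat))
```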

Remarks

- The number of operations is O(k^2 n) vs. O(n^3), which implies efficiency. The memory requirement is large if k is relatively large.
- Only matrix-vector multiplications are required; no matrix factorizations and/or inversions.
- There is no need to compute the transformed n-th order model and then truncate. This eliminates ill-conditioning.

Drawbacks:
- Numerical issue: the Arnoldi/Lanczos methods lose orthogonality. This comes from the instability of the classical Gram-Schmidt procedure. Remedy: re-orthogonalization.
- No global error bound.
- Σ̂ tends to approximate the high-frequency poles of Σ. Remedy: match expansions around other frequencies (rational Lanczos).

4. Rational interpolation by Krylov projection

Partial realization by projection

Given a system Σ = (A, B, C), where A ∈ R^{n×n} and B, C^T ∈ R^n, we seek a lower-dimensional model Σ̂ = (Â, B̂, Ĉ), where Â ∈ R^{k×k}, B̂, Ĉ^T ∈ R^k, k < n, such that Σ̂ preserves some properties of the original system, through appropriate projection methods. In other words, we seek V ∈ R^{n×k} and W ∈ R^{n×k} such that W^T V = I_k, and the reduced system is given by

  Â = W^T A V,  B̂ = W^T B,  Ĉ = C V.

Lemma. With V = [B, AB, ..., A^{k-1}B] = R_k(A, B) and W^T any left inverse of V, Σ̂ is a partial realization of Σ and matches k Markov parameters.

From a numerical point of view, one would not use V as defined above, since usually the columns of V are almost linearly dependent. As it turns out, any matrix whose column span is the same as that of V can be used.

Proof. We have Ĉ B̂ = C V W^T B = C R_k(A, B) e_1 = C B; furthermore

  Ĉ Â^j B̂ = C R_k(A, B) W^T A^j R_k(A, B) e_1 = C R_k(A, B) W^T A^j B = C R_k(A, B) e_{j+1} = C A^j B,  j = 1, ..., k - 1.

Rational interpolation by projection

Suppose now that we are given k distinct points s_j ∈ C. V is defined as the generalized reachability matrix

  V = [(s_1 I_n - A)^{-1} B, ..., (s_k I_n - A)^{-1} B],

and, as before, let W^T be any left inverse of V. Then:

Lemma. Σ̂ defined above interpolates the transfer function of Σ at the s_j, that is,

  H(s_j) = C (s_j I_n - A)^{-1} B = Ĉ (s_j I_k - Â)^{-1} B̂ = Ĥ(s_j),  j = 1, ..., k.

Proof. The following string of equalities leads to the desired result:

  Ĉ (s_j I_k - Â)^{-1} B̂ = C V (s_j I_k - W^T A V)^{-1} W^T B
    = C [(s_1 I_n - A)^{-1} B, ..., (s_k I_n - A)^{-1} B] (W^T (s_j I_n - A) V)^{-1} W^T B
    = [C (s_1 I_n - A)^{-1} B, ..., C (s_k I_n - A)^{-1} B] e_j,  since (W^T (s_j I_n - A) V)^{-1} W^T B = e_j,
    = C (s_j I_n - A)^{-1} B.
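
A small numerical check of this lemma (random stand-in data; the interpolation points are assumed not to be eigenvalues of A):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)
s = np.array([1.0, 2.0, 3.0])        # interpolation points (assumption)

V = np.column_stack([np.linalg.solve(si * np.eye(n) - A, B) for si in s])
W = np.linalg.pinv(V).T              # W.T is a left inverse of V: W.T @ V = I_k

Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V
for si in s:
    H = C @ np.linalg.solve(si * np.eye(n) - A, B)
    Hr = Cr @ np.linalg.solve(si * np.eye(k) - Ar, Br)
    print(si, np.isclose(H, Hr))
```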

Matching points with multiplicity

We now wish to match the value of the transfer function at a given point s_0 ∈ C, together with k - 1 derivatives. For this we define the generalized reachability matrix

  V = [(s_0 I_n - A)^{-1} B, (s_0 I_n - A)^{-2} B, ..., (s_0 I_n - A)^{-k} B],

together with any left inverse W^T thereof.

Lemma. Σ̂ interpolates the transfer function of Σ at s_0, together with k - 1 derivatives at the same point: for j = 0, 1, ..., k - 1,

  ((-1)^j/j!) (d^j/ds^j) H(s)|_{s=s_0} = C (s_0 I_n - A)^{-(j+1)} B = Ĉ (s_0 I_k - Â)^{-(j+1)} B̂ = ((-1)^j/j!) (d^j/ds^j) Ĥ(s)|_{s=s_0}.

Proof. Let V be as defined above, and W^T be such that W^T V = I_k. It readily follows that the projected matrix s_0 I_k - Â is in companion form, and therefore its powers are obtained by shifting its columns to the right:

  s_0 I_k - Â = W^T (s_0 I_n - A) V = [W^T B, e_1, ..., e_{k-1}],
  (s_0 I_k - Â)^l = [*, ..., *, W^T B, e_1, ..., e_{k-l}]  (with l - 1 leading columns *).

Consequently [W^T (s_0 I_n - A) V]^{-l} W^T B = e_l, which finally implies

  Ĉ (s_0 I_k - Â)^{-l} B̂ = C V [W^T (s_0 I_n - A) V]^{-l} W^T B = C V e_l = C (s_0 I_n - A)^{-l} B,  l = 1, 2, ..., k.

General result: rational Krylov

A projector which is composed of any combination of the above three cases achieves matching of an appropriate number of Markov parameters and moments. Let the partial reachability matrix be

  R_k(A, B) = [B, AB, ..., A^{k-1}B],

and the partial generalized reachability matrix be

  R_k(A, B; σ) = [(σ I_n - A)^{-1} B, (σ I_n - A)^{-2} B, ..., (σ I_n - A)^{-k} B].

Rational Krylov.
(a) If V as defined in the above three cases is replaced by Ṽ = V R, R ∈ R^{k×k}, det R ≠ 0, and W^T by W̃^T = R^{-1} W^T, the same matching results hold true.
(b) Let V be such that span col V = span col [R_k(A, B), R_{m_1}(A, B; σ_1), ..., R_{m_l}(A, B; σ_l)], and W^T any left inverse of V. The reduced system matches k Markov parameters and m_i moments at σ_i ∈ C, i = 1, ..., l.

Two-sided projections: the choice of W

Let O_k(C, A) ∈ R^{k×n} be the partial observability matrix consisting of the first k rows of O_n(C, A) ∈ R^{n×n}. The first case is

  V = R_k(A, B),  W^T = (O_k(C, A) R_k(A, B))^{-1} O_k(C, A),

where H_k := O_k(C, A) R_k(A, B) is a Hankel matrix.

Lemma. Assuming that det H_k ≠ 0, Σ̂ is a partial realization of Σ and matches 2k Markov parameters.

Given 2k distinct points s_1, ..., s_{2k}, we will make use of the following generalized reachability and observability matrices:

  Ṽ = [(s_1 I_n - A)^{-1} B, ..., (s_k I_n - A)^{-1} B],
  W̃ = [(s_{k+1} I_n - A^T)^{-1} C^T, ..., (s_{2k} I_n - A^T)^{-1} C^T].

Lemma. Assuming that det W̃^T Ṽ ≠ 0, the projected system Σ̂ with V = Ṽ and W = W̃ (Ṽ^T W̃)^{-1} interpolates the transfer function of Σ at the 2k points s_i.
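
A sketch checking the second lemma (2k interpolation points, random stand-in data; points again assumed away from the spectrum of A):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 10, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal(n)
C = rng.standard_normal(n)
s = np.array([1.0, 2.0, 3.0, 4.0])   # 2k points: first k for V, last k for W

Vt = np.column_stack([np.linalg.solve(si * np.eye(n) - A, B) for si in s[:k]])
Wt = np.column_stack([np.linalg.solve(si * np.eye(n) - A.T, C) for si in s[k:]])
W = Wt @ np.linalg.inv(Vt.T @ Wt)    # enforce W.T @ Vt = I_k

Ar, Br, Cr = W.T @ A @ Vt, W.T @ B, C @ Vt
for si in s:                         # interpolation holds at all 2k points
    H = C @ np.linalg.solve(si * np.eye(n) - A, B)
    Hr = Cr @ np.linalg.solve(si * np.eye(k) - Ar, Br)
    print(si, np.isclose(H, Hr))
```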

Remarks

(a) The same procedure as above can be used to approximate implicit systems, i.e., systems that are given in the generalized form E ẋ(t) = A x(t) + B u(t), y(t) = C x(t), where E may be singular. The reduced system is given by Ê = W^T E V, Â = W^T A V, B̂ = W^T B, Ĉ = C V, where

  W^T = [C (s_{k+1} E - A)^{-1}; ...; C (s_{2k} E - A)^{-1}],
  V = [(s_1 E - A)^{-1} B, ..., (s_k E - A)^{-1} B].

(b) Sylvester equations and projectors. The solution of an appropriate Sylvester equation A X + X H + B G = 0 provides a projector that interpolates the original system (C, A, B) at minus the eigenvalues of H. Therefore the projectors above can be obtained by solving Sylvester equations.

5. Choice of Krylov projection points: Optimal H_2 model reduction

Choice of Krylov projection points: optimal H_2 model reduction

Recall: the H_2 norm of a stable system is

  ||Σ||_{H_2} = ( ∫_0^{+∞} h^2(t) dt )^{1/2},

where h(t) = C e^{At} B, t ≥ 0, is the impulse response of Σ. Goal: construct a Krylov projector such that

  Σ_k = arg min over stable Σ̂ with deg(Σ̂) = r of ||Σ - Σ̂||_{H_2} = ( ∫_0^{+∞} (h - ĥ)^2(t) dt )^{1/2}.
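
For state-space data the H_2 norm need not be computed by time integration; by a standard identity (not stated on the slide) it follows from the controllability Gramian. A sketch using SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable SISO system (A, B, C):
    solve A P + P A^T + B B^T = 0, then ||Sigma||_{H2} = sqrt(C P C^T)."""
    P = solve_continuous_lyapunov(A, -np.outer(B, B))
    return float(np.sqrt(C @ P @ C))
```

This also gives the error norm ||Σ - Σ̂||_{H_2} by applying it to the augmented system (diag(A, Â), [B; B̂], [C, -Ĉ]).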

First-order necessary optimality conditions

Let (Â, B̂, Ĉ) solve the optimal H_2 problem and let λ̂_i denote the eigenvalues of Â. The necessary conditions are

  H(-λ̂_i) = Ĥ(-λ̂_i)  and  (d/ds) H(s)|_{s=-λ̂_i} = (d/ds) Ĥ(s)|_{s=-λ̂_i}.

Thus the reduced system has to match the first two moments of the original system at the mirror images of the eigenvalues of Â.

The H_2 norm: if H(s) = Σ_{k=1}^n φ_k/(s - λ_k), then

  ||H||_{H_2}^2 = Σ_{k=1}^n φ_k H(-λ_k).

Corollary. With Ĥ(s) = Σ_{k=1}^r φ̂_k/(s - λ̂_k), the H_2 norm of the error system is

  J = ||H - Ĥ||_{H_2}^2 = Σ_{i=1}^n φ_i [H(-λ_i) - Ĥ(-λ_i)] + Σ_{j=1}^r φ̂_j [Ĥ(-λ̂_j) - H(-λ̂_j)].

Conclusion. The H_2 error is due to the mismatch of the transfer functions H, Ĥ at the mirror images of the full-order and reduced-order poles λ_i, λ̂_i.

An iterative algorithm

Let the system obtained after the (j-1)-st step be (C_{j-1}, A_{j-1}, B_{j-1}), where A_{j-1} ∈ R^{k×k}, B_{j-1}, C_{j-1}^T ∈ R^k. At the j-th step the system is obtained as

  A_j = (W_j^T V_j)^{-1} W_j^T A V_j,  B_j = (W_j^T V_j)^{-1} W_j^T B,  C_j = C V_j,

where

  V_j = [(λ_1 I - A)^{-1} B, ..., (λ_k I - A)^{-1} B],
  W_j^T = [C (λ_1 I - A)^{-1}; ...; C (λ_k I - A)^{-1}],

and λ_1, ..., λ_k ∈ -σ(A_{j-1}), i.e., the λ_i are the mirror images of the eigenvalues of the (j-1)-st iterate A_{j-1}.

The Newton step can be computed explicitly, updating the shift vector (λ_1^{(k)}, λ_2^{(k)}, ...)^T from (λ_1^{(k-1)}, λ_2^{(k-1)}, ...)^T via the inverse Jacobian J^{-1}; local convergence is guaranteed.

An iterative rational Krylov algorithm (IRKA)

The proposed algorithm produces a reduced-order model Ĥ(s) that satisfies the interpolation-based conditions, i.e.,

  H(-λ̂_i) = Ĥ(-λ̂_i)  and  (d/ds) H(s)|_{s=-λ̂_i} = (d/ds) Ĥ(s)|_{s=-λ̂_i}.

1. Make an initial selection of σ_i, for i = 1, ..., k.
2. W = [(σ_1 I - A^T)^{-1} C^T, ..., (σ_k I - A^T)^{-1} C^T].
3. V = [(σ_1 I - A)^{-1} B, ..., (σ_k I - A)^{-1} B].
4. While not converged:
   (a) Â = (W^T V)^{-1} W^T A V;
   (b) σ_i ← -λ_i(Â) + Newton correction, i = 1, ..., k;
   (c) recompute W and V as in steps 2-3.
5. Â = (W^T V)^{-1} W^T A V,  B̂ = (W^T V)^{-1} W^T B,  Ĉ = C V.
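
A minimal SISO sketch of this loop (fixed-point version, Newton correction omitted; the initial shifts are an arbitrary assumption, and a production code would use real bases for conjugate shift pairs):

```python
import numpy as np

def irka(A, B, C, k, tol=1e-8, maxit=100):
    """Basic IRKA fixed-point iteration: sigma <- -eig(A_hat)."""
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.linspace(0.1, 1.0, k).astype(complex)   # naive initial shifts
    for _ in range(maxit):
        V = np.column_stack([np.linalg.solve(s * I - A, B) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * I - A.T, C) for s in sigma])
        Ahat = np.linalg.solve(W.T @ V, W.T @ A @ V)
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ahat))
        done = np.max(np.abs(new_sigma - np.sort_complex(sigma))) < tol
        sigma = new_sigma
        if done:
            break                      # V, W correspond to (near-)converged shifts
    M = np.linalg.inv(W.T @ V)
    return M @ W.T @ A @ V, M @ W.T @ B, C @ V        # A_hat, B_hat, C_hat
```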

Moderate-dimensional example

An RLC ladder network (input u, output y; a cascade of R-L and R-C sections). Total system variables n = 902, independent variables dim = 599, reduced dimension k = 21; the reduced model captures the dominant modes. [Two figures: (i) frequency response, singular values (dB) vs. frequency (rad/s), original vs. reduced by the spectral zero method with SADPA; (ii) dominant spectral zeros in the complex plane, theoretical vs. those computed with SADPA.]

H_∞ and H_2 error norms

Relative norms of the error systems (n = 902, dim = 599, k = 21) were compared, in both the H_∞ and the H_2 norms, for: PRIMA; the spectral zero method with SADPA; optimal H_2; balanced truncation (BT); and Riccati balanced truncation (PRBT). (The numerical values in the table did not survive transcription.)

6. Summary: Lectures II and III

Approximation methods: summary

- Krylov methods (realization, interpolation; Lanczos, Arnoldi). Properties: numerical efficiency, applicable for n ≫ 10^3, choice of matching moments.
- SVD methods: for nonlinear systems, POD methods and empirical Gramians; for linear systems, balanced truncation and Hankel approximation. Properties: stability, error bound, applicable for n up to about 10^3.
- Krylov/SVD methods combine features of both families.

Complexity considerations

Dense problems (major cost):
- Balanced truncation: compute Gramians, about 30 n^3 (eigenvalue decomposition); perform balancing, about 25 n^3 (singular value decomposition).
- Rational Krylov approximation: decompose (A - σ_i E) for k points, about (2/3) k n^3.

Remark: iterations (Sign, Smith) can accelerate the computation of Gramians (especially on parallel machines).

Approximate and/or sparse decompositions (major cost):
- Balanced truncation: compute Gramians, about c_1 α k n; perform balancing, O(n^3).
- Rational Krylov approximation: iterative solves for (A - σ_i E) x = b, about c_2 k α n,

where k = number of expansion points and α = average number of non-zero elements per row in A, E.
