Model Reduction for Dynamical Systems


Otto-von-Guericke Universität Magdeburg, Faculty of Mathematics, Summer term 2015
Model Reduction for Dynamical Systems, Lecture 8
Peter Benner, Lihong Feng
Max Planck Institute for Dynamics of Complex Technical Systems, Computational Methods in Systems and Control Theory, Magdeburg, Germany
benner@mpi-magdeburg.mpg.de, feng@mpi-magdeburg.mpg.de

Outline
1. Introduction
2. Mathematical Basics
3. Model Reduction by Projection
4. Modal Truncation
5. Balanced Truncation
6. Moment-Matching
7. Solving Large-Scale Matrix Equations
   - Linear Matrix Equations
   - Numerical Methods for Solving Lyapunov Equations
   - Solving Large-Scale Algebraic Riccati Equations
   - Software

Solving Large-Scale Matrix Equations
Large-Scale Algebraic Lyapunov and Riccati Equations

Algebraic Riccati equation (ARE) for A, G = G^T, W = W^T ∈ R^{n×n} given and X ∈ R^{n×n} unknown:
  0 = R(X) := A^T X + X A − X G X + W.
G = 0 ⇒ Lyapunov equation:
  0 = L(X) := A^T X + X A + W.

Typical situation in model reduction and optimal control problems for semi-discretized PDEs:
- n large (⇒ O(n²) unknowns in X!),
- A has a sparse representation (A = M^{-1} S for FEM),
- G, W of low rank with G, W ∈ {B B^T, C^T C}, where B ∈ R^{n×m}, m ≪ n, and C ∈ R^{p×n}, p ≪ n.

Standard (eigenproblem-based) O(n³) methods are not applicable!
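The relation between the ARE and the Lyapunov equation (G = 0) can be checked on a small dense example with SciPy's solvers. This is only a sanity check of the equations above, not one of the large-scale methods the lecture develops; the random test matrices and the stabilizing shift of A are my own choices.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
n, m, p = 6, 2, 1

# Stable A (shifted so that it is almost surely Hurwitz), low-rank G = B B^T, W = C^T C.
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2 * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# ARE with G = B B^T:  A^T X + X A - X B B^T X + C^T C = 0.
X_are = linalg.solve_continuous_are(A, B, C.T @ C, np.eye(m))
print("ARE residual:",
      np.linalg.norm(A.T @ X_are + X_are @ A - X_are @ B @ B.T @ X_are + C.T @ C))

# G = 0 reduces the ARE to the Lyapunov equation A^T X + X A + C^T C = 0.
# SciPy's convention is A Y + Y A^H = Q, so pass A^T and Q = -C^T C.
X_lyap = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)
print("Lyapunov residual:",
      np.linalg.norm(A.T @ X_lyap + X_lyap @ A + C.T @ C))
```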


Solving Large-Scale Matrix Equations
Low-Rank Approximation

Consider the spectrum of the ARE solution (analogous for Lyapunov equations).
Example: linear 1D heat equation with point control, Ω = [0, 1], FEM discretization using linear B-splines, h = 1/100 ⇒ n = 101.

Idea: X = X^T ≥ 0 ⇒ X = Z Z^T = Σ_{k=1}^n λ_k z_k z_k^T ≈ Z^(r) (Z^(r))^T = Σ_{k=1}^r λ_k z_k z_k^T.

⇒ Goal: compute Z^(r) ∈ R^{n×r} directly, without ever forming X!
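A quick way to see the eigenvalue decay that makes this possible is to solve a small Lyapunov equation for a 1D heat-equation-like operator and look at the spectrum of the solution. The sketch below uses a finite-difference Laplacian instead of the B-spline FEM model from the slide, purely to keep the example self-contained; the truncation rank r = 20 is arbitrary.

```python
import numpy as np
from scipy import linalg

n = 100
# 1D diffusion operator (finite differences); symmetric negative definite, hence Hurwitz.
A = ((np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)
b = np.zeros((n, 1)); b[n // 2] = 1.0            # "point control" in the middle of (0, 1)

# Controllability Gramian: A X + X A^T + b b^T = 0.
X = linalg.solve_continuous_lyapunov(A, -b @ b.T)

w, V = np.linalg.eigh(X)
w, V = w[::-1], V[:, ::-1]                       # sort eigenvalues descending
print("lambda_1, lambda_10, lambda_20:", w[0], w[9], w[19])   # rapid decay

# Rank-r truncation X ~ Z_r Z_r^T from the r dominant eigenpairs.
r = 20
Zr = V[:, :r] * np.sqrt(np.maximum(w[:r], 0.0))
print("relative truncation error:",
      np.linalg.norm(X - Zr @ Zr.T) / np.linalg.norm(X))
```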


Solving Large-Scale Matrix Equations
Linear Matrix Equations

Equations without symmetry:
- Sylvester equation: A X + X B = W,
- discrete Sylvester equation: A X B − X = W,
with data A ∈ R^{n×n}, B ∈ R^{m×m}, W ∈ R^{n×m} and unknown X ∈ R^{n×m}.

Equations with symmetry:
- Lyapunov equation: A X + X A^T = W,
- Stein equation (discrete Lyapunov equation): A X A^T − X = W,
with data A ∈ R^{n×n}, W = W^T ∈ R^{n×n} and unknown X ∈ R^{n×n}.

Here: focus on (Sylvester and) Lyapunov equations; analogous results and methods exist for the discrete versions.


Linear Matrix Equations
Solvability

Using the Kronecker (tensor) product, A X + X B = W is equivalent to
  ((I_m ⊗ A) + (B^T ⊗ I_n)) vec(X) = vec(W).

Hence, the Sylvester equation has a unique solution
  ⟺ M := (I_m ⊗ A) + (B^T ⊗ I_n) is invertible
  ⟺ 0 ∉ Λ(M) = {λ_j + μ_k : λ_j ∈ Λ(A), μ_k ∈ Λ(B)}
  ⟺ Λ(A) ∩ Λ(−B) = ∅.

Corollary: A, B Hurwitz ⇒ the Sylvester equation has a unique solution.
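The Kronecker/vec equivalence is easy to verify numerically in small dimensions. A minimal sketch, with small random data of my own choosing and SciPy's Bartels–Stewart-based Sylvester solver as reference:

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(1)
n, m = 5, 4
A = rng.standard_normal((n, n)) - 3 * np.eye(n)   # shift => A (almost surely) Hurwitz
B = rng.standard_normal((m, m)) - 3 * np.eye(m)
W = rng.standard_normal((n, m))

# Kronecker form: ((I_m (x) A) + (B^T (x) I_n)) vec(X) = vec(W),
# with column-stacking vec, i.e. vec(X) = X.flatten(order='F').
M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x_vec = np.linalg.solve(M, W.flatten(order="F"))
X_kron = x_vec.reshape((n, m), order="F")

# Reference: SciPy solves A X + X B = Q directly.
X_ref = linalg.solve_sylvester(A, B, W)

print("difference:", np.linalg.norm(X_kron - X_ref))
print("residual  :", np.linalg.norm(A @ X_kron + X_kron @ B - W))
```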


Linear Matrix Equations
Complexity Issues

Solving the Sylvester equation A X + X B = W via the equivalent linear system of equations
  ((I_m ⊗ A) + (B^T ⊗ I_n)) vec(X) = vec(W)
requires:
- an LU factorization of an nm × nm matrix; for n ≈ m, the complexity is ~ (2/3) n^6 flops;
- storing the n·m unknowns of X plus the dense nm × nm coefficient matrix, i.e. ~ n^4 data for n ≈ m.

Example: n = m = 1,000. Gaussian elimination on an Intel Core i7 (Westmere, 6 cores, 3.46 GHz, 83.2 GFLOPS peak) would take > 94 DAYS and 7.3 TB of memory!


Numerical Methods for Solving Lyapunov Equations
Traditional Methods

- Bartels–Stewart method for Sylvester and Lyapunov equations (lyap);
- Hessenberg–Schur method for Sylvester equations (lyap);
- Hammarling's method for Lyapunov equations A X + X A^T + G G^T = 0 with A Hurwitz (lyapchol).

All are based on the fact that if A and B^T are in Schur form, then
  M = (I_m ⊗ A) + (B^T ⊗ I_n)
is block-upper triangular. Hence, M x = b can be solved by back-substitution. A clever implementation of the back-substitution process requires nm(n + m) flops.

For Sylvester equations, B in Hessenberg form is enough (⇒ Hessenberg–Schur method).

Hammarling's method computes the Cholesky factor Y of X directly.

All methods require the Schur decomposition of A and the Schur or Hessenberg decomposition of B ⇒ need the QR algorithm, which requires ≈ 25 n³ flops for the Schur decomposition.

⇒ Not feasible for large-scale problems (n > 10,000).



Numerical Methods for Solving Lyapunov Equations
The Sign Function Method

Definition: For Z ∈ R^{n×n} with Λ(Z) ∩ iR = ∅ and Jordan canonical form
  Z = S [ J⁺ 0 ; 0 J⁻ ] S^{-1},
the matrix sign function is
  sign(Z) := S [ I_k 0 ; 0 −I_{n−k} ] S^{-1}.

Lemma: Let T ∈ R^{n×n} be nonsingular and Z as before; then
  sign(T Z T^{-1}) = T sign(Z) T^{-1}.

Numerical Methods for Solving Lyapunov Equations
The Sign Function Method

Computation of sign(Z): sign(Z) is a square root of I_n ⇒ use Newton's method to compute it:
  Z_0 ← Z,   Z_{j+1} ← (1/2) ( c_j Z_j + (1/c_j) Z_j^{-1} ),   j = 0, 1, 2, ...
⇒ sign(Z) = lim_{j→∞} Z_j.

Here c_j > 0 is a scaling parameter for convergence acceleration and rounding-error minimization, e.g.
  c_j = √( ||Z_j^{-1}||_F / ||Z_j||_F ),
based on equilibrating the norms of the two summands [Higham 86].
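As a small illustration, the scaled Newton iteration fits in a few lines; the sketch below checks it against the eigenvector-based definition on a random matrix with (almost surely) no purely imaginary eigenvalues. The tolerance and iteration cap are my own choices, not from the slides.

```python
import numpy as np

def matrix_sign(Z, tol=1e-12, maxit=100):
    """Scaled Newton iteration Z_{j+1} = (c_j Z_j + Z_j^{-1}/c_j)/2 for sign(Z)."""
    Zj = Z.astype(float).copy()
    for _ in range(maxit):
        Zinv = np.linalg.inv(Zj)
        c = np.sqrt(np.linalg.norm(Zinv, "fro") / np.linalg.norm(Zj, "fro"))
        Znew = 0.5 * (c * Zj + Zinv / c)
        if np.linalg.norm(Znew - Zj, "fro") <= tol * np.linalg.norm(Znew, "fro"):
            return Znew
        Zj = Znew
    return Zj

rng = np.random.default_rng(2)
n = 8
Z = rng.standard_normal((n, n))

S = matrix_sign(Z)
# sign(Z) from the definition: replace each eigenvalue by the sign of its real part.
w, V = np.linalg.eig(Z)
S_def = (V @ np.diag(np.sign(w.real)) @ np.linalg.inv(V)).real
print("||S - S_def|| =", np.linalg.norm(S - S_def))
print("||S^2 - I||   =", np.linalg.norm(S @ S - np.eye(n)))
```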

Solving Lyapunov Equations with the Matrix Sign Function Method

Key observation: If X ∈ R^{n×n} is a solution of A X + X A^T + W = 0, then
  [ I_n −X ; 0 I_n ] [ A W ; 0 −A^T ] [ I_n X ; 0 I_n ] = [ A 0 ; 0 −A^T ],
i.e. T^{-1} H T with T := [ I_n X ; 0 I_n ] and H := [ A W ; 0 −A^T ].

Hence, if A is Hurwitz (i.e., asymptotically stable), then
  sign(H) = sign( T [ A 0 ; 0 −A^T ] T^{-1} ) = T sign( [ A 0 ; 0 −A^T ] ) T^{-1}
          = [ −I_n 2X ; 0 I_n ].


Solving Lyapunov Equations with the Matrix Sign Function Method

Apply the sign function iteration Z ← (1/2)(Z + Z^{-1}) to H = [ A W ; 0 −A^T ]:
  H + H^{-1} = [ A W ; 0 −A^T ] + [ A^{-1} A^{-1} W A^{-T} ; 0 −A^{-T} ].

⇒ Sign function iteration for the Lyapunov equation:
  A_0 ← A,   A_{j+1} ← (1/2) ( A_j + A_j^{-1} ),
  W_0 ← W,   W_{j+1} ← (1/2) ( W_j + A_j^{-1} W_j A_j^{-T} ),   j = 0, 1, 2, ...

Define A_∞ := lim_{j→∞} A_j and W_∞ := lim_{j→∞} W_j.

Theorem: If A is Hurwitz, then A_∞ = −I_n and X = (1/2) W_∞.
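A direct transcription of this coupled iteration into Python (dense, small n; the stopping test on ||A_j + I||_F anticipates the criterion given for the factored variant below, and the tolerances are my own choice):

```python
import numpy as np
from scipy import linalg

def lyap_sign(A, W, tol=1e-12, maxit=100):
    """Sign-function iteration for A X + X A^T + W = 0 (A Hurwitz); returns X = W_inf / 2."""
    Aj, Wj = A.astype(float).copy(), W.astype(float).copy()
    n = A.shape[0]
    for _ in range(maxit):
        Ainv = np.linalg.inv(Aj)
        Wj = 0.5 * (Wj + Ainv @ Wj @ Ainv.T)   # update W_j with the *current* A_j^{-1}
        Aj = 0.5 * (Aj + Ainv)
        if np.linalg.norm(Aj + np.eye(n), "fro") <= tol:
            break
    return 0.5 * Wj

rng = np.random.default_rng(3)
n = 10
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2 * np.eye(n)   # (almost surely) Hurwitz
B = rng.standard_normal((n, 2))
W = B @ B.T

X = lyap_sign(A, W)
X_ref = linalg.solve_continuous_lyapunov(A, -W)                # reference: A X + X A^T = -W
print("||X - X_ref|| =", np.linalg.norm(X - X_ref))
print("residual      =", np.linalg.norm(A @ X + X @ A.T + W))
```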

Solving Lyapunov Equations with the Matrix Sign Function Method
Factored form

Recall the sign function iteration for A X + X A^T + W = 0:
  A_0 ← A,   A_{j+1} ← (1/2) ( A_j + A_j^{-1} ),
  W_0 ← W,   W_{j+1} ← (1/2) ( W_j + A_j^{-1} W_j A_j^{-T} ),   j = 0, 1, 2, ...

Now consider the second iteration for W = B B^T, starting with W_0 = B B^T =: B_0 B_0^T:
  (1/2) ( W_j + A_j^{-1} W_j A_j^{-T} )
    = (1/2) ( B_j B_j^T + A_j^{-1} B_j B_j^T A_j^{-T} )
    = (1/2) [ B_j, A_j^{-1} B_j ] [ B_j, A_j^{-1} B_j ]^T.

Hence, obtain the factored iteration
  B_{j+1} ← (1/√2) [ B_j, A_j^{-1} B_j ],
with S := (1/√2) lim_{j→∞} B_j and X = S S^T.


Solving Lyapunov Equations with the Matrix Sign Function Method
Factored form [B./Quintana-Ortí 97]

Factored sign function iteration for A (S S^T) + (S S^T) A^T + B B^T = 0:
  A_0 ← A,   A_{j+1} ← (1/2) ( A_j + A_j^{-1} ),
  B_0 ← B,   B_{j+1} ← (1/√2) [ B_j, A_j^{-1} B_j ],   j = 0, 1, 2, ...

Remarks:
- To get both Gramians, run in parallel C_{j+1} ← (1/√2) [ C_j ; C_j A_j^{-1} ].
- To avoid growth in the number of columns of B_j (or rows of C_j): column compression by RRLQ or truncated SVD.
- Several options to incorporate scaling, e.g., scale the A-iteration only.
- Simple stopping criterion: ||A_j + I_n||_F ≤ tol.
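The factored iteration with column compression might look as follows in Python; this is a dense illustrative sketch in which the compression is done by a truncated SVD (the slide also mentions RRLQ), and the compression tolerance is an assumption of mine.

```python
import numpy as np
from scipy import linalg

def lyap_sign_factored(A, B, tol=1e-10, maxit=100, ctol=1e-10):
    """Factored sign iteration for A (S S^T) + (S S^T) A^T + B B^T = 0; returns S with X = S S^T."""
    Aj, Bj = A.astype(float).copy(), B.astype(float).copy()
    n = A.shape[0]
    for _ in range(maxit):
        Ainv = np.linalg.inv(Aj)
        Bj = np.hstack([Bj, Ainv @ Bj]) / np.sqrt(2.0)
        # Column compression via truncated SVD: preserves B_j B_j^T up to dropped singular values.
        U, s, _ = np.linalg.svd(Bj, full_matrices=False)
        k = max(1, int(np.sum(s > ctol * s[0])))
        Bj = U[:, :k] * s[:k]
        Aj = 0.5 * (Aj + Ainv)
        if np.linalg.norm(Aj + np.eye(n), "fro") <= tol:   # stopping criterion from the slide
            break
    return Bj / np.sqrt(2.0)                                # S = B_inf / sqrt(2)

rng = np.random.default_rng(4)
n, m = 60, 2
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2 * np.eye(n)   # (almost surely) Hurwitz
B = rng.standard_normal((n, m))

S = lyap_sign_factored(A, B)
X_ref = linalg.solve_continuous_lyapunov(A, -B @ B.T)
print("columns of the factor:", S.shape[1])
print("relative error:", np.linalg.norm(S @ S.T - X_ref) / np.linalg.norm(X_ref))
```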

Numerical Methods for Solving Lyapunov Equations
The ADI Method

Recall the Peaceman–Rachford ADI: consider A u = s, where A ∈ R^{n×n} is spd and s ∈ R^n.

ADI idea: decompose A = H + V with H, V ∈ R^{n×n} such that shifted systems
  (H + p I) v = r,   (V + p I) w = t
can be solved easily/efficiently.

ADI iteration: if H, V are spd, then there exist shifts p_k, k = 1, 2, ..., such that, with u_0 = 0,
  (H + p_k I) u_{k−1/2} = (p_k I − V) u_{k−1} + s,
  (V + p_k I) u_k       = (p_k I − H) u_{k−1/2} + s
converges to the solution u ∈ R^n of A u = s.
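The classical setting for this splitting is the 2D Poisson problem on a grid, where A = (I ⊗ T) + (T ⊗ I) with a 1D second-difference matrix T, so H and V are both spd. The sketch below uses a single constant shift chosen as the geometric mean of the extreme eigenvalues (a standard single-shift choice); good shift sequences are the whole point of ADI theory, and the dense shifted solves here deliberately ignore the structure that makes ADI cheap in practice.

```python
import numpy as np

N = 20                                                    # grid N x N, so n = N^2 = 400
T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)    # 1D second differences (spd)
I = np.eye(N)
H = np.kron(I, T)                                         # differences along one grid direction
V = np.kron(T, I)                                         # differences along the other direction
A = H + V                                                 # 2D Poisson matrix, A = H + V with H, V spd
s = np.ones(N * N)

lam = np.linalg.eigvalsh(T)                               # spectra of H and V coincide with spec(T)
p = np.sqrt(lam.min() * lam.max())                        # single-shift choice: geometric mean

u = np.zeros(N * N)
In = np.eye(N * N)
for k in range(60):                                       # 60 double sweeps
    u_half = np.linalg.solve(H + p * In, (p * In - V) @ u + s)
    u = np.linalg.solve(V + p * In, (p * In - H) @ u_half + s)

u_ref = np.linalg.solve(A, s)
print("relative error:", np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref))
```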


Numerical Methods for Solving Lyapunov Equations
The ADI Method

The Lyapunov operator
  L : X ↦ A X + X A^T
can be decomposed into the linear operators
  L_H : X ↦ A X,   L_V : X ↦ X A^T.

In analogy to the standard ADI method, we find the ADI iteration for the Lyapunov equation [Wachspress 88]: with X_0 = 0,
  (A + p_k I) X_{k−1/2} = −W − X_{k−1} (A^T − p_k I),
  (A + p_k I) X_k^T     = −W − X_{k−1/2}^T (A^T − p_k I).

Numerical Methods for Solving Lyapunov Equations
Low-Rank ADI

Consider A X + X A^T = −B B^T for stable A and B ∈ R^{n×m} with m ≪ n.

ADI iteration for the Lyapunov equation [Wachspress 95]: with X_0 = 0, for k = 1, ..., k_max
  (A + p_k I) X_{k−1/2} = −B B^T − X_{k−1} (A^T − p_k I),
  (A + p_k I) X_k^T     = −B B^T − X_{k−1/2}^T (A^T − p_k I).

Rewrite as a one-step iteration and factorize X_k = Z_k Z_k^T, k = 0, ..., k_max:
  Z_0 Z_0^T = 0,
  Z_k Z_k^T = −2 p_k (A + p_k I)^{-1} B B^T (A + p_k I)^{-T}
              + (A + p_k I)^{-1} (A − p_k I) Z_{k−1} Z_{k−1}^T (A − p_k I)^T (A + p_k I)^{-T}.

⇒ low-rank Cholesky factor ADI [Penzl 97/00, Li/White 99/02, B./Li/Penzl 99/08, Gugercin/Sorensen/Antoulas 03]


Solving Large-Scale Matrix Equations
Numerical Methods for Solving Lyapunov Equations

  Z_k = [ √(−2 p_k) (A + p_k I)^{-1} B,  (A + p_k I)^{-1} (A − p_k I) Z_{k−1} ]   [Penzl 00]

Observing that (A − p_i I) and (A + p_k I)^{-1} commute, we rewrite Z_{k_max} as
  Z_{k_max} = [ z_{k_max}, P_{k_max−1} z_{k_max}, P_{k_max−2} (P_{k_max−1} z_{k_max}), ..., P_1 (P_2 ⋯ P_{k_max−1} z_{k_max}) ],
where
  z_{k_max} = √(−2 p_{k_max}) (A + p_{k_max} I)^{-1} B
and
  P_i := ( √(2 p_i) / √(2 p_{i+1}) ) [ I − (p_i + p_{i+1}) (A + p_i I)^{-1} ].   [Li/White 02]


Numerical Methods for Solving Lyapunov Equations

Lyapunov equation 0 = A X + X A^T + B B^T.

Algorithm [Penzl 97/00, Li/White 99/02, B. 04, B./Li/Penzl 99/08]:
  V_1 ← √(−2 re(p_1)) (A + p_1 I)^{-1} B,   Z_1 ← V_1
  FOR k = 2, 3, ...
    V_k ← √( re(p_k) / re(p_{k−1}) ) ( V_{k−1} − (p_k + p̄_{k−1}) (A + p_k I)^{-1} V_{k−1} )
    Z_k ← [ Z_{k−1}, V_k ]
    Z_k ← rrlq(Z_k, τ)   (column compression)
At convergence, Z_{k_max} Z_{k_max}^T ≈ X, where (without column compression)
  Z_{k_max} = [ V_1, ..., V_{k_max} ],   V_k ∈ C^{n×m}.

Note: an implementation in real arithmetic is possible by combining two steps [B./Li/Penzl 99/08] or by using a new idea employing the relation of two consecutive complex factors [B./Kürschner/Saak 11].
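Below is a compact low-rank ADI sketch in this spirit, restricted to real negative shifts so that the conjugations disappear and without column compression. The shifts are simply a few logarithmically spaced values in the spectral interval of a symmetric test matrix, not Penzl's heuristic or optimal Wachspress shifts, and the dense residual check is for demonstration only.

```python
import numpy as np
from scipy import linalg

def lr_adi(A, B, shifts, tol=1e-10, maxit=200):
    """Low-rank ADI for A X + X A^T + B B^T = 0 with real negative shifts; returns Z with X ~ Z Z^T."""
    n, m = B.shape
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    Z = V.copy()
    nrmW = np.linalg.norm(B @ B.T)
    p_old = p
    for k in range(1, maxit):
        p = shifts[k % len(shifts)]                      # cycle through the shift set
        V = np.sqrt(p / p_old) * (V - (p + p_old) * np.linalg.solve(A + p * I, V))
        Z = np.hstack([Z, V])
        p_old = p
        res = np.linalg.norm(A @ Z @ Z.T + Z @ Z.T @ A.T + B @ B.T)   # dense residual, demo only
        if res <= tol * nrmW:
            break
    return Z

n, m = 200, 1
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # symmetric stable test matrix
B = np.ones((n, m))

lam = np.linalg.eigvalsh(A)
shifts = -np.geomspace(-lam.max(), -lam.min(), 8)          # 8 real negative shifts spanning spec(A)

Z = lr_adi(A, B, shifts)
X_ref = linalg.solve_continuous_lyapunov(A, -B @ B.T)
print("columns of Z :", Z.shape[1])
print("relative error:", np.linalg.norm(Z @ Z.T - X_ref) / np.linalg.norm(X_ref))
```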


Numerical Results for ADI
Optimal Cooling of Steel Profiles

Mathematical model: boundary control for the linearized 2D heat equation,
  c ρ ∂x/∂t = λ Δx,         ξ ∈ Ω,
  λ ∂x/∂n = κ (u_k − x),    ξ ∈ Γ_k, 1 ≤ k < 7,
  ∂x/∂n = 0,                ξ ∈ Γ_7,
with m = 7, q = 6.

FEM discretization, different models for the initial mesh (n = 371) and 1, 2, 3, 4 steps of mesh refinement ⇒ n = 1357, 5177, 20209, 79841.

Source: physical model courtesy of Mannesmann/Demag; mathematical model: Tröltzsch/Unger 1999/2001, Penzl 1999, Saak.

Numerical Results for ADI
Optimal Cooling of Steel Profiles

Solve the dual (generalized) Lyapunov equations needed for balanced truncation, i.e.,
  A P M^T + M P A^T + B B^T = 0,
  A^T Q M + M^T Q A + C^T C = 0,
for n = 79,841; shifts chosen by the Penzl heuristic from 50/25 Ritz values of A of largest/smallest magnitude; no column compression performed. No factorization of the mass matrix required.

Computations done on a Core2Duo at 2.8 GHz with 3 GB RAM and 32-bit MATLAB. CPU times: 626 / 356 sec.

Numerical Results for ADI
Scaling / Mesh Independence (computations by Martin Köhler '10)

A ∈ R^{n×n}: FDM matrix for the 2D heat equation on [0, 1]² (Lyapack benchmark demo_l1, m = 1).
16 shifts chosen by the Penzl heuristic from 50/25 Ritz values of A of largest/smallest magnitude.
Computations on 2 dual-core Intel Xeon 5160 with 16 GB RAM using M.E.S.S.

[CPU-times table comparing M.E.S.S. (C), LyaPack, and M.E.S.S. (MATLAB) for increasing n up to 1,000,000; LyaPack runs out of memory for the four largest problem sizes.]

Note: for n = 1,000,000, the first sparse LU needs 1,100 sec.; using UMFPACK, this reduces to 30 sec.


Factored Galerkin-ADI Iteration

Lyapunov equation 0 = A X + X A^T + B B^T.

Projection-based methods for Lyapunov equations with A + A^T < 0:
1. Compute an orthonormal basis range(Z), Z ∈ R^{n×r}, for a subspace Z ⊂ R^n, dim Z = r.
2. Set Â := Z^T A Z, B̂ := Z^T B.
3. Solve the small-size Lyapunov equation Â X̂ + X̂ Â^T + B̂ B̂^T = 0.
4. Use X ≈ Z X̂ Z^T.

Examples:
- Krylov subspace methods, i.e., for m = 1: Z = K(A, B, r) = span{B, A B, A² B, ..., A^{r−1} B} [Saad 90, Jaimoukha/Kasenally 94, Jbilou 02–08].
- K-PIK [Simoncini 07]: Z = K(A, B, r) ∪ K(A^{-1}, B, r).
- Rational Krylov [Druskin/Simoncini 11] (⇒ exercises).


Further example: ADI subspace [B./R.-C. Li/Truhar 08]: Z = colspan[V_1, ..., V_r].
Note:
1. The ADI subspace is a rational Krylov subspace [J.-R. Li/White 02].
2. A similar approach: the ADI-preconditioned global Arnoldi method [Jbilou 08].
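A minimal sketch of the projection framework (steps 1–4 above), using a block Krylov basis built by Gram–Schmidt and SciPy's dense solver for the small projected equation; the residual check at the end is my own addition.

```python
import numpy as np
from scipy import linalg

def krylov_basis(A, B, r):
    """Orthonormal basis of the block Krylov space span{B, A B, ..., A^{r-1} B}."""
    Q, _ = np.linalg.qr(B)
    blocks = [Q]
    for _ in range(r - 1):
        W = A @ blocks[-1]
        for Qi in blocks:                      # block Gram-Schmidt against previous blocks
            W = W - Qi @ (Qi.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

def galerkin_lyap(A, B, Z):
    """Steps 2-4 of the projection framework for A X + X A^T + B B^T = 0."""
    A_hat, B_hat = Z.T @ A @ Z, Z.T @ B
    X_hat = linalg.solve_continuous_lyapunov(A_hat, -B_hat @ B_hat.T)
    return Z @ X_hat @ Z.T

n, m, r = 300, 1, 30
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # A + A^T < 0
B = np.ones((n, m))

Z = krylov_basis(A, B, r)
X = galerkin_lyap(A, B, Z)
print("subspace dimension:", Z.shape[1])
print("relative residual :",
      np.linalg.norm(A @ X + X @ A.T + B @ B.T) / np.linalg.norm(B @ B.T))
```

With a plain polynomial Krylov basis the residual typically decreases only slowly for stiff problems; that is exactly the motivation for the K-PIK, rational Krylov, and ADI subspaces listed above.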

Numerical Methods for Solving Lyapunov Equations
Numerical examples for Galerkin-ADI

FEM semi-discretized control problem for a parabolic PDE: optimal cooling of rail profiles, n = 20,209, m = 7, q = 6.

Good ADI shifts. CPU times: 80 s (projection every 5th ADI step) vs. 94 s (no projection). Computations by Jens Saak '10.

Same setting with bad ADI shifts. CPU times: 368 s (projection every 5th ADI step) vs. 1207 s (no projection). Computations by Jens Saak '10.

Numerical Methods for Solving Lyapunov Equations
Numerical examples for Galerkin-ADI: optimal cooling of rail profiles, n = 79,841.

- M.E.S.S. without Galerkin projection and column compression: ranks of the solution factors 532 / 426.
- M.E.S.S. with Galerkin projection and column compression: ranks of the solution factors 269 / 205.


Solving Large-Scale Matrix Equations
Numerical example for BT: Optimal Cooling of Steel Profiles

- n = 1,357, absolute error: BT model computed with the sign function method; MT (modal truncation) without static condensation, same order as the BT model.
- n = 79,841, absolute error: BT model computed using M.E.S.S. in MATLAB (dual-core); computation time < 10 min.

Solving Large-Scale Matrix Equations
Numerical example for BT: Microgyroscope (Butterfly Gyro)

FEM discretization of a structural dynamics model using quadratic tetrahedral elements (ANSYS SOLID187): n = 34,722, m = 1, q = 12. Reduced model computed using SpaRed, r = 30.

[Figures: frequency response analysis and Hankel singular values.]


Solving Large-Scale Algebraic Riccati Equations
Theory [Lancaster/Rodman 95]

Theorem: Consider the (continuous-time) algebraic Riccati equation (ARE)
  0 = R(X) = C^T C + A^T X + X A − X B B^T X,
with A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{q×n}, (A, B) stabilizable, (A, C) detectable. Then:
(a) There exists a unique stabilizing X_* ∈ {X ∈ R^{n×n} | R(X) = 0}, i.e., Λ(A − B B^T X_*) ⊂ C⁻.
(b) X_* = X_*^T ≥ 0 and X_* ≥ X̃ for all X̃ ∈ {X ∈ R^{n×n} | R(X) = 0}.
(c) If (A, C) is observable, then X_* > 0.
(d) span{ [ I_n ; X_* ] } is the unique invariant subspace of the Hamiltonian matrix
  H = [ A −B B^T ; −C^T C −A^T ]
corresponding to Λ(H) ∩ C⁻.
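These statements are easy to check numerically on a small example with SciPy's dense ARE solver; the sketch below is a sanity check of the theorem (random data of my own choosing), not a large-scale method.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(5)
n, m, q = 7, 2, 3
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2 * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((q, n))

# Stabilizing solution of 0 = C^T C + A^T X + X A - X B B^T X.
X = linalg.solve_continuous_are(A, B, C.T @ C, np.eye(m))

# (a) closed loop A - B B^T X is Hurwitz; (b) X is symmetric positive semidefinite.
print("max Re(eig(A - B B^T X)):", np.linalg.eigvals(A - B @ B.T @ X).real.max())
print("symmetry:", np.linalg.norm(X - X.T), " min eig:", np.linalg.eigvalsh(X).min())

# (d) [I; X] spans an H-invariant subspace, H = [[A, -B B^T], [-C^T C, -A^T]]:
H = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])
U = np.vstack([np.eye(n), X])
# H U = U (A - B B^T X) is equivalent to the ARE, so this residual should be tiny.
resid = H @ U - U @ (A - B @ B.T @ X)
print("invariance residual:", np.linalg.norm(resid))
```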

Solving Large-Scale Algebraic Riccati Equations
Numerical Methods [Bini/Iannazzo/Meini 12] (incomplete list)

- Invariant subspace methods (⇒ eigenproblem for the Hamiltonian matrix):
  Schur vector method (care) [Laub 79]; Hamiltonian SR algorithm [Bunse-Gerstner/Mehrmann 86]; symplectic URV-based method [B./Mehrmann/Xu 97/98, Chu/Liu/Mehrmann 07].
- Spectral projection methods:
  sign function method [Roberts 71, Byers 87]; disk function method [Bai/Demmel/Gu 94, B. 97].
- (Rational, global) Krylov subspace techniques [Jaimoukha/Kasenally 94, Jbilou 03/06, Heyouni/Jbilou 09].
- Newton's method:
  Kleinman iteration [Kleinman 68]; line search acceleration [B./Byers 98]; Newton-ADI [B./J.-R. Li/Penzl 99/08]; inexact Newton [Feitzinger/Hylla/Sachs 09].

Solving Large-Scale Algebraic Riccati Equations
Newton's Method for AREs
[Kleinman 68, Mehrmann 91, Lancaster/Rodman 95, B./Byers 94/98, B. 97, Guo/Laub 99]

Consider 0 = R(X) = C^T C + A^T X + X A − X B B^T X.

Fréchet derivative of R(X) at X:
  R'_X : Z ↦ (A − B B^T X)^T Z + Z (A − B B^T X).

Newton–Kantorovich method:
  X_{j+1} = X_j − (R'_{X_j})^{-1} R(X_j),   j = 0, 1, 2, ...

Newton's method (with line search) for AREs:
  FOR j = 0, 1, ...
    1. A_j ← A − B B^T X_j =: A − B K_j.
    2. Solve the Lyapunov equation A_j^T N_j + N_j A_j = −R(X_j).
    3. X_{j+1} ← X_j + t_j N_j.
  END FOR
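For t_j = 1 (no line search) this is the classical Kleinman iteration; a dense sketch using SciPy's Lyapunov solver in each step, started from X_0 = 0 (which is stabilizing here only because the randomly chosen A is already Hurwitz — an assumption of this example):

```python
import numpy as np
from scipy import linalg

def riccati_residual(X, A, B, C):
    return C.T @ C + A.T @ X + X @ A - X @ B @ B.T @ X

def newton_kleinman(A, B, C, X0, tol=1e-10, maxit=50):
    """Kleinman iteration: X_{j+1} solves A_j^T X + X A_j = -C^T C - X_j B B^T X_j."""
    X = X0.copy()
    for j in range(maxit):
        K = B.T @ X
        Aj = A - B @ K
        # Combines steps 2 and 3 with t_j = 1 (cf. the low-rank rewrite later in the slides).
        X = linalg.solve_continuous_lyapunov(Aj.T, -(C.T @ C + K.T @ K))
        res = np.linalg.norm(riccati_residual(X, A, B, C))
        print(f"step {j + 1}: ||R(X)|| = {res:.2e}")
        if res <= tol:
            break
    return X

rng = np.random.default_rng(6)
n, m, q = 20, 2, 2
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2 * np.eye(n)   # Hurwitz => X_0 = 0 is stabilizing
B = rng.standard_normal((n, m))
C = rng.standard_normal((q, n))

X = newton_kleinman(A, B, C, np.zeros((n, n)))
X_ref = linalg.solve_continuous_are(A, B, C.T @ C, np.eye(m))
print("||X - X_ref|| =", np.linalg.norm(X - X_ref))
```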


Newton's Method for AREs
Properties and Implementation

Convergence for K_0 stabilizing:
- A_j = A − B K_j = A − B B^T X_j is stable for all j ≥ 0,
- lim_{j→∞} ||R(X_j)||_F = 0 (monotonically),
- lim_{j→∞} X_j = X_* ≥ 0 (locally quadratic).

Need a large-scale Lyapunov solver; here, the ADI iteration: linear systems with dense, but sparse + low-rank coefficient matrix A_j:
  A_j = A − B·K_j = sparse − (n×m)·(m×n)
⇒ efficient "inversion" using the Sherman–Morrison–Woodbury formula:
  (A − B K_j + p_k^(j) I)^{-1}
    = ( I_n + (A + p_k^(j) I)^{-1} B ( I_m − K_j (A + p_k^(j) I)^{-1} B )^{-1} K_j ) (A + p_k^(j) I)^{-1}.

BUT: X = X^T ∈ R^{n×n} ⇒ n(n+1)/2 unknowns!
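The Sherman–Morrison–Woodbury trick is easy to demonstrate with a sparse A and a generic low-rank term standing in for B K_j; the sketch below compares it with a dense solve of the shifted closed-loop matrix. Shift value, dimensions, and the random low-rank factors are arbitrary choices for illustration.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

rng = np.random.default_rng(7)
n, m = 2000, 3
p = -1.5                                              # an ADI shift

# Sparse A (1D diffusion pattern) and a low-rank "feedback" term B K.
A = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))
rhs = rng.standard_normal(n)

# Sherman-Morrison-Woodbury:
# (A - B K + p I)^{-1} = (I + (A + p I)^{-1} B (I_m - K (A + p I)^{-1} B)^{-1} K) (A + p I)^{-1}
lu = splu((A + p * sparse.identity(n)).tocsc())       # only the SPARSE matrix is factorized
y = lu.solve(rhs)                                     # (A + p I)^{-1} rhs
AinvB = lu.solve(B)                                   # (A + p I)^{-1} B   (n x m)
small = np.eye(m) - K @ AinvB                         # m x m
x_smw = y + AinvB @ np.linalg.solve(small, K @ y)

# Reference: dense solve with the (dense!) closed-loop matrix.
x_ref = np.linalg.solve(A.toarray() - B @ K + p * np.eye(n), rhs)
print("relative difference:", np.linalg.norm(x_smw - x_ref) / np.linalg.norm(x_ref))
```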


Low-Rank Newton-ADI for AREs

Re-write Newton's method for AREs:
  A_j^T (X_j + N_j) + (X_j + N_j) A_j = −C^T C − X_j B B^T X_j =: −W_j W_j^T,
where X_j + N_j = X_{j+1}.

Set X_j = Z_j Z_j^T for rank(Z_j) ≪ n:
  ⇒ A_j^T ( Z_{j+1} Z_{j+1}^T ) + ( Z_{j+1} Z_{j+1}^T ) A_j = −W_j W_j^T.

Factored Newton Iteration [B./Li/Penzl 1999/2008]: solve the Lyapunov equations for Z_{j+1} directly by the factored ADI iteration and use the sparse + low-rank structure of A_j.


Low-Rank Newton-ADI for AREs
Feedback Iteration

The optimal feedback K_* = B^T X_* = B^T Z_* Z_*^T can be computed by a direct feedback iteration. In the jth Newton iteration:
  K_j = B^T Z_j Z_j^T = Σ_{k=1}^{k_max} (B^T V_{j,k}) V_{j,k}^T,   K_j → K_* = B^T Z_* Z_*^T.

K_j can be updated within the ADI iteration; there is no need to even form Z_j — only a fixed workspace for K_j ∈ R^{m×n} is required!

Related to earlier work by [Banks/Ito 1991].
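The block-wise accumulation of K_j can be illustrated in isolation: only an m×n quantity is updated, and no n×n object is ever formed. In the sketch below the blocks V_k are random stand-ins for the ADI blocks V_{j,k} (the algebra is the same regardless of where the blocks come from).

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, kmax = 500, 2, 30

B = rng.standard_normal((n, m))
V_blocks = [rng.standard_normal((n, m)) for _ in range(kmax)]   # stand-ins for the ADI blocks

# Accumulate K = sum_k (B^T V_k) V_k^T using only m x n workspace ...
K = np.zeros((m, n))
for V in V_blocks:
    K += (B.T @ V) @ V.T            # (m x m) times (m x n): cheap, no n x n matrix formed

# ... and compare with forming Z = [V_1, ..., V_kmax] and B^T Z Z^T explicitly.
Z = np.hstack(V_blocks)
print("difference:", np.linalg.norm(K - B.T @ Z @ Z.T))
```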

Solving Large-Scale Matrix Equations
Galerkin-Newton-ADI: Basic Ideas

- Hybrid of Galerkin projection methods for AREs [Jaimoukha/Kasenally 94, Jbilou 06, Heyouni/Jbilou 09] and Newton-ADI: use the column space of the current Newton iterate for projection, solve the projected ARE, and prolongate.
- The reduced dependence on good shift parameters observed for Galerkin-ADI applied to Lyapunov equations ⇒ fix the ADI parameters for all Newton iterations.


Numerical Results
LQR Problem for 2D Geometry

Linear 2D heat equation with homogeneous Dirichlet boundary and point control/observation; FD discretization on a uniform grid; m = p = 1; 10 shifts for the ADI iterations.

[Figure: convergence of the large-scale matrix equation solvers.]

Numerical Results
Newton-ADI vs. Newton-Galerkin-ADI

FDM for 2D heat/convection-diffusion equations on [0, 1]² (Lyapack benchmarks, m = p = 1); symmetric/nonsymmetric A ∈ R^{n×n}; shifts chosen by Penzl's heuristic from 50/25 Ritz/harmonic Ritz values of A.
Computations using an Intel Core 2 Quad CPU of type Q9400 at 2.66 GHz with 4 GB RAM and 64-bit MATLAB.

[Results: tables of relative change in the LRCF, relative residual norm, and ADI steps per Newton step, plus convergence plots comparing no projection / projection in every ADI step / projection every 5th step.]
- First test case: Newton-ADI CPU time 76.9 sec.; Newton-Galerkin-ADI CPU time 38.0 sec.
- Second test case: Newton-Galerkin-ADI CPU time 75.7 sec.

Numerical Results
Example: LQR Problem for 3D Geometry

Control problem for a 3D convection-diffusion equation: FDM on [0, 1]³ as proposed in [Simoncini 07], q = p = 1, nonsymmetric A ∈ R^{n×n}.

Test system: Intel Xeon, 16 GB RAM; 64-bit MATLAB (R2010a) using threaded BLAS.

[Results for the 3D problem: tables of relative change, relative residual, ADI steps, and CPU times for Newton-ADI and for NG-ADI with (inner = 5, outer = 1), (inner = 1, outer = 1), and (inner = 0, outer = 1).]


More information

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland

Matrix Algorithms. Volume II: Eigensystems. G. W. Stewart H1HJ1L. University of Maryland College Park, Maryland Matrix Algorithms Volume II: Eigensystems G. W. Stewart University of Maryland College Park, Maryland H1HJ1L Society for Industrial and Applied Mathematics Philadelphia CONTENTS Algorithms Preface xv xvii

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

Matrix Equations and and Bivariate Function Approximation

Matrix Equations and and Bivariate Function Approximation Matrix Equations and and Bivariate Function Approximation D. Kressner Joint work with L. Grasedyck, C. Tobler, N. Truhar. ETH Zurich, Seminar for Applied Mathematics Manchester, 17.06.2009 Sylvester matrix

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

Low Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL

Low Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:

More information

Direct Methods for Matrix Sylvester and Lyapunov Equations

Direct Methods for Matrix Sylvester and Lyapunov Equations Direct Methods for Matrix Sylvester and Lyapunov Equations D. C. Sorensen and Y. Zhou Dept. of Computational and Applied Mathematics Rice University Houston, Texas, 77005-89. USA. e-mail: {sorensen,ykzhou}@caam.rice.edu

More information

On the -Sylvester Equation AX ± X B = C

On the -Sylvester Equation AX ± X B = C Department of Applied Mathematics National Chiao Tung University, Taiwan A joint work with E. K.-W. Chu and W.-W. Lin Oct., 20, 2008 Outline 1 Introduction 2 -Sylvester Equation 3 Numerical Examples 4

More information

A Continuation Approach to a Quadratic Matrix Equation

A Continuation Approach to a Quadratic Matrix Equation A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

1 Sylvester equations

1 Sylvester equations 1 Sylvester equations Notes for 2016-11-02 The Sylvester equation (or the special case of the Lyapunov equation) is a matrix equation of the form AX + XB = C where A R m m, B R n n, B R m n, are known,

More information

On GPU Acceleration of Common Solvers for (Quasi-) Triangular Generalized Lyapunov Equations

On GPU Acceleration of Common Solvers for (Quasi-) Triangular Generalized Lyapunov Equations Max Planck Institute Magdeburg Preprints Martin Köhler Jens Saak On GPU Acceleration of Common Solvers for (Quasi-) Triangular Generalized Lyapunov Equations MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER

More information

Structure preserving Krylov-subspace methods for Lyapunov equations

Structure preserving Krylov-subspace methods for Lyapunov equations Structure preserving Krylov-subspace methods for Lyapunov equations Matthias Bollhöfer, André Eppler Institute Computational Mathematics TU Braunschweig MoRePas Workshop, Münster September 17, 2009 System

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Towards high performance IRKA on hybrid CPU-GPU systems

Towards high performance IRKA on hybrid CPU-GPU systems Towards high performance IRKA on hybrid CPU-GPU systems Jens Saak in collaboration with Georg Pauer (OVGU/MPI Magdeburg) Kapil Ahuja, Ruchir Garg (IIT Indore) Hartwig Anzt, Jack Dongarra (ICL Uni Tennessee

More information

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications

The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications MAX PLANCK INSTITUTE Elgersburg Workshop Elgersburg February 11-14, 2013 The Kalman-Yakubovich-Popov Lemma for Differential-Algebraic Equations with Applications Timo Reis 1 Matthias Voigt 2 1 Department

More information

Parallel Model Reduction of Large Linear Descriptor Systems via Balanced Truncation

Parallel Model Reduction of Large Linear Descriptor Systems via Balanced Truncation Parallel Model Reduction of Large Linear Descriptor Systems via Balanced Truncation Peter Benner 1, Enrique S. Quintana-Ortí 2, Gregorio Quintana-Ortí 2 1 Fakultät für Mathematik Technische Universität

More information

Incomplete Cholesky preconditioners that exploit the low-rank property

Incomplete Cholesky preconditioners that exploit the low-rank property anapov@ulb.ac.be ; http://homepages.ulb.ac.be/ anapov/ 1 / 35 Incomplete Cholesky preconditioners that exploit the low-rank property (theory and practice) Artem Napov Service de Métrologie Nucléaire, Université

More information

On the numerical solution of large-scale sparse discrete-time Riccati equations

On the numerical solution of large-scale sparse discrete-time Riccati equations Advances in Computational Mathematics manuscript No (will be inserted by the editor) On the numerical solution of large-scale sparse discrete-time Riccati equations Peter Benner Heike Faßbender Received:

More information

Sonderforschungsbereich 393

Sonderforschungsbereich 393 Sonderforschungsbereich 393 Parallele Numerische Simulation für Physik und Kontinuumsmechanik Peter Benner Enrique S. Quintana-Ortí Gregorio Quintana-Ortí Solving Stable Sylvester Equations via Rational

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Solving linear equations with Gaussian Elimination (I)

Solving linear equations with Gaussian Elimination (I) Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian

More information

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study -preconditioned for large - A case study Matthias Bollhöfer, André Eppler TU Braunschweig Institute Computational Mathematics Syrene-MOR Workshop, TU Hamburg October 30, 2008 2 / 20 Outline 1 2 Overview

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 15-05 Induced Dimension Reduction method for solving linear matrix equations R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft Institute of Applied

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Max Planck Institute Magdeburg Preprints

Max Planck Institute Magdeburg Preprints Peter Benner Patrick Kürschner Jens Saak Real versions of low-rank ADI methods with complex shifts MAXPLANCKINSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER SYSTEME MAGDEBURG Max Planck Institute Magdeburg Preprints

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini Adaptive rational Krylov subspaces for large-scale dynamical systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria@dm.unibo.it joint work with Vladimir Druskin, Schlumberger Doll

More information

Computing least squares condition numbers on hybrid multicore/gpu systems

Computing least squares condition numbers on hybrid multicore/gpu systems Computing least squares condition numbers on hybrid multicore/gpu systems M. Baboulin and J. Dongarra and R. Lacroix Abstract This paper presents an efficient computation for least squares conditioning

More information

Low Rank Solution of Data-Sparse Sylvester Equations

Low Rank Solution of Data-Sparse Sylvester Equations Low Ran Solution of Data-Sparse Sylvester Equations U. Baur e-mail: baur@math.tu-berlin.de, Phone: +49(0)3031479177, Fax: +49(0)3031479706, Technische Universität Berlin, Institut für Mathemati, Straße

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

On the numerical solution of large-scale sparse discrete-time Riccati equations CSC/ Chemnitz Scientific Computing Preprints

On the numerical solution of large-scale sparse discrete-time Riccati equations CSC/ Chemnitz Scientific Computing Preprints Peter Benner Heike Faßbender On the numerical solution of large-scale sparse discrete-time Riccati equations CSC/09-11 Chemnitz Scientific Computing Preprints Impressum: Chemnitz Scientific Computing Preprints

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Parametrische Modellreduktion mit dünnen Gittern

Parametrische Modellreduktion mit dünnen Gittern Parametrische Modellreduktion mit dünnen Gittern (Parametric model reduction with sparse grids) Ulrike Baur Peter Benner Mathematik in Industrie und Technik, Fakultät für Mathematik Technische Universität

More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Applied Mathematical Sciences, Vol. 4, 2010, no. 1, 21-30 Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Amer Kaabi Department of Basic Science

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Balancing-Related Model Reduction for Large-Scale Systems

Balancing-Related Model Reduction for Large-Scale Systems Balancing-Related Model Reduction for Large-Scale Systems Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz D-09107 Chemnitz benner@mathematik.tu-chemnitz.de

More information

Max Planck Institute Magdeburg Preprints

Max Planck Institute Magdeburg Preprints Thomas Mach Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER SYSTEME MAGDEBURG Max Planck Institute Magdeburg Preprints MPIMD/11-09

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Krylov subspace methods for projected Lyapunov equations

Krylov subspace methods for projected Lyapunov equations Krylov subspace methods for projected Lyapunov equations Tatjana Stykel and Valeria Simoncini Technical Report 735-2010 DFG Research Center Matheon Mathematics for key technologies http://www.matheon.de

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS

DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS DANNY C. SORENSEN AND YUNKAI ZHOU Received 12 December 2002 and in revised form 31 January 2003 We revisit the two standard dense methods for

More information

A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems

A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems Mario Arioli m.arioli@rl.ac.uk CCLRC-Rutherford Appleton Laboratory with Daniel Ruiz (E.N.S.E.E.I.H.T)

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 4

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 4 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 4 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 12, 2012 Andre Tkacenko

More information

SOLVING DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS USING MODIFIED NEWTON S METHOD

SOLVING DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS USING MODIFIED NEWTON S METHOD PHYSCON 2013, San Luis Potosí, México, 26 29th August, 2013 SOLVING DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS USING MODIFIED NEWTON S METHOD Vasile Sima Advanced Research National Institute for Research

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the

More information

Least Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Least Squares. Tom Lyche. October 26, Centre of Mathematics for Applications, Department of Informatics, University of Oslo Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 26, 2010 Linear system Linear system Ax = b, A C m,n, b C m, x C n. under-determined

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

MODEL-order reduction is emerging as an effective

MODEL-order reduction is emerging as an effective IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, VOL. 52, NO. 5, MAY 2005 975 Model-Order Reduction by Dominant Subspace Projection: Error Bound, Subspace Computation, and Circuit Applications

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

ON THE NUMERICAL SOLUTION OF LARGE-SCALE RICCATI EQUATIONS

ON THE NUMERICAL SOLUTION OF LARGE-SCALE RICCATI EQUATIONS ON THE NUMERICAL SOLUTION OF LARGE-SCALE RICCATI EQUATIONS VALERIA SIMONCINI, DANIEL B. SZYLD, AND MARLLINY MONSALVE Abstract. The inexact Newton-Kleinman method is an iterative scheme for numerically

More information

MPI at MPI. Jens Saak. Max Planck Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory

MPI at MPI. Jens Saak. Max Planck Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory MAX PLANCK INSTITUTE November 5, 2010 MPI at MPI Jens Saak Max Planck Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory FOR DYNAMICS OF COMPLEX TECHNICAL

More information

Lecture 3: Inexact inverse iteration with preconditioning

Lecture 3: Inexact inverse iteration with preconditioning Lecture 3: Department of Mathematical Sciences CLAPDE, Durham, July 2008 Joint work with M. Freitag (Bath), and M. Robbé & M. Sadkane (Brest) 1 Introduction 2 Preconditioned GMRES for Inverse Power Method

More information