Numerical Solution of Matrix Equations Arising in Control of Bilinear and Stochastic Systems


MatTriad 2015, Coimbra, September 7–11, 2015

Numerical Solution of Matrix Equations Arising in Control of Bilinear and Stochastic Systems

Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems
Computational Methods in Systems and Control Theory
Magdeburg, Germany
benner@mpi-magdeburg.mpg.de

Max Planck Institute Magdeburg, P. Benner, Numerical Solution of Matrix Equations, 1/38

Overview

1. Introduction
   - Classification of Linear Matrix Equations
   - Existence and Uniqueness of Solutions
2. Applications
3. Solving Large-Scale Sylvester and Lyapunov Equations
4. Solving Large-Scale Lyapunov-plus-Positive Equations
5. References

Introduction: Linear Matrix Equations / Men with Beards

Sylvester equation (James Joseph Sylvester, September 3, 1814 – March 15, 1897):

    AX + XB = C.

Lyapunov equation (Alexander Michailowitsch Ljapunow, June 6, 1857 – November 3, 1918):

    AX + XA^T = C,  C = C^T.

Introduction: Generalizations of Sylvester (AX + XB = C) and Lyapunov (AX + XA^T = C) Equations

Generalized Sylvester equation:

    AXD + EXB = C.

Generalized Lyapunov equation:

    AXE^T + EXA^T = C,  C = C^T.

Stein equation:

    X − AXB = C.

(Generalized) discrete Lyapunov/Stein equation:

    EXE^T − AXA^T = C,  C = C^T.

Note:
- We consider only regular cases, having a unique solution.
- Solutions of symmetric cases are symmetric, X = X^T ∈ R^{n×n}; otherwise, X ∈ R^{n×l} with n ≠ l in general.

Introduction: Generalizations of Sylvester (AX + XB = C) and Lyapunov (AX + XA^T = C) Equations

Bilinear Lyapunov equation / Lyapunov-plus-positive equation:

    AX + XA^T + Σ_{k=1}^m N_k X N_k^T = C,  C = C^T.

Bilinear Sylvester equation:

    AX + XB + Σ_{k=1}^m N_k X M_k = C.

(Generalized) discrete bilinear Lyapunov / Stein-minus-positive equation:

    EXE^T − AXA^T − Σ_{k=1}^m N_k X N_k^T = C,  C = C^T.

Note: again we consider only regular cases; symmetric equations have symmetric solutions.

Introduction: Existence of Solutions of Linear Matrix Equations I

Exemplarily, consider the generalized Sylvester equation

    AXD + EXB = C.                                    (1)

Vectorization (using the Kronecker product) yields an equivalent linear system:

    (D^T ⊗ A + B^T ⊗ E) vec(X) = vec(C),  i.e.,  𝒜x = c

with 𝒜 := D^T ⊗ A + B^T ⊗ E, x := vec(X), c := vec(C). Hence, (1) has a unique solution if and only if 𝒜 is nonsingular.

Lemma: Λ(𝒜) = {α_j + β_k : α_j ∈ Λ(A, E), β_k ∈ Λ(B, D)}.

Hence, (1) has a unique solution if and only if Λ(A, E) ∩ (−Λ(B, D)) = ∅.

Example: the Lyapunov equation AX + XA^T = C has a unique solution if and only if there is no μ ∈ C with ±μ ∈ Λ(A).
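The vectorization above can be carried out literally for a small problem. A minimal NumPy sketch (toy random matrices, invented for illustration; note that vec() is column-major stacking, i.e., `order="F"`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
D = np.eye(m) + 0.1 * rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# vec(A X D + E X B) = (D^T ⊗ A + B^T ⊗ E) vec(X); vec() stacks columns.
K = np.kron(D.T, A) + np.kron(B.T, E)
x = np.linalg.solve(K, C.flatten(order="F"))  # unique solution iff K is nonsingular
X = x.reshape((n, m), order="F")

residual = np.linalg.norm(A @ X @ D + E @ X @ B - C)
```

For generic random data K is nonsingular, so the residual is at rounding level; the cubic cost in nm is exactly why this route is hopeless for large n, m.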

Introduction: The Classical Lyapunov Theorem

Theorem (Lyapunov 1892). Let A ∈ R^{n×n} and consider the Lyapunov operator L: X ↦ AX + XA^T. Then the following are equivalent:
(a) ∀ Y > 0 ∃ X > 0: L(X) = −Y,
(b) ∃ Y > 0 ∃ X > 0: L(X) = −Y,
(c) Λ(A) ⊂ C^− := {z ∈ C : Re z < 0}, i.e., A is (asymptotically) stable or Hurwitz.

The proof (c) ⇒ (a) is trivial from the necessary and sufficient condition for existence and uniqueness, apart from the positive definiteness. The latter is shown by studying z^H Y z for all eigenvectors z of A.

A. M. Lyapunov. The General Problem of the Stability of Motion (in Russian). Doctoral dissertation, Univ. Kharkov, 1892. English translation: Stability of Motion, Academic Press, New York & London, 1966.
P. Lancaster, M. Tismenetsky. The Theory of Matrices (2nd edition). Academic Press, Orlando, FL, 1985. [Chapter 13]

Introduction: The Classical Lyapunov Theorem

Important in applications is the nonnegative case:

    L(X) = AX + XA^T = −WW^T,  where W ∈ R^{n×n_W}, n_W ≪ n.

A Hurwitz ⇒ the unique solution satisfies X = ZZ^T ≥ 0 for some Z ∈ R^{n×n_X} with 1 ≤ n_X ≤ n.
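The theorem can be checked numerically on a toy example. A sketch using SciPy's dense Lyapunov solver (the shift that makes A Hurwitz and the rank-2 factor W are invented test data):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) - 3.0 * n * np.eye(n)  # diagonal shift makes A Hurwitz
assert np.max(np.linalg.eigvals(A).real) < 0

W = rng.standard_normal((n, 2))             # low-rank factor, n_W = 2 << n
X = solve_continuous_lyapunov(A, -W @ W.T)  # solves A X + X A^T = -W W^T
```

As the theorem predicts for the nonnegative case, the computed X is symmetric positive semidefinite (positive definite whenever (A, W) is controllable, which holds generically for random data).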

Introduction: Existence of Solutions of Linear Matrix Equations II

For Lyapunov-plus-positive-type equations, the solution theory is more involved. Focus on the Lyapunov-plus-positive case:

    AX + XA^T + Σ_{k=1}^m N_k X N_k^T = C,  C = C^T ≤ 0,

with L(X) := AX + XA^T and P(X) := Σ_{k=1}^m N_k X N_k^T.

Note:
- The operator P: X ↦ Σ_{k=1}^m N_k X N_k^T is nonnegative in the sense that P(X) ≥ 0 whenever X ≥ 0.
- This nonnegative Lyapunov-plus-positive equation is the one occurring in applications like model order reduction.
- If A is Hurwitz and the N_k are small enough, eigenvalue perturbation theory yields existence and uniqueness of the solution. This is related to the concept of bounded-input bounded-output (BIBO) stability of dynamical systems.

Introduction: Existence of Solutions of Linear Matrix Equations II

Theorem (Schneider 1965, Damm 2004). Let A ∈ R^{n×n}, consider the Lyapunov operator L: X ↦ AX + XA^T and a nonnegative operator P (i.e., P(X) ≥ 0 if X ≥ 0). The following are equivalent:
(a) ∀ Y > 0 ∃ X > 0: L(X) + P(X) = −Y,
(b) ∃ Y > 0 ∃ X > 0: L(X) + P(X) = −Y,
(c) ∃ Y ≥ 0 with (A, Y) controllable, ∃ X > 0: L(X) + P(X) = −Y,
(d) Λ(L + P) ⊂ C^− := {z ∈ C : Re z < 0},
(e) Λ(L) ⊂ C^− and ρ(L^{−1}P) < 1,
where ρ(T) = max{|λ| : λ ∈ Λ(T)} is the spectral radius of T.

T. Damm. Rational Matrix Equations in Stochastic Control. Number 297 in Lecture Notes in Control and Information Sciences. Springer-Verlag, 2004.
H. Schneider. Positive operators and an inertia theorem. Numerische Mathematik, 7:11–17, 1965.
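Condition (e) is easy to verify for a small example by writing L and P in Kronecker form, since vec(AX + XA^T) = (I ⊗ A + A ⊗ I) vec(X) and vec(NXN^T) = (N ⊗ N) vec(X). A sketch with invented toy data and m = 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * n * np.eye(n)  # Hurwitz by construction
N = 0.1 * rng.standard_normal((n, n))                  # "small enough" N_1

I = np.eye(n)
L_mat = np.kron(I, A) + np.kron(A, I)   # matrix of X -> A X + X A^T
P_mat = np.kron(N, N)                   # matrix of X -> N X N^T

assert np.max(np.linalg.eigvals(L_mat).real) < 0       # Λ(L) ⊂ C^-
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L_mat, P_mat))))
```

Here rho = ρ(L^{-1}P) comes out well below 1, so the theorem guarantees a unique positive (semi)definite solution for this data.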

Overview

1. Introduction
2. Applications
   - Stability Theory
   - Classical Control Applications
   - Applications of Lyapunov-plus-Positive Equations
3. Solving Large-Scale Sylvester and Lyapunov Equations
4. Solving Large-Scale Lyapunov-plus-Positive Equations
5. References

Applications: Stability Theory

From Lyapunov's theorem, we immediately obtain a characterization of asymptotic stability of linear dynamical systems

    ẋ(t) = Ax(t).                                    (2)

Theorem (Lyapunov). The following are equivalent:
- For (2), the zero state is asymptotically stable.
- The Lyapunov equation AX + XA^T = Y has a unique solution X = X^T > 0 for all Y = Y^T < 0.
- A is Hurwitz.

A. M. Lyapunov. The General Problem of the Stability of Motion (in Russian). Doctoral dissertation, Univ. Kharkov, 1892. English translation: Stability of Motion, Academic Press, New York & London, 1966.

Classical Control Applications: Algebraic Riccati Equations (ARE)

Solving AREs by Newton's method. Feedback control design often involves the solution of

    A^T X + XA − XGX + H = 0,  G = G^T, H = H^T.

In each Newton step, solve the Lyapunov equation

    (A − GX_j)^T X_{j+1} + X_{j+1}(A − GX_j) = −X_j G X_j − H.

Decoupling of dynamical systems, e.g., into slow/fast modes, requires the solution of the nonsymmetric ARE

    AX + XF − XGX + H = 0.

In each Newton step, solve the Sylvester equation

    (A − X_j G)X_{j+1} + X_{j+1}(F − GX_j) = −X_j G X_j − H.
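The symmetric Newton iteration above can be sketched in a few lines, with each step a dense Lyapunov solve. Toy data invented here; starting from X_0 = 0 is stabilizing because this A is Hurwitz by construction:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * n * np.eye(n)  # Hurwitz, so X_0 = 0 is stabilizing
B = rng.standard_normal((n, 1))
G = B @ B.T                                            # G = G^T >= 0
C = rng.standard_normal((1, n))
H = C.T @ C                                            # H = H^T >= 0

X = np.zeros((n, n))
for _ in range(20):
    Ak = A - G @ X
    # Newton step: (A - G X_j)^T X_{j+1} + X_{j+1} (A - G X_j) = -X_j G X_j - H
    X_new = solve_continuous_lyapunov(Ak.T, -(X @ G @ X + H))
    if np.linalg.norm(X_new - X) < 1e-12:
        X = X_new
        break
    X = X_new

residual = np.linalg.norm(A.T @ X + X @ A - X @ G @ X + H)
```

Convergence is quadratic, so a handful of Lyapunov solves drives the ARE residual to rounding level; the large-scale variant discussed later replaces each dense solve by low-rank ADI.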

Classical Control Applications: Model Reduction

Model reduction via balanced truncation: for the linear dynamical system

    ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t),  x(t) ∈ R^n,

find a reduced-order system

    ẋ_r(t) = A_r x_r(t) + B_r u(t),  y_r(t) = C_r x_r(t),  x_r(t) ∈ R^r,  r ≪ n,

such that ||y(t) − y_r(t)|| < δ. The popular method of balanced truncation requires the solution of the dual Lyapunov equations

    AX + XA^T + BB^T = 0,  A^T Y + YA + C^T C = 0.
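A minimal square-root balanced truncation sketch on an invented toy system (symmetric stable A, random B and C; the `psd_factor` helper is an assumption of this sketch, replacing exact Cholesky factors to be robust to tiny negative eigenvalues of the computed Gramians):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def psd_factor(P):
    # Symmetric factor S with P ≈ S S^T (clips tiny negative rounding eigenvalues).
    w, V = np.linalg.eigh(P)
    return V * np.sqrt(np.clip(w, 0.0, None))

rng = np.random.default_rng(4)
n, r = 10, 4
M = rng.standard_normal((n, n))
A = -(M @ M.T) - np.eye(n)                    # symmetric negative definite => Hurwitz
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian

S, R = psd_factor(P), psd_factor(Q)           # P = S S^T, Q = R R^T
U, sigma, Vt = svd(R.T @ S)                   # sigma = Hankel singular values
T1 = S @ Vt[:r].T / np.sqrt(sigma[:r])        # right projection matrix
W1 = R @ U[:, :r] / np.sqrt(sigma[:r])        # left projection matrix, W1^T T1 = I

Ar, Br, Cr = W1.T @ A @ T1, W1.T @ B, C @ T1  # reduced-order model, r << n
```

The truncated Hankel singular values sigma[r:] bound the output error; for large n the two dense Gramian solves are exactly what the low-rank methods of the next part replace.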

Applications of Lyapunov-plus-Positive Equations

Bilinear control systems:

    Σ:  ẋ(t) = Ax(t) + Σ_{i=1}^m N_i x(t) u_i(t) + Bu(t),
        y(t) = Cx(t),  x(0) = x_0,

where A, N_i ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{q×n}.

Properties:
- Approximation of (weakly) nonlinear systems by Carleman linearization yields bilinear systems.
- They appear naturally in boundary control problems, control via coefficients of PDEs, Fokker-Planck equations, ...
- Due to the close relation to linear systems, many successful concepts can be extended, e.g., transfer functions, Gramians, Lyapunov equations, ...
- Linear stochastic control systems possess an equivalent structure and can be treated alike [B./Damm 11].

Applications of Lyapunov-plus-Positive Equations

The concept of balanced truncation can be generalized to bilinear systems, where we need the solutions of the Lyapunov-plus-positive equations

    AP + PA^T + Σ_{i=1}^m N_i P N_i^T + BB^T = 0,
    A^T Q + QA + Σ_{i=1}^m N_i^T Q N_i + C^T C = 0.

Due to its approximation quality, balanced truncation is the method of choice for model reduction of medium-size bilinear systems. For stationary iterative solvers, see [Damm 2008], extended recently to low-rank solutions by [Szyld/Shank/Simoncini 2014].

Applications of Lyapunov-plus-Positive Equations

Generalized balanced truncation for bilinear systems requires the Lyapunov-plus-positive equations

    AP + PA^T + Σ_{i=1}^m N_i P N_i^T + BB^T = 0.

Further applications:
- Analysis and model reduction for linear stochastic control systems driven by Wiener noise [B./Damm 2011] or Lévy processes [B./Redmann 2011/15].
- Model reduction of linear parameter-varying (LPV) systems using a bilinearization approach [B./Breiten 2011, B./Bruns 2015].
- Model reduction for Fokker-Planck equations [Hartmann et al. 2013].
- Linear-quadratic regulators for stochastic systems require the solution of AREs of the form

      AP + PA^T − PC^T CP + Σ_{i=1}^m N_i P N_i^T + BB^T = 0;

  application of Newton's method ⇒ one Lyapunov-plus-positive equation per iteration.

Overview

This part: joint work with Patrick Kürschner and Jens Saak (MPI Magdeburg).

1. Introduction
2. Applications
3. Solving Large-Scale Sylvester and Lyapunov Equations
   - Some Basics
   - LR-ADI Derivation
   - The New LR-ADI Applied to Lyapunov Equations
4. Solving Large-Scale Lyapunov-plus-Positive Equations
5. References

Solving Large-Scale Sylvester and Lyapunov Equations: The Low-Rank Structure

Sylvester equations: find X ∈ R^{n×m} solving

    AX − XB = FG^T,

where A ∈ R^{n×n}, B ∈ R^{m×m}, F ∈ R^{n×r}, G ∈ R^{m×r}. If n, m are large but r ≪ n, m, then X has a small numerical rank [Penzl 1999, Grasedyck 2004, Antoulas/Sorensen/Zhou 2002]: rank(X, τ) = f ≪ min(n, m).

[Figure: singular values σ(X) of an example; they decay rapidly, with σ_155 already at the level of the unit roundoff u.]

Compute low-rank solution factors Z ∈ R^{n×f}, Y ∈ R^{m×f}, D ∈ R^{f×f} such that X ≈ ZDY^T with f ≪ min(n, m).

Lyapunov equations: find X ∈ R^{n×n} solving

    AX + XA^T = −FF^T,

where A ∈ R^{n×n}, F ∈ R^{n×r}. If n is large but r ≪ n, then X has a small numerical rank [Penzl 1999, Grasedyck 2004, Antoulas/Sorensen/Zhou 2002]: rank(X, τ) = f ≪ n.

[Figure: singular values σ(X) of an example; they decay rapidly, with σ_155 already at the level of the unit roundoff u.]

Compute low-rank solution factors Z ∈ R^{n×f}, D ∈ R^{f×f} such that X ≈ ZDZ^T with f ≪ n.
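The decay of the solution's singular values is easy to reproduce. A sketch with an invented stable diagonal A and a rank-2 right-hand side:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
n, r = 100, 2
A = -np.diag(np.linspace(1.0, 100.0, n))     # stable diagonal A, cond = 100
F = rng.standard_normal((n, r))

X = solve_continuous_lyapunov(A, -F @ F.T)   # A X + X A^T = -F F^T
sigma = np.linalg.svd(X, compute_uv=False)

# numerical rank at relative tolerance 1e-10
numerical_rank = int(np.sum(sigma > 1e-10 * sigma[0]))
```

For this data the numerical rank is a small multiple of r, far below n, which is exactly the structure the low-rank factorization X ≈ ZDZ^T exploits.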

Solving Large-Scale Sylvester and Lyapunov Equations: Some Basics

The Sylvester equation AX − XB = FG^T is equivalent to the linear system of equations

    (I_m ⊗ A − B^T ⊗ I_n) vec(X) = vec(FG^T).

This cannot be used for numerical solution unless nm ≲ 1,000 (or so), as
- it requires O(n^2 m^2) of storage;
- a direct solver needs O(n^3 m^3) flops;
- the low (tensor-)rank of the right-hand side is ignored;
- in the Lyapunov case, symmetry and possible definiteness are not respected.

Possible solvers:
- Standard Krylov subspace solvers in operator form [Hochbruck, Starke, Reichel, Bao, ...].
- Block-tensor Krylov subspace methods with truncation [Kressner/Tobler, Bollhöfer/Eppler, B./Breiten, ...].
- Galerkin-type methods based on (extended, rational) Krylov subspace methods [Jaimoukha, Kasenally, Jbilou, Simoncini, Druskin, Knizhnerman, ...].
- Doubling-type methods [Smith, Chu et al., B./Sadkane/El Khoury, ...].
- ADI methods [Wachspress, Reichel et al., Li, Penzl, B., Saak, Kürschner, ...].

Solving Large-Scale Sylvester and Lyapunov Equations: LR-ADI Derivation
Sylvester and Stein equations

Let α_k ≠ β_k with α_k ∉ Λ(B), β_k ∉ Λ(A). Then

    AX − XB = FG^T        (Sylvester equation)

is equivalent to the Stein equation

    X = A_k X B_k + (β_k − α_k) F_k G_k^H

with the Cayley-like transformations

    A_k := (A − β_k I_n)^{−1}(A − α_k I_n),   B_k := (B − α_k I_m)^{−1}(B − β_k I_m),
    F_k := (A − β_k I_n)^{−1} F,              G_k := (B − α_k I_m)^{−H} G.

This yields the alternating directions implicit (ADI) iteration [Wachspress 1988]

    X_k = A_k X_{k−1} B_k + (β_k − α_k) F_k G_k^H  for k ≥ 1,  X_0 ∈ R^{n×m}.

Solving Large-Scale Sylvester and Lyapunov Equations: LR-ADI Derivation

Sylvester ADI iteration [Wachspress 1988]:

    X_k = A_k X_{k−1} B_k + (β_k − α_k) F_k G_k^H,
    A_k := (A − β_k I_n)^{−1}(A − α_k I_n),   B_k := (B − α_k I_m)^{−1}(B − β_k I_m),
    F_k := (A − β_k I_n)^{−1} F ∈ R^{n×r},    G_k := (B − α_k I_m)^{−H} G ∈ C^{m×r}.

Now set X_0 = 0 and find the factorization X_k = Z_k D_k Y_k^H. The first step gives

    X_1 = (β_1 − α_1)(A − β_1 I_n)^{−1} F G^T (B − α_1 I_m)^{−1},

i.e.,

    V_1 := Z_1 = (A − β_1 I_n)^{−1} F ∈ R^{n×r},
    D_1 = (β_1 − α_1) I_r ∈ R^{r×r},
    W_1 := Y_1 = (B − α_1 I_m)^{−H} G ∈ C^{m×r}.

The second step, X_2 = A_2 X_1 B_2 + (β_2 − α_2) F_2 G_2^H = ..., yields

    V_2 = V_1 + (β_2 − α_1)(A − β_2 I)^{−1} V_1 ∈ R^{n×r},
    W_2 = W_1 + (α_2 − β_1)(B − α_2 I)^{−H} W_1 ∈ R^{m×r},
    Z_2 = [Z_1, V_2],   D_2 = diag(D_1, (β_2 − α_2) I_r),   Y_2 = [Y_1, W_2].

Solving Large-Scale Sylvester and Lyapunov Equations: LR-ADI Algorithm [B. 2005, Li/Truhar 2008, B./Li/Truhar 2009]

Algorithm 1: Low-rank Sylvester ADI / factored ADI (fADI)
Input:  matrices defining AX − XB = FG^T and shift parameters {α_1, ..., α_{k_max}}, {β_1, ..., β_{k_max}}.
Output: Z, D, Y such that ZDY^H ≈ X.

1. Z_1 = V_1 = (A − β_1 I_n)^{−1} F.
2. Y_1 = W_1 = (B − α_1 I_m)^{−H} G.
3. D_1 = (β_1 − α_1) I_r.
4. for k = 2, ..., k_max do
5.    V_k = V_{k−1} + (β_k − α_{k−1})(A − β_k I_n)^{−1} V_{k−1}.
6.    W_k = W_{k−1} + (α_k − β_{k−1})(B − α_k I_m)^{−H} W_{k−1}.
7.    Update the solution factors: Z_k = [Z_{k−1}, V_k], Y_k = [Y_{k−1}, W_k], D_k = diag(D_{k−1}, (β_k − α_k) I_r).
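A direct transcription of Algorithm 1 fits in a few lines. The toy problem and the crude log-spaced real shifts below are invented for illustration (the proper shift choice is the topic of the next slide); note only one sparse solve with a shifted A or B per step:

```python
import numpy as np

def fadi_sylvester(A, B, F, G, alphas, betas):
    """Factored ADI for A X - X B = F G^T; returns Z, D, Y with X ≈ Z D Y^H."""
    n, m, r = A.shape[0], B.shape[0], F.shape[1]
    In, Im = np.eye(n), np.eye(m)
    V = np.linalg.solve(A - betas[0] * In, F)                  # line 1
    W = np.linalg.solve((B - alphas[0] * Im).conj().T, G)      # line 2
    Z, Y = V, W
    D = [betas[0] - alphas[0]] * r                             # line 3
    for k in range(1, len(alphas)):                            # lines 4-7
        V = V + (betas[k] - alphas[k - 1]) * np.linalg.solve(A - betas[k] * In, V)
        W = W + (alphas[k] - betas[k - 1]) * np.linalg.solve((B - alphas[k] * Im).conj().T, W)
        Z, Y = np.hstack([Z, V]), np.hstack([Y, W])
        D += [betas[k] - alphas[k]] * r
    return Z, np.diag(D), Y

# Toy problem with separated spectra: Λ(A) ⊂ C^-, Λ(B) ⊂ C^+.
rng = np.random.default_rng(6)
n, m, r = 60, 40, 2
A = -np.diag(np.linspace(1.0, 100.0, n))
B = np.diag(np.linspace(1.0, 100.0, m))
F, G = rng.standard_normal((n, r)), rng.standard_normal((m, r))

# Heuristic real shifts: α_k near Λ(A), β_k near Λ(B) (log-spaced).
alphas, betas = -np.logspace(0, 2, 12), np.logspace(0, 2, 12)
Z, D, Y = fadi_sylvester(A, B, F, G, alphas, betas)

X = Z @ D @ Y.conj().T
res = np.linalg.norm(A @ X - X @ B - F @ G.T) / np.linalg.norm(F @ G.T)
```

Even these naive shifts drive the relative residual far below 1e-3 here; with (near-)optimal shifts the factor count f = 12·r would shrink further.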

Solving Large-Scale Sylvester and Lyapunov Equations: ADI Shifts

Optimal shifts solve the rational optimization problem

    min_{α_j, β_j ∈ C}  max_{λ ∈ Λ(A), μ ∈ Λ(B)}  ∏_{j=1}^k |(λ − α_j)(μ − β_j)| / |(λ − β_j)(μ − α_j)|,

for which no analytic solution is known in general. Some shift-generation approaches:
- generalized Bagby points [Levenberg/Reichel 1993];
- an adaptation of Penzl's cheap heuristic approach is available [Penzl 1999, Li/Truhar 2008]: approximate Λ(A), Λ(B) by a small number of Ritz values w.r.t. A, A^{−1}, B, B^{−1} computed via Arnoldi;
- just taking these Ritz values alone also works quite well in many cases.

Solving Large-Scale Sylvester and Lyapunov Equations: LR-ADI Derivation

Disadvantages of low-rank ADI as of 2012:
1. No efficient stopping criteria:
   - the difference in iterates / norm of added columns per step is not reliable and often stops too late;
   - the residual is a full dense matrix and cannot be calculated as such.
2. Complex arithmetic is required for real coefficients when complex shifts are used.
3. Expensive (only semi-automatic) set-up phase to precompute the ADI shifts.

None of these disadvantages exists as of today ⇒ speed-ups of old vs. new LR-ADI can be up to 20×!

Projection-Based Lyapunov Solvers

For the Lyapunov equation 0 = AX + XA^T + BB^T, projection-based methods with A + A^T < 0 proceed as follows:

1. Compute an orthonormal basis range(Z), Z ∈ R^{n×r}, for a subspace 𝒵 ⊂ R^n, dim 𝒵 = r.
2. Set Â := Z^T AZ, B̂ := Z^T B.
3. Solve the small-size Lyapunov equation Â X̂ + X̂ Â^T + B̂ B̂^T = 0.
4. Use X ≈ Z X̂ Z^T.

Examples:
- Krylov subspace methods, i.e., for m = 1: 𝒵 = K(A, B, r) = span{B, AB, A^2 B, ..., A^{r−1} B} [Saad 1990, Jaimoukha/Kasenally 1994, Jbilou ...].
- Extended Krylov subspace method (EKSM) [Simoncini 2007]: 𝒵 = K(A, B, r) + K(A^{−1}, B, r).
- Rational Krylov subspace methods (RKSM) [Druskin/Simoncini 2011].
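Steps 1-4 can be sketched with a plain (non-extended) Krylov basis; EKSM/RKSM differ only in the subspace used in step 1. The toy matrix below is invented and satisfies A + A^T < 0, so the projected Â inherits stability and step 3 is well posed:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def arnoldi_basis(A, b, r):
    # Orthonormal basis of K(A, b, r) via Arnoldi with modified Gram-Schmidt.
    Q = [b / np.linalg.norm(b)]
    for _ in range(r - 1):
        v = A @ Q[-1]
        for q in Q:
            v = v - q * (q.T @ v)       # orthogonalize against previous vectors
        nv = np.linalg.norm(v)
        if nv < 1e-12:                  # breakdown: invariant subspace reached
            break
        Q.append(v / nv)
    return np.hstack(Q)

rng = np.random.default_rng(7)
n, r = 200, 20
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.1 * np.diag(np.ones(n - 1), 1)  # A + A^T < 0
B = rng.standard_normal((n, 1))

Z = arnoldi_basis(A, B, r)                       # step 1: orthonormal basis
Ah, Bh = Z.T @ A @ Z, Z.T @ B                    # step 2: project
Xh = solve_continuous_lyapunov(Ah, -Bh @ Bh.T)   # step 3: small Lyapunov equation
X = Z @ Xh @ Z.T                                 # step 4: lift back

res = np.linalg.norm(A @ X + X @ A.T + B @ B.T)  # residual of the full equation
```

Since B ∈ range(Z), the Galerkin condition forces Z^T (AX + XA^T + BB^T) Z = 0, and the full residual shrinks as r grows.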

The New LR-ADI Applied to Lyapunov Equations

Example: an ocean circulation problem [Van Gijzen et al. 1998]. FEM discretization of a simple 3D ocean circulation model (barotropic, constant depth) gives a stiffness matrix A with n = 42,249; choose an artificial constant term B = rand(n,5).

[Figure: convergence history, normalized residual ||R|| / ||B^T B|| versus coldim(Z), for LR-ADI with adaptive shifts and for EKSM, both converging below the tolerance TOL.]

CPU times: LR-ADI 110 sec, EKSM 135 sec.

Solving Large-Scale Sylvester and Lyapunov Equations: Summary & Outlook

Numerical enhancements of low-rank ADI for large Sylvester/Lyapunov equations:
1. low-rank residuals and a reformulated implementation;
2. computation of real low-rank factors in the presence of complex shifts;
3. self-generating shift strategies (quantification in progress).

For the diffusion-convection-reaction example, the runtime drops by a factor of almost 20. A generalized version enables the derivation of low-rank solvers for various generalized Sylvester equations.

Ongoing work: apply LR-ADI in Newton methods for algebraic Riccati equations

    R(X) = AX + XA^T + GG^T − XSS^T X = 0,
    D(X) = AXA^T − EXE^T + GG^T + A^T XF(I_r + F^T XF)^{−1} F^T XA = 0.

For nonsymmetric AREs, see: P. Benner, P. Kürschner, J. Saak. Low-rank Newton-ADI methods for large nonsymmetric algebraic Riccati equations. J. Franklin Inst.

62 Overview This part: joint work with Tobias Breiten (KFU Graz, Austria) 1 Introduction 2 Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations Existence of Low-Rank Approximations Generalized ADI Iteration Bilinear EKSM Tensorized Krylov Subspace Methods Comparison of Methods 5 References Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 25/38

66 Solving Large-Scale Lyapunov-plus-Positive Equations — Some basic facts and assumptions
AX + XA^T + Σ_{j=1}^m N_j X N_j^T + BB^T = 0. (3)
Need a positive semi-definite symmetric solution X. As discussed before, the solution theory for the Lyapunov-plus-positive equation is more involved than in the standard Lyapunov case. Here, existence and uniqueness of a positive semi-definite solution X = X^T is assumed. Want: solution methods for large-scale problems, i.e., only matrix-matrix multiplications with A, N_j and solves with (shifted) A are allowed! This requires computing a data-sparse approximation to the generally dense X; here: X ≈ ZZ^T with Z ∈ R^{n×n_Z}, n_Z ≪ n. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 26/38

68 Solving Large-Scale Lyapunov-plus-Positive Equations — Existence of Low-Rank Approximations
Question: Can we expect low-rank approximations ZZ^T ≈ X to the solution of AX + XA^T + Σ_{j=1}^m N_j X N_j^T + BB^T = 0?
Standard Lyapunov case [Grasedyck 04]:
AX + XA^T + BB^T = 0 ⟺ (I_n ⊗ A + A ⊗ I_n) vec(X) = −vec(BB^T), with 𝒜 := I_n ⊗ A + A ⊗ I_n.
Apply M^{-1} = −∫_0^∞ exp(tM) dt to 𝒜 and approximate the integral via (sinc) quadrature,
−𝒜^{-1} ≈ Σ_{i=−k}^{k} ω_i exp(t_i 𝒜),
with error O(exp(−√k)) (O(exp(−k)) if A = A^T). An approximate Lyapunov solution is then given by
vec(X) ≈ vec(X_k) = Σ_{i=−k}^{k} ω_i exp(t_i 𝒜) vec(BB^T).
Now observe that exp(t_i 𝒜) = exp(t_i (I_n ⊗ A + A ⊗ I_n)) = exp(t_i A) ⊗ exp(t_i A). Hence
vec(X_k) = Σ_{i=−k}^{k} ω_i (exp(t_i A) ⊗ exp(t_i A)) vec(BB^T)
⟹ X_k = Σ_{i=−k}^{k} ω_i exp(t_i A) BB^T exp(t_i A^T) = Σ_{i=−k}^{k} ω_i B_i B_i^T,
so that rank(X_k) ≤ (2k + 1)m with ‖X − X_k‖_2 ≤ O(exp(−√k)) (O(exp(−k)) for A = A^T). Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 27/38
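This construction can be mimicked numerically. The sketch below is our own illustration: a simple trapezoid rule on a logarithmic grid stands in for the sinc quadrature, and all parameter choices (grid, shift, sizes) are ad hoc. It builds X_k = Σ ω_i exp(t_i A) BB^T exp(t_i A^T) and compares it with a dense reference solution:

```python
# Quadrature-based low-rank approximation of the solution of A X + X A^T + B B^T = 0,
# via the substitution t = e^s in X = int_0^inf e^{tA} B B^T e^{tA^T} dt.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

rng = np.random.default_rng(0)
n, m = 60, 2
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # dissipative for this seed
B = rng.standard_normal((n, m))

X = solve_continuous_lyapunov(A, -B @ B.T)            # dense reference solution

k, h = 12, 0.5
cols = []
for i in range(-k, k + 1):
    t = np.exp(i * h)                                 # logarithmic quadrature grid
    w = h * t                                         # trapezoid weight after t = e^s
    cols.append(np.sqrt(w) * (expm(t * A) @ B))
Z = np.hstack(cols)                                   # rank(X_k) <= (2k+1) m = 50

err = np.linalg.norm(X - Z @ Z.T, 2) / np.linalg.norm(X, 2)
print(Z.shape[1], err)
```

The point is not quadrature accuracy but structure: the approximation is delivered directly in factored form Z, with a rank bound independent of n.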

73 Solving Large-Scale Lyapunov-plus-Positive Equations — Existence of Low-Rank Approximations
Question: Can we expect low-rank approximations ZZ^T ≈ X to the solution of AX + XA^T + Σ_{j=1}^m N_j X N_j^T + BB^T = 0?
Problem: in general,
exp(t_i (I ⊗ A + A ⊗ I + Σ_{j=1}^m N_j ⊗ N_j)) ≠ (exp(t_i A) ⊗ exp(t_i A)) exp(t_i Σ_{j=1}^m N_j ⊗ N_j),
since the summands need not commute. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 27/38

76 Solving Large-Scale Lyapunov-plus-Positive Equations — Existence of Low-Rank Approximations
Question: Can we expect low-rank approximations ZZ^T ≈ X to the solution of AX + XA^T + Σ_{j=1}^m N_j X N_j^T + BB^T = 0?
Assume that m = 1 and N_1 = UV^T with U, V ∈ R^{n×r}, and consider
(I_n ⊗ A + A ⊗ I_n + N_1 ⊗ N_1) vec(X) = −vec(BB^T) =: y, with 𝒜 := I_n ⊗ A + A ⊗ I_n.
Sherman-Morrison-Woodbury ⟹
(I_{r²} + (V^T ⊗ V^T) 𝒜^{-1} (U ⊗ U)) w = (V^T ⊗ V^T) 𝒜^{-1} y,
𝒜 vec(X) = y − (U ⊗ U) w.
The matrix on the right-hand side, BB^T + U vec^{-1}(w) U^T up to sign, has rank at most r + 1! Apply the results for linear Lyapunov equations with right-hand side of rank r + 1. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 27/38
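For a small instance this Sherman-Morrison-Woodbury reduction is easy to verify directly. The following dense sketch is our own construction (at large scale the 𝒜-solves would themselves be Lyapunov solves, not dense Kronecker systems):

```python
# Dense verification of the SMW reduction for m = 1, N = U V^T (rank r):
# (I (x) A + A (x) I + N (x) N) vec(X) = -vec(B B^T) is solved using only
# systems with Acal := I (x) A + A (x) I. Small scale, illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 1
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stable for this seed
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
N = U @ V.T
B = rng.standard_normal((n, 1))

I = np.eye(n)
Acal = np.kron(I, A) + np.kron(A, I)
y = -(B @ B.T).reshape(-1, order="F")              # -vec(B B^T), column-major vec

# N (x) N = (U (x) U)(V^T (x) V^T) by the mixed-product property
UU = np.kron(U, U)
VV = np.kron(V.T, V.T)
w = np.linalg.solve(np.eye(r * r) + VV @ np.linalg.solve(Acal, UU),
                    VV @ np.linalg.solve(Acal, y))
X = np.linalg.solve(Acal, y - UU @ w).reshape(n, n, order="F")

resid = np.linalg.norm(A @ X + X @ A.T + N @ X @ N.T + B @ B.T)
print(resid)
```

Only solves with 𝒜 and an r²×r² correction system appear, which is the mechanism behind the low-rank right-hand-side argument above.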

77 Solving Large-Scale Lyapunov-plus-Positive Equations — Existence of Low-Rank Approximations
Theorem [B./Breiten 2012]: Assume existence and uniqueness with stable A and N_j = U_j V_j^T, with U_j, V_j ∈ R^{n×r_j}. Set r = Σ_{j=1}^m r_j. Then the solution X of
AX + XA^T + Σ_{j=1}^m N_j X N_j^T + BB^T = 0
can be approximated by X_k of rank (2k + 1)(m + r), with an error satisfying ‖X − X_k‖_2 ≤ O(exp(−√k)). Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 28/38

81 Solving Large-Scale Lyapunov-plus-Positive Equations — Generalized ADI Iteration
Let us again consider the Lyapunov-plus-positive equation AP + PA^T + NPN^T + BB^T = 0.
For a fixed parameter p, we can rewrite the linear Lyapunov operator as
AP + PA^T = (1/(2p)) ((A + pI)P(A + pI)^T − (A − pI)P(A − pI)^T),
leading to the fixed-point iteration [Damm 2008]
P_j = (A − pI)^{-1}(A + pI) P_{j−1} (A + pI)^T (A − pI)^{-T} + 2p (A − pI)^{-1}(N P_{j−1} N^T + BB^T)(A − pI)^{-T}.
With P_j ≈ Z_j Z_j^T (rank(Z_j) ≪ n), this becomes the factored iteration
Z_j Z_j^T = (A − pI)^{-1}(A + pI) Z_{j−1} Z_{j−1}^T (A + pI)^T (A − pI)^{-T} + 2p (A − pI)^{-1}(N Z_{j−1} Z_{j−1}^T N^T + BB^T)(A − pI)^{-T}.
Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 29/38

83 Solving Large-Scale Lyapunov-plus-Positive Equations — Generalized ADI Iteration
Hence, for a given sequence of shift parameters {p_1, ..., p_q}, we can extend the linear ADI iteration as follows:
Z_1 = √(2p_1) (A − p_1 I)^{-1} B,
Z_j = (A − p_j I)^{-1} [ (A + p_j I) Z_{j−1}, √(2p_j) B, √(2p_j) N Z_{j−1} ], j ≤ q.
Problems: A and N in general do not commute ⟹ we have to operate on the full preceding factor Z_{j−1} in each step. Rapid increase of rank(Z_j) ⟹ perform some kind of column compression. Choice of shift parameters? There is no obvious generalization of the ADI minimax problem; here, we use shifts minimizing a certain H_2-optimization problem, see [B./Breiten 2011/14]. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 29/38
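A dense small-scale sketch of this factored iteration (our own illustration: a crude constant shift, an SVD-based column compression in place of a careful strategy, and a weak bilinear term so that the underlying fixed-point iteration is safely contractive):

```python
# Bilinear low-rank ADI: Z_1 = sqrt(2 p) (A - p I)^{-1} B,
# Z_j = (A - p I)^{-1} [ (A + p I) Z_{j-1}, sqrt(2 p) B, sqrt(2 p) N Z_{j-1} ],
# with SVD-based column compression against the rank growth.
import numpy as np

def compress(Z, tol=1e-10):
    # column compression: Z Z^T is preserved up to the dropped singular values
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k]

rng = np.random.default_rng(2)
n = 50
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable for this seed
N = 0.02 * rng.standard_normal((n, n))                    # weak bilinearity
B = rng.standard_normal((n, 2))
I = np.eye(n)

p = 2.0                                                   # single crude shift
Z = np.sqrt(2 * p) * np.linalg.solve(A - p * I, B)
for _ in range(20):
    W = np.hstack([(A + p * I) @ Z,
                   np.sqrt(2 * p) * B,
                   np.sqrt(2 * p) * (N @ Z)])
    Z = compress(np.linalg.solve(A - p * I, W))

X = Z @ Z.T
res = np.linalg.norm(A @ X + X @ A.T + N @ X @ N.T + B @ B.T) / np.linalg.norm(B.T @ B)
print(Z.shape[1], res)
```

Without the `compress` step the number of columns of Z_j would roughly double per iteration, which is exactly the rank-growth problem noted above.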

84 Generalized ADI Iteration — Numerical Example: A Heat Transfer Model with Uncertainty
2-dimensional heat distribution motivated by [Benner/Saak 05]; boundary control by a cooling fluid with an uncertain spraying intensity. On Ω = (0, 1) × (0, 1):
x_t = Δx in Ω,
∂x/∂n = (0.5 + dω_1)x on Γ_1,
x = u on Γ_2,
x = 0 on Γ_3, Γ_4.
Spatial discretization on a k × k grid yields the bilinear SDE dx = Ax dt + Nx dω_1 + Bu dt; output: C = (1/k²)[1 ⋯ 1]. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 30/38
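A simplified finite-difference construction of such A, N, B (our own sketch, not the talk's actual model: a 5-point Laplacian on the interior grid, the Robin edge Γ_1 closed with a ghost node so that the deterministic part 0.5·x enters A while the uncertain part d·x yields N; orderings, edges, and scalings are assumptions):

```python
# Simplified FD model of x_t = Laplace(x) with Robin edge d/dn x = (c + d w) x:
# the deterministic Robin part modifies A, the uncertain part gives the very
# sparse bilinear matrix N; B injects the Dirichlet boundary control.
import numpy as np
import scipy.sparse as sp

k = 10                       # k x k interior grid
h = 1.0 / (k + 1)
c, d = 0.5, 0.1              # Robin coefficient 0.5 + d*omega on Gamma_1

T = (sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k)) / h**2).tolil()
Tx = T.copy()
Tx[0, 0] += 2 * c / h        # ghost-node closure of the Robin condition at i = 0
Tx[0, 1] += 1.0 / h**2

Ik = sp.identity(k)
A = sp.kron(Ik, sp.csr_matrix(Tx)) + sp.kron(sp.csr_matrix(T), Ik)  # x-index fastest
E0 = sp.lil_matrix((k, k)); E0[0, 0] = 1.0
N = (2 * d / h) * sp.kron(Ik, sp.csr_matrix(E0))   # supported on Gamma_1 nodes only

B = np.zeros((k * k, 1))
B[(k - 1) * k:, 0] = 1.0 / h**2    # control data entering from the opposite edge
C = np.full((1, k * k), 1.0 / k**2)

print(A.shape, N.nnz)
```

Note how the uncertainty produces an N that is extremely sparse and of low effective rank, which is the structure the low-rank solvers exploit.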

85 Generalized ADI Iteration — Numerical Example: A Heat Transfer Model with Uncertainty
[Figure: convergence history of the bilinear low-rank ADI method (n = 40,000); relative and absolute residual vs. iteration.] Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 30/38

89 Solving Large-Scale Lyapunov-plus-Positive Equations — Generalizing the Extended Krylov Subspace Method (EKSM) [Simoncini 07]
Low-rank solutions of the Lyapunov-plus-positive equation may be obtained by projecting the original equation onto a suitable smaller subspace V = span(V), V ∈ R^{n×k} with V^T V = I. In more detail, solve
(V^T AV) X̂ + X̂ (V^T AV)^T + (V^T NV) X̂ (V^T NV)^T + (V^T B)(V^T B)^T = 0
and prolongate X ≈ V X̂ V^T. For this, one might use the extended Krylov subspace method (EKSM) in the following way:
V_1 = [B, A^{-1}B], V_r = [AV_{r−1}, A^{-1}V_{r−1}, NV_{r−1}], r = 2, 3, ...
However, criteria like dissipativity of A, which ensure solvability of the projected equation in the linear case, have to be further investigated. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 31/38
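A dense toy version of this projection approach (our own sketch: the projected Lyapunov-plus-positive equation is solved by brute force via its Kronecker form, which is only viable because the subspace stays small; subspace construction and sizes are ad hoc):

```python
# Projection sketch: grow an extended-Krylov-type subspace including N-directions,
# solve the small projected equation in Kronecker form, prolongate, check residual.
import numpy as np

def solve_projected(Ah, Nh, Bh):
    # dense solve of Ah X + X Ah^T + Nh X Nh^T + Bh Bh^T = 0 via vec()
    kk = Ah.shape[0]
    K = np.kron(np.eye(kk), Ah) + np.kron(Ah, np.eye(kk)) + np.kron(Nh, Nh)
    x = np.linalg.solve(K, -(Bh @ Bh.T).reshape(-1, order="F"))
    return x.reshape(kk, kk, order="F")

rng = np.random.default_rng(3)
n = 100
A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # dissipative for this seed
N = 0.02 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

V, _ = np.linalg.qr(np.hstack([B, np.linalg.solve(A, B)]))
for _ in range(2):
    W = np.hstack([V, A @ V, np.linalg.solve(A, V), N @ V])
    V, _ = np.linalg.qr(W)                 # 2 -> 8 -> 32 basis vectors

Xh = solve_projected(V.T @ A @ V, V.T @ N @ V, V.T @ B)
X = V @ Xh @ V.T
res = np.linalg.norm(A @ X + X @ A.T + N @ X @ N.T + B @ B.T) / np.linalg.norm(B.T @ B)
print(V.shape[1], res)
```

Since A here is dissipative and N is small, the projected equation is safely solvable; the open question stated above is precisely when this can be guaranteed in general.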

91 Bilinear EKSM — Residual Computation in O(k³)
Theorem (B./Breiten 2012): Let V_i ∈ R^{n×k_i} be the extended Krylov matrix after i generalized EKSM steps. Denote the residual associated with the approximate solution X_i = V_i X̂_i V_i^T by
R_i := AX_i + X_i A^T + N X_i N^T + BB^T,
where X̂_i is the solution of the reduced Lyapunov-plus-positive equation
V_i^T A V_i X̂_i + X̂_i V_i^T A^T V_i + V_i^T N V_i X̂_i V_i^T N^T V_i + V_i^T BB^T V_i = 0.
Then: range(R_i) ⊆ range(V_{i+1}), and ‖R_i‖ = ‖V_{i+1}^T R_i V_{i+1}‖ for the Frobenius and spectral norms.
Remarks: The residual evaluation only requires quantities needed in the (i+1)st projection step plus O(k³_{i+1}) operations. There is no Hessenberg structure of the reduced system matrix that would allow simplifying the residual expression as in the standard Lyapunov case! Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 32/38

92 Bilinear EKSM — Numerical Example: A Heat Transfer Model with Uncertainty
[Figure: convergence history of the bilinear EKSM variant (n = 6,400); relative and absolute residual vs. iteration.] Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 33/38

93 Solving Large-Scale Lyapunov-plus-Positive Equations — Tensorized Krylov Subspace Methods
Another possibility is to iteratively solve the linear system
(I_n ⊗ A + A ⊗ I_n + N ⊗ N) vec(X) = −vec(BB^T),
with a fixed number of ADI iteration steps used as a preconditioner M:
M^{-1}(I_n ⊗ A + A ⊗ I_n + N ⊗ N) vec(X) = −M^{-1} vec(BB^T).
We implemented this approach for PCG and BiCGstab. Updates like X_{k+1} = X_k + ω_k P_k require a truncation operator to preserve the low-rank structure. Note that the low-rank factorization X ≈ ZZ^T has to be replaced by X ≈ ZDZ^T, with D possibly indefinite. This is similar to more general tensorized Krylov solvers, see [Kressner/Tobler 2010/12]. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 34/38
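The Kronecker formulation can be tried directly with SciPy's matrix-free solvers at small scale. In this sketch (our own: a single-shift linear ADI step from X = 0 serves as M^{-1}, and no low-rank truncation is performed) the preconditioner is exact on the Lyapunov part when the shift matches the spectrum:

```python
# Matrix-free BiCGStab on (I (x) A + A (x) I + N (x) N) vec(X) = -vec(B B^T),
# preconditioned with one linear ADI step: M^{-1} r ~ -2p (A - pI)^{-1} R (A - pI)^{-T}.
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

rng = np.random.default_rng(5)
n = 30
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable for this seed
N = 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

def matvec(x):
    X = x.reshape(n, n, order="F")
    return (A @ X + X @ A.T + N @ X @ N.T).reshape(-1, order="F")

p = 2.0
Ainv = np.linalg.inv(A - p * np.eye(n))   # stand-in for a sparse factorization

def precond(r):
    R = r.reshape(n, n, order="F")
    return (-2 * p * Ainv @ R @ Ainv.T).reshape(-1, order="F")

L = LinearOperator((n * n, n * n), matvec=matvec)
M = LinearOperator((n * n, n * n), matvec=precond)
rhs = -(B @ B.T).reshape(-1, order="F")
x, info = bicgstab(L, rhs, M=M)
X = x.reshape(n, n, order="F")
res = np.linalg.norm(A @ X + X @ A.T + N @ X @ N.T + B @ B.T)
print(info, res)
```

At large scale every Krylov vector would itself be stored as a low-rank factorization ZDZ^T and truncated after each operation, which is the step omitted here.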

94 Tensorized Krylov Subspace Methods — Vanilla Implementation of Tensor-PCG for Matrix Equations

Algorithm 2: Preconditioned CG method for A(X) = B
Input: matrix functions A, M : R^{n×n} → R^{n×n}; low-rank factor B of the right-hand side B = BB^T; truncation operator T w.r.t. relative accuracy ε_rel.
Output: low-rank approximation X = LDL^T with ‖A(X) − B‖_F ≤ tol.
1: X_0 = 0, R_0 = B, Z_0 = M^{-1}(R_0), P_0 = Z_0, Q_0 = A(P_0), ξ_0 = ⟨P_0, Q_0⟩, k = 0
2: while ‖R_k‖_F > tol do
3:   ω_k = ⟨R_k, P_k⟩ / ξ_k
4:   X_{k+1} = X_k + ω_k P_k,  X_{k+1} ← T(X_{k+1})
5:   R_{k+1} = B − A(X_{k+1}),  optionally: R_{k+1} ← T(R_{k+1})
6:   Z_{k+1} = M^{-1}(R_{k+1})
7:   β_k = −⟨Z_{k+1}, Q_k⟩ / ξ_k
8:   P_{k+1} = Z_{k+1} + β_k P_k,  P_{k+1} ← T(P_{k+1})
9:   Q_{k+1} = A(P_{k+1}),  optionally: Q_{k+1} ← T(Q_{k+1})
10:  ξ_{k+1} = ⟨P_{k+1}, Q_{k+1}⟩
11:  k = k + 1
12: end while
13: X = X_k
Here, A : X ↦ AX + XA^T + NXN^T, and M is given by ℓ steps of (bilinear) ADI, both in low-rank (ZDZ^T) format. Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 35/38
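Stripped of truncation and preconditioning (T = id, M = id), Algorithm 2 reduces to plain matrix-valued CG in the Frobenius inner product. For CG to apply, the operator must be symmetric positive definite, so this sketch (our own: symmetric A and N with a weak bilinear term) runs CG on −L(X) = BB^T:

```python
# Matrix-valued CG for -L(X) = B B^T with L(X) = A X + X A^T + N X N^T,
# A = A^T stable and ||N|| small, so -L is symmetric positive definite
# in the Frobenius inner product <Y, Z> = trace(Y^T Z).
import numpy as np

rng = np.random.default_rng(4)
n = 40
S = rng.standard_normal((n, n))
A = -(S @ S.T) / n - np.eye(n)            # symmetric negative definite
G = rng.standard_normal((n, n))
N = 0.05 * (G + G.T)                      # symmetric, small norm
B = rng.standard_normal((n, 1))

op = lambda X: -(A @ X + X @ A.T + N @ X @ N.T)   # SPD operator
Rhs = B @ B.T

X = np.zeros((n, n))
R = Rhs - op(X)
P = R.copy()
rho = np.sum(R * R)                       # Frobenius inner product <R, R>
for _ in range(300):
    Q = op(P)
    alpha = rho / np.sum(P * Q)
    X += alpha * P
    R -= alpha * Q
    rho_new = np.sum(R * R)
    if np.sqrt(rho_new) <= 1e-12 * np.linalg.norm(Rhs):
        break
    P = R + (rho_new / rho) * P
    rho = rho_new

res = np.linalg.norm(A @ X + X @ A.T + N @ X @ N.T + B @ B.T) / np.linalg.norm(Rhs)
print(res)
```

In the low-rank version, every iterate X, R, P, Q is kept as ZDZ^T and the truncation T is applied after each update, exactly at the points marked in Algorithm 2.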

95 Comparison of Methods — Heat Equation with Boundary Control
Comparison of low-rank solution methods for n = 562,500.
[Figure: relative residual vs. iteration number and vs. rank of X_k, for bilinear ADI with 6/8/10 H_2-optimal shifts, bilinear ADI with 4 Wachspress shifts, bilinear EKSM, and CG with bilinear ADI preconditioning.] Max Planck Institute Magdeburg P. Benner, Numerical Solution of Matrix Equations 36/38


Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

On the -Sylvester Equation AX ± X B = C

On the -Sylvester Equation AX ± X B = C Department of Applied Mathematics National Chiao Tung University, Taiwan A joint work with E. K.-W. Chu and W.-W. Lin Oct., 20, 2008 Outline 1 Introduction 2 -Sylvester Equation 3 Numerical Examples 4

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation

Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation , July 1-3, 15, London, U.K. Model Order Reduction of Continuous LI Large Descriptor System Using LRCF-ADI and Square Root Balanced runcation Mehrab Hossain Likhon, Shamsil Arifeen, and Mohammad Sahadet

More information

Efficient iterative algorithms for linear stability analysis of incompressible flows

Efficient iterative algorithms for linear stability analysis of incompressible flows IMA Journal of Numerical Analysis Advance Access published February 27, 215 IMA Journal of Numerical Analysis (215) Page 1 of 21 doi:1.193/imanum/drv3 Efficient iterative algorithms for linear stability

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Denis ARZELIER arzelier

Denis ARZELIER   arzelier COURSE ON LMI OPTIMIZATION WITH APPLICATIONS IN CONTROL PART II.2 LMIs IN SYSTEMS CONTROL STATE-SPACE METHODS PERFORMANCE ANALYSIS and SYNTHESIS Denis ARZELIER www.laas.fr/ arzelier arzelier@laas.fr 15

More information

An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems

An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems An Arnoldi Method for Nonlinear Symmetric Eigenvalue Problems H. Voss 1 Introduction In this paper we consider the nonlinear eigenvalue problem T (λ)x = 0 (1) where T (λ) R n n is a family of symmetric

More information

Computational methods for large-scale linear matrix equations and application to FDEs. V. Simoncini

Computational methods for large-scale linear matrix equations and application to FDEs. V. Simoncini Computational methods for large-scale linear matrix equations and application to FDEs V. Simoncini Dipartimento di Matematica, Università di Bologna valeria.simoncini@unibo.it Joint work with: Tobias Breiten,

More information

Model reduction of coupled systems

Model reduction of coupled systems Model reduction of coupled systems Tatjana Stykel Technische Universität Berlin ( joint work with Timo Reis, TU Kaiserslautern ) Model order reduction, coupled problems and optimization Lorentz Center,

More information

1 Sylvester equations

1 Sylvester equations 1 Sylvester equations Notes for 2016-11-02 The Sylvester equation (or the special case of the Lyapunov equation) is a matrix equation of the form AX + XB = C where A R m m, B R n n, B R m n, are known,

More information

Lecture 3: Inexact inverse iteration with preconditioning

Lecture 3: Inexact inverse iteration with preconditioning Lecture 3: Department of Mathematical Sciences CLAPDE, Durham, July 2008 Joint work with M. Freitag (Bath), and M. Robbé & M. Sadkane (Brest) 1 Introduction 2 Preconditioned GMRES for Inverse Power Method

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Solving Large Scale Lyapunov Equations

Solving Large Scale Lyapunov Equations Solving Large Scale Lyapunov Equations D.C. Sorensen Thanks: Mark Embree John Sabino Kai Sun NLA Seminar (CAAM 651) IWASEP VI, Penn State U., 22 May 2006 The Lyapunov Equation AP + PA T + BB T = 0 where

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

Model reduction for linear systems by balancing

Model reduction for linear systems by balancing Model reduction for linear systems by balancing Bart Besselink Jan C. Willems Center for Systems and Control Johann Bernoulli Institute for Mathematics and Computer Science University of Groningen, Groningen,

More information

On ADI Method for Sylvester Equations

On ADI Method for Sylvester Equations On ADI Method for Sylvester Equations Ninoslav Truhar Ren-Cang Li Technical Report 2008-02 http://www.uta.edu/math/preprint/ On ADI Method for Sylvester Equations Ninoslav Truhar Ren-Cang Li February 8,

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Model Reduction for Unstable Systems

Model Reduction for Unstable Systems Model Reduction for Unstable Systems Klajdi Sinani Virginia Tech klajdi@vt.edu Advisor: Serkan Gugercin October 22, 2015 (VT) SIAM October 22, 2015 1 / 26 Overview 1 Introduction 2 Interpolatory Model

More information

Daniel B. Szyld and Jieyong Zhou. Report June 2016

Daniel B. Szyld and Jieyong Zhou. Report June 2016 Numerical solution of singular semi-stable Lyapunov equations Daniel B. Szyld and Jieyong Zhou Report 16-06-02 June 2016 Department of Mathematics Temple University Philadelphia, PA 19122 This report is

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

On solving linear systems arising from Shishkin mesh discretizations

On solving linear systems arising from Shishkin mesh discretizations On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October

More information

A Continuation Approach to a Quadratic Matrix Equation

A Continuation Approach to a Quadratic Matrix Equation A Continuation Approach to a Quadratic Matrix Equation Nils Wagner nwagner@mecha.uni-stuttgart.de Institut A für Mechanik, Universität Stuttgart GAMM Workshop Applied and Numerical Linear Algebra September

More information

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES

ITERATIVE METHODS BASED ON KRYLOV SUBSPACES ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method

More information

6.241 Dynamic Systems and Control

6.241 Dynamic Systems and Control 6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May

More information

Taylor expansions for the HJB equation associated with a bilinear control problem

Taylor expansions for the HJB equation associated with a bilinear control problem Taylor expansions for the HJB equation associated with a bilinear control problem Tobias Breiten, Karl Kunisch and Laurent Pfeiffer University of Graz, Austria Rome, June 217 Motivation dragged Brownian

More information

On a quadratic matrix equation associated with an M-matrix

On a quadratic matrix equation associated with an M-matrix Article Submitted to IMA Journal of Numerical Analysis On a quadratic matrix equation associated with an M-matrix Chun-Hua Guo Department of Mathematics and Statistics, University of Regina, Regina, SK

More information

arxiv: v3 [math.na] 9 Oct 2016

arxiv: v3 [math.na] 9 Oct 2016 RADI: A low-ran ADI-type algorithm for large scale algebraic Riccati equations Peter Benner Zvonimir Bujanović Patric Kürschner Jens Saa arxiv:50.00040v3 math.na 9 Oct 206 Abstract This paper introduces

More information

Numerical Methods for Large Scale Eigenvalue Problems

Numerical Methods for Large Scale Eigenvalue Problems MAX PLANCK INSTITUTE Summer School in Trogir, Croatia Oktober 12, 2011 Numerical Methods for Large Scale Eigenvalue Problems Patrick Kürschner Max Planck Institute for Dynamics of Complex Technical Systems

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

Hierarchical Parallel Solution of Stochastic Systems

Hierarchical Parallel Solution of Stochastic Systems Hierarchical Parallel Solution of Stochastic Systems Second M.I.T. Conference on Computational Fluid and Solid Mechanics Contents: Simple Model of Stochastic Flow Stochastic Galerkin Scheme Resulting Equations

More information

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition

Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Applied Mathematical Sciences, Vol. 4, 2010, no. 1, 21-30 Approximate Low Rank Solution of Generalized Lyapunov Matrix Equations via Proper Orthogonal Decomposition Amer Kaabi Department of Basic Science

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

IDR(s) A family of simple and fast algorithms for solving large nonsymmetric systems of linear equations

IDR(s) A family of simple and fast algorithms for solving large nonsymmetric systems of linear equations IDR(s) A family of simple and fast algorithms for solving large nonsymmetric systems of linear equations Harrachov 2007 Peter Sonneveld en Martin van Gijzen August 24, 2007 1 Delft University of Technology

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems Max Planck Institute Magdeburg Preprints Mian Ilyas Ahmad Ulrike Baur Peter Benner Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

H 2 optimal model reduction - Wilson s conditions for the cross-gramian

H 2 optimal model reduction - Wilson s conditions for the cross-gramian H 2 optimal model reduction - Wilson s conditions for the cross-gramian Ha Binh Minh a, Carles Batlle b a School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Dai

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 4. Monotonicity of the Lanczos Method Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

FEL3210 Multivariable Feedback Control

FEL3210 Multivariable Feedback Control FEL3210 Multivariable Feedback Control Lecture 8: Youla parametrization, LMIs, Model Reduction and Summary [Ch. 11-12] Elling W. Jacobsen, Automatic Control Lab, KTH Lecture 8: Youla, LMIs, Model Reduction

More information

Balancing-Related Model Reduction for Large-Scale Systems

Balancing-Related Model Reduction for Large-Scale Systems Balancing-Related Model Reduction for Large-Scale Systems Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz D-09107 Chemnitz benner@mathematik.tu-chemnitz.de

More information