Solving Large-Scale Matrix Equations: Recent Progress and New Applications


1 19th Conference of the International Linear Algebra Society (ILAS) Seoul, August 6–9, 2014 Solving Large-Scale Matrix Equations: Recent Progress and New Applications Peter Benner Max Planck Institute for Dynamics of Complex Technical Systems Computational Methods in Systems and Control Theory Magdeburg, Germany benner@mpi-magdeburg.mpg.de Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 1/52

2 Overview 1 Introduction 2 Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations 5 References Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 2/52

3 Overview 1 Introduction Classification of Linear Matrix Equations Existence and Uniqueness of Solutions 2 Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations 5 References Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 3/52

5 Introduction Linear Matrix Equations/Men with Beards Sylvester equation: James Joseph Sylvester (September 3, 1814 – March 15, 1897), AX + XB = C. Lyapunov equation: Alexander Michailowitsch Ljapunow (June 6, 1857 – November 3, 1918), AX + XA^T = C, C = C^T. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 4/52

10 Introduction Generalizations of Sylvester (AX + XB = C) and Lyapunov (AX + XA^T = C) Equations Generalized Sylvester equation: AXD + EXB = C. Generalized Lyapunov equation: AXE^T + EXA^T = C, C = C^T. Stein equation: X − AXB = C. (Generalized) discrete Lyapunov/Stein equation: EXE^T − AXA^T = C, C = C^T. Note: Consider only regular cases, having a unique solution! Solutions of symmetric cases are symmetric, X = X^T ∈ R^{n×n}; otherwise, X ∈ R^{n×l} with n ≠ l in general. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 5/52

13 Introduction Generalizations of Sylvester (AX + XB = C) and Lyapunov (AX + XA^T = C) Equations Bilinear Lyapunov equation/Lyapunov-plus-positive equation: AX + XA^T + Σ_{k=1}^m N_k X N_k^T = C, C = C^T. Bilinear Sylvester equation: AX + XB + Σ_{k=1}^m N_k X M_k = C. (Generalized) discrete bilinear Lyapunov/Stein-minus-positive equation: EXE^T − AXA^T − Σ_{k=1}^m N_k X N_k^T = C, C = C^T. Note: Again consider only regular cases; symmetric equations have symmetric solutions. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 5/52

18 Introduction Existence of Solutions of Linear Matrix Equations I Exemplarily, consider the generalized Sylvester equation AXD + EXB = C. (1) Vectorization (using the Kronecker product) gives the representation as a linear system: (D^T ⊗ A + B^T ⊗ E) vec(X) = vec(C), i.e., 𝒜x = c with 𝒜 := D^T ⊗ A + B^T ⊗ E, x := vec(X), c := vec(C). ⟹ (1) has a unique solution ⟺ 𝒜 is nonsingular. Lemma: Λ(𝒜) = {α_j + β_k | α_j ∈ Λ(A, E), β_k ∈ Λ(B, D)}. Hence, (1) has a unique solution ⟺ Λ(A, E) ∩ (−Λ(B, D)) = ∅. Example: the Lyapunov equation AX + XA^T = C has a unique solution ⟺ there is no µ ∈ C with ±µ ∈ Λ(A). Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 6/52
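A small numerical illustration of the vectorized form above (a Python/NumPy sketch; the matrix sizes and random data are arbitrary). It builds the Kronecker-product matrix D^T ⊗ A + B^T ⊗ E explicitly, which is only feasible for tiny n, m:

import numpy as np

# Tiny dense illustration of the vectorized form
# (D^T (x) A + B^T (x) E) vec(X) = vec(C)  of  A X D + E X B = C.
rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.standard_normal((n, n))
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
D = np.eye(m) + 0.1 * rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

K = np.kron(D.T, A) + np.kron(B.T, E)            # the matrix called "script A" above
x = np.linalg.solve(K, C.flatten(order="F"))     # vec() = column-major stacking
X = x.reshape((n, m), order="F")

print(np.linalg.norm(A @ X @ D + E @ X @ B - C))  # ~ 1e-14: X solves (1)

For large n, m this matrix has dimension nm × nm, which is exactly why the low-rank methods discussed later avoid forming it.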

21 Introduction The Classical Lyapunov Theorem Theorem (Lyapunov 1892) Let A ∈ R^{n×n} and consider the Lyapunov operator L : X ↦ AX + XA^T. Then the following are equivalent: (a) ∀ Y > 0 ∃ X > 0: L(X) = −Y, (b) ∃ Y > 0 ∃ X > 0: L(X) = −Y, (c) Λ(A) ⊂ C^− := {z ∈ C | Re z < 0}, i.e., A is (asymptotically) stable or Hurwitz. The proof (c) ⟹ (a) is trivial from the necessary and sufficient condition for existence and uniqueness, apart from the positive definiteness. The latter is shown by studying z^H Y z for all eigenvectors z of A. Important in applications, the nonnegative case: L(X) = AX + XA^T = −WW^T, where W ∈ R^{n×n_W}, n_W ≪ n. A Hurwitz ⟹ unique solution X = ZZ^T for Z ∈ R^{n×n_X} with 1 ≤ n_X ≤ n. A. M. Lyapunov. The General Problem of the Stability of Motion (in Russian). Doctoral dissertation, Univ. Kharkov, 1892. English translation: Stability of Motion, Academic Press, New York & London, 1966. P. Lancaster, M. Tismenetsky. The Theory of Matrices (2nd edition). Academic Press, Orlando, FL, 1985. [Chapter 13] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 7/52

25 Introduction Existence of Solutions of Linear Matrix Equations II For Lyapunov-plus-positive-type equations, the solution theory is more involved. Focus on the Lyapunov-plus-positive case: AX + XA^T + Σ_{k=1}^m N_k X N_k^T = C, C = C^T ≤ 0, with L(X) := AX + XA^T and P(X) := Σ_{k=1}^m N_k X N_k^T. Note: The operator P(X) = Σ_{k=1}^m N_k X N_k^T is nonnegative in the sense that P(X) ≥ 0 whenever X ≥ 0. This nonnegative Lyapunov-plus-positive equation is the one occurring in applications like model order reduction. If A is Hurwitz and the N_k are small enough, eigenvalue perturbation theory yields existence and uniqueness of the solution. This is related to the concept of bounded-input bounded-output (BIBO) stability of dynamical systems. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 8/52

26 Introduction Existence of Solutions of Linear Matrix Equations II Theorem (Schneider 1965, Damm 2004) Let A ∈ R^{n×n}, consider the Lyapunov operator L : X ↦ AX + XA^T and a nonnegative operator P (i.e., P(X) ≥ 0 if X ≥ 0). The following are equivalent: (a) ∀ Y > 0 ∃ X > 0: L(X) + P(X) = −Y, (b) ∃ Y > 0 ∃ X > 0: L(X) + P(X) = −Y, (c) ∃ Y ≥ 0 with (A, Y) controllable, ∃ X > 0: L(X) + P(X) = −Y, (d) Λ(L + P) ⊂ C^− := {z ∈ C | Re z < 0}, (e) Λ(L) ⊂ C^− and ρ(L^{−1} P) < 1, where ρ(T) = max{|λ| : λ ∈ Λ(T)} is the spectral radius of T. T. Damm. Rational Matrix Equations in Stochastic Control. Number 297 in Lecture Notes in Control and Information Sciences. Springer-Verlag, 2004. H. Schneider. Positive operators and an inertia theorem. Numerische Mathematik, 7:11–17, 1965. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 8/52
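Condition (e) can be checked directly for small problems by forming the matrix representations of L and P (a Python/NumPy sketch; the example matrices are made up and scaled so that A is very likely Hurwitz):

import numpy as np

# Check condition (e) for a small example:
# L(X) = AX + XA^T       ->  I (x) A + A (x) I   (column-major vec),
# P(X) = sum_k N_k X N_k^T  ->  sum_k N_k (x) N_k.
rng = np.random.default_rng(1)
n, m = 8, 2
A = -2.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))    # very likely Hurwitz
Ns = [0.3 * rng.standard_normal((n, n)) for _ in range(m)]

L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
P = sum(np.kron(N, N) for N in Ns)

lyap_stable = np.max(np.linalg.eigvals(L).real) < 0
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L, P))))
print(lyap_stable, rho)   # a unique X >= 0 exists if lyap_stable and rho < 1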

27 Overview 1 Introduction 2 Applications Stability Theory Biochemical Engineering Fractional Differential Equations Some Classical Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations 5 References Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 9/52

28 Applications Stability Theory I Classical From Lyapunov s theorem, immediately obtain characterization of asymptotic stability of linear dynamical systems ẋ(t) = Ax(t). (2) Theorem (Lyapunov) The following are equivalent: For (2), the zero state is asymptotically stable. The Lyapunov equation AX + XA T = Y has a unique solution X = X T > 0 for all Y = Y T < 0. A is Hurwitz. A. M. Lyapunov. The General Problem of the Stability of Motion (In Russian). Doctoral dissertation, Univ. Kharkov English translation: Stability of Motion, Academic Press, New-York & London, Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 10/52
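The theorem suggests a simple numerical stability certificate: solve one Lyapunov equation with a negative definite right-hand side and test the solution for positive definiteness. A small sketch using SciPy's dense Lyapunov solver (the example matrix is random and merely assumed to be Hurwitz):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))    # presumably Hurwitz

X = solve_continuous_lyapunov(A, -np.eye(n))          # solves AX + XA^T = Y with Y = -I < 0
eig_min = np.min(np.linalg.eigvalsh((X + X.T) / 2))
print("X > 0:", eig_min > 0)                          # certificate of asymptotic stability
print("max Re(lambda(A)):", np.max(np.linalg.eigvals(A).real))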

29 Applications Stability Theory II Detecting Hopf Bifurcations Detecting instability in large-scale dynamical systems caused by Hopf bifurcations identifying the rightmost pair of complex eigenvalues of large sparse generalized eigenvalue problems. [Meerbergen/Spence 2010] suggest Lyapunov inverse iteration for the dynamical system with parameter µ R Mx t = f (x; µ). Task: Identify critical points (x, µ ) where the steady-state solution (i.e., x t 0) changes from being stable to unstable. Their continuation algorithm involves solution of generalized Lyapunov equation AX j+1 M T + MX j+1 A T = F j F (X j ), where A = D xf ( x; µ) and ( x; µ) is current estimate of critical point. K. Meerbergen, A. Spence. Inverse iteration for purely imaginary eigenvalues with application to the detection of Hopf bifurcations in large-scale problems. SIAM Journal on Matrix Analysis and Applications, 31: , H.C. Elman, K. Meerbergen, A. Spence, M. Wu. Lyapunov inverse iteration for identifying Hopf bifurcations in models of incompressible flow. SIAM Journal on Scientific Computing, 34(3):A1584-A1606, Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 10/52

30 Applications Stability Theory III Metastable Equilibria of Stochastic Systems Metastable states of stochastic processes. Figure: Metastable states (red dashed) and path of a 1-dimensional stochastic ODE. This is Fig. 2.2(c) of [Kuehn 2012]. C. Kuehn. Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids. SIAM Journal on Scientific Computing, 34(3):A1635–A1658, 2012. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 10/52

31 Applications Stability Theory III Metastable Equilibria of Stochastic Systems Tracking (w.r.t. a parameter µ ∈ R) metastable equilibrium points of stochastic differential equations (SDEs) via continuation methods: Let x ∈ R^n and consider the SDE dx_t = f(x_t; µ)dt + σ F(x_t; µ)dW_t, where W_t is a k-dimensional Brownian motion, σ > 0 controls the noise level, and f, F are sufficiently smooth. For metastable equilibrium points x* := x*(µ), stochastic paths with high probability stay in regions characterized by the covariance matrix C of x_t, linearized at x*, defined by the Lyapunov equation A(x*; µ)C + CA(x*; µ)^T + σ² F(x*; µ)F(x*; µ)^T = 0, where A(x; µ) := (D_x f)(x; µ). C. Kuehn. Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids. SIAM Journal on Scientific Computing, 34(3):A1635–A1658, 2012. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 10/52

32 Applications Biochemical Engineering Biochemical reaction networks under certain assumptions can be described by ċ(t) = Sv(c(t), q), (2) where S ∈ R^{n×m} is the stoichiometric matrix, c(t) ∈ R^n denotes the species concentrations, v(t) ∈ R^m the reaction rates, and q the rate constants. In order to take molecular fluctuations (or intrinsic noise) due to the stochasticity of the biochemical reactions into account, one needs the covariance matrix C ∈ R^{n×n} of the concentrations. With the diffusion matrix D ∈ R^{n×n} reflecting the randomness of the reaction events, and the drift matrix A = v_c(c^0) ∈ R^{n×n} denoting the Jacobian of (2) along the macroscopic state trajectory at the equilibrium state c^0, C is determined by the Lyapunov equation AC + CA^T + D = 0. P. Kügler, W. Yang. Identification of alterations in the Jacobian of biochemical reaction networks from steady state covariance data at two conditions. Journal of Mathematical Biology, 68. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 11/52

35 Applications Fractional Differential Equations Fractional partial differential equations have received recent interest in various fields, e.g., viscoelasticity (e.g., the Kelvin-Voigt fractional derivative model), image processing, electro-analytical chemistry, biomedical engineering. Definition (Caputo derivative) Given f ∈ C^n(a, b) and α ∈ [n−1, n), the Caputo derivative of real order α is defined by ^C_a D_t^α f(t) := 1/Γ(n−α) ∫_a^t f^(n)(s)/(t−s)^{α−n+1} ds. Definition (Riemann-Liouville derivative) Given integrable f(t) with t ∈ [a, b] and β ∈ [n−1, n), the left-sided Riemann-Liouville derivative of real order β is defined by ^{RL}_a D_t^β f(t) := 1/Γ(n−β) (d/dt)^n ∫_a^t f(s)/(t−s)^{β−n+1} ds. T. Breiten, V. Simoncini, M. Stoll. Fast iterative solvers for fractional differential equations. Max Planck Institute Magdeburg Preprints MPIMD/14-02, January 2014. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 12/52

37 Applications Fractional Differential Equations Consider the fractional heat equation ^C_0 D_t^α u(x, t) − ^{RL}_a D_x^β u(x, t) = f(x, t). For the discretization use the Grünwald-Letnikov formula (β ∈ (1, 2)): ^{RL}_a D_x^β u(x, t) = lim_{M→∞} 1/h^β Σ_{k=0}^M g_{β,k} u(x − (k−1)h, t), and as an approximation get ^{RL}_a D_x^β u_i^{n+1} ≈ 1/h_x^β Σ_{k=0}^{i+1} g_{β,k} u^{n+1}_{i−k+1}, i.e., in (Toeplitz) matrix form 1/h_x^β T_β^{n_x} u^{n+1}, where T_β^{n_x} is the lower Hessenberg Toeplitz matrix with first column (g_{β,1}, g_{β,2}, ..., g_{β,n_x})^T and first row (g_{β,1}, g_{β,0}, 0, ..., 0), applied to u^{n+1} = (u_1^{n+1}, ..., u_{n_x}^{n+1})^T. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 12/52

38 Applications Fractional Differential Equations For the fractional heat equation ^C_0 D_t^α u(x, t) − ^{RL}_a D_x^β u(x, t) = f(x, t) we get (T_α^{n_t} ⊗ I_{n_x} − I_{n_t} ⊗ L_β^{n_x}) u = f, where T_α^{n_t} and L_β^{n_x} are Toeplitz matrices. With u = vec(U) and dropping all sub- and superscripts this corresponds to the Sylvester equation U T_α^T − L_β U = F. T. Breiten, V. Simoncini, M. Stoll. Fast iterative solvers for fractional differential equations. Max Planck Institute Magdeburg Preprints MPIMD/14-02, January 2014. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 12/52
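A sketch (Python/SciPy) mirroring this construction: it builds the spatial Grünwald-Letnikov Toeplitz matrix from the weights g_{β,k} and solves the resulting Sylvester equation with a dense solver. The temporal matrix T_alpha below is only a stand-in lower-triangular Toeplitz matrix (the actual Caputo weights from the reference are not reproduced), so the numbers are purely illustrative:

import numpy as np
from scipy.linalg import toeplitz, solve_sylvester

def gl_weights(beta, K):
    # g_{beta,k} = (-1)^k * binom(beta, k), via the standard recursion
    g = np.empty(K)
    g[0] = 1.0
    for k in range(1, K):
        g[k] = g[k - 1] * (k - 1.0 - beta) / k
    return g

nx, nt, beta = 60, 40, 1.5
hx = 1.0 / (nx + 1)
g = gl_weights(beta, nx + 1)
# T_beta^{nx}: first column (g_1, g_2, ..., g_nx), first row (g_1, g_0, 0, ..., 0)
L_beta = toeplitz(g[1:], np.r_[g[1], g[0], np.zeros(nx - 2)]) / hx**beta

rng = np.random.default_rng(3)
T_alpha = toeplitz(np.r_[1.0, -0.5 * rng.random(nt - 1)], np.zeros(nt))  # stand-in matrix
F = rng.standard_normal((nx, nt))

# (T_alpha (x) I - I (x) L_beta) u = f   <=>   U T_alpha^T - L_beta U = F
U = solve_sylvester(-L_beta, T_alpha.T, F)
print(np.linalg.norm(U @ T_alpha.T - L_beta @ U - F))   # ~ machine precision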

40 Some Classical Applications Algebraic Riccati Equations (ARE) Solving AREs by Newton's Method Feedback control design often involves the solution of A^T X + XA − XGX + H = 0, G = G^T, H = H^T. In each Newton step, solve the Lyapunov equation (A − GX_j)^T X_{j+1} + X_{j+1}(A − GX_j) = −X_j G X_j − H. Decoupling of dynamical systems, e.g., into slow/fast modes, requires the solution of the nonsymmetric ARE AX + XF − XGX + H = 0. In each Newton step, solve the Sylvester equation (A − X_j G)X_{j+1} + X_{j+1}(F − GX_j) = −X_j G X_j − H. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 13/52
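A minimal dense sketch of the resulting Newton (Newton-Kleinman) iteration for the symmetric ARE, with every step solved by SciPy's Lyapunov solver; the data are random, and A is assumed stable so that X_0 = 0 is an admissible starting guess:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Newton's method for A^T X + X A - X G X + H = 0: each step solves
# (A - G X_j)^T X_{j+1} + X_{j+1} (A - G X_j) = -X_j G X_j - H.
rng = np.random.default_rng(4)
n = 30
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))   # assumed stable
B = rng.standard_normal((n, 2));  G = B @ B.T
C = rng.standard_normal((1, n));  H = C.T @ C

X = np.zeros((n, n))
for _ in range(20):
    Ak = A - G @ X
    X = solve_continuous_lyapunov(Ak.T, -X @ G @ X - H)
    res = np.linalg.norm(A.T @ X + X @ A - X @ G @ X + H)
    if res < 1e-10:
        break
print(res)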

41 Some Classical Applications Model Reduction Model Reduction via Balanced Truncation For the linear dynamical system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), x(t) ∈ R^n, find a reduced-order system ẋ_r(t) = A_r x_r(t) + B_r u(t), y_r(t) = C_r x_r(t), x_r(t) ∈ R^r, r ≪ n, such that ‖y(t) − y_r(t)‖ < δ. The popular method of balanced truncation requires the solution of the dual Lyapunov equations AX + XA^T + BB^T = 0, A^T Y + YA + C^T C = 0. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 13/52
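A minimal dense sketch of square-root balanced truncation built on the two Gramian equations above (Python/NumPy; the names T1, W1, the small regularization shift, and the test data are illustrative choices, not the implementation of any particular package):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

rng = np.random.default_rng(5)
n, r = 40, 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))           # assumed stable
B = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))

X = solve_continuous_lyapunov(A, -B @ B.T)                    # controllability Gramian
Y = solve_continuous_lyapunov(A.T, -C.T @ C)                  # observability Gramian
X, Y = (X + X.T) / 2, (Y + Y.T) / 2

S = np.linalg.cholesky(X + 1e-10 * np.eye(n))                 # X ~ S S^T (shift for safety)
R = np.linalg.cholesky(Y + 1e-10 * np.eye(n))                 # Y ~ R R^T
U, sigma, Vt = svd(R.T @ S)                                   # Hankel singular values
T1 = S @ Vt[:r].T / np.sqrt(sigma[:r])                        # projection matrices
W1 = R @ U[:, :r] / np.sqrt(sigma[:r])

Ar, Br, Cr = W1.T @ A @ T1, W1.T @ B, C @ T1                  # reduced-order model
print(sigma[: r + 2])                                         # decay of the HSVs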

42 Overview This part: joint work with Patrick Kürschner and Jens Saak (MPI Magdeburg) 1 Introduction 2 Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations Some Basics LR-ADI Derivation Low-Rank Structure of the Residual Realification of LR-ADI Self-generating Shifts The New LR-ADI Applied to Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations 5 References Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 14/52

43 Solving Large-Scale Sylvester and Lyapunov Equations The Low-Rank Structure Sylvester Equations Find X ∈ R^{n×m} solving AX − XB = FG^T, where A ∈ R^{n×n}, B ∈ R^{m×m}, F ∈ R^{n×r}, G ∈ R^{m×r}. If n, m are large, but r ≪ n, m ⟹ X has a small numerical rank, rank(X, τ) = f ≪ min(n, m). [Penzl 1999, Grasedyck 2004, Antoulas/Sorensen/Zhou 2002] [Plot: singular values σ(X) of an example solution, showing rapid decay.] ⟹ Compute low-rank solution factors Z ∈ R^{n×f}, Y ∈ R^{m×f}, D ∈ R^{f×f}, such that X ≈ ZDY^T with f ≪ min(n, m). Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 15/52

44 Solving Large-Scale Sylvester and Lyapunov Equations The Low-Rank Structure Lyapunov Equations Find X ∈ R^{n×n} solving AX + XA^T = −FF^T, where A ∈ R^{n×n}, F ∈ R^{n×r}. If n is large, but r ≪ n ⟹ X has a small numerical rank, rank(X, τ) = f ≪ n. [Penzl 1999, Grasedyck 2004, Antoulas/Sorensen/Zhou 2002] [Plot: singular values σ(X) of an example solution, showing rapid decay.] ⟹ Compute low-rank solution factors Z ∈ R^{n×f}, D ∈ R^{f×f}, such that X ≈ ZDZ^T with f ≪ n. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 15/52
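The rapid singular value decay can be observed directly on a small dense example (a sketch; the stable test matrix and the tolerance 1e-12 are arbitrary choices):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable A and F with few columns, the singular values of the solution
# of AX + XA^T = -FF^T decay rapidly, so X ~ Z D Z^T with f << n.
rng = np.random.default_rng(6)
n, r = 400, 2
A = -np.diag(np.linspace(1.0, 100.0, n)) + np.triu(rng.standard_normal((n, n)), 1) / n
F = rng.standard_normal((n, r))

X = solve_continuous_lyapunov(A, -F @ F.T)
s = np.linalg.svd(X, compute_uv=False)
print(np.sum(s / s[0] > 1e-12))          # numerical rank is far below n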

50 Solving Large-Scale Sylvester and Lyapunov Equations Some Basics The Sylvester equation AX − XB = FG^T is equivalent to the linear system of equations (I_m ⊗ A − B^T ⊗ I_n) vec(X) = vec(FG^T). This cannot be used for numerical solution unless nm ≈ 100 (or so), as it requires O(n²m²) of storage; a direct solver needs O(n³m³) flops; the low (tensor-)rank of the right-hand side is ignored; in the Lyapunov case, symmetry and possible definiteness are not respected. Possible solvers: Standard Krylov subspace solvers in operator form [Hochbruck, Starke, Reichel, Bao, ...]. Block-Tensor-Krylov subspace methods with truncation [Kressner/Tobler, Bollhöfer/Eppler, B./Breiten, ...]. Galerkin-type methods based on (extended, rational) Krylov subspace methods [Jaimoukha, Kasenally, Jbilou, Simoncini, Druskin, Knizhnerman, ...]. Doubling-type methods [Smith, Chu et al., B./Sadkane/El Khoury, ...]. ADI methods [Wachspress, Reichel et al., Li, Penzl, B., Saak, Kürschner, ...]. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 16/52

52 Solving Large-Scale Sylvester and Lyapunov Equations LR-ADI Derivation Sylvester and Stein equations Let α_k ≠ β_k with α_k ∉ Λ(B), β_k ∉ Λ(A). Then the Sylvester equation AX − XB = FG^T is equivalent to the Stein equation X = 𝒜_k X ℬ_k + (β_k − α_k) ℱ_k 𝒢_k^H with the Cayley-like transformations 𝒜_k := (A − β_k I_n)^{−1}(A − α_k I_n), ℬ_k := (B − α_k I_m)^{−1}(B − β_k I_m), ℱ_k := (A − β_k I_n)^{−1}F, 𝒢_k := (B − α_k I_m)^{−H}G. With a fixed shift pair this yields the fixed point iteration X_k = 𝒜 X_{k−1} ℬ + (β − α) ℱ 𝒢^H; with varying shifts it gives the alternating directions implicit (ADI) iteration X_k = 𝒜_k X_{k−1} ℬ_k + (β_k − α_k) ℱ_k 𝒢_k^H for k ≥ 1, X_0 ∈ R^{n×m}. [Wachspress 1988] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 17/52

55 Solving Large-Scale Sylvester and Lyapunov Equations LR-ADI Derivation Sylvester ADI iteration [Wachspress 1988]: X_k = 𝒜_k X_{k−1} ℬ_k + (β_k − α_k) ℱ_k 𝒢_k^H, with 𝒜_k := (A − β_k I_n)^{−1}(A − α_k I_n), ℬ_k := (B − α_k I_m)^{−1}(B − β_k I_m), ℱ_k := (A − β_k I_n)^{−1}F ∈ C^{n×r}, 𝒢_k := (B − α_k I_m)^{−H}G ∈ C^{m×r}. Now set X_0 = 0 and find a factorization X_k = Z_k D_k Y_k^H: X_1 = (β_1 − α_1)(A − β_1 I_n)^{−1} F G^T (B − α_1 I_m)^{−1} ⟹ V_1 := Z_1 = (A − β_1 I_n)^{−1}F ∈ C^{n×r}, D_1 = (β_1 − α_1)I_r ∈ C^{r×r}, W_1 := Y_1 = (B − α_1 I_m)^{−H}G ∈ C^{m×r}. X_2 = 𝒜_2 X_1 ℬ_2 + (β_2 − α_2) ℱ_2 𝒢_2^H = ... ⟹ V_2 = V_1 + (β_2 − α_1)(A − β_2 I_n)^{−1} V_1 ∈ C^{n×r}, W_2 = W_1 + (α_2 − β_1)(B − α_2 I_m)^{−H} W_1 ∈ C^{m×r}, Z_2 = [Z_1, V_2], D_2 = diag(D_1, (β_2 − α_2)I_r), Y_2 = [Y_1, W_2]. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 18/52

56 Solving Large-Scale Sylvester and Lyapunov Equations LR-ADI Algorithm [B. 2005, Li/Truhar 2008, B./Li/Truhar 2009] Algorithm 1: Low-rank Sylvester ADI / factored ADI (fadi) Input: Matrices defining AX − XB = FG^T and shift parameters {α_1, ..., α_{kmax}}, {β_1, ..., β_{kmax}}. Output: Z, D, Y such that ZDY^H ≈ X. 1: Z_1 = V_1 = (A − β_1 I_n)^{−1}F. 2: Y_1 = W_1 = (B − α_1 I_m)^{−H}G. 3: D_1 = (β_1 − α_1)I_r. 4: for k = 2, ..., k_max do 5: V_k = V_{k−1} + (β_k − α_{k−1})(A − β_k I_n)^{−1} V_{k−1}. 6: W_k = W_{k−1} + (α_k − β_{k−1})(B − α_k I_m)^{−H} W_{k−1}. 7: Update the solution factors Z_k = [Z_{k−1}, V_k], Y_k = [Y_{k−1}, W_k], D_k = diag(D_{k−1}, (β_k − α_k)I_r). Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 19/52

57 Solving Large-Scale Sylvester and Lyapunov Equations ADI Shifts Optimal Shifts Solution of the rational optimization problem min_{α_j, β_j ∈ C} max_{λ ∈ Λ(A), µ ∈ Λ(B)} ∏_{j=1}^k |(λ − α_j)(µ − β_j)| / |(λ − β_j)(µ − α_j)|, for which no analytic solution is known in general. Some shift generation approaches: generalized Bagby points [Levenberg/Reichel 1993]; an adaption of Penzl's cheap heuristic approach is available [Penzl 1999, Li/Truhar 2008]: approximate Λ(A), Λ(B) by a small number of Ritz values w.r.t. A, A^{−1}, B, B^{−1} via Arnoldi; just taking these Ritz values alone also works well quite often. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 20/52

59 Solving Large-Scale Sylvester and Lyapunov Equations LR-ADI Derivation Disadvantages of Low-Rank ADI as of 2012: 1. No efficient stopping criteria: the difference in iterates / norm of the added columns per step is not reliable and often stops too late; the residual is a full dense matrix and cannot be calculated as such. 2. Requires complex arithmetic for real coefficients when complex shifts are used. 3. Expensive (only semi-automatic) set-up phase to precompute the ADI shifts. Will show: none of these disadvantages exists as of today ⟹ speed-ups of old vs. new LR-ADI can be up to 20! Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 21/52

60 Solving Large-Scale Sylvester and Lyapunov Equations Low-Rank Structure of the Residual Low-rank structure of S_k in LR-ADI [B./Kürschner 2013]: S_k := A(Z_k D_k Y_k^H) − (Z_k D_k Y_k^H)B − FG^T = Q_k U_k^H ∈ C^{n×m} (a large, dense n×m matrix), with Q_k = Q_{k−1} + (β_k − α_k)V_k ∈ C^{n×r}, U_k = U_{k−1} − (β_k − α_k)W_k ∈ C^{m×r} ⟹ rank(S_k) ≤ r. Moreover, with Q_0 = F, U_0 = G it holds for the LR-ADI iterates that V_k = (A − β_k I_n)^{−1} Q_{k−1}, W_k = (B − α_k I_m)^{−H} U_{k−1}, k ≥ 1. This holds similarly in LR-ADI for Lyapunov equations. [B./Kürschner/Saak 2013] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 22/52

61 Solving Large-Scale Sylvester and Lyapunov Equations Low-rank Sylvester ADI reloaded [B./Kürschner 2013] Algorithm 2: Reformulated factored ADI iteration (fadi 2.0) Input: Matrices defining AX − XB = FG^T, shift parameters {α_1, ..., α_{kmax}}, {β_1, ..., β_{kmax}}, tolerance τ. Output: Z, Y, D such that ZDY^H ≈ X. 1: Q_0 = F, U_0 = G, k = 1. 2: while ‖Q_{k−1} U_{k−1}^H‖ ≥ τ ‖FG^T‖ do 3: γ_k = β_k − α_k. 4: V_k = (A − β_k I_n)^{−1} Q_{k−1}, W_k = (B − α_k I_m)^{−H} U_{k−1}. 5: Q_k = Q_{k−1} + γ_k V_k, U_k = U_{k−1} − γ_k W_k. 6: Update the solution factors Z_k = [Z_{k−1}, V_k], Y_k = [Y_{k−1}, W_k], D_k = diag(D_{k−1}, γ_k I_r). 7: k = k + 1. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 23/52
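A compact real-arithmetic sketch of Algorithm 2 (Python/NumPy; dense LU solves stand in for sparse factorizations, the sign conventions follow the reconstruction above, shift selection is a crude heuristic, and complex shifts/realification are not handled):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def fadi(A, B, F, G, alpha, beta, tol=1e-10):
    # Reformulated factored ADI (fadi 2.0) for A X - X B = F G^T, real shifts only.
    n, r = F.shape
    m = B.shape[0]
    Q, U = F.copy(), G.copy()
    Zb, Yb, d = [], [], []
    nrm0 = np.linalg.norm(F @ G.T)
    for a, b in zip(alpha, beta):
        V = lu_solve(lu_factor(A - b * np.eye(n)), Q)        # (A - beta_k I)^{-1} Q_{k-1}
        W = lu_solve(lu_factor((B - a * np.eye(m)).T), U)    # (B - alpha_k I)^{-T} U_{k-1}
        g = b - a
        Q = Q + g * V                                        # residual factor updates
        U = U - g * W
        Zb.append(V); Yb.append(W); d.extend([g] * r)
        if np.linalg.norm(Q @ U.T) <= tol * nrm0:            # cheap residual-based test
            break
    return np.hstack(Zb), np.diag(d), np.hstack(Yb)

# Tiny test: diagonal A, B with disjoint spectra and crude log-spaced shifts.
rng = np.random.default_rng(7)
n, m, r = 200, 150, 2
A = -np.diag(rng.uniform(1.0, 100.0, n))
B = np.diag(rng.uniform(1.0, 100.0, m))
F, G = rng.standard_normal((n, r)), rng.standard_normal((m, r))
alpha = list(-np.geomspace(1.0, 100.0, 12))   # rough approximations of Lambda(A)
beta = list(np.geomspace(1.0, 100.0, 12))     # rough approximations of Lambda(B)

Z, D, Y = fadi(A, B, F, G, alpha, beta)
X = Z @ D @ Y.T
print(np.linalg.norm(A @ X - X @ B - F @ G.T) / np.linalg.norm(F @ G.T))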

62 Solving Large-Scale Sylvester and Lyapunov Equations Computing the Residual Norm The low-rank factors Q_k, U_k of the residual S_k are now an integral part of the iteration. This allows a cheap computation of ‖S_k‖_2 via, e.g., ‖S_k‖_2 = ‖Q_k U_k^H‖_2 = ‖U_k R_k^H‖_2, where Q_k = H_k R_k, H_k^H H_k = I_r, which requires a thin QR factorization of an n×r matrix and a 2-norm computation of a small matrix. Much cheaper than the traditional approach: apply Lanczos to S_k^H S_k to get ‖S_k‖_2 = sqrt(λ_max(S_k^H S_k)), which requires several matrix-vector products with A, B (and A^T, B^T) and additional scalar products. Note: In the Lyapunov case, the residual evaluation is almost free as no QR factorization is required. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 24/52
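The corresponding residual-norm evaluation in isolation (a small sketch; the factor sizes are arbitrary):

import numpy as np
from scipy.linalg import qr

# Cheap evaluation of ||S_k||_2 = ||Q_k U_k^H||_2 from the low-rank factors:
# a thin QR of the tall factor reduces the problem to a small matrix.
rng = np.random.default_rng(8)
n, m, r = 100000, 80000, 4
Qk = rng.standard_normal((n, r))
Uk = rng.standard_normal((m, r))

_, Rk = qr(Qk, mode="economic")              # Qk = Hk Rk with Hk^H Hk = I_r
res_norm = np.linalg.norm(Uk @ Rk.T, 2)      # equals ||Qk Uk^H||_2
print(res_norm)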

63 Solving Large-Scale Sylvester and Lyapunov Equations Low-Rank Structure of the Residual Example I: 5-point discretizations of the operator [Jbilou 2006] L(x) := Δx − v · ∇x − f(ξ_1, ξ_2)x on Ω = (0, 1)² for x = x(ξ_1, ξ_2), homogeneous Dirichlet BC. A: 150 grid points, v = [e^{ξ_1+ξ_2}, 1000ξ_2], f(ξ_1, ξ_2) = ξ_1; B: 120 grid points, v = [sin(ξ_1 + 2ξ_2), 20e^{ξ_1+ξ_2}], f(ξ_1, ξ_2) = ξ_1 ξ_2. n = 22500, m = 14400, F, G random with r = 4 columns. Shifts: 10 Ritz values each w.r.t. A, A^{−1}, B, B^{−1} yield 20 α-shifts, 20 β-shifts. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 25/52

64 Solving Large-Scale Sylvester and Lyapunov Equations Low-Rank Structure of the Residual Example I: 5-point discretizations of the operator [Jbilou 2006] L(x) := Δx − v · ∇x − f(ξ_1, ξ_2)x on Ω = (0, 1)² for x = x(ξ_1, ξ_2), homogeneous Dirichlet BC. [Plots: residual norm ‖S_k‖ vs. iteration k for fadi (… sec.) and fadi 2.0 (61.1 sec.); share of computation time spent on ‖S_k‖: ≈ 80.5% for the traditional fadi vs. ≈ 0.005% for fadi 2.0.] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 25/52

65 Solving Large-Scale Sylvester and Lyapunov Equations Realification of LR-ADI We have real matrices A, B, F, G defining the Sylvester equation. If Λ(A), Λ(B) ⊄ R, some α_k, β_k might be complex ⟹ complex operations in LR-ADI ⟹ Z, D, Y complex. To generate real solution factors we need that {α_k}, {β_k} form proper and suitably ordered sets of shifts: If α_k ∈ C∖R, then α_{k+1} = ᾱ_k and either β_k, β_{k+1} = β̄_k ∈ C∖R or β_k, β_{k+1} ∈ R. If β_k ∈ C∖R, then β_{k+1} = β̄_k and either α_k, α_{k+1} = ᾱ_k ∈ C∖R or α_k, α_{k+1} ∈ R. This is no restriction, since ADI is independent of the order of the shifts; it can be achieved by a simple permutation of the sets of shifts. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 26/52

66 Solving Large-Scale Sylvester and Lyapunov Equations Realification of LR-ADI Relation of Iterates [B./Kürschner 2013] If α k, α k+1 = α k C and β k, β k+1 = β k C then V k+1 = V k + β k γ k Im (β k ) Im (V k), W k+1 = W k + β k γ k Im (α k ) Im (W k). Linear systems with A α k I n, B β k I m not required, low-rank factors always augmented by real data: Z k+1 = [ Z k 1, [Re (V k ), Im (V k )] R n 2r], Y k+1 = [ Y k 1, [Re (W k ), Im (W k )] R m 2r], D k+1 = diag ( D k 1, [ ] R 2r 2r), similar relations for residual factors Q k+1 R n r, U k+1 R m r and for the other shift sequences. (Generalization of Lyapunov case as in [B./Kürschner/Saak 2012/13].) Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 27/52

67 Solving Large-Scale Sylvester and Lyapunov Equations Realification of LR-ADI Example I, cont.: Shifts: 10 Ritz values w.r.t. A, A^{−1}, B, B^{−1} yield 20 α-shifts (4 real, 8 complex pairs), 20 β-shifts (12 real, 4 complex pairs). [Plots: ‖S_k‖ vs. iteration k for fadi 2.0 (61.1 sec.) and fadi R 2.0 (36.9 sec.); sparsity pattern spy(D_k) of fadi R.] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 28/52

69 Solving Large-Scale Sylvester and Lyapunov Equations Self-generating Shifts Problems with heuristic shifts: the low-rank structure of the solution is not embraced; there are no known rules for the numbers k_max of α-/β-shifts and of Ritz values (i.e., Arnoldi steps) k_A^+, k_A^−, k_B^+, k_B^− w.r.t. A, A^{−1}, B, B^{−1}; the Arnoldi process brings additional costs. Observation: Even small changes in these numbers can lead to significantly different convergence results. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 29/52

70 Solving Large-Scale Sylvester and Lyapunov Equations Self-generating Shifts A cheap but powerful way out [Hund 2012, B./Kürschner/Saak 2013]: 1. Choose initial shifts, e.g., Q̃ = orth(F), {α} = Λ(Q̃^T A Q̃), Ũ = orth(G), {β} = Λ(Ũ^T B Ũ). 2. If these are depleted during ADI, compute new shifts via {α_new} = Λ(Q̃^T A Q̃), {β_new} = Λ(Ũ^T B Ũ), where Variant 1: Q̃ = orth(Re(V), Im(V)), Ũ = orth(Re(W), Im(W)) (iterates), Variant 2: Q̃ = orth(Q), Ũ = orth(U) (residual factors). Works surprisingly well although no setup parameters are needed. Theoretical foundation is current research. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 30/52
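A sketch of the shift update of Variant 2 (Python/SciPy; the function name new_shifts is ours, and Q, U are the current residual factors, initialized with F and G as in step 1):

import numpy as np
from scipy.linalg import orth

def new_shifts(A, B, Q, U):
    # Project A and B onto span(Q), span(U) and take the Ritz values as shifts.
    Qo, Uo = orth(Q), orth(U)
    alpha = np.linalg.eigvals(Qo.T @ A @ Qo)   # approximations of Lambda(A)
    beta = np.linalg.eigvals(Uo.T @ B @ Uo)    # approximations of Lambda(B)
    return alpha, beta

# Initially: alpha, beta = new_shifts(A, B, F, G); later calls reuse the
# current residual factors Q, U whenever the previous shifts are depleted.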

71 Solving Large-Scale Sylvester and Lyapunov Equations Self-generating Shifts Example I, cont.: [Plot: ‖S_k‖ vs. iteration k down to tol; Ritz-value shifts (36.9 sec.) vs. self-generating shifts Variant 1 (20.56 sec.) and Variant 2 (17.24 sec.).] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 31/52

73 Projection-Based Lyapunov Solvers For the Lyapunov equation 0 = AX + XA^T + BB^T, projection-based methods with A + A^T < 0: 1. Compute an orthonormal basis range(Z), Z ∈ R^{n×r}, for a subspace 𝒵 ⊂ R^n, dim 𝒵 = r. 2. Set Â := Z^T AZ, B̂ := Z^T B. 3. Solve the small-size Lyapunov equation Â X̂ + X̂ Â^T + B̂ B̂^T = 0. 4. Use X ≈ Z X̂ Z^T. Examples: Krylov subspace methods, i.e., for m = 1: 𝒵 = K(A, B, r) = span{B, AB, A²B, ..., A^{r−1}B} [Saad 1990, Jaimoukha/Kasenally 1994, Jbilou]. Extended Krylov subspace method (EKSM) [Simoncini 2007]: 𝒵 = K(A, B, r) + K(A^{−1}, B, r). Rational Krylov subspace methods (RKSM) [Druskin/Simoncini 2011]. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 32/52
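A minimal dense sketch of the projection framework with a plain block Krylov basis (Python/SciPy; the test matrix is constructed so that A + A^T < 0 plausibly holds, r and the data are arbitrary, and no basis enrichment as in EKSM/RKSM is attempted):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, orth

rng = np.random.default_rng(9)
n, p, r = 300, 2, 10
A = -np.diag(np.linspace(1.0, 50.0, n)) + np.triu(rng.standard_normal((n, n)), 1) / (2 * n)
B = rng.standard_normal((n, p))

blocks, V = [B], B
for _ in range(r - 1):
    V = A @ V
    V = V / np.linalg.norm(V)            # rescale for numerical robustness (same span)
    blocks.append(V)
Z = orth(np.hstack(blocks))              # orthonormal basis of the block Krylov space

Ah, Bh = Z.T @ A @ Z, Z.T @ B
Xh = solve_continuous_lyapunov(Ah, -Bh @ Bh.T)    # small projected equation
X = Z @ Xh @ Z.T
print(np.linalg.norm(A @ X + X @ A.T + B @ B.T))  # Lyapunov residual of the approximation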

80 Solving Large-Scale Sylvester and Lyapunov Equations The New LR-ADI Applied to Lyapunov Equations Comparison of the new LR-ADI and EKSM Both methods require a system solve and several matvecs per iteration. EKSM requires only one (or two in the presence of a mass matrix) factorizations in total; LR-ADI needs a new factorization for each new shift. EKSM requires a deflation strategy for MIMO systems; the MIMO case requires no extra treatment in LR-ADI. Both methods have low-rank expressions for the residual, enabling residual-based stopping criteria. Both methods can be run fully automatically (LR-ADI requires self-generating shifts for this). EKSM requires dissipativity of A, i.e., A + A^T < 0, to guarantee convergence; ADI only needs Λ(A) ⊂ C^−. If it converges, EKSM is usually faster for SISO systems with A = A^T < 0. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 33/52

83 The New LR-ADI Applied to Lyapunov Equations Example II: an ocean circulation problem [Van Gijzen et al. 1998] FEM discretization of a simple 3D ocean circulation model (barotropic, constant depth); stiffness matrix A with n = 42,249; choose an artificial constant term B = rand(n,5). [Plot: convergence history ‖R‖/‖B^T B‖ vs. coldim(Z), down to TOL, for LR-ADI with adaptive shifts vs. EKSM.] CPU times: LR-ADI 110 sec., EKSM 135 sec. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 34/52

86 The New LR-ADI Applied to Lyapunov Equations Example III: the triple-chain oscillator [Truhar/Veselic 2009] Standard vibrational system: second-order system with n = 21,001, linearization n = 42,002. Again, an artificial constant term: B = rand(n,5). [Plot: convergence history ‖R‖/‖B^T B‖ vs. coldim(Z), down to TOL, for LR-ADI with adaptive shifts vs. EKSM.] Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 35/52

87 Solving Large-Scale Sylvester and Lyapunov Equations Summary & Outlook Numerical enhancements of low-rank ADI for large Sylvester/Lyapunov equations: 1. low-rank residuals, reformulated implementation, 2. computation of real low-rank factors in the presence of complex shifts, 3. self-generating shift strategies (quantification in progress). Recall the example: from … sec. down to … sec., an acceleration by a factor of almost 20. A generalized version enables the derivation of low-rank solvers for various generalized Sylvester equations. Ongoing work: Apply LR-ADI in Newton methods for algebraic Riccati equations N(X) = AX + XB + FG^T − XST^T X = 0, D(X) = AXA^T − EXE^T + SS^T + A^T XF(I_r + F^T XF)^{−1}F^T XA = 0. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 36/52

88 Overview This part: joint work with Tobias Breiten (KFU Graz, Austria) 1 Introduction 2 Applications 3 Solving Large-Scale Sylvester and Lyapunov Equations 4 Solving Large-Scale Lyapunov-plus-Positive Equations Application: Balanced Truncation for Bilinear Systems Existence of Low-Rank Approximations Generalized ADI Iteration Bilinear EKSM Tensorized Krylov Subspace Methods Comparison of Methods 5 References Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 37/52

91 Solving Large-Scale Lyapunov-plus-Positive Equations Application: Balanced Truncation for Bilinear Systems The concept of balanced truncation can be generalized to the case of bilinear systems, where we need the solutions of the Lyapunov-plus-positive equations AP + PA^T + Σ_{i=1}^m N_i P N_i^T + BB^T = 0, A^T Q + QA + Σ_{i=1}^m N_i^T Q N_i + C^T C = 0. Due to its approximation quality, balanced truncation is the method of choice for model reduction of medium-size bilinear systems. For stationary iterative solvers, see [Damm 2008], extended to low-rank solutions recently by [Szyld/Shank/Simoncini 2014]. Further applications: Analysis and model reduction for linear stochastic control systems driven by Wiener noise [B./Damm 2011], Lévy processes [B./Redmann 2011]. Model reduction of linear parameter-varying (LPV) systems using the bilinearization approach [B./Breiten 2011]. Model reduction for Fokker-Planck equations [Hartmann et al. 2013]. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 39/52

93 Solving Large-Scale Lyapunov-plus-Positive Equations Some basic facts and assumptions AX + XA^T + Σ_{i=1}^m N_i X N_i^T + BB^T = 0. (3) Need a positive semi-definite symmetric solution X. As discussed before, the solution theory for the Lyapunov-plus-positive equation is more involved than in the standard Lyapunov case. Here, existence and uniqueness of a positive semi-definite solution X = X^T is assumed. Max Planck Institute Magdeburg P. Benner, Large-Scale Matrix Equations 40/52
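For small dense problems, the regular case with ρ(L^{−1}P) < 1 (condition (e) above) can be solved by the stationary iteration mentioned with [Damm 2008]: move the positive part to the right-hand side and solve a standard Lyapunov equation per step. A hedged sketch (the data are random and scaled so that the fixed-point map is very likely contractive; the low-rank variants cited on the previous slides are not reproduced):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(10)
n, m = 60, 2
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Ns = [0.3 / np.sqrt(n) * rng.standard_normal((n, n)) for _ in range(m)]
B = rng.standard_normal((n, 1))

X = np.zeros((n, n))
for _ in range(50):
    rhs = -B @ B.T - sum(N @ X @ N.T for N in Ns)      # positive part on the RHS
    X_new = solve_continuous_lyapunov(A, rhs)           # standard Lyapunov solve
    if np.linalg.norm(X_new - X) <= 1e-10 * np.linalg.norm(X_new):
        X = X_new
        break
    X = X_new

res = A @ X + X @ A.T + sum(N @ X @ N.T for N in Ns) + B @ B.T
print(np.linalg.norm(res))    # residual of the Lyapunov-plus-positive equation (3)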


More information

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems Max Planck Institute Magdeburg Preprints Mian Ilyas Ahmad Ulrike Baur Peter Benner Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Computational methods for large-scale matrix equations and application to PDEs. V. Simoncini

Computational methods for large-scale matrix equations and application to PDEs. V. Simoncini Computational methods for large-scale matrix equations and application to PDEs V. Simoncini Dipartimento di Matematica, Università di Bologna valeria.simoncini@unibo.it 1 Some matrix equations Sylvester

More information

A POD Projection Method for Large-Scale Algebraic Riccati Equations

A POD Projection Method for Large-Scale Algebraic Riccati Equations A POD Projection Method for Large-Scale Algebraic Riccati Equations Boris Kramer Department of Mathematics Virginia Tech Blacksburg, VA 24061-0123 Email: bokr@vt.edu John R. Singler Department of Mathematics

More information

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner.

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner. Passivity Preserving Model Reduction for Large-Scale Systems Peter Benner Mathematik in Industrie und Technik Fakultät für Mathematik Sonderforschungsbereich 393 S N benner@mathematik.tu-chemnitz.de SIMULATION

More information

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012 University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR-202-07 April 202 LYAPUNOV INVERSE ITERATION FOR COMPUTING A FEW RIGHTMOST

More information

Model Reduction for Unstable Systems

Model Reduction for Unstable Systems Model Reduction for Unstable Systems Klajdi Sinani Virginia Tech klajdi@vt.edu Advisor: Serkan Gugercin October 22, 2015 (VT) SIAM October 22, 2015 1 / 26 Overview 1 Introduction 2 Interpolatory Model

More information

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini

Adaptive rational Krylov subspaces for large-scale dynamical systems. V. Simoncini Adaptive rational Krylov subspaces for large-scale dynamical systems V. Simoncini Dipartimento di Matematica, Università di Bologna valeria@dm.unibo.it joint work with Vladimir Druskin, Schlumberger Doll

More information

Stability and Inertia Theorems for Generalized Lyapunov Equations

Stability and Inertia Theorems for Generalized Lyapunov Equations Published in Linear Algebra and its Applications, 355(1-3, 2002, pp. 297-314. Stability and Inertia Theorems for Generalized Lyapunov Equations Tatjana Stykel Abstract We study generalized Lyapunov equations

More information

Exploiting off-diagonal rank structures in the solution of linear matrix equations

Exploiting off-diagonal rank structures in the solution of linear matrix equations Stefano Massei Exploiting off-diagonal rank structures in the solution of linear matrix equations Based on joint works with D. Kressner (EPFL), M. Mazza (IPP of Munich), D. Palitta (IDCTS of Magdeburg)

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

FEL3210 Multivariable Feedback Control

FEL3210 Multivariable Feedback Control FEL3210 Multivariable Feedback Control Lecture 8: Youla parametrization, LMIs, Model Reduction and Summary [Ch. 11-12] Elling W. Jacobsen, Automatic Control Lab, KTH Lecture 8: Youla, LMIs, Model Reduction

More information

Fast iterative solvers for fractional differential equations

Fast iterative solvers for fractional differential equations Fast iterative solvers for fractional differential equations Tobias Breiten a, Valeria Simoncini b, Martin Stoll c a Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstr 36,

More information

Stabilization and Passivity-Based Control

Stabilization and Passivity-Based Control DISC Systems and Control Theory of Nonlinear Systems, 2010 1 Stabilization and Passivity-Based Control Lecture 8 Nonlinear Dynamical Control Systems, Chapter 10, plus handout from R. Sepulchre, Constructive

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Model reduction for linear systems by balancing

Model reduction for linear systems by balancing Model reduction for linear systems by balancing Bart Besselink Jan C. Willems Center for Systems and Control Johann Bernoulli Institute for Mathematics and Computer Science University of Groningen, Groningen,

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR March 2011

University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR March 2011 University of Maryland Department of Computer Science TR-4975 University of Maryland Institute for Advanced Computer Studies TR-2011-04 March 2011 LYAPUNOV INVERSE ITERATION FOR IDENTIFYING HOPF BIFURCATIONS

More information

Large scale continuation using a block eigensolver

Large scale continuation using a block eigensolver Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Large scale continuation using a block eigensolver Z. Castillo RT 2006-03

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Solving Large Scale Lyapunov Equations

Solving Large Scale Lyapunov Equations Solving Large Scale Lyapunov Equations D.C. Sorensen Thanks: Mark Embree John Sabino Kai Sun NLA Seminar (CAAM 651) IWASEP VI, Penn State U., 22 May 2006 The Lyapunov Equation AP + PA T + BB T = 0 where

More information

H 2 optimal model reduction - Wilson s conditions for the cross-gramian

H 2 optimal model reduction - Wilson s conditions for the cross-gramian H 2 optimal model reduction - Wilson s conditions for the cross-gramian Ha Binh Minh a, Carles Batlle b a School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Dai

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study -preconditioned for large - A case study Matthias Bollhöfer, André Eppler TU Braunschweig Institute Computational Mathematics Syrene-MOR Workshop, TU Hamburg October 30, 2008 2 / 20 Outline 1 2 Overview

More information

arxiv: v1 [math.na] 4 Apr 2018

arxiv: v1 [math.na] 4 Apr 2018 Efficient Solution of Large-Scale Algebraic Riccati Equations Associated with Index-2 DAEs via the Inexact Low-Rank Newton-ADI Method Peter Benner a,d, Matthias Heinkenschloss b,1,, Jens Saak a, Heiko

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

3 Gramians and Balanced Realizations

3 Gramians and Balanced Realizations 3 Gramians and Balanced Realizations In this lecture, we use an optimization approach to find suitable realizations for truncation and singular perturbation of G. It turns out that the recommended realizations

More information

Computing Transfer Function Dominant Poles of Large Second-Order Systems

Computing Transfer Function Dominant Poles of Large Second-Order Systems Computing Transfer Function Dominant Poles of Large Second-Order Systems Joost Rommes Mathematical Institute Utrecht University rommes@math.uu.nl http://www.math.uu.nl/people/rommes joint work with Nelson

More information

MPC/LQG for infinite-dimensional systems using time-invariant linearizations

MPC/LQG for infinite-dimensional systems using time-invariant linearizations MPC/LQG for infinite-dimensional systems using time-invariant linearizations Peter Benner 1 and Sabine Hein 2 1 Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg,

More information

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES

DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES DESIGN OF OBSERVERS FOR SYSTEMS WITH SLOW AND FAST MODES by HEONJONG YOO A thesis submitted to the Graduate School-New Brunswick Rutgers, The State University of New Jersey In partial fulfillment of the

More information

Reduced Basis Method for Parametric

Reduced Basis Method for Parametric 1/24 Reduced Basis Method for Parametric H 2 -Optimal Control Problems NUMOC2017 Andreas Schmidt, Bernard Haasdonk Institute for Applied Analysis and Numerical Simulation, University of Stuttgart andreas.schmidt@mathematik.uni-stuttgart.de

More information

1 Controllability and Observability

1 Controllability and Observability 1 Controllability and Observability 1.1 Linear Time-Invariant (LTI) Systems State-space: Dimensions: Notation Transfer function: ẋ = Ax+Bu, x() = x, y = Cx+Du. x R n, u R m, y R p. Note that H(s) is always

More information

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México

Nonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México Nonlinear Observers Jaime A. Moreno JMorenoP@ii.unam.mx Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México XVI Congreso Latinoamericano de Control Automático October

More information

Numerical Solution of Descriptor Riccati Equations for Linearized Navier-Stokes Control Problems

Numerical Solution of Descriptor Riccati Equations for Linearized Navier-Stokes Control Problems MAX PLANCK INSTITUTE Control and Optimization with Differential-Algebraic Constraints Banff, October 24 29, 2010 Numerical Solution of Descriptor Riccati Equations for Linearized Navier-Stokes Control

More information

Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation

Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation , July 1-3, 15, London, U.K. Model Order Reduction of Continuous LI Large Descriptor System Using LRCF-ADI and Square Root Balanced runcation Mehrab Hossain Likhon, Shamsil Arifeen, and Mohammad Sahadet

More information

Computational methods for large-scale matrix equations: recent advances and applications. V. Simoncini

Computational methods for large-scale matrix equations: recent advances and applications. V. Simoncini Computational methods for large-scale matrix equations: recent advances and applications V. Simoncini Dipartimento di Matematica Alma Mater Studiorum - Università di Bologna valeria.simoncini@unibo.it

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science : Dynamic Systems Spring 2011

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science : Dynamic Systems Spring 2011 MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.4: Dynamic Systems Spring Homework Solutions Exercise 3. a) We are given the single input LTI system: [

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

On the -Sylvester Equation AX ± X B = C

On the -Sylvester Equation AX ± X B = C Department of Applied Mathematics National Chiao Tung University, Taiwan A joint work with E. K.-W. Chu and W.-W. Lin Oct., 20, 2008 Outline 1 Introduction 2 -Sylvester Equation 3 Numerical Examples 4

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems

Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems Finding Rightmost Eigenvalues of Large, Sparse, Nonsymmetric Parameterized Eigenvalue Problems AMSC 663-664 Final Report Minghao Wu AMSC Program mwu@math.umd.edu Dr. Howard Elman Department of Computer

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Taylor expansions for the HJB equation associated with a bilinear control problem

Taylor expansions for the HJB equation associated with a bilinear control problem Taylor expansions for the HJB equation associated with a bilinear control problem Tobias Breiten, Karl Kunisch and Laurent Pfeiffer University of Graz, Austria Rome, June 217 Motivation dragged Brownian

More information

Towards high performance IRKA on hybrid CPU-GPU systems

Towards high performance IRKA on hybrid CPU-GPU systems Towards high performance IRKA on hybrid CPU-GPU systems Jens Saak in collaboration with Georg Pauer (OVGU/MPI Magdeburg) Kapil Ahuja, Ruchir Garg (IIT Indore) Hartwig Anzt, Jack Dongarra (ICL Uni Tennessee

More information

Daniel B. Szyld and Jieyong Zhou. Report June 2016

Daniel B. Szyld and Jieyong Zhou. Report June 2016 Numerical solution of singular semi-stable Lyapunov equations Daniel B. Szyld and Jieyong Zhou Report 16-06-02 June 2016 Department of Mathematics Temple University Philadelphia, PA 19122 This report is

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 15-05 Induced Dimension Reduction method for solving linear matrix equations R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft Institute of Applied

More information

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Fakultät für Mathematik TU Chemnitz, Germany Peter Benner benner@mathematik.tu-chemnitz.de joint work with Heike Faßbender

More information

Introduction to multiscale modeling and simulation. Explicit methods for ODEs : forward Euler. y n+1 = y n + tf(y n ) dy dt = f(y), y(0) = y 0

Introduction to multiscale modeling and simulation. Explicit methods for ODEs : forward Euler. y n+1 = y n + tf(y n ) dy dt = f(y), y(0) = y 0 Introduction to multiscale modeling and simulation Lecture 5 Numerical methods for ODEs, SDEs and PDEs The need for multiscale methods Two generic frameworks for multiscale computation Explicit methods

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

On the Stability of the Best Reply Map for Noncooperative Differential Games

On the Stability of the Best Reply Map for Noncooperative Differential Games On the Stability of the Best Reply Map for Noncooperative Differential Games Alberto Bressan and Zipeng Wang Department of Mathematics, Penn State University, University Park, PA, 68, USA DPMMS, University

More information

1 Sylvester equations

1 Sylvester equations 1 Sylvester equations Notes for 2016-11-02 The Sylvester equation (or the special case of the Lyapunov equation) is a matrix equation of the form AX + XB = C where A R m m, B R n n, B R m n, are known,

More information

Introduction to Nonlinear Control Lecture # 4 Passivity

Introduction to Nonlinear Control Lecture # 4 Passivity p. 1/6 Introduction to Nonlinear Control Lecture # 4 Passivity È p. 2/6 Memoryless Functions ¹ y È Ý Ù È È È È u (b) µ power inflow = uy Resistor is passive if uy 0 p. 3/6 y y y u u u (a) (b) (c) Passive

More information

On ADI Method for Sylvester Equations

On ADI Method for Sylvester Equations On ADI Method for Sylvester Equations Ninoslav Truhar Ren-Cang Li Technical Report 2008-02 http://www.uta.edu/math/preprint/ On ADI Method for Sylvester Equations Ninoslav Truhar Ren-Cang Li February 8,

More information

Model reduction of large-scale systems by least squares

Model reduction of large-scale systems by least squares Model reduction of large-scale systems by least squares Serkan Gugercin Department of Mathematics, Virginia Tech, Blacksburg, VA, USA gugercin@mathvtedu Athanasios C Antoulas Department of Electrical and

More information

On Linear-Quadratic Control Theory of Implicit Difference Equations

On Linear-Quadratic Control Theory of Implicit Difference Equations On Linear-Quadratic Control Theory of Implicit Difference Equations Daniel Bankmann Technische Universität Berlin 10. Elgersburg Workshop 2016 February 9, 2016 D. Bankmann (TU Berlin) Control of IDEs February

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop Perturbation theory for eigenvalues of Hermitian pencils Christian Mehl Institut für Mathematik TU Berlin, Germany 9th Elgersburg Workshop Elgersburg, March 3, 2014 joint work with Shreemayee Bora, Michael

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

Model Reduction for Linear Dynamical Systems

Model Reduction for Linear Dynamical Systems Summer School on Numerical Linear Algebra for Dynamical and High-Dimensional Problems Trogir, October 10 15, 2011 Model Reduction for Linear Dynamical Systems Peter Benner Max Planck Institute for Dynamics

More information

MODEL-order reduction is emerging as an effective

MODEL-order reduction is emerging as an effective IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, VOL. 52, NO. 5, MAY 2005 975 Model-Order Reduction by Dominant Subspace Projection: Error Bound, Subspace Computation, and Circuit Applications

More information

Fast iterative solvers for fractional differential equations

Fast iterative solvers for fractional differential equations Fast iterative solvers for fractional differential equations Tobias Breiten a, Valeria Simoncini b, Martin Stoll c a Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstr 36,

More information

On a quadratic matrix equation associated with an M-matrix

On a quadratic matrix equation associated with an M-matrix Article Submitted to IMA Journal of Numerical Analysis On a quadratic matrix equation associated with an M-matrix Chun-Hua Guo Department of Mathematics and Statistics, University of Regina, Regina, SK

More information

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

Model reduction via tangential interpolation

Model reduction via tangential interpolation Model reduction via tangential interpolation K. Gallivan, A. Vandendorpe and P. Van Dooren May 14, 2002 1 Introduction Although most of the theory presented in this paper holds for both continuous-time

More information

Lecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.

Lecture 4. Chapter 4: Lyapunov Stability. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University. Lecture 4 Chapter 4: Lyapunov Stability Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 4 p. 1/86 Autonomous Systems Consider the autonomous system ẋ

More information

On Solving Large Algebraic. Riccati Matrix Equations

On Solving Large Algebraic. Riccati Matrix Equations International Mathematical Forum, 5, 2010, no. 33, 1637-1644 On Solving Large Algebraic Riccati Matrix Equations Amer Kaabi Department of Basic Science Khoramshahr Marine Science and Technology University

More information

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Lars Eldén Department of Mathematics Linköping University, Sweden Joint work with Valeria Simoncini February 21 Lars Eldén

More information