Order reduction numerical methods for the algebraic Riccati equation. V. Simoncini


1 Order reduction numerical methods for the algebraic Riccati equation

V. Simoncini
Dipartimento di Matematica, Alma Mater Studiorum - Università di Bologna
valeria.simoncini@unibo.it

2 The problem

Find X ∈ R^{n×n} such that

    A^T X + X A - X BB^T X + C^T C = 0

with A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{s×n}, p, s = O(1).

Rich literature on analysis, applications and numerics:
Lancaster & Rodman 1995, Bini, Iannazzo & Meini 2012, Mehrmann et al.

We focus on the large-scale case: n ≫ 1000.

Different strategies:
- (Inexact) Kleinman iteration (Newton-type method)
- Projection methods
- Invariant subspace iteration
- (Sparse) multilevel methods
- ...

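At small scale the CARE above can be solved directly with SciPy's dense solver; a minimal sketch (the random stable A and random low-rank B, C below are illustrative assumptions, not data from the slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
n, p, s = 50, 2, 2
A = rng.standard_normal((n, n)) - n * np.eye(n)   # shifted to make A stable
B = rng.standard_normal((n, p))
C = rng.standard_normal((s, n))

# solve_continuous_are solves A^T X + X A - X B R^{-1} B^T X + Q = 0;
# here Q = C^T C and R = I_p, matching the CARE above
X = solve_continuous_are(A, B, C.T @ C, np.eye(p))

R = A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C
print(np.linalg.norm(R) / np.linalg.norm(X))      # small relative residual
```

Dense solvers like this are O(n^3) and store X in full, which is exactly what becomes unaffordable for n ≫ 1000.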

4 Newton-Kleinman iteration

Assume A stable. Compute a sequence {X_k} with X_k → X as k → ∞:

    (A^T - X_k BB^T) X_{k+1} + X_{k+1} (A - BB^T X_k) + C^T C + X_k BB^T X_k = 0

1: Given X_0 ∈ R^{n×n} such that X_0 = X_0^T and A - BB^T X_0 is stable
2: For k = 0, 1, ..., until convergence
3:     Set A_k = A - BB^T X_k
4:     Set C_k^T = [X_k B, C^T]
5:     Solve A_k^T X_{k+1} + X_{k+1} A_k + C_k^T C_k = 0

Critical issues:
- The full matrix X_k cannot be stored (sparse or low-rank approximation)
- A computable stopping criterion is needed
- Each iteration k requires the solution of a Lyapunov equation
  (Benner, Feitzinger, Hylla, Saak, Sachs, ...)

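At small scale the loop above is a handful of lines around SciPy's dense Lyapunov solver; a sketch under illustrative assumptions (random stable A, so X_0 = 0 is an admissible starting guess):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, p, s = 40, 2, 2
A = rng.standard_normal((n, n)) - n * np.eye(n)   # built stable, so X_0 = 0 is admissible
B = rng.standard_normal((n, p))
C = rng.standard_normal((s, n))

X = np.zeros((n, n))
for k in range(50):
    Ak = A - B @ (B.T @ X)                 # step 3
    CkT = np.hstack([X @ B, C.T])          # step 4: C_k^T = [X_k B, C^T]
    # step 5: A_k^T X_{k+1} + X_{k+1} A_k + C_k^T C_k = 0
    X_new = solve_continuous_lyapunov(Ak.T, -CkT @ CkT.T)
    done = np.linalg.norm(X_new - X) <= 1e-10 * np.linalg.norm(X_new)
    X = X_new
    if done:
        break

R = A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C
print(k, np.linalg.norm(R) / np.linalg.norm(C.T @ C))
```

The dense Lyapunov solve is the expensive step; at large scale it is replaced by low-rank (ADI or Krylov) Lyapunov solvers, which is the point of the references above.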

7 Galerkin projection method for the Riccati equation

Given an orthonormal basis V_k of an approximation space, determine an
approximation X_k = V_k Y_k V_k^T to the Riccati solution matrix by
orthogonal projection.

Galerkin condition: residual orthogonal to the approximation space,

    V_k^T (A^T X_k + X_k A - X_k BB^T X_k + C^T C) V_k = 0,

giving the reduced Riccati equation

    (V_k^T A^T V_k) Y + Y (V_k^T A V_k) - Y (V_k^T BB^T V_k) Y + (V_k^T C^T)(C V_k) = 0.

Y_k is the stabilizing solution (Heyouni & Jbilou 2009).

Key questions:
- Which approximation space?
- Is this meaningful from a Control Theory perspective?

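The whole projection loop fits in a short dense sketch. The choices below (a shifted 1D Laplacian for A, random B and C, a polynomial block Krylov basis, SciPy's dense CARE solver for the reduced equation) are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n, p, s, k = 400, 2, 2, 20
A = -3.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # shifted 1D Laplacian: symmetric, stable
rng = np.random.default_rng(2)
B = rng.standard_normal((n, p))
C = rng.standard_normal((s, n))

# Orthonormal basis V of the block Krylov space K_k(A, C^T) (block Arnoldi)
V = np.linalg.qr(C.T)[0]
W = V
for _ in range(k - 1):
    W = A @ W
    W -= V @ (V.T @ W)
    W -= V @ (V.T @ W)            # second Gram-Schmidt pass for stability
    W = np.linalg.qr(W)[0]
    V = np.hstack([V, W])

# Reduced Riccati equation, then lift: X_k = V Y_k V^T
Tk, Bk, Ck = V.T @ A @ V, V.T @ B, C @ V
Y = solve_continuous_are(Tk, Bk, Ck.T @ Ck, np.eye(p))
Xk = V @ Y @ V.T

R = A.T @ Xk + Xk @ A - Xk @ B @ B.T @ Xk + C.T @ C
print(V.shape[1], np.linalg.norm(R, 'fro'))   # residual shrinks as the space grows
```

Note that only the small d_k × d_k reduced equation is nonlinear; the large matrix X_k is never formed in practice, since V and Y_k give it in factored form.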

11 Dynamical systems and the Riccati equation

    A^T X + X A - X BB^T X + C^T C = 0

Time-invariant linear system:

    ẋ(t) = A x(t) + B u(t),   x(0) = x_0
    y(t) = C x(t)

u(t): control (input) vector; y(t): output vector;
x(t): state vector; x_0: initial state.

Minimization problem for a cost functional (simplified form):

    inf_u J(u, x_0),   J(u, x_0) := ∫_0^∞ ( x(t)^T C^T C x(t) + u(t)^T u(t) ) dt


13 Dynamical systems and the Riccati equation

    A^T X + X A - X BB^T X + C^T C = 0

    inf_u J(u, x_0),   J(u, x_0) := ∫_0^∞ ( x(t)^T C^T C x(t) + u(t)^T u(t) ) dt

Theorem. Let the pair (A, B) be stabilizable and (C, A) observable. Then
there is a unique solution X ≥ 0 of the Riccati equation. Moreover,
i)  for each x_0 there is a unique optimal control, and it is given by
    u_*(t) = -B^T X exp((A - BB^T X)t) x_0 for t ≥ 0;
ii) J(u_*, x_0) = x_0^T X x_0 for all x_0 ∈ R^n.

(see, e.g., Lancaster & Rodman 1995)
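The identity in (ii) is easy to verify numerically: simulate the closed loop ẋ = (A - BB^T X)x, integrate the running cost along the optimal feedback u_*(t) = -B^T X x(t), and compare with x_0^T X x_0. The data below (random stable A, horizon T = 2, trapezoidal quadrature) are assumptions of the sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, expm

rng = np.random.default_rng(4)
n, p, s = 20, 2, 2
A = rng.standard_normal((n, n)) - n * np.eye(n)   # strongly stable by construction
B = rng.standard_normal((n, p))
C = rng.standard_normal((s, n))
X = solve_continuous_are(A, B, C.T @ C, np.eye(p))

x0 = rng.standard_normal(n)
Acl = A - B @ B.T @ X                     # closed-loop matrix

# Trapezoidal quadrature of x^T C^T C x + u^T u along x(t) = exp(Acl t) x0
T, m = 2.0, 4000
dt = T / m
step = expm(Acl * dt)                     # exact propagator over one time step
x, cost = x0.copy(), []
for _ in range(m + 1):
    u = -B.T @ (X @ x)
    cost.append(x @ (C.T @ C) @ x + u @ u)
    x = step @ x
cost = np.array(cost)
J = dt * (0.5 * cost[0] + cost[1:-1].sum() + 0.5 * cost[-1])
print(J, x0 @ X @ x0)                     # the two values agree closely
```

The short horizon suffices here because the closed loop decays fast; for a mildly stable system the quadrature window would have to grow accordingly.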

14 Order reduction of dynamical systems by projection

Let V_k ∈ R^{n×d_k} have orthonormal columns, d_k ≪ n.
Let T_k = V_k^T A V_k, B_k = V_k^T B, C_k = C V_k.

Reduced-order dynamical system:

    x̂'(t) = T_k x̂(t) + B_k û(t),   x̂(0) = x̂_0 := V_k^T x_0
    ŷ(t) = C_k x̂(t)

    x_k(t) = V_k x̂(t) ≈ x(t)

Typical frameworks: transfer function approximation, model reduction.

15 The role of the projected Riccati equation

Consider again the reduced Riccati equation

    (V_k^T A^T V_k) Y + Y (V_k^T A V_k) - Y (V_k^T BB^T V_k) Y + (V_k^T C^T)(C V_k) = 0,

that is,

    T_k^T Y + Y T_k - Y B_k B_k^T Y + C_k^T C_k = 0.   (*)

Theorem. Let the pair (T_k, B_k) be stabilizable and (C_k, T_k) observable.
Then there is a unique solution Y_k ≥ 0 of (*) that for each x̂_0 gives the
feedback optimal control

    û_*(t) = -B_k^T Y_k exp((T_k - B_k B_k^T Y_k)t) x̂_0,   t ≥ 0,

for the reduced system.

If there exists a matrix K such that A - BK is passive, then the pair
(T_k, B_k) is stabilizable.


18 Projected optimal control vs approximate control

Our projected optimal control function:

    û_*(t) = -B_k^T Y_k exp((T_k - B_k B_k^T Y_k)t) x̂_0,   t ≥ 0,

with X_k = V_k Y_k V_k^T.

Commonly used approximate control function: if X̃ is some approximation to X, then

    ũ(t) := -B^T X̃ x̃(t),   where x̃(t) := exp((A - BB^T X̃)t) x_0.

û_* ≠ ũ: they induce different actions on the functional J, even for X̃ = X_k.

19 Projected optimal control vs approximate control

X_k = V_k Y_k V_k^T. Residual matrix:

    R_k := A^T X_k + X_k A - X_k BB^T X_k + C^T C

Projected optimal control function: û_*(t) = -B_k^T Y_k exp((T_k - B_k B_k^T Y_k)t) x̂_0.

Theorem. Assume that A - BB^T X_k is stable and let ũ(t) := -B^T X_k x̃(t) be
the approximate control. Then

    J(ũ, x_0) - Ĵ_k(û_*, x̂_0) = E_k,   with |E_k| ≤ (‖R_k‖ / (2α)) x_0^T x_0,

where α > 0 is such that ‖e^{(A - BB^T X_k)t}‖ ≤ e^{-αt} for all t ≥ 0.

Note: J(ũ, x_0) - Ĵ_k(û_*, x̂_0) is nonzero for R_k ≠ 0.

20 On the choice of approximation space

Approximate solution X_k = V_k Y_k V_k^T, with

    (V_k^T A^T V_k) Y_k + Y_k (V_k^T A V_k) - Y_k (V_k^T BB^T V_k) Y_k + (V_k^T C^T)(C V_k) = 0.

Krylov-type subspaces (extensively used in the linear case):

    K_k(A, C^T)  := Range([C^T, A C^T, ..., A^{k-1} C^T])               (Polynomial)
    EK_k(A, C^T) := K_k(A, C^T) + K_k(A^{-1}, A^{-1} C^T)               (EKS, Rational)
    RK_k(A, C^T, s) := Range([C^T, (A - s_2 I)^{-1} C^T, ...,
                              ∏_{j=1}^{k-1} (A - s_{j+1} I)^{-1} C^T])  (RKS, Rational)

- The matrix BB^T is not involved (nonlinear term!)
- Parameters s_j (adaptively) chosen in the field of values of A

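Of the three spaces, the extended Krylov space only needs repeated solves with A in addition to products; a sketch of the basis construction (the dense inverse stands in for a sparse factorization, and the helper name is ours):

```python
import numpy as np

def extended_krylov_basis(A, Ct, k):
    """Orthonormal basis of EK_k(A, C^T) = K_k(A, C^T) + K_k(A^{-1}, A^{-1} C^T)."""
    Ainv = np.linalg.inv(A)         # in practice: reuse a sparse LU factorization of A
    Vp, Vm = Ct, Ainv @ Ct
    blocks = []
    for _ in range(k):
        blocks += [Vp, Vm]
        Vp, Vm = A @ Vp, Ainv @ Vm  # next blocks A^j C^T and A^{-(j+1)} C^T
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q

n = 200
A = -3.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # shifted 1D Laplacian
C = np.random.default_rng(5).standard_normal((2, n))
V = extended_krylov_basis(A, C.T, 6)
print(V.shape, np.linalg.norm(V.T @ V - np.eye(V.shape[1])))
```

A production version would orthogonalize block by block with reorthogonalization and deflate near-dependent columns; the one-shot QR above is only meant to show which vectors span the space.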

23 Performance of solvers

Problem: A = 3D Laplace operator, B, C randn matrices, tol = 10^{-8}.

[Table: for (n, p, s) = (125000, 5, 5) and (n, p, s) = (125000, 20, 20),
comparison of Newton (several starting guesses X_0), GP-EKS and GP-RKS in
terms of iterations, inner iterations, CPU time, space dimension and rank(X_f).]

GP = Galerkin projection   (V. Simoncini, D. Szyld & M. Monsalve, 2014)

24 A numerical example on the role of BB^T

Consider the Toeplitz matrix

    A = toeplitz(-1, 2.5, 1, 1, 1),   C = [1, -2, 1, -2, ...],   B = ...

[Figure: absolute residual norm vs. space dimension for RKSM.]

Parameter computation:
Left: adaptive RKS on A.   Right: adaptive RKS on A - BB^T X_k.
(Lin & Simoncini 2015)


26 On the residual matrix and adaptive RKS

    R_k := A^T X_k + X_k A - X_k BB^T X_k + C^T C

Theorem. Let T̃_k = T_k - B_k B_k^T Y_k. Then R_k = R̃_k V_k^T + V_k R̃_k^T, with

    R̃_k = A^T V_k Y_k + V_k Y_k T̃_k + C^T (C V_k),

so that ‖R_k‖_F = √2 ‖R̃_k‖_F.

At least formally: V_k Y_k V_k^T is a solution to the Riccati equation
(R_k = 0) if and only if Z_k = V_k Y_k is the solution to the Sylvester
equation R̃_k = 0.
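Both claims are cheap to check numerically once a Galerkin solution is at hand. A dense sketch, under illustrative assumptions (shifted 1D Laplacian, s = 1, a polynomial Krylov space whose basis contains C^T):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n, p, k = 300, 2, 10
A = -3.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # symmetric, stable
rng = np.random.default_rng(6)
B = rng.standard_normal((n, p))
C = rng.standard_normal((1, n))                           # s = 1

# Orthonormal Krylov basis whose range contains C^T
V = np.linalg.qr(C.T)[0]
W = V
for _ in range(k - 1):
    W = A @ W
    W -= V @ (V.T @ W)
    W -= V @ (V.T @ W)
    W = np.linalg.qr(W)[0]
    V = np.hstack([V, W])

Tk, Bk, Ck = V.T @ A @ V, V.T @ B, C @ V
Y = solve_continuous_are(Tk, Bk, Ck.T @ Ck, np.eye(p))
Xk = V @ Y @ V.T

R = A.T @ Xk + Xk @ A - Xk @ B @ B.T @ Xk + C.T @ C
Ttil = Tk - Bk @ (Bk.T @ Y)
Rtil = A.T @ (V @ Y) + V @ (Y @ Ttil) + C.T @ (C @ V)     # semi-residual

print(np.linalg.norm(R - (Rtil @ V.T + V @ Rtil.T)))      # splitting holds
print(np.linalg.norm(R, 'fro') / np.linalg.norm(Rtil, 'fro'))  # ratio near sqrt(2)
```

The √2 relation follows because V_k^T R̃_k is exactly the residual of the reduced Riccati equation, hence (numerically) zero, so the two terms of the splitting are orthogonal in the Frobenius inner product.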

27 On the residual matrix and adaptive RKS

    R_k = R̃_k V_k^T + V_k R̃_k^T

Expression for the semi-residual R̃_k:

Theorem. Assume C^T ∈ R^n and Range(V_k) = RK_k(A, C^T, s). Assume that
T̃_k = T_k - B_k B_k^T Y_k is diagonalizable. Then

    R̃_k = ψ_{k,T̃_k}(A) C^T (C V_k) (ψ_{k,T̃_k}(T̃_k))^{-1},

where

    ψ_{k,T̃_k}(z) = det(zI - T̃_k) / ∏_{j=1}^{k} (z - s_j).

(see also Beckermann 2011 for the linear case)

28 On the choice of the next parameter s_{k+1}

With

    ψ_{k,T̃_k}(z) = det(zI - T̃_k) / ∏_{j=1}^{k} (z - s_j),
    R̃_k = ψ_{k,T̃_k}(A) C^T (C V_k) (ψ_{k,T̃_k}(T̃_k))^{-1}.

Greedy strategy: the next shift should make (ψ_{k,T̃_k}(T̃_k))^{-1} smaller.
Determine for which s in the spectral region of T̃_k the quantity
|ψ_{k,T̃_k}(s)|^{-1} is large, and add a root there:

    s_{k+1} = arg max_{s ∈ S_k} 1 / |ψ_{k,T̃_k}(s)|,

with S_k a region enclosing the eigenvalues of T̃_k = T_k - B_k B_k^T Y_k.

(This argument is new also for linear equations.)
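Once T̃_k is diagonalized, |ψ_{k,T̃_k}(s)| is just a ratio of products over eigenvalues and previous shifts, so the greedy step reduces to a maximization over a discretized candidate set. A sketch (the helper name and the toy data are ours):

```python
import numpy as np

def next_shift(eigs, shifts, candidates):
    """Greedy RKS shift: argmax over the candidates of 1/|psi(s)|, with
    psi(s) = det(sI - Ttil) / prod_j (s - s_j) = prod_i (s - eig_i) / prod_j (s - s_j)."""
    eigs, shifts = np.asarray(eigs), np.asarray(shifts)

    def inv_abs_psi(s):
        return np.prod(np.abs(s - shifts)) / np.prod(np.abs(s - eigs))

    vals = [inv_abs_psi(s) for s in candidates]
    return candidates[int(np.argmax(vals))]

# Toy usage: eigenvalues of Ttil = T_k - B_k B_k^T Y_k, previous shifts,
# and candidate points sampled from a region enclosing the eigenvalues
print(next_shift([-1.0, -2.0], [-10.0], [-1.5, -5.0]))   # picks -1.5
```

The candidate closest to the eigenvalues wins because |ψ| is small there; in the adaptive method the candidates are taken from (the boundary of) the region S_k and the eigenvalues are recomputed after each space expansion.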

29 Selection of s_{k+1} in RKS. An example

A: Laplacian, B = t·1 with t_j = 5·10^j, C = [1, -2, 1, -2, 1, -2, ...]

[Figure: absolute residual norm vs. space dimension; RKS convergence with and
without the modified shift selection as t varies (t_1, t_2, t_3).
Solid curves: use of T̃_k.   Dashed curves: use of T_k.]

30 Further results not presented but relevant

- Stabilization properties of the approximate solution X_k
- Accuracy tracking as the approximation space grows
- Interpretation via invariant subspace approximation
(V. Simoncini, 2016)

Wrap-up:
- Projection-type methods fill the gap between model order reduction and the Riccati equation
- Clearer role of the nonlinear term during the projection


32 Outlook

- Petrov-Galerkin projection (à la balanced truncation)
  (work in progress, with A. Alla, Florida State Univ.)
- Projected differential Riccati equations (see, e.g., Koskela & Mena, tr 2017)
- Parameterized algebraic Riccati equations (see, e.g., Schmidt & Haasdonk, tr 2017)

References

- V. Simoncini, D. B. Szyld and M. Monsalve, On two numerical methods for the
  solution of large-scale algebraic Riccati equations, IMA Journal of
  Numerical Analysis (2014)
- Y. Lin and V. Simoncini, A new subspace iteration method for the algebraic
  Riccati equation, Numerical Linear Algebra with Applications (2015)
- V. Simoncini, Analysis of the rational Krylov subspace projection method for
  large-scale algebraic Riccati equations, SIAM J. Matrix Analysis and
  Applications (2016)
- V. Simoncini, Computational methods for linear matrix equations (survey
  article), SIAM Review (2016)



More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 15-05 Induced Dimension Reduction method for solving linear matrix equations R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft Institute of Applied

More information

Balancing-Related Model Reduction for Large-Scale Systems

Balancing-Related Model Reduction for Large-Scale Systems Balancing-Related Model Reduction for Large-Scale Systems Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz D-09107 Chemnitz benner@mathematik.tu-chemnitz.de

More information

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner.

Passivity Preserving Model Reduction for Large-Scale Systems. Peter Benner. Passivity Preserving Model Reduction for Large-Scale Systems Peter Benner Mathematik in Industrie und Technik Fakultät für Mathematik Sonderforschungsbereich 393 S N benner@mathematik.tu-chemnitz.de SIMULATION

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Numerical Linear Algebra Background Cho-Jui Hsieh UC Davis May 15, 2018 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Linear Matrix Inequalities in Robust Control. Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002

Linear Matrix Inequalities in Robust Control. Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002 Linear Matrix Inequalities in Robust Control Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University MTNS 2002 Objective A brief introduction to LMI techniques for Robust Control Emphasis on

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Modern Optimal Control

Modern Optimal Control Modern Optimal Control Matthew M. Peet Arizona State University Lecture 19: Stabilization via LMIs Optimization Optimization can be posed in functional form: min x F objective function : inequality constraints

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

Model Reduction for Linear Dynamical Systems

Model Reduction for Linear Dynamical Systems Summer School on Numerical Linear Algebra for Dynamical and High-Dimensional Problems Trogir, October 10 15, 2011 Model Reduction for Linear Dynamical Systems Peter Benner Max Planck Institute for Dynamics

More information

arxiv: v3 [math.na] 9 Oct 2016

arxiv: v3 [math.na] 9 Oct 2016 RADI: A low-ran ADI-type algorithm for large scale algebraic Riccati equations Peter Benner Zvonimir Bujanović Patric Kürschner Jens Saa arxiv:50.00040v3 math.na 9 Oct 206 Abstract This paper introduces

More information

EXTENDED KRYLOV SUBSPACE FOR PARAMETER DEPENDENT SYSTEMS

EXTENDED KRYLOV SUBSPACE FOR PARAMETER DEPENDENT SYSTEMS EXTENDED KRYLOV SUBSPACE FOR PARAMETER DEPENDENT SYSTEMS V. SIMONCINI Abstract. The Extended Krylov Subspace has recently received considerable attention as a powerful tool for matrix function evaluations

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

Numerical tensor methods and their applications

Numerical tensor methods and their applications Numerical tensor methods and their applications 8 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT, QTT).

More information

Efficient iterative algorithms for linear stability analysis of incompressible flows

Efficient iterative algorithms for linear stability analysis of incompressible flows IMA Journal of Numerical Analysis Advance Access published February 27, 215 IMA Journal of Numerical Analysis (215) Page 1 of 21 doi:1.193/imanum/drv3 Efficient iterative algorithms for linear stability

More information

A POD Projection Method for Large-Scale Algebraic Riccati Equations

A POD Projection Method for Large-Scale Algebraic Riccati Equations A POD Projection Method for Large-Scale Algebraic Riccati Equations Boris Kramer Department of Mathematics Virginia Tech Blacksburg, VA 24061-0123 Email: bokr@vt.edu John R. Singler Department of Mathematics

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Krylov Subspace Methods for Nonlinear Model Reduction

Krylov Subspace Methods for Nonlinear Model Reduction MAX PLANCK INSTITUT Conference in honour of Nancy Nichols 70th birthday Reading, 2 3 July 2012 Krylov Subspace Methods for Nonlinear Model Reduction Peter Benner and Tobias Breiten Max Planck Institute

More information

Structure preserving Krylov-subspace methods for Lyapunov equations

Structure preserving Krylov-subspace methods for Lyapunov equations Structure preserving Krylov-subspace methods for Lyapunov equations Matthias Bollhöfer, André Eppler Institute Computational Mathematics TU Braunschweig MoRePas Workshop, Münster September 17, 2009 System

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Model reduction for linear systems by balancing

Model reduction for linear systems by balancing Model reduction for linear systems by balancing Bart Besselink Jan C. Willems Center for Systems and Control Johann Bernoulli Institute for Mathematics and Computer Science University of Groningen, Groningen,

More information

Inexactness and flexibility in linear Krylov solvers

Inexactness and flexibility in linear Krylov solvers Inexactness and flexibility in linear Krylov solvers Luc Giraud ENSEEIHT (N7) - IRIT, Toulouse Matrix Analysis and Applications CIRM Luminy - October 15-19, 2007 in honor of Gérard Meurant for his 60 th

More information

MATH 304 Linear Algebra Lecture 34: Review for Test 2.

MATH 304 Linear Algebra Lecture 34: Review for Test 2. MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1

More information

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini Indefinite Preconditioners for PDE-constrained optimization problems V. Simoncini Dipartimento di Matematica, Università di Bologna, Italy valeria.simoncini@unibo.it Partly joint work with Debora Sesana,

More information

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers

Model order reduction of large-scale dynamical systems with Jacobi-Davidson style eigensolvers MAX PLANCK INSTITUTE International Conference on Communications, Computing and Control Applications March 3-5, 2011, Hammamet, Tunisia. Model order reduction of large-scale dynamical systems with Jacobi-Davidson

More information

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Lars Eldén Department of Mathematics Linköping University, Sweden Joint work with Valeria Simoncini February 21 Lars Eldén

More information

Reduced Order Methods for Large Scale Riccati Equations

Reduced Order Methods for Large Scale Riccati Equations Reduced Order Methods for Large Scale Riccati Equations Miroslav Karolinov Stoyanov Dissertation submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment

More information

On a Nonlinear Riccati Matrix Eigenproblem

On a Nonlinear Riccati Matrix Eigenproblem Applied Mathematical Sciences, Vol. 11, 2017, no. 33, 1619-1650 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2017.74152 On a Nonlinear Riccati Matrix Eigenproblem L. Kohaupt Beuth University

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study

ADI-preconditioned FGMRES for solving large generalized Lyapunov equations - A case study -preconditioned for large - A case study Matthias Bollhöfer, André Eppler TU Braunschweig Institute Computational Mathematics Syrene-MOR Workshop, TU Hamburg October 30, 2008 2 / 20 Outline 1 2 Overview

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Exploiting off-diagonal rank structures in the solution of linear matrix equations

Exploiting off-diagonal rank structures in the solution of linear matrix equations Stefano Massei Exploiting off-diagonal rank structures in the solution of linear matrix equations Based on joint works with D. Kressner (EPFL), M. Mazza (IPP of Munich), D. Palitta (IDCTS of Magdeburg)

More information

H 2 optimal model reduction - Wilson s conditions for the cross-gramian

H 2 optimal model reduction - Wilson s conditions for the cross-gramian H 2 optimal model reduction - Wilson s conditions for the cross-gramian Ha Binh Minh a, Carles Batlle b a School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Dai

More information

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS

AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS AN ITERATIVE METHOD TO SOLVE SYMMETRIC POSITIVE DEFINITE MATRIX EQUATIONS DAVOD KHOJASTEH SALKUYEH and FATEMEH PANJEH ALI BEIK Communicated by the former editorial board Let A : R m n R m n be a symmetric

More information

1. Find the solution of the following uncontrolled linear system. 2 α 1 1

1. Find the solution of the following uncontrolled linear system. 2 α 1 1 Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Solutions for Math 225 Assignment #5 1

Solutions for Math 225 Assignment #5 1 Solutions for Math 225 Assignment #5 1 (1) Find a polynomial f(x) of degree at most 3 satisfying that f(0) = 2, f( 1) = 1, f(1) = 3 and f(3) = 1. Solution. By Lagrange Interpolation, ( ) (x + 1)(x 1)(x

More information

Krylov Subspaces. The order-n Krylov subspace of A generated by x is

Krylov Subspaces. The order-n Krylov subspace of A generated by x is Lab 1 Krylov Subspaces Lab Objective: matrices. Use Krylov subspaces to find eigenvalues of extremely large One of the biggest difficulties in computational linear algebra is the amount of memory needed

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 5: Numerical Linear Algebra Cho-Jui Hsieh UC Davis April 20, 2017 Linear Algebra Background Vectors A vector has a direction and a magnitude

More information

Iterative solvers for saddle point algebraic linear systems: tools of the trade. V. Simoncini

Iterative solvers for saddle point algebraic linear systems: tools of the trade. V. Simoncini Iterative solvers for saddle point algebraic linear systems: tools of the trade V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The problem

More information

arxiv: v1 [cs.na] 25 May 2018

arxiv: v1 [cs.na] 25 May 2018 NUMERICAL METHODS FOR DIFFERENTIAL LINEAR MATRIX EQUATIONS VIA KRYLOV SUBSPACE METHODS M. HACHED AND K. JBILOU arxiv:1805.10192v1 [cs.na] 25 May 2018 Abstract. In the present paper, we present some numerical

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information

Solving large sparse eigenvalue problems

Solving large sparse eigenvalue problems Solving large sparse eigenvalue problems Mario Berljafa Stefan Güttel June 2015 Contents 1 Introduction 1 2 Extracting approximate eigenpairs 2 3 Accuracy of the approximate eigenpairs 3 4 Expanding the

More information

Module 03 Linear Systems Theory: Necessary Background

Module 03 Linear Systems Theory: Necessary Background Module 03 Linear Systems Theory: Necessary Background Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September

More information