Order reduction numerical methods for the algebraic Riccati equation. V. Simoncini

V. Simoncini
Dipartimento di Matematica, Alma Mater Studiorum - Università di Bologna
valeria.simoncini@unibo.it

The problem

Find $X \in \mathbb{R}^{n\times n}$ such that

$$A^\top X + X A - X BB^\top X + C^\top C = 0,$$

with $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times p}$, $C \in \mathbb{R}^{s\times n}$, $p, s = O(1)$.

Rich literature on analysis, applications and numerics: Lancaster-Rodman 1995, Bini-Iannazzo-Meini 2012, Mehrmann et al. 2003, ...

We focus on the large-scale case: $n \gg 1000$.

Different strategies:
- (Inexact) Kleinman iteration (Newton-type method)
- Projection methods
- Invariant subspace iteration
- (Sparse) multilevel methods
- ...

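For small $n$, a dense reference solution is easy to obtain; the following is a minimal SciPy sketch (the random data and the stabilizing diagonal shift are illustrative placeholders, not the problem sizes considered in this talk):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
n, p, s = 200, 5, 5
A = rng.standard_normal((n, n)) - 2 * np.sqrt(n) * np.eye(n)  # shifted so A is (very likely) stable
B = rng.standard_normal((n, p))
C = rng.standard_normal((s, n))

# solve_continuous_are solves  A^T X + X A - X B R^{-1} B^T X + Q = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(p))

R = A.T @ X + X @ A - X @ B @ (B.T @ X) + C.T @ C
print(np.linalg.norm(R) / np.linalg.norm(C.T @ C))  # small relative residual
```

Such a dense solve costs $O(n^3)$ and stores $X$ in full, which is exactly what the large-scale methods below avoid.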
Newton-Kleinman iteration

Assume $A$ stable. Compute a sequence $\{X_k\}$ with $X_k \to X$ as $k \to \infty$:

$$(A - BB^\top X_k)^\top X_{k+1} + X_{k+1}(A - BB^\top X_k) + C^\top C + X_k BB^\top X_k = 0$$

1: Given $X_0 \in \mathbb{R}^{n\times n}$ such that $X_0 = X_0^\top$ and $A - BB^\top X_0$ is stable.
2: For $k = 0, 1, \ldots$, until convergence:
3:   Set $A_k = A - BB^\top X_k$
4:   Set $C_k^\top = [X_k B, \; C^\top]$
5:   Solve $A_k^\top X_{k+1} + X_{k+1} A_k + C_k^\top C_k = 0$

Critical issues:
- The full matrix $X_k$ cannot be stored (sparse or low-rank approximation)
- Need a computable stopping criterion
- Each iteration $k$ requires the solution of a Lyapunov equation
(Benner, Feitzinger, Hylla, Saak, Sachs, ...)
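A dense sketch of the loop above, with SciPy's Lyapunov solver standing in for line 5 (in the large-scale regime this solve would be replaced by a low-rank solver, e.g. ADI or Krylov projection, and $X_k$ kept in factored form):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, C, X0, tol=1e-10, maxit=50):
    """Kleinman iteration; assumes A - B B^T X0 is stable (X0 = 0 if A is stable)."""
    X = X0
    nrm0 = np.linalg.norm(C.T @ C)
    for _ in range(maxit):
        Ak = A - B @ (B.T @ X)               # A_k
        Ck = np.vstack([B.T @ X, C])         # C_k^T = [X_k B, C^T]
        # solve A_k^T X+ + X+ A_k + C_k^T C_k = 0
        X = solve_continuous_lyapunov(Ak.T, -(Ck.T @ Ck))
        R = A.T @ X + X @ A - X @ B @ (B.T @ X) + C.T @ C
        if np.linalg.norm(R) <= tol * nrm0:  # computable stopping criterion
            break
    return X
```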

Galerkin projection method for the Riccati equation

Given an orthonormal basis $V_k$ of an approximation space, determine an approximation $X_k = V_k Y_k V_k^\top$ to the Riccati solution matrix by orthogonal projection.

Galerkin condition: residual orthogonal to the approximation space,

$$V_k^\top (A^\top X_k + X_k A - X_k BB^\top X_k + C^\top C) V_k = 0,$$

giving the reduced Riccati equation

$$(V_k^\top A^\top V_k) Y + Y (V_k^\top A V_k) - Y (V_k^\top BB^\top V_k) Y + (V_k^\top C^\top)(C V_k) = 0.$$

$Y_k$ is taken as the stabilizing solution (Heyouni-Jbilou 2009).

Key questions:
- Which approximation space?
- Is this meaningful from a Control Theory perspective?

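A minimal dense sketch of one projection step, assuming $V$ has orthonormal columns and using SciPy's CARE solver for the reduced equation (the full-size residual is formed only for checking; at scale one exploits the low-rank structure instead):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def galerkin_riccati(A, B, C, V):
    """One Galerkin step: solve the reduced CARE
    T^T Y + Y T - Y Bk Bk^T Y + Ck^T Ck = 0 and report the true residual."""
    T, Bk, Ck = V.T @ (A @ V), V.T @ B, C @ V
    Y = solve_continuous_are(T, Bk, Ck.T @ Ck, np.eye(B.shape[1]))
    Xk = V @ Y @ V.T                 # X_k = V Y_k V^T (formed only for checking)
    R = A.T @ Xk + Xk @ A - Xk @ B @ (B.T @ Xk) + C.T @ C
    return Y, np.linalg.norm(R)
```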

Dynamical systems and the Riccati equation

$$A^\top X + X A - X BB^\top X + C^\top C = 0$$

Time-invariant linear system:

$$\dot{x}(t) = A x(t) + B u(t), \quad x(0) = x_0, \qquad y(t) = C x(t)$$

$u(t)$: control (input) vector; $y(t)$: output vector; $x(t)$: state vector; $x_0$: initial state.

Minimization problem for a cost functional (simplified form):

$$\inf_u J(u, x_0), \qquad J(u, x_0) := \int_0^\infty \left( x(t)^\top C^\top C x(t) + u(t)^\top u(t) \right) dt$$

Dynamical systems and the Riccati equation

$$A^\top X + X A - X BB^\top X + C^\top C = 0, \qquad \inf_u J(u, x_0), \quad J(u, x_0) := \int_0^\infty \left( x(t)^\top C^\top C x(t) + u(t)^\top u(t) \right) dt$$

Theorem. Let the pair $(A, B)$ be stabilizable and $(C, A)$ observable. Then there is a unique solution $X \ge 0$ of the Riccati equation. Moreover,
i) for each $x_0$ there is a unique optimal control, and it is given by $u_*(t) = -B^\top X \exp((A - BB^\top X)t) x_0$ for $t \ge 0$;
ii) $J(u_*, x_0) = x_0^\top X x_0$ for all $x_0 \in \mathbb{R}^n$.
(see, e.g., Lancaster & Rodman, 1995)
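A small numerical sanity check of i)-ii) is easy to script; the random data below are placeholders (generically stabilizable and observable), and the theorem guarantees that the closed-loop matrix is stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, expm
from scipy.integrate import quad

rng = np.random.default_rng(1)
n, p = 6, 2
A = rng.standard_normal((n, n)) - 2 * np.eye(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((2, n))
x0 = rng.standard_normal(n)

X = solve_continuous_are(A, B, C.T @ C, np.eye(p))
Acl = A - B @ (B.T @ X)              # closed loop, stable by the theorem

def integrand(t):
    x = expm(Acl * t) @ x0           # optimal trajectory
    u = -B.T @ (X @ x)               # u_*(t) = -B^T X exp((A - BB^T X)t) x0
    return x @ (C.T @ C) @ x + u @ u

J, _ = quad(integrand, 0.0, np.inf, limit=300)
print(J, x0 @ X @ x0)                # the two numbers should nearly agree
```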

Order reduction of dynamical systems by projection

Let $V_k \in \mathbb{R}^{n \times d_k}$ have orthonormal columns, $d_k \ll n$. Let $T_k = V_k^\top A V_k$, $B_k = V_k^\top B$, $C_k = C V_k$.

Reduced order dynamical system:

$$\dot{\hat{x}}(t) = T_k \hat{x}(t) + B_k \hat{u}(t), \qquad \hat{y}(t) = C_k \hat{x}(t), \qquad \hat{x}(0) = \hat{x}_0 := V_k^\top x_0$$

$$x_k(t) = V_k \hat{x}(t) \approx x(t)$$

Typical frameworks:
- Transfer function approximation
- Model reduction

The role of the projected Riccati equation

Consider again the reduced Riccati equation

$$(V_k^\top A^\top V_k) Y + Y (V_k^\top A V_k) - Y (V_k^\top BB^\top V_k) Y + (V_k^\top C^\top)(C V_k) = 0,$$

that is,

$$T_k^\top Y + Y T_k - Y B_k B_k^\top Y + C_k^\top C_k = 0. \qquad (*)$$

Theorem. Let the pair $(T_k, B_k)$ be stabilizable and $(C_k, T_k)$ observable. Then there is a unique solution $Y_k \ge 0$ of (*) that for each $\hat{x}_0$ gives the feedback optimal control

$$\hat{u}_*(t) = -B_k^\top Y_k \exp((T_k - B_k B_k^\top Y_k)t)\, \hat{x}_0, \quad t \ge 0,$$

for the reduced system.

If there exists a matrix $K$ such that $A - BK$ is passive, then the pair $(T_k, B_k)$ is stabilizable.

Projected optimal control vs approximate control

Our projected optimal control function:

$$\hat{u}_*(t) = -B_k^\top Y_k \exp((T_k - B_k B_k^\top Y_k)t)\, \hat{x}_0, \quad t \ge 0, \qquad \text{with } X_k = V_k Y_k V_k^\top$$

Commonly used approximate control function: if $\widetilde{X}$ is some approximation to $X$, then

$$\widetilde{u}(t) := -B^\top \widetilde{X}\, \widetilde{x}(t), \qquad \text{where } \widetilde{x}(t) := \exp((A - BB^\top \widetilde{X})t)\, x_0$$

$\hat{u}_* \ne \widetilde{u}$: they induce different actions on the functional $J$, even for $\widetilde{X} = X_k$.

Projected optimal control vs approximate control, $X_k = V_k Y_k V_k^\top$

Residual matrix: $R_k := A^\top X_k + X_k A - X_k BB^\top X_k + C^\top C$

Projected optimal control function: $\hat{u}_*(t) = -B_k^\top Y_k \exp((T_k - B_k B_k^\top Y_k)t)\,\hat{x}_0$

Theorem. Assume that $A - BB^\top X_k$ is stable and let $\widetilde{u}(t) := -B^\top X_k\, \widetilde{x}(t)$ be the approximate control. Then

$$J(\widetilde{u}, x_0) - \hat{J}_k(\hat{u}_*, \hat{x}_0) = E_k, \qquad |E_k| \le \frac{\|R_k\|}{2\alpha}\, x_0^\top x_0,$$

where $\alpha > 0$ is such that $\|e^{(A - BB^\top X_k)t}\| \le e^{-\alpha t}$ for all $t \ge 0$.

Note: $J(\widetilde{u}, x_0) - \hat{J}_k(\hat{u}_*, \hat{x}_0)$ is in general nonzero for $R_k \ne 0$.

On the choice of approximation space

Approximate solution $X_k = V_k Y_k V_k^\top$, with

$$(V_k^\top A^\top V_k) Y_k + Y_k (V_k^\top A V_k) - Y_k (V_k^\top BB^\top V_k) Y_k + (V_k^\top C^\top)(C V_k) = 0$$

Krylov-type subspaces (extensively used in the linear case):

$$K_k(A, C^\top) := \mathrm{Range}([C^\top, A C^\top, \ldots, A^{k-1} C^\top]) \qquad \text{(Polynomial)}$$

$$EK_k(A, C^\top) := K_k(A, C^\top) + K_k(A^{-1}, A^{-1} C^\top) \qquad \text{(EKS, Rational)}$$

$$RK_k(A, C^\top, \mathbf{s}) := \mathrm{Range}\Big(\Big[C^\top, (A - s_2 I)^{-1} C^\top, \ldots, \textstyle\prod_{j=1}^{k-1} (A - s_{j+1} I)^{-1} C^\top\Big]\Big) \qquad \text{(RKS, Rational)}$$

- Matrix $BB^\top$ not involved (nonlinear term!)
- Parameters $s_j$ (adaptively) chosen in the field of values of $A$
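As a sketch, such a rational Krylov basis could be assembled as follows for given shifts (dense solves and a single final QR for readability; at scale one would factor $A - s_j I$ sparsely and orthogonalize incrementally):

```python
import numpy as np

def rational_krylov_basis(A, C0, shifts):
    """Orthonormal basis of RK(A, C0, s): blocks C0, (A - s2 I)^{-1} C0, ...,
    built as nested resolvent products; C0 is n x s, shifts = [s2, ..., sk]."""
    n = A.shape[0]
    W = C0
    blocks = [W]
    for sj in shifts:
        W = np.linalg.solve(A - sj * np.eye(n), W)   # next nested block
        blocks.append(W)
    V, _ = np.linalg.qr(np.hstack(blocks))           # one-shot orthonormalization
    return V
```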

Performance of solvers

Problem: $A$: 3D Laplace operator, $B$, $C$ randn matrices, tol = $10^{-8}$.

$(n, p, s) = (125000, 5, 5)$:

  method           its   inner its   time   space dim   rank(X_f)
  Newton, X_0 = 0  15    5,...,5     808    100         95
  GP-EKS           20    --          531    200         105
  GP-RKS           25    --          524    125         105

$(n, p, s) = (125000, 20, 20)$:

  method           its   inner its   time   space dim   rank(X_f)
  Newton, X_0 = 0  19    5,...,5     2332   400         346
  GP-EKS           15    --          622    600         364
  GP-RKS           20    --          720    400         358

GP = Galerkin projection. (V. Simoncini, D. Szyld & M. Monsalve, 2014)

A numerical example on the role of $BB^\top$

Consider the $500 \times 500$ Toeplitz matrix $A = \mathrm{toeplitz}(-1, 2.5, 1, 1, 1)$, $C = [1, -2, 1, -2, \ldots]$, $B = \mathbf{1}$.

[Figure: absolute residual norm vs. space dimension for adaptive RKSM, two panels.]

Parameter computation:
- Left: adaptive RKS on $A$
- Right: adaptive RKS on $A - BB^\top X_k$
(Lin & Simoncini 2015)

On the residual matrix and adaptive RKS

$$R_k := A^\top X_k + X_k A - X_k BB^\top X_k + C^\top C$$

Theorem. Let $\widetilde{T}_k = T_k - B_k B_k^\top Y_k$. Then $R_k = \widehat{R}_k V_k^\top + V_k \widehat{R}_k^\top$, with

$$\widehat{R}_k = A^\top V_k Y_k + V_k Y_k \widetilde{T}_k + C^\top (C V_k),$$

so that $\|R_k\|_F = \sqrt{2}\, \|\widehat{R}_k\|_F$.

At least formally: $V_k Y_k V_k^\top$ is a solution to the Riccati equation ($R_k = 0$) if and only if $Z_k = V_k Y_k$ is the solution to the Sylvester equation ($\widehat{R}_k = 0$).
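The factored form makes the residual norm cheap to monitor. Below is a dense sanity check of the relation above, under the assumption (satisfied by all the spaces considered here) that $\mathrm{Range}(C^\top) \subseteq \mathrm{Range}(V_k)$; a crude polynomial Krylov basis is used just for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(2)
n, p, k = 120, 2, 10
A = rng.standard_normal((n, n)) - 12 * np.eye(n)   # shifted to be (likely) stable
B = rng.standard_normal((n, p))
c = rng.standard_normal((n, 1))                    # C^T, with s = 1
blocks = [c]
for _ in range(k - 1):                             # crude basis containing Range(C^T)
    blocks.append(A.T @ blocks[-1])
V, _ = np.linalg.qr(np.hstack(blocks))

T, Bk, Ck = V.T @ A @ V, V.T @ B, c.T @ V
Y = solve_continuous_are(T, Bk, Ck.T @ Ck, np.eye(p))
Tt = T - Bk @ Bk.T @ Y                             # T~_k
Rhat = A.T @ V @ Y + V @ Y @ Tt + c @ (c.T @ V)    # semi-residual
Xk = V @ Y @ V.T
R = A.T @ Xk + Xk @ A - Xk @ B @ (B.T @ Xk) + c @ c.T
print(np.linalg.norm(R, 'fro'), np.sqrt(2) * np.linalg.norm(Rhat, 'fro'))
```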

On the residual matrix and adaptive RKS

$$R_k = \widehat{R}_k V_k^\top + V_k \widehat{R}_k^\top$$

Expression for the semi-residual $\widehat{R}_k$:

Theorem. Assume $C^\top \in \mathbb{R}^n$ and $\mathrm{Range}(V_k) = RK_k(A, C^\top, \mathbf{s})$. Assume that $\widetilde{T}_k = T_k - B_k B_k^\top Y_k$ is diagonalizable. Then

$$\widehat{R}_k = \psi_{k,\widetilde{T}_k}(A^\top)\, C^\top\, C V_k\, \big(\psi_{k,\widetilde{T}_k}(\widetilde{T}_k)\big)^{-1}, \qquad \text{where } \psi_{k,\widetilde{T}_k}(z) = \frac{\det(zI - \widetilde{T}_k)}{\prod_{j=1}^{k} (z - s_j)}.$$

(see also Beckermann 2011 for the linear case)

On the choice of the next parameter $s_{k+1}$

$$\widehat{R}_k = \psi_{k,\widetilde{T}_k}(A^\top)\, C^\top\, C V_k\, \big(\psi_{k,\widetilde{T}_k}(\widetilde{T}_k)\big)^{-1}, \qquad \text{with } \psi_{k,\widetilde{T}_k}(z) = \frac{\det(zI - \widetilde{T}_k)}{\prod_{j=1}^{k}(z - s_j)}$$

Greedy strategy: the next shift should make $(\psi_{k,\widetilde{T}_k}(\widetilde{T}_k))^{-1}$ smaller. Determine for which $s$ in the spectral region of $\widetilde{T}_k$ the quantity $|\psi_{k,\widetilde{T}_k}(s)|^{-1}$ is large, and add a root there:

$$s_{k+1} = \arg\max_{s \in S_k} \frac{1}{|\psi_{k,\widetilde{T}_k}(s)|},$$

$S_k$ a region enclosing the eigenvalues of $\widetilde{T}_k = T_k - B_k B_k^\top Y_k$.

(This argument is new also for linear equations)
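A sketch of this greedy selection over a user-supplied candidate set (how $S_k$ is chosen and discretized is left open here; the roots of $\det(zI - \widetilde{T}_k)$ are obtained as the eigenvalues of $\widetilde{T}_k$):

```python
import numpy as np

def next_shift(T_tilde, shifts, candidates):
    """Greedy RKS shift: return argmax over candidates of 1/|psi(s)|,
    psi(z) = det(zI - T~)/prod_j (z - s_j), evaluated via eigenvalues of T~."""
    mu = np.linalg.eigvals(T_tilde)                # roots of det(zI - T~)
    sh = np.asarray(shifts, dtype=complex)
    vals = [np.prod(np.abs(s - sh)) / np.prod(np.abs(s - mu)) for s in candidates]
    return candidates[int(np.argmax(vals))]
```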

Selection of $s_{k+1}$ in RKS: an example

$A$: $900 \times 900$ 2D Laplacian, $B = t\,\mathbf{1}$ with $t_j = 5 \cdot 10^{-j}$, $C = [1, -2, 1, -2, 1, -2, \ldots]$

[Figure: absolute residual norm vs. space dimension; RKS convergence with and without the modified shift selection as $t$ varies over $t_1, t_2, t_3$. Solid curves: use of $\widetilde{T}_k$. Dashed curves: use of $T_k$.]

Further results not presented but relevant:
- Stabilization properties of the approximate solution $X_k$
- Accuracy tracking as the approximation space grows
- Interpretation via invariant subspace approximation
(V. Simoncini, 2016)

Wrap-up:
- Projection-type methods fill the gap between MOR and the Riccati equation
- Clearer role of the nonlinear term during the projection

Outlook

- Petrov-Galerkin projection (à la balanced truncation) (work in progress, with A. Alla, Florida State Univ.)
- Projected differential Riccati equations (see, e.g., Koskela & Mena, tech. rep. 2017)
- Parameterized algebraic Riccati equations (see, e.g., Schmidt & Haasdonk, tech. rep. 2017)

References

- V. Simoncini, D. B. Szyld and M. Monsalve, On two numerical methods for the solution of large-scale algebraic Riccati equations, IMA Journal of Numerical Analysis (2014).
- Y. Lin and V. Simoncini, A new subspace iteration method for the algebraic Riccati equation, Numerical Linear Algebra with Applications (2015).
- V. Simoncini, Analysis of the rational Krylov subspace projection method for large-scale algebraic Riccati equations, SIAM J. Matrix Analysis and Applications (2016).
- V. Simoncini, Computational methods for linear matrix equations (survey article), SIAM Review (2016).
