FOR LARGE-SCALE SYSTEMS
Professur Mathematik in Industrie und Technik, Fakultät für Mathematik, Technische Universität Chemnitz
Recent Advances in Model Order Reduction Workshop, TU Eindhoven, November 23, 2007
Overview

1. Model Reduction for Linear Systems: Application Areas, Goals
2. The Basic Ideas: Balancing-Related Model Reduction
3. Numerical Algorithms for Solving Large-Scale Matrix Equations: ADI Method for Lyapunov Equations, Newton's Method for AREs
4. Solvers of Complexity < O(n³)
5. Systems with a Large Number of Terminals
6.-8. Examples: Optimal Control (Cooling of Steel Profiles), MEMS (Microthruster, Butterfly Gyro)
Dynamical Systems

Σ:  ẋ(t) = f(t, x(t), u(t)),  x(t₀) = x⁰,
    y(t) = g(t, x(t), u(t)),

with states x(t) ∈ R^n, inputs u(t) ∈ R^m, outputs y(t) ∈ R^p.
Model Reduction for Dynamical Systems

Original system
Σ:  ẋ(t) = f(t, x(t), u(t)),
    y(t) = g(t, x(t), u(t)),
states x(t) ∈ R^n, inputs u(t) ∈ R^m, outputs y(t) ∈ R^p.

Reduced-order system
Σ̂:  ẋ̂(t) = f̂(t, x̂(t), u(t)),
    ŷ(t) = ĝ(t, x̂(t), u(t)),
states x̂(t) ∈ R^r, r ≪ n, inputs u(t) ∈ R^m, outputs ŷ(t) ∈ R^p.

Goal: ‖y − ŷ‖ < tolerance · ‖u‖ for all admissible input signals.
Model Reduction for Linear Systems

Linear, time-invariant (LTI) systems:
  f(t, x, u) = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},
  g(t, x, u) = Cx + Du,  C ∈ R^{p×n}, D ∈ R^{p×m}.

Linear systems in the frequency domain: applying the Laplace transform (x(t) ↦ x(s), ẋ(t) ↦ s·x(s)) to the linear system with x(0) = 0,
  s·x(s) = A x(s) + B u(s),   y(s) = C x(s) + D u(s),
yields the I/O relation in the frequency domain:
  y(s) = ( C(sI_n − A)^{-1} B + D ) u(s) =: G(s) u(s).
G is the transfer function of Σ.
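As a small illustration (not from the slides), the transfer function G(s) = C(sI − A)^{-1}B + D can be evaluated numerically by solving one linear system per frequency point instead of forming the inverse; a minimal sketch in Python/NumPy, with illustrative variable names:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (s*I - A)^{-1} B + D at one (complex) frequency s."""
    n = A.shape[0]
    # Solve (s*I - A) X = B rather than inverting (s*I - A) explicitly.
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D
```

Sweeping s = iω over a frequency grid gives the frequency response used in the error plots later in the talk.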
Model Reduction for Linear Systems

Problem: approximate the dynamical system
  ẋ = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},
  y = Cx + Du,  C ∈ R^{p×n}, D ∈ R^{p×m},
by a reduced-order system
  ẋ̂ = Â x̂ + B̂ u,  Â ∈ R^{r×r}, B̂ ∈ R^{r×m},
  ŷ = Ĉ x̂ + D̂ u,  Ĉ ∈ R^{p×r}, D̂ ∈ R^{p×m},
of order r ≪ n, such that
  ‖y − ŷ‖ = ‖Gu − Ĝu‖ ≤ ‖G − Ĝ‖ · ‖u‖ < tolerance · ‖u‖.
⇒ Approximation problem: min_{order(Ĝ) ≤ r} ‖G − Ĝ‖.
Application Areas

- Feedback control: controllers designed by LQR/LQG, H₂, H∞ methods are LTI systems of order n, but technological implementation needs order ~10.
- Optimization / open-loop control: time-discretization of already large-scale systems leads to a huge number of equality constraints in the mathematical program.
- Microelectronics: verification of VLSI/ULSI chip design requires a high number of simulations for different input signals; various effects due to progressive miniaturization lead to large-scale systems of differential(-algebraic) equations (order ~10^8).
- MEMS/microsystem design: smart system integration needs compact models for efficient coupled simulation.
Goals

- Automatic generation of compact models.
- Satisfy a desired error tolerance for all admissible input signals, i.e., want
    ‖y − ŷ‖ < tolerance · ‖u‖  for all u ∈ L₂(R, R^m).
  ⇒ need a computable error bound/estimate!
- Preserve physical properties:
  - stability (poles of G in C⁻, i.e., Λ(A) ⊂ C⁻),
  - minimum phase (zeros of G in C⁻),
  - passivity: ∫_{−∞}^{t} u(τ)^T y(τ) dτ ≥ 0 for all t ∈ R, u ∈ L₂(R, R^m)
    ("the system does not generate energy").
The Basic Ideas

Idea: a system Σ, realized by (A, B, C, D), is called balanced if the solutions P, Q of the Lyapunov equations
  AP + PA^T + BB^T = 0,   A^T Q + QA + C^T C = 0
satisfy P = Q = diag(σ₁, …, σ_n) with σ₁ ≥ σ₂ ≥ … ≥ σ_n > 0.
{σ₁, …, σ_n} are the Hankel singular values (HSVs) of Σ.

Compute a balanced realization of the system via a state-space transformation T:
  (A, B, C, D) ↦ (TAT^{-1}, TB, CT^{-1}, D) = ( [A₁₁ A₁₂; A₂₁ A₂₂], [B₁; B₂], [C₁ C₂], D ).
Truncation: (Â, B̂, Ĉ, D̂) = (A₁₁, B₁, C₁, D).
The Basic Ideas

Motivation: the HSVs are system invariants: they are preserved under T and determine the energy transfer given by the Hankel map
  H: L₂(−∞, 0) → L₂(0, ∞): u⁻ ↦ y⁺.
In balanced coordinates, the energy transfer from u⁻ to y⁺ is
  E := sup_{u ∈ L₂(−∞,0], x(0)=x⁰} ( ∫₀^∞ y(t)^T y(t) dt ) / ( ∫_{−∞}^0 u(t)^T u(t) dt )
     = (1/‖x⁰‖²) Σ_{j=1}^n σ_j² (x⁰_j)².
⇒ Truncate states corresponding to small HSVs: a complete analogy to best approximation via the SVD!
The Basic Ideas

Implementation: SR method
1. Compute Cholesky factors of the solutions of the Lyapunov equations, P = S^T S, Q = R^T R.
2. Compute the SVD (thanks, Gene!)
     SR^T = [U₁ U₂] [Σ₁ 0; 0 Σ₂] [V₁^T; V₂^T].
3. Set W = R^T V₁ Σ₁^{-1/2}, V = S^T U₁ Σ₁^{-1/2}.
4. The reduced model is (W^T A V, W^T B, C V, D).
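The four SR steps can be sketched for a small dense, stable system; this is a minimal illustration (not a robust implementation), assuming SciPy's dense Lyapunov solver, and it is only viable at small n, which is precisely the O(n³) bottleneck discussed later:

```python
import numpy as np
from scipy import linalg

def balanced_truncation(A, B, C, D, r):
    """Square-root (SR) balanced truncation of a stable LTI system (A, B, C, D)."""
    # Step 1: Gramians from A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0,
    # with Cholesky factors P = S^T S, Q = R^T R (S, R upper triangular).
    P = linalg.solve_continuous_lyapunov(A, -B @ B.T)
    Q = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)
    S = linalg.cholesky(P)
    R = linalg.cholesky(Q)
    # Step 2: SVD of S R^T; its singular values are the Hankel singular values.
    U, hsv, Vt = linalg.svd(S @ R.T)
    U1, V1, s1 = U[:, :r], Vt[:r, :].T, hsv[:r]
    # Step 3: projection matrices W = R^T V1 S1^{-1/2}, V = S^T U1 S1^{-1/2}.
    W = R.T @ V1 / np.sqrt(s1)
    V = S.T @ U1 / np.sqrt(s1)
    # Step 4: reduced-order model (W^T A V, W^T B, C V, D).
    return W.T @ A @ V, W.T @ B, C @ V, D, hsv
```

The returned HSVs give the a priori error bound ‖G − G_r‖_∞ ≤ 2 Σ_{j>r} σ_j quoted on the properties slide.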
Balancing-Related Model Reduction

Basic principle: given positive semidefinite matrices P = S^T S, Q = R^T R, compute a balancing state-space transformation so that
  P = Q = diag(σ₁, …, σ_n) = Σ,  σ₁ ≥ … ≥ σ_n ≥ 0,
and truncate the corresponding realization at size r with σ_r > σ_{r+1}.
Classical balanced truncation (BT) [Mullis/Roberts '76, Moore '81]
- P = controllability Gramian, Q = observability Gramian of the system given by (A, B, C, D).
- P, Q solve the dual Lyapunov equations
    AP + PA^T + BB^T = 0,   A^T Q + QA + C^T C = 0.
LQG balanced truncation (LQGBT) [Jonckheere/Silverman '83]
- P/Q = controllability/observability Gramian of the closed-loop system based on the LQG compensator.
- P, Q solve the dual algebraic Riccati equations (AREs)
    0 = AP + PA^T − PC^T CP + BB^T,
    0 = A^T Q + QA − QBB^T Q + C^T C.
Balanced stochastic truncation (BST) [Desai/Pal '84, Green '88]
- P = controllability Gramian of the system given by (A, B, C, D), i.e., solution of the Lyapunov equation AP + PA^T + BB^T = 0.
- Q = observability Gramian of the right spectral factor of the power spectrum of the system given by (A, B, C, D), i.e., solution of the ARE
    Â^T Q + QÂ + QB_W (DD^T)^{-1} B_W^T Q + C^T (DD^T)^{-1} C = 0,
  where Â := A − B_W (DD^T)^{-1} C, B_W := BD^T + PC^T.
Positive-real balanced truncation (PRBT) [Green '88]
- Based on the positive-real equations, related to the positive real (Kalman-Yakubovich-Popov-Anderson) lemma.
- P, Q solve the dual AREs
    0 = ĀP + PĀ^T + PC^T R̄^{-1} CP + B R̄^{-1} B^T,
    0 = Ā^T Q + QĀ + QB R̄^{-1} B^T Q + C^T R̄^{-1} C,
  where R̄ = D + D^T, Ā = A − B R̄^{-1} C.
Other balancing-based methods
- Bounded-real balanced truncation (BRBT), based on the bounded real lemma [Opdenacker/Jonckheere '88].
- H∞ balanced truncation (HinfBT): closed-loop balancing based on the H∞ compensator [Mustafa/Glover '91].
- Both approaches require the solution of dual AREs.
- Frequency-weighted versions of the above approaches.
Balancing-Related Model Reduction: Properties

- Guaranteed preservation of physical properties: stability (all), passivity (PRBT), minimum phase (BST).
- Computable error bounds, e.g.,
    BT:    ‖G − G_r‖_∞ ≤ 2 Σ_{j=r+1}^n σ_j^BT,
    LQGBT: ‖G − G_r‖_∞ ≤ 2 Σ_{j=r+1}^n σ_j^LQG / √(1 + (σ_j^LQG)²),
    BST:   ‖G − G_r‖_∞ ≤ ( Π_{j=r+1}^n (1 + σ_j^BST)/(1 − σ_j^BST) − 1 ) ‖G‖_∞.
- Can be combined with singular perturbation approximation for steady-state performance.
- Computations can be modularized.
Numerical Algorithms

General misconception: complexity O(n³) — this is only true for several implementations (e.g., Matlab, SLICOT)!

Algorithmic ideas from numerical linear algebra (since 1997):
- Instead of the Gramians P, Q, compute S, R ∈ R^{n×k}, k ≪ n, such that P ≈ SS^T, Q ≈ RR^T.
- Compute S, R directly with problem-specific Lyapunov/Riccati solvers of low complexity.
⇒ Need a solver for large-scale matrix equations that computes S, R directly!
Solving Large-Scale Matrix Equations

Large-scale algebraic Lyapunov and Riccati equations: general form, for A, G = G^T, W = W^T ∈ R^{n×n} given and X ∈ R^{n×n} unknown:
  0 = L(X) := A^T X + XA + W,
  0 = R(X) := A^T X + XA − XGX + W.
In large-scale applications, typically
- n = 10³–10⁶ (i.e., 10⁶–10¹² unknowns!),
- A has a sparse representation (A = M^{-1}K for FEM),
- G, W of low rank, with G, W ∈ {BB^T, C^T C}, where B ∈ R^{n×m}, m ≪ n, C ∈ R^{p×n}, p ≪ n.
Standard (eigenproblem-based) O(n³) methods are not applicable!
ADI Method for Lyapunov Equations

For A ∈ R^{n×n} stable and B ∈ R^{n×m} (m ≪ n), consider the Lyapunov equation
  AX + XA^T = −BB^T.
ADI iteration [Wachspress '88]:
  (A + p_k I) X_{k−1/2} = −BB^T − X_{k−1} (A^T − p_k I),
  (A + p_k I) X_k^T = −BB^T − X_{k−1/2}^T (A^T − p_k I),
with parameters p_k ∈ C⁻ and p_{k+1} = p̄_k if p_k ∉ R.
For X₀ = 0 and a proper choice of the p_k: lim_{k→∞} X_k = X, superlinearly.
Re-formulation using X_k = Y_k Y_k^T yields an iteration for Y_k...
Factored ADI Iteration

Lyapunov equation AX + XA^T = −BB^T. Setting X_k = Y_k Y_k^T, some algebraic manipulations yield:

Algorithm [Penzl '97, Li/White '02, B./Li/Penzl '99/'07]
  V₁ ← √(−2 Re(p₁)) (A + p₁ I)^{-1} B,   Y₁ ← V₁
  FOR k = 2, 3, ...
    V_k ← √(Re(p_k)/Re(p_{k−1})) ( V_{k−1} − (p_k + p̄_{k−1})(A + p_k I)^{-1} V_{k−1} )
    Y_k ← rrqr([Y_{k−1}  V_k])   % column compression
At convergence, Y_{kmax} Y_{kmax}^T ≈ X, where range(Y_{kmax}) = range([V₁ ... V_{kmax}]), V_k ∈ C^{n×m}.
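For real shifts the loop above can be sketched as follows; this is a dense toy version (the real method uses sparse solves), it omits the RRQR column compression, and the shift selection is assumed given:

```python
import numpy as np

def lr_adi(A, B, shifts, sweeps=10):
    """Low-rank ADI for A X + X A^T + B B^T = 0 (A stable, real shifts p_k < 0).
    Returns Z with X ~= Z Z^T; columns are accumulated without compression."""
    n = A.shape[0]
    I = np.eye(n)
    cycle = list(shifts) * sweeps          # reuse the shift cycle several times
    p = cycle[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    Z = V.copy()
    for p_prev, p in zip(cycle[:-1], cycle[1:]):
        # V_k = sqrt(p_k/p_{k-1}) ( V_{k-1} - (p_k + p_{k-1}) (A + p_k I)^{-1} V_{k-1} )
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * np.linalg.solve(A + p * I, V))
        Z = np.hstack([Z, V])
    return Z
```

Each step adds only m columns to Z, so the storage stays O(n·k) with k ≪ n, which is exactly what the balancing algorithms need as approximate Gramian factors S, R.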
Newton's Method for AREs
[Kleinman '68, Mehrmann '91, Lancaster/Rodman '95, B./Byers '94/'98, B. '97, Guo/Laub '99]

Consider 0 = R(X) = C^T C + A^T X + XA − XBB^T X.
Fréchet derivative of R(X) at X:
  R'_X : Z ↦ (A − BB^T X)^T Z + Z(A − BB^T X).
Newton-Kantorovich method:
  X_{j+1} = X_j − (R'_{X_j})^{-1} R(X_j),  j = 0, 1, 2, ...

Newton's method (with line search) for AREs:
  FOR j = 0, 1, ...
    1. A_j ← A − BB^T X_j =: A − BK_j.
    2. Solve the Lyapunov equation A_j^T N_j + N_j A_j = −R(X_j).
    3. X_{j+1} ← X_j + t_j N_j.
  END FOR
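The iteration can be sketched densely with SciPy's Lyapunov solver; a toy version without line search (t_j = 1, the classical Kleinman form) and with X₀ = 0, which is stabilizing whenever A itself is stable:

```python
import numpy as np
from scipy import linalg

def newton_kleinman(A, B, C, maxit=50, tol=1e-12):
    """Kleinman-Newton for 0 = C^T C + A^T X + X A - X B B^T X (steps t_j = 1).
    Each iteration solves one Lyapunov equation with the closed-loop matrix A_j."""
    n = A.shape[0]
    X = np.zeros((n, n))                   # stabilizing start if A is stable
    for _ in range(maxit):
        K = B.T @ X                        # feedback gain K_j = B^T X_j
        Aj = A - B @ K                     # closed-loop matrix A_j = A - B K_j
        # Kleinman form of the step: A_j^T X_{j+1} + X_{j+1} A_j = -(C^T C + K_j^T K_j)
        X_new = linalg.solve_continuous_lyapunov(Aj.T, -(C.T @ C + K.T @ K))
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            return X_new
        X = X_new
    return X
```

Substituting X_{j+1} = X_j + N_j into step 2 of the slide gives exactly this one-equation form, which is the rewriting used for the low-rank Newton-ADI variant later.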
Newton's Method for AREs: Properties and Implementation

Convergence for K₀ stabilizing:
- A_j = A − BK_j = A − BB^T X_j is stable for all j ≥ 0.
- lim_{j→∞} ‖R(X_j)‖_F = 0 (monotonically).
- lim_{j→∞} X_j = X* ≥ 0 (locally quadratic).

Need a large-scale Lyapunov solver; here, the ADI iteration:
- linear systems with dense, but sparse-plus-low-rank coefficient matrix A_j = A − BK_j (A sparse, B of width m, K_j of height m, m ≪ n);
⇒ efficient inversion using the Sherman-Morrison-Woodbury formula:
  (A − BK_j)^{-1} = (I_n + A^{-1}B(I_m − K_j A^{-1}B)^{-1} K_j) A^{-1}.
BUT: X = X^T ∈ R^{n×n} ⇒ n(n+1)/2 unknowns!
Low-Rank Newton-ADI for AREs

Rewrite Newton's method for AREs:
  A_j^T X_{j+1} + X_{j+1} A_j = −C^T C − X_j BB^T X_j =: −W_j W_j^T.
Set X_j = Z_j Z_j^T for rank(Z_j) ≪ n:
  A_j^T (Z_{j+1} Z_{j+1}^T) + (Z_{j+1} Z_{j+1}^T) A_j = −W_j W_j^T.

Factored Newton iteration [B./Li/Penzl '99/'07]: solve the Lyapunov equations for Z_{j+1} directly by the factored ADI iteration and use the sparse + low-rank structure of A_j.
Solvers of Complexity < O(n³)

- Parallelization: efficient parallel algorithms based on the matrix sign function; complexity O(n³/q) on a q-processor machine; library PLiCMR (B./Quintana-Ortí/Quintana-Ortí, since 1999).
- Formatted arithmetic: for special problems from PDE control, use an implementation based on hierarchical matrices and the matrix sign function method (Baur/B.); complexity O(n log²(n) r²).
Solvers of Complexity < O(n³)

- Sparse solvers: sparse implementation using a sparse Lyapunov solver (ADI + MUMPS/SuperLU); complexity O(n(k² + r²)); Matlab toolbox LyaPack (Penzl '99), library SpaRed with WebComputing interface (Badía/B./Quintana-Ortí/Quintana-Ortí, since '03).
Systems with a Large Number of Terminals

Efficient BT implementations are based on the assumption n ≫ m, p. For on-chip clock distribution networks, power grids, wide buses, this assumption is not justified; here, e.g., m = p ≈ n/2, …, n/4.

Cure: BT can easily be combined with SVDMOR [Feldmann/Liu '04]: for G(s) = C(sE − A)^{-1}B, let
  G(s₀) = C(s₀E − A)^{-1}B = [U₁ U₂] [Σ₁ 0; 0 Σ₂] [V₁^T; V₂^T] ≈ U₁ Σ₁ V₁^T  (rank-k approximation),
so that ‖G(s₀) − U₁ Σ₁ V₁^T‖₂ = σ_{k+1}. Now define B̃ := BV₁, C̃ := U₁^T C; then
  G(s) ≈ U₁ ( C̃(sE − A)^{-1}B̃ ) V₁^T =: U₁ G̃(s) V₁^T.
Now apply BT to G̃(s) ≈ Ĝ(s); then G(s) ≈ U₁ Ĝ(s) V₁^T.

Extensions: multi-point truncated SVD, error bounds (Master's thesis A. Schneider, 2007).
Alternative for medium-size m: superposition of reduced-order SIMO models using Padé-type approximation [Feng/B./Rudnyi '07].
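The terminal-compression step can be sketched as follows (the subsequent BT step on the compressed k-terminal system is omitted; the function and variable names are illustrative):

```python
import numpy as np

def svdmor_compress(A, E, B, C, s0, k):
    """SVDMOR-style terminal compression: take a rank-k SVD of
    G(s0) = C (s0 E - A)^{-1} B and fold the leading singular vectors into the
    input/output maps, leaving a system with only k virtual terminals."""
    G0 = C @ np.linalg.solve(s0 * E - A, B)
    U, sv, Vt = np.linalg.svd(G0)
    U1, V1 = U[:, :k], Vt[:k, :].T
    Bt = B @ V1            # compressed input map,  n x k
    Ct = U1.T @ C          # compressed output map, k x n
    # G(s) ~= U1 @ ( Ct (s E - A)^{-1} Bt ) @ V1.T; apply BT to (E, A, Bt, Ct).
    return U1, Bt, Ct, V1, sv
```

At the expansion point s₀ the compression error in the spectral norm is exactly the first discarded singular value σ_{k+1}, matching the bound on the slide.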
Optimal Control: Cooling of Steel Profiles

Mathematical model: boundary control for the linearized 2D heat equation,
  c·ρ ∂x/∂t = λ Δx,  ξ ∈ Ω,
  λ ∂x/∂n = κ(u_k − x),  ξ ∈ Γ_k, 1 ≤ k < 7,
  ∂x/∂n = 0,  ξ ∈ Γ₇,
with m = 7 inputs, p = 6 outputs. [Figure: FEM mesh of the steel profile cross-section]
FEM discretization, different models: initial mesh (n = 371); 1, 2, 3, 4 steps of mesh refinement give n = 1357, 5177, 20209, 79841.

Source: physical model courtesy of Mannesmann/Demag; math. model: Tröltzsch/Unger 1999/2001, Penzl 1999, Saak 2003.
Optimal Control: Cooling of Steel Profiles

[Figure: absolute error, n = 1357] BT model computed with the sign function method; MT (modal truncation) without static condensation, same order as the BT model.
[Figure: absolute error, n = 79841] BT model computed using SpaRed; computation time: 8 min.
MEMS: Microthruster

- Co-integration of solid fuel with a silicon micromachined system. Goal: ignition of solid fuel cells by an electric impulse. Application: nano satellites.
- Thermo-dynamical model, ignition via heating an electric resistance by applying a voltage source.
- Design problem: reach the ignition temperature of a fuel cell without firing neighboring cells.
- Spatial FEM discretization of the thermo-dynamical model ⇒ linear system, m = 1, p = 7.

Source: The Oberwolfach Benchmark Collection, http://www.imtek.de/simulation/benchmark
Courtesy of C. Rossi, LAAS-CNRS/EU project Micropyros.
MEMS: Microthruster
Axial-symmetric 2D model. FEM discretization using linear (quadratic) elements → n = 4,257 (11,445), m = 1, p = 7. Reduced models computed using SpaRed, modal truncation (MT) using ARPACK, and Z. Bai's PVL implementation.
[Plots: relative error for n = 4,257 and n = 11,445; frequency responses BT vs. PVL and BT vs. MT.]
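PVL belongs to the Krylov/moment-matching family compared against BT here. A hedged one-sided Arnoldi sketch of the idea (PVL itself uses two-sided Lanczos; this simpler Galerkin variant still matches the first r moments of G(s) = c^T(sI − A)^{-1}b about an expansion point s_0):

```python
import numpy as np

def krylov_reduce(A, b, c, r, s0=0.0):
    """Project onto K_r((A - s0 I)^{-1}, (A - s0 I)^{-1} b) with an orthonormal
    basis V; the reduced model (V^T A V, V^T b, V^T c) matches the first r
    moments of the transfer function about s0."""
    n = A.shape[0]
    M = np.linalg.inv(A - s0 * np.eye(n))  # dense for clarity; sparse solves in practice
    V = np.zeros((n, r))
    v = M @ b
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, r):
        w = M @ V[:, j - 1]
        for _ in range(2):                 # Gram-Schmidt with reorthogonalization
            w = w - V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ b, V.T @ c, V
```

Unlike BT, this costs only r (shifted) linear solves with A, but it gives no global error bound; that is the trade-off visible in the BT/PVL comparison plots.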
MEMS: Microgyroscope (Butterfly Gyro)
Vibrating micro-mechanical gyroscope for inertial navigation; rotational position sensor. By applying an AC voltage to the electrodes, the wings are forced to vibrate in anti-phase in the wafer plane. Coriolis forces induce a motion of the wings out of the wafer plane, yielding the sensor data. Source: The Oberwolfach Benchmark Collection, http://www.imtek.de/simulation/benchmark. Courtesy of D. Billger (Imego Institute, Göteborg), Saab Bofors Dynamics AB.
MEMS: Butterfly Gyro
FEM discretization of the structural dynamics model using quadratic tetrahedral elements (ANSYS SOLID187) → n = 34,722, m = 1, p = 12. Reduced model computed using SpaRed, r = 30.
[Plots: frequency response analysis; Hankel singular values.]
Robust Control: Heat Flow in Copper
Boundary control problem for 2D heat flow in copper on a rectangular domain; the control acts on two sides via Robin BCs. FDM → n = 4496, m = 2; 4 sensor locations → p = 4. The numerical ranks of the BT Gramians are 68 and 124, respectively; for LQG BT both have rank 210. Computed reduced-order model: r = 10. Source: COMPleib v1.1, www.compleib.de.
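The "numerical rank of a Gramian" quoted above can be made concrete with a small dense sketch (the Lyapunov equation is solved naively via a Kronecker linear system, which is only feasible for tiny n; large-scale codes use the sign function or ADI methods instead):

```python
import numpy as np

def gramian_numrank(A, B, tol=1e-10):
    """Solve A*P + P*A^T + B*B^T = 0 for the controllability Gramian P and
    return its numerical rank relative to the largest singular value."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))  # vec(A P + P A^T) = K vec(P)
    p = np.linalg.solve(K, -(B @ B.T).flatten(order='F'))
    P = p.reshape((n, n), order='F')
    s = np.linalg.svd(P, compute_uv=False)
    return int(np.sum(s > tol * s[0])), P
```

A rapid decay of the singular values of P (low numerical rank) is exactly what makes the small reduced orders r quoted in these examples possible.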
Model Reduction Software: SLICOT Model and Controller Reduction Toolbox
Stand-alone Matlab toolbox based on the Subroutine Library in Systems and Control Theory (SLICOT), which provides > 400 Fortran 77 routines for systems and control related computations; other Matlab toolboxes: SLICOT Basic Systems and Control Toolbox, SLICOT System Identification Toolbox. Much enhanced functionality compared to Matlab's control toolboxes, in particular coprime factorization methods, frequency-weighted BT methods, and controller reduction. Maintained by NICONET e.V.; distributed by Synoptio GmbH, Berlin. For more information, visit http://www.slicot.org and http://www.synoptio.de.
Model Reduction Software: LyaPack and Others
LyaPack: Matlab toolbox for computations involving large-scale Lyapunov equations with sparse coefficient matrix A and low-rank constant term; contains a BT implementation and the dominant subspace approximation method. Available as additional software (no registration necessary) from http://www.slicot.org. New version (algorithmic improvements, easier to use) in progress.
PLiCMR, SpaRed: parallel implementations of BT and some balancing-related methods for dense and sparse linear systems.
MorLAB: Model reduction LABoratory, contains Matlab functions for balancing-related methods; available from http://www.tu-chemnitz.de/~benner/software.php.
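A hedged sketch of the low-rank ADI iteration at the core of LyaPack (real negative shifts only; the shift-selection heuristics, complex-shift handling, and sparse solves of the actual package are omitted):

```python
import numpy as np

def lr_adi(A, B, shifts):
    """Low-rank ADI (sketch) for A X + X A^T + B B^T = 0 with A stable and
    real shifts p_i < 0; returns Z such that X is approximated by Z Z^T."""
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    cols = [V]
    for q in shifts[1:]:
        # update V with one shifted solve, then append m new columns to Z
        V = np.sqrt(q / p) * (V - (q + p) * np.linalg.solve(A + q * I, V))
        cols.append(V)
        p = q
    return np.hstack(cols)
```

Each iteration costs one shifted (sparse) solve and adds only m columns to Z, which is what makes the method feasible for large sparse A with a low-rank right-hand side.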
References
1 G. Obinata and B.D.O. Anderson. Model Reduction for Control System Design. Springer-Verlag, London, UK, 2001.
2 Z. Bai. Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math., 43(1-2):9-44, 2002.
3 R. Freund. Model reduction methods based on Krylov subspaces. Acta Numerica, 12:267-319, 2003.
4 P. Benner, E.S. Quintana-Ortí, and G. Quintana-Ortí. State-space truncation methods for parallel model reduction of large-scale systems. Parallel Comput., 29:1701-1722, 2003.
5 P. Benner. Numerical linear algebra for model reduction in control and simulation. GAMM Mitt., 29(2):275-296, 2006.
6 P. Benner, V. Mehrmann, and D. Sorensen (editors). Dimension Reduction of Large-Scale Systems. Lecture Notes in Computational Science and Engineering, Vol. 45, Springer-Verlag, Berlin/Heidelberg, Germany, 2005.
7 A.C. Antoulas. Approximation of Large-Scale Dynamical Systems. SIAM Publications, Philadelphia, PA, 2005.
8 P. Benner, R. Freund, D. Sorensen, and A. Varga (editors). Special issue on Order Reduction of Large-Scale Systems. Linear Algebra Appl., Vol. 415, June 2006.