Model Reduction for Linear Dynamical Systems


Summer School on Numerical Linear Algebra for Dynamical and High-Dimensional Problems, Trogir, October 10-15, 2011. Model Reduction for Linear Dynamical Systems. Peter Benner, Max Planck Institute for Dynamics of Complex Technical Systems, Computational Methods in Systems and Control Theory, Magdeburg, Germany.

Outline
1 Introduction: Application Areas; Motivation; Model Reduction for Dynamical Systems; Qualitative and Quantitative Study of the Approximation Error
2 Model Reduction by Projection: Projection Basics; Modal Truncation
3 Balanced Truncation: The Basic Method; ADI Methods for Lyapunov Equations; Factored Galerkin-ADI Iteration; Balancing-Related Model Reduction
4 Interpolatory Model Reduction: Padé Approximation, Short Introduction; H_2-Optimal Model Reduction
5 Numerical Comparison of MOR Approaches: Microthruster
6 Final Remarks

Introduction: Model Reduction, Abstract Definition
Problem: Given a physical problem with dynamics described by the states x ∈ R^n, where n is the dimension of the state space. Because of redundancies, complexity, etc., we want to describe the dynamics of the system using a reduced number of states. This is the task of model reduction (also: dimension reduction, order reduction).

Application Areas: (Optimal) Control, Feedback Controllers
A feedback controller (dynamic compensator) is a linear system of order N, where input = output of the plant and output = input of the plant. Modern (LQG-/H_2-/H_∞-) control design: N ≥ n. Practical controllers require small N (N ≈ 10, say) due to real-time constraints and the increasing fragility of larger N. ⟹ reduce the order of the plant (n) and/or the controller (N). Standard MOR techniques in systems and control: balanced truncation and related methods.

Application Areas: Micro Electronics / Circuit Simulation
Progressive miniaturization: Moore's Law states that the number of on-chip transistors doubles every 12 (now: 18) months. Verification of VLSI/ULSI chip design requires a high number of simulations for different input signals. Increasing packing density requires modeling of the interconnect to ensure that thermal/electromagnetic effects do not disturb signal transmission.
Linear systems in micro electronics arise through modified nodal analysis (MNA) for RLC networks, e.g., when decoupling large linear subcircuits, modeling transmission lines (interconnect, power grid) and parasitic effects, modeling pin packages in VLSI chips, or modeling circuit elements described by Maxwell's equations using partial element equivalent circuits (PEEC).
Standard MOR techniques in circuit simulation: Krylov subspace / Padé approximation / rational interpolation methods.

Application Areas: Structural Mechanics / Finite Element Modeling
Resolving complex 3D geometries leads to millions of degrees of freedom. Analysis of elastic deformations requires many simulation runs for varying external forces. Standard MOR techniques in structural mechanics: modal truncation, combined with Guyan reduction (static condensation): the Craig-Bampton method.

Motivation: Image Compression by Truncated SVD
A digital image with n_x × n_y pixels can be represented as a matrix X ∈ R^{n_x × n_y}, where x_ij contains the color information of pixel (i, j). Memory: 4 n_x n_y bytes.
Theorem (Schmidt-Mirsky/Eckart-Young): The best rank-r approximation to X ∈ R^{n_x × n_y} w.r.t. the spectral norm is X̂ = Σ_{j=1}^r σ_j u_j v_j^T, where X = UΣV^T is the singular value decomposition (SVD) of X. The approximation error is ‖X − X̂‖_2 = σ_{r+1}.
Idea for dimension reduction: instead of X, save u_1, ..., u_r and σ_1 v_1, ..., σ_r v_r ⟹ memory = 4r(n_x + n_y) bytes.
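As an illustration (my sketch, not part of the slides), a minimal NumPy example of rank-r compression via truncated SVD; the "image" is a synthetic matrix standing in for real pixel data:

```python
import numpy as np

# Synthetic "image": smooth, nearly low-rank data plus mild noise (stand-in for real pixel data).
nx, ny, r = 200, 320, 20
xg, yg = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx))
X = np.sin(4 * xg) * np.cos(3 * yg) + 0.01 * np.random.default_rng(0).standard_normal((nx, ny))

# Truncated SVD: keep the r dominant singular triplets.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Schmidt-Mirsky/Eckart-Young: spectral-norm error equals sigma_{r+1}.
print("||X - Xr||_2 =", np.linalg.norm(X - Xr, 2), "  sigma_{r+1} =", s[r])
# Storage: 4*nx*ny bytes for X vs. 4*r*(nx+ny) bytes for the factors (single precision).
print("storage ratio:", r * (nx + ny) / (nx * ny))
```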

Example: Image Compression by Truncated SVD
Example: Clown. Original image: 256 kb. Rank r = 50 approximation: 104 kb. Rank r = 20 approximation: 42 kb.

Dimension Reduction via SVD
Example: Gatlinburg. Organizing committee of the Gatlinburg/Householder Meeting 1964: James H. Wilkinson, Wallace Givens, George Forsythe, Alston Householder, Peter Henrici, Fritz L. Bauer. Original image: 1229 kb. Rank r = 100 approximation: 448 kb. Rank r = 50 approximation: 224 kb.

Background: Singular Value Decay
Image data compression via SVD works if the singular values decay (exponentially). [Figure: singular values of the image data matrices.]

Model Reduction for Dynamical Systems
Dynamical systems
Σ : ẋ(t) = f(t, x(t), u(t)), x(t_0) = x_0, y(t) = g(t, x(t), u(t)),
with states x(t) ∈ R^n, inputs u(t) ∈ R^m, outputs y(t) ∈ R^p.

Model Reduction for Dynamical Systems
Original system Σ : ẋ(t) = f(t, x(t), u(t)), y(t) = g(t, x(t), u(t)), with states x(t) ∈ R^n, inputs u(t) ∈ R^m, outputs y(t) ∈ R^p.
Reduced-order system Σ̂ : d/dt x̂(t) = f̂(t, x̂(t), u(t)), ŷ(t) = ĝ(t, x̂(t), u(t)), with states x̂(t) ∈ R^r, r ≪ n, inputs u(t) ∈ R^m, outputs ŷ(t) ∈ R^p.
Goal: ‖y − ŷ‖ < tolerance · ‖u‖ for all admissible input signals.
Secondary goal: reconstruct an approximation of x from x̂.

Model Reduction for Linear Systems
Linear, time-invariant (LTI) systems
ẋ = f(t, x, u) = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},
y = g(t, x, u) = Cx + Du,  C ∈ R^{p×n}, D ∈ R^{p×m}.
Assumptions (for now): t_0 = 0, x_0 = x(0) = 0, D = 0.
State-space description of the I/O-relation: variation of constants gives
S : u ↦ y,  y(t) = ∫_{−∞}^{t} C e^{A(t−τ)} B u(τ) dτ  for all t ∈ R.
S : U → Y is a linear operator between (function) spaces. Recall: A ∈ R^{n×m} is a linear operator, A : R^m → R^n. Basic idea: use an SVD approximation as for the matrix A! Problem: in general, S does not have a discrete SVD and can therefore not be approximated as in the matrix case!

Model Reduction for Linear Systems
Linear, time-invariant (LTI) systems
ẋ = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},  y = Cx,  C ∈ R^{p×n}.
Alternative to the state-space operator: the Hankel operator. Instead of S : u ↦ y, y(t) = ∫_{−∞}^{t} C e^{A(t−τ)} B u(τ) dτ for all t ∈ R, use the Hankel operator
H : u ↦ y_+,  y_+(t) = ∫_{−∞}^{0} C e^{A(t−τ)} B u(τ) dτ  for all t > 0.
H is compact ⟹ H has a discrete SVD ⟹ Hankel singular values {σ_j}_{j=1}^∞ with σ_1 ≥ σ_2 ≥ ... ⟹ SVD-type approximation of H is possible!
The best approximation problem w.r.t. the 2-induced operator norm is well posed; solution: Adamjan-Arov-Krein (AAK theory, 1971/78). But: computationally infeasible for large-scale systems.

Linear Systems in Frequency Domain
Linear, time-invariant (LTI) systems
Σ : ẋ = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},  y = Cx + Du,  C ∈ R^{p×n}, D ∈ R^{p×m}.
Assumptions: t_0 = 0, x_0 = x(0) = 0.
Laplace transform / frequency domain: application of the Laplace transform L : x(t) ↦ x(s) = ∫_0^∞ e^{−st} x(t) dt, with s ∈ C, leads to a linear system of equations (ẋ(t) ↦ s x(s)):
s x(s) = A x(s) + B u(s),  y(s) = C x(s) + D u(s).
This yields the I/O-relation in the frequency domain:
y(s) = ( C(sI_n − A)^{−1} B + D ) u(s) =: G(s) u(s).
G is the transfer function of Σ, mapping L_2^m → L_2^p.
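As a sketch (mine, not from the slides), G(jω) = C(jωI − A)^{−1}B + D can be evaluated pointwise by solving one linear system per frequency; the small random system below is only a placeholder:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a single complex point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Placeholder stable LTI system (n = 6, m = p = 2), shifted so that Re(lambda) <= -1.
rng = np.random.default_rng(1)
n, m, p = 6, 2, 2
A = rng.standard_normal((n, n))
A = A - (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)
B, C, D = rng.standard_normal((n, m)), rng.standard_normal((p, n)), np.zeros((p, m))

omega = np.logspace(-2, 3, 200)                                  # frequency grid
G_samples = np.array([transfer_function(A, B, C, D, 1j * w) for w in omega])
print(G_samples.shape)                                           # (200, p, m)
```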

Model Reduction as Approximation Problem
Approximate the dynamical system
ẋ = Ax + Bu,  A ∈ R^{n×n}, B ∈ R^{n×m},  y = Cx + Du,  C ∈ R^{p×n}, D ∈ R^{p×m},
by a reduced-order system of order r ≪ n,
d/dt x̂ = Â x̂ + B̂ u,  Â ∈ R^{r×r}, B̂ ∈ R^{r×m},  ŷ = Ĉ x̂ + D̂ u,  Ĉ ∈ R^{p×r}, D̂ ∈ R^{p×m},
such that
‖y − ŷ‖ = ‖Gu − Ĝu‖ ≤ ‖G − Ĝ‖ · ‖u‖ ≤ tolerance · ‖u‖.
⟹ Approximation problem: min_{order(Ĝ) ≤ r} ‖G − Ĝ‖.

Qualitative and Quantitative Study of the Approximation Error: System Norms
Consider the transfer function G(s) = C(sI − A)^{−1} B + D and input functions u ∈ L_2^m := L_2^m(−∞, ∞), with the 2-norm ‖u‖_2^2 := (1/2π) ∫_{−∞}^{∞} u*(jω) u(jω) dω.
Assume A is (asymptotically) stable: Λ(A) ⊂ C^− := {z ∈ C : Re(z) < 0}. Then ‖G(s)‖ ≤ M for all s ∈ C^+ ∪ jR, and
∫ y*(jω) y(jω) dω = ∫ u*(jω) G*(jω) G(jω) u(jω) dω = ∫ ‖G(jω) u(jω)‖^2 dω ≤ M^2 ∫ ‖u(jω)‖^2 dω = M^2 ∫ u*(jω) u(jω) dω < ∞.
(Here, ‖·‖ denotes the Euclidean vector or spectral matrix norm.)
⟹ y ∈ L_2^p(−∞, ∞) =: L_2^p.
Consequently, the 2-induced operator norm ‖G‖ := sup_{‖u‖_2 ≠ 0} ‖Gu‖_2 / ‖u‖_2 is well defined. It can be shown that ‖G‖ = sup_{ω ∈ R} ‖G(jω)‖ = sup_{ω ∈ R} σ_max(G(jω)).
Sketch of proof: ‖G(jω) u(jω)‖ ≤ ‖G(jω)‖ ‖u(jω)‖; construct u with ‖Gu‖_2 = sup_{ω ∈ R} ‖G(jω)‖ ‖u‖_2.

Qualitative and Quantitative Study of the Approximation Error: System Norms
Consider the transfer function G(s) = C(sI − A)^{−1} B + D.
Hardy space H_∞: function space of matrix-/scalar-valued functions that are analytic and bounded in C^+. The H_∞-norm is
‖F‖_∞ := sup_{Re s > 0} σ_max(F(s)) = sup_{ω ∈ R} σ_max(F(jω)).
Stable transfer functions are in the Hardy spaces H_∞ in the SISO case (single-input, single-output, m = p = 1) and H_∞^{p×m} in the MIMO case (multi-input, multi-output, m > 1, p > 1).

Qualitative and Quantitative Study of the Approximation Error: System Norms
Consider the transfer function G(s) = C(sI − A)^{−1} B + D.
Paley-Wiener Theorem (Parseval's equation / Plancherel theorem): L_2(−∞, ∞) ≅ L_2(jR), L_2(0, ∞) ≅ H_2. Consequently, 2-norms in the time and frequency domains coincide!
H_∞ approximation error: for the reduced-order model transfer function Ĝ(s) = Ĉ(sI_r − Â)^{−1} B̂ + D̂,
‖y − ŷ‖_2 = ‖Gu − Ĝu‖_2 ≤ ‖G − Ĝ‖_∞ ‖u‖_2.
⟹ compute a reduced-order model such that ‖G − Ĝ‖_∞ < tol!
Note: the error bound holds in the time and frequency domains due to Paley-Wiener!

Qualitative and Quantitative Study of the Approximation Error: System Norms
Consider the transfer function G(s) = C(sI − A)^{−1} B, i.e., D = 0.
Hardy space H_2: function space of matrix-/scalar-valued functions that are analytic in C^+ and bounded w.r.t. the H_2-norm
‖F‖_2 := ( sup_{Re σ > 0} (1/2π) ∫_{−∞}^{∞} ‖F(σ + jω)‖_F^2 dω )^{1/2} = ( (1/2π) ∫_{−∞}^{∞} ‖F(jω)‖_F^2 dω )^{1/2}.
Stable transfer functions are in the Hardy spaces H_2 in the SISO case (single-input, single-output, m = p = 1) and H_2^{p×m} in the MIMO case (multi-input, multi-output, m > 1, p > 1).
H_2 approximation error for the impulse response (u(t) = u_0 δ(t)): for the reduced-order model transfer function Ĝ(s) = Ĉ(sI_r − Â)^{−1} B̂,
‖y − ŷ‖_2 = ‖G u_0 δ − Ĝ u_0 δ‖_2 ≤ ‖G − Ĝ‖_2 ‖u_0‖.
⟹ compute a reduced-order model such that ‖G − Ĝ‖_2 < tol!
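As a side note (a standard fact, sketched by me, not taken from the slides): for a stable system with D = 0, ‖G‖_2^2 = trace(C P C^T) with the controllability Gramian P, which allows evaluating H_2 errors without frequency integration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian: ||G||_2^2 = trace(C P C^T),
    where P solves A P + P A^T + B B^T = 0 (A assumed stable, D = 0)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ P @ C.T))
```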

Qualitative and Quantitative Study of the Approximation Error: Approximation Problems
H_∞-norm (‖G‖_H∞ := sup_ω σ_max(G(jω))): the best approximation problem for given reduced order r is in general open; balanced truncation yields a suboptimal solution with a computable H_∞-norm bound.
H_2-norm: necessary conditions for the best approximation are known; a (local) optimizer is computable with the iterative rational Krylov algorithm (IRKA).
Hankel norm: optimal Hankel norm approximation (AAK theory).

Qualitative and Quantitative Study of the Approximation Error: Computable Error Measures
Evaluating system norms is computationally very (sometimes too) expensive. Other measures:
absolute errors ‖G(jω_j) − Ĝ(jω_j)‖_2 and ‖G(jω_j) − Ĝ(jω_j)‖ (j = 1, ..., N_ω);
relative errors ‖G(jω_j) − Ĝ(jω_j)‖_2 / ‖G(jω_j)‖_2 and ‖G(jω_j) − Ĝ(jω_j)‖ / ‖G(jω_j)‖;
"eyeball norm", i.e., look at the frequency response / Bode (magnitude) plot: for a SISO system, a log-log plot of frequency vs. |G(jω)| (or |G(jω) − Ĝ(jω)|) in decibels, where the value in dB is 20 log_10(value). For MIMO systems, a p × m array of plots of the G_ij.
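A sketch (assuming G and Ĝ have been sampled on a frequency grid as in the earlier transfer-function snippet) of these pointwise error measures and the dB magnitude used in a Bode plot:

```python
import numpy as np

def pointwise_errors(G_samples, Gr_samples):
    """Absolute and relative errors in the spectral norm on a frequency grid.
    G_samples, Gr_samples: arrays of shape (N_omega, p, m)."""
    abs_err = np.array([np.linalg.norm(G - Gr, 2) for G, Gr in zip(G_samples, Gr_samples)])
    rel_err = abs_err / np.array([np.linalg.norm(G, 2) for G in G_samples])
    return abs_err, rel_err

def magnitude_db(G_samples):
    """'Eyeball norm': magnitude in decibels, 20*log10(sigma_max(G(jw)))."""
    return 20 * np.log10([np.linalg.norm(G, 2) for G in G_samples])
```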

Model Reduction by Projection: Goals
Automatic generation of compact models.
Satisfy a desired error tolerance for all admissible input signals, i.e., we want ‖y − ŷ‖ < tolerance · ‖u‖ for all u ∈ L_2(R, R^m). ⟹ Need a computable error bound/estimate!
Preserve physical properties: stability (poles of G in C^−), minimum phase (zeros of G in C^−), passivity: ∫_{−∞}^{t} u(τ)^T y(τ) dτ ≥ 0 for all t ∈ R and all u ∈ L_2(R, R^m) ("the system does not generate energy").

Model Reduction by Projection: Linear Algebra Basics
Projector: A projector is a matrix P ∈ R^{n×n} with P^2 = P. Let V = range(P); then P is a projector onto V. Conversely, if {v_1, ..., v_r} is a basis of V and V = [v_1, ..., v_r], then P = V(V^T V)^{−1} V^T is a projector onto V.
Properties:
If P = P^T, then P is an orthogonal projector (aka Galerkin projection), otherwise an oblique projector (aka Petrov-Galerkin projection).
P is the identity operator on V, i.e., Pv = v for all v ∈ V.
I − P is the complementary projector onto ker P.
If V is an A-invariant subspace corresponding to a subset of A's spectrum, then we call P a spectral projector.
Let W ⊂ R^n be another r-dimensional subspace and W = [w_1, ..., w_r] a basis matrix for W; then P = V(W^T V)^{−1} W^T is an oblique projector onto V.
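A tiny sketch (mine) verifying the projector formulas from this slide numerically with random bases:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3
V = rng.standard_normal((n, r))
W = rng.standard_normal((n, r))

P_orth = V @ np.linalg.inv(V.T @ V) @ V.T            # orthogonal projector onto range(V)
P_obl = V @ np.linalg.inv(W.T @ V) @ W.T             # oblique projector onto range(V)

for P in (P_orth, P_obl):
    assert np.allclose(P @ P, P)                     # P^2 = P
    assert np.allclose(P @ V, V)                     # identity on range(V)
```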

Model Reduction by Projection: MOR Methods Based on Projection
Methods: 1. Modal Truncation; 2. Rational Interpolation (Padé approximation and (rational) Krylov subspace methods); 3. Balanced Truncation; 4. many more...
Joint feature of these methods: computation of the reduced-order model (ROM) by projection!

Model Reduction by Projection: MOR Methods Based on Projection
Joint feature of these methods: computation of the reduced-order model (ROM) by projection!
Assume the trajectory x(t; u) is contained in a low-dimensional subspace V. Thus, use a Galerkin or Petrov-Galerkin-type projection of the state space onto V along the complementary subspace W:
x ≈ V W^T x =: x̃,  where range(V) = V, range(W) = W, W^T V = I_r.
Then, with x̂ := W^T x, we obtain x ≈ V x̂, so that ‖x − x̃‖ = ‖x − V x̂‖, and the reduced-order model is
Â := W^T A V,  B̂ := W^T B,  Ĉ := C V,  (D̂ := D).
Important observation: the state-equation residual satisfies d/dt x̃ − A x̃ − B u ⊥ W, since
W^T ( d/dt x̃ − A x̃ − B u ) = W^T V W^T ẋ − W^T A V W^T x − W^T B u = d/dt x̂ − Â x̂ − B̂ u = 0.
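A minimal sketch (mine, assuming bases V and W of the projection subspaces are given) of forming the projected ROM matrices:

```python
import numpy as np

def project_rom(A, B, C, D, V, W):
    """Petrov-Galerkin projection: Ahat = W^T A V, Bhat = W^T B, Chat = C V, Dhat = D.
    W is rescaled so that the biorthogonality condition W^T V = I_r holds."""
    W = W @ np.linalg.inv(V.T @ W)          # now W^T V = I_r
    return W.T @ A @ V, W.T @ B, C @ V, D
```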

Model Reduction by Projection: Projection ⟹ Rational Interpolation
Given the ROM Â = W^T A V, B̂ = W^T B, Ĉ = C V, (D̂ = D), the error transfer function can be written as
G(s) − Ĝ(s) = C(sI_n − A)^{−1} B + D − ( Ĉ(sI_r − Â)^{−1} B̂ + D̂ )
= C [ (sI_n − A)^{−1} − V(sI_r − Â)^{−1} W^T ] B
= C ( I_n − V(sI_r − Â)^{−1} W^T (sI_n − A) ) (sI_n − A)^{−1} B =: C ( I_n − P(s) ) (sI_n − A)^{−1} B.
P(s) := V(sI_r − Â)^{−1} W^T (sI_n − A) is a projector onto V: range(P(s)) ⊆ range(V), all matrices have full rank ⟹ equality, and
P(s)^2 = V(sI_r − Â)^{−1} W^T (sI_n − A) V (sI_r − Â)^{−1} W^T (sI_n − A) = V(sI_r − Â)^{−1} (sI_r − Â) (sI_r − Â)^{−1} W^T (sI_n − A) = P(s),
since W^T (sI_n − A) V = sI_r − Â.
⟹ Given s_* ∈ C \ (Λ(A) ∪ Λ(Â)): if (s_* I_n − A)^{−1} B ∈ V, then (I_n − P(s_*)) (s_* I_n − A)^{−1} B = 0, hence G(s_*) − Ĝ(s_*) = 0, i.e., Ĝ interpolates G in s_*!
Analogously,
G(s) − Ĝ(s) = C(sI_n − A)^{−1} ( I_n − (sI_n − A) V (sI_r − Â)^{−1} W^T ) B =: C(sI_n − A)^{−1} ( I_n − Q(s) ) B,
where Q(s) is again a projector (Q(s)^2 = Q(s)), now associated with W. ⟹ Given s_* ∈ C \ (Λ(A) ∪ Λ(Â)): if (s_* I_n − A)^{−T} C^T ∈ W, then C(s_* I_n − A)^{−1} (I_n − Q(s_*)) = 0, hence G(s_*) − Ĝ(s_*) = 0, i.e., Ĝ interpolates G in s_*!

Model Reduction by Projection: MOR Methods Based on Projection
Theorem [Grimme '97, Villemagne/Skelton '87]: Given the ROM Â = W^T A V, B̂ = W^T B, Ĉ = C V, (D̂ = D), and s_* ∈ C \ (Λ(A) ∪ Λ(Â)), if either
range((s_* I_n − A)^{−1} B) ⊆ range(V),  or  range((s_* I_n − A)^{−T} C^T) ⊆ range(W),
then the interpolation condition G(s_*) = Ĝ(s_*) holds.
Note: extension to Hermite interpolation conditions later!
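A sketch of this theorem in action (my code, real shifts assumed for simplicity): a one-sided projection whose basis spans (s_k I − A)^{−1}B for a few chosen shifts s_k, with W = V, so that the ROM interpolates G at the shifts:

```python
import numpy as np

def interpolatory_rom(A, B, C, D, shifts):
    """Build V from (s_k I - A)^{-1} B for each (real) shift s_k, take W = V (Galerkin),
    so that G(s_k) = Ghat(s_k) by the interpolation theorem above."""
    n = A.shape[0]
    cols = [np.linalg.solve(s * np.eye(n) - A, B) for s in shifts]
    V, _ = np.linalg.qr(np.hstack(cols))     # orthonormal basis of the union of the solves
    W = V                                    # W^T V = I_r automatically
    return W.T @ A @ V, W.T @ B, C @ V, D
    # Check (using transfer_function from the earlier snippet):
    # transfer_function(A, B, C, D, s_k) == transfer_function(Ahat, Bhat, Chat, Dhat, s_k)
```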

Modal Truncation
Basic method: Assume A is diagonalizable, T^{−1} A T = D_A; project the state space onto the A-invariant subspace V = span(t_1, ..., t_r), where the t_k are the eigenvectors corresponding to the dominant modes / eigenvalues of A. Then, with
V = T(:, 1:r) = [t_1, ..., t_r],  W̃ = T^{−T}(:, 1:r),  W = W̃ (V^T W̃)^{−1},
the reduced-order model is
Â := W^T A V = diag{λ_1, ..., λ_r},  B̂ := W^T B,  Ĉ := C V.
Also computable by truncation:
T^{−1} A T = [ Â 0 ; 0 A_2 ],  T^{−1} B = [ B̂ ; B_2 ],  C T = [ Ĉ, C_2 ],  D̂ = D.
Properties:
Simple computation for large-scale systems, using, e.g., Krylov subspace methods (Lanczos, Arnoldi) or the Jacobi-Davidson method.
Error bound: ‖G − Ĝ‖_∞ ≤ ‖C_2‖ ‖B_2‖ / min_{λ ∈ Λ(A_2)} |Re(λ)|.
Proof:
G(s) = C(sI − A)^{−1} B + D = C T T^{−1} (sI − A)^{−1} T T^{−1} B + D = C T (sI − T^{−1} A T)^{−1} T^{−1} B + D
= [ Ĉ, C_2 ] [ (sI_r − Â)^{−1} 0 ; 0 (sI_{n−r} − A_2)^{−1} ] [ B̂ ; B_2 ] + D = Ĝ(s) + C_2 (sI_{n−r} − A_2)^{−1} B_2,
observing that ‖G − Ĝ‖_∞ = sup_{ω ∈ R} σ_max( C_2 (jω I_{n−r} − A_2)^{−1} B_2 ) and
C_2 (jω I_{n−r} − A_2)^{−1} B_2 = C_2 diag( 1/(jω − λ_{r+1}), ..., 1/(jω − λ_n) ) B_2.
Difficulties:
Eigenvalues contain only limited system information.
Dominance measures are difficult to compute ([Litz '79] uses the Jordan canonical form; otherwise merely heuristic criteria, e.g., [Varga '95]; recent improvement: the dominant pole algorithm).
The error bound is not computable for really large-scale problems.
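A sketch (mine; assumes A diagonalizable, and uses the simple heuristic "dominant = eigenvalue closest to the imaginary axis", one of the criteria mentioned above):

```python
import numpy as np

def modal_truncation(A, B, C, D, r):
    """Keep the r modes whose eigenvalues lie closest to the imaginary axis.
    Note: complex conjugate pairs should be kept together to allow a real realization."""
    lam, T = np.linalg.eig(A)                         # A T = T diag(lam)
    idx = np.argsort(np.abs(lam.real))[:r]            # heuristic dominance measure
    V = T[:, idx]                                     # right eigenvectors (columns of T)
    W = np.linalg.inv(T).conj().T[:, idx]             # matching left eigenvectors
    W = W @ np.linalg.inv(V.conj().T @ W)             # enforce W^* V = I_r numerically
    return W.conj().T @ A @ V, W.conj().T @ B, C @ V, D
```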

Modal Truncation: Example
BEAM, a SISO system from the SLICOT Benchmark Collection for Model Reduction, n = 348, m = p = 1, reduced using 13 dominant complex conjugate eigenpairs; the error bound yields ‖G − Ĝ‖_∞ ≤ ... . Bode plots of the transfer functions and the error function: MATLAB demo. Coffee break!

Balanced Truncation
Basic principle: A system Σ, realized by (A, B, C, D), is called balanced if the Gramians, i.e., the solutions P, Q of the Lyapunov equations
A P + P A^T + B B^T = 0,  A^T Q + Q A + C^T C = 0,
satisfy P = Q = diag(σ_1, ..., σ_n) with σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0.
{σ_1, ..., σ_n} = Λ(PQ)^{1/2} are the Hankel singular values (HSVs) of Σ.
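A sketch (mine, using SciPy's dense Lyapunov solver) of computing the Gramians and the Hankel singular values Λ(PQ)^{1/2} for a stable system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Solve A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0,
    then return the HSVs: square roots of the eigenvalues of P Q."""
    P = solve_continuous_lyapunov(A, -B @ B.T)        # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability Gramian
    hsv = np.sqrt(np.abs(np.linalg.eigvals(P @ Q)))   # abs() guards against round-off
    return np.sort(hsv)[::-1], P, Q
```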

Balanced Truncation
A P + P A^T + B B^T = 0,  A^T Q + Q A + C^T C = 0;  Λ(PQ)^{1/2} = {σ_1, ..., σ_n} are the Hankel singular values (HSVs) of Σ.
Proof: Recall the Hankel operator
y(t) = H u(t) = ∫_{−∞}^{0} C e^{A(t−τ)} B u(τ) dτ = C e^{At} ∫_{−∞}^{0} e^{−Aτ} B u(τ) dτ =: C e^{At} z.
The Hankel singular values are the square roots of the eigenvalues of H* H, where
H* y(t) = ∫_{0}^{∞} B^T e^{A^T (τ−t)} C^T y(τ) dτ = B^T e^{−A^T t} ∫_{0}^{∞} e^{A^T τ} C^T y(τ) dτ.
Hence,
H* H u(t) = B^T e^{−A^T t} ∫_{0}^{∞} e^{A^T τ} C^T C e^{Aτ} dτ z = B^T e^{−A^T t} Q z,
with Q the observability Gramian. The eigenvalue equation H* H u = σ^2 u gives u(t) = (1/σ^2) B^T e^{−A^T t} Q z. Recalling z = ∫_{−∞}^{0} e^{−Aτ} B u(τ) dτ, we obtain
z = ∫_{−∞}^{0} e^{−Aτ} B (1/σ^2) B^T e^{−A^T τ} Q z dτ = (1/σ^2) ∫_{0}^{∞} e^{At} B B^T e^{A^T t} dt Q z = (1/σ^2) P Q z,
with P the controllability Gramian, i.e., P Q z = σ^2 z.

Balanced Truncation
Basic principle (continued): Compute a balanced realization of the system via the state-space transformation
T : (A, B, C, D) ↦ (T A T^{−1}, T B, C T^{−1}, D) = ( [ A_11 A_12 ; A_21 A_22 ], [ B_1 ; B_2 ], [ C_1, C_2 ], D ).
Truncation ⟹ (Â, B̂, Ĉ, D̂) := (A_11, B_1, C_1, D).

114 Balanced Truncation
Motivation:
HSVs are system invariants: they are preserved under T: (A, B, C, D) -> (TAT^{-1}, TB, CT^{-1}, D).
In the transformed coordinates, the Gramians satisfy
(TAT^{-1})(TPT^T) + (TPT^T)(TAT^{-1})^T + (TB)(TB)^T = 0,
(TAT^{-1})^T (T^{-T}QT^{-1}) + (T^{-T}QT^{-1})(TAT^{-1}) + (CT^{-1})^T (CT^{-1}) = 0,
and (TPT^T)(T^{-T}QT^{-1}) = TPQT^{-1}, hence Λ(PQ) = Λ((TPT^T)(T^{-T}QT^{-1})).
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 26/52

115 Balanced Truncation
Motivation:
HSVs are system invariants: they are preserved under T: (A, B, C, D) -> (TAT^{-1}, TB, CT^{-1}, D).
HSVs determine the energy transfer given by the Hankel map H: L2(-∞, 0) -> L2(0, ∞): u_- -> y_+.
In balanced coordinates, the energy transfer from u_- to y_+ is
E := sup_{u ∈ L2(-∞,0], x(0)=x_0} ( ∫_0^∞ y(t)^T y(t) dt ) / ( ∫_{-∞}^0 u(t)^T u(t) dt ) = (1/‖x_0‖^2) Σ_{j=1}^n σ_j^2 x_{0,j}^2.
=> Truncate states corresponding to small HSVs — a complete analogy to best approximation via the SVD!
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 26/52

117 Balanced Truncation
Implementation: SR Method
1. Compute (Cholesky) factors of the Gramians, P = S^T S, Q = R^T R.
2. Compute the SVD
   SR^T = [U_1, U_2] [Σ_1 0; 0 Σ_2] [V_1^T; V_2^T].
3. The ROM is (W^T A V, W^T B, C V, D), where
   W = R^T V_1 Σ_1^{-1/2},   V = S^T U_1 Σ_1^{-1/2}.
Note:
V^T W = (Σ_1^{-1/2} U_1^T S)(R^T V_1 Σ_1^{-1/2})
      = Σ_1^{-1/2} U_1^T U Σ V^T V_1 Σ_1^{-1/2}
      = Σ_1^{-1/2} [I_r, 0] [Σ_1 0; 0 Σ_2] [I_r; 0] Σ_1^{-1/2}
      = Σ_1^{-1/2} Σ_1 Σ_1^{-1/2} = I_r,
so VW^T is an oblique projector, hence balanced truncation is a Petrov-Galerkin projection method.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 26/52
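A minimal dense sketch of the SR method in Python/SciPy (illustrative only: the helper name bt_sr is made up, dense Cholesky factors are assumed to exist, and no attempt is made at the low-rank variants discussed later):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def bt_sr(A, B, C, r):
    """Dense square-root (SR) balanced truncation sketch."""
    # Gramian factors P = S^T S, Q = R^T R (Cholesky works here because the
    # dense test Gramians are positive definite; in general low-rank factors are used).
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    S = cholesky(P)                  # upper triangular, P = S^T S
    R = cholesky(Q)                  # upper triangular, Q = R^T R
    U, s, Vt = svd(S @ R.T)          # singular values = Hankel singular values
    Sig = np.diag(s[:r] ** -0.5)
    W = R.T @ Vt[:r].T @ Sig         # W = R^T V_1 Sigma_1^{-1/2}
    V = S.T @ U[:, :r] @ Sig         # V = S^T U_1 Sigma_1^{-1/2}
    return W.T @ A @ V, W.T @ B, C @ V, s   # ROM (Ahat, Bhat, Chat) and the HSVs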

124 Balanced Truncation
Properties:
The reduced-order model is stable with HSVs σ_1, ..., σ_r.
Adaptive choice of r via the computable error bound:
‖y - ŷ‖_2 ≤ ( 2 Σ_{k=r+1}^n σ_k ) ‖u‖_2.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 26/52
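One possible way to pick r adaptively from this bound (a sketch; the helper name choose_order and the tolerance handling are assumptions for the example):

import numpy as np

def choose_order(hsv, tol):
    """Smallest r with 2 * (sigma_{r+1} + ... + sigma_n) <= tol,
    hsv given in decreasing order."""
    hsv = np.asarray(hsv)
    for r in range(len(hsv) + 1):
        if 2.0 * hsv[r:].sum() <= tol:
            return r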

126 Balanced Truncation
Properties:
General misconception: complexity O(n^3) — this is true for several implementations (e.g., MATLAB, SLICOT), but not for the method as such.
New algorithmic ideas from numerical linear algebra:
Instead of the Gramians P, Q, compute S, R ∈ R^{n×k}, k ≪ n, such that P ≈ SS^T, Q ≈ RR^T; compute S, R directly with problem-specific Lyapunov solvers of low complexity.
Sparse Balanced Truncation: sparse implementation using a sparse Lyapunov solver (ADI + MUMPS/SuperLU); complexity O(n(k^2 + r^2)).
Software:
+ MATLAB toolbox LyaPack (Penzl 1999),
+ Software library M.E.S.S. (Matrix Equation Sparse Solvers) in C/MATLAB [B./Saak/Köhler].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 26/52

130 ADI Methods for Lyapunov Equations
Background
Recall the Peaceman-Rachford ADI: consider Au = s, where A ∈ R^{n×n} is spd and s ∈ R^n.
Idea: decompose A = H + V with H, V ∈ R^{n×n} such that systems of the form
(H + pI)v = r,   (V + pI)w = t
can be solved easily/efficiently.
ADI Iteration: if H, V are spd, there exist shifts p_k, k = 1, 2, ..., such that the iteration
u_0 = 0,
(H + p_k I) u_{k-1/2} = (p_k I - V) u_{k-1} + s,
(V + p_k I) u_k       = (p_k I - H) u_{k-1/2} + s
converges to the solution u ∈ R^n of Au = s.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 27/52

132 ADI Methods for Lyapunov Equations
The Lyapunov operator
L: X -> AX + XA^T
can be decomposed into the linear operators
L_H: X -> AX,   L_V: X -> XA^T.
In analogy to the standard ADI method we find the ADI iteration for the Lyapunov equation [Wachspress 88]:
X_0 = 0,
(A + p_k I) X_{k-1/2} = -W - X_{k-1} (A^T - p_k I),
(A + p_k I) X_k^T     = -W - X_{k-1/2}^T (A^T - p_k I).
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 28/52

133 ADI Methods for Lyapunov Equations
Low-Rank ADI
Consider AX + XA^T = -BB^T for stable A and B ∈ R^{n×m} with m ≪ n.
ADI iteration for the Lyapunov equation [Wachspress 95]: for k = 1, ..., k_max,
X_0 = 0,
(A + p_k I) X_{k-1/2} = -BB^T - X_{k-1} (A^T - p_k I),
(A + p_k I) X_k^T     = -BB^T - X_{k-1/2}^T (A^T - p_k I).
Rewrite as a one-step iteration and factorize X_k = Z_k Z_k^T, k = 0, ..., k_max:
Z_0 Z_0^T = 0,
Z_k Z_k^T = -2 p_k (A + p_k I)^{-1} BB^T (A + p_k I)^{-T}
            + (A + p_k I)^{-1} (A - p_k I) Z_{k-1} Z_{k-1}^T (A - p_k I)^T (A + p_k I)^{-T}
=> low-rank Cholesky factor ADI [Penzl 97/00, Li/White 99/02, B./Li/Penzl 99/08, Gugercin/Sorensen/Antoulas 03].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 29/52

136 Balanced Truncation ADI Methods for Lyapunov Equations
Z_k = [ √(-2 p_k) (A + p_k I)^{-1} B,  (A + p_k I)^{-1} (A - p_k I) Z_{k-1} ]   [Penzl 00]
Observing that (A - p_i I) and (A + p_k I)^{-1} commute, we can rewrite Z_{k_max} as
Z_{k_max} = [ z_{k_max}, P_{k_max-1} z_{k_max}, P_{k_max-2}(P_{k_max-1} z_{k_max}), ..., P_1(P_2 ··· P_{k_max-1} z_{k_max}) ],
where
z_{k_max} = √(-2 p_{k_max}) (A + p_{k_max} I)^{-1} B
and
P_i := ( √(2 p_i) / √(2 p_{i+1}) ) [ I - (p_i + p_{i+1})(A + p_i I)^{-1} ].   [Li/White 02]
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 30/52

138 ADI Methods for Lyapunov Equations
Lyapunov equation 0 = AX + XA^T + BB^T.
Algorithm [Penzl 97/00, Li/White 99/02, B. 04, B./Li/Penzl 99/08]:
V_1 <- √(-2 Re(p_1)) (A + p_1 I)^{-1} B,   Z_1 <- V_1
FOR k = 2, 3, ...
    V_k <- √(Re(p_k)/Re(p_{k-1})) ( V_{k-1} - (p_k + conj(p_{k-1})) (A + p_k I)^{-1} V_{k-1} )
    Z_k <- [ Z_{k-1}  V_k ]
    Z_k <- rrlq(Z_k, τ)    (column compression)
At convergence, Z_{k_max} Z_{k_max}^T ≈ X, where (without column compression)
Z_{k_max} = [ V_1 ... V_{k_max} ],   V_k ∈ C^{n×m}.
Note: implementation in real arithmetic is possible by combining two steps [B./Li/Penzl 99/08] or by using a new idea employing the relation of two consecutive complex factors [B./Kürschner/Saak 11].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 31/52
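A compact sketch of this low-rank factored iteration (assumptions made for the example: dense matrices, real negative shifts reused cyclically, no column compression, and the helper name lrcf_adi is made up):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def lrcf_adi(A, B, shifts, kmax):
    """Low-rank Cholesky-factor ADI sketch: returns Z with Z Z^T ~ X,
    where A X + X A^T + B B^T = 0 (A stable, shifts real and negative)."""
    n = A.shape[0]
    p = shifts[0]
    lu = lu_factor(A + p * np.eye(n))
    V = np.sqrt(-2.0 * p) * lu_solve(lu, B)                    # V_1
    Z = V
    for k in range(1, kmax):
        p_old, p = p, shifts[k % len(shifts)]                  # cycle through the shift set
        lu = lu_factor(A + p * np.eye(n))
        # V_k = sqrt(p_k / p_{k-1}) * ( V_{k-1} - (p_k + p_{k-1}) (A + p_k I)^{-1} V_{k-1} )
        V = np.sqrt(p / p_old) * (V - (p + p_old) * lu_solve(lu, V))
        Z = np.hstack([Z, V])
    return Z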

140 Numerical Results for ADI
Optimal Cooling of Steel Profiles
Mathematical model: boundary control for the linearized 2D heat equation,
c·ρ ∂x/∂t = λ Δx,   ξ ∈ Ω,
λ ∂x/∂n = κ(u_k - x),   ξ ∈ Γ_k, 1 ≤ k < 7,
∂x/∂n = 0,   ξ ∈ Γ_7,
with m = 7 inputs and p = 6 outputs.
FEM discretization, different models for the initial mesh (n = 371) and 1, 2, 3, 4 steps of mesh refinement: n = 1357, 5177, 20209, 79841.
Source: Physical model courtesy of Mannesmann/Demag; math. model: Tröltzsch/Unger 1999/2001, Penzl 1999, Saak.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 32/52

141 Numerical Results for ADI
Optimal Cooling of Steel Profiles
Solve the dual (generalized) Lyapunov equations needed for balanced truncation, i.e.,
A P M^T + M P A^T + B B^T = 0,   A^T Q M + M^T Q A + C^T C = 0,
for n = 79,841; shifts chosen by the Penzl heuristic from 50/25 Ritz values of A of largest/smallest magnitude, no column compression performed.
The new version in M.E.S.S. requires no factorization of the mass matrix!
Computations done on a Core2Duo at 2.8 GHz with 3 GB RAM and 32-bit MATLAB.
CPU times: 626 / 356 sec.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 33/52

142 Numerical Results for ADI
Scaling / Mesh Independence
Computations by Martin Köhler '10.
A ∈ R^{n×n}: FDM matrix for the 2D heat equation on [0, 1]^2 (LyaPack benchmark demo_l1, m = 1).
16 shifts chosen by the Penzl heuristic from 50/25 Ritz values of A of largest/smallest magnitude.
Computations using 2 dual-core Intel Xeon 5160 with 16 GB RAM.
[CPU-time table: n vs. M.E.S.S. (C), LyaPack, M.E.S.S. (MATLAB); LyaPack runs out of memory for the larger problems, up to n = 1,000,000.]
Note: for n = 1,000,000, the first sparse LU needs 1,100 sec.; using UMFPACK this reduces to 30 sec.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 34/52

145 Factored Galerkin-ADI Iteration
Lyapunov equation 0 = AX + XA^T + BB^T.
Projection-based methods for Lyapunov equations with A + A^T < 0:
1. Compute an orthonormal basis range(Z), Z ∈ R^{n×r}, for a subspace Z ⊆ R^n, dim Z = r.
2. Set Â := Z^T A Z, B̂ := Z^T B.
3. Solve the small-size Lyapunov equation Â X̂ + X̂ Â^T + B̂ B̂^T = 0.
4. Use X ≈ Z X̂ Z^T.
Examples:
Krylov subspace methods, i.e., for m = 1: Z = K(A, B, r) = span{B, AB, A^2 B, ..., A^{r-1} B} [Saad 90, Jaimoukha/Kasenally 94, Jbilou 02-08].
K-PIK [Simoncini 07]: Z = K(A, B, r) + K(A^{-1}, B, r).
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 35/52
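A sketch of steps 1-4 for a given (assumed) basis matrix Z, using dense SciPy solvers; solvability of the small equation relies on A + A^T < 0 as stated above:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, qr

def galerkin_lyap(A, B, Z):
    """Galerkin projection sketch for A X + X A^T + B B^T = 0:
    project onto range(Z), solve the small equation, lift back."""
    Q, _ = qr(Z, mode='economic')        # orthonormal basis of the subspace
    Ah = Q.T @ A @ Q
    Bh = Q.T @ B
    Xh = solve_continuous_lyapunov(Ah, -Bh @ Bh.T)
    return Q @ Xh @ Q.T                  # X ~ Q Xh Q^T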

147 Factored Galerkin-ADI Iteration
Lyapunov equation 0 = AX + XA^T + BB^T; same projection framework as above.
Examples (continued):
ADI subspace [B./R.-C. Li/Truhar 08]: Z = colspan[ V_1, ..., V_r ].
Note:
1. The ADI subspace is a rational Krylov subspace [J.-R. Li/White 02].
2. Similar approach: ADI-preconditioned global Arnoldi method [Jbilou 08].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 35/52

148 Factored Galerkin-ADI Iteration
Numerical examples for Galerkin-ADI
FEM-semidiscretized control problem for a parabolic PDE: optimal cooling of rail profiles, n = 20,209, m = 7, p = 6.
Good ADI shifts. CPU times: 80 s (projection every 5th ADI step) vs. 94 s (no projection).
Computations by Jens Saak '10.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 36/52

149 Factored Galerkin-ADI Iteration
Numerical examples for Galerkin-ADI
FEM-semidiscretized control problem for a parabolic PDE: optimal cooling of rail profiles, n = 20,209, m = 7, p = 6.
Bad ADI shifts. CPU times: 368 s (projection every 5th ADI step) vs. 1207 s (no projection).
Computations by Jens Saak '10.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 36/52

150 Factored Galerkin-ADI Iteration
Numerical examples for Galerkin-ADI: optimal cooling of rail profiles, n = 79,841.
M.E.S.S. without Galerkin projection and column compression: rank of the solution factors 532 / 426.
M.E.S.S. with Galerkin projection and column compression: rank of the solution factors 269 / 205.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 37/52

151 Balanced Truncation
Numerical example for BT: Optimal Cooling of Steel Profiles
n = 1357, absolute error: BT model computed with the sign function method; MT w/o static condensation, same order as the BT model.
n = 79,841, absolute error: BT model computed using M.E.S.S. in MATLAB, dual-core, computation time < 10 min.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 38/52

153 Balanced Truncation Numerical example for BT: Microgyroscope (Butterfly Gyro) Vibrating micro-mechanical gyroscope for inertial navigation. Rotational position sensor. By applying AC voltage to electrodes, wings are forced to vibrate in anti-phase in wafer plane. Coriolis forces induce motion of wings out of wafer plane yielding sensor data. Source: The Oberwolfach Benchmark Collection Courtesy of D. Billger (Imego Institute, Göteborg), Saab Bofors Dynamics AB. Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 39/52

154 Balanced Truncation
Numerical example for BT: Microgyroscope (Butterfly Gyro)
FEM discretization of the structural dynamics model using quadratic tetrahedral elements (ANSYS-SOLID187): n = 34,722, m = 1, p = 12.
Reduced model computed using SpaRed, r = 30.
Frequency Response Analysis; Hankel Singular Values.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 40/52

157 Balancing-Related Model Reduction
Basic Principle
Given positive semidefinite matrices P = S^T S, Q = R^T R, compute a balancing state-space transformation so that
P = Q = diag(σ_1, ..., σ_n) = Σ,   σ_1 ≥ ... ≥ σ_n ≥ 0,
and truncate the corresponding realization at size r with σ_r > σ_{r+1}.

Classical Balanced Truncation (BT) [Mullis/Roberts 76, Moore 81]
P = controllability Gramian, Q = observability Gramian of the system given by (A, B, C, D); P, Q solve the dual Lyapunov equations
AP + PA^T + BB^T = 0,   A^T Q + QA + C^T C = 0.

LQG Balanced Truncation (LQGBT) [Jonckheere/Silverman 83]
P/Q = controllability/observability Gramian of the closed-loop system based on an LQG compensator; P, Q solve the dual algebraic Riccati equations (AREs)
0 = AP + PA^T - PC^T CP + BB^T,   0 = A^T Q + QA - QBB^T Q + C^T C.

Balanced Stochastic Truncation (BST) [Desai/Pal 84, Green 88]
P = controllability Gramian of the system given by (A, B, C, D), i.e., solution of the Lyapunov equation AP + PA^T + BB^T = 0.
Q = observability Gramian of the right spectral factor of the power spectrum of the system given by (A, B, C, D), i.e., solution of the ARE
Â^T Q + QÂ + QB_W (DD^T)^{-1} B_W^T Q + C^T (DD^T)^{-1} C = 0,
where Â := A - B_W (DD^T)^{-1} C and B_W := BD^T + PC^T.

Positive-Real Balanced Truncation (PRBT) [Green 88]
Based on the positive-real equations, related to the positive real (Kalman-Yakubovich-Popov-Anderson) lemma; P, Q solve the dual AREs
0 = ĀP + PĀ^T + PC^T R^{-1} CP + BR^{-1} B^T,   0 = Ā^T Q + QĀ + QBR^{-1} B^T Q + C^T R^{-1} C,
where R = D + D^T and Ā = A - BR^{-1} C.

Other Balancing-Based Methods
Bounded-real balanced truncation (BRBT), based on the bounded real lemma [Opdenacker/Jonckheere 88];
H∞ balanced truncation (HinfBT), closed-loop balancing based on an H∞ compensator [Mustafa/Glover 91].
Both approaches require the solution of dual AREs.
Frequency-weighted versions of the above approaches.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 41/52
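As an illustration of the LQGBT variant, a sketch of how its Gramians could be obtained with a dense ARE solver (the helper name lqg_gramians is made up; the signs follow the AREs stated above, and balancing/truncation then proceeds exactly as in the SR method, with these P, Q in place of the Lyapunov Gramians):

import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gramians(A, B, C):
    """Solve 0 = A P + P A^T - P C^T C P + B B^T  and
             0 = A^T Q + Q A - Q B B^T Q + C^T C."""
    m, p = B.shape[1], C.shape[0]
    P = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(p))   # filter-type ARE
    Q = solve_continuous_are(A,   B,   C.T @ C, np.eye(m))   # regulator-type ARE
    sigma_lqg = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
    return P, Q, sigma_lqg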

163 Balancing-Related Model Reduction
Properties
Guaranteed preservation of physical properties like stability (all), passivity (PRBT), minimum phase (BST).
Computable error bounds, e.g.,
BT:    ‖G - G_r‖_∞ ≤ 2 Σ_{j=r+1}^n σ_j^{BT},
LQGBT: ‖G - G_r‖_∞ ≤ 2 Σ_{j=r+1}^n σ_j^{LQG} / √(1 + (σ_j^{LQG})^2),
BST:   ‖G - G_r‖_∞ ≤ ( Π_{j=r+1}^n (1 + σ_j^{BST}) / (1 - σ_j^{BST}) - 1 ) ‖G‖_∞.
Can be combined with singular perturbation approximation for steady-state performance.
Computations can be modularized.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 42/52

164 Padé Approximation
Idea: Consider ẋ = Ax + Bu, y = Cx with transfer function G(s) = C(sI_n - A)^{-1} B. For s_0 ∉ Λ(A):
G(s) = C((s_0 I_n - A) + (s - s_0) I_n)^{-1} B
     = C(I_n + (s - s_0)(s_0 I_n - A)^{-1})^{-1} (s_0 I_n - A)^{-1} B
     = m_0 + m_1 (s - s_0) + m_2 (s - s_0)^2 + ...
For s_0 = 0: m_j := -C(A^{-1})^{j+1} B are the moments; for s_0 = ∞: m_j := CA^{j-1} B are the Markov parameters.
As reduced-order model use the rth Padé approximant Ĝ of G:
G(s) = Ĝ(s) + O((s - s_0)^{2r}),   i.e.,   m_j = m̂_j for j = 0, ..., 2r - 1
(moment matching if s_0 < ∞, partial realization if s_0 = ∞).
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 43/52
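A quick numerical sanity check of the moment formula for s_0 = 0 (an illustrative sketch with an assumed random stable SISO system; not part of the original material):

import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # stable, hence invertible
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

G = lambda s: (C @ np.linalg.solve(s * np.eye(n) - A, B)).item()
Ainv = np.linalg.inv(A)

# moments about s0 = 0:  m_j = -C (A^{-1})^{j+1} B
m = [(-C @ np.linalg.matrix_power(Ainv, j + 1) @ B).item() for j in range(6)]
s = 1e-2
print(G(s), sum(mj * s**j for j, mj in enumerate(m)))   # should agree to high accuracy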

169 Padé Approximation
Padé-via-Lanczos Method (PVL)
Moments need not be computed explicitly; moment matching is equivalent to projecting the state space onto
V = span(B̃, ÃB̃, ..., Ã^{r-1} B̃) =: K(Ã, B̃, r),   where Ã = (s_0 I_n - A)^{-1}, B̃ = (s_0 I_n - A)^{-1} B,
along
W = span(C^T, Ã^T C^T, ..., (Ã^T)^{r-1} C^T) =: K(Ã^T, C^T, r).
Computation via the unsymmetric Lanczos method yields the system matrices of the reduced-order model as a by-product.
Remark: Arnoldi (PRIMA) yields only G(s) = Ĝ(s) + O((s - s_0)^r).
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 43/52

172 Padé Approximation
Padé-via-Lanczos Method (PVL)
Difficulties:
Computable error estimates/bounds for ‖y - ŷ‖_2 are often very pessimistic or expensive to evaluate.
Mostly heuristic criteria for the choice of expansion points; an optimal choice is known for second-order systems with proportional/Rayleigh damping [Beattie/Gugercin 05].
Good approximation quality only locally.
Preservation of physical properties only in special cases; usually requires post-processing, which (partially) destroys the moment matching properties.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 43/52

176 Interpolatory Model Reduction
Short Introduction
Computation of the reduced-order model by projection: given an LTI system ẋ = Ax + Bu, y = Cx with transfer function G(s) = C(sI_n - A)^{-1} B, a reduced-order model is obtained using the projection approach with V, W ∈ R^{n×r} and W^T V = I_r by computing
Â = W^T A V,   B̂ = W^T B,   Ĉ = C V.
Petrov-Galerkin-type (two-sided) projection: W ≠ V; Galerkin-type (one-sided) projection: W = V.
Rational Interpolation / Moment Matching: choose V, W such that
G(s_j) = Ĝ(s_j),   j = 1, ..., k,
and
d^i/ds^i G(s_j) = d^i/ds^i Ĝ(s_j),   i = 1, ..., K_j,  j = 1, ..., k.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 44/52

178 Interpolatory Model Reduction
Short Introduction
Theorem (simplified) [Grimme 97, Villemagne/Skelton 87]
If
span{ (s_1 I_n - A)^{-1} B, ..., (s_k I_n - A)^{-1} B } ⊆ Ran(V),
span{ (s_1 I_n - A)^{-T} C^T, ..., (s_k I_n - A)^{-T} C^T } ⊆ Ran(W),
then
G(s_j) = Ĝ(s_j),   d/ds G(s_j) = d/ds Ĝ(s_j),   for j = 1, ..., k.
Remarks:
Using a Galerkin/one-sided projection yields G(s_j) = Ĝ(s_j), but in general d/ds G(s_j) ≠ d/ds Ĝ(s_j).
For k = 1 and standard Krylov subspace(s) of dimension K: moment-matching methods / Padé approximation,
d^i/ds^i G(s_1) = d^i/ds^i Ĝ(s_1),   i = 0, ..., K - 1 (+K).
Computation of V, W from rational Krylov subspaces, e.g., dual rational Arnoldi/Lanczos [Grimme 97], Iterative Rational Krylov Algorithm [Antoulas/Beattie/Gugercin 07].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 44/52
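An illustrative sketch of the theorem for a small SISO example (the test system and interpolation points are assumptions; E = W^T V is kept explicitly so no bi-orthonormalization is needed):

import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
shifts = [0.5, 1.0, 2.0]                      # illustrative interpolation points s_j

I = np.eye(n)
V = np.column_stack([np.linalg.solve(s * I - A, B).ravel() for s in shifts])
W = np.column_stack([np.linalg.solve((s * I - A).T, C.ravel()) for s in shifts])
E = W.T @ V
Ah = np.linalg.solve(E, W.T @ A @ V)
Bh = np.linalg.solve(E, W.T @ B)
Ch = C @ V

G  = lambda s: (C  @ np.linalg.solve(s * np.eye(n) - A, B)).item()
Gh = lambda s: (Ch @ np.linalg.solve(s * np.eye(len(shifts)) - Ah, Bh)).item()
print([abs(G(s) - Gh(s)) for s in shifts])    # ~ machine precision: G(s_j) = Ghat(s_j)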

182 H2-Optimal Model Reduction
Best H2-norm approximation problem: find
arg min_{Ĝ ∈ H2 of order r} ‖G - Ĝ‖_2.
First-order necessary H2-optimality conditions:
For SISO systems
G(-μ_i) = Ĝ(-μ_i),   G'(-μ_i) = Ĝ'(-μ_i),
where the μ_i are the poles of the reduced transfer function Ĝ.
For MIMO systems
G(-μ_i) B̃_i = Ĝ(-μ_i) B̃_i,
C̃_i^T G(-μ_i) = C̃_i^T Ĝ(-μ_i),
C̃_i^T G'(-μ_i) B̃_i = C̃_i^T Ĝ'(-μ_i) B̃_i,   for i = 1, ..., r,
where T^{-1} Â T = diag{μ_1, ..., μ_r} is a spectral decomposition and B̃ = B̂^T T^{-T}, C̃ = Ĉ T
=> tangential interpolation conditions.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 45/52

185 Interpolatory Model Reduction
Interpolation of the Transfer Function by Projection
Construct the reduced transfer function by the Petrov-Galerkin projection P = VW^T, i.e.,
Ĝ(s) = CV (sI - W^T A V)^{-1} W^T B,
where V and W are given as the rational Krylov subspaces
V = [ (-μ_1 I - A)^{-1} B, ..., (-μ_r I - A)^{-1} B ],
W = [ (-μ_1 I - A^T)^{-1} C^T, ..., (-μ_r I - A^T)^{-1} C^T ].
Then
G(-μ_i) = Ĝ(-μ_i)  and  G'(-μ_i) = Ĝ'(-μ_i),   for i = 1, ..., r,
as desired.
=> iterative algorithms (IRKA/MIRIAm) that yield H2-optimal models [Gugercin et al. 06], [Bunse-Gerstner et al. 07], [Van Dooren et al. 08].
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 46/52

189 H2-Optimal Model Reduction
The basic IRKA Algorithm
Algorithm 1 (IRKA)
Input: A stable, B, C; Â stable, B̂, Ĉ; δ > 0.
Output: A_opt, B_opt, C_opt
1: while max_{j=1,...,r} |μ_j - μ_j^{old}| / |μ_j| > δ do
2:   diag{μ_1, ..., μ_r} := T^{-1} Â T  (spectral decomposition),   B̃ = B̂^T T^{-T},   C̃ = Ĉ T
3:   V = [ (-μ_1 I - A)^{-1} B B̃_1, ..., (-μ_r I - A)^{-1} B B̃_r ]
4:   W = [ (-μ_1 I - A^T)^{-1} C^T C̃_1, ..., (-μ_r I - A^T)^{-1} C^T C̃_r ]
5:   V = orth(V),   W = orth(W)
6:   Â = (W^T V)^{-1} W^T A V,   B̂ = (W^T V)^{-1} W^T B,   Ĉ = C V
7: end while
8: A_opt = Â,   B_opt = B̂,   C_opt = Ĉ
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 47/52
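A bare-bones SISO sketch of Algorithm 1 (assumptions made for the example: plain transposes instead of conjugate transposes, no real-arithmetic pairing of complex conjugate shifts, and a crude initialization from the real parts of Λ(A)):

import numpy as np

def irka_siso(A, B, C, r, tol=1e-6, maxit=100):
    """Basic IRKA fixed-point iteration for a SISO system (B one column, C one row)."""
    n = A.shape[0]
    I = np.eye(n)
    mu = -np.sort(np.abs(np.linalg.eigvals(A).real))[:r]      # initial reduced "poles"
    for _ in range(maxit):
        sigma = -mu                                           # interpolation points = mirrored poles
        V = np.column_stack([np.linalg.solve(s * I - A, B).ravel() for s in sigma])
        W = np.column_stack([np.linalg.solve((s * I - A).T, C.ravel()) for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        E = W.T @ V
        Ah = np.linalg.solve(E, W.T @ A @ V)
        mu_new = np.linalg.eigvals(Ah)
        change = np.max(np.abs(np.sort_complex(mu_new) - np.sort_complex(mu))
                        / np.abs(np.sort_complex(mu_new)))
        mu = mu_new
        if change < tol:
            break
    Bh = np.linalg.solve(E, W.T @ B)
    Ch = C @ V
    return Ah, Bh, Ch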

190 Numerical Comparison of MOR Approaches
Microthruster
Co-integration of solid fuel with a silicon micromachined system. Goal: ignition of solid fuel cells by an electric impulse. Application: nano satellites.
Thermo-dynamical model, ignition via heating an electric resistance by applying a voltage source.
Design problem: reach the ignition temperature of the fuel cell without firing neighbouring cells.
Spatial FEM discretization of the thermo-dynamical model => linear system, m = 1, p = 7.
Source: The Oberwolfach Benchmark Collection. Courtesy of C. Rossi, LAAS-CNRS / EU project Micropyros.
Max Planck Institute Magdeburg Peter Benner, MOR for Linear Dynamical Systems 48/52


More information

Model order reduction of electrical circuits with nonlinear elements

Model order reduction of electrical circuits with nonlinear elements Model order reduction of electrical circuits with nonlinear elements Andreas Steinbrecher and Tatjana Stykel 1 Introduction The efficient and robust numerical simulation of electrical circuits plays a

More information

Control Systems I. Lecture 5: Transfer Functions. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich

Control Systems I. Lecture 5: Transfer Functions. Readings: Emilio Frazzoli. Institute for Dynamic Systems and Control D-MAVT ETH Zürich Control Systems I Lecture 5: Transfer Functions Readings: Emilio Frazzoli Institute for Dynamic Systems and Control D-MAVT ETH Zürich October 20, 2017 E. Frazzoli (ETH) Lecture 5: Control Systems I 20/10/2017

More information

Kalman Decomposition B 2. z = T 1 x, where C = ( C. z + u (7) T 1, and. where B = T, and

Kalman Decomposition B 2. z = T 1 x, where C = ( C. z + u (7) T 1, and. where B = T, and Kalman Decomposition Controllable / uncontrollable decomposition Suppose that the controllability matrix C R n n of a system has rank n 1

More information

SJÄLVSTÄNDIGA ARBETEN I MATEMATIK

SJÄLVSTÄNDIGA ARBETEN I MATEMATIK SJÄLVSTÄNDIGA ARBETEN I MATEMATIK MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET Model Reduction for Piezo-Mechanical Systems Using Balanced Truncation av Mohammad Monir Uddin 2011 - No 5 Advisor: Prof.

More information

Model reduction of coupled systems

Model reduction of coupled systems Model reduction of coupled systems Tatjana Stykel Technische Universität Berlin ( joint work with Timo Reis, TU Kaiserslautern ) Model order reduction, coupled problems and optimization Lorentz Center,

More information

Linear and Nonlinear Matrix Equations Arising in Model Reduction

Linear and Nonlinear Matrix Equations Arising in Model Reduction International Conference on Numerical Linear Algebra and its Applications Guwahati, January 15 18, 2013 Linear and Nonlinear Matrix Equations Arising in Model Reduction Peter Benner Tobias Breiten Max

More information

H 2 -optimal model reduction of MIMO systems

H 2 -optimal model reduction of MIMO systems H 2 -optimal model reduction of MIMO systems P. Van Dooren K. A. Gallivan P.-A. Absil Abstract We consider the problem of approximating a p m rational transfer function Hs of high degree by another p m

More information

Control Systems I. Lecture 7: Feedback and the Root Locus method. Readings: Jacopo Tani. Institute for Dynamic Systems and Control D-MAVT ETH Zürich

Control Systems I. Lecture 7: Feedback and the Root Locus method. Readings: Jacopo Tani. Institute for Dynamic Systems and Control D-MAVT ETH Zürich Control Systems I Lecture 7: Feedback and the Root Locus method Readings: Jacopo Tani Institute for Dynamic Systems and Control D-MAVT ETH Zürich November 2, 2018 J. Tani, E. Frazzoli (ETH) Lecture 7:

More information

1 Continuous-time Systems

1 Continuous-time Systems Observability Completely controllable systems can be restructured by means of state feedback to have many desirable properties. But what if the state is not available for feedback? What if only the output

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 5. Input-Output Stability DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs2012/ 2012 1 Input-Output Stability y = Hu H denotes

More information

Introduction to Modern Control MT 2016

Introduction to Modern Control MT 2016 CDT Autonomous and Intelligent Machines & Systems Introduction to Modern Control MT 2016 Alessandro Abate Lecture 2 First-order ordinary differential equations (ODE) Solution of a linear ODE Hints to nonlinear

More information

Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents

Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents Max Planck Institute Magdeburg Preprints Petar Mlinarić Sara Grundel Peter Benner Clustering-Based Model Order Reduction for Multi-Agent Systems with General Linear Time-Invariant Agents MAX PLANCK INSTITUT

More information

EE263: Introduction to Linear Dynamical Systems Review Session 6

EE263: Introduction to Linear Dynamical Systems Review Session 6 EE263: Introduction to Linear Dynamical Systems Review Session 6 Outline diagonalizability eigen decomposition theorem applications (modal forms, asymptotic growth rate) EE263 RS6 1 Diagonalizability consider

More information

Observability and state estimation

Observability and state estimation EE263 Autumn 2015 S Boyd and S Lall Observability and state estimation state estimation discrete-time observability observability controllability duality observers for noiseless case continuous-time observability

More information

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping

Contents lecture 5. Automatic Control III. Summary of lecture 4 (II/II) Summary of lecture 4 (I/II) u y F r. Lecture 5 H 2 and H loop shaping Contents lecture 5 Automatic Control III Lecture 5 H 2 and H loop shaping Thomas Schön Division of Systems and Control Department of Information Technology Uppsala University. Email: thomas.schon@it.uu.se,

More information

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems

Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems Max Planck Institute Magdeburg Preprints Mian Ilyas Ahmad Ulrike Baur Peter Benner Implicit Volterra Series Interpolation for Model Reduction of Bilinear Systems MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER

More information

Model reduction of large-scale systems by least squares

Model reduction of large-scale systems by least squares Model reduction of large-scale systems by least squares Serkan Gugercin Department of Mathematics, Virginia Tech, Blacksburg, VA, USA gugercin@mathvtedu Athanasios C Antoulas Department of Electrical and

More information

System Identification by Nuclear Norm Minimization

System Identification by Nuclear Norm Minimization Dept. of Information Engineering University of Pisa (Italy) System Identification by Nuclear Norm Minimization eng. Sergio Grammatico grammatico.sergio@gmail.com Class of Identification of Uncertain Systems

More information

Linear dynamical systems with inputs & outputs

Linear dynamical systems with inputs & outputs EE263 Autumn 215 S. Boyd and S. Lall Linear dynamical systems with inputs & outputs inputs & outputs: interpretations transfer function impulse and step responses examples 1 Inputs & outputs recall continuous-time

More information

Discrete and continuous dynamic systems

Discrete and continuous dynamic systems Discrete and continuous dynamic systems Bounded input bounded output (BIBO) and asymptotic stability Continuous and discrete time linear time-invariant systems Katalin Hangos University of Pannonia Faculty

More information

ME Fall 2001, Fall 2002, Spring I/O Stability. Preliminaries: Vector and function norms

ME Fall 2001, Fall 2002, Spring I/O Stability. Preliminaries: Vector and function norms I/O Stability Preliminaries: Vector and function norms 1. Sup norms are used for vectors for simplicity: x = max i x i. Other norms are also okay 2. Induced matrix norms: let A R n n, (i stands for induced)

More information

Model Order Reduction for Parameter-Varying Systems

Model Order Reduction for Parameter-Varying Systems Model Order Reduction for Parameter-Varying Systems Xingang Cao Promotors: Wil Schilders Siep Weiland Supervisor: Joseph Maubach Eindhoven University of Technology CASA Day November 1, 216 Xingang Cao

More information

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31

Contents. 1 State-Space Linear Systems 5. 2 Linearization Causality, Time Invariance, and Linearity 31 Contents Preamble xiii Linear Systems I Basic Concepts 1 I System Representation 3 1 State-Space Linear Systems 5 1.1 State-Space Linear Systems 5 1.2 Block Diagrams 7 1.3 Exercises 11 2 Linearization

More information

Methods for eigenvalue problems with applications in model order reduction

Methods for eigenvalue problems with applications in model order reduction Methods for eigenvalue problems with applications in model order reduction Methoden voor eigenwaardeproblemen met toepassingen in model orde reductie (met een samenvatting in het Nederlands) Proefschrift

More information

Lecture 4: Analysis of MIMO Systems

Lecture 4: Analysis of MIMO Systems Lecture 4: Analysis of MIMO Systems Norms The concept of norm will be extremely useful for evaluating signals and systems quantitatively during this course In the following, we will present vector norms

More information

On Dominant Poles and Model Reduction of Second Order Time-Delay Systems

On Dominant Poles and Model Reduction of Second Order Time-Delay Systems On Dominant Poles and Model Reduction of Second Order Time-Delay Systems Maryam Saadvandi Joint work with: Prof. Karl Meerbergen and Dr. Elias Jarlebring Department of Computer Science, KULeuven ModRed

More information

Balancing-Related Model Reduction for Large-Scale Systems

Balancing-Related Model Reduction for Large-Scale Systems Balancing-Related Model Reduction for Large-Scale Systems Peter Benner Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität Chemnitz D-09107 Chemnitz benner@mathematik.tu-chemnitz.de

More information

Lecture 3. Chapter 4: Elements of Linear System Theory. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University.

Lecture 3. Chapter 4: Elements of Linear System Theory. Eugenio Schuster. Mechanical Engineering and Mechanics Lehigh University. Lecture 3 Chapter 4: Eugenio Schuster schuster@lehigh.edu Mechanical Engineering and Mechanics Lehigh University Lecture 3 p. 1/77 3.1 System Descriptions [4.1] Let f(u) be a liner operator, u 1 and u

More information

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems

An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems An iterative SVD-Krylov based method for model reduction of large-scale dynamical systems Serkan Gugercin Department of Mathematics, Virginia Tech., Blacksburg, VA, USA, 24061-0123 gugercin@math.vt.edu

More information

CME 345: MODEL REDUCTION

CME 345: MODEL REDUCTION CME 345: MODEL REDUCTION Proper Orthogonal Decomposition (POD) Charbel Farhat & David Amsallem Stanford University cfarhat@stanford.edu 1 / 43 Outline 1 Time-continuous Formulation 2 Method of Snapshots

More information

SYSTEM-THEORETIC METHODS FOR MODEL REDUCTION OF LARGE-SCALE SYSTEMS: SIMULATION, CONTROL, AND INVERSE PROBLEMS. Peter Benner

SYSTEM-THEORETIC METHODS FOR MODEL REDUCTION OF LARGE-SCALE SYSTEMS: SIMULATION, CONTROL, AND INVERSE PROBLEMS. Peter Benner SYSTEM-THEORETIC METHODS FOR MODEL REDUCTION OF LARGE-SCALE SYSTEMS: SIMULATION, CONTROL, AND INVERSE PROBLEMS Professur Mathematik in Industrie und Technik Fakultät für Mathematik Technische Universität

More information

Control Systems Design

Control Systems Design ELEC4410 Control Systems Design Lecture 13: Stability Julio H. Braslavsky julio@ee.newcastle.edu.au School of Electrical Engineering and Computer Science Lecture 13: Stability p.1/20 Outline Input-Output

More information

EEE582 Homework Problems

EEE582 Homework Problems EEE582 Homework Problems HW. Write a state-space realization of the linearized model for the cruise control system around speeds v = 4 (Section.3, http://tsakalis.faculty.asu.edu/notes/models.pdf). Use

More information

Computing Transfer Function Dominant Poles of Large Second-Order Systems

Computing Transfer Function Dominant Poles of Large Second-Order Systems Computing Transfer Function Dominant Poles of Large Second-Order Systems Joost Rommes Mathematical Institute Utrecht University rommes@math.uu.nl http://www.math.uu.nl/people/rommes joint work with Nelson

More information

AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL MOTIVATING EXAMPLE INVERTED PENDULUM

AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL MOTIVATING EXAMPLE INVERTED PENDULUM Controls Lab AN OVERVIEW OF MODEL REDUCTION TECHNIQUES APPLIED TO LARGE-SCALE STRUCTURAL DYNAMICS AND CONTROL Eduardo Gildin (UT ICES and Rice Univ.) with Thanos Antoulas (Rice ECE) Danny Sorensen (Rice

More information

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Ahmad F. Taha EE 3413: Analysis and Desgin of Control Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/

More information

Model Order Reduction using SPICE Simulation Traces. Technical Report

Model Order Reduction using SPICE Simulation Traces. Technical Report Model Order Reduction using SPICE Simulation Traces Paul Winkler, Henda Aridhi, and Sofiène Tahar Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada pauwink@web.de,

More information

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems Antoni Ras Departament de Matemàtica Aplicada 4 Universitat Politècnica de Catalunya Lecture goals To review the basic

More information