Model reduction of large-scale systems


Model reduction of large-scale systems: an overview and some new results
Thanos Antoulas, Rice University (email: aca@rice.edu, URL: www.ece.rice.edu/~aca)
LinSys2008, Sde Boker, 15-19 September 2008 (66 slides)

Outline
1. Overview
2. Motivation
3. Approximation methods: SVD-based methods; Krylov-based methods
4. Choice of projection points: passivity preserving model reduction; optimal H2 model reduction
5. Model reduction from measurements: S-parameters; classical realization theory; finite data points: the Loewner matrix; tangential interpolation: L and σL; recursive Loewner-matrix framework
6. Conclusions


The big picture: physical system and/or data → modeling → ODEs (PDEs + discretization) → model reduction (reduced number of ODEs) → simulation and control.

Dynamical systems. We consider explicit state equations
Σ : ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t)),
with inputs u1(·), …, um(·), internal state x1(·), …, xn(·), and outputs y1(·), …, yp(·); the state dimension n is much larger than m and p.

Problem statement. Given: dynamical system Σ = (f, h) with u(t) ∈ R^m, x(t) ∈ R^n, y(t) ∈ R^p. Problem: approximate Σ by Σ̂ = (f̂, ĥ) with u(t) ∈ R^m, x̂(t) ∈ R^k, ŷ(t) ∈ R^p, k ≪ n, such that: (1) the approximation error is small, with a global error bound; (2) stability/passivity is preserved; (3) the procedure is computationally efficient.

Approximation by projection. Unifying feature of approximation methods: projections. Let V, W ∈ R^{n×k} be such that W*V = I_k, so that Π = VW* is a projection. Define x̂ = W*x. Then
Σ̂ : dx̂(t)/dt = W*f(Vx̂(t), u(t)), y(t) = h(Vx̂(t), u(t)).
Thus Σ̂ is a good approximation of Σ if x − Πx is small.

Special case: linear dynamical systems. Σ: Eẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), written compactly as Σ = (E, A; B, C, D). Problem: approximate Σ by projection Π = VW*:
Ê = W*EV, Â = W*AV, B̂ = W*B, Ĉ = CV, D̂ = D, with k ≪ n,
so the n-th order Σ is reduced to the k-th order Σ̂. Norms: the H∞-norm measures the worst output error ‖y − ŷ‖ over inputs with ‖u‖ = 1; the H2-norm measures the impulse-response error ‖h − ĥ‖.
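As a small sketch (not from the slides), the projection step for a linear system can be written in a few lines of NumPy; the random stable system below is only a stand-in for Σ, and the Galerkin choice W = V is one simple way to satisfy W*V = I_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 10

# A stable test system E x' = A x + B u, y = C x (E = I for simplicity)
A = -np.diag(rng.uniform(2.0, 10.0, n)) + 0.02 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
E = np.eye(n)

# Galerkin choice: V orthonormal, W = V, so that W.T @ V = I_k
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
W = V
Pi = V @ W.T                      # the projector Pi = V W*

# Reduced-order matrices: Ehat = W*EV, Ahat = W*AV, Bhat = W*B, Chat = CV
Ehat, Ahat = W.T @ E @ V, W.T @ A @ V
Bhat, Chat = W.T @ B, C @ V
```

Here the subspace V is random, purely to show the mechanics; the rest of the slides are about choosing V and W well.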


Motivating examples (simulation/control):
1. Passive devices: VLSI circuits, thermal issues, power delivery networks
2. Data assimilation: North Sea forecast, air quality forecast
3. Molecular systems: MD simulations, heat capacity
4. CVD reactor: bifurcations
5. Mechanical systems: windscreen vibrations, buildings
6. Optimal cooling: steel profile
7. MEMS (Micro-Electro-Mechanical Systems): elf sensor
8. Nano-electronics: plasmonics

Passive devices: VLSI circuits. 1960s: the IC. 1971: Intel 4004, with 10µ feature size, 2300 components, 64 kHz clock speed. 2001: Intel Pentium IV, with 0.18µ feature size, 42M components, 2 GHz clock speed, and 2 km of interconnect in 7 layers.

Passive devices: VLSI circuits. [Figure: average wiring delay vs. typical gate delay as technology shrinks from 1.3 µm to 0.1 µm.] Today's technology is 65 nm, and at 65 nm the gate delay is smaller than the interconnect delay. Conclusion: simulations are required to verify that internal electromagnetic fields do not significantly delay or distort circuit signals; therefore the interconnections must be modeled. Electromagnetic modeling of packages and interconnects using PEEC methods (discretization of Maxwell's equations) results in very complex models, n ≈ 10^5 to 10^6; SPICE is inadequate.

Mechanical systems: buildings, earthquake protection. Taipei 101 (508 m) has a 730-ton damper between the 87th and 91st floors. Further examples (building; height; control mechanism; damping frequency/mass as listed):
CN Tower, Toronto; 533 m; passive tuned mass damper
Hancock building, Boston; 244 m; two passive tuned dampers; 0.14 Hz, 2 x 300 t
Sydney tower; 305 m; passive tuned pendulum; 0.1/0.5 Hz, 220 t
Rokko Island P&G, Kobe; 117 m; passive tuned pendulum; 0.33-0.62 Hz, 270 t
Yokohama Landmark tower; 296 m; active tuned mass dampers (2); 0.185 Hz, 340 t
Shinjuku Park Tower; 296 m; active tuned mass dampers (3); 330 t
TYG Building, Atsugi; 159 m; tuned liquid dampers (720); 0.53 Hz, 18.2 t


Approximation methods: overview. Krylov-based methods: realization and interpolation, via the Lanczos and Arnoldi algorithms. SVD-based methods: for nonlinear systems, POD methods and empirical gramians; for linear systems, balanced truncation and Hankel-norm approximation. Between the two families: Krylov/SVD methods.

A prototype approximation problem: the SVD (singular value decomposition), A = UΣV*. [Figures: the "clown" and "supernova" test images, their singular value decay (log-log scale, green: clown, red: supernova), and rank-6, rank-12, and rank-20 approximations.] The singular values provide the trade-off between accuracy and complexity.
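A minimal sketch of this idea, with a synthetic matrix standing in for the images; by the Schmidt-Eckart-Young theorem, the best rank-k approximation error in the 2-norm is exactly the (k+1)-st singular value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 60x40 matrix with rapidly decaying singular values
U0, _ = np.linalg.qr(rng.standard_normal((60, 40)))
V0, _ = np.linalg.qr(rng.standard_normal((40, 40)))
s = 2.0 ** -np.arange(40.0)
A = U0 @ np.diag(s) @ V0.T

# Truncated SVD: keep the k dominant singular triplets
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 6
Ak = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

# Best rank-k error in the 2-norm equals sigma_{k+1}
err = np.linalg.norm(A - Ak, 2)
```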

POD: Proper Orthogonal Decomposition. Consider ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t)). Collect snapshots of the state: X = [x(t1) x(t2) … x(tN)] ∈ R^{n×N}. SVD: X = UΣV* ≈ U_k Σ_k V_k*, k ≪ n. Approximate the state by x̂(t) = U_k* x(t), i.e. x(t) ≈ U_k x̂(t), with x̂(t) ∈ R^k. Projecting the state and output equations gives the reduced-order system
dx̂(t)/dt = U_k* f(U_k x̂(t), u(t)), y(t) = h(U_k x̂(t), u(t)),
so x̂(t) evolves in a low-dimensional space. Issues with POD: (a) the choice of snapshots, and (b) the singular values are not input/output invariants.
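A small sketch of snapshot POD, with a diagonal linear system standing in for f and explicit Euler as a purely illustrative time stepper:

```python
import numpy as np

n, k = 30, 5
A = -np.diag(np.linspace(1.0, 5.0, n))      # stand-in for the dynamics f(x) = A x
x = np.ones(n)

# Snapshot matrix X = [x(t1) ... x(tN)] from an explicit Euler sweep
dt, N = 0.01, 200
X = np.empty((n, N))
for j in range(N):
    x = x + dt * (A @ x)
    X[:, j] = x

# POD basis: k dominant left singular vectors of the snapshot matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Uk = U[:, :k]

# Reduced dynamics: xhat' = Uk* A Uk xhat
Ahat = Uk.T @ A @ Uk

# How well the rank-k basis captures the snapshots
rel_err = np.linalg.norm(X - Uk @ (Uk.T @ X)) / np.linalg.norm(X)
```

Because the trajectories here are smooth decaying exponentials, a handful of modes captures the snapshots almost perfectly; this illustrates point (a) on the slide, since a different input or initial condition would give different snapshots and a different basis.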

SVD methods: the Hankel singular values. The trade-off between accuracy and complexity for linear dynamical systems is provided by the Hankel singular values, computed (for stable systems) as follows. Define the gramians
P = ∫₀^∞ e^{At} BB* e^{A*t} dt, Q = ∫₀^∞ e^{A*t} C*C e^{At} dt.
To compute the gramians we need to solve two Lyapunov equations:
AP + PA* + BB* = 0, P > 0, and A*Q + QA + C*C = 0, Q > 0.
The Hankel singular values of the system Σ are σ_i = √λ_i(PQ).
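In SciPy the two Lyapunov solves can be sketched as follows; the random stable system is illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 20
A = -np.diag(rng.uniform(2.0, 10.0, n)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

# AP + PA* + BB* = 0   and   A*Q + QA + C*C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sigma_i = sqrt(lambda_i(PQ))
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
```

This dense solve is exactly the O(n^3) bottleneck the next slides mention; for truly large n one would use low-rank Lyapunov solvers instead.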

SVD methods: approximation by balanced truncation. There exists a basis where P = Q = S = diag(σ1, …, σn); this is the balanced basis of the system. In this basis, partition
A = [A11 A12; A21 A22], B = [B1; B2], C = [C1 C2], S = diag(Σ1, Σ2).
The reduced system is obtained by balanced truncation: Σ̂ = (A11; B1, C1), where Σ2 contains the small Hankel singular values. The projector is Π = VW* with V = W = [I_k; 0].

Properties of balanced reduction. (1) Stability is preserved. (2) Global error bound: σ_{k+1} ≤ ‖Σ − Σ̂‖ ≤ 2(σ_{k+1} + … + σ_n). Drawbacks: (1) dense computations, and matrix factorizations and inversions may be ill-conditioned; (2) the whole transformed system is needed before truncating, so the number of operations is O(n^3); (3) the bottleneck is the solution of two Lyapunov equations.
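A square-root balanced truncation sketch, one common way to realize the slide's construction (the eigendecomposition-based factorization below is a numerically safe stand-in for Cholesky factors), together with a spot-check of the error bound at sampled frequencies:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_factor(M):
    """Factor a symmetric PSD matrix as M = F F^T via its eigendecomposition."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

def balanced_truncation(A, B, C, k):
    P = solve_continuous_lyapunov(A, -B @ B.T)      # reachability gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian
    S, R = psd_factor(P), psd_factor(Q)
    U, hsv, Vt = np.linalg.svd(R.T @ S)             # hsv = Hankel singular values
    Sih = np.diag(hsv[:k] ** -0.5)
    W, V = R @ U[:, :k] @ Sih, S @ Vt[:k].T @ Sih   # then W^T V = I_k
    return W.T @ A @ V, W.T @ B, C @ V, hsv

rng = np.random.default_rng(4)
n, k = 12, 4
A = -np.diag(rng.uniform(2.0, 10.0, n)) + 0.05 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
Ah, Bh, Ch, hsv = balanced_truncation(A, B, C, k)

# Error-bound check at sampled frequencies: |G - Ghat| <= 2 (sigma_{k+1}+...+sigma_n)
G = lambda M, b, c, s: (c @ np.linalg.solve(s * np.eye(len(M)) - M, b))[0, 0]
worst = max(abs(G(A, B, C, 1j * w) - G(Ah, Bh, Ch, 1j * w))
            for w in np.logspace(-2, 3, 60))
```

The sampled maximum is a lower bound on the true H-infinity error, so it must sit below the 2(σ_{k+1}+…+σ_n) bound of the slide.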

Approximation methods: Krylov methods. [The overview diagram is repeated, with the Krylov branch (realization, interpolation; Lanczos, Arnoldi) highlighted.]

The basic Krylov iteration. Given A ∈ R^{n×n} and b ∈ R^n, let v1 = b/‖b‖. At the k-th step:
AV_k = V_k H_k + f_k e_k*,
where e_k ∈ R^k is the canonical unit vector, V_k = [v1 … vk] ∈ R^{n×k} with V_k*V_k = I_k, H_k = V_k*AV_k ∈ R^{k×k}, and v_{k+1} = f_k/‖f_k‖ ∈ R^n. Computational complexity for k steps: O(n²k); storage: O(nk). The Lanczos and the Arnoldi algorithms result. The Krylov iteration involves the subspace R_k = span[b, Ab, …, A^{k−1}b].
Arnoldi iteration: arbitrary A; H_k upper Hessenberg. Symmetric (one-sided) Lanczos iteration: symmetric A = A*; H_k tridiagonal and symmetric. Two-sided Lanczos iteration, with two starting vectors b and c: arbitrary A; H_k tridiagonal.
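The Arnoldi variant can be sketched directly from the recurrence above:

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi iteration: returns V_k, H_k, f_k with A V_k = V_k H_k + f_k e_k*."""
    n = b.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    f = H[k, k - 1] * V[:, k]
    return V[:, :k], H[:k, :k], f

rng = np.random.default_rng(6)
n, k = 40, 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
Vk, Hk, f = arnoldi(A, b, k)
```

(A production implementation would also handle breakdown, i.e. ‖w‖ = 0, which is not hit for this random example.)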

Three uses of the Krylov iteration. (1) Iterative solution of Ax = b: approximate the solution x iteratively. (2) Iterative approximation of the eigenvalues of A; in this case b is not fixed a priori, and the eigenvalues of the projected H_k approximate the dominant eigenvalues of A. (3) Approximation of linear systems by moment matching. Item (3) is of interest in the present context.

Approximation by moment matching. Given Eẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), expand the transfer function around s0:
G(s) = η0 + η1(s − s0) + η2(s − s0)² + η3(s − s0)³ + …
The η_j are the moments at s0. Find Ê dx̂(t)/dt = Âx̂(t) + B̂u(t), y(t) = Ĉx̂(t) + D̂u(t), with
Ĝ(s) = η̂0 + η̂1(s − s0) + η̂2(s − s0)² + η̂3(s − s0)³ + …,
such that for appropriate s0 and ℓ: η_j = η̂_j, j = 1, 2, …, ℓ.

Krylov and rational Krylov methods. Expansion around infinity: the η_j are the Markov parameters, and the problem is partial realization, solved by the Krylov iteration. Expansion around arbitrary s0 ∈ C: the η_j are moments, and the problem is rational interpolation, solved by the rational Krylov iteration. Remark: the Krylov and rational Krylov algorithms match moments without computing them, so moment matching methods can be implemented in a numerically efficient way.

Projectors for Krylov and rational Krylov methods. Given Σ = (E, A; B, C, D), obtain Σ̂ by projection Π = VW*, Π² = Π: Ê = W*EV, Â = W*AV, B̂ = W*B, Ĉ = CV, with k < n.
Krylov: let E = I and
V = [B, AB, …, A^{k−1}B] ∈ R^{n×k}, W̃ = [C; CA; …; CA^{k−1}] ∈ R^{k×n}, W* = (W̃V)^{−1}W̃.
Then the Markov parameters match: CA^iB = ĈÂ^iB̂.
Rational Krylov: let
V = [(λ1E − A)^{−1}B, …, (λkE − A)^{−1}B] ∈ R^{n×k}, W̃ = [C(λ_{k+1}E − A)^{−1}; …; C(λ_{2k}E − A)^{−1}] ∈ R^{k×n}, W* = (W̃V)^{−1}W̃.
Then the moments of Ĝ match those of G at the λ_i:
G(λ_i) = D + C(λ_iE − A)^{−1}B = D̂ + Ĉ(λ_iÊ − Â)^{−1}B̂ = Ĝ(λ_i).
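A one-sided rational Krylov sketch (Galerkin, W = V, which already interpolates at the points used to build V; SISO, E = I, and the diagonal system is illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
A = -np.diag(rng.uniform(1.0, 20.0, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
G = lambda s: (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0]

# Build V from shifted solves at the interpolation points lambda_i
lams = [0.5, 2.0, 8.0]
V = np.column_stack([np.linalg.solve(l * np.eye(n) - A, B).ravel() for l in lams])
V, _ = np.linalg.qr(V)                       # orthonormalize span(V)

# Galerkin projection (W = V)
Ah, Bh, Ch = V.T @ A @ V, V.T @ B, C @ V
Gh = lambda s: (Ch @ np.linalg.solve(s * np.eye(3) - Ah, Bh))[0, 0]
```

Interpolation holds because (λ_iE − A)^{−1}B lies in span(V); the two-sided choice of W on the slide buys interpolation at k additional points for the same reduced order.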

Properties of Krylov methods. (a) Number of operations: O(kn²) or O(k²n) vs. O(n³), hence efficiency. (b) Only matrix-vector multiplications are required; no matrix factorizations and/or inversions, and no need to compute a transformed model and then truncate. (c) Drawbacks: no global error bound, and Σ̂ may not be stable. Question: how to choose the projection points?


Choice of Krylov projection points: passivity preserving model reduction. Passive systems: Re ∫_{−∞}^t u(τ)*y(τ) dτ ≥ 0 for all t ∈ R and all u ∈ L2(R). Positive real rational functions: (1) G(s) = D + C(sE − A)^{−1}B is analytic for Re(s) > 0; (2) Re G(s) ≥ 0 for Re(s) ≥ 0, s not a pole of G(s). Theorem: Σ = (E, A; B, C, D) is passive ⇔ G(s) is positive real. Conclusion: positive realness of G(s) implies the existence of a spectral factorization
G(s) + G*(−s) = W(s)W*(−s),
where W(s) is stable rational and W(s)^{−1} is also stable. The spectral zeros λ_i of the system are the zeros of the spectral factor: W(λ_i) = 0, i = 1, …, n.

Passivity preserving model reduction (new result). Method: rational Krylov. Solution: take the projection points to be spectral zeros. Recall
V = [(λ1E − A)^{−1}B, …, (λkE − A)^{−1}B] ∈ R^{n×k}, W̃ = [C(λ_{k+1}E − A)^{−1}; …; C(λ_{2k}E − A)^{−1}] ∈ R^{k×n}.
Main result: if V, W are defined as above, where λ1, …, λk are spectral zeros and in addition λ_{k+i} = −λ_i (the mirror images), then the reduced system (i) satisfies the interpolation constraints, (ii) is stable, and (iii) is passive.

Spectral zero interpolation preserving passivity: Hamiltonian EVD and projection. The Hamiltonian eigenvalue problem is
[A 0 B; 0 −A* −C*; C B* D + D*] [X; Y; Z] = [E 0 0; 0 E* 0; 0 0 0] [X; Y; Z] Λ.
The generalized eigenvalues Λ are the spectral zeros of Σ. Partition the eigenvectors and eigenvalues as X = [X− X+], Y = [Y− Y+], Z = [Z− Z+], Λ = diag(Λ−, Λ+), where Λ− are the stable spectral zeros. Reduce by projection with V = X−, W = Y−:
Ê = W*EV, Â = W*AV, B̂ = W*B, Ĉ = CV, D̂ = D.
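A numerical sketch of this pencil, under the assumption E = I and D + D* invertible; the small symmetric positive-real system is illustrative, and the finite generalized eigenvalues are checked against the defining SISO property of spectral zeros, G(s) + G(−s) = 0:

```python
import numpy as np
from scipy.linalg import eig

n = 4
A = -np.diag([1.0, 2.0, 3.0, 4.0])
B = np.ones((n, 1))
C = B.T
D = np.array([[0.5]])
G = lambda s: (C @ np.linalg.solve(s * np.eye(n) - A, B) + D)[0, 0]

# Hamiltonian pencil: its generalized eigenvalues are the spectral zeros
M = np.block([[A, np.zeros((n, n)), B],
              [np.zeros((n, n)), -A.T, -C.T],
              [C, B.T, D + D.T]])
N = np.block([[np.eye(2 * n), np.zeros((2 * n, 1))],
              [np.zeros((1, 2 * n + 1))]])
w = eig(M, N, right=False)
sz = w[np.isfinite(w) & (np.abs(w) < 1e6)]   # keep the 2n finite spectral zeros
```

The eigenvectors of the same pencil would supply the projection matrices V = X−, W = Y− of the slide.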

Dominant spectral zeros: SADPA. What is a good choice of k spectral zeros out of n? Dominance criterion: a spectral zero s_j, with associated residue R_j, is dominant if |R_j|/|Re(s_j)| is large. Efficient computation for large-scale systems: we compute the k ≪ n most dominant eigenmodes of the Hamiltonian pencil; SADPA (Subspace Accelerated Dominant Pole Algorithm) solves this iteratively. Conclusion: passivity preserving model reduction becomes a structured eigenvalue problem.

Choice of Krylov projection points: optimal H2 model reduction. The H2 norm of a (scalar) system is
‖Σ‖_{H2} = ( ∫_{−∞}^{+∞} h²(t) dt )^{1/2}.
Goal: construct a Krylov projection such that
Σ̂ = arg min ‖Σ − Σ̃‖_{H2} over stable Σ̃ with deg(Σ̃) = k.
That is, find a Krylov projection Π = VW*, with V, W ∈ R^{n×k} and W*V = I_k, such that Â = W*AV, B̂ = W*B, Ĉ = CV.

Necessary optimality conditions and resulting algorithm. Let (Â, B̂, Ĉ) solve the optimal H2 problem and let λ̂_i denote the eigenvalues of Â. The necessary optimality conditions are
G(−λ̂_i) = Ĝ(−λ̂_i) and dG/ds(−λ̂_i) = dĜ/ds(−λ̂_i).
Thus the reduced system has to match the first two moments of the original system at the mirror images of the eigenvalues of Â. The proposed algorithm produces such a reduced-order system:
1. Make an initial selection of σ_i, for i = 1, …, k.
2. W = [(σ1I − A*)^{−1}C*, …, (σkI − A*)^{−1}C*].
3. V = [(σ1I − A)^{−1}B, …, (σkI − A)^{−1}B].
4. While not converged: Â = (W*V)^{−1}W*AV; σ_i ← −λ_i(Â) plus a Newton correction, i = 1, …, k; recompute W and V as in steps 2-3.
5. Â = (W*V)^{−1}W*AV, B̂ = (W*V)^{−1}W*B, Ĉ = CV.
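A simplified fixed-point sketch of this iteration (no Newton correction; a symmetric SISO system with c = b* is used so the two bases W and V coincide, which keeps the sketch short):

```python
import numpy as np

n, k = 30, 3
A = -np.diag(np.linspace(1.0, 30.0, n))      # symmetric, stable
b = np.ones(n)
c = b                                         # c = b*, so W and V coincide

G = lambda s: c @ np.linalg.solve(s * np.eye(n) - A, b)

sigma = np.linspace(1.0, 2.0, k)              # initial shifts
for _ in range(300):
    V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
    V, _ = np.linalg.qr(V)
    Ah = V.T @ A @ V
    new = np.sort(-np.linalg.eigvals(Ah).real)   # mirror images of the Ritz values
    change = np.max(np.abs(new - np.sort(sigma)))
    sigma = new
    if change < 1e-12:
        break

# Final projection at the converged shifts
V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
V, _ = np.linalg.qr(V)
Ah, bh, ch = V.T @ A @ V, V.T @ b, c @ V
Gh = lambda s: ch @ np.linalg.solve(s * np.eye(k) - Ah, bh)
```

At a fixed point the shifts are the mirror images of the reduced poles, so the interpolation-based optimality conditions above are met; interpolation at the current shifts holds by construction at every step.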

Example: transmission line (optimal H2 model reduction, E singular). An RLC ladder network with n = 155 total states and 101 independent states. Oscillatory frequency response: C = 10^{-8} F, L = 10^{-6} H, R_C = 10^8 Ω, R_L = 10^{-2} Ω, R = 10 Ω. Non-oscillatory frequency response: C = 10^{-8} F, L = 10^{-6} H, R_C = 10^8 Ω, R_L = 1 Ω, R = 1 Ω. [Figures: frequency responses of the original system (singular and finite parts) for both parameter sets.]

Plots. [Figures: Hankel singular values and positive-real singular values (dim = 101) for the oscillatory and the non-oscillatory system; error plots for positive-real balanced truncation, PRIMA, and the dominant spectral zero method (SZM with SADPA), n = 155, dim = 101, k = 11.]

H∞ and H2 error norms. Relative norms of the error systems (n = 155, dim = 101, k = 11):

Reduction method | Oscillatory H∞ | Oscillatory H2 | Non-oscillatory H∞ | Non-oscillatory H2
PRIMA | 1.4775 | - | 0.9005 | -
Spectral zero method (SZM with SADPA) | 0.9628 | 0.841 | 0.3599 | 0.3047
SZM with SADPA (interp = 31) followed by PRBT | 0.9617 | 0.8144 | 0.0721 | 0.0220
Optimal H2 | 0.5943 | 0.4621 | 0.0016 | 0.00023
Balanced truncation (BT) | 0.9393 | 0.6466 | 0.0024 | 0.00054
Riccati balanced truncation (PRBT) | 0.9617 | 0.8164 | 0.0021 | 0.00037

Approximation methods: summary. Krylov methods (realization, interpolation; Lanczos, Arnoldi): numerical efficiency, applicable beyond n ≈ 10^3. SVD methods (POD and empirical gramians for nonlinear systems; balanced truncation and Hankel approximation for linear systems): stability preservation and an error bound, but limited to about n ≈ 10^3. In between: Krylov/SVD methods.


Recall: the big picture. Physical system and/or data → modeling → ODEs (PDEs + discretization) → model reduction (reduced number of ODEs) → simulation and control.

A motivation: electronic systems. Growth in communications and networking and the demand for high data bandwidth require streamlining the simulation of entire complex systems, from chips to packages to boards. Thus in circuit simulation, signal integrity (lack of signal distortion) of high-speed electronic designs requires interconnect models that are valid over a wide bandwidth. An important tool: S-parameters. They represent a component as a black box, and accurate simulations require accurate component models. At high frequencies S-parameters are important because wave phenomena become dominant. Advantages: 0 ≤ |S| ≤ 1, and S-parameters can be measured using VNAs (Vector Network Analyzers).

Scattering or S-parameters. Given a system in I/O representation y(s) = H(s)u(s), the associated S-parameter representation is ȳ(s) = S(s)ū(s), where
S(s) = [H(s) + I][H(s) − I]^{−1},
ȳ = ½(y + u) are the transmitted waves, and ū = ½(y − u) are the reflected waves. S-parameter measurements S(jω_k) are samples of the frequency response of the S-parameter system representation.

Measurement of S-parameters. [Figures: a VNA (Vector Network Analyzer); magnitude of the S-parameters for 2 ports.]

Model construction from data (at infinity): classical realization. Given h_t ∈ R^{p×m}, t = 1, 2, …, find A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} such that h_t = CA^{t−1}B, t > 0. Main tool: the Hankel matrix
H = [h1 h2 h3 …; h2 h3 h4 …; h3 h4 h5 …; …] = O R,
where O = [C; CA; CA²; …] is the observability matrix and R = [B, AB, A²B, …] is the reachability matrix.

Classical realization. Solvability: rank H = n < ∞. Solution: find a nonsingular n × n submatrix H̄ of H; let σH̄ ∈ R^{n×n} be the correspondingly shifted submatrix, Γ ∈ R^{n×m} its first m columns, and Λ ∈ R^{p×n} its first p rows. Then
A = H̄^{−1} σH̄, B = H̄^{−1} Γ, C = Λ.
Consequence, in terms of the data: H(s) = Λ(sH̄ − σH̄)^{−1}Γ.
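A sketch of this construction, using an SVD factorization of the Hankel matrix in place of a hand-picked nonsingular submatrix (the numerically common variant); the hidden system generating the data is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 3, 8
A = np.diag([0.9, 0.5, -0.3])                # hidden system generating the data
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
h = [C @ np.linalg.matrix_power(A, t) @ B for t in range(2 * N)]   # h_{t+1} = C A^t B

# Hankel matrix and its shifted version
H  = np.block([[h[i + j]     for j in range(N)] for i in range(N)])
Hs = np.block([[h[i + j + 1] for j in range(N)] for i in range(N)])

# Factor H = O R via the SVD, truncated to its numerical rank
U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-8 * s[0]))             # numerical rank (= n)
O = U[:, :r] * np.sqrt(s[:r])
R = np.sqrt(s[:r])[:, None] * Vt[:r]

# Recovered realization (unique up to similarity)
Ar = np.linalg.pinv(O) @ Hs @ np.linalg.pinv(R)
Br = R[:, :1]                                 # first m columns of R
Cr = O[:1, :]                                 # first p rows of O
```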

Model construction from data at finite points: interpolation. Assume for simplicity that the given data are scalar: (s_i, φ_i), i = 1, 2, …, N, with s_i ≠ s_j for i ≠ j. Find H(s) = n(s)/d(s) such that H(s_i) = φ_i, i = 1, 2, …, N, with n, d coprime polynomials. A solution always exists (e.g. the Lagrange interpolating polynomial). Additional constraints for H: minimality, stability, bounded realness, etc. Main tool: the Loewner matrix. Divide the data into disjoint sets (λ_i, w_i), i = 1, …, k, and (μ_j, v_j), j = 1, …, q, with k + q = N:
L = [ (v_i − w_j)/(μ_i − λ_j) ] ∈ C^{q×k}, i = 1, …, q, j = 1, …, k.

Model construction from data at finite points: interpolation. Main result (1986): the rank of L encodes the information about the minimal-degree interpolants; the minimal degree is either rank L or N − rank L. Remarks. (a) In this framework the strict properness assumption has been dropped, so rational functions with a polynomial part can be recovered from input-output data. (b) The construction of interpolants will be deferred until later. (c) If H(s) = C(sI − A)^{−1}B + D, then L factors as
L = −O R, where O = [C(μ1I − A)^{−1}; …; C(μqI − A)^{−1}] and R = [(λ1I − A)^{−1}B, …, (λkI − A)^{−1}B].
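A quick numerical check of the rank statement, on an illustrative rational function of McMillan degree 2:

```python
import numpy as np

H = lambda s: (s + 2.0) / (s**2 + 3.0 * s + 5.0)   # McMillan degree 2

lam = np.array([1.0, 2.0, 3.0, 4.0])               # right points, values w_j
mu  = np.array([1.5, 2.5, 3.5, 4.5])               # left points, values v_i
w, v = H(lam), H(mu)

# Loewner matrix: L_ij = (v_i - w_j) / (mu_i - lam_j)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
rank = np.linalg.matrix_rank(L, tol=1e-10)         # equals the McMillan degree here
```

With k = q = 4 points on each side and a degree-2 function, the 4 x 4 Loewner matrix has rank exactly 2, consistent with the factorization in remark (c).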

Scalar interpolation with multiple points. Special case: a single point with multiplicity, (s0; φ0, φ1, …, φ_{N−1}), i.e. the value of the function and of a number of its derivatives is provided. The Loewner matrix becomes the Hankel-structured matrix with (i, j) entry φ_{i+j−1}/(i + j − 1)!:
L = [ φ1/1! φ2/2! φ3/3! φ4/4! …; φ2/2! φ3/3! φ4/4! …; φ3/3! φ4/4! …; φ4/4! … ].
Thus the Loewner matrix generalizes the Hankel matrix when general interpolation replaces realization.

Model reduction from measurements / Tangential interpolation: $\mathbb{L}$ & $\sigma\mathbb{L}$

General framework: tangential interpolation

Given: right data $(\lambda_i; r_i, w_i)$, $i = 1, \ldots, k$, and left data $(\mu_j; l_j, v_j)$, $j = 1, \ldots, q$. We assume for simplicity that all points are distinct.

Problem: find rational $p \times m$ matrices $H(s)$ such that

$$H(\lambda_i)\, r_i = w_i, \qquad l_j\, H(\mu_j) = v_j.$$

Right data: $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_k) \in \mathbb{C}^{k \times k}$, $R = [\,r_1 \; r_2 \; \cdots \; r_k\,] \in \mathbb{C}^{m \times k}$, $W = [\,w_1 \; w_2 \; \cdots \; w_k\,] \in \mathbb{C}^{p \times k}$.

Left data: $M = \operatorname{diag}(\mu_1, \ldots, \mu_q) \in \mathbb{C}^{q \times q}$, $L = \begin{pmatrix} l_1 \\ \vdots \\ l_q \end{pmatrix} \in \mathbb{C}^{q \times p}$, $V = \begin{pmatrix} v_1 \\ \vdots \\ v_q \end{pmatrix} \in \mathbb{C}^{q \times m}$.

Input-output data. The Loewner matrix is

$$\mathbb{L} = \begin{pmatrix} \frac{v_1 r_1 - l_1 w_1}{\mu_1 - \lambda_1} & \cdots & \frac{v_1 r_k - l_1 w_k}{\mu_1 - \lambda_k} \\ \vdots & \ddots & \vdots \\ \frac{v_q r_1 - l_q w_1}{\mu_q - \lambda_1} & \cdots & \frac{v_q r_k - l_q w_k}{\mu_q - \lambda_k} \end{pmatrix} \in \mathbb{C}^{q \times k}.$$

Recall: $H(\lambda_i)\, r_i = w_i$ and $l_j\, H(\mu_j) = v_j$. Therefore $\mathbb{L}$ satisfies the Sylvester equation

$$\mathbb{L}\Lambda - M\mathbb{L} = LW - VR.$$
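The Sylvester equation can be verified on synthetic tangential data; the system, points, and directions below are illustrative:

```python
# Sketch verifying the Sylvester equation  Loew*Lam - M*Loew = Lt*W - V*R
# on random tangential data (all names and values illustrative).
import numpy as np
rng = np.random.default_rng(0)

p, m, n, k, q = 2, 2, 4, 3, 3
A = np.diag(-np.arange(1.0, n + 1))
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
H = lambda s: C @ np.linalg.solve(s * np.eye(n) - A, B)

lam = np.array([0.1, 0.5, 0.9]); mu = np.array([0.3, 0.7, 1.1])
R  = rng.standard_normal((m, k))        # right directions r_i (columns)
Lt = rng.standard_normal((q, p))        # left directions l_j (rows)
W = np.column_stack([H(lam[i]) @ R[:, i] for i in range(k)])   # w_i = H(lam_i) r_i
V = np.vstack([Lt[j] @ H(mu[j]) for j in range(q)])            # v_j = l_j H(mu_j)

Loew = np.array([[(V[j] @ R[:, i] - Lt[j] @ W[:, i]) / (mu[j] - lam[i])
                  for i in range(k)] for j in range(q)])
Lam, M = np.diag(lam), np.diag(mu)
print(np.allclose(Loew @ Lam - M @ Loew, Lt @ W - V @ R))   # -> True
```

Entrywise, $(\mathbb{L}\Lambda - M\mathbb{L})_{ji} = (\lambda_i - \mu_j)\mathbb{L}_{ji} = l_j w_i - v_j r_i$, which is exactly the $(j,i)$ entry of $LW - VR$.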

State-space data. Suppose $E, A, B, C$ of minimal degree $n$ are given such that $H(s) = C(sE - A)^{-1}B$. Let $X, Y$ satisfy the Sylvester equations

$$EX\Lambda - AX = BR \quad \text{and} \quad MYE - YA = LC.$$

If the generalized eigenvalues of the pencil $(E, A)$ are distinct from the $\lambda_i$ and $\mu_j$, then $X$ and $Y$ are the unique solutions of these equations. In fact,

$$x_i = (\lambda_i E - A)^{-1} B\, r_i, \quad X: \text{generalized reachability matrix}; \qquad y_j = l_j\, C (\mu_j E - A)^{-1}, \quad Y: \text{generalized observability matrix},$$

and

$$\mathbb{L} = -\,Y E X.$$
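A numerical check of the factorization $\mathbb{L} = -YEX$, with $X$ built column-wise and $Y$ row-wise as above (illustrative data; $E = I$ for simplicity):

```python
# Illustrative check that the tangential Loewner matrix factors as
# Loew = -Y E X, with X, Y solving the Sylvester equations of the slide.
import numpy as np
rng = np.random.default_rng(2)

n, m, p, k, q = 4, 2, 2, 3, 3
E0 = np.eye(n)                                    # E = I for simplicity
A0 = np.diag(-np.arange(1.0, n + 1))
B0 = rng.standard_normal((n, m)); C0 = rng.standard_normal((p, n))
H = lambda s: C0 @ np.linalg.solve(s * E0 - A0, B0)

lam = np.array([0.1, 0.6, 1.1]); mu = np.array([0.35, 0.85, 1.35])
R  = rng.standard_normal((m, k)); Lt = rng.standard_normal((q, p))
W = np.column_stack([H(lam[i]) @ R[:, i] for i in range(k)])
V = np.vstack([Lt[j] @ H(mu[j]) for j in range(q)])

# x_i = (lam_i E - A)^{-1} B r_i   and   y_j = l_j C (mu_j E - A)^{-1}
X = np.column_stack([np.linalg.solve(lam[i] * E0 - A0, B0 @ R[:, i]) for i in range(k)])
Y = np.vstack([Lt[j] @ C0 @ np.linalg.inv(mu[j] * E0 - A0) for j in range(q)])
Loew = np.array([[(V[j] @ R[:, i] - Lt[j] @ W[:, i]) / (mu[j] - lam[i])
                  for i in range(k)] for j in range(q)])
print(np.allclose(Loew, -Y @ E0 @ X))   # -> True
```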

Construction of interpolants

Suggested construction procedure for an interpolant of McMillan degree $n$:

1. Factor $-\mathbb{L} = YEX$, so that $E$ has rank $n$.
2. Construct $A$, $B$, $C$ to satisfy the Sylvester equations above.
3. Define $D := w_j - C(s_j E - A)^{-1} B$.

Steps 2 and 3 are easy; the first step is problematic: how do we choose $E$? If the system is proper, then $\operatorname{size}(E) = \operatorname{rank}(E)$, and we could use, for example, the SVD to factor $\mathbb{L}$: proper systems are easy. If the system is singular, then $\operatorname{size}(E) > \operatorname{rank}(E)$, and we are stuck.

Solution: use the shifted Loewner matrix $\sigma\mathbb{L}$.

The shifted Loewner matrix

The shifted Loewner matrix $\sigma\mathbb{L}$ is the Loewner matrix associated with $sH(s)$:

$$\sigma\mathbb{L} = \begin{pmatrix} \frac{\mu_1 v_1 r_1 - \lambda_1 l_1 w_1}{\mu_1 - \lambda_1} & \cdots & \frac{\mu_1 v_1 r_k - \lambda_k l_1 w_k}{\mu_1 - \lambda_k} \\ \vdots & \ddots & \vdots \\ \frac{\mu_q v_q r_1 - \lambda_1 l_q w_1}{\mu_q - \lambda_1} & \cdots & \frac{\mu_q v_q r_k - \lambda_k l_q w_k}{\mu_q - \lambda_k} \end{pmatrix} \in \mathbb{C}^{q \times k}.$$

$\sigma\mathbb{L}$ satisfies the Sylvester equation

$$\sigma\mathbb{L}\Lambda - M\sigma\mathbb{L} = LW\Lambda - MVR,$$

and can be factored as $\sigma\mathbb{L} = -\,YAX$.

Construction of interpolants (models)

Assume that $k = q$, and let

$$\det(x\mathbb{L} - \sigma\mathbb{L}) \neq 0 \quad \text{for all } x \in \{\lambda_i\} \cup \{\mu_j\}.$$

Then

$$E = -\mathbb{L}, \quad A = -\sigma\mathbb{L}, \quad B = V, \quad C = W$$

is a minimal realization of an interpolant of the data, i.e., the function

$$H(s) = W(\sigma\mathbb{L} - s\mathbb{L})^{-1} V$$

interpolates the data.

Proof. Multiplying the first Sylvester equation by $s$ and subtracting it from the second, we get

$$(\sigma\mathbb{L} - s\mathbb{L})\Lambda - M(\sigma\mathbb{L} - s\mathbb{L}) = LW(\Lambda - sI) - (M - sI)VR.$$

Multiplying this equation by $e_i$ on the right and setting $s = \lambda_i$, we obtain

$$(\lambda_i I - M)(\sigma\mathbb{L} - \lambda_i\mathbb{L})\,e_i = (\lambda_i I - M)\,V r_i \;\Rightarrow\; (\sigma\mathbb{L} - \lambda_i\mathbb{L})\,e_i = V r_i \;\Rightarrow\; W e_i = W(\sigma\mathbb{L} - \lambda_i\mathbb{L})^{-1} V r_i.$$

Therefore $w_i = H(\lambda_i)\, r_i$, which proves the right tangential interpolation property. To prove the left property, we multiply the above equation by $e_j^*$ on the left and set $s = \mu_j$:

$$e_j^*(\sigma\mathbb{L} - \mu_j\mathbb{L})(\Lambda - \mu_j I) = e_j^* LW(\Lambda - \mu_j I) \;\Rightarrow\; e_j^*(\sigma\mathbb{L} - \mu_j\mathbb{L}) = l_j W \;\Rightarrow\; e_j^* V = l_j W(\sigma\mathbb{L} - \mu_j\mathbb{L})^{-1} V.$$

Therefore $v_j = l_j\, H(\mu_j)$. $\square$
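The theorem can be exercised numerically: sample a known $2 \times 2$ system of degree 4 at $k = q = 4$ point pairs, build $(\mathbb{L}, \sigma\mathbb{L})$, and check both tangential interpolation conditions (all data illustrative):

```python
# Sketch: assemble the Loewner pencil from tangential data of a known
# system and verify that Hhat(s) = W (sigL - s L)^{-1} V interpolates it.
# k = q = McMillan degree, so the pencil is regular at the sample points.
import numpy as np
rng = np.random.default_rng(1)

p, m, n = 2, 2, 4
A0 = np.diag(-np.arange(1.0, n + 1))
B0 = rng.standard_normal((n, m)); C0 = rng.standard_normal((p, n))
H = lambda s: C0 @ np.linalg.solve(s * np.eye(n) - A0, B0)

k = q = n
lam = np.array([0.1, 0.6, 1.1, 1.6]); mu = np.array([0.35, 0.85, 1.35, 1.85])
R  = rng.standard_normal((m, k)); Lt = rng.standard_normal((q, p))
W = np.column_stack([H(lam[i]) @ R[:, i] for i in range(k)])
V = np.vstack([Lt[j] @ H(mu[j]) for j in range(q)])

Loew = np.array([[(V[j] @ R[:, i] - Lt[j] @ W[:, i]) / (mu[j] - lam[i])
                  for i in range(k)] for j in range(q)])
sLoew = np.array([[(mu[j] * (V[j] @ R[:, i]) - lam[i] * (Lt[j] @ W[:, i]))
                   / (mu[j] - lam[i]) for i in range(k)] for j in range(q)])

Hhat = lambda s: W @ np.linalg.solve(sLoew - s * Loew, V)
ok_right = all(np.allclose(Hhat(lam[i]) @ R[:, i], W[:, i]) for i in range(k))
ok_left  = all(np.allclose(Lt[j] @ Hhat(mu[j]), V[j]) for j in range(q))
print(ok_right, ok_left)   # -> True True
```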

Construction of interpolants: new procedure

Main assumption:

$$\operatorname{rank}(x\mathbb{L} - \sigma\mathbb{L}) = \operatorname{rank}\,[\,\mathbb{L} \;\; \sigma\mathbb{L}\,] = \operatorname{rank}\begin{bmatrix} \mathbb{L} \\ \sigma\mathbb{L} \end{bmatrix} =: k \quad \text{for all } x \in \{\lambda_i\} \cup \{\mu_j\}.$$

Then, for some $x \in \{\lambda_i\} \cup \{\mu_j\}$, we compute the short SVD

$$x\mathbb{L} - \sigma\mathbb{L} = Y\Sigma X^*, \quad \operatorname{rank}(x\mathbb{L} - \sigma\mathbb{L}) = \operatorname{rank}(\Sigma) = \operatorname{size}(\Sigma) =: k, \quad Y \in \mathbb{C}^{\nu \times k}, \; X^* \in \mathbb{C}^{k \times \rho}.$$

Theorem. A realization $[E, A, B, C]$ of an interpolant is given as follows:

$$E = -Y^* \mathbb{L} X, \quad A = -Y^* \sigma\mathbb{L} X, \quad B = Y^* V, \quad C = W X.$$

Remark. The system $[E, A, B, C]$ can now be further reduced using any of the usual reduction methods.
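A sketch of the SVD-based procedure on redundant scalar data (illustrative function and points; SISO, so all tangential directions equal 1):

```python
# SVD-based sketch for redundant data: oversample a degree-2 scalar system,
# project the rectangular Loewner pencil with the SVD factors, and recover
# the transfer function (all values illustrative).
import numpy as np

H = lambda s: 1.0 / (s + 1.0) + 2.0 / (s + 3.0)       # McMillan degree 2
lam = np.linspace(0.1, 2.0, 6); mu = np.linspace(0.15, 2.05, 6)
w, v = H(lam), H(mu)

Loew  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
sLoew = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) \
        / (mu[:, None] - lam[None, :])

x = lam[0]                                            # any sample point
Yf, S, Xh = np.linalg.svd(x * Loew - sLoew)
r = int(np.sum(S > 1e-9 * S[0]))                      # numerical rank (2 here)
Y, X = Yf[:, :r], Xh[:r, :].T

E, A = -Y.T @ Loew @ X, -Y.T @ sLoew @ X              # projected pencil
Bh, Ch = Y.T @ v, w @ X
Hr = lambda s: Ch @ np.linalg.solve(s * E - A, Bh)    # reduced interpolant
print(r, np.allclose(Hr(1.234), H(1.234)))            # -> 2 True
```

Since the data come from a degree-2 function, the projected pencil of size 2 recovers $H$ exactly; with noisy or higher-order data, $r$ is chosen from the singular value decay instead.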

Example: constrained damped mass-spring system (Stykel, Mehrmann)

The $i$th mass of weight $m_i$ is connected to the $(i+1)$st mass by a spring and a damper with constants $k_i$ and $d_i$, respectively, and also to the ground by a spring and a damper with constants $\kappa_i$ and $\delta_i$, respectively. Additionally, the first mass is connected to the last one by a rigid bar, and the system is influenced by the control $u$. (Fig. 3.4: a damped mass-spring system with a holonomic constraint.)

The vibration of this system is described by the descriptor system

$$\dot{p}(t) = v(t),$$
$$M\dot{v}(t) = K p(t) + D v(t) - G^T \lambda(t) + B_2 u(t),$$
$$0 = G p(t),$$
$$y(t) = C_1 p(t).$$

Measurements: 500 frequency response data between $[-2i, +2i]$.

Mechanical system: plots

Left: frequency responses of the original system ($n = 97$) and of the identified systems of orders $k = 2, 6, 10, 14, 18$. Right: frequency responses of the error systems.

Example: four-pole band-pass filter

1000 measurements between 40 and 120 GHz; $2 \times 2$ S-parameters, MIMO (approximate) interpolation; $\mathbb{L}, \sigma\mathbb{L} \in \mathbb{R}^{2000 \times 2000}$.

Left: singular values of $\mathbb{L}$, $\sigma\mathbb{L}$ for the $2 \times 2$ system vs. the $1 \times 1$ systems. Right: magnitude of the $S(1,1)$ and $S(1,2)$ parameter data and of the low-order approximants (17th-order model).

Model reduction from measurements / Recursive Loewner-matrix framework

Recursive Loewner-matrix framework

Consider the (right and left) interpolation data defined by the matrices

$$R \in \mathbb{C}^{m \times k}, \; W \in \mathbb{C}^{p \times k}, \; \Lambda \in \mathbb{C}^{k \times k} \quad \text{and} \quad L \in \mathbb{C}^{\ell \times p}, \; V \in \mathbb{C}^{\ell \times m}, \; M \in \mathbb{C}^{\ell \times \ell},$$

as well as the associated Loewner matrix $\mathbb{L}$ and shifted Loewner matrix $\sigma\mathbb{L}$, both of size $\ell \times k$, which satisfy the Sylvester equations

$$\mathbb{L}\Lambda - M\mathbb{L} = LW - VR = \begin{pmatrix} L & V \end{pmatrix} \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \begin{pmatrix} R \\ W \end{pmatrix},$$

$$\sigma\mathbb{L}\Lambda - M\sigma\mathbb{L} = LW\Lambda - MVR = \begin{pmatrix} L & MV \end{pmatrix} \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \begin{pmatrix} R \\ W\Lambda \end{pmatrix}.$$

We now define the $(p+m) \times (p+m)$ rational matrices

$$\Theta(s) = \begin{pmatrix} I_p & 0 \\ 0 & I_m \end{pmatrix} + \begin{pmatrix} W \\ R \end{pmatrix} (s\mathbb{L} - \mathbb{L}\Lambda)^{-1} \begin{pmatrix} L & V \end{pmatrix} = \begin{pmatrix} \Theta_{11}(s) & \Theta_{12}(s) \\ \Theta_{21}(s) & \Theta_{22}(s) \end{pmatrix},$$

$$\widetilde{\Theta}(s) = \Theta^{-1}(s) = \begin{pmatrix} I_p & 0 \\ 0 & I_m \end{pmatrix} + \begin{pmatrix} W \\ R \end{pmatrix} (s\mathbb{L} - M\mathbb{L})^{-1} \begin{pmatrix} L & V \end{pmatrix} = \begin{pmatrix} \widetilde{\Theta}_{11}(s) & \widetilde{\Theta}_{12}(s) \\ \widetilde{\Theta}_{21}(s) & \widetilde{\Theta}_{22}(s) \end{pmatrix}.$$

Recursive interpolation

These are the generating matrices of the interpolation problem, as all interpolants can be obtained from them. First, notice that the interpolation conditions are satisfied.

Lemma. $[\,l_j \;\; v_j\,]\,\Theta(\mu_j) = 0_{1 \times (p+m)}$ for all $j$, and $\widetilde{\Theta}(\lambda_k)\begin{pmatrix} w_k \\ r_k \end{pmatrix} = 0_{(p+m) \times 1}$ for all $k$.

Next, all interpolants can be obtained as linear matrix fractions constructed from the entries of $\Theta$ and $\widetilde{\Theta}$.

Theorem. For any rational $\Gamma(s)$, the rational matrix

$$\Psi(s) = [\Theta_{11}(s)\Gamma(s) + \Theta_{12}(s)]\,[\Theta_{21}(s)\Gamma(s) + \Theta_{22}(s)]^{-1} = [\widetilde{\Theta}_{11}(s) - \Gamma(s)\widetilde{\Theta}_{21}(s)]^{-1}\,[\widetilde{\Theta}_{12}(s) - \Gamma(s)\widetilde{\Theta}_{22}(s)]$$

is an interpolant; these are right and left coprime factorizations, respectively.

Cascade representation of recursive interpolation

Feedback interpretation of the parametrization of all solutions of the rational interpolation problem: the generating system $\Theta$ maps $u$ to $y$, in feedback with the parameter $\Gamma$ (mapping $\hat{y}$ to $\hat{u}$).

Cascade representation of the recursive interpolation problem: $u \rightarrow \Theta_1 \rightarrow \Theta_2 \rightarrow \cdots \rightarrow \Theta_k \rightarrow \Gamma_k \rightarrow y$.

Recursive Loewner and shifted Loewner matrices

For the recursive procedure, the error quantities at each step are the key; they are computed as

$$[\,L_{k,e} \;\; V_{k,e}\,] = [\,L_k \;\; V_k\,]\,\Theta_{k-1}(\mu_k) \quad \text{and} \quad \begin{pmatrix} W_{k,e} \\ R_{k,e} \end{pmatrix} = \widehat{\Theta}_{k-1}(\lambda_k)\begin{pmatrix} W_k \\ R_k \end{pmatrix}.$$

It follows that the recursive quantities for 3 stages are

$$W_e = [\,W_{1e} \;\; W_{2e} \;\; W_{3e}\,], \quad V_e = \begin{pmatrix} V_{1e} \\ V_{2e} \\ V_{3e} \end{pmatrix}, \quad \mathbb{L}_e = \begin{pmatrix} \mathbb{L}_{1e} & & \\ & \mathbb{L}_{2e} & \\ & & \mathbb{L}_{3e} \end{pmatrix},$$

$$\sigma\mathbb{L}_e = \begin{pmatrix} \sigma\mathbb{L}_{1e} & L_{1e}W_{2e} & L_{1e}W_{3e} \\ V_{2e}R_{1e} & \sigma\mathbb{L}_{2e} & L_{2e}W_{3e} \\ V_{3e}R_{1e} & V_{3e}R_{2e} & \sigma\mathbb{L}_{3e} \end{pmatrix}.$$

The resulting generating system is

$$\Theta(s) = \begin{pmatrix} I_p & 0 \\ 0 & I_m \end{pmatrix} + \begin{pmatrix} W_e \\ R_e \end{pmatrix} (s\mathbb{L}_e - \sigma\mathbb{L}_e + V_e R_e)^{-1} \begin{pmatrix} L_e & V_e \end{pmatrix}.$$

The above procedure recursively constructs an L-D-U factorization of the Loewner matrix.

Example: delay system

$$E\dot{x}(t) = A_0 x(t) + A_1 x(t - \tau) + B u(t), \qquad y(t) = C x(t),$$

where $E, A_0, A_1$ are $500 \times 500$ and $B, C$ are 500-vectors.

Procedure: compute 1000 frequency response samples, then apply the recursive/adaptive Loewner-framework procedure. (Blue: original; red: approximants.)

Left: 35th-order adaptively constructed model, $H_\infty$ norm of the error: 0.008. Right: 50th-order non-adaptively constructed model, $H_\infty$ norm of the error: 0.180.

Outline

1. Overview
2. Motivation
3. Approximation methods: SVD-based methods; Krylov-based methods
4. Choice of projection points: passivity preserving model reduction; optimal $H_2$ model reduction
5. Model reduction from measurements: S-parameters; classical realization theory; finite data points: the Loewner matrix; tangential interpolation: $\mathbb{L}$ & $\sigma\mathbb{L}$; recursive Loewner-matrix framework
6. Conclusions

Summary and conclusions

Given input/output data, we can construct, with no computation, a high-order system in generalized state-space form $\Rightarrow$ let the data reveal the underlying system. This system is such that $(\mathbb{L}, \sigma\mathbb{L})$ is a singular pencil. To address this issue:

1. SVD of $\mathbb{L}$ ($\sigma\mathbb{L}$).
2. Recursive (and adaptive) procedure.

This approach is the natural way to construct models and reduced models from data, as it does not require (force) the inversion of $E$.

References

Passivity preserving model reduction:
- Antoulas, Systems and Control Letters (2005)
- Sorensen, Systems and Control Letters (2005)
- Ionutiu, Rommes, Antoulas, IEEE Trans. Computer-Aided Design (2008)

Optimal $H_2$ model reduction:
- Gugercin, Antoulas, Beattie, SIAM J. Matrix Anal. Appl. (2008)

Model reduction from data:
- Mayo, Antoulas, Linear Algebra and its Applications, PAF issue (2007)
- Lefteriu, Antoulas, Technical Report (2008)

General reference: Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM (2005)