Model Order Reduction for Systems with Non-Rational Transfer Function Arising in Computational Electromagnetics


Lihong Feng and Peter Benner

(Mathematics in Industry and Technology, Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany; e-mail: lihong.feng@mathematik.tu-chemnitz.de, benner@mathematik.tu-chemnitz.de. This research was supported by the Alexander von Humboldt Foundation and by the DFG grant BE 2174/7-1, "Automatic, Parameter-Preserving Model Reduction for Applications in Microsystems Technology".)

Abstract. We consider model order reduction of systems described by a non-rational transfer function. The systems under consideration result from the discretization of electromagnetic systems with surface losses [1]. In this problem, the frequency parameter s appears nonlinearly. We interpret the nonlinear functions of s as parameters of the system and apply parametric model order reduction (PMOR). Since the parameters are functions of the frequency s, they are coupled to each other; nevertheless, PMOR treats them as individual parameters. We review existing PMOR methods and discuss their applicability to the problem considered here. Based on our findings, we propose an optimized method for the parametric system considered in this paper and report on numerical experiments performed with the optimized method applied to real-life data.

1 Problem Description

The transfer functions of the systems considered here take the form

    H(s) = s B^T (s^2 I_n - (1/√s) D + A)^{-1} B,    (1)

where A, D are n × n matrices, B is an n × p matrix (p << n), and I_n is the identity matrix of size n. These transfer functions result from the spatial discretization of electromagnetic field equations, i.e., the Maxwell equations, describing the electrodynamical behavior of microwave devices when surface losses are included in the physical model. For details see [1].
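To make the structure of (1) concrete, the following minimal Python sketch evaluates H(s) for a small dense toy system; all matrices, sizes, and the sample frequency are placeholders rather than the application data from [1], and the sign convention follows (1) as reconstructed above.

    import numpy as np

    def eval_H(s, A, D, B):
        """Evaluate H(s) = s B^T (s^2 I - (1/sqrt(s)) D + A)^{-1} B,
        the non-rational transfer function (1), at one complex frequency s."""
        n = A.shape[0]
        K = (s**2) * np.eye(n) - D / np.sqrt(s) + A   # sqrt(s): the non-rational part
        return s * B.T @ np.linalg.solve(K, B)

    # toy placeholder data: n = 6 states, p = 1 input
    rng = np.random.default_rng(0)
    n, p = 6, 1
    A, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    B = rng.standard_normal((n, p))
    print(eval_H(1j * 4e9, A, D, B))                  # s = j*omega on the imaginary axis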

Due to the symmetry inherent in the model under consideration, we focus here on one-sided projection methods for model order reduction. That is, the reduced-order model is obtained by finding a projection matrix V ∈ R^{n×q} with V^T V = I_q, such that

    Ĥ(s) = s B̂^T (s^2 I_q - (1/√s) D̂ + Â)^{-1} B̂ ≈ H(s)

is the reduced-order transfer function, with D̂ = V^T D V, Â = V^T A V ∈ R^{q×q}, B̂ = V^T B ∈ R^{q×p}, and q << n. Since H(s) contains not only s, but also two nonlinear functions of s, namely s^2 and 1/√s, we intend to apply parametric model order reduction (PMOR) methods (e.g., [2, 3]) in order to compute Ĥ(s).

Notice that model reduction of a transfer function similar to (1) is also discussed in [1], where a conventional non-parametric model order reduction method is considered. There, the generated projection matrix V depends not only on some expansion point s_0, but also on some value s̃ used for fixing the term 1/√s to 1/√s̃ in order to obtain a standard rational transfer function. This may cause poor approximation properties of Ĥ(s) at values s where 1/√s is not close to 1/√s̃. In our situation, the basic difference of PMOR compared to non-parametric model reduction is that the computed projection matrix V depends only on the expansion point s_0, and not on any fixed value s̃ other than s_0, since the term 1/√s is treated as a free parameter. Usually, this strategy produces a reduced-order model (transfer function) with an evenly distributed small error over a large frequency range.

In order to apply PMOR methods such as those suggested in [2, 3], H(s) has to be expanded into a power series; the projection matrix V is then computed from the coefficients of the series expansion. Different ways of computing V constitute different PMOR algorithms. Therefore, in the following we first consider various series expansions of H(s) and discuss their suitability for PMOR.

The first possibility, a Neumann series expansion of H(s), is obtained when A is nonsingular:

    H(s) = s B^T (s^2 I_n - (1/√s) D + A)^{-1} B
         = s B^T (s^2 A^{-1} - (1/√s) A^{-1} D + I_n)^{-1} A^{-1} B
         = s B^T [I_n - ((1/√s) A^{-1} D - s^2 A^{-1})]^{-1} A^{-1} B
         = s B^T Σ_{i=0}^∞ ((1/√s) A^{-1} D - s^2 A^{-1})^i A^{-1} B,

where the last equality only holds if ||(1/√s) A^{-1} D - s^2 A^{-1}|| < 1 for a suitably chosen matrix norm. However, in the considered application, A is a singular matrix. Thus, this series expansion cannot be used.
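Whichever expansion is eventually used to generate V, the projection step defined at the beginning of this section is always the same. A minimal sketch (dense toy version; eval_H and the matrix names are the assumptions of the previous sketch, and the reduced model keeps the s, s^2, 1/√s structure of (1) because the same V is applied from both sides):

    import numpy as np

    def project(A, D, B, V):
        """One-sided projection: A_hat = V^T A V, D_hat = V^T D V, B_hat = V^T B."""
        return V.T @ A @ V, V.T @ D @ V, V.T @ B

    def eval_H_hat(s, A_hat, D_hat, B_hat):
        """Reduced transfer function of the same non-rational form as (1):
        H_hat(s) = s B_hat^T (s^2 I_q - (1/sqrt(s)) D_hat + A_hat)^{-1} B_hat."""
        q = A_hat.shape[0]
        K = (s**2) * np.eye(q) - D_hat / np.sqrt(s) + A_hat
        return s * B_hat.T @ np.linalg.solve(K, B_hat)

    # usage: A_hat, D_hat, B_hat = project(A, D, B, V); eval_H_hat(1j * 4e9, A_hat, D_hat, B_hat)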

Extracting s^2 from the inverse in (1),

    H(s) = (1/s) B^T (I_n - (1/(s^2 √s)) D + (1/s^2) A)^{-1} B,    (2)

a second possibility for a Neumann series expansion is obtained:

    H(s) = (1/s) B^T (I_n - Q(s))^{-1} B = (1/s) B^T Σ_{i=0}^∞ Q(s)^i B,  where  Q(s) := (1/(s^2 √s)) D - (1/s^2) A.    (3)

Next we check whether this Neumann series converges. Although the frequencies of interest in applications of (1) are relatively high (s = jω with ω ≈ 10^9 Hz for the real-life data used in Section 4), the ill-scaling of the matrices D and A that is often encountered in practice prevents convergence of the series: in the example considered in Section 4, the entries of D are of magnitude up to 10^{27} and those of A up to 10^{23}, so one would need |s| > 10^{12} to achieve convergence of the Neumann series. Therefore, the series expansion (3) is not applicable in practice either.

Finally, we study a third alternative for a power series expansion of (1). Again starting from (2), we define

    s_1(s) := 1/(s^2 √s),    s_2(s) := 1/s^2.

If we choose a nonzero expansion point s_0 and write s_1(s) = s_1(s_0) + σ_1(s), s_2(s) = s_2(s_0) + σ_2(s), then we get

    H(s) = (1/s) B^T (I_n - s_1(s) D + s_2(s) A)^{-1} B
         = (1/s) B^T (G - σ_1 D + σ_2 A)^{-1} B,  where  G := I_n - s_1(s_0) D + s_2(s_0) A,
         = (1/s) B^T [I_n - (σ_1 G^{-1} D - σ_2 G^{-1} A)]^{-1} G^{-1} B
         = (1/s) B^T Σ_{i=0}^∞ Q^i G^{-1} B,  where  Q := σ_1 G^{-1} D - σ_2 G^{-1} A.    (4)

Simple calculations show that the entries of Q are small enough to guarantee ||Q|| < 1 for small σ_1, σ_2, so that the series converges. Therefore, we use the series expansion (4) in the following analysis, and we treat σ_1 and σ_2 as two individual parameters, although they are both functions of s and hence are coupled parameters. For ease of notation, in the following we write B_M := G^{-1} B, M_1 := G^{-1} D, and M_2 := -G^{-1} A (the minus sign is absorbed into M_2), so that (4) reads

    H(s) = (1/s) B^T Σ_{i=0}^∞ (σ_1 M_1 + σ_2 M_2)^i B_M.

The difference between PMOR methods lies in the computation of the projection matrix V. Therefore, in the next section we review two different methods for computing V and analyze their drawbacks. Based on these considerations, we propose an improved method in Section 3. We report on numerical experiments with these methods in Section 4.
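In practice G is large and sparse, so M_1, M_2, and B_M would not be formed explicitly: one factorizes G once and reuses the factorization whenever a product with G^{-1} is needed. A hedged sketch using SciPy's sparse LU (the setup is illustrative only; A, D are assumed to be SciPy sparse CSC matrices and B a dense n × p array):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def make_operators(A, D, B, s0):
        """Factorize G = I - s_1(s0) D + s_2(s0) A once; return callables applying
        M1 = G^{-1} D and M2 = -G^{-1} A to a block, plus B_M = G^{-1} B."""
        n = A.shape[0]
        s1 = 1.0 / (s0**2 * np.sqrt(s0))              # s_1(s0) = 1/(s0^2 sqrt(s0))
        s2 = 1.0 / s0**2                              # s_2(s0) = 1/s0^2
        G = sp.identity(n, dtype=complex, format="csc") - s1 * D + s2 * A
        lu = spla.splu(G)                             # one sparse LU, reused for all solves
        M1 = lambda X: lu.solve(np.asarray(D @ X))    # M1 X = G^{-1} (D X)
        M2 = lambda X: -lu.solve(np.asarray(A @ X))   # M2 X = -G^{-1} (A X)
        return M1, M2, lu.solve(B.astype(complex))

Typically s_0 = jω_0 for some ω_0 in the frequency band of interest, so G and the resulting basis are complex; step 5 of Algorithm 1 below therefore splits the basis into real and imaginary parts.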

2 Different Methods for Computing the Projection Matrix V

2.1 Directly Computing V

A simple and direct way of obtaining V is to compute the coefficient matrices of the series expansion

    H(s) = (1/s) B^T [B_M + M_1 B_M σ_1 + M_2 B_M σ_2 + M_1^2 B_M σ_1^2 + (M_1 M_2 + M_2 M_1) B_M σ_1 σ_2 + ... + M_1^3 B_M σ_1^3 + ...]    (5)

by direct matrix multiplication and then orthogonalize these coefficients to obtain the matrix V [2]:

    range{V} = orth{B_M, M_1 B_M, ..., (M_1 M_2 + M_2 M_1) B_M, ...}.    (6)

Unfortunately, the computed coefficients quickly become linearly dependent due to numerical instability (see the analysis, e.g., in [3, 4]). In the end, the matrix V is often so inaccurate that it does not possess the expected theoretical properties.

2.2 Recursively Computing V

A recursive method for computing V is proposed in [3]; it is based on recursions between the coefficients of the series expansion. The series expansion (4) can also be written in the form

    H(s) = (1/s) B^T [B_M + (σ_1 M_1 + σ_2 M_2) B_M + ... + (σ_1 M_1 + σ_2 M_2)^i B_M + ...].    (7)

Using (7), we define

    R_0 = B_M,
    R_1 = [M_1 R_0, M_2 R_0],
    ...
    R_j = [M_1 R_{j-1}, M_2 R_{j-1}],
    ...    (8)

We see that R_0, R_1, ..., R_j, ... include all the coefficient matrices of the series expansion (7). Therefore, we can use R_0, R_1, ... to generate the projection matrix V:

    range{V} = colspan{R_0, R_1, ..., R_m}.    (9)

Here, V can be computed by employing the recursive relations between the R_j, j = 0, 1, ..., m, combined with the modified Gram-Schmidt process [3]; a sketch follows below. A disadvantage of this approach is that coefficients belonging to the same monomial in σ_1, σ_2 are treated separately and orthogonalized sequentially using, e.g., the modified Gram-Schmidt process. This may lead to reduced-order models of larger order than the direct approach.
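A minimal sketch of this recursive construction (RecV), reusing the operator handles M1, M2 and the start block B_M from the previous sketch; [3] interleaves the recursion with modified Gram-Schmidt, which this sketch imitates by orthogonalizing each block R_j as soon as it is formed, with tolerance-based deflation as in Algorithm 1 below:

    import numpy as np

    def recv_basis(M1, M2, B_M, m, tol=1e-10):
        """Basis for (9): R_j = [M1 R_{j-1}, M2 R_{j-1}], orthonormalized
        column by column with modified Gram-Schmidt; dependent columns dropped."""
        cols = []                                   # accepted orthonormal columns
        def mgs_append(W):
            for w in W.T:                           # iterate over the columns of W
                w = w.copy()
                for v in cols:
                    w -= (v.conj() @ w) * v         # modified Gram-Schmidt update
                nrm = np.linalg.norm(w)
                if nrm > tol:                       # deflate near-dependent columns
                    cols.append(w / nrm)
        R = B_M
        mgs_append(R)
        for _ in range(m):                          # levels j = 1, ..., m
            R = np.hstack([M1(R), M2(R)])           # 2^j * p candidate columns
            mgs_append(R)
        return np.column_stack(cols)

The number of candidate columns doubles with every level, and blocks such as M_1 M_2 B_M and M_2 M_1 B_M enter separately; this is why (9) tends to deliver a larger basis than (6), as Table 2 below confirms.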

3 An Optimized Method for Computing the Projection Matrix V

In this section, we overcome the drawback of the recursive method, at least for the situation with two parameters as encountered in the electromagnetics model considered in the introduction. We explain our approach using the first mixed moment of the series expansion (4). Note that the coefficients M_1 M_2 B_M and M_2 M_1 B_M are treated as two individual terms in (8). Observing that they are both coefficients of σ_1 σ_2, they can instead be treated as one term during the computation, as in (6). There are further coefficients which are computed separately in (8) but added up in (6), such as M_2 M_1^2 B_M, M_1^2 M_2 B_M, and M_1 M_2 M_1 B_M, which are all coefficients of σ_1^2 σ_2. Their treatment as individual matrices in (8) will, in some cases, result in more columns in the final projection matrix V than in (6), and therefore in reduced-order models that are not as small as possible.

Next we develop a new set of recursions for the coefficient matrices in (6), such that the coefficients of σ_1^i σ_2^j, i, j > 0, yield only one term each. Furthermore, the modified Gram-Schmidt algorithm is applied within these recursions in order to compute the matrix V in a numerically robust way, which results in an accurate reduced-order model.

Using the coefficient matrices in (6), we define new matrices V^i as below, where i is to be understood as an index, not as a power. With J = i + j denoting the total degree of the monomial σ_1^i σ_2^j, we set

    J = 0 :  B_M =: V^1 = [V^1_1]
    J = 1 :  [M_1 B_M, M_2 B_M] =: V^2 = [V^2_1, V^2_2]
    J = 2 :  [M_1^2 B_M, (M_1 M_2 + M_2 M_1) B_M, M_2^2 B_M] =: V^3 = [V^3_1, V^3_2, V^3_3]
    J = 3 :  [M_1^3 B_M, (M_2 M_1^2 + M_1^2 M_2 + M_1 M_2 M_1) B_M,
              (M_2 M_1 M_2 + M_2^2 M_1 + M_1 M_2^2) B_M, M_2^3 B_M] =: V^4 = [V^4_1, V^4_2, V^4_3, V^4_4]
    ...
    J = m :  [M_1^m B_M, ..., M_2^m B_M] =: V^{m+1} = [V^{m+1}_1, ..., V^{m+1}_{m+1}]

These definitions are based on the observation that total degree J = i corresponds to exactly i + 1 coefficient matrices. Setting V^i_0 = 0 and V^i_{i+1} = 0 for i = 1, 2, ..., m+1, we derive the following recursions:

    V^i_j = M_2 V^{i-1}_{j-1} + M_1 V^{i-1}_j,    j = 1, 2, ..., i;  i = 2, 3, ..., m+1.    (10)

Based on the recursions (10), we propose an algorithm that computes V as in (6) using a modified Gram-Schmidt process; it is stated as Algorithm 1 below. In the algorithm, p is the number of columns of B_M, i.e., the number of inputs; B_M(:, i) is the i-th column of B_M, i = 1, 2, ..., p; tol is a tolerance deciding linear dependence of the next computed column of V; V^i_j(:, l) is the l-th column of the matrix V^i_j; and s_0 is the expansion point in (4).
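As a quick consistency check of (10): since M_1 and M_2 do not commute, expanding the degree-2 term of (7) directly gives

    (σ_1 M_1 + σ_2 M_2)^2 B_M = σ_1^2 M_1^2 B_M + σ_1 σ_2 (M_1 M_2 + M_2 M_1) B_M + σ_2^2 M_2^2 B_M,

which reproduces the three blocks of V^3 above; in particular V^3_2 = M_2 V^2_1 + M_1 V^2_2 = (M_2 M_1 + M_1 M_2) B_M, exactly as (10) states.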

Algorithm 1

1. Apply the modified Gram-Schmidt process to the columns of V^1 = B_M:
       V^1_1(:, 1) = B_M(:, 1) / ||B_M(:, 1)||;
       for i = 2 : p
           w = B_M(:, i);
           for j = 1 : i-1
               w = w - ((V^1_1(:, j))^T w) V^1_1(:, j);
           end
           if ||w|| > tol, V^1_1(:, i) = w / ||w||; else V^1_1(:, i) = 0; end
       end
2. Apply the modified Gram-Schmidt process to the columns of V^i_j, i = 2, 3, ..., m+1, j = 1, ..., i:
       for i = 2 : m+1, j = 1 : i, l = 1 : p do
           w = M_2 V^{i-1}_{j-1}(:, l) + M_1 V^{i-1}_j(:, l);    (*)
           for t_i = 1, ..., i-1, t_j = 1, ..., t_i, t = 1, ..., p do
               w = w - ((V^{t_i}_{t_j}(:, t))^T w) V^{t_i}_{t_j}(:, t);
           end
           for t_j = 1, ..., j-1, t = 1, ..., p do
               w = w - ((V^i_{t_j}(:, t))^T w) V^i_{t_j}(:, t);
           end
           for t = 1, ..., l-1 do
               w = w - ((V^i_j(:, t))^T w) V^i_j(:, t);
           end
           if ||w|| > tol, V^i_j(:, l) = w / ||w||; else V^i_j(:, l) = 0; end
       end
3. Delete the zero columns in V^i, yielding Ṽ^i, i = 1, 2, ..., m+1.
4. V = [Ṽ^1, Ṽ^2, ..., Ṽ^{m+1}].
5. If s_0 is not a real number, set range{V} = span{Re(V), Im(V)}.

Remark 1. a) In step 2, (*), multiplications with zero columns can be avoided, thus saving a matrix multiplication and an application of G^{-1}. Moreover, if both V^{i-1}_{j-1}(:, l) and V^{i-1}_j(:, l) are zero, the whole orthogonalization step can be skipped. This is implemented with conditional statements and careful bookkeeping of the zero columns (which must be kept until step 3 for dimension consistency); the necessary statements are not shown in Algorithm 1 to keep the algorithmic description brief.
b) A more efficient variant of Algorithm 1 would apply modified Gram-Schmidt orthogonalization only on level i of the recursion and then run modified Gram-Schmidt again at step 4 in order to obtain V with orthonormal columns. Whether this results in noticeable numerical inaccuracies requires further investigation.
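A compact Python sketch of Algorithm 1 (ImRecV), reusing the operator handles M1, M2 and the start block B_M assumed in the earlier sketches. For brevity it omits the zero-column bookkeeping optimizations of Remark 1a (zero columns are simply carried along until the final assembly), and step 5 is realized with a QR factorization:

    import numpy as np

    def imrecv_basis(M1, M2, B_M, m, tol=1e-10):
        """Algorithm 1 (sketch): build the blocks V^i_j via recursion (10),
        orthonormalizing column by column with modified Gram-Schmidt."""
        n, p = B_M.shape
        cols = []                                    # accepted orthonormal columns

        def orth_col(w):
            """MGS of w against all accepted columns; zero column if deflated."""
            for v in cols:
                w = w - (v.conj() @ w) * v
            nrm = np.linalg.norm(w)
            if nrm > tol:
                w = w / nrm
                cols.append(w)
                return w
            return np.zeros_like(w)                  # kept for dimension consistency

        # step 1: level J = 0, the single block V^1_1 built from B_M
        prev = [np.column_stack([orth_col(B_M[:, l]) for l in range(p)])]
        zero = np.zeros((n, p), dtype=B_M.dtype)
        # step 2: levels J = 1, ..., m via V^i_j = M2 V^{i-1}_{j-1} + M1 V^{i-1}_j
        for i in range(1, m + 1):
            cur = []
            for j in range(i + 1):                   # level i has i+1 blocks
                left = prev[j - 1] if j >= 1 else zero     # V^{i-1}_{j-1} (or 0)
                right = prev[j] if j < i else zero         # V^{i-1}_j (or 0)
                W = M2(left) + M1(right)
                cur.append(np.column_stack([orth_col(W[:, l]) for l in range(p)]))
            prev = cur
        # steps 3-4: zero columns were never accepted, so this is the deflated V
        V = np.column_stack(cols)
        # step 5: for complex s_0, use span{Re(V), Im(V)}, re-orthonormalized via QR
        if np.iscomplexobj(V):
            V, _ = np.linalg.qr(np.column_stack([V.real, V.imag]))
        return V

Here the tolerance tol performs the deflation that determines the final order q reported in the tables below; choosing m = J corresponds to using all coefficients of σ_1^i σ_2^j with i + j ≤ J.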

It is shown in the next section that the order of the reduced-order model computed by Algorithm 1 is smaller than the order of the one derived via (9). The improved algorithm is thus well suited for the parametric system from computational electromagnetics considered in this paper; its performance is illustrated using an industrial test example in the next section.

4 Simulation Results

In this section, we compare the three introduced methods for computing V when applied to an industrial test case (provided by Dr. S. Reitzinger, CST Computer Simulation Technology, Darmstadt, Germany). For convenience, we call the method that directly computes V "DirectV"; the recursive method of Section 2.2 is named "RecV"; and the optimized method proposed in Section 3 is called "ImRecV", since it is an improved method based on both DirectV and RecV. The order of the original system is n = 29,295.

We show the numerical instability of DirectV in Table 1. In Table 2, we compare the orders of the reduced-order models derived by RecV and ImRecV. In both tables, J is defined as above, i.e., the coefficients corresponding to σ_1^i σ_2^j with i + j = 0, 1, 2, ..., J are used to compute V, and q is the number of columns of the final projection matrix V and thus also the order of the reduced-order model.

From Table 1 we see that although DirectV and ImRecV use the same coefficient matrices (6) to compute V, the number of columns of the V computed by DirectV is smaller than that of the V computed by ImRecV. The fourth column of Table 1 shows the difference in the number of columns; it increases with J, which is due to the numerical instability of DirectV: columns of V that would be linearly independent if computed exactly become linearly dependent in finite-precision arithmetic and are deleted.

Table 1. Numerical instability of DirectV

    J    q (DirectV)    q (ImRecV)    columns deleted by DirectV
    2    12             12            12 - 12 = 0
    4    32             38            38 - 32 = 6
    5    44             58            58 - 44 = 14
    6    52             82            82 - 52 = 30

In Table 2, we compare ImRecV with RecV. Recall that RecV computes V according to (9), where some coefficient matrices are computed separately, which results in a larger reduced-order model for the system considered here. The second row of Table 2 shows that RecV computes more columns than ImRecV; therefore, the reduced-order model computed by RecV is less compact than that computed by ImRecV, and thus less efficient for numerical simulations.
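The accuracy and timing comparison reported next amounts to sweeping both transfer functions over a frequency grid. A sketch of the sweep for the reduced model, reusing eval_H_hat from the projection sketch above (the grid matches the text; the reduced matrices are assumed to come from imrecv_basis and project):

    import numpy as np

    # assumed available from the earlier sketches: project, eval_H_hat, V (order q = 38)
    # A_hat, D_hat, B_hat = project(A, D, B, V)
    omegas = np.linspace(4e9, 8e9, 1000)              # 1000 frequency points
    H_red = np.array([eval_H_hat(1j * w, A_hat, D_hat, B_hat) for w in omegas])
    mags = np.abs(H_red).squeeze()                    # |H_hat(j*omega)|, as plotted in Fig. 1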

The accuracy of ImRecV is illustrated in Fig. 1, where the order of the reduced transfer function Ĥ(s) computed by ImRecV is q = 38. The solid line in Fig. 1 is the magnitude of the original transfer function H(jω) over the frequency range of interest in the application; the star markers correspond to Ĥ(jω) and match the solid line very well. Furthermore, the CPU time for evaluating H(jω) at 1000 frequency points for ω ∈ [4 × 10^9, 8 × 10^9] is around 8 hours, while the CPU time for evaluating Ĥ(jω) of order q = 38 at the same frequency points is only 0.84 seconds. All simulations were run on an IBM notebook with an Intel T2400 CPU (1.83 GHz) and 1 GB RAM.

Table 2. Compactness of the reduced-order models computed by RecV and ImRecV

    J             2     4      5      6
    q (RecV)      24    116    242    494
    q (ImRecV)    12    38     58     82

[Fig. 1: Comparison of transfer functions. |H(jω)| from the full simulation (solid line) and the reduced transfer function of order q = 38 (star markers), over the frequency range ω ∈ [4 × 10^9, 8 × 10^9] Hz.]

References

1. Wittig, T., Schuhmann, R., Weiland, T.: Model order reduction for large systems in computational electromagnetics. Linear Algebra Appl. 415(2-3), 499-530 (2006)
2. Daniel, L., Siong, O.C., Chay, L.S., Lee, K.H., White, J.: A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 23(5), 678-693 (2004)
3. Feng, L., Benner, P.: A robust algorithm for parametric model order reduction. Proc. Appl. Math. Mech. 7(1), 1021501-1021502 (2008)
4. Feldmann, P., Freund, R.: Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 14, 639-649 (1995)