Model Order Reduction for Nonlinear Eddy Current Problems

Daniel Klis, Stefan Burgard, Ortwin Farle, Romanus Dyczij-Edlinger
Saarland University, 66123 Saarbrücken, Germany, e-mail: d.klis@lte.uni-saarland.de

Abstract: Established order reduction methods for nonlinear descriptor systems project the nonlinear system onto a linear submanifold. This may lead to reduced models of large scale which provide little computational speed-up. To overcome this deficiency, the proposed method replaces the construction of a global projection matrix by an interpolation of locally reduced models. The present paper gives the underlying theory and demonstrates the accuracy and efficiency of the suggested approach by a finite-element model of an eddy current problem.

Keywords: Finite element method, Model reduction, Nonlinear systems, Eddy current problems, Descriptor systems

1. INTRODUCTION

Conventional finite-element (FE) implementations for nonlinear eddy current problems employ some implicit time-integration scheme together with an iterative technique, such as Newton's method [Kaltenbacher 2010], for solving auxiliary nonlinear boundary value problems (BVPs) in the spatial domain. Both implicit time stepping and Newton's method are computationally very expensive. In addition, not only design parameters such as the model geometry and material properties but also different excitations require completely new runs. As a result, solving these systems leads to an unmanageable computational effort.

Model order reduction (MOR) has become a well-established methodology for reducing computation times, and numerous methods are available for the linear case. In the less well-developed nonlinear case, one viable approach is the trajectory piecewise-linear (TPWL) algorithm, which linearizes the system at multiple points along several training trajectories [Rewienski and White 2003]. The resulting reduced-order model (ROM) approximates the original system by a weighted superposition of the linearized models. The TPWL method employs the proper orthogonal decomposition (POD) technique [Albunni et al. 2008] to generate a reduced basis for the linear models, by computing the singular value decomposition (SVD) of a matrix whose column vectors represent known system states of the nonlinear system.

One limitation of the TPWL method that is critical with respect to the applications considered in this paper is that, for large systems with many training inputs, generating the reduced basis becomes overly expensive, and the dimension of the ROM becomes rather high. To overcome both issues, this paper develops a local subspace approach, based on [Lohmann and Eid 2009]. The proposed method implicitly ensures that neither the number of vectors entering a single SVD nor the ROM size grows too large. In contrast to other approaches, it is able to handle locally reduced models of variable dimensions.

Section 2 discusses the TPWL algorithm: it gives a brief overview of projection-based MOR, motivates the piecewise-linearization procedure, and presents the equations for a classical, global TPWL-ROM. In Section 3.1 we propose a new strategy to select the linearization points. Then, Section 3.2 introduces a local MOR approach for TPWL models which employs state transformations for superimposing models of different reduced bases. Hereafter it is shown that FE eddy current problems fit easily into the presented order-reduction framework. The last section compares the numerical performance of the global and the local approach.
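
As a point of reference, the following minimal Python sketch shows the kind of solver the paper seeks to accelerate: backward-Euler time stepping combined with a Newton iteration for a semi-discrete nonlinear descriptor system E ẋ = f(x) + b u, the form introduced as (5) below. It is a generic illustration under our own naming, not the authors' implementation, and uses dense linear algebra for brevity.

# Minimal sketch (not the authors' code) of the conventional approach the paper
# seeks to accelerate: backward-Euler time stepping with a Newton iteration per
# step, applied to the semi-discrete system  E x' = f(x) + b u(t).
import numpy as np

def backward_euler_newton(E, f, jac, b, u, x0, dt, n_steps, tol=1e-10, max_iter=20):
    """Solve E x' = f(x) + b u(t) with backward Euler; Newton iteration per step."""
    x = x0.copy()
    traj = [x.copy()]
    for k in range(1, n_steps + 1):
        t = k * dt
        x_new = x.copy()                      # initial guess: previous state
        for _ in range(max_iter):
            # residual of the backward-Euler equation at the trial state
            r = E @ (x_new - x) / dt - f(x_new) - b * u(t)
            if np.linalg.norm(r) < tol:
                break
            # Newton system: (E/dt - J(x_new)) dx = -r
            A = E / dt - jac(x_new)
            x_new = x_new + np.linalg.solve(A, -r)
        x = x_new
        traj.append(x.copy())
    return np.array(traj)

# toy usage with a small artificial nonlinearity (for illustration only)
N = 4
E = np.eye(N)
b = np.ones(N)
f = lambda x: -x - 0.1 * x**3
jac = lambda x: -np.eye(N) - 0.3 * np.diag(x**2)
u = lambda t: 1.0
X = backward_euler_newton(E, f, jac, b, u, np.zeros(N), dt=1e-2, n_steps=50)

Every time step requires one or more large linear solves with the full-size Jacobian, which is what motivates replacing the full model by a reduced one.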

2. TPWL REDUCED MODELS

2.1 Projection-based MOR for linear systems

Consider a linear descriptor system of the form

E ẋ = A x + b u,   (1a)
y = c^T x,         (1b)

wherein x ∈ R^N is the state vector of the system, E, A ∈ R^(N×N) are constant matrices, b, c ∈ R^N are constant vectors, and u, y are the system input and output, respectively. If the order N of the system is very high, as in the case of FE models, one seeks to approximate the state vector in a low-dimensional subspace of order n ≪ N. If a basis of this subspace is explicitly available as the orthonormal columns of a projection matrix V ∈ R^(N×n), the approximate solution x̃ satisfies

x ≈ x̃ = V x̂,   (2)

with the reduced state vector x̂ ∈ R^n. Pre-multiplication of (1a) by V^T and using (2) yields the ROM

Ẽ dx̂/dt = Ã x̂ + b̃ u,   (3a)
y = c̃^T x̂,              (3b)

wherein

Ẽ = V^T E V ∈ R^(n×n),   (4a)
Ã = V^T A V ∈ R^(n×n),   (4b)
b̃ = V^T b ∈ R^n,         (4c)
c̃^T = c^T V ∈ R^(1×n).   (4d)

Since the ROM of order n no longer depends on the original model size, solving (3) becomes very efficient.

2.2 The TPWL method

Now turn to a nonlinear descriptor system of the form

E ẋ = f(x) + b u,   (5a)
y = c^T x,          (5b)

wherein f(x) ∈ R^N is a nonlinear function of the state. To apply projection-based MOR as described above, an affine parametrization of f is needed. Otherwise, the term V^T f(V x̂) would still require O(N) operations to evaluate. The TPWL algorithm generates the required affine parametrization by approximating f through a weighted sum of s linearized models:

f(x) ≈ Σ_{i=0}^{s-1} w_i(x) [ f(x_i) + J_i (x - x_i) ],   (6)

wherein x_i denote the linearization points and J_i = J(x_i) denote the Jacobians. The state-dependent weighting functions w_i(x) are commonly taken of the form [Rewienski and White 2003]

w_i(x) = α_i exp( -β ‖x - x_i‖² / ρ ),   ρ = min_j ‖x - x_j‖²,   (7)

wherein the exponent β is chosen by the user, and the scaling function α_i is determined such that the weights satisfy the partition-of-unity property

Σ_{i=0}^{s-1} w_i(x) = 1.   (8)

Thus, the resulting TPWL system reads

E ẋ = Σ_{i=0}^{s-1} w_i(x) [ J_i x + f(x_i) - J_i x_i ] + b u,   (9a)
y = c^T x.   (9b)

We will refer to (9) as the full piecewise-linear (PWL) model. Note that (9) fits into the projection MOR framework but is still nonlinear because of the weighting functions. The solution for x is obtained iteratively by making an initial guess x^(0) and then repeating the steps: compute the weighting functions w_i(x); solve (9) for x(w_i); until convergence is achieved.

In TPWL, the linearization points are chosen as follows: the original model (5) is solved along a number of training trajectories. From the resulting set of M state vectors, a suitable subset of s ≤ M states is selected to serve as the centers x_0, ..., x_{s-1} in (9a). The specific strategy employed in this paper is given in Section 3.1. The construction above implies that the TPWL method can only be used for approximating trajectories whose states are sufficiently close to the linearization points.

2.3 Conventional MOR for TPWL systems

MOR for TPWL systems is usually done by Krylov subspace methods or POD [Rewienski and White 2006]. The present work focuses on the latter, because it facilitates the re-use of training trajectories. The procedure starts by computing an orthogonal basis for the span of the training states [x_{t1}, x_{t2}, ..., x_{tM}] by means of an SVD:

[ x_{t1}, x_{t2}, ..., x_{tM} ] = U Σ W^T.   (10)

Herein U ∈ R^(N×M) and W ∈ R^(M×M) are orthogonal, and Σ is a diagonal matrix holding the singular values. Then, the first n ≪ N columns of U, corresponding to the n largest singular values, are chosen as the basis of the reduced-order state space. Hence the projection matrix is given by

V = U(:, 1:n),   (11)

and, by (3), the resulting ROM reads

Ẽ dx̂/dt = Σ_{i=0}^{s-1} w_i(x̂) [ J̃_i x̂ + g̃_i ] + b̃ u,   (12a)
y = c̃^T x̂,   (12b)

with

J̃_i = V^T J_i V,   (13a)
Ẽ = V^T E V,   (13b)
g̃_i = V^T ( f(x_i) - J_i x_i ),   (13c)
b̃ = V^T b,   (13d)
c̃^T = c^T V.   (13e)

Because V contains information about all training states, we will refer to (12) as the globally reduced model.
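
To make the global procedure (10)-(13) and the weights (7)-(8) concrete, the following Python sketch assembles a global TPWL-ROM from given training data. The snapshot matrix X, the linearization points xs, the Jacobians Js, and the values f_vals = [f(x_i)] are assumed to come from the training runs; all names are our own, and the exponent beta stands for the user-chosen parameter of (7).

# Sketch of the global TPWL reduction (10)-(13) and of the weights (7)-(8).
# Training data (snapshots, linearization points, Jacobians) are assumed given.
import numpy as np

def pod_basis(X, n):
    """POD basis of the training snapshots, cf. (10)-(11)."""
    U, S, Wt = np.linalg.svd(X, full_matrices=False)
    return U[:, :n]

def project_tpwl(V, E, b, c, xs, Js, f_vals):
    """Reduced quantities of (13) for all linearization points."""
    Et, bt, ct = V.T @ E @ V, V.T @ b, V.T @ c
    Jt = [V.T @ J @ V for J in Js]
    gt = [V.T @ (fi - J @ xi) for fi, J, xi in zip(f_vals, Js, xs)]
    return Et, bt, ct, Jt, gt

def tpwl_weights(x, xs, beta=25.0):
    """Normalized weights of (7)-(8), built from squared distances."""
    d2 = np.array([np.dot(x - xi, x - xi) for xi in xs])
    rho = max(d2.min(), 1e-30)            # scaling by the nearest point
    w = np.exp(-beta * d2 / rho)
    return w / w.sum()                    # partition of unity (8)

In the offline phase the truncation order n would be chosen from the singular-value decay, and the reduced matrices would be stored for the online stage.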

3. PROPOSED METHOD

3.1 Selection of linearization points

The selection strategy for the linearization points has to balance the conflicting goals of accuracy and speed. While a larger number of linearization points leads to a more accurate model, it also increases the costs of ROM generation and evaluation. Hence, the general idea [Tiwary and Rutenbar 2005] is to keep the linearized model about a point x_i as long as neither the current state x along a training trajectory nor the associated Jacobian J(x) differs too much. In contrast to existing approaches, the proposed method utilizes the information contained in the linearized model, by comparing x not against x_i but against the approximate current state x_lin obtained from the linearized model.

The algorithm proceeds as follows: the first linearized model is built around the initial state. Then, whenever the condition

‖x - x_lin‖ / ‖x‖ + ‖J(x) - J(x_i)‖ / ‖J(x_i)‖ < τ   (14)

is violated, wherein τ is a user-defined threshold, the current state x is added to the set of linearization points.

3.2 Local projection MOR

The conventional MOR approach discussed above exhibits two limitations that are critical for the present work. First, the ROM size increases with the number of substantially different training trajectories, so that solving the ROM becomes inefficient. By substantially different, we mean trajectories that visit distant regions of the state space and of the nonlinear function f, respectively, and hence contribute strongly to the subspace (10). In consequence, by the very construction of V in (11), the basis of each linearized model tends to contain an increasing number of states that are located far away from its center and, hence, outside its region of validity. Second, the computational complexity of the reduced SVD in (10) is roughly O(N M²) [Trefethen and Bau 1997], i.e. quadratic in the number of training states. Hence, the numerical cost of the SVD increases rapidly and may render ROM generation inefficient, if not impossible.

Both points motivate splitting the global basis into several local ones, which may be carried out in different ways; see [Gu and Roychowdhury 2008], [Tiwary and Rutenbar 2006], and [Lohmann and Eid 2009]. In the present work, k ≤ s subsets of state vectors X_j = {x_{j1}, ..., x_{j m_j}} are formed, each one containing m_j subsequent states of a single training trajectory. Then, local projection matrices V_j are generated by

[ x_{j1}, ..., x_{j m_j} ] = U_j Σ_j W_j^T,   (15)
V_j = U_j(:, 1:n_{V_j}).   (16)

In contrast to existing methods, where all V_j are of the same size, the present work keeps only the column vectors corresponding to singular values above a relative magnitude limit ε. In this way, the local projection matrices V_j differ in column dimension but lead to balanced error levels in the local bases. Keeping the local ROM size variable proves particularly advantageous when the nonlinear system exhibits regions of nearly linear behavior, e.g., when the system approaches a steady state. Then the local ROMs assume very small size, as will be shown in the numerical example of Section 5.
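
Both ingredients of this section are short in code. The sketch below (our own notation, not the authors' implementation) checks the adaptive criterion (14) for a candidate state and builds a variable-rank local basis according to (15)-(16); the way the two relative deviations are combined in (14) follows the reconstruction given above.

# Sketch of the adaptive criterion (14) and the variable-rank local bases (15)-(16).
import numpy as np

def needs_new_linearization_point(x, x_lin, J, J_i, tau):
    """True if the linearized model about x_i is no longer trusted, cf. (14)."""
    state_dev = np.linalg.norm(x - x_lin) / np.linalg.norm(x)
    jac_dev = np.linalg.norm(J - J_i) / np.linalg.norm(J_i)
    return state_dev + jac_dev >= tau

def local_basis(X_j, eps=1e-8):
    """Variable-rank POD basis (15)-(16): keep singular values above eps * sigma_max."""
    U, S, _ = np.linalg.svd(X_j, full_matrices=False)
    n_j = int(np.sum(S > eps * S[0]))
    return U[:, :n_j]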

The local ROM for the linearized model about x_i ∈ X_j is generated by means of the associated projection matrix V_j, which provides a reduced basis for X_j. Note that some coordinate transformation is required to carry out the superposition of ROMs in (12), because the local bases differ. [Lohmann and Eid 2009] suggest a projection into a subspace common to all models involved in the superposition, which can again be computed by means of an SVD. Since we consider models of variable dimensions, numerical instabilities prevent us from using the projection matrices themselves as input to the SVD. Instead, we propose to employ the states of the training trajectories. The result is a state-dependent common projection matrix,

[ X_{V_a}, X_{V_b}, ... ] = U Σ W^T,   (17)
R(x) = U(:, 1:n_{R_ab...}),   (18)

where X_{V_a} = [ x_{a1}, ..., x_{a m_a} ], and V_a, V_b, ... denote all projection matrices corresponding to models with nonzero weighting functions w_i(x). However, in the present context, this approach is limited by the fact that the offline step of computing R(x) requires all combinations of the k matrices V_j and may thus become very expensive. In [Tiwary and Rutenbar 2006], a similar approach turned out to be more costly than constructing the global projector.

Thus, the present algorithm considers only pairs of projection matrices in the offline step of the common-subspace computation. This restriction is justified as follows: except for degenerate cases, the system state always lies somewhere between its two nearest linearization points, and one may assume that it is the corresponding two linearized models that mainly determine the system behavior. In consequence, only k(k-1)/2 common subspaces exist, and the evaluation of the sums over all models becomes very fast. The dimension of one common subspace, dim range R(V_i, V_j), is n_{R_ij} ≤ n_{V_i} + n_{V_j} ≪ N, which is still very small compared to N. By this construction, R(x) is piecewise constant. In contrast to existing methods, it also changes size because, similar to (16), for all R(x) only the columns corresponding to singular values above the same relative magnitude limit are used. We note that the weighting functions have to be truncated and renormalized accordingly.

To realize the transformation into the common subspace, we define state-dependent transformation matrices T_i by

T_i(x) = T_i^(ij) = R(x)^T V_i = R_ij^T V_i.   (19)

Here, superscripts in parentheses denote the indices of the projection matrices linked with the indexed quantity. So, in (19), R(x) is the common subspace of V_i and V_j, because x stems from a superposition of linearized models that have been reduced with V_i and V_j, respectively. By (19), pre-multiplication with T_i^(ij) expresses an i-state in ij-coordinates:

x̂^(ij) = R_ij^T x = R_ij^T V_i x̂^(i) = T_i^(ij) x̂^(i).   (20)

Vice versa, the best approximation in i-coordinates to an ij-state is given by

x̂^(i) = V_i^T x = V_i^T R_ij x̂^(ij) = ( T_i^(ij) )^T x̂^(ij).   (21)

The transformation (21) cuts off the components of x̂^(ij) orthogonal to range V_i. By the definition of the common subspace, the T_i(x) do not depend on the size of the full model. In addition, they are piecewise constant. Thus, with the help of (20) and (21), the following locally reduced model for (9) is obtained:

Σ_i w_i(x̂) T_j(x̂) Ẽ_j T_j(x̂)^T dx̂/dt = Σ_i w_i(x̂) T_j(x̂) [ J̃_i T_j(x̂)^T x̂ + g̃_i + b̃_j u ],   (22a)
y = Σ_i w_i(x̂) c̃_j^T T_j(x̂)^T x̂,   (22b)

with

J̃_i = V_j^T J_i V_j,   (23a)
Ẽ_j = V_j^T E V_j,   (23b)
g̃_i = V_j^T ( f(x_i) - J_i x_i ),   (23c)
b̃_j = V_j^T b,   (23d)
c̃_j^T = c^T V_j.   (23e)

Here, V_j is the projection matrix used for reducing the i-th linearized model. In (22), the partition-of-unity property (8) has been used. It can be seen that it suffices to store the k(k-1) transformation matrices T_i(x) rather than any matrices R of the dimension of the full model. On the other hand, k different versions of Ẽ_j, b̃_j, and c̃_j^T exist, compared to just one in the case of the standard TPWL model.

3.3 Time stepping

In order to solve the system (22), some kind of time integration has to be performed. In the algebraic discretization of dx̂/dt, state vectors evaluated at discrete points in time appear. Although the application to (12) is straightforward, the time-dependence of the basis of the locally reduced model requires some attention. In the worst case, the common projection matrix of (18) changes from one time step to the next, say from R_{γδ} to R_{αβ} with {α, β} ∩ {γ, δ} = ∅. Let x̂ denote the reduced state vector of the previous time step. First, this vector needs to be decomposed into components in V_γ and V_δ. From

R_{γδ} x̂ = V_γ x̂_γ + V_δ x̂_δ,   (24)

the system of linear equations

[ I           V_γ^T V_δ ] [ x̂_γ ]   [ ( T_γ^(γδ) )^T x̂ ]
[ V_δ^T V_γ   I         ] [ x̂_δ ] = [ ( T_δ^(γδ) )^T x̂ ]   (25)

is obtained. If range V_γ ∩ range V_δ ≠ {0}, the decomposition is not unique, which does not matter, though. The second step is to approximate both contributions in the new bases:

V_γ x̂_γ = V_α x̂_α^(γ) + V_β x̂_β^(γ) + ê^(γ),   (26a)
V_δ x̂_δ = V_α x̂_α^(δ) + V_β x̂_β^(δ) + ê^(δ).   (26b)

In (26), the components ê^(γ), ê^(δ) orthogonal to range V_α and range V_β will be lost. Thus, the sought transformation reads

x̂_{αβ} = T_α^(αβ) ( x̂_α^(γ) + x̂_α^(δ) ) + T_β^(αβ) ( x̂_β^(γ) + x̂_β^(δ) ).   (27)

Equation (27) enables us to represent any reduced state vector in any local subspace, at the cost of storing k(k-1) reduced-order quantities of the form V_i^T V_j ∈ R^(n_{V_i} × n_{V_j}) in addition to the transformation matrices T.

Also note that the weighting functions for the local method can be computed very efficiently, by exploiting reduced-order quantities. For the distance from a given system state x̂_{αβ} to a linearization point x_ℓ, we have

‖ R_{αβ} x̂ - x_ℓ ‖² = x̂^T ( R_{αβ}^T R_{αβ} ) x̂ + x_ℓ^T x_ℓ - 2 x_ℓ^T R_{αβ} x̂,   with R_{αβ}^T R_{αβ} = I,   (28)

and the corresponding weighting function w_ℓ reads

w_ℓ = α_ℓ exp( -β ( ‖x̂‖² + ‖x_ℓ‖² - 2 x̂_{αβ,ℓ}^T x̂ ) / ρ ).   (29)

Thus, to compute the weights, m k(k-1) vectors x̂_{αβ,ℓ}^T = x_ℓ^T R_{αβ} ∈ R^(n_{R_{αβ}}) and m scalars ‖x_ℓ‖² need to be computed in an offline step. Then, the evaluation of (29) requires cheap operations on reduced-order quantities only.
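
The basis change of (25)-(27) operates on small, precomputed matrices only. The following Python sketch transfers the reduced state from the common subspace of the previous index pair (g, d) to that of the new pair (a, b), using only stored quantities of the form V_i^T V_j and T_i^(jk) = R_jk^T V_i. The dictionary layout and the use of least-squares/normal-equation solves to realize (25) and (26) are our own choices consistent with the text, not a verbatim transcription of the authors' algorithm.

# Sketch of the basis change of Section 3.3, eqs. (25)-(27), in reduced coordinates.
# G[(i, j)] stores V_i^T V_j; T[(i, (j, k))] stores R_{jk}^T V_i.
import numpy as np

def transfer_state(xh, g, d, a, b, G, T):
    """Map the reduced state xh from common subspace R_{gd} to R_{ab}."""
    n_g, n_d = G[(g, d)].shape
    # (25): decompose R_{gd} xh into V_g xg + V_d xd (least squares; the
    # decomposition need not be unique, which is harmless here)
    M_old = np.block([[np.eye(n_g), G[(g, d)]],
                      [G[(d, g)],   np.eye(n_d)]])
    rhs = np.concatenate([T[(g, (g, d))].T @ xh,
                          T[(d, (g, d))].T @ xh])
    sol = np.linalg.lstsq(M_old, rhs, rcond=None)[0]
    xg, xd = sol[:n_g], sol[n_g:]

    # (26): re-express each contribution in the new bases V_a, V_b
    # (normal equations of the least-squares fit onto span(V_a, V_b))
    n_a, n_b = G[(a, b)].shape
    M_new = np.block([[np.eye(n_a), G[(a, b)]],
                      [G[(b, a)],   np.eye(n_b)]])

    def reexpand(i, xi):
        r = np.concatenate([G[(a, i)] @ xi, G[(b, i)] @ xi])
        s = np.linalg.lstsq(M_new, r, rcond=None)[0]
        return s[:n_a], s[n_a:]

    xa_g, xb_g = reexpand(g, xg)
    xa_d, xb_d = reexpand(d, xd)

    # (27): assemble the state in the coordinates of the new common subspace
    return T[(a, (a, b))] @ (xa_g + xa_d) + T[(b, (a, b))] @ (xb_g + xb_d)

Such a transfer is only invoked when the pair of nearest linearization points, and hence the active common subspace, changes between two time steps.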

4. APPLICATION TO EDDY CURRENT PROBLEMS

Consider a domain Ω whose boundary is partitioned into two parts, ∂Ω = Γ_D ∪ Γ_N. Let n denote the outward-pointing unit normal vector on the boundary, and let µ, σ, and J_i stand for the magnetic permeability, the electric conductivity, and the impressed current density, respectively. The quasi-static initial boundary value problem (IBVP) for the ungauged magnetic vector potential A is given by

curl( µ^(-1) curl A ) + σ ∂A/∂t = J_i   in Ω,   (30a)
n × A = 0   on Γ_D, t > 0,   (30b)
n × ( µ^(-1) curl A ) = 0   on Γ_N, t > 0,   (30c)
A = 0   in Ω, at t = 0.   (30d)

In view of the numerical example of Section 5, we use cylindrical coordinates (r, ϕ, z) and consider the axisymmetric case with ∂/∂ϕ = 0. In this case, the spatial domain reduces to a subset of the z-r half-plane, and the vector potential may be taken as

A(r, ϕ, z; t) = A(r, z; t) e_ϕ,   (31)

wherein e_ϕ is the unit vector in ϕ-direction. Due to (31), the IBVP (30) is implicitly gauged and possesses a unique solution; see e.g. [Kaltenbacher 2010]. The IBVP is nonlinear because, for ferromagnetic materials, the magnetic permeability is a monotonically decreasing function of the magnitude of the magnetic flux density B. In view of B = curl A, we have

µ = µ(|B|) = µ(|curl A|).   (32)

Without loss of generality, we consider a single coil of cross-sectional area Γ_C with N windings and ohmic resistance R. The coil is driven by the terminal voltage U(t), and the resulting coil current I(t) represents the output of the system. Under the assumptions that J_i is constant and the windings are free of eddy currents, spatial semi-discretization of the IBVP (30) by the FE method [Kaltenbacher 2010] leads to a system of the form

[ T       0 ]  d  [ x ]   [ -S(x)   b  ] [ x ]   [  0  ]
[ b^T/R   0 ]  dt [ I ] = [   0    -1  ] [ I ] + [ 1/R ] U,   (33a)

y = [ 0  1 ] [ x ; I ].   (33b)

Here, x ∈ R^N and b ∈ R^N denote the state and excitation vector, respectively, T ∈ R^(N×N) is the mass matrix, and S(x) ∈ R^(N×N) is the state-dependent stiffness matrix. Specifically, we have

S_ij(x) = ∫_Ω curl w_i · µ^(-1)(x) curl w_j dΩ,   (34a)
T_ij = ∫_Ω w_i · σ w_j dΩ,   (34b)
b_i = N / ( J_i Γ_C ) ∫_Ω w_i · J_i dΩ.   (34c)

In (34), w_i = w_i e_ϕ, with w_i ∈ H^1(Ω), denote the FE basis functions. The nonlinear descriptor system (33) exhibits the same structure as (5) and is hence accessible to the MOR methodology of Section 3.

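To illustrate how (33)-(34) instantiate the abstract form (5), the following Python sketch assembles the block operators E, f, and b for the augmented state [x; I]. T_fe and S_of_x stand in for the assembled FE matrices of (34); the sign conventions follow the block structure written in (33) above, which is itself a reconstruction.

# Sketch: the coupled field-circuit system (33) in descriptor form E z' = f(z) + b U.
import numpy as np
import scipy.sparse as sp

def descriptor_blocks(T_fe, S_of_x, b_fe, R):
    """Return E, f, b for the augmented state z = [x; I]."""
    N = T_fe.shape[0]
    # E = [[T, 0], [b^T/R, 0]]  (singular, hence a descriptor system)
    E = sp.bmat([[T_fe, None],
                 [sp.csr_matrix(b_fe.reshape(1, -1) / R), sp.csr_matrix((1, 1))]],
                format="csr")

    def f(z):
        x, I = z[:N], z[N]
        # first block row: -S(x) x + b I ; second block row: -I
        return np.concatenate([-S_of_x(x) @ x + b_fe * I, [-I]])

    b = np.concatenate([np.zeros(N), [1.0 / R]])   # input vector multiplying U(t)
    return E, f, b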

5. NUMERICAL EXAMPLE

Figure 1 presents the structure of a magnetic levitation system [Glück et al. 2011]. It consists of an electromagnet with a nonlinear ferrite core, which suspends a hollow steel ball in the air. For the purposes of this paper, the distance between the coil and the ball is fixed at 5 mm. The system is excited by a pulse-width modulated voltage U = ±11.4 V of period T = 14 µs. In the numerical experiments below, the transient response of the current during the magnetizing of the coil is recorded for 1000 time steps of duration Δt = T/100. Time integration in (33) is carried out by a backward Euler scheme [Hairer et al. 2009]. All calculations are executed on an Intel i7-2600K CPU with 16 GB of RAM.

Fig. 1. Electromagnetic levitation system. Dimensions in mm: b_a = b_i = 17.5, r_aa = 39.1, r_ai = 34.5, r_ia = 15.6, r_ii = 4.5, h_t = 1.5, h_b = 4, r_ca = 3.5, r_ci = 16.6. Windings: N = 45, J_i ≠ 0, σ = 0 S/m. Core: ferrite material N27 [EPCOS 2006], σ = 0 S/m. Steel ball: µ_r = 1, σ = 10^7 S/m.

The variable duty cycle d is taken as a parameter of the model. We use the d = 0.66, d = 0.73, d = 0.81, and d = 0.85 trajectories as training trajectories for the TPWL algorithm and test with the d = 0.77 trajectory. The adaptive method of Section 3.1 yields a total of 66 linearization points, for a threshold value of τ = 10^-5 in (14). Figure 2 presents the test trajectory computed by the proposed local MOR method.

Fig. 2. Transient response of the current I for d = 0.77: (a) entire trajectory, (b) close-up. Curves: full nonlinear model, globally reduced, locally reduced.
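
For completeness, a small sketch of the excitation used in the experiments: it assumes that the pulse-width modulated terminal voltage switches between +U0 and -U0 within each period according to the duty cycle d, with the period value as given in the text; the exact switching convention is not spelled out in the paper.

# Sketch of the PWM excitation, assuming a +U0 / -U0 switching scheme per period.
def pwm_voltage(t, d, U0=11.4, T=14e-6):
    """Return the PWM terminal voltage at time t for duty cycle d."""
    phase = (t % T) / T
    return U0 if phase < d else -U0

# example: sample one period of the d = 0.77 test excitation
samples = [pwm_voltage(k * 14e-6 / 100, d=0.77) for k in range(100)]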

Table. Memory requirements. Process Model Memory in MB ROM generation global ROM 34 peak local ROM 49 peak training data 15 Storage global ROM local ROM 5 Concerning the SVD, we always keep the singular values of relative magnitude larger than 1 8. Each training trajectory is subdivided into 1 sets X j, each one containing 1 states. If a linearization point is located within the first last vectors of 15, the previous following 3 states are also included in the SVD. This approach is heuristic and will be investigated more rigorously in the future. Figure 3 compares the global and local TPWL method to the original nonlinear FE model. It can be seen that both techniques perform very similarly: the mean relative error of the global and local ROM is 3.3 1 4 and 4.4 1 4, respectively. However, Table 1 demonstrates that the simulation times of the locally reduced ROM are significantly shorter, especially when the system approaches the steady state, as in the case of 1 4 time steps. This effect is due to the fact that the dimension of the global ROM is always 353, whereas the size of the local ROM varies between 37 and 5. For comparison, the underlying FE model 33 features 31597 degrees of freedom. Figure 4 illustrates the behavior of the dimension of the local projection matrices along the training trajectory d =.81. Small values occur in the beginning when the ferrite core behaves linearly and in the end when the system approaches the steady state. Table gives a comparison of the memory requirements of the global and local TPWL method. Both peak memory consumption during the MOR generation phase, where the SVD is a major contributor, and storage costs for the ROMs, which are crucial during the online solution process, are presented. The great advantages of the proposed, local, method over the conventional, global, approach are evident. Albunni, M., Rischmuller, V., Fritzsche, T., and Lohmann, B. 8. Model-order reduction of moving nonlinear electromagnetic devices. Magnetics, IEEE Transactions on, 447, 18 189. EPCOS 6. Ferrites and accessories, SIFERRIT material N7. Glück, T., Kemmetmüller, W., Tump, C., and Kugi, A. 11. A novel robust position estimator for self-sensing magnetic levitation systems based on least squares identification. Control Engineering Practice, 19, 146 157. Gu, C. and Roychowdhury, J. 8. Model reduction via projection onto nonlinear manifolds, with applications to analog circuits and biochemical systems. In Computer-Aided Design, 8. ICCAD 8. IEEE/ACM International Conference on, 85 9. Hairer, E., Nørsett, S.P., and Wanner, G. 9. Solving Ordinary Differential Equations I: Nonstiff Problems Springer Series in Computational Mathematics v. 1. Springer, nd edition. Kaltenbacher, M. 1. Numerical Simulation of Mechatronic Sensors and Actuators. Springer, edition. Lohmann, B. and Eid, R. 9. Efficient order reduction of parametric and nonlinear models by superposition of locally reduced models. In Methoden und Anwendungen der Regelungstechnik. Shaker Verlag. Rewienski, M. and White, J. 3. A trajectory piecewise-linear approach to model order reduction and fast simulation of nonlinear circuits and micromachined devices. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on,, 155 17. Rewienski, M. and White, J. 6. Model order reduction for nonlinear dynamical systems based on trajectory piecewise-linear approximations. Linear Algebra and its Applications, 415-3, 46 454. Tiwary, S.K. and Rutenbar, R.A. 5. 

6. CONCLUSION

To overcome the high memory requirements of conventional TPWL methods for nonlinear descriptor systems, a new algorithm that employs locally reduced models has been proposed. The numerical example demonstrates similar accuracy, faster simulation speed, and significantly lower memory consumption of the suggested approach.

REFERENCES

Albunni, M., Rischmuller, V., Fritzsche, T., and Lohmann, B. (2008). Model-order reduction of moving nonlinear electromagnetic devices. IEEE Transactions on Magnetics, 44(7), 1822-1829.
EPCOS (2006). Ferrites and accessories, SIFERRIT material N27.
Glück, T., Kemmetmüller, W., Tump, C., and Kugi, A. (2011). A novel robust position estimator for self-sensing magnetic levitation systems based on least squares identification. Control Engineering Practice, 19, 146-157.
Gu, C. and Roychowdhury, J. (2008). Model reduction via projection onto nonlinear manifolds, with applications to analog circuits and biochemical systems. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2008), 85-92.
Hairer, E., Nørsett, S.P., and Wanner, G. (2009). Solving Ordinary Differential Equations I: Nonstiff Problems (Springer Series in Computational Mathematics). Springer, 2nd edition.
Kaltenbacher, M. (2010). Numerical Simulation of Mechatronic Sensors and Actuators. Springer, 2nd edition.
Lohmann, B. and Eid, R. (2009). Efficient order reduction of parametric and nonlinear models by superposition of locally reduced models. In Methoden und Anwendungen der Regelungstechnik. Shaker Verlag.
Rewienski, M. and White, J. (2003). A trajectory piecewise-linear approach to model order reduction and fast simulation of nonlinear circuits and micromachined devices. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 22(2), 155-170.
Rewienski, M. and White, J. (2006). Model order reduction for nonlinear dynamical systems based on trajectory piecewise-linear approximations. Linear Algebra and its Applications, 415(2-3), 426-454.
Tiwary, S.K. and Rutenbar, R.A. (2005). Scalable trajectory methods for on-demand analog macromodel extraction. In Proceedings of the 42nd Annual Design Automation Conference (DAC 2005), 43-48. ACM.
Tiwary, S.K. and Rutenbar, R.A. (2006). Faster, parametric trajectory-based macromodels via localized linear reductions. In Proceedings of the 2006 IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2006), 876-883. ACM.
Trefethen, L.N. and Bau, D. (1997). Numerical Linear Algebra. SIAM.