
EINDHOVEN UNIVERSITY OF TECHNOLOGY Department of Mathematics and Computing Science RANA03-15 July 2003 The application of preconditioned Jacobi-Davidson methods in pole-zero analysis by J. Rommes, C.W. Bomhof, H.A. van der Vorst, E.J.W. ter Maten Reports on Applied and Numerical Analysis Department of Mathematics and Computer Science Eindhoven University of Technology P.O. Box 513 5600 MB Eindhoven, The Netherlands ISSN: 0926-4507

The Application of Preconditioned Jacobi-Davidson Methods in Pole-zero Analysis

J. Rommes¹, C.W. Bomhof², H.A. van der Vorst¹, and E.J.W. ter Maten³

¹ Utrecht University, The Netherlands, jrommes@cs.uu.nl
² Plaxis BV, Delft, The Netherlands
³ Philips Research Laboratories, Eindhoven University of Technology, The Netherlands

Abstract. The application of Jacobi-Davidson style methods in electric circuit simulation will be discussed in comparison with other iterative methods (Arnoldi) and direct methods (QR, QZ). Numerical results show that the use of a preconditioner to solve the correction equation may improve the Jacobi-Davidson process, but may also cause computational and stability problems when solving the correction equation. Furthermore, some techniques to improve the stability and accuracy of the process will be given.

1 Introduction

Pole-zero analysis is used in electrical engineering to analyse the stability of electric circuits [10,11,14]. For example, if a circuit is designed to be an oscillator, pole-zero analysis is one of the ways to verify that the circuit indeed oscillates. Another application is the verification of reduced order models over a wide frequency range [9]. Because the complexity of the circuits designed nowadays grows, together with the frequency range of interest, there is a need for faster algorithms that do not neglect accuracy.

In this paper, Sect. 2 introduces the pole-zero problem. Section 3 describes the Jacobi-Davidson style methods as alternatives to conventional methods for the pole-zero problem. In Sect. 4, some typical numerical problems and techniques are discussed. In Sect. 5 the methods are compared by numerical results, concluding with some future research topics.

2 Pole-zero Analysis in Circuit Simulation

The Kirchhoff Current Law and the Kirchhoff Voltage Law describe the topology of an electric circuit.
Together with the Branch Constitutive Relations, which reflect the electrical properties of the branches, the two Kirchhoff Laws result in a system of differential algebraic equations [10]:

    (d/dt) q(t, x) + j(t, x) = 0,   (1)

where x ∈ R^n contains the circuit state and q, j : R × R^n → R^n are functions representing the reactive and resistive behaviour, respectively. The way (1) is solved depends on the kind of analysis (DC-analysis, AC-analysis, transient analysis, pole-zero analysis). In every analysis, the capacitance matrix C ∈ R^{n×n} and the conductance matrix G ∈ R^{n×n} appear:

    C(t, x) = ∂q(t, x)/∂x,    G(t, x) = ∂j(t, x)/∂x.

Both matrices are real, non-symmetric and sparse. Starting from a linearisation around the DC-operating point, the time-domain formulation is as follows:

    C dx(t)/dt + G x(t) = e(t),   x(0) = 0,   (2)

where e(t) models the excitation. Because not all properties can be computed in the time domain, the problem is transformed to the frequency domain by applying a Laplace transform:

    (sC + G) X(s) = E(s),   (3)

where X, E are the Laplace-transforms of the variables x, e and s is the variable in the frequency domain. The response of the circuit to a variation of the excitation is given by the transfer function

    H(s) = (sC + G)^{-1}.   (4)

The elementary response of circuit variable x_o to excitation e_i is given by the corresponding entry of the transfer function:

    H_oi(s) = e_o^T (sC + G)^{-1} e_i.   (5)

The poles are the values p_k ∈ C that satisfy det(p_k C + G) = 0, hence (G + p_k C) x = 0 for some x ≠ 0, which leads to a generalised eigenproblem (λ = -p_k):

    G x = λ C x,   x ≠ 0.   (6)

Because the problem of computing the zeroes is similar to the problem of computing the poles, the rest of this paper will consider the problem of computing the poles. Especially for large circuits (n > 10^4), robust, iterative methods for the generalised eigenvalue problem (6) with sufficient accuracy and acceptable computational costs are needed. Furthermore, all right half-plane poles and no false right half-plane poles are desired. The dominant poles and zeroes must be accurate enough to produce correct Bode-plots for the frequency range of interest.
Two kinds of pole-zero methods are known in the literature [10]: combined and separate pole-zero computation. This paper focuses on separate pole-zero computation.
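For small problems, the pole computation of (6) and the "exact" transfer-function entries (5) can be checked directly with a dense solver. The sketch below (illustrative matrices, not from the paper; it assumes numpy/scipy rather than the Matlab setup used later) computes the generalised eigenvalues with a QZ-based routine and recovers the poles via p_k = -λ_k:

```python
import numpy as np
from scipy.linalg import eig

# Illustrative 2-node circuit matrices (not from the paper).
G = np.array([[2.0, -1.0],
              [-1.0,  1.0]])   # conductance matrix
C = np.array([[1.0, 0.0],
              [0.0, 0.5]])     # capacitance matrix

# Poles from the generalised eigenproblem (6): G x = lambda C x, p_k = -lambda_k.
lam, _ = eig(G, C)
poles = -lam
print("poles:", np.sort_complex(poles))
print("stable (all poles in left half-plane):", bool(np.all(poles.real < 0)))

# Evaluating (5) directly on s = i*omega gives the "exact" Bode-plot values
# that the eigenvalue-based plots are compared against later in the paper.
for omega in (0.1, 1.0, 10.0):
    H = np.linalg.inv(1j * omega * C + G)   # full inverse: fine for tiny n
    print(f"omega = {omega:5.1f}  |H_10| = {abs(H[1, 0]):.4f}")
```

For the large sparse circuit matrices targeted in this paper such a dense QZ solve is exactly what becomes infeasible, which motivates the iterative methods of the next section.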

3 Jacobi-Davidson Style Methods

Because of its accuracy and robustness, the full-space QR-method (and the QZ-method to a lesser degree) is a popular choice as solver for the eigenproblem (6). However, the total costs of O(n^3) and the necessity of an LU-decomposition, which destroys the sparsity of G and causes numerical inaccuracies and possibly instabilities, become unacceptable for larger problems. Iterative methods like the implicitly restarted Arnoldi method also need the LU-decomposition and are designed to compute only a few (m ≪ n) eigenvalues [11]. The Jacobi-Davidson method [13], on the other hand, is designed to converge fast to a few selected eigenvalues. Based on the Jacobi-Davidson method, the JDQR-method [8], which computes a partial Schur form, and the JDQZ-method [8], which computes a partial generalised Schur form, have been developed.

Without going into much detail, the basic idea behind the Jacobi-Davidson methods is as follows. For the problem Ax = λx, given the eigenpair approximation (θ_k, u_k):

- Search a correction v ⊥ u_k such that A(u_k + v) = λ(u_k + v).
- Solve v from the correction equation, with r_k = A u_k - θ_k u_k:

    (I - u_k u_k^*)(A - θ_k I)(I - u_k u_k^*) v = -r_k.

- Orthogonally expand the current basis V with v.

The Ritz-vector u_k = V s is obtained by applying a full-space method, for instance the QR-method, to the projected matrix V^* A V, resulting in the eigenpair (θ_k, s). The Jacobi-Davidson method satisfies a Ritz-Galerkin condition [13].

The correction equation needs more attention. For the JDQR-method, it is

    (I - QQ^*)(A - θ_k I)(I - QQ^*) v = -r_k,   (7)

where Q ∈ C^{n×k}. If the correction equation is solved exactly, the convergence of the Jacobi-Davidson method is quadratic [13]. Besides solving the correction equation exactly, one can use linear iterative methods, like GMRES, with or without preconditioning. Because exact solvers are often not feasible in practice, the focus is on iterative methods with preconditioning.
Using a preconditioner, however, is not as easy as it seems. Consider a preconditioner K ≈ A - θ_k I. There are three major issues. Firstly, the preconditioner is projected afterwards (K̃ = (I - QQ^*) K (I - QQ^*)). Secondly, A - θ_k I becomes more and more ill-conditioned as the approximation θ_k approaches the eigenvalue λ. Thirdly, this θ_k changes every iteration, and so does A - θ_k I.
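The three steps above (Ritz-Galerkin extraction, correction equation, orthogonal expansion) can be sketched as a minimal Jacobi-Davidson iteration. This is an illustration only, assuming a real matrix and an (almost) exact least-squares solve of the projected correction equation; production codes such as JDQR add deflation, restarts and inexact preconditioned solves:

```python
import numpy as np

def jacobi_davidson(A, n_iter=50, tol=1e-10, seed=0):
    """Minimal Jacobi-Davidson sketch for Ax = lambda*x, targeting the
    rightmost eigenvalue. Illustration only, not the JDQR/JDQZ codes."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, 1))
    V /= np.linalg.norm(V)
    theta, u = 0.0, V[:, 0]
    for _ in range(n_iter):
        # Ritz-Galerkin: apply a full-space method to the projected matrix V*AV.
        w, S = np.linalg.eig(V.T @ A @ V)
        k = int(np.argmax(w.real))             # target the rightmost Ritz value
        theta, s = w[k].real, S[:, k].real
        u = V @ s                              # Ritz vector u_k = V s
        r = A @ u - theta * u                  # residual r_k = A u_k - theta_k u_k
        if np.linalg.norm(r) < tol:
            break
        # Correction equation (I - uu*)(A - theta I)(I - uu*) v = -r,
        # solved here (nearly) exactly via a least-squares solve.
        P = np.eye(n) - np.outer(u, u)
        v = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)[0]
        v = P @ v                              # keep the correction orthogonal to u
        # Orthogonally expand the current basis V with v.
        v -= V @ (V.T @ v)
        nv = np.linalg.norm(v)
        if nv < 1e-14:                         # search space exhausted
            break
        V = np.hstack([V, (v / nv)[:, None]])
    return theta, u
```

For example, `jacobi_davidson(np.diag([1.0, 2.0, 3.0, 10.0]))` returns an approximation of the rightmost eigenpair. Replacing the least-squares solve by a few steps of (preconditioned) GMRES gives exactly the inexact variant discussed next.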

4 Numerical Problems and Techniques

In [5], a technique is described to reduce the problem size of the ordinary eigenproblem. The idea is to remove the columns (and corresponding rows) of G^{-1}C which are equal to zero, as well as the rows (and corresponding columns) equal to zero. This is justified because rows and columns equal to zero have corresponding eigenvalues of value zero, and removing these rows and columns does not influence the other eigenvalues. Because the product G^{-1}C is not available explicitly, for computational reasons, the k rows and columns to keep are administrated in a matrix S = [e_{i_1}, ..., e_{i_k}], where e_j is the j-th unit vector of length n. The reduced matrix is then defined by S^T G^{-1} C S.

Reduction of the problem in this way has a number of advantages. Firstly, Jordan blocks may be removed, thereby improving the stability and accuracy of the computations. Secondly, the computational costs will be reduced. Table 1 shows the degree of reduction for some example problems.

Table 1. The size, reduced size and degree of reduction for some example problems. The data is extracted from [5] and [10].

    Problem     Size  Size (reduced)  Reduction
    pz_09        504             365        28%
    pz_28        177              74        58%
    pz_36_osc    120              86        28%
    pz_agc_crt   114              96        16%
    jr_1         815             681        16%

Note, however, that the spectral properties of the reduced problem do not differ from the spectral properties of the original problem. As a consequence, the speed of convergence of the eigenmethod used will not be improved significantly. This technique is not directly applicable to the generalised eigenproblem.

Considering the computation of the preconditioner (for the operator in the correction equation), one has to cope with two problems, i.e. the near-singularity of the operator to precondition and the continuous change of the operator. These two problems appear in both Jacobi-Davidson QR and Jacobi-Davidson QZ.
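The reduction with the selection matrix S can be sketched on a small dense stand-in for G^{-1}C (in [5] the product is never formed explicitly; the matrix below is purely illustrative):

```python
import numpy as np

# Toy stand-in for B = G^{-1} C.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],    # zero row and zero column: this index is dropped
              [3.0, 0.0, 4.0]])

n = B.shape[0]
# Keep index j only if neither row j nor column j is identically zero.
keep = [j for j in range(n) if B[j, :].any() and B[:, j].any()]
S = np.eye(n)[:, keep]            # S = [e_{i_1}, ..., e_{i_k}]
B_red = S.T @ B @ S               # reduced matrix S^T G^{-1} C S

# Only zero eigenvalues are removed; the nonzero spectrum is unchanged.
print("original eigenvalues:", np.sort(np.linalg.eigvals(B).real))
print("reduced eigenvalues: ", np.sort(np.linalg.eigvals(B_red).real))
```

The printed spectra differ only in the eigenvalue 0, illustrating why the convergence speed of the eigenmethod is not improved by the reduction even though the problem is smaller.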
For computational reasons, a new preconditioner should be computed at most once for each new eigenvalue target. This is possible in practice, because the search space and corresponding Ritz-values in general contain good approximations for new eigenvalues. Some additional advantage can be gained by only recomputing the preconditioner for changes in θ larger than a certain threshold. Singularity problems can be dealt with by replacing zero diagonal elements by a small threshold value, a common technique

for incomplete LU-decompositions. The costs and fill-in can be controlled by using a drop-tolerance for non-diagonal elements, resulting in ILUT [12] decompositions.

5 Numerical Results and Conclusions

The data for the test problems was generated by the in-house analog electric circuit simulator Pstar of Philips Research [14]. Both full-space and iterative methods have been used to solve the ordinary and generalised eigenproblem. A small selection of the results presented in [10] has been made to identify the problems which are typical for the different approaches. Implementations of the Jacobi-Davidson methods are based on the algorithms in [4] and are available on the Internet: refer to [1] for JDQR and to [2] for JDQZ. Experiments have been done in Matlab 5.3 [3].

The transformation of the generalised eigenproblem to the ordinary eigenproblem may introduce inaccuracies, as has been mentioned before. Bode-plot (a) in Fig. 1 shows an example of this. Before computing the eigenvalues, the problem has been reduced from size 30 x 30 to 12 x 12 with the method described in [5]. The solution computed by QR differs significantly on two points from the exact solution, which is computed by using (5) for several frequencies s. The two notches are caused by non-cancelling poles and zeroes, which do cancel in the original problem. It is conceivable that this is caused by the inversion of G. The iterative methods Arnoldi and JDQR suffer even more from inaccuracies.

Bode-plot (b) in Fig. 1 shows the computed solutions for the generalised eigenproblem. In this case, the QZ-method nearly resembles the exact solution, while the iterative schemes still suffer from inaccuracies. The fact that QZ performs better than QR, while both methods in theory compute the same eigenvalues, strengthens the argument that the inversion of G introduces critical inaccuracies. A general remark can be made,

Fig. 1. (a) Bode-plot computed from the ordinary eigenproblem; (b) Bode-plot computed from the generalised eigenproblem.

about the interpretation of Bode-plots. It is not clear how accurate the original data of the circuit is. As a consequence, one may argue that the resulting Bode-plots are only representative up to a certain frequency, depending on the accuracy of the original data. For applications in the RF-area, this issue and the accuracy of eigenvalues near zero play an even more important role.

Using preconditioners when solving the correction equation of the JDQR-method does indeed improve the speed of convergence, as can be seen in Fig. 2, where graph (a) shows the convergence history when using GMRES as solver, and graph (b) when using GMRES with an ILUT preconditioner (t = 10^{-8}).

Fig. 2. (a) Convergence history for JDQR with GMRES; (b) Convergence history for JDQR with ILUT-preconditioned GMRES (t = 10^{-8}). A convergence history plots the residual against the Jacobi-Davidson iteration number; each drop below the tolerance means an accepted eigenvalue.

The quality of the improvement strongly depends on the accuracy of the preconditioner. When using an ILUT preconditioner, a drop-tolerance of at most t = 10^{-6} is acceptable. This also shows one of the difficulties: the preconditioner has to be rather accurate, and in the case of ILU-based preconditioners this means in general high costs. Apart from that, the ILU-based preconditioners experience problems for singular matrices, and the matrix A - θ_k I becomes more and more singular. This last problem has appeared to be more severe for the JDQZ-method. The fact that the preconditioner is projected afterwards does not have a significant influence on the quality.
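The ILUT-preconditioned GMRES solve of the shifted system can be sketched with scipy's `spilu` (whose `drop_tol` plays the role of the drop-tolerance t), including the zero-diagonal replacement described in Sect. 4. The matrix below is an assumed sparse stand-in, not one of the Pstar test problems:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Assumed setup: a sparse stand-in for the shifted operator A - theta*I.
n = 200
rng = np.random.default_rng(7)
A = sp.diags(np.linspace(1.0, 100.0, n)) \
    + 0.1 * sp.random(n, n, density=0.02, random_state=7)
theta = 0.5
K = sp.csc_matrix(A - theta * sp.identity(n))

# Replace (near-)zero diagonal entries by a small threshold before factoring,
# as is common for incomplete LU-decompositions.
d = K.diagonal()
d[np.abs(d) < 1e-12] = 1e-12
K.setdiag(d)

# ILUT-style incomplete factorisation; drop_tol controls fill-in vs accuracy.
ilu = spilu(K, drop_tol=1e-6, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

# Preconditioned GMRES solve of (A - theta*I) v = -r, as in the correction step.
r = rng.standard_normal(n)
v, info = gmres(K, -r, M=M)
print("converged:", info == 0)
```

Shrinking `drop_tol` makes the preconditioner more accurate but more expensive, which is exactly the trade-off observed above for the ILUT drop-tolerance t.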
The reduction technique for the ordinary eigenproblem does lead to lower computational costs for both the direct and iterative methods. However, the gain depends on the degree of reduction, and is more pronounced for the iterative methods, because of the dominating matrix-vector products. The computational gain varied from 15% for the QR-method to 30% for the iterative methods, with top gains of 50%. Also, the condition number of the problem is improved, and if Jordan blocks are removed, the computed eigenvalues are more accurate. The number of iterations needed to compute

the eigenvalues is not decreased, as expected. For more information and results of this reduction technique, refer to [5,11].

The observations suggest ideas for future work. One can think of efficient updates for preconditioners [6], model reduction techniques and combinations of Jacobi-Davidson with other iterative methods like Arnoldi or combined pole-zero methods, such as Padé via Lanczos [7].

References

1. http://www.math.uu.nl/people/sleijpen/JD_software/JDQR.html.
2. http://www.math.uu.nl/people/sleijpen/JD_software/JDQZ.html.
3. http://www.mathworks.com.
4. BAI, Z., DEMMEL, J., DONGARRA, J., RUHE, A., AND VAN DER VORST, H., Eds. Templates for the Solution of Algebraic Eigenvalue Problems: a Practical Guide. SIAM, 2000.
5. BOMHOF, C. Jacobi-Davidson methods for eigenvalue problems in pole zero analysis. Nat.Lab. Unclassified Report 012/97, Philips Electronics NV, 1997.
6. BOMHOF, C. Iterative and parallel methods for linear systems, with applications in circuit simulation. PhD thesis, Utrecht University, 2001.
7. FELDMANN, P., AND FREUND, R. W. Efficient linear circuit analysis by Pade approximation via the Lanczos process. IEEE Trans. CAD 14 (1995), 639-649.
8. FOKKEMA, D. R., SLEIJPEN, G. L., AND VAN DER VORST, H. A. Jacobi-Davidson style QR and QZ algorithms for the reduction of matrix pencils. SIAM J. Sci. Comput. 20, 1 (1998), 94-125.
9. HERES, P. J., AND SCHILDERS, W. H. Reduced order modelling of RLC networks using an SVD-Laguerre based method. In SCEE 2002 Conference Proceedings (2002).
10. ROMMES, J. Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis. Master's thesis, Utrecht University, 2002.
11. ROMMES, J., VAN DER VORST, H. A., AND TER MATEN, E. J. W. Jacobi-Davidson Methods and Preconditioning with Applications in Pole-zero Analysis. In Progress in Industrial Mathematics at ECMI 2002 (2002).
12. SAAD, Y. Iterative methods for sparse linear systems.
PWS Publishing Company, 1996.
13. SLEIJPEN, G. L., AND VAN DER VORST, H. A. A Jacobi-Davidson Iteration Method for Linear Eigenvalue Problems. SIAM Review 42, 2 (2000), 267-293.
14. TER MATEN, E. J. W. Numerical methods for frequency domain analysis of electronic circuits. Surv. Math. Ind. 8 (1998), 171-185.