Inexact Newton Methods for Inverse Eigenvalue Problems


Zheng-jian Bai*

Abstract

In this paper, we survey some of the latest developments in using inexact Newton-like methods for solving inverse eigenvalue problems. These methods require the solution of large nonsymmetric linear systems. One can solve the approximate Jacobian equation by iterative methods. However, iterative methods usually oversolve the problem, in the sense that they require far more (inner) iterations than is required for the convergence of the Newton-like (outer) iterations. The inexact methods can reduce or minimize the oversolving problem and improve efficiency. The convergence rates of the inexact methods are superlinear, and a good tradeoff between the required inner and outer iterations can be obtained.

AMS Subject Classifications. 65F18, 65F10, 65F15.

1 Introduction

Let {A_i}_{i=1}^n be n real symmetric n x n matrices. For any c = (c_1, ..., c_n)^T ∈ R^n, we define

    A(c) ≡ Σ_{i=1}^n c_i A_i,   (1)

and denote the eigenvalues of A(c) by {λ_i(c)}_{i=1}^n, where λ_1(c) ≤ λ_2(c) ≤ ... ≤ λ_n(c). The Inverse Eigenvalue Problem (IEP) is defined as follows:

(IEP) Given n real numbers λ_1* ≤ ... ≤ λ_n*, find c* ∈ R^n such that λ_i(c*) = λ_i* for i = 1, ..., n.

There is a large literature on conditions for the solvability, perturbation analysis, and computational methods for the IEP; see for instance [8, 12, 13] and the references therein. Recently, Friedland, Nocedal, and Overton [8] surveyed four fast, locally convergent numerical methods for solving the IEP. For a general nonlinear system g(x) = 0, the classical locally convergent iterative procedure is as follows:

    x_{k+1} = x_k + s_k, where B_k s_k = -g(x_k), x_0 given.

The process is a Newton method if B_k = g'(x_k), and a Newton-like method if B_k is an approximation to g'(x_k); see for instance [5, 9, 10].

* Department of Mathematics, Chinese University of Hong Kong, Hong Kong, China (zjbai@math.cuhk.edu.hk).
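To fix notation, the map c ↦ A(c) in (1) and the eigenvalue matching required by the IEP can be sketched in a few lines of NumPy. This is only an illustration: the basis matrices and target eigenvalues below are toy choices, not data from the paper.

```python
import numpy as np

def make_A(c, As):
    """A(c) = sum_i c_i * A_i, as in (1); the A_i are real symmetric."""
    return sum(ci * Ai for ci, Ai in zip(c, As))

def eig_residual(c, As, lam_star):
    """Entrywise residual lambda_i(c) - lambda_i*, eigenvalues ascending."""
    return np.linalg.eigvalsh(make_A(c, As)) - lam_star

# Toy basis A_i = e_i e_i^T, so A(c) = diag(c) and the IEP solution is
# simply c* = (lambda_1*, ..., lambda_n*).
n = 3
As = [np.outer(np.eye(n)[:, i], np.eye(n)[:, i]) for i in range(n)]
lam_star = np.array([1.0, 2.0, 5.0])
print(eig_residual(lam_star, As, lam_star))  # prints [0. 0. 0.]
```

Note that `eigvalsh` returns eigenvalues in ascending order, matching the ordering convention λ_1(c) ≤ ... ≤ λ_n(c) above.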

The Newton and Newton-like methods are attractive because of their rapid convergence from any sufficiently good initial guess x_0. However, in each Newton or Newton-like iteration, we have to solve exactly the Jacobian or approximate Jacobian equation

    B_k s_k = -g(x_k).   (2)

Computing the exact solution using a direct method such as Gaussian elimination can be expensive if the number of unknowns is large, and may not be justified when x_k is far from a solution. Thus we can use an iterative method and solve (2) only approximately, i.e. by the inexact Newton or Newton-like method:

    x_{k+1} = x_k + s_k, where B_k s_k = -g(x_k) + r_k, ‖r_k‖ / ‖g(x_k)‖ ≤ η_k,

where ‖·‖ denotes the Euclidean vector norm or its corresponding induced matrix norm. In general, the nonnegative forcing term η_k is given in terms of g(x_k); see for instance [4, 6, 7, 9].

When n is large, solving the Jacobian or approximate Jacobian equation will be costly. The cost can be reduced by using iterative methods (the inner iterations). However, if η_k is too small, an iterative method may oversolve the Jacobian or approximate Jacobian equation, in the sense that the last tens or hundreds of inner iterations before convergence may not improve the convergence of the outer Newton or Newton-like iterations. That is, additional accuracy in solving the Jacobian or approximate Jacobian equation requires additional expense, but results in little or no progress toward a solution [6].

In this paper, we discuss the application of inexact methods to two Newton-like methods, namely Method II and Method III in [8], for solving the IEP. Our inexact methods solve the approximate Jacobian equation inexactly by stopping the inner iterations before convergence, and thereby alleviate or minimize the oversolving problem. We will show that the convergence rates of our methods are superlinear. Moreover, by stopping the inner iterations earlier, we can reduce the total cost of the inner-outer iterations.

This paper is organized as follows. In §2, we give some background knowledge about the Newton method for the IEP.
Then we investigate the inexact versions of Method II and Method III of [8] in §3 and §4, respectively. We give the convergence analysis of the inexact methods with illustrative numerical tests.

2 The Newton Method

In this section, we briefly recall the Newton method for solving the IEP. For details, see [2, 8, 13]. For any c = (c_1, ..., c_n)^T ∈ R^n, define f : R^n → R^n by

    f(c) ≡ (λ_1(c) - λ_1*, ..., λ_n(c) - λ_n*)^T.   (3)

Clearly, c* is a solution to the IEP if and only if f(c*) = 0. Therefore, we can formulate the IEP as a system of nonlinear equations f(c) = 0. As in [8], we assume that the given eigenvalues {λ_i*}_{i=1}^n are distinct. Then the eigenvalues of A(c) are distinct too in some neighborhood of c*. It follows that the function f(c) is analytic in that neighborhood and that the Jacobian of f is given by

    [J(c)]_{ij} = ∂[f(c)]_i / ∂c_j = ∂λ_i(c) / ∂c_j = q_i(c)^T (∂A(c)/∂c_j) q_i(c), 1 ≤ i, j ≤ n,

where q_i(c) are the normalized eigenvectors of A(c) corresponding to the eigenvalues λ_i(c); see [13, Eq. (4.6.2)] or [2, 3]. Hence by (1),

    [J(c)]_{ij} = q_i(c)^T A_j q_i(c), 1 ≤ i, j ≤ n.   (4)

Thus by (1) again, we have

    [J(c)c]_i = Σ_{j=1}^n c_j q_i(c)^T A_j q_i(c) = q_i(c)^T A(c) q_i(c) = λ_i(c), 1 ≤ i ≤ n,

i.e. J(c)c = (λ_1(c), ..., λ_n(c))^T. By (3), this becomes

    J(c)c = f(c) + λ*,   (5)

where λ* ≡ (λ_1*, ..., λ_n*)^T. Recall that the Newton method for f(c) = 0 is given by

    J(c_k)(c_{k+1} - c_k) = -f(c_k).

By (5), this can be rewritten as

    J(c_k)c_{k+1} = J(c_k)c_k - f(c_k) = λ*.

To summarize, we have the following Newton method for solving the IEP.

Algorithm 1: The Newton Method
For k = 0, 1, ... until convergence, do:
1. Compute the eigen-decomposition of A(c_k): Q(c_k)^T A(c_k) Q(c_k) = diag(λ_1(c_k), ..., λ_n(c_k)), where Q(c_k) = [q_1(c_k), ..., q_n(c_k)] is orthogonal.
2. Form the Jacobian matrix: [J(c_k)]_{ij} = q_i(c_k)^T A_j q_i(c_k).
3. Solve c_{k+1} from the Jacobian equation: J(c_k) c_{k+1} = λ*.

Notice that in Step 1, we have to compute all the eigenvalues and eigenvectors of A(c_k) exactly. This method converges Q-quadratically; see for instance [8, Theorem 3.2] and [13, Theorem 4.6.1]. Here we recall the definitions of two kinds of convergence rates; see [4] and [10, Chap. 9].

Definition 1 Let {x_k} be a sequence which converges to x*. Then
1. x_k → x* with Q-convergence rate at least q (q > 1) if
    ‖x_{k+1} - x*‖ = O(‖x_k - x*‖^q) as k → ∞;   (6)
2. x_k → x* with R-convergence rate at least q (q > 1) if
    lim sup_{k→∞} ‖x_k - x*‖^{1/q^k} < 1.   (7)
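Algorithm 1 can be sketched with dense linear algebra as follows. This is a minimal illustration, not the authors' code: the basis matrices, target spectrum, and starting guess are synthetic, and convergence is only local.

```python
import numpy as np

def newton_iep(c0, As, lam_star, iters=10):
    """Algorithm 1: eigen-decomposition, Jacobian assembly, and the
    Jacobian solve J(c_k) c_{k+1} = lambda*, repeated until convergence."""
    c = np.array(c0, dtype=float)
    n = len(c)
    for _ in range(iters):
        A = sum(ci * Ai for ci, Ai in zip(c, As))
        lam, Q = np.linalg.eigh(A)                       # Step 1: exact eigen-decomposition
        J = np.array([[Q[:, i] @ As[j] @ Q[:, i]         # Step 2: [J]_ij = q_i^T A_j q_i
                       for j in range(n)] for i in range(n)])
        c = np.linalg.solve(J, lam_star)                 # Step 3: J(c_k) c_{k+1} = lambda*
    return c

# Synthetic problem: pick a "true" c*, use its spectrum as the target,
# and start from a nearby guess (Newton converges only locally).
rng = np.random.default_rng(0)
n = 4
As = [(M + M.T) / 2 for M in rng.standard_normal((n, n, n))]
c_true = rng.standard_normal(n)
lam_star = np.linalg.eigvalsh(sum(ci * Ai for ci, Ai in zip(c_true, As)))
c = newton_iep(c_true + 1e-3 * rng.standard_normal(n), As, lam_star)
print(np.max(np.abs(np.linalg.eigvalsh(sum(ci * Ai
      for ci, Ai in zip(c, As))) - lam_star)))          # eigenvalue mismatch, near zero
```

The assertion of the paper is that each such outer step is expensive precisely because Step 1 computes the full eigen-decomposition exactly; the methods of §3 and §4 replace it with cheaper approximations.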

3 The Inexact Newton-like Method

In this section, we first recall Method II in [8], and then give the inexact version.

3.1 The Inexact Newton-like Method

In Algorithm 1, all the eigenvectors of A(c) have to be computed exactly at each step, which is very time-consuming. Therefore, we consider approximating these eigenvectors. Suppose that we have determined an estimate c_k of c* and an approximation Q_k = [q_1^k, ..., q_n^k] to the orthogonal matrix of eigenvectors Q(c_k). Then we compute a new estimate c_{k+1} by solving

    J_k c_{k+1} = λ*,   (8)

where [J_k]_{ij} = (q_i^k)^T A_j q_i^k. To update our approximations to the eigenvectors, we apply one step of inverse iteration; that is, we compute w_i, i = 1, ..., n, by solving

    (A(c_{k+1}) - λ_i* I) w_i = q_i^k, i = 1, ..., n.   (9)

We then define Q_{k+1} = [q_1^{k+1}, ..., q_n^{k+1}] by

    q_i^{k+1} = w_i / ‖w_i‖, i = 1, ..., n.   (10)

It is shown that Method II is Q-quadratically convergent under the assumption that the systems (9) and (8) are solved exactly; see [8, §3] and [13, Theorem 4.6.2].

In Method II, there are two inner iterations: the one-step inverse power method (9) and the approximate Jacobian equation (8). In order to avoid oversolving the inner iterations, we have to look for suitable tolerances: small enough to guarantee the convergence of the outer iterations, but large enough to reduce the oversolving of the inner iterations. Interestingly, we find that the tolerance for the inverse power method (9) can be set very large (to 1/4), whereas the tolerance for the approximate Jacobian equation (8) has to be small in order to obtain superlinear convergence of the outer iteration. The tolerance for (8) will be given in terms of a computable quantity in our inexact algorithm. Below we give our algorithm.

Algorithm 2: The Inexact Newton-Like Method
1. Given c_0, iterate Algorithm 1 once to obtain c_1, and write P_0 = [p_1^0, ..., p_n^0] = [q_1(c_0), ..., q_n(c_0)].
2. For k = 1, 2, ... until convergence, do:
(a) Solve w_i^k inexactly in the one-step inverse power method

    (A(c_k) - λ_i* I) w_i^k = p_i^{k-1} + t_i^k, 1 ≤ i ≤ n,   (11)

until the residual t_i^k satisfies

    ‖t_i^k‖ ≤ 1/4.   (12)

(b) Normalize w_i^k to obtain an approximate eigenvector p_i^k of A(c_k):

    p_i^k = w_i^k / ‖w_i^k‖, 1 ≤ i ≤ n.   (13)

(c) Form the approximate Jacobian matrix:

    [J_k]_{ij} = (p_i^k)^T A_j p_i^k, 1 ≤ i, j ≤ n.   (14)

(d) Solve c_{k+1} inexactly from the approximate Jacobian equation

    J_k c_{k+1} = λ* + r_k,   (15)

until the residual r_k satisfies

    ‖r_k‖ ≤ ( max_{1≤i≤n} 1/‖w_i^k‖ )^β, 1 < β ≤ 2.   (16)

Note that the main difference between Algorithm 2 and Method II is that we solve (11) and (15) approximately rather than exactly as in (9) and (8).

3.2 Rate of Convergence

In this subsection, we state that the Q-convergence rate of Algorithm 2 is β. As in [8], we assume that the given eigenvalues {λ_i*}_{i=1}^n are distinct and that the Jacobian J(c*) defined in (4) is nonsingular. Then we have the following theorem on the rate of convergence.

Theorem 1 [2] Let the given eigenvalues {λ_i*}_{i=1}^n be distinct and the Jacobian J(c*) be nonsingular. Then Algorithm 2 is locally convergent with Q-convergence rate β.

For numerical examples, we refer to [2].

4 The Inexact Cayley Transform Method

In this section, we briefly recall Method III in [8], and then give its inexact version, i.e. the inexact Cayley transform method.

4.1 The Inexact Cayley Transform Method

Method III in [8] is based on Cayley transforms. A solution to the IEP can be described by c* and U*, where U* is an orthogonal matrix and

    U*^T A(c*) U* = Λ*, Λ* = diag(λ_1*, ..., λ_n*).   (17)

Suppose that an orthogonal matrix U_k is the current approximation of U*. Define e^{Z_k} ≡ U_k^T U*. Then Z_k is a skew-symmetric matrix and (17) can be written as

    U_k^T A(c*) U_k = e^{Z_k} Λ* e^{-Z_k} = Λ* + Z_k Λ* - Λ* Z_k + O(‖Z_k‖^2).   (18)
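The expansion (18) can be checked numerically. The sketch below is a standalone sanity check with a random small skew-symmetric Z and a toy diagonal Λ*, not one of the paper's experiments; it uses the Cayley factor (I + Z/2)(I - Z/2)^{-1}, which is exactly orthogonal and agrees with e^Z up to second order in Z.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Lam = np.diag([1.0, 2.0, 4.0, 8.0])         # stand-in for Lambda* = diag(lambda_i*)

M = rng.standard_normal((n, n))
Z = 1e-4 * (M - M.T)                         # small skew-symmetric correction Z_k

I = np.eye(n)
S = (I + Z / 2) @ np.linalg.inv(I - Z / 2)   # Cayley transform S_k

# S_k is exactly orthogonal for any skew-symmetric Z (up to rounding),
# because (I + Z/2) and (I - Z/2) commute and are transposes up to sign.
print(np.allclose(S.T @ S, I))               # prints True

# First-order expansion (18): S Lam S^T = Lam + Z Lam - Lam Z + O(||Z||^2).
err = np.linalg.norm(S @ Lam @ S.T - (Lam + Z @ Lam - Lam @ Z))
print(err)                                   # second order in ||Z||, i.e. tiny
```

The exact orthogonality of the Cayley factor is what makes it preferable to a truncated series for e^Z when updating the eigenvector basis.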

In Method III, we define the new estimate c_{k+1} by neglecting the second-order terms in Z_k and equating the diagonal elements in (18); i.e., c_{k+1} is given by

    (u_i^k)^T A(c_{k+1}) u_i^k = λ_i*, i = 1, ..., n,   (19)

where {u_i^k}_{i=1}^n are the column vectors of U_k. By (1), (19) can be rewritten as

    J_k c_{k+1} = λ*,   (20)

where λ* ≡ (λ_1*, ..., λ_n*)^T and J_k is the approximate Jacobian matrix with entries

    [J_k]_{ij} = (u_i^k)^T A_j u_i^k, i, j = 1, ..., n.   (21)

Once we get c_{k+1} from (20), we obtain Z_k by neglecting the second-order terms in Z_k and equating the off-diagonal elements in (18), i.e.

    [Z_k]_{ij} = (u_i^k)^T A(c_{k+1}) u_j^k / (λ_j* - λ_i*), 1 ≤ i ≠ j ≤ n.   (22)

Finally, we update U_k by setting U_{k+1} = U_k S_k, where S_k is an orthogonal matrix constructed by the Cayley transform

    S_k = (I + ½ Z_k)(I - ½ Z_k)^{-1}.

The Q-quadratic convergence of Method III was proven under the assumption that the approximate Jacobian equation (20) is solved exactly; see [8]. To reduce the oversolving of the inner iterations, we have to look for suitable tolerances to improve the efficiency. For the nonlinear system f(c) = 0, the stopping criterion is usually given in terms of f(c_k). By (3), this would involve the eigenvalues λ_i(c_k) of A(c_k), which are costly to compute. Our idea is to replace them by Rayleigh quotients; see (25) and (27) below. Thus we have the following algorithm.

Algorithm 3: The Inexact Cayley Transform Method
1. Given c_0, compute the orthonormal eigenvectors {q_i(c_0)}_{i=1}^n and the eigenvalues {λ_i(c_0)}_{i=1}^n of A(c_0). Let V_0 = [v_1^0, ..., v_n^0] = [q_1(c_0), ..., q_n(c_0)], and ρ_0 = (λ_1(c_0), ..., λ_n(c_0))^T.
2. For k = 0, 1, 2, ... until convergence, do:
(a) Form the approximate Jacobian matrix:

    [J_k]_{ij} = (v_i^k)^T A_j v_i^k, 1 ≤ i, j ≤ n.   (23)

(b) Solve c_{k+1} inexactly from the approximate Jacobian equation

    J_k c_{k+1} = λ* + r_k,   (24)

until the residual r_k satisfies

    ‖r_k‖ ≤ ‖ρ_k - λ*‖^β, β ∈ (1, 2].   (25)

(c) Form the skew-symmetric matrix Y_k:

    [Y_k]_{ij} = (v_i^k)^T A(c_{k+1}) v_j^k / (λ_j* - λ_i*), 1 ≤ i ≠ j ≤ n.

(d) Compute V_{k+1} = [v_1^{k+1}, ..., v_n^{k+1}] by solving

    (I + ½ Y_k) V_{k+1}^T = (I - ½ Y_k) V_k^T.   (26)

(e) Compute ρ_{k+1} = (ρ_1^{k+1}, ..., ρ_n^{k+1})^T by

    ρ_i^{k+1} = (v_i^{k+1})^T A(c_{k+1}) v_i^{k+1}, i = 1, ..., n.   (27)

4.2 Rate of Convergence

In the following, we state that the R-convergence rate of Algorithm 3 is at least β. As in [8], we assume that the given eigenvalues {λ_i*}_{i=1}^n are distinct and the Jacobian J(c*) is nonsingular. Then we have the following result on the rate of convergence.

Theorem 2 [1] Let the given eigenvalues {λ_i*}_{i=1}^n be distinct and J(c*) be nonsingular. If c_0 is sufficiently close to c*, then

    ‖c_{k+1} - c*‖ = O(‖c_0 - c*‖^{β^k}).

That is, Algorithm 3 converges with R-convergence rate at least equal to β.

For numerical experiments, we refer to [1].

References

[1] Z. Bai, R. Chan, and B. Morini, An Inexact Cayley Transform Method for Inverse Eigenvalue Problems, submitted to Inverse Problems.

[2] R. Chan, H. Chung, and S. Xu, The Inexact Newton-Like Method for Inverse Eigenvalue Problem, accepted for publication by BIT.

[3] R. Chan, S. Xu, and H. Zhou, On the Convergence Rate of a Quasi-Newton Method for Inverse Eigenvalue Problem, SIAM J. Numer. Anal., 36 (1999), 436-441.

[4] R. Dembo, S. Eisenstat, and T. Steihaug, Inexact Newton Methods, SIAM J. Numer. Anal., 19 (1982), 400-408.

[5] J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, NJ, 1983.

[6] S. Eisenstat and H. Walker, Choosing the Forcing Terms in an Inexact Newton Method, SIAM J. Sci. Comput., 17 (1996), 16-32.

[7] D. Fokkema, G. Sleijpen, and H. van der Vorst, Accelerated Inexact Newton Schemes for Large Systems of Nonlinear Equations, SIAM J. Sci. Comput., 19 (1998), 657-674.

[8] S. Friedland, J. Nocedal, and M. Overton, The Formulation and Analysis of Numerical Methods for Inverse Eigenvalue Problems, SIAM J. Numer. Anal., 24 (1987), 634-667.

[9] B. Morini, Convergence Behaviour of Inexact Newton Methods, Math. Comput., 68 (1999), 1605-1613.

[10] J. Ortega and W. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, 1970.

[11] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, 1996.

[12] J. Sun, Backward Errors for the Inverse Eigenvalue Problem, Numer. Math., 82 (1999), 339-349.

[13] S. Xu, An Introduction to Inverse Algebraic Eigenvalue Problems, Peking University Press and Vieweg Publishing, 1998.