
ISSN 1749-3889 (print), 1749-3897 (online)
International Journal of Nonlinear Science Vol. 17 (2014) No. 2, pp. 188-192

Modified Block Jacobi-Davidson Method for Solving Large Sparse Eigenproblems

Hongyi Miao
College of Science, Nanjing Forestry University, Nanjing 210037, China
(Received 20 June 2013, accepted 11 March 2014)
Corresponding author. E-mail address: mhy@njfu.edu.cn

Abstract: The Jacobi-Davidson method is an efficient eigenvalue solver built on an inner-outer scheme: the outer iteration approximates an eigenpair, while the inner iteration solves a linear system, the so-called correction equation, often iteratively. Solving the correction equations dominates the computational cost. To cope with their inexact solution, we apply an extrapolation technique. Furthermore, we use a class of preconditioners when solving the correction equations with Krylov subspace methods such as GMRES(m). Numerical experiments show that the new algorithm is efficient.

Keywords: Jacobi-Davidson algorithm; correction equation; extrapolation technique; preconditioner

1 Introduction

In many fields of science and engineering we often need to compute several extreme (largest or smallest) or interior eigenvalues, and the corresponding eigenvectors, of a large sparse symmetric matrix. In 2000, Sleijpen and Van der Vorst [3] combined Jacobi's correction method with the inner-outer iterative method of Davidson [4, 5] and proposed the Jacobi-Davidson method. The method is stable and converges quickly even for non-diagonally dominant or non-normal matrices, and it is currently one of the most effective methods for solving eigenvalue problems. When the matrix has multiple eigenvalues, however, the validity and reliability of the Jacobi-Davidson method deteriorate. To overcome this, the block Jacobi-Davidson method was proposed, which can compute multiple eigenvalues of a matrix.

Algorithm 1 (Block Jacobi-Davidson Method)
1. Input: matrix A, maximum dimension m of the projection subspace, block size l, column-orthonormal n × l matrix V_1.
2. For k = 1, 2, ..., m:
   (1) compute H_k = V_k^T A V_k;
   (2) compute l eigenpairs (λ_i^{(k)}, y_i^{(k)}), i = 1, 2, ..., l, of H_k;
   (3) compute the Ritz vectors ϕ_i^{(k)} = V_k y_i^{(k)}, i = 1, 2, ..., l;
   (4) compute the residual vectors r_i^{(k)} = (A − λ_i^{(k)} I) ϕ_i^{(k)}, i = 1, 2, ..., l, and test for convergence: stop if ‖r_i^{(k)}‖ < Tol;
   (5) solve, for i = 1, 2, ..., l,

       (I − ϕ_i^{(k)} ϕ_i^{(k)T})(A − λ_i^{(k)} I)(I − ϕ_i^{(k)} ϕ_i^{(k)T}) t_i^{(k)} = −r_i^{(k)},   ϕ_i^{(k)T} t_i^{(k)} = 0,

       and set T_k = [t_1^{(k)}, ..., t_l^{(k)}], where Φ^{(k)} = (ϕ_1^{(k)}, ..., ϕ_l^{(k)});
   (6) V_{k+1} = MGS(V_k, T_k) (modified Gram-Schmidt).
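To make the flow of Algorithm 1 concrete, here is a minimal Python/NumPy sketch of the outer iteration. It is an illustration under simplifying assumptions, not the paper's implementation: step (5) is approximated by a cheap diagonal (Jacobi-type) solve instead of the projected correction solve, there is no restarting or deflation, and all names (block_jd, tol, and so on) are ours.

```python
# Minimal sketch of the block Jacobi-Davidson outer iteration (Algorithm 1).
# Assumption: the correction equation of step (5) is only approximated by a
# diagonal (Jacobi-type) solve; the paper instead solves the projected system.
import numpy as np

def block_jd(A, V, max_dim=30, tol=1e-5):
    """A: symmetric (n, n) array; V: column-orthonormal (n, l) start block."""
    n, l = V.shape
    for k in range(max_dim):
        H = V.T @ A @ V                      # (1) H_k = V_k^T A V_k
        theta, Y = np.linalg.eigh(H)         # (2) eigenpairs of H_k
        theta, Y = theta[-l:], Y[:, -l:]     # keep the l largest Ritz values
        Phi = V @ Y                          # (3) Ritz vectors phi_i
        R = A @ Phi - Phi * theta            # (4) residuals (A - lambda_i I) phi_i
        if np.linalg.norm(R, axis=0).max() < tol:
            break                            # convergence test ||r_i|| < Tol
        # (5) crude stand-in for the correction equation:
        #     t_i ~ -(diag(A) - lambda_i I)^{-1} r_i
        d = np.diag(A)
        T = np.column_stack([-R[:, i] / (d - theta[i]) for i in range(l)])
        # (6) expand and re-orthonormalize the search space (QR plays the
        #     role of modified Gram-Schmidt here)
        V, _ = np.linalg.qr(np.hstack([V, T]))
    return theta, Phi
```

Replacing the diagonal stand-in by a (preconditioned) GMRES(m) solve of the projected system in step (5) gives the methods compared in Section 3.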

The block Jacobi-Davidson algorithm thus consists of two nested layers: the outer iteration computes eigenpairs of the matrix, and the inner iteration solves the linear systems known as the correction equations. The main cost lies in the inner iterations. In the Jacobi-Davidson method, each iteration step requires the solution of the correction equation

   (I − ϕ_i^{(k)} ϕ_i^{(k)T})(A − λ_i^{(k)} I)(I − ϕ_i^{(k)} ϕ_i^{(k)T}) t_i^{(k)} = −r_i^{(k)},   ϕ_i^{(k)T} t_i^{(k)} = 0,   i = 1, 2, ..., l,   (1)

where Φ^{(k)T} Φ^{(k)} = I. In this work we propose a modified version of the block Jacobi-Davidson algorithm that introduces an extrapolation parameter and thereby computes eigenvalues more efficiently.

The paper is organized as follows. In Section 2 the modified block Jacobi-Davidson method is presented and the optimal value of the parameter ω is derived; a preconditioning technique for the GMRES(m) algorithm used to solve the correction equation is then described. In Section 3, numerical results illustrate the behavior of the new algorithms.

2 Modified Block Jacobi-Davidson Method

As the Ritz pairs approach the desired eigenpairs, the coefficient matrix of the correction equation becomes increasingly ill conditioned; the extrapolation technique can be used to overcome this. Suppose an iterative method for (1) has the form

   t^{(k)} = G t^{(k−1)} + C.   (2)

Introduce

   t^{(k)} = ω(G t^{(k−1)} + C) + (1 − ω) t^{(k−1)} = [ωG + (1 − ω)I] t^{(k−1)} + ωC,   (3)

where ω ≠ 0 is a parameter. This yields a new iterative method; for ω = 1, formula (3) reduces to the original iteration (2). Writing G_ω = ωG + (1 − ω)I, the extrapolated method converges if and only if ρ(G_ω) < 1. If all we know is that the eigenvalues of G lie in an interval [a, b], then the eigenvalues of G_ω lie in the interval with endpoints ωa + 1 − ω and ωb + 1 − ω. Let λ(A) denote the set of eigenvalues of a matrix A; then

   ρ(G_ω) = max_{λ ∈ λ(G_ω)} |λ| = max_{λ ∈ λ(G)} |ωλ + 1 − ω| ≤ max_{a ≤ λ ≤ b} |ωλ + 1 − ω|.

When 1 ∉ [a, b], we can choose ω such that ρ(G_ω) < 1.

Theorem 1. If all the eigenvalues of G are real and lie in the interval [a, b], and 1 ∉ [a, b], then the optimal parameter is ω_opt = 2 / (2 − a − b), and ρ(G_{ω_opt}) ≤ 1 − |ω_opt| d, where d is the distance from 1 to [a, b].

Proof. The function max_{a ≤ λ ≤ b} |ωλ + 1 − ω| attains its minimum when |ωa + 1 − ω| = |ωb + 1 − ω|, which yields ω_opt = 2 / (2 − a − b). Since 1 ∉ [a, b], either a > 1 or b < 1. For a ≤ b < 1 we have ω_opt > 0 and d = 1 − b. Every eigenvalue λ of G_{ω_opt} satisfies ω_opt a + 1 − ω_opt ≤ λ ≤ ω_opt b + 1 − ω_opt, so

   λ ≤ ω_opt b + 1 − ω_opt = 1 + ω_opt(b − 1) = 1 − ω_opt d,   and   λ ≥ ω_opt a + 1 − ω_opt = −1 + ω_opt d.

As a result, −1 + ω_opt d ≤ λ ≤ 1 − ω_opt d, hence ρ(G_{ω_opt}) ≤ 1 − ω_opt d. Similarly, for 1 < a ≤ b (where ω_opt < 0 and d = a − 1) we obtain ρ(G_{ω_opt}) ≤ 1 − |ω_opt| d. The proof is complete.
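As a numerical sanity check of Theorem 1 (our addition, not part of the paper), the sketch below builds a symmetric matrix G whose spectrum lies in a chosen interval [a, b] with 1 ∉ [a, b], forms G_ω for ω = ω_opt, and compares ρ(G_ω) with the bound 1 − |ω_opt| d.

```python
# Numerical check of Theorem 1: with eigenvalues of G in [a, b] and 1 outside
# [a, b], extrapolation with omega_opt = 2/(2 - a - b) makes rho(G_omega) < 1.
import numpy as np

a, b = 1.2, 1.8                                   # spectrum interval, 1 not in [a, b]
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
G = Q @ np.diag(rng.uniform(a, b, 50)) @ Q.T      # symmetric, spectrum in [a, b]

w = 2.0 / (2.0 - a - b)                           # omega_opt; here w = -2 since a > 1
Gw = w * G + (1 - w) * np.eye(50)                 # extrapolated iteration matrix

rho = lambda M: np.abs(np.linalg.eigvals(M)).max()
d = a - 1                                         # distance from 1 to [a, b]
print(rho(G))                 # about 1.8: the plain iteration (2) diverges
print(rho(Gw))                # at most 0.6: the extrapolated iteration (3) converges
print(1 - abs(w) * d)         # Theorem 1 bound, equal to 0.6 here
```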

Krylov subspace methods are commonly used to solve the correction equation (1). The restarted GMRES method is well known and widely used; it reads as follows.

Algorithm 2 (GMRES(m))
1. Compute r_0 = b − A x_0, β = ‖r_0‖_2 and v_1 = r_0 / β.
2. Generate the Arnoldi basis V_m and the matrix H̄_m by the Arnoldi algorithm starting from v_1.
3. Compute the y_m that minimizes ‖β e_1 − H̄_m y‖_2 and set x_m = x_0 + V_m y_m.
4. If satisfied, stop; else set x_0 := x_m and go to 1.

A preconditioner can be applied when solving the correction equation with GMRES(m). The aim of preconditioning is to accelerate convergence by clustering the eigenvalues of the preconditioned matrix in the complex plane as tightly as possible. Many techniques exist for constructing preconditioners, such as incomplete LU decomposition and incomplete Cholesky decomposition; the incomplete LU decomposition is discussed here. The matrix A is partitioned into four blocks, with the corresponding block LU decomposition

   ( A_11  A_12 )   ( L_11       ) ( U_11  U_12 )
   (            ) = (            ) (            )    (4)
   ( A_21  A_22 )   ( L_21  L_22 ) (       U_22 )

where A_11 is of size d_1 × d_1 and A_22 is of size (n − d_1) × (n − d_1). From (4) we have

   A_11 = L_11 U_11,   (5)
   A_12 = L_11 U_12,   (6)
   A_21 = L_21 U_11,   (7)
   A_22 = L_21 U_12 + L_22 U_22.   (8)

Algorithm 3 (Block ILU algorithm)
1. Compute an incomplete LU decomposition of the block A_11 to obtain L_11, U_11.
2. Compute B_1, the inverse of L_11, which is again a unit lower triangular matrix.
3. Set U_12 = L_11^{−1} A_12 = B_1 A_12.
4. Compute C_1, the inverse of U_11, which is again an upper triangular matrix.
5. Set L_21 = A_21 U_11^{−1} = A_21 C_1.
6. Let Ã_22 = A_22 − L_21 U_12 and compute the LU decomposition Ã_22 = L_22 U_22.

The LU decomposition of Ã_22 can always be carried out blockwise, provided a suitable block size is chosen. Moreover, the inverse computations in the algorithm exploit the triangular structure to avoid unnecessary work, which improves the computation speed.
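The combination of GMRES(m) with an incomplete LU preconditioner can be sketched with SciPy's stock routines. This sketch uses scipy.sparse.linalg.spilu (a threshold ILU), not the block ILU of Algorithm 3, and a plain shifted system (A − λI)t = −r without the projectors of (1); the shift lam and the drop tolerance are illustrative choices of ours.

```python
# Sketch: ILU-preconditioned GMRES(m) on a shifted system resembling the
# correction equation. Uses SciPy's spilu, not the paper's block ILU.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
A = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
lam = 5.9                                     # illustrative Ritz-value-like shift
K = (A - lam * sp.identity(n)).tocsc()        # shifted correction-equation matrix
r = np.random.default_rng(1).standard_normal(n)

ilu = spla.spilu(K, drop_tol=1e-4)            # incomplete LU factors of K
M = spla.LinearOperator((n, n), matvec=ilu.solve)   # preconditioner ~ K^{-1}

t, info = spla.gmres(K, -r, M=M, restart=30)  # GMRES(m) with m = 30
print(info, np.linalg.norm(K @ t + r))        # info == 0 signals convergence
```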

3 Numerical Experiment

In this section, numerical experiments are carried out in Matlab 6.5. The initial matrix is generated randomly and its column vectors are orthonormalized. For brevity, the block Jacobi-Davidson method is denoted BJD, the modified block Jacobi-Davidson method MBJD, and the block Jacobi-Davidson method with an ILU-decomposition preconditioner for the correction equation ILUBJD.

Example 1. Consider the matrix A of order 1600 × 1600, built from the matrix B of order 40 × 40 (a construction sketch is given at the end of this section):

   A = ( B  I          )        B = ( 4  1          )
       ( I  B  ⋱       )            ( 1  4  ⋱       )
       (    ⋱  ⋱  I    ),           (    ⋱  ⋱  1    ).
       (       I  B    )            (       1  4    )

The three largest eigenvalues computed by the (modified) block Jacobi-Davidson method, together with the CPU times, are listed in Table 1. The tolerance for both methods is 10^{−5}. The results show that the modified block Jacobi-Davidson method accelerates convergence through the extrapolation technique.

Table 1: Results for the three largest eigenvalues of A

   Algorithm   Eigenvalues                CPU time (s)
   BJD         7.9883, 7.9707, 7.9707     184.15
   MBJD        7.9883, 7.9707, 7.9707     166.23

Example 2. Consider the matrix A of order 500 × 500, built from the matrix B of order 20 × 20:

   A = ( B  I          )        B = ( 4    0.4            )
       ( I  B  ⋱       )            ( 0.4  4    ⋱         )
       (    ⋱  ⋱  I    ),           (      ⋱    ⋱    0.4  ).
       (       I  B    )            (           0.4  4    )

The four largest eigenvalues computed by the block Jacobi-Davidson method with the ILU preconditioner, together with the CPU times, are listed in Table 2. The tolerance for both methods is 10^{−5}. The results show that using the ILU-decomposition preconditioner when solving the correction equation accelerates convergence.

Table 2: Results for the four largest eigenvalues of A

   Algorithm   Eigenvalues                        CPU time (s)
   BJD         6.7756, 6.7754, 6.7321, 6.7315     16.32
   ILUBJD      6.7756, 6.7754, 6.7321, 6.7315     15.19
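For readers who want to reproduce the test problems, the following sketch (our addition) assembles the matrix of Example 1 from Kronecker products and checks its three largest eigenvalues with SciPy's ARPACK wrapper; reproducing the BJD/MBJD timings themselves of course requires implementations of the methods above.

```python
# Sketch: assemble the Example 1 matrix A (1600 x 1600) from Kronecker
# products and verify its three largest eigenvalues against Table 1.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nb = 40
B = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(nb, nb))   # B = tridiag(1, 4, 1)
T = sp.diags([1.0, 1.0], [-1, 1], shape=(nb, nb))           # block off-diagonal pattern
A = sp.kron(sp.identity(nb), B) + sp.kron(T, sp.identity(nb))

vals = spla.eigsh(A.tocsc(), k=3, which="LA", return_eigenvectors=False)
print(sorted(vals, reverse=True))   # ~ [7.9883, 7.9707, 7.9707], as in Table 1
```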

Acknowledgments

The author would like to thank the referees for their valuable comments and suggestions, which greatly improved the manuscript.

References

[1] A. A. Niftiyev and R. F. Efendiev. Variable domain eigenvalue problems for the Laplace operator with density. International Journal of Nonlinear Science, 16(2013): 280-288.
[2] R. Singh and J. Kumar. Computation of eigenvalues of singular Sturm-Liouville problems using modified Adomian decomposition method. International Journal of Nonlinear Science, 15(2013): 247-258.
[3] G. L. G. Sleijpen and H. A. Van der Vorst. A Jacobi-Davidson iteration method for linear eigenvalue problems. SIAM Review, 42(2000): 267-293.
[4] M. Crouzeix, B. Philippe and M. Sadkane. The Davidson method. SIAM Journal on Scientific Computing, 15(1994): 62-76.
[5] E. R. Davidson. The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices. Journal of Computational Physics, 17(1975): 87-94.
[6] D. R. Fokkema, G. L. G. Sleijpen and H. A. Van der Vorst. Jacobi-Davidson style QR and QZ algorithms for the reduction of matrix pencils. SIAM Journal on Scientific Computing, 20(1998): 94-125.