ISSN 1749-3889 (print), 1749-3897 (online)
International Journal of Nonlinear Science Vol. 17 (2014) No. 2, pp. 188-192

Modified Block Jacobi-Davidson Method for Solving Large Sparse Eigenproblems

Hongyi Mao
College of Science, Nanjing Forestry University, Nanjing 210037, China
(Received 20 June 2013, accepted 11 March 2014)

Abstract: The Jacobi-Davidson method is an efficient eigenvalue solver which uses an inner-outer scheme. In the outer iteration one tries to approximate an eigenpair, while in the inner iteration a linear system, called the correction equation, has to be solved, often iteratively. Solving the correction equations is the most time-consuming part of the computation. To handle the inexact solution of the correction equations, we apply an extrapolation technique. Furthermore, we use a class of preconditioners when solving the correction equations with Krylov subspace methods such as GMRES(m). Numerical experiments show that the new algorithm is efficient.

Keywords: Jacobi-Davidson algorithm; correction equation; extrapolation technique; preconditioner

1 Introduction

In many fields of science and engineering we often need to compute several extreme (maximum or minimum) or interior eigenvalues, and the corresponding eigenvectors, of a large sparse symmetric matrix. In 2000, Sleijpen and Van der Vorst [6] combined the correction method of Jacobi with the inner-outer iterative method of Davidson [4, 5] and proposed the Jacobi-Davidson method. This method has good stability and achieves fast convergence for non-diagonally dominant or non-normal matrices. At present, the Jacobi-Davidson method is one of the most effective methods for solving eigenvalue problems. However, when the matrix has multiple eigenvalues, the validity and reliability of the Jacobi-Davidson method decrease. To overcome this, the block Jacobi-Davidson method was proposed; it can compute multiple eigenvalues of a matrix.

Algorithm 1 (Block Jacobi-Davidson Method)
1 Input: matrix A, the maximum dimension m of the projection subspace, block size l, and a column-orthonormal matrix V_1 of size n x l;
2 For k = 1, 2, ..., m:
(1) Compute H_k = V_k^T A V_k;
(2) Compute l eigenpairs (lambda_i^(k), y_i^(k)), i = 1, 2, ..., l, of H_k;
(3) Compute the Ritz vectors phi_i^(k) = V_k y_i^(k), i = 1, 2, ..., l;
(4) Compute the residual vectors r_i^(k) = (A - lambda_i^(k) I) phi_i^(k), i = 1, 2, ..., l. Test for convergence: stop if ||r_i^(k)|| < Tol is satisfied;
(5) Solve

(I - phi_i^(k) phi_i^(k)T)(A - lambda_i^(k) I)(I - phi_i^(k) phi_i^(k)T) t_i^(k) = -r_i^(k),   phi_i^(k)T t_i^(k) = 0,   i = 1, 2, ..., l.

Obtain the new matrix T_k = [t_1^(k), ..., t_l^(k)], where Phi^(k) = (phi_1^(k), ..., phi_l^(k)).

Corresponding author. E-mail address: mhy@njfu.edu.cn

Copyright (c) World Academic Press, World Academic Union
IJNS.2014.04.15/803
(6) V_{k+1} = MGS(V_k, T_k) (modified Gram-Schmidt).

The block Jacobi-Davidson algorithm consists of two layers, an inner and an outer iteration: the outer iteration computes eigenpairs of the matrix, while the inner iteration solves the linear systems called correction equations. The main cost lies in the inner iterations. In the Jacobi-Davidson method, each iteration step requires the solution of the correction equation

(I - phi_i^(k) phi_i^(k)T)(A - lambda_i^(k) I)(I - phi_i^(k) phi_i^(k)T) t_i^(k) = -r_i^(k),   phi_i^(k)T t_i^(k) = 0,   i = 1, 2, ..., l,   (1)

where Phi^T Phi = I. In this work, we propose a modified version of the block Jacobi-Davidson algorithm by introducing an extrapolation parameter, which computes eigenvalues more efficiently.

The paper is organized as follows. In Section 2, the modified block Jacobi-Davidson method is given and the optimal parameter omega is discussed; a preconditioning technique is then used in the GMRES(m) algorithm when solving the correction equation. In Section 3, numerical results are presented to illustrate the behavior of the new algorithms.

2 Modified Block Jacobi-Davidson Method

As the approximation improves, the condition number of the coefficient matrix of the correction equation deteriorates; we can use an extrapolation technique to overcome this. Suppose an iterative method for (1) reads

t^(k) = G t^(k-1) + C.   (2)

Let

t^(k) = omega (G t^(k-1) + C) + (1 - omega) t^(k-1) = [omega G + (1 - omega) I] t^(k-1) + omega C,   (3)

where omega != 0 is a parameter. Then we get a new iterative method. Apparently, when omega = 1 the iteration formula (3) reduces to the original (2). This extrapolated method converges if and only if rho(G_omega) < 1, where G_omega = omega G + (1 - omega) I. If we only know that all the eigenvalues of G are contained in the interval [a, b], then the eigenvalues of G_omega are located in the interval with endpoints omega a + 1 - omega and omega b + 1 - omega. Let lambda(A) denote the set of eigenvalues of a matrix A. Then

rho(G_omega) = max_{lambda in lambda(G_omega)} |lambda| = max_{lambda in lambda(G)} |omega lambda + 1 - omega| <= max_{a <= lambda <= b} |omega lambda + 1 - omega|.

When 1 is not in [a, b], we can choose omega such that rho(G_omega) < 1.

Theorem 1 If all the eigenvalues of G are real and located in the interval [a, b], and 1 is not in [a, b], then the optimal parameter is omega_opt = 2 / (2 - a - b), and rho(G_{omega_opt}) <= 1 - |omega_opt| d, where d is the distance from 1 to [a, b].

Proof. The function max_{a <= lambda <= b} |omega lambda + 1 - omega| attains its minimum when |omega a + 1 - omega| = |omega b + 1 - omega|, which yields omega_opt = 2 / (2 - a - b). Since 1 is not in [a, b], either a > 1 or b < 1. For a <= b < 1, we have omega_opt > 0 and d = 1 - b. All the eigenvalues of G_{omega_opt} satisfy the inequality omega_opt a + 1 - omega_opt <= lambda <= omega_opt b + 1 - omega_opt, so

lambda <= omega_opt b + 1 - omega_opt = 1 + omega_opt (b - 1) = 1 - omega_opt d,

and

lambda >= omega_opt a + 1 - omega_opt = -1 + omega_opt d.

As a result, we get -1 + omega_opt d <= lambda <= 1 - omega_opt d, hence rho(G_{omega_opt}) <= 1 - omega_opt d. Similarly, for 1 < a <= b, we obtain the same result. The proof is completed.

We often use Krylov subspace methods to solve the correction equation (1). The restarted GMRES method is well known and widely used; it is listed below.

Algorithm 2 (GMRES(m))

IJNS homepage: http://www.nonlinearscience.org.uk/
1 Compute r_0 = b - A x_0, beta = ||r_0||_2, and v_1 = r_0 / beta;
2 Generate the Arnoldi basis and the Hessenberg matrix H_m by the Arnoldi algorithm starting with v_1;
3 Compute y_m which minimizes ||beta e_1 - H_m y||_2, and set x_m = x_0 + V_m y_m;
4 If satisfied then stop; else set x_0 := x_m and go to 1.

We can use a preconditioning technique when solving the correction equation with GMRES(m). The aim of preconditioning is to accelerate convergence by clustering the eigenvalues of the matrix A in the complex plane as much as possible. There are many techniques for constructing preconditioners, such as incomplete LU decomposition, incomplete Cholesky decomposition, and so on. The incomplete LU decomposition method is discussed here. The matrix A is divided into four blocks and the corresponding LU decomposition is given as follows:

( A_11  A_12 )   ( L_11       ) ( U_11  U_12 )
(            ) = (            ) (            )   (4)
( A_21  A_22 )   ( L_21  L_22 ) (       U_22 )

where A_11 is of size d_1 x d_1 and A_22 is of size (n - d_1) x (n - d_1). From (4), we have

A_11 = L_11 U_11,   (5)
A_12 = L_11 U_12,   (6)
A_21 = L_21 U_11,   (7)
A_22 = L_21 U_12 + L_22 U_22.   (8)

Algorithm 3 (Block ILU algorithm)
1 Form an incomplete LU decomposition of the block A_11 to get L_11, U_11;
2 Compute B_1, the inverse of L_11, which is still a unit lower triangular matrix;
3 U_12 = L_11^{-1} A_12 = B_1 A_12;
4 Compute C_1, the inverse of U_11, which is still an upper triangular matrix;
5 L_21 = A_21 U_11^{-1} = A_21 C_1;
6 Denote A'_22 = A_22 - L_21 U_12, and form the LU decomposition A'_22 = L_22 U_22.

The LU decomposition of A'_22 can always be carried out blockwise, as long as a proper block size is selected. Moreover, the computation of the inverse matrices in the algorithm exploits their triangular structure to eliminate unnecessary work, which improves the computation speed.

3 Numerical Experiment

In this section, numerical experiments are implemented in Matlab 6.5. The initial matrix is generated randomly and its column vectors are orthonormal.
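The interplay of Algorithms 2 and 3 — restarted GMRES accelerated by an incomplete LU preconditioner — can be sketched in a few lines. The following is a minimal SciPy illustration, not the author's Matlab code; the tridiagonal test system and the drop tolerance are assumptions chosen only to make the example self-contained, with SciPy's `spilu` playing the role of Algorithm 3.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Hypothetical test system standing in for a correction equation.
n = 500
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner; drop_tol
# controls how much fill-in is discarded during the factorization.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

# Restarted GMRES with restart length m = 30, as in GMRES(m).
x, info = gmres(A, b, M=M, restart=30)
print(info)  # info == 0 signals convergence
```

For a diagonally dominant system like this one the preconditioned iteration converges in very few restarts; for harder correction equations the drop tolerance trades setup cost against iteration count.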
For convenience, the block Jacobi-Davidson method is denoted BJD, the modified block Jacobi-Davidson method is denoted MBJD, and the block Jacobi-Davidson method with the ILU-decomposition preconditioner applied to the correction equation is denoted ILUBJD.

Example 1 Consider the matrix A of order 1600 x 1600,

    ( B  I          )
    ( I  B  I       )
A = (    .  .  .    )
    (       I  B  I )
    (          I  B ),

where the matrix B of order 40 x 40 is

    ( 4  1       )
    ( 1  4  1    )
B = (    .  .  . )
    (      1  4  1 )
    (         1  4 ).

The three biggest computed eigenvalues, together with CPU times, for the (modified) block Jacobi-Davidson method are listed in Table 1. The tolerance for both methods is 10^{-5}. The results show that the modified block Jacobi-Davidson method accelerates convergence by using the extrapolation technique.

IJNS email for contribution: editor@nonlinearscience.org.uk
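The matrix of Example 1 is easy to reproduce with Kronecker products, and its three biggest eigenvalues can be checked against Table 1 with a sparse eigensolver. A minimal sketch follows, in SciPy rather than the Matlab 6.5 used in the paper; the choice of `eigsh` (Lanczos) as the reference solver is ours, not the author's method.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Example 1: block tridiagonal A with B on the diagonal and I on the
# off-diagonals, 40 blocks of order 40, so A is 1600 x 1600.
n = 40
B = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n))
T = sp.diags([1.0, 1.0], [-1, 1], shape=(n, n))  # off-diagonal block pattern
A = sp.kron(sp.identity(n), B) + sp.kron(T, sp.identity(n))

# Three largest eigenvalues; A is symmetric, so eigsh applies.
vals = np.sort(eigsh(A, k=3, which="LA", return_eigenvectors=False))[::-1]
print(np.round(vals, 4))  # matches Table 1: [7.9883 7.9707 7.9707]
```

The double eigenvalue 7.9707 in the output is exactly the multiplicity that motivates the block variant of the algorithm.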
Table 1: The results for the three biggest eigenvalues of A

Algorithm   Eigenvalues               CPU time (s)
BJD         7.9883, 7.9707, 7.9707    184.15
MBJD        7.9883, 7.9707, 7.9707    166.23

Example 2 Consider the matrix A of order 500 x 500,

    ( B  I          )
    ( I  B  I       )
A = (    .  .  .    )
    (       I  B  I )
    (          I  B ),

where the matrix B of order 20 x 20 is

    ( 4    0.4           )
    ( 0.4  4    0.4      )
B = (      .    .    .   )
    (           0.4  4  0.4 )
    (                0.4  4 ).

Table 2: The results for the four biggest eigenvalues of A

Algorithm   Eigenvalues                       CPU time (s)
BJD         6.7756, 6.7754, 6.7321, 6.7315    16.32
ILUBJD      6.7756, 6.7754, 6.7321, 6.7315    15.19

The four biggest computed eigenvalues, together with CPU times, for the block Jacobi-Davidson method with the ILU preconditioner are listed in Table 2. The tolerance for both methods is 10^{-5}. The results show that using the ILU-decomposition preconditioner when solving the correction equation accelerates convergence.

Acknowledgments

The author would like to thank the referees for their valuable comments and suggestions, which improved the manuscript greatly.

References

[1] A. A. Niftiyev and R. F. Efendiev. Variable domain eigenvalue problems for the Laplace operator with density. International Journal of Nonlinear Science, 16(2013):280-288.
[2] R. Singh and J. Kumar. Computation of eigenvalues of singular Sturm-Liouville problems using modified Adomian decomposition method. International Journal of Nonlinear Science, 15(2013):247-258.
[3] G. L. Sleijpen and H. A. Van der Vorst. A Jacobi-Davidson method for linear eigenvalue problems. SIAM Review, 42(2000):267-293.
[4] M. Crouzeix, B. Philippe and M. Sadkane. The Davidson method. SIAM Journal on Scientific Computing, 15(1994):62-76.
[5] E. R. Davidson. The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices. Journal of Computational Physics, 17(1975):87-94.
[6] G. L. Sleijpen and H. A. Van der Vorst. Jacobi-Davidson style QR and QZ algorithms for the partial reduction of matrix pencils. SIAM Journal on Scientific Computing, 20(1998):94-125.