A GLOBAL ARNOLDI METHOD FOR LARGE NON-HERMITIAN EIGENPROBLEMS WITH SPECIAL APPLICATIONS TO MULTIPLE EIGENPROBLEMS. Congying Duan and Zhongxiao Jia*


TAIWANESE JOURNAL OF MATHEMATICS, Vol. 15, No. 4, August 2011. This paper is available online.

Abstract. Global projection methods have been used for solving numerous large matrix equations, but it has not been known whether and how such a method can be formulated for solving large eigenproblems. In this paper, a global Arnoldi method is proposed for large eigenproblems. It computes certain F-Ritz pairs that are used to approximate some eigenpairs. The global Arnoldi method inherits convergence properties of the standard Arnoldi method applied to a larger matrix whose distinct eigenvalues are the eigenvalues of the original given matrix. As an application, assuming that A is diagonalizable, we show that the global Arnoldi method is able to solve multiple eigenvalue problems. To be practical, we develop an implicitly restarted global Arnoldi algorithm with certain F-shifts suggested. In particular, this algorithm can be adaptively used to solve multiple eigenvalue problems. Numerical experiments show that the algorithm is efficient for the eigenproblem and is reliable for quite ill-conditioned multiple eigenproblems.

Received November 16, 2009, accepted February 23. Communicated by Wen-Wei Lin.
Mathematics Subject Classification: 65F15, 15A18.
Key words and phrases: Global Arnoldi process, Global Arnoldi method, F-orthonormal, Eigenvalue problem, Multiple, F-Ritz value, F-Ritz vector, Convergence, Implicit restart.
Supported by the National Basic Research Program of China 2011CB and the National Science Foundation of China. *Corresponding author.

1. INTRODUCTION

Jbilou, Messaoudi and Sadok [16] propose a global projection method for solving matrix equations. One of the main ingredients of the global methods is the use of the Frobenius scalar product. Simply speaking, here "global" describes the algorithms with the F-inner product to be defined by (1). They present a global Arnoldi process that generates an F-orthonormal basis of a matrix Krylov subspace, and based on it they derive the global FOM and the global GMRES methods. Several

authors [13, 14, 30] propose some other global methods, for instance, the global versions of CG, SCG, CR and CMRH. Over the past several years, numerous global methods have been widely used for solving linear systems with multiple right-hand sides and matrix equations, e.g., Sylvester equations and Riccati equations [4, 17, 18, 24, 31]. These methods fall into the category of global projection methods onto certain matrix Krylov subspaces. A convergence analysis of the global GMRES method is given in [5]. These global Krylov subspace algorithms appear effective when applied to the problems mentioned above. Other applications of the global Krylov subspace methods are in model reduction, especially for MIMO systems [7, 8, 9, 15]. However, no global projection method has been proposed for large matrix eigenproblems hitherto. Whether a global projection method can be formulated for eigenproblems and, if so, how to develop a practical algorithm has remained unknown. For large non-Hermitian eigenproblems, a major class of methods is the orthogonal projection methods, which include the famous Arnoldi method [1, 26, 29, 34]. The Arnoldi method uses the Arnoldi process to construct an orthonormal basis of the Krylov subspace generated by a single starting vector and computes Ritz pairs to approximate some of the eigenpairs of a large n×n matrix A. Assuming that A is diagonalizable, however, it is known that the Arnoldi method itself is unable to determine the multiplicities of the required eigenvalues and the associated eigenspaces [19, 20, 21]. To overcome these problems, block versions of the Arnoldi method have been proposed [2, 20, 23] that first use the block Arnoldi process to construct an orthonormal basis of the block Krylov subspace generated by a set of vectors and then extract Ritz pairs from the block Krylov subspace to approximate the desired eigenpairs.
In this paper, based on the global Arnoldi process starting with an n×s initial matrix, we show how to derive a global Arnoldi method for large unsymmetric eigenproblems and set up a general framework of global projection methods for eigenproblems, also called F-orthogonal projection methods. The method computes so-called F-Ritz pairs to approximate some of the eigenpairs of A. A fundamental difference from a usual projection method is that there are now s F-Ritz vectors associated with each F-Ritz value, each of which can be used as an approximate eigenvector. So for each F-Ritz value we can pick any one of the s F-Ritz vectors for our use. We show that the F-Ritz values are equal to certain usual Ritz values, over a closely related Krylov subspace, of a larger matrix that has each eigenvalue of A as an s-multiple one, and that the F-Ritz pairs are at least as accurate as the usual Ritz pairs. So the global Arnoldi method inherits convergence properties of the standard Arnoldi method. We pay special attention to the multiple eigenvalue problem. Under the assumption that A is diagonalizable, we show that the global Arnoldi method can be applied to adaptively determine multiplicities of the desired

eigenvalues and the corresponding eigenspaces. To be clearer and more illustrative, we say more about the global method. The global Arnoldi process constructs an F-orthonormal basis V_1, V_2, ..., V_m of the matrix Krylov subspace K_m(A, V_1) generated by an n×s initial matrix V_1 with Frobenius norm one, where V_i = (v_{i1}, v_{i2}, ..., v_{is}), i = 1, 2, ..., m are n×s matrices. If we interpret the basis (V_1, V_2, ..., V_m) as ms linearly independent vectors, this matrix Krylov subspace can be regarded as a usual block Krylov subspace of dimension ms starting with the initial block vector V_1, so that we can decompose it into the direct sum of the s single-vector Krylov subspaces K_m(A, v_{1j}), j = 1, 2, ..., s of dimension m. By the F-orthogonal projection principle, we get m approximate eigenvalues λ_i^(m), i = 1, 2, ..., m, called the F-Ritz values with respect to the matrix Krylov subspace. For each F-Ritz value λ_i^(m), we obtain an approximate eigenvector φ_{ij}^(m), j = 1, 2, ..., s from each single-vector Krylov subspace. Assume that the F-Ritz value λ_i^(m) and the s corresponding F-Ritz vectors φ_{i1}^(m), φ_{i2}^(m), ..., φ_{is}^(m) have converged. Then they are all good approximations to eigenpairs of A. If a required eigenvalue λ_i is simple, the s F-Ritz vectors are numerically linearly dependent. So if the multiplicity of the desired eigenvalue is of no concern, we simply use any one of the s F-Ritz vectors as an approximate eigenvector rather than compute all of them. If λ_i is d_i-multiple with d_i < s, the s F-Ritz vectors must be numerically linearly dependent, from which we can determine the multiplicity d_i reliably and efficiently. If λ_i is d_i-multiple with d_i ≥ s, these s F-Ritz vectors are linearly independent, so λ_i is at least s-multiple. We then run the global Arnoldi method starting with a new V_1 that is independent of the old V_1, and compute the new converged F-Ritz vectors. Adding them to the previous φ_{ij}^(m), j = 1, 2, ..., s, if they are numerically linearly dependent, then the rank is d_i; otherwise, continue.
Proceed in this way until d_i is determined. We will quantitatively establish a solid foundation for the above claims. Both theory and numerical experiments illustrate that the procedure is reliable for determining eigenvalue multiplicities and eigenspaces when the condition numbers of the desired eigenvectors are considerably smaller than the reciprocals of the residual norms. This means that the procedure is effective for quite ill-conditioned multiple eigenproblems. The global Arnoldi method becomes very expensive in storage and computational cost as m increases. Therefore, restarting is necessary when the method does not deliver the approximate eigenpairs with prescribed accuracy for the maximum m allowed. The need for restarting was first recognized by Karush [22], and various restarting schemes have been developed by many researchers, e.g., Paige [25], Cullum and Donath [10], Golub and Underwood [12], Saad [27, 28] and Chatelin and Ho [6]. All of these schemes are explicit restarting. Over the years, the most popular restarting scheme has been the implicit restarting proposed by Sorensen [33], which

combines the implicitly shifted QR iterations with the Arnoldi process and leads to a truncated form of the implicitly shifted QR iteration. In this paper, we extend implicit restarting to the global Arnoldi process and develop an implicitly restarted global Arnoldi algorithm (IRGA) with the unwanted F-Ritz values as shifts, also called exact shifts as in [33]. The rest of the paper is organized as follows. In the next section, we briefly review the global Arnoldi process and the global FOM and GMRES algorithms. In Section 3, we propose a global Arnoldi method for large unsymmetric eigenproblems. In Section 4, we show how the global Arnoldi method is used to solve the multiple eigenproblem. In Section 5, we discuss how to implicitly restart the global Arnoldi method and develop the implicitly restarted global Arnoldi algorithm with the exact shifts suggested. Finally, we report numerical examples to illustrate the efficiency and reliability of the algorithm in Section 6. Some notation is now introduced. Throughout the paper, A is a large n×n diagonalizable matrix, and λ_i, φ_i are its eigenvalues and eigenvectors, with the λ_i labeled in a desired order. Denote by ||·|| the spectral norm of a matrix and the vector 2-norm, by ||·||_F the Frobenius norm of a matrix, by the superscript H the conjugate transpose of a matrix or vector, and by I the identity matrix with order clear from the context. The eigenvectors and their approximations are normalized to have unit length.

2. THE GLOBAL ARNOLDI PROCESS AND THE GLOBAL FOM AND GMRES

Let M_{n,s} denote the complex linear space of n×s rectangular matrices. For two matrices X and Y in M_{n,s}, we define their F-inner product by

(1) <X, Y>_F = tr(X^H Y),

where tr(Z) denotes the trace of the square matrix Z. Note that ||X||_F = <X, X>_F^{1/2}. We will use the notation X ⊥_F Y to denote <X, Y>_F = 0, meaning that X and Y are F-orthogonal. For a starting matrix V ∈ M_{n,s}, the matrix Krylov subspace K_m(A, V) is defined by

(2) K_m(A, V) = span{V, AV, ..., A^{m-1} V},

which is a subset of M_{n,s}.
Z ∈ K_m(A, V) means that there are scalars α_i, i = 0, 1, ..., m-1 such that

(3) Z = Σ_{i=0}^{m-1} α_i A^i V.
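As a concrete illustration (ours, not part of the original paper), a short NumPy sketch of the F-inner product (1) and of a member (3) of the matrix Krylov subspace (2); the last check uses the elementary fact that tr(X^H Y) equals the ordinary inner product of the stacked columns of X and Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, m = 8, 2, 3
A = rng.standard_normal((n, n))
V = rng.standard_normal((n, s))

def f_inner(X, Y):
    # F-inner product (1): <X, Y>_F = tr(X^H Y)
    return np.trace(X.conj().T @ Y)

# The induced norm is the Frobenius norm: ||V||_F = <V, V>_F^{1/2}
assert np.isclose(np.sqrt(f_inner(V, V)), np.linalg.norm(V, 'fro'))

# A (non-orthonormal) spanning set of K_m(A, V) = span{V, AV, ..., A^{m-1} V}
basis = [np.linalg.matrix_power(A, i) @ V for i in range(m)]

# A member Z = sum_i alpha_i A^i V of the matrix Krylov subspace, as in (3)
alpha = rng.standard_normal(m)
Z = sum(a * B for a, B in zip(alpha, basis))
print(Z.shape)  # an n-by-s matrix, i.e., an element of M_{n,s}

# tr(X^H Y) equals the l2 inner product of the column-stacked vectors
assert np.isclose(f_inner(V, Z), V.flatten(order='F') @ Z.flatten(order='F'))
```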

Let V = (v_1, v_2, ..., v_s) and define the linear operator vec: M_{n,s} → C^{ns} by

(4) vec(V) = (v_1^H, v_2^H, ..., v_s^H)^H.

Then we have

(5) <X, Y>_F = <vec(X), vec(Y)>,

where <·,·> denotes the usual l_2 inner product of the complex vector space C^{ns}. Denote by A ⊗ B the Kronecker product of the matrices A and B. The following basic properties hold, e.g., [11]:

1. (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
2. (A ⊗ B)^H = A^H ⊗ B^H.
3. If A ∈ C^{n×n} and X ∈ C^{n×m}, then vec(AX) = (I_m ⊗ A) vec(X).
4. Each eigenvalue λ_i, i = 1, 2, ..., n of A is an s-multiple eigenvalue of I_s ⊗ A.

The following global Arnoldi process is based on the modified Gram-Schmidt process. It constructs an F-orthonormal basis V_1, V_2, ..., V_m of the matrix Krylov subspace K_m(A, V), that is, tr(V_i^H V_j) = δ_{ij} for i, j = 1, ..., m, where δ_{ij} is the Kronecker delta.

Algorithm 1. The global Arnoldi process
1. V_1 = V / ||V||_F
2. for j = 1, 2, ..., m do
     W := A V_j;
     for i = 1, 2, ..., j do
       h_{i,j} = <W, V_i>_F;
       W = W - h_{i,j} V_i;
     end
     h_{j+1,j} = ||W||_F;
     V_{j+1} = W / h_{j+1,j}.
   end

This process needs ms matrix-vector products with A plus nm^2 s flops. Define V_m = (V_1, V_2, ..., V_m), and let H_m and H̄_m be the m×m and (m+1)×m Hessenberg matrices whose nonzero entries h_{i,j} are defined by Algorithm 1. Then

(6) H̄_m = ( H_m ; h_{m+1,m} e_m^H ),

where e_m = (0, ..., 0, 1)^H is the m-th canonical basis vector of C^m. We have the following results [16, 24].
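As an editorial illustration (ours, not the authors' code), a minimal NumPy implementation of Algorithm 1 for real data; it checks the F-orthonormality tr(V_i^H V_j) = δ_{ij} and the Arnoldi-type relation A V_m = V_m (H_m ⊗ I_s) + h_{m+1,m}(0, ..., 0, V_{m+1}) established in Theorem 1 below:

```python
import numpy as np

def global_arnoldi(A, V, m):
    """Global Arnoldi process (Algorithm 1): returns the F-orthonormal basis
    blocks V_1, ..., V_{m+1} and the (m+1)-by-m Hessenberg matrix."""
    Vs = [V / np.linalg.norm(V, 'fro')]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):
            # h_{i,j} is the F-inner product tr(V_i^H W) (real data here)
            H[i, j] = np.trace(Vs[i].T @ W)
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs.append(W / H[j + 1, j])
    return Vs, H

rng = np.random.default_rng(1)
n, s, m = 10, 2, 4
A = rng.standard_normal((n, n))
Vs, H = global_arnoldi(A, rng.standard_normal((n, s)), m)

# F-orthonormality: tr(V_i^H V_j) = delta_ij
G = np.array([[np.trace(Vi.T @ Vj) for Vj in Vs] for Vi in Vs])
assert np.allclose(G, np.eye(m + 1), atol=1e-10)

# Relation (7): A V_m = V_m (H_m (x) I_s) + h_{m+1,m} (0, ..., 0, V_{m+1})
Vm = np.hstack(Vs[:m])                       # n-by-ms
T = np.zeros((n, m * s))
T[:, -s:] = H[m, m - 1] * Vs[m]
assert np.allclose(A @ Vm, Vm @ np.kron(H[:m, :m], np.eye(s)) + T, atol=1e-8)
```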

Theorem 1. Let V_m, H_m and H̄_m be defined as above. Then V_1, V_2, ..., V_m form an F-orthonormal basis of the matrix Krylov subspace K_m(A, V_1) and

(7) A V_m = V_m (H_m ⊗ I_s) + h_{m+1,m} (0_{n×s}, ..., 0_{n×s}, V_{m+1})
(8)       = V_m (H_m ⊗ I_s) + r_m (e_m^H ⊗ I_s),
(9) A V_m = V_{m+1} (H̄_m ⊗ I_s),

where 0_{n×s} denotes the n×s zero matrix and r_m = h_{m+1,m} V_{m+1}.

Based on the global Arnoldi process, Jbilou et al. [16] propose a global FOM (GL-FOM) algorithm and a global GMRES (GL-GMRES) algorithm for solving linear systems with multiple right-hand sides and numerous matrix equations. Taking the linear system with s right-hand sides as an example, we briefly describe the GL-FOM and GL-GMRES algorithms. Let X_0 be an initial guess for the solution X of AX = B and R_0 = B - A X_0 its associated residual. At the m-th iterate of the GL-FOM algorithm, the correction Z_m = V_m (y_m ⊗ I_s) is extracted from the matrix Krylov subspace K_m(A, R_0) such that the residual R_m = B - A(X_0 + Z_m) = R_0 - A Z_m satisfies

(10) R_m ⊥_F K_m(A, R_0),

where y_m is shown to be the solution of the m×m linear system

(11) H_m y_m = β e_1,

with β = ||R_0||_F. In the GL-GMRES algorithm, the correction Z_m = V_m (y_m ⊗ I_s) is determined by imposing the residual minimization condition

(12) ||R_m||_F = min_{y ∈ C^m} ||R_0 - A V_m (y ⊗ I_s)||_F,

where y_m is the solution of the (m+1)×m least squares problem

(13) y_m = arg min_{y ∈ C^m} ||β e_1 - H̄_m y||,

with β = ||R_0||_F. For details on these two global methods, we refer to [16]. We remark that if s = 1 then the global Arnoldi process reduces to the standard Arnoldi process and the GL-FOM and GL-GMRES methods become the standard FOM (Arnoldi) and GMRES methods.

3. A GLOBAL ARNOLDI METHOD FOR EIGENPROBLEMS

As a first step towards deriving a global Arnoldi method for eigenproblems, a simple but fundamental observation is that we can interpret the matrix Krylov subspace K_m(A, V_1) of M_{n,s} as a standard block Krylov subspace of C^n starting with the

initial block vector V_1. Correspondingly, we can interpret each basis element V_i, i = 1, 2, ..., m of the matrix Krylov subspace K_m(A, V_1) as s usual vectors, and each n×s element of it as s vectors instead of an n×s matrix. We will switch K_m(A, V_1) between the matrix Krylov subspace and the standard Krylov subspace according to our needs. In the sequel, when speaking of the block Krylov subspace, suppose that V_m = (V_1, V_2, ..., V_m) is of full column rank. Then Algorithm 1 also generates a basis of the block Krylov subspace K_m(A, V_1) in C^n. However, we should keep in mind that this basis is generally not orthogonal. Mathematically, when K_m(A, V_1) is regarded as the block Krylov subspace, we might use the standard orthogonal projection principle to solve eigenproblems. In this case, premultiplying (7) by (V_m^H V_m)^{-1} V_m^H gives

(14) (V_m^H V_m)^{-1} V_m^H A V_m = H_m ⊗ I_s + (V_m^H V_m)^{-1} V_m^H h_{m+1,m} (0_{n×s}, ..., 0_{n×s}, V_{m+1}),

which is just the projection matrix of A onto the subspace spanned by the columns of V_m, whose ms eigenvalues λ̂_i^(m) are just the Ritz values of A with respect to this subspace, and the (unnormalized) Ritz vectors are φ̂_i^(m) = V_m ŷ_i^(m), where the ŷ_i^(m) are the eigenvectors of the projection matrix associated with the eigenvalues λ̂_i^(m). Mathematically, these are the same as the Ritz values and Ritz vectors obtained when the standard block Arnoldi process is exploited to generate an orthonormal basis of K_m(A, V_1). However, computationally we do not recommend this approach. Firstly, it may be unstable, as the columns of V_m can be nearly linearly dependent so that V_m^H V_m is nearly singular; secondly, it is more costly than the block Arnoldi method for the same m. Actually, apart from the cost of the m-step global Arnoldi process, we need n(ms)^2 + 2(ms)^3 flops for forming V_m^H V_m and inverting it, while the block Arnoldi process costs ms matrix-vector products plus n(ms)^2 flops, assuming that no reorthogonalization is used.
So such a procedure costs more than the block Arnoldi process for the same m. Therefore, we abandon it and seek other viable procedures. Define 𝒜 = I_s ⊗ A. Then 𝒜 has each eigenvalue λ_i of A as an s-multiple eigenvalue. Assume that A is diagonalizable and define D = diag(λ_1, λ_2, ..., λ_n) and the eigenvector matrix Φ = (φ_1, φ_2, ..., φ_n). Then

(I_s ⊗ Φ)^{-1} 𝒜 (I_s ⊗ Φ) = I_s ⊗ D,

that is, 𝒜 is block diagonal with the s diagonal blocks Φ D Φ^{-1} = A.

From this eigen-decomposition, we get the s corresponding eigenvectors of 𝒜 associated with λ_i; they have the form w_{ik} = (0, ..., φ_i^H, ..., 0)^H, whose possible nonzero entries are in positions n(k-1)+1 to nk, k = 1, 2, ..., s. Any linear combination of them is still an eigenvector of 𝒜 associated with λ_i and may have no special structure. The global Arnoldi process on A starting with the matrix V_1 is closely related to the standard Arnoldi process on 𝒜 starting with the initial vector vec(V_1). The following results can be easily justified [24], and they are the very first step towards proposing and understanding a global Arnoldi method for eigenproblems.

Theorem 2. Let H_m and H̄_m be defined as above. Then vec(V_1), vec(V_2), ..., vec(V_m) form an orthonormal basis of the usual Krylov subspace K_m(𝒜, vec(V_1)) generated by 𝒜 and the starting vector vec(V_1). Define the matrix 𝒱_m = (vec(V_1), vec(V_2), ..., vec(V_m)). Then the following standard Arnoldi process holds:

(15) 𝒜 𝒱_m = 𝒱_m H_m + h_{m+1,m} vec(V_{m+1}) e_m^H
(16)        = 𝒱_{m+1} H̄_m.

Since vec(V_1), vec(V_2), ..., vec(V_m) are orthonormal, Theorem 2 shows that H_m is the orthogonal projection matrix of 𝒜 onto the subspace K_m(𝒜, vec(V_1)). Let (λ_i^(m), y_i^(m)), i = 1, 2, ..., m be the eigenpairs of H_m with ||y_i^(m)|| = 1. The eigenvalues λ_i^(m) are the Ritz values of 𝒜 with respect to the subspace K_m(𝒜, vec(V_1)) and the corresponding Ritz vectors are w_i^(m) = (vec(V_1), vec(V_2), ..., vec(V_m)) y_i^(m), i = 1, 2, ..., m. So we could simply use the Ritz pairs to approximate some eigenpairs of 𝒜. The residual norms are computed cheaply by

(17) ||𝒜 w_i^(m) - λ_i^(m) w_i^(m)|| = h_{m+1,m} |e_m^H y_i^(m)|

without explicitly forming w_i^(m) until convergence occurs. However, the situation is subtle and by no means so simple, as 𝒜 is much larger than A in size and all its eigenvalues are at least s-multiple. We note that, on the one hand, each eigenvalue of A is an s-multiple one of 𝒜 and, on the other hand, the eigenvalues of H_m are always simple if it is diagonalizable.
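A numerical check (ours, under the same setup) of Theorem 2 and of property 4 above: the global Arnoldi process on A with starting matrix V_1 produces the same Hessenberg matrix and, blockwise, the same basis as the standard Arnoldi process on 𝒜 = I_s ⊗ A started with vec(V_1); moreover, each eigenvalue of A appears s times in the spectrum of 𝒜:

```python
import numpy as np

def global_arnoldi(A, V, m):
    # Global Arnoldi process (Algorithm 1) on A with n-by-s starting matrix V.
    Vs = [V / np.linalg.norm(V, 'fro')]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):
            H[i, j] = np.trace(Vs[i].T @ W)
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs.append(W / H[j + 1, j])
    return Vs, H

def arnoldi(A, v, m):
    # Standard Arnoldi process with starting vector v.
    Q = [v / np.linalg.norm(v)]
    H = np.zeros((m + 1, m))
    for j in range(m):
        w = A @ Q[j]
        for i in range(j + 1):
            H[i, j] = Q[i] @ w
            w = w - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w)
        Q.append(w / H[j + 1, j])
    return Q, H

rng = np.random.default_rng(3)
n, s, m = 8, 2, 4
A = rng.standard_normal((n, n))
V1 = rng.standard_normal((n, s))

Vs, Hg = global_arnoldi(A, V1, m)
calA = np.kron(np.eye(s), A)                      # the ns-by-ns matrix I_s (x) A
Q, Hs = arnoldi(calA, V1.flatten(order='F'), m)   # start with vec(V_1)

# Same Hessenberg matrix, and vec(V_j) matches the standard Arnoldi basis
assert np.allclose(Hg, Hs, atol=1e-8)
assert all(np.allclose(Vj.flatten(order='F'), qj, atol=1e-8)
           for Vj, qj in zip(Vs, Q))

# Property 4: each eigenvalue of A is an s-multiple eigenvalue of I_s (x) A
assert np.allclose(np.sort_complex(np.linalg.eigvals(calA)),
                   np.sort_complex(np.tile(np.linalg.eigvals(A), s)), atol=1e-8)
```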
In fact, assuming that the matrix is diagonalizable, the standard Arnoldi method works as if the matrix had only simple eigenvalues [19, 20, 21, 26, 27]. Therefore, when a Ritz pair defined above

converges, we get only one simple approximation to the s-multiple eigenpairs of 𝒜. So we have to do more. We now propose a better and practical global Arnoldi method that works on the original A directly rather than on the much larger ns×ns matrix 𝒜. Recall that the global Arnoldi process constructs an F-orthonormal basis V_1, V_2, ..., V_m of the matrix Krylov subspace K_m(A, V_1). When speaking of the block Krylov subspace K_m(A, V_1), we can simply decompose it into the direct sum of the s single-vector Krylov subspaces generated by the starting vectors v_11, v_12, ..., v_1s, respectively:

(18) K_m(A, V_1) = K_m(A, v_11) ⊕ K_m(A, v_12) ⊕ ... ⊕ K_m(A, v_1s).

Keep in mind the assumption that the columns of V_1, V_2, ..., V_m are linearly independent. Then, in MATLAB notation, the columns of V_m^j = V_m(:, j:s:ms) form a basis of K_m(A, v_1j), j = 1, 2, ..., s. Since v_1j, j = 1, 2, ..., s are supposed to be linearly independent, we get s distinct m-dimensional Krylov subspaces K_m(A, v_1j). Now we still use the λ_i^(m), i = 1, 2, ..., m as approximations to some of the eigenvalues of A, called the F-Ritz values of A with respect to the matrix Krylov subspace K_m(A, V_1), but we compute the s new vectors

(19) φ_{ij}^(m) = V_m^j y_i^(m), j = 1, 2, ..., s,

called the F-Ritz vectors of A with respect to K_m(A, V_1). For each λ_i^(m) we now have the s corresponding F-Ritz vectors φ_{ij}^(m) ∈ K_m(A, v_1j). We shed more light on the F-Ritz pairs. Suppose that they have already converged. Then if λ_i is simple, the s F-Ritz vectors must be almost parallel, as they approximate the same φ_i. If λ_i is multiple, each of the s F-Ritz vectors is a good approximate eigenvector of A. If we care about neither the multiplicity of λ_i nor the whole eigenspace of A associated with λ_i, then we can simply use any one of the F-Ritz vectors as an approximate eigenvector of A instead of computing all of them. Let

U_i^(m) = V_m (y_i^(m) ⊗ I_s) = (V_m^1 y_i^(m), ..., V_m^s y_i^(m)) = (φ_{i1}^(m), ..., φ_{is}^(m)).
Then ||U_i^(m)||_F = 1, and it is easily verified that the (λ_i^(m), U_i^(m)), i = 1, 2, ..., m are the solutions of

(20) U_i^(m) ∈ K_m(A, V_1),  (A - λ_i^(m) I) U_i^(m) ⊥_F K_m(A, V_1).

We call the (λ_i^(m), U_i^(m)) the F-Ritz pairs of A with respect to the matrix Krylov subspace K_m(A, V_1). Therefore, the global Arnoldi method falls into a general framework, which we call F-orthogonal projection. We investigate how an F-Ritz value and an F-Ritz vector approximate an eigenvalue and its associated eigenvector of A by relating them to the standard Ritz pair (λ_i^(m), w_i^(m)) of 𝒜 with respect to K_m(𝒜, vec(V_1)).
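Before the formal bounds, a quick numerical check (ours, not the paper's) that an F-Ritz pair indeed satisfies the F-orthogonal projection conditions (20): U_i^(m) has unit Frobenius norm and its residual is F-orthogonal to every basis block V_i:

```python
import numpy as np

def global_arnoldi(A, V, m):
    # Global Arnoldi process (Algorithm 1).
    Vs = [V / np.linalg.norm(V, 'fro')]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):
            H[i, j] = np.trace(Vs[i].T @ W)
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs.append(W / H[j + 1, j])
    return Vs, H

rng = np.random.default_rng(7)
n, s, m = 12, 2, 5
A = rng.standard_normal((n, n))
Vs, H = global_arnoldi(A, rng.standard_normal((n, s)), m)

theta, Y = np.linalg.eig(H[:m, :m])              # F-Ritz values and y-vectors
Vm = np.hstack(Vs[:m])
y = Y[:, 0] / np.linalg.norm(Y[:, 0])
U = Vm @ np.kron(y.reshape(-1, 1), np.eye(s))    # U = V_m (y (x) I_s)

# ||U||_F = 1 and (A - lambda I) U is F-orthogonal to every basis block V_i
assert np.isclose(np.linalg.norm(U, 'fro'), 1.0)
R = A @ U - theta[0] * U
assert all(abs(np.trace(Vi.conj().T @ R)) < 1e-8 for Vi in Vs[:m])
```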

Theorem 3. We have

(21) ||(A - λ_i^(m) I) U_i^(m)||_F = h_{m+1,m} |e_m^H y_i^(m)|,
(22) ||(A - λ_i^(m) I) φ_{ij}^(m)|| ≤ h_{m+1,m} |e_m^H y_i^(m)|, j = 1, 2, ..., s.

Proof. From (7) we have

(A - λ_i^(m) I) U_i^(m)
  = A V_m (y_i^(m) ⊗ I_s) - λ_i^(m) V_m (y_i^(m) ⊗ I_s)
  = V_m ((H_m y_i^(m)) ⊗ I_s) + h_{m+1,m} (0_{n×s}, ..., 0_{n×s}, V_{m+1})(y_i^(m) ⊗ I_s) - λ_i^(m) V_m (y_i^(m) ⊗ I_s)
  = h_{m+1,m} (0_{n×s}, ..., 0_{n×s}, V_{m+1})(y_i^(m) ⊗ I_s)
  = h_{m+1,m} V_{m+1} (e_m^H ⊗ I_s)(y_i^(m) ⊗ I_s)
  = h_{m+1,m} (e_m^H y_i^(m)) V_{m+1}.

Therefore, we get ||(A - λ_i^(m) I) U_i^(m)||_F = h_{m+1,m} |e_m^H y_i^(m)|, which is just (21). Noting that

||A V_m^j y_i^(m) - λ_i^(m) V_m^j y_i^(m)|| ≤ ||A V_m (y_i^(m) ⊗ I_s) - λ_i^(m) V_m (y_i^(m) ⊗ I_s)||_F,

we have shown (22).

Recall (17). (21) indicates that the residual Frobenius norm of the F-Ritz pair (λ_i^(m), U_i^(m)) is just equal to the residual 2-norm of the Ritz pair (λ_i^(m), w_i^(m)) obtained by the standard Arnoldi method applied to 𝒜 over K_m(𝒜, vec(V_1)). (22) can be used as a stopping criterion which cheaply checks whether the global Arnoldi method has converged, without computing the φ_{ij}^(m) by (19) until convergence occurs. Recall that for the standard Arnoldi method on 𝒜 over K_m(𝒜, vec(V_1)) the Ritz vectors are

w_i^(m) = 𝒱_m y_i^(m) = [(V_m^1)^H, ..., (V_m^s)^H]^H y_i^(m) = [(φ_{i1}^(m))^H, ..., (φ_{is}^(m))^H]^H

with ||w_i^(m)|| = 1, as it is assumed that ||y_i^(m)|| = 1. So the φ_{ij}^(m) are unnormalized and their norms are always smaller than one. Since none of the nonorthonormal bases V_m^j is special, the φ_{ij}^(m) should generally be comparable in

size, i.e., ||φ_{ij}^(m)|| ≈ 1/√s. Therefore, we get

||A V_m^j y_i^(m) - λ_i^(m) V_m^j y_i^(m)||
  ≈ (1/√s) ||[(A V_m^1 y_i^(m) - λ_i^(m) V_m^1 y_i^(m)), ..., (A V_m^s y_i^(m) - λ_i^(m) V_m^s y_i^(m))]||_F
  = (1/√s) ||A V_m (y_i^(m) ⊗ I_s) - λ_i^(m) V_m (y_i^(m) ⊗ I_s)||_F
  = (1/√s) h_{m+1,m} |e_m^H y_i^(m)|.

So, combining the above and the proof of Theorem 3, we have

(23) r_{ij}^(m) := ||(A - λ_i^(m) I) φ_{ij}^(m)|| / ||φ_{ij}^(m)|| ≈ h_{m+1,m} |e_m^H y_i^(m)|.

The left-hand side of this relation is the residual norm of the normalized F-Ritz pair, while from (17) the right-hand side is just the residual norm of (λ_i^(m), w_i^(m)) as an approximate eigenpair of 𝒜. So (22) and (23) demonstrate that all the s F-Ritz vectors φ_{ij}^(m) are good approximate eigenvectors of A associated with the eigenvalue λ_i if the standard Arnoldi method applied to 𝒜 converges. So the global Arnoldi method inherits the convergence properties of the standard Arnoldi method and achieves comparable residuals for the same m. For a convergence analysis of the standard Arnoldi method, we refer to [19, 20, 21, 26, 27]. We now present a basic global Arnoldi algorithm for eigenproblems.

Algorithm 2. A basic global Arnoldi algorithm
1. Let (λ_i, φ_i), i = 1, 2, ..., k be the k desired eigenpairs of A and tol a user-prescribed tolerance. Choose an n×s matrix V and take V_1 = V/||V||_F as the starting matrix.
2. For m = k+1, k+2, ... until convergence:
   (a) Construct the F-orthonormal basis V_1, V_2, ..., V_m by Algorithm 1.
   (b) Compute the m eigenpairs (λ_i^(m), y_i^(m)), i = 1, 2, ..., m of the resulting Hessenberg matrix H_m and select k F-Ritz values, say λ_i^(m), i = 1, 2, ..., k, as approximations to the k desired λ_i, i = 1, 2, ..., k.
   (c) Form φ_i^(m) = φ_{i1}^(m) = V_m^1 y_i^(m), i = 1, 2, ..., k.
   (d) Test convergence of the k approximate eigenpairs (λ_i^(m), φ_i^(m)).
   (e) If the residual norms all drop below tol, then stop.

We mention that if s = 1 then the global Arnoldi method is just the standard basic Arnoldi method.
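The extraction step of Algorithm 2 can be sketched as follows (our illustration, not the authors' code): compute the eigenpairs of H_m, form U_i^(m) = V_m (y_i^(m) ⊗ I_s), whose columns are the s F-Ritz vectors (19), and verify the cheap residual identity (21):

```python
import numpy as np

def global_arnoldi(A, V, m):
    # Global Arnoldi process (Algorithm 1).
    Vs = [V / np.linalg.norm(V, 'fro')]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):
            H[i, j] = np.trace(Vs[i].T @ W)
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs.append(W / H[j + 1, j])
    return Vs, H

rng = np.random.default_rng(4)
n, s, m = 30, 2, 12
A = rng.standard_normal((n, n))
Vs, H = global_arnoldi(A, rng.standard_normal((n, s)), m)

theta, Y = np.linalg.eig(H[:m, :m])          # F-Ritz values and y-vectors
Vm = np.hstack(Vs[:m])                       # blocks V_1, ..., V_m side by side

i = np.argmax(np.abs(theta))                 # take the dominant F-Ritz value
y = Y[:, i] / np.linalg.norm(Y[:, i])

# U_i = V_m (y (x) I_s); its columns are the s F-Ritz vectors (19)
U = Vm @ np.kron(y.reshape(-1, 1), np.eye(s))
res_F = np.linalg.norm(A @ U - theta[i] * U, 'fro')

# Identity (21): ||(A - lambda I) U||_F = h_{m+1,m} |e_m^H y|
est = H[m, m - 1] * abs(y[-1])
assert np.isclose(res_F, est, atol=1e-8)
```

In a practical run, `est` would be monitored as the stopping criterion and U formed only after it drops below the tolerance, exactly as (22) suggests.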

4. MULTIPLE EIGENPROBLEMS

As we have seen previously, under the assumption that A is diagonalizable, if H_m is diagonalizable then the F-Ritz values are always simple even if A has multiple eigenvalues. So the global Arnoldi method works as if A had only simple eigenvalues. As a result, when a desired λ_i is multiple, the method itself cannot determine the multiplicity of λ_i or compute the eigenspace associated with it. Exploiting the theoretical analysis of [21], we consider how the global Arnoldi method can be successfully used to solve multiple eigenvalue problems. Before the discussion, we need some notation. We always assume that A is an n×n diagonalizable matrix with M distinct eigenvalues λ_i, where the multiplicity of λ_i is d_i, i = 1, 2, ..., M. Let P_i be the d_i-dimensional eigenspace associated with λ_i and let the columns of Φ_{d_i} = (φ_1, φ_2, ..., φ_{d_i}) form a basis of P_i, where ||φ_j|| = 1, j = 1, 2, ..., d_i. Write the n×s starting matrix V_1 as V_1 = (v_11, v_12, ..., v_1s). Then, given Φ_{d_i}, each v_1j, 1 ≤ j ≤ s, can be uniquely expanded as

(24) v_1j = b_{j1} φ_1 + b_{j2} φ_2 + ... + b_{jd_i} φ_{d_i} + u_j,  u_j ∈ P_1 ⊕ ... ⊕ P_{i-1} ⊕ P_{i+1} ⊕ ... ⊕ P_M.

Define

(25) B_s = [ b_{11} b_{12} ... b_{1d_i} ; b_{21} b_{22} ... b_{2d_i} ; ... ; b_{s1} b_{s2} ... b_{sd_i} ].

Obviously, B_s is row rank deficient when s > d_i. Assume that B_s has full row rank for s ≤ d_i. We rewrite the above v_1j as

(26) v_1j = β_j φ̃_j + u_j,  u_j ∈ P_1 ⊕ ... ⊕ P_{i-1} ⊕ P_{i+1} ⊕ ... ⊕ P_M,

where the φ̃_j, j = 1, 2, ..., s are also unit-length eigenvectors associated with λ_i. Under the assumption on B_s, just as {φ_j}_{j=1}^{d_i}, the set {φ̃_j}_{j=1}^{d_i} is also a basis of P_i, and for s > d_i the vectors φ̃_j, j = d_i+1, ..., s must be linearly dependent on φ̃_j, j = 1, 2, ..., d_i and belong to the span of {φ̃_j}_{j=1}^{d_i}. In other words, define Φ̃_s = (φ̃_1, φ̃_2, ..., φ̃_s) and Φ_s^(m) = (φ_1^(m), φ_2^(m), ..., φ_s^(m)). Then Φ̃_s has full column rank for s ≤ d_i, while it must be column rank deficient, with smallest singular value zero, for s > d_i. From now on we omit the tilde on the Greek letters without ambiguity; furthermore, we assume the φ_j^(m) to have been normalized in the discussion below.
We also assume that the columns of Φ_s are strongly linearly independent for s ≤ d_i, that is, the smallest singular value of Φ_s is not small but moderate for s ≤ d_i. From

the viewpoint of probability theory, this assumption holds for a basis of P_i generated randomly. The following theorem from [21] is the theoretical background for determining d_i numerically by the global Arnoldi method.

Theorem 4. Let P_1, ..., P_M be the spectral projectors associated with λ_1, ..., λ_M, and define the matrix

X_j = ( P_1 φ_j^(m) / ||P_1 φ_j^(m)||, ..., P_{i-1} φ_j^(m) / ||P_{i-1} φ_j^(m)||, P_{i+1} φ_j^(m) / ||P_{i+1} φ_j^(m)||, ..., P_M φ_j^(m) / ||P_M φ_j^(m)|| )

and

(27) g_i = min_{k ≠ i} |λ_i^(m) - λ_k|,  C_j = κ(X_j) ||I - P_i|| / g_i ≤ κ(X_j)(1 + ||P_i||) / g_i.

Then

(28) sin ∠(φ_j, φ_j^(m)) ≤ C_j r_j^(m),

where κ(X_j) is the spectral condition number of X_j and r_j^(m) is the residual defined by (23). Let σ_min(Φ_s^(m)) and σ_min(Φ_s) be the smallest singular values of the matrices Φ_s^(m) and Φ_s, respectively. Then

(29) σ_min(Φ_s^(m)) ≤ σ_min(Φ_s) + √s max_{1≤j≤s} ||φ_j - φ_j^(m)||.

In particular, if s > d_i, then

(30) σ_min(Φ_s^(m)) ≤ √s max_{1≤j≤s} ||φ_j - φ_j^(m)||
(31)              ≤ √s max_{1≤j≤s} C_j r_j^(m)

for small r_j^(m).

The relation (28) estimates the accuracy of φ_j^(m) in terms of the residual norm r_j^(m), where C_j acts as a condition number and measures the conditioning of φ_j: the bigger it is, the worse conditioned φ_j is. If one of κ(X_j) and ||P_i|| is big or the separation g_i of the approximate λ_i^(m) from the other exact eigenvalues λ_k is very small, then φ_j is ill conditioned. We can see from (31) that if C_j is comparable to or bigger than 1 / max_{1≤j≤s} r_j^(m), then σ_min(Φ_s^(m)) may not be small. Based on this theorem, we can decide whether the matrices Φ_s^(m) corresponding to the desired λ_i, i = 1, 2, ..., k, are approximately column rank deficient in the sense of (31) and thus detect the multiplicities d_i and get an approximate basis of P_i. It tells us that the Φ_s^(m), i = 1, 2, ..., k are approximately

column rank deficient for d_i < s and have full column rank for d_i ≥ s, so that the σ_min(Φ_s^(m)), i = 1, 2, ..., k are not small for d_i ≥ s. We decide d_i in the following way. Assume that r_j^(m) < tol. Then if s is the smallest integer such that

(32) σ_min(Φ_s^(m)) ≤ √s C_j max_{1≤j≤s} r_j^(m)

holds with a constant C_j significantly smaller than 1 / max_{1≤j≤s} r_j^(m), then λ_i is (s-1)-multiple and d_i = s - 1. In practice, C_j is unknown a priori. We take C_j to be considerably less than tol^{-1}, which means that the φ_j, j = 1, 2, ..., s can be quite ill conditioned. So the procedure may fail to determine d_i if C_j is comparable to or bigger than tol^{-1}; otherwise it is definitely reliable. Later numerical experiments will indeed illustrate that (31) is conservative and that our procedure can determine the multiplicities of eigenvalues for quite ill-conditioned eigenproblems with large C_j. In practice, s is given. A random V_1 will make B_s defined by (25) satisfy the rank assumption on it. With this V_1, if we compute an s numerically multiple eigenvalue, then this eigenvalue is at least s-multiple, so the determination of d_i is not actually resolved when s ≤ d_i. The following approach, taken from [20, 21], resolves this problem elegantly. Firstly, choose a random starting matrix V_1^(1) with s_1 columns. If we have found an s_1-multiple eigenvalue, then we apply Algorithm 2 with a new starting matrix V_1^(2) with s_2 columns, chosen randomly. Now we can compute an s_2 ≤ s_1 multiple eigenvalue, which is numerically equal to the one computed with s_1, V_1^(1). We then determine the rank of the matrix (Φ_{s_1}^(m), Φ_{s_2}^(m)) consisting of these s_1 + s_2 converged F-Ritz vectors associated with the numerically multiple converged F-Ritz values. Note here that when the eigenproblem of A is not too ill conditioned, if some singular values of this matrix are of the same order as the maximum of the residual norms of these s_1 + s_2 converged interior eigenpairs, then we consider them to be numerically zero.
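A toy simulation of this rank test (ours; it bypasses the Arnoldi iteration and uses perturbed exact eigenvectors as stand-ins for converged F-Ritz vectors): for an eigenvalue of multiplicity d probed with s > d vectors, the smallest singular value of Φ_s^(m) drops to the residual level, so the numerical rank reveals d:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, s = 20, 2, 3          # eigenvalue of multiplicity d, probed with s > d

# Diagonalizable A with eigenvalue 2 of multiplicity d; other eigenvalues random
X = rng.standard_normal((n, n))                  # eigenvector matrix
evals = np.concatenate([[2.0] * d, rng.uniform(-1, 1, n - d)])
A = X @ np.diag(evals) @ np.linalg.inv(X)

# Stand-ins for s converged F-Ritz vectors: random combinations of the d exact
# eigenvectors, perturbed at the residual level tol, then normalized
tol = 1e-10
Phi = X[:, :d] @ rng.standard_normal((d, s)) + tol * rng.standard_normal((n, s))
Phi = Phi / np.linalg.norm(Phi, axis=0)

# They are indeed near-eigenvectors for the eigenvalue 2
assert np.linalg.norm(A @ Phi - 2.0 * Phi) < 1e-4

sigma = np.linalg.svd(Phi, compute_uv=False)
# s > d, so the smallest singular value sits at the noise level,
# while the leading d singular values are moderate
numerical_rank = int(np.sum(sigma > 1e-6))
print(numerical_rank)       # the numerical rank equals the multiplicity d
```

The threshold 1e-6 here plays the role of √s C_j max_j r_j^(m) in (32); in the paper's procedure it would be chosen relative to tol and an a priori bound on C_j.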
If the numerical rank of the matrix is less than s_1 + s_2, then the multiplicity of this eigenvalue is just the rank of the matrix. Otherwise, we repeat Algorithm 2 with s_3 ≤ s_1 and V_1^(3), and so on, until the numerical rank of the matrix consisting of the s_1 + s_2 + ... + s_q converged F-Ritz vectors obtained by starting with s_1, V_1^(1), s_2, V_1^(2), ..., s_q, V_1^(q), respectively, is smaller than s_1 + s_2 + ... + s_q. Then the multiplicity d_i of the eigenvalue has been determined and equals the numerical rank of this matrix. In summary, we can present a global Arnoldi algorithm for multiple eigenproblems.

Algorithm 3. A global Arnoldi algorithm for multiple eigenproblems

1. Define the set S = {1, 2, ..., k}, Ψ = ∅, q = 1, and prescribe a tolerance tol.
2. Choose a starting n×s_q matrix V_1 with ||V_1||_F = 1.
3. For m = k+1, k+2, ... until convergence:
   (a) Construct the F-orthonormal basis V_1, V_2, ..., V_m by Algorithm 1.
   (b) Compute the m eigenpairs (λ_i^(m), y_i^(m)), i = 1, 2, ..., m of the resulting Hessenberg matrix H_m and use λ_i^(m), i = 1, 2, ..., k to approximate the desired λ_i, i = 1, 2, ..., k.
   (c) Test convergence of the approximate eigenpairs (λ_i^(m), φ_{ij}^(m)), i = 1, 2, ..., k; j = 1, 2, ..., s_q.
   (d) If the residual norms all drop below tol, then go to Step 4.
4. For all i ∈ S, set Φ_s^(m) = (Ψ, φ_{i1}^(m), φ_{i2}^(m), ..., φ_{i s_q}^(m)) and let s be the number of its columns.
   (a) Compute the numerical rank r_s of Φ_s^(m) for all i ∈ S;
   (b) If r_s < s, set d_i = r_s and remove i from S;
   (c) Otherwise, let Ψ = Φ_s^(m), q = q + 1 and go to Step 2.

5. AN IMPLICITLY RESTARTED GLOBAL ARNOLDI ALGORITHM

The basic global Arnoldi algorithm becomes very expensive and impractical due to excessive storage and high computational cost as m increases. So m must be limited and not too big. To make the method practical, restarting is necessary. The implicit restarting technique due to Sorensen [33] is a very successful and popular one. We show how to extend it to the global Arnoldi process and develop an implicitly restarted global Arnoldi algorithm (IRGA). Let k be a fixed specified integer, usually the number of desired eigenpairs of A, and let the number of steps be m = k + p. Consider the (k+p)-step global Arnoldi process

(33) A V_{k+p} = V_{k+p} (H_{k+p} ⊗ I_s) + r_{k+p} (e_{k+p}^H ⊗ I_s)
(34)          = (V_{k+p}, V_{k+p+1}) ( ( H_{k+p} ; β_{k+p} e_{k+p}^H ) ⊗ I_s ).

We apply shifted QR iterations to this truncated factorization of A. Let μ be a shift and H_{k+p} - μI = QR the QR factorization with Q orthogonal and R upper triangular. Then

H_{k+p} ⊗ I_s - μI = (H_{k+p} - μI) ⊗ I_s = (QR) ⊗ I_s = (Q ⊗ I_s)(R ⊗ I_s)

and

(RQ) ⊗ I_s + µI = (RQ + µI) ⊗ I_s = (Q^H H_{k+p} Q) ⊗ I_s.

Therefore, we get

(A − µI) V_{k+p} − V_{k+p}(H_{k+p} ⊗ I_s − µI) = r_{k+p}(e_{k+p}^H ⊗ I_s),
(A − µI) V_{k+p} − V_{k+p}(Q ⊗ I_s)(R ⊗ I_s) = r_{k+p}(e_{k+p}^H ⊗ I_s),
(A − µI)(V_{k+p}(Q ⊗ I_s)) − (V_{k+p}(Q ⊗ I_s))((RQ) ⊗ I_s) = r_{k+p}((e_{k+p}^H Q) ⊗ I_s),
A (V_{k+p}(Q ⊗ I_s)) − (V_{k+p}(Q ⊗ I_s))((RQ) ⊗ I_s + µI) = r_{k+p}((e_{k+p}^H Q) ⊗ I_s),

i.e.,

(35)  A V_{k+p}(Q ⊗ I_s) = (V_{k+p}(Q ⊗ I_s), V_{k+p+1}) ( [Q^H H_{k+p} Q; β_{k+p} e_{k+p}^H Q] ⊗ I_s ).

A successive application of p implicit shifts results in

(36)  A V_{k+p}^+ = (V_{k+p}^+, V_{k+p+1}) ( [H_{k+p}^+; β_{k+p} e_{k+p}^H Q̂] ⊗ I_s ),

where V_{k+p}^+ = V_{k+p}(Q̂ ⊗ I_s), H_{k+p}^+ = Q̂^H H_{k+p} Q̂ and Q̂ = Q_1 Q_2 ··· Q_p, with Q_j the orthogonal matrix associated with the shift µ_j. Now partition

V_{k+p}^+ = (V_k^+, V̂_p),   H_{k+p}^+ = [ H_k^+  M ; β̂_k e_1 e_k^H  Ĥ_p ]

and note that

β_{k+p} e_{k+p}^H Q̂ = (0, 0, ..., 0, β̃_{k+p}, b^H),

with the entry β̃_{k+p} in position k. So we get

(37)  A (V_k^+, V̂_p) = (V_k^+, V̂_p, V_{k+p+1}) ( [ H_k^+  M ; β̂_k e_1 e_k^H  Ĥ_p ; β̃_{k+p} e_k^H  b^H ] ⊗ I_s ).

Equating the first k columns on both sides of (37) gives

(38)  A V_k^+ = V_k^+(H_k^+ ⊗ I_s) + r_k^+(e_k^H ⊗ I_s),

where V_{k+1}^+ = (1/β_k^+) r_k^+, r_k^+ = V̂_p(e_1 ⊗ I_s) β̂_k + V_{k+p+1} β̃_{k+p} and β_k^+ = ‖r_k^+‖_F. Note that

tr((V_k^+)^H V̂_p(e_1 ⊗ I_s)) = 0  and  tr((V_k^+)^H V_{k+p+1}) = 0.
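For the standard case s = 1, where all the factors ⊗ I_s drop out, the whole contraction (33)-(38) can be traced in a few lines of NumPy. The sketch below is our illustration under that assumption, not the authors' implementation; it builds an m-step Arnoldi factorization, applies p exact shifts through shifted QR on H, truncates to k steps, and checks both (38) and the orthogonality of the new residual to the retained basis:

```python
import numpy as np

# One implicit-restart cycle for s = 1 (standard Arnoldi), following (33)-(38).
rng = np.random.default_rng(1)
n, k, p = 60, 4, 6
m = k + p
A = rng.standard_normal((n, n))

# Build an m-step Arnoldi factorization  A V = V H + r e_m^T.
V = np.zeros((n, m)); H = np.zeros((m, m))
v = rng.standard_normal(n); V[:, 0] = v / np.linalg.norm(v)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    if j + 1 < m:
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
r = w                       # residual of the m-step factorization

# Exact shifts: the p unwanted Ritz values (smallest magnitude here).
shifts = sorted(np.linalg.eigvals(H), key=abs)[:p]
Qhat = np.eye(m)
for mu in shifts:
    Q, _ = np.linalg.qr(H - mu * np.eye(m))
    H = Q.conj().T @ H @ Q          # H stays Hessenberg
    Qhat = Qhat @ Q

Vp = V @ Qhat                        # V^+ = V Qhat, cf. (36)
Vk, Hk = Vp[:, :k], H[:k, :k]
# r_k^+ = vhat_{k+1} * beta_hat_k + V_{k+p+1} * beta_tilde_{k+p}, cf. (38)
r_plus = Vp[:, k] * H[k, k - 1] + r * Qhat[m - 1, k - 1]

assert np.allclose(A @ Vk - Vk @ Hk, np.outer(r_plus, np.eye(k)[k - 1]))  # (38)
assert np.allclose(Vk.conj().T @ r_plus, 0, atol=1e-6)  # trace identities
```

The first assertion is the truncated factorization (38); the second is the orthogonality of the new residual to the retained basis, i.e., the trace identities above for s = 1.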

So tr((V_k^+)^H V_{k+1}^+) = 0. This is a new k-step global Arnoldi process starting with the updated V_1^+, so we do not need to restart from scratch and can extend it to an m-step one from step k + 1 upwards in the standard way. Similarly to the implicitly restarted Arnoldi algorithm (IRA) [33], we can use the p unwanted F-Ritz values as shifts, also called the exact shifts. So we have developed an implicitly restarted global Arnoldi algorithm (IRGA) with the exact shifts suggested.

Algorithm 4. IRGA with the exact shifts
1. Given the number k of desired eigenpairs, choose s, m, let p = m − k, and take V_1 = V/‖V‖_F as an n × s starting matrix.
2. Run the m-step global Arnoldi process to get [H, V, k].
3. Compute the eigenpairs of H, select k eigenvalues of H as approximations to the desired eigenvalues, and take the p unwanted eigenvalues as shifts.
4. Apply the implicit restarting approach with the p shifts, and return the updated global Arnoldi decomposition V ← VQ, H ← Q^T HQ.
5. Test convergence. If yes, stop; otherwise, go to Step 2 and extend the global Arnoldi process from step k + 1 upwards.

In order to determine the multiplicities of the desired eigenvalues, we combine Algorithm 3 with Algorithm 4 and present the following algorithm.

Algorithm 5. IRGA for multiple eigenproblems
1. Given the number k of desired eigenpairs, choose s, m and let p = m − k. Define the set S = {1, 2, ..., k}, Ψ_i = ∅, i = 1, 2, ..., k, q = 1 and prescribe a tolerance tol.
2. Take V_1 = V/‖V‖_F as an n × s_q starting matrix.
3. Run the m-step global Arnoldi process to get [H, V, k].
4. Compute the eigenpairs (λ_i^{(m)}, y_i^{(m)}), i = 1, 2, ..., m of H, select λ_i^{(m)}, i = 1, 2, ..., k as approximations to the desired eigenvalues λ_i, i = 1, 2, ..., k, and take the p unwanted λ_i^{(m)}, i = k + 1, ..., m as shifts.
5. Apply the implicit restarting approach with the p shifts, and return the updated global Arnoldi decomposition V ← VQ, H ← Q^T HQ.
6. Test convergence. If yes, go to Step 7; otherwise, go to Step 3 and extend the global Arnoldi process from step k + 1 upwards.
7.
For all i ∈ S, set Φ_i^{(m)} = (Ψ_i, φ_{i1}^{(m)}, φ_{i2}^{(m)}, ..., φ_{i s_q}^{(m)}) and let s_i be the number of its columns.
(a) Compute the numerical rank r_{s_i} of Φ_i^{(m)} for all i ∈ S;
(b) If r_{s_i} < s_i, set d_i = r_{s_i} and remove i from S;
(c) Otherwise, let Ψ_i = Φ_i^{(m)}, q = q + 1 and go to Step 2.
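The numerical-rank test in Step 7 can be sketched in a few lines. The helper below is our illustration with hypothetical names, not the paper's code: the converged F-Ritz vectors for one eigenvalue are stacked as columns, and the multiplicity is declared found once the numerical rank falls short of the column count.

```python
import numpy as np

def numerical_rank(Phi, tol=1e-8):
    """SVD-based numerical rank with a relative threshold (illustrative)."""
    sv = np.linalg.svd(Phi, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

# Toy check: lambda = 2 has multiplicity 2 in a diagonalizable A.
rng = np.random.default_rng(2)
X = np.linalg.qr(rng.standard_normal((8, 8)))[0]       # eigenvector basis
A = X @ np.diag([2, 2, 3, 4, 5, 6, 7, 8.0]) @ X.T

# Successive runs return vectors lying in the two-dimensional eigenspace
# span(X[:, :2]); any three such vectors have numerical rank 2.
Phi = X[:, :2] @ rng.standard_normal((2, 3))
assert np.allclose(A @ Phi, 2 * Phi)       # columns really are eigenvectors
assert numerical_rank(Phi) == 2            # rank < columns  =>  d_i = 2
```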

We comment that if A is real symmetric then Algorithm 4 naturally works by simplifying the global Arnoldi process into a symmetric global Lanczos process. For s = 1, Algorithm 4 mathematically reduces to the implicitly restarted Arnoldi algorithm [33], whose standard Matlab code is the function eigs.m.

6. NUMERICAL EXPERIMENTS

We present numerical examples to illustrate the efficiency and reliability of IRGA. All experiments were run on a PC with a 2.2 GHz Intel Core 2 Duo T7500 processor using MATLAB 7.1 under the Windows XP operating system, with machine precision ε ≈ 2.2 × 10^{-16}. If the relative residual norm

r_{ij}^{(m)} = ‖(A − λ_i^{(m)} I) φ_{ij}^{(m)}‖_1 / ‖A‖_1 ≤ tol

with tol a prescribed tolerance, then (λ_i^{(m)}, φ_{ij}^{(m)}) is accepted to have converged. Based on the previous analysis, we assume that C_i = max_{1≤j≤s} C_{ij} ≤ 10^3 in (32), which allows the multiple eigenproblems to be quite ill conditioned for small tol. As tol diminishes, Algorithm 5 is more applicable and works for worse conditioned multiple eigenproblems. We took random V_1's in all the examples. In all the tables below, we denote by iter the number of restarts and by Residual norms the above relative residual norms.

Example 1. This test problem is taken from the set of test matrices in netlib, and A is the n × n tridiagonal matrix with zero diagonal, superdiagonal entries 1, 2, ..., n − 1 and subdiagonal entries n − 1, n − 2, ..., 1. A has known eigenvalues and is singular if n is odd: the eigenvalues are plus and minus n − 1, n − 3, n − 5, ..., (1 or 0). The eigenvalue problem of A is highly ill conditioned: for n = 2000, the computed condition number of the eigenvector matrix is 4.5e. Moreover, the smallest spectral condition number of an individual eigenvalue is as large as 5.6e. So it is expected that it may be very difficult to compute a few largest eigenpairs of A using Arnoldi type algorithms.
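The test matrix is assumed here to have the Clement-type tridiagonal form just described (zero diagonal, superdiagonal 1, ..., n − 1, subdiagonal n − 1, ..., 1), which is consistent with the stated spectrum. A sketch, not the paper's test code, that builds it and checks the eigenvalues for a small odd n:

```python
import numpy as np

def clement(n):
    """Tridiagonal matrix with zero diagonal, superdiagonal 1..n-1 and
    subdiagonal n-1..1; eigenvalues are +-(n-1), +-(n-3), ... (illustrative)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = i + 1          # superdiagonal 1, 2, ..., n-1
        A[i + 1, i] = n - 1 - i      # subdiagonal  n-1, n-2, ..., 1
    return A

# n = 5 is odd, so 0 is an eigenvalue and the matrix is singular.
lam = np.sort(np.linalg.eigvals(clement(5)).real)
assert np.allclose(lam, [-4, -2, 0, 2, 4])
```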

We perform Algorithm 4 on the matrix A and require that the algorithm stop as soon as all actual residual norms of the approximate eigenpairs are below the prescribed tol. We want to compute the four largest eigenvalues 1999, 1997, 1995, 1993. Table 1 shows the results obtained. The left part of Figure 1 depicts the convergence curves for m = 30. We see that the algorithm used almost the same number of restarts to achieve the prescribed accuracy for different s. This justifies that the global Arnoldi algorithm has the same convergence speed as the standard Arnoldi algorithm, i.e., s = 1, as far as iter is concerned. It is also seen from the table that the bigger m is, the fewer restarts the algorithm uses and the faster it converges for exterior eigenvalues.

Fig. 1. Left: A(2000) with m = 30, s = 2; right: dw8192 with m = 20, s = 2.
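The stopping test used in this and the following examples can be sketched as follows, assuming 1-norms as in the residual formula above; the names are illustrative, not the authors' code:

```python
import numpy as np

def converged(A, lam, phi, tol):
    """Relative residual test: ||(A - lam*I) phi||_1 / ||A||_1 <= tol."""
    res = np.linalg.norm(A @ phi - lam * phi, 1) / np.linalg.norm(A, 1)
    return res <= tol

# Tiny check with an exact eigenpair of a diagonal matrix.
A = np.diag([3.0, 2.0, 1.0])
lam, phi = 3.0, np.array([1.0, 0.0, 0.0])
assert converged(A, lam, phi, 1e-8)         # exact pair: residual is zero
assert not converged(A, 2.9, phi, 1e-8)     # perturbed value fails the test
```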

Example 2. We test the unsymmetric matrix DW8192 from [3]. The stopping criterion and the notation used are as before. We compute the four largest eigenvalues in magnitude λ_1, λ_2, λ_3, λ_4, which are quite clustered but numerically distinct. So this problem may not be easy for Arnoldi type algorithms. Table 2 reports the results, where we see that the proposed algorithm solved the problem effectively and reliably. The right part of Figure 1 depicts the convergence curves for m = 20. Besides, we have observations similar to those for Example 1.

Example 3. We test the matrix OLM1000 [3], a real unsymmetric matrix. The stopping criterion as well as the notation used are as before. We want to compute the four largest eigenvalues in magnitude λ_1, λ_2, λ_3, λ_4, which are quite clustered but numerically distinct. So, like Example 1, this problem may not be easy for the proposed algorithm. Table 3 lists the results obtained. The left part of Figure 2 depicts the convergence curves for m = 30. From both the table and the figure, we see that the algorithm solved the problem effectively and reliably. Again, we have observations similar to those for Example 1.

Example 4. Consider the convection diffusion differential operator

−Δu(x, y) + ρ u_x(x, y) = λ u(x, y)

Fig. 2. Left: olm1000 with m = 30, s = 2; right: A_100 with m = 20, s = 2.

on a square region [0, 1] × [0, 1] with the boundary condition u(x, y) = 0. Taking ρ = 1 and discretizing with centered differences yields the block tridiagonal matrix A_n = tridiag(−I, B_n, −I), where B_n = tridiag(b, 4, a) is a tridiagonal matrix with a = −1 + 1/(2(n + 1)) and b = −1 − 1/(2(n + 1)), and n is the number of interior mesh points on each side of the square. A_n is of order N = n². The eigenvalues λ_2 and λ_3 are very clustered and they get closer as n increases. We test Algorithm 4 on the matrix A_100 obtained by taking n = 100. We are interested in the four eigenvalues with the largest real parts, and the stopping criterion as well as the notation used are as before. The initial V_1 were

chosen randomly in a normal distribution, and the computed eigenvalues are λ_1, λ_2, λ_3, λ_4. Table 4 lists the results obtained, and the right part of Figure 2 depicts the convergence curves for m = 20.

In what follows, we show how to use the global Arnoldi method and Algorithm 5 to solve multiple eigenproblems, determining the multiplicities d_i of λ_i, i = 1, 2, ..., k, and the associated eigenspaces. Differently from what was done in Examples 1-4, for each F-Ritz value we now compute all the s F-Ritz vectors by (19) simultaneously. Examples 5-8 report the numerical results; in the tables we list all the residual norms of the s F-Ritz pairs for each F-Ritz value used to approximate a desired eigenvalue.

Example 5. In this example, we test a multiple eigenproblem. Let B = I_2 ⊗ A, where A is the matrix presented in Example 1. The real unsymmetric matrix B has eigenvalues with multiplicity two. Algorithm 5 is run on B. As has been seen from Example 1, the eigenvalue problem of A is very highly ill conditioned and the C_{ij} are very large. We use the same stopping criterion and notation as before. Both the number of restarts and the CPU time (in seconds) are used to measure the cost of the algorithm. Table 5 reports the results for various s and m and the experiments of determining d_i for m = 30, 40, where svd(X) is the set of all the singular values of the matrix X and Φ_s^{(m)} = (Φ_{s_1}^{(m)}, ..., Φ_{s_q}^{(m)}) in all the tables. We see from Table 5 that Algorithm 5 has found λ_i, d_i, i = 1, 2, 3, 4 reliably. This is very surprising since the true C_{ij} are much larger than 1/tol. It is observed that for the same m and different s the algorithm used almost the same number of restarts, and the bigger s is, the more costly the algorithm is. However, if one is required to

determine d_i and compute a basis of P_i, then IRGA for s > 1 is preferable and advantageous over IRGA for s = 1, i.e., IRA, since it uses less CPU time. Note that for the same m, running IRGA for s > 1 once is less costly than running IRA s times. For example, we would have to run IRGA for s = 1 three times to achieve the aim, but only need to run IRGA for each of s = 1, 2 once, or IRGA for s = 3 once, while the latter is cheaper than the former.

Example 6. The matrix B = A ⊗ I_2, where A is the one presented in Example 2. This real unsymmetric matrix B has eigenvalues with multiplicity

two, and the four desired eigenvalues are clustered. We find g_1 ≈ g_2 and g_3 ≈ g_4, so C_{1j} ≈ 1/g_1, C_{2j} ≈ 1/g_2, C_{3j} ≈ 1/g_3, C_{4j} ≈ 1/g_4. The eigenvectors associated with λ_1, ..., λ_4 may be moderately ill conditioned. Algorithm 5 is run on B. The stopping criterion and the notation used are as before. Table 6 lists the results. We see from Table 6 that Algorithm 5 has found d_i, i = 1, 2, 3, 4 reliably. Other observations are similar to those for Example 5.

Example 7. We construct a matrix B = A ⊗ I_2, where A is the one presented in Example 3. The matrix B has eigenvalues with multiplicity two

and the four desired eigenvalues are clustered. We find g_1 ≈ g_2 and g_3 ≈ g_4, so C_{1j} ≈ 1/g_1, C_{2j} ≈ 1/g_2, C_{3j} ≈ 1/g_3, C_{4j} ≈ 1/g_4. It is clear that the eigenvectors associated with λ_1, ..., λ_4 are quite ill conditioned. Algorithm 5 is run on B. The stopping criterion and the notation used are as before. Table 7 reports the results. We see from Table 7 that for this ill-conditioned problem Algorithm 5 has found d_i, i = 1, 2, 3, 4 reliably. Other observations are similar to those for Example 5.

Example 8. This test matrix is B = A ⊗ I_2, where A

is the one presented in Example 4. The matrix B has eigenvalues with multiplicity two and the four desired eigenvalues are clustered. We find g_1, g_2 ≈ g_3 and g_4, so C_{1j} ≈ 1/g_1, C_{2j} ≈ 1/g_2, C_{3j} ≈ 1/g_3, C_{4j} ≈ 1/g_4. It is seen that the eigenvectors associated with λ_1, λ_4 may be moderately ill conditioned, but the ones associated with λ_2, λ_3 are definitely very ill conditioned. Algorithm 5 is run on B. The stopping criterion and the notation used are as before. Table 8 lists the results. We see from Table 8 that for this very ill-conditioned problem Algorithm 5 has found d_i, i = 1, 2, 3, 4 reliably. Other observations are similar to those for Example 5.

REFERENCES

1. W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math., 9 (1951).
2. Z. Bai, J. Demmel, J. Dongarra, A. Ruhe and H. A. van der Vorst (eds.), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia.
3. Z. Bai, R. Barrett, D. Day, J. Demmel and J. Dongarra, Test matrix collection (non-Hermitian eigenvalue problems).
4. L. Bao, Y. Lin and Y. Wei, A new projection method for solving large Sylvester equations, Appl. Numer. Math., 57 (2007).
5. M. Bellalij, K. Jbilou and H. Sadok, New convergence results on the global GMRES method for diagonalizable matrices, Technical Report, L.M.P.A.
6. F. Chatelin and D. Ho, Arnoldi-Tchebychev procedure for large scale nonsymmetric matrices, Math. Model. Numer. Anal., 24 (1990).
7. C. C. Chu, M. H. Lai and W. S. Feng, MIMO interconnects order reductions by using the multiple point adaptive-order rational global Arnoldi algorithm, IEICE Trans. Elect., E89-C (2006).
8. C. C. Chu, M. H. Lai and W. S. Feng, The multiple point global Lanczos method for multiple-inputs multiple-outputs interconnect order reduction, IEICE Trans. Elect., E89-A (2006).
9. C. C. Chu, M. H. Lai and W. S. Feng, Model-order reductions for MIMO systems using global Krylov subspace methods, Math. Comput. Simulat., to appear.
10. J. K. Cullum and W. E. Donath, A block Lanczos algorithm for computing the q algebraically largest eigenvalues and a corresponding eigenspace for large, sparse symmetric matrices, in: Proceedings of the 1974 IEEE Conference on Decision and Control, IEEE Press, Piscataway, NJ, 1974.
11. J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia.
12. G. Golub and R. Underwood, The block Lanczos method for computing eigenvalues, in: Mathematical Software III, J. Rice (ed.), Academic Press, New York, 1977.
13. C. Gu and Z. Yang, Global SCD algorithm for real positive definite linear systems with multiple right-hand sides, Appl. Math. Comput., 189 (2007).
14. M. Heyouni, The global Hessenberg and CMRH methods for linear systems with multiple right-hand sides, Numer. Algor., 26 (2001).
15. M. Heyouni and K. Jbilou, Matrix Krylov subspace methods for large scale model reduction problems, Appl. Math. Comput., 181 (2006).
16. K. Jbilou, A. Messaoudi and H. Sadok, Global FOM and GMRES algorithms for matrix equations, Appl. Numer. Math., 31 (1999).
17. K. Jbilou, An Arnoldi based algorithm for large algebraic Riccati equations, Appl. Math. Lett., 19 (2006).
18. K. Jbilou, Low rank approximate solutions to large Sylvester matrix equations, Appl. Math. Comput., 177 (2006).
19. Z. Jia, The convergence of generalized Lanczos methods for large unsymmetric eigenproblems, SIAM J. Matrix Anal. Appl., 16 (1995).
20. Z. Jia, Generalized block Lanczos methods for large unsymmetric eigenproblems, Numer. Math., 80 (1998).
21. Z. Jia, Arnoldi type algorithms for large unsymmetric multiple eigenvalue problems, J. Comput. Math., 17 (1999).
22. W. Karush, An iterative method for finding characteristic vectors of a symmetric matrix, Pacific J. Math., 1 (1951).
23. R. B. Lehoucq and K. J. Maschhoff, Implementation of an implicitly restarted block Arnoldi method, Preprint MCS-P, Argonne National Laboratory, Argonne, IL.
24. Y. Lin, Implicitly restarted global FOM and GMRES for nonsymmetric matrix equations and Sylvester equations, Appl. Math. Comput., 167 (2005).
25. C. C. Paige, The computation of eigenvalues and eigenvectors of very large sparse matrices, Ph.D. thesis, London University, London, England.
26. Y. Saad, Variations on Arnoldi's method for computing eigenelements of large unsymmetric matrices, Linear Algebra Appl., 34 (1980).
27. Y. Saad, Projection methods for solving large sparse eigenvalue problems, in: Matrix Pencils, B. Kågström and A. Ruhe (eds.), Lecture Notes in Mathematics 973, Springer, Berlin, 1983.
28. Y. Saad, Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems, Math. Comput., 42 (1984).
29. Y. Saad, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, Manchester, UK, 1992.
30. D. K. Salkuyeh, CG-type algorithms to solve symmetric matrix equations, Appl. Math. Comput., 172 (2006).
31. D. K. Salkuyeh and F. Toutounian, New approaches for solving large Sylvester equations, Appl. Math. Comput., 173 (2006).
32. V. Simoncini and E. Gallopoulos, Convergence properties of block GMRES and matrix polynomials, Linear Algebra Appl., 247 (1996).
33. D. C. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl., 13 (1992).
34. G. W. Stewart, Matrix Algorithms II: Eigensystems, SIAM, Philadelphia.

Congying Duan and Zhongxiao Jia
Department of Mathematical Sciences
Tsinghua University
Beijing, P. R. China
E-mail:



More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Workshop: Approximating energies and wave functions Quantum aspects of physical chemistry

Workshop: Approximating energies and wave functions Quantum aspects of physical chemistry Workshop: Approxmatng energes and wave functons Quantum aspects of physcal chemstry http://quantum.bu.edu/pltl/6/6.pdf Last updated Thursday, November 7, 25 7:9:5-5: Copyrght 25 Dan Dll (dan@bu.edu) Department

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros

On the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.

Lecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem. prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove

More information

NOTES ON SIMPLIFICATION OF MATRICES

NOTES ON SIMPLIFICATION OF MATRICES NOTES ON SIMPLIFICATION OF MATRICES JONATHAN LUK These notes dscuss how to smplfy an (n n) matrx In partcular, we expand on some of the materal from the textbook (wth some repetton) Part of the exposton

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information

Quantum Mechanics I - Session 4

Quantum Mechanics I - Session 4 Quantum Mechancs I - Sesson 4 Aprl 3, 05 Contents Operators Change of Bass 4 3 Egenvectors and Egenvalues 5 3. Denton....................................... 5 3. Rotaton n D....................................

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP

FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP C O L L O Q U I U M M A T H E M A T I C U M VOL. 80 1999 NO. 1 FACTORIZATION IN KRULL MONOIDS WITH INFINITE CLASS GROUP BY FLORIAN K A I N R A T H (GRAZ) Abstract. Let H be a Krull monod wth nfnte class

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

Convexity preserving interpolation by splines of arbitrary degree

Convexity preserving interpolation by splines of arbitrary degree Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete

More information

A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS

A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS Journal of Mathematcal Scences: Advances and Applcatons Volume 25, 2014, Pages 1-12 A CHARACTERIZATION OF ADDITIVE DERIVATIONS ON VON NEUMANN ALGEBRAS JIA JI, WEN ZHANG and XIAOFEI QI Department of Mathematcs

More information

The Finite Element Method: A Short Introduction

The Finite Element Method: A Short Introduction Te Fnte Element Metod: A Sort ntroducton Wat s FEM? Te Fnte Element Metod (FEM) ntroduced by engneers n late 50 s and 60 s s a numercal tecnque for solvng problems wc are descrbed by Ordnary Dfferental

More information

Generalized Linear Methods

Generalized Linear Methods Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set

More information

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,

Example: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41, The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8 U.C. Berkeley CS278: Computatonal Complexty Handout N8 Professor Luca Trevsan 2/21/2008 Notes for Lecture 8 1 Undrected Connectvty In the undrected s t connectvty problem (abbrevated ST-UCONN) we are gven

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

ANALYSIS OF PROJECTION METHODS FOR SOLVING LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES

ANALYSIS OF PROJECTION METHODS FOR SOLVING LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES SIAM J. SCI. COMPUT. c 997 Socety for Industral and Appled Mathematcs Vol. 8, No. 6, pp. 698 72, November 997 009 ANALYSIS OF PROJECTION METHODS FOR SOLVING LINEAR SYSTEMS WITH MULTIPLE RIGHT-HAND SIDES

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

The Finite Element Method

The Finite Element Method The Fnte Element Method GENERAL INTRODUCTION Read: Chapters 1 and 2 CONTENTS Engneerng and analyss Smulaton of a physcal process Examples mathematcal model development Approxmate solutons and methods of

More information

Density matrix. c α (t)φ α (q)

Density matrix. c α (t)φ α (q) Densty matrx Note: ths s supplementary materal. I strongly recommend that you read t for your own nterest. I beleve t wll help wth understandng the quantum ensembles, but t s not necessary to know t n

More information

PHYS 215C: Quantum Mechanics (Spring 2017) Problem Set 3 Solutions

PHYS 215C: Quantum Mechanics (Spring 2017) Problem Set 3 Solutions PHYS 5C: Quantum Mechancs Sprng 07 Problem Set 3 Solutons Prof. Matthew Fsher Solutons prepared by: Chatanya Murthy and James Sully June 4, 07 Please let me know f you encounter any typos n the solutons.

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

General viscosity iterative method for a sequence of quasi-nonexpansive mappings

General viscosity iterative method for a sequence of quasi-nonexpansive mappings Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,

More information

MATH Sensitivity of Eigenvalue Problems

MATH Sensitivity of Eigenvalue Problems MATH 537- Senstvty of Egenvalue Problems Prelmnares Let A be an n n matrx, and let λ be an egenvalue of A, correspondngly there are vectors x and y such that Ax = λx and y H A = λy H Then x s called A

More information

The FEAST Algorithm for Sparse Symmetric Eigenvalue Problems

The FEAST Algorithm for Sparse Symmetric Eigenvalue Problems The FEAST Algorthm for Sparse Symmetrc Egenvalue Problems Susanne Bradley Department of Computer Scence, Unversty of Brtsh Columba Abstract The FEAST algorthm s a method for computng extreme or nteror

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Lnear Algebra and ts Applcatons 4 (00) 5 56 Contents lsts avalable at ScenceDrect Lnear Algebra and ts Applcatons journal homepage: wwwelsevercom/locate/laa Notes on Hlbert and Cauchy matrces Mroslav Fedler

More information