The FEAST Algorithm for Sparse Symmetric Eigenvalue Problems


Susanne Bradley
Department of Computer Science, University of British Columbia

Abstract

The FEAST algorithm is a method for computing extreme or interior eigenvalues of a matrix or matrix pencil. Each step of this algorithm involves repeatedly solving linear systems (which are complex and generally ill-conditioned) along a contour in the complex plane. Most of the previous research done on the FEAST algorithm has used direct solvers for these systems. In this report, we consider instead using iterative solvers for FEAST. Focusing on the standard symmetric eigenvalue problem, we explore the use of MINRES, as well as GMRES with and without preconditioning. Through numerical experiments, we conclude that FEAST can be employed effectively with a preconditioned iterative solver to find eigenpairs of large, sparse matrices.

1 Introduction

Consider the generalized eigenvalue problem, which is to find a scalar λ and an n-dimensional vector x such that Ax = λBx for A, B ∈ C^{n×n}. If B is nonsingular, this is equivalent to computing an eigenpair of the matrix M := B^{-1}A. The FEAST method [14] finds all such eigenpairs that have λ in some region Γ in the complex plane. The computational backbone of FEAST is the solution of several linear systems along a contour in the complex plane. If A and B are large and sparse, the current implementation of FEAST solves these systems using the Intel MKL-PARDISO solver [15]. Despite the availability of this option, much of the previous work in the FEAST literature has focused on matrices of size 500 to 4,000 [10, 11, 22, 21]. Some exceptions are: the original FEAST paper [14], which employed FEAST on a matrix of size 12,450; the work of Güttel et al. [9], which had a maximum matrix size of 176,622; and Aktulga et al. [1], which explored FEAST for standard eigenvalue problems of dimensions as high as 327,000. To our knowledge, the only work that has explored the use of iterative solvers in FEAST is that of Krämer et al. [11], which performed empirical testing of different variants of FEAST.
In it, GMRES was employed with different tolerances on a matrix of size 1,059. The purpose of this project is to implement FEAST in Matlab with iterative linear solvers and explore what impact this has on the performance of the method. We focus primarily on the standard eigenvalue problem (i.e., B is the identity) with real symmetric A.

Email: smbrad@cs.ubc.ca

1.1 Overview of the FEAST method

We now describe the derivation of the FEAST algorithm in the case where A is Hermitian and B is Hermitian positive definite (for a derivation in the non-Hermitian case, see [21]). The main computational work of FEAST lies in applying a rational function to M that approximates the indicator function

    π(µ) = (1/(2πi)) ∮_Γ 1/(γ − µ) dγ,    (1)

where Γ is a contour in the complex plane that encloses the desired eigenvalues. Cauchy's integral theorem tells us that π(µ) = 1 for µ inside Γ, and π(µ) = 0 otherwise.

Define g(µ; z) := 1/(z − µ) for a scalar input µ and parameter z. By convention, we define the function g(M; z) for a matrix M as g(M; z) = (zI − M)^{-1}. If we assume that M is diagonalizable (as is the case with Hermitian A and B), we can write M = XΛX^{-1}, meaning that

    g(M; z) = (zI − M)^{-1} = (zXX^{-1} − XΛX^{-1})^{-1} = X(zI − Λ)^{-1}X^{-1} = X g(Λ; z) X^{-1},    (2)

where g(Λ; z) is the diagonal matrix with each diagonal entry λ of Λ replaced by g(λ; z).

Our focus is on Hermitian matrices, which have real eigenvalues. Therefore, we assume that our interval of interest is given by I := [λ−, λ+] ⊂ R. For some complex contour Γ enclosing I, we estimate the integral π(µ) by numerical quadrature. We assume we have m quadrature nodes in the upper half of the complex plane, and m nodes in the lower half. We therefore wish to construct a function of the form

    ρ_m(µ) := Σ_{i=1}^{2m} w_i/(z_i − µ),

with ρ_m(µ) ≈ π(µ). For matrix input, the function is defined as

    ρ_m(M) := Σ_{i=1}^{2m} w_i (z_i I − M)^{-1} = Σ_{i=1}^{2m} w_i (z_i B − A)^{-1} B.    (3)

From Equation (2), we see that ρ_m(M) will have the same eigenvectors x_i as M, but its eigenvalues will be given by ρ_m(λ_i), where λ_i are the original eigenvalues of M. Rather than computing ρ_m(M) directly, we postmultiply by a set of vectors Q = [q_1, q_2, ..., q_p] to compute ρ_m(M)Q. This entails solving 2m linear systems, each with multiple right-hand sides Bq_j. If ρ_m is a good approximation to the indicator function π (Equation (1)), ρ_m(M)Q approximates the application of the spectral projector to Q.
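Equation (2) can be checked directly in a few lines. The following numpy sketch is purely illustrative (the matrix M and the shift z are arbitrary choices, not tied to anything else in this report): it verifies that the resolvent (zI − M)^{-1} has the same eigenvectors as M, with eigenvalues g(λ_i; z) = 1/(z − λ_i).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
C = rng.standard_normal((n, n))
M = (C + C.T) / 2                 # real symmetric, hence diagonalizable: M = X Lam X^{-1}
lam, X = np.linalg.eigh(M)        # X is orthogonal, so X^{-1} = X^T

z = 0.3 + 0.8j                    # arbitrary complex shift off the (real) spectrum

G = np.linalg.inv(z * np.eye(n) - M)        # g(M; z) = (zI - M)^{-1}
G_eig = X @ np.diag(1.0 / (z - lam)) @ X.T  # X g(Lam; z) X^{-1}, as in Equation (2)

assert np.allclose(G, G_eig)
```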
Specifically, we will have ρ_m(M)Q ≈ X_Γ X_Γ^H B Q, where X_Γ are the (B-orthogonal) eigenvectors corresponding to the eigenvalues of M inside Γ. In FEAST, we use the vectors resulting from the approximate spectral projection in a Rayleigh-Ritz procedure, and the resulting Ritz vectors form the right-hand side vectors Q for the next iteration. Pseudocode for the FEAST method (as presented in [22]) is given in Algorithm 1. Constructing the filter function ρ_m and selecting a search space dimension p are discussed in Sections 3 and 4, respectively.

Algorithm 1: FEAST
Input: Hermitian A, h.p.d. B, interval I = [λ−, λ+], search space dimension p
Output: All eigenpairs (λ, x) of M = B^{-1}A such that λ ∈ I
1  Construct ρ_m(µ) := Σ_{i=1}^{2m} w_i/(z_i − µ) to approximate the indicator function for I;
2  Pick p random n-vectors Q(0) = [q_1, q_2, ..., q_p];
3  for k = 1, 2, ... until convergence do
4      Y(k) ← ρ_m(M) Q(k−1) (see Equation (3));
       // Rayleigh-Ritz procedure:
5      Obtain reduced p × p system: Â(k) ← Y(k)^H A Y(k),  B̂(k) ← Y(k)^H B Y(k);
6      Solve Â(k) X̂(k) = B̂(k) X̂(k) Λ̂(k) for X̂(k), Λ̂(k);
7      Set Q(k) ← Y(k) X̂(k);
8  end

2 Survey of other eigenvalue solvers

In this section we will go over some other established methods for eigenvalue problems.

2.1 Standard eigenvalue problem

To compute the entire spectrum of a (generally fairly small) real matrix, a common method is the QR algorithm [7], which computes the Schur decomposition of a given matrix. Namely, we factor A = Q^H T Q, where Q is orthonormal (or unitary, if A is complex), and T is upper triangular. Here the eigenvalues of A are given by the diagonal entries of T, and the columns of Q are called the Schur vectors of A. Sometimes, computing a full Schur factorization is too expensive, or we are not interested in the entire spectrum of a matrix. Several methods compute only a subset of the spectrum. The simplest of these is the power method [17, chapter 4]: given a nonzero initial vector v_0, the sequence A^k v_0 will eventually converge to the eigenvector corresponding to the largest-modulus eigenvalue λ_1 of A, provided that λ_1 is semi-simple. We can generalize this to subspace iterations: instead of a single vector, we consider the sequence X(k) := A^k X(0), where X(0) is a system of vectors [x_0, x_1, ..., x_m]. Under a few assumptions, the columns of X(k) converge in direction to the m dominant Schur vectors of A (see [17, chapter 5]).
For this to occur, we must force the columns of X(k) to be linearly independent; otherwise, all columns will converge to the single dominant eigenvector. This is done by computing a QR decomposition X(k) = QR at each step and setting X(k) := Q, which ensures that the vectors will be orthogonal.

Rayleigh-Ritz methods

Power methods and subspace iterations are rarely used alone: they more often form a building block for more sophisticated algorithms. A large class of methods uses the Rayleigh-Ritz procedure [17, chapter 4]: given an orthonormal basis V_s ∈ C^{n×s} that approximates some portion of the spectrum of A, we compute the eigenpairs of a smaller (s × s) matrix H := V^H A V. If (λ, x) is an eigenpair of H, then (λ, Vx) is called a Ritz pair of A. If the range of V approximates the range of the eigenvectors of interest sufficiently well, the Ritz pairs will be a good approximation of the eigenpairs of the original matrix. The best-known methods of this type are the Arnoldi and Lanczos methods, in which V_s is an orthonormal basis for the Krylov subspace K_s := span{v_0, Av_0, A²v_0, ..., A^{s−1}v_0} for some nonzero initial vector v_0. This results in an upper Hessenberg matrix H_s (or tridiagonal, in the case of the Lanczos method). In general, the Ritz pairs will approximate the extremal eigenpairs of A fairly quickly, but interior eigenpairs will converge more slowly. Some methods, for example those of Davidson [3] and Jacobi-Davidson [20], apply preconditioning to the search vectors obtained in an Arnoldi procedure, which can speed up convergence for some matrices.

Filtering methods

Computing interior eigenvalues is more challenging. The most conceptually straightforward technique is the shift-and-invert iteration, in which we perform subspace or Krylov iterations using (A − σI)^{-1} instead of A. This matrix has the same eigenvectors as A, but its eigenvalues are λ_i^new = 1/(λ_i − σ), i = 1, ..., n. Hence, the largest-magnitude eigenvalues λ_i^new of the inverted matrix will be those corresponding to λ_i near σ. We then have faster convergence of eigenpairs near σ, but at the cost of multiple linear system solves with a matrix that will generally be indefinite, and may be ill-conditioned if A has eigenvalues very close to σ. For matrices that are too large to invert, it is common to use filtering techniques that eliminate portions of the search space in the direction of unwanted eigenvectors. For example, one can replace the standard power iterations with polynomial iterations of the form z_q = p_q(A) z_0, with p_q being a degree-q polynomial constructed based on some knowledge of the spectrum of A, according to what portion of the spectrum is of interest.
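As a toy illustration of such a polynomial iteration (a sketch using an assumed 1D Laplacian test matrix, whose spectrum lies in (0, 4); this matrix is not one of the matrices used later in this report), the degree-q polynomial p_q(t) = (4 − t)^q amplifies the eigenvector components at the lower end of the spectrum, so repeated application filters out everything but the smallest eigenpair:

```python
import numpy as np

n = 20
# 1D Laplacian: real symmetric, eigenvalues in (0, 4)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

rng = np.random.default_rng(2)
v = rng.standard_normal(n)

# Apply the degree-q filter p_q(A) = (4I - A)^q with q = 800, normalizing at
# each step to avoid overflow (this is just power iteration on 4I - A)
for _ in range(800):
    v = 4 * v - A @ v
    v /= np.linalg.norm(v)

rayleigh = v @ A @ v                    # Rayleigh quotient ~ smallest eigenvalue
lam_min = np.linalg.eigvalsh(A)[0]
assert abs(rayleigh - lam_min) < 1e-8
```

The damping factor per step is (4 − λ_2)/(4 − λ_1) < 1, which is why knowledge of the spectrum (here, that it lies in (0, 4)) is needed to design the polynomial.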
Examples of this class of method include the explicitly and implicitly restarted Arnoldi and Lanczos methods [8], and the polynomial filtered Lanczos method developed in [6]. An alternative to polynomial filtering is to construct filters based on rational functions of A [9]. FEAST is an example of this class of method, as its subspace iterations use a rational function ρ that approximates the indicator function over the interval of interest [22]. Within the same group is the Contour Integral Rayleigh-Ritz (CIRR) method [19]. Like FEAST, this method uses rational functions derived from Cauchy's integral theorem to find eigenpairs in a certain interval; but CIRR instead uses these functions to approximate moment matrices, which are then used in a Rayleigh-Ritz procedure.

2.2 Generalized eigenvalue problem

Of the Rayleigh-Ritz techniques listed above for standard eigenvalue problems, only FEAST and CIRR are directly applicable to the generalized eigenvalue problem. In FEAST in the generalized case, we use numerical integration to estimate the integral

    π(M) q_j := (1/(2πi)) ∮_Γ (zB − A)^{-1} B q_j dz,

for different right-hand sides q_j. The CIRR method is similar, but instead approximates several integrals of the form

    s_k := (1/(2πi)) ∮_Γ (z − γ)^k (zB − A)^{-1} B q_j dz,    k = 0, 1, ..., m − 1,

for some γ inside Γ. Note that in FEAST we only approximate the integral s_0. For details and explanation of CIRR, we refer to [18, 19].

3 Numerical quadrature

In this section we focus on computing quadrature nodes and weights, which we will denote by z_i and w_i, respectively, so that

    ρ_m(µ) = Σ_{i=1}^{2m} w_i/(z_i − µ)    (4)

approximates the indicator function (as defined by the contour integral π(µ) in Equation (1)) over the interval [−1, 1]. We are assuming without loss of generality that the pencil (A, B) has been transformed linearly such that the wanted eigenvalues are in [−1, 1]. We derive node-weight pairs based on Gaussian and trapezoidal quadrature in Sections 3.1 and 3.2, respectively, and discuss in Section 4 how the quadrature function affects the convergence of FEAST.

Integral parameterization

For a given scaling parameter S, we define a family of ellipses Γ_S intersecting the real line at −1 and 1 by:

    Γ_S = {γ : γ = γ(θ) = (S e^{iθ} + S^{-1} e^{-iθ})/(S + S^{-1}), θ ∈ [0, 2π)}.    (5)

For S = ∞, this gives the unit circle, while for finite S it gives a vertically flattened ellipse. By variable substitution, we can then evaluate the integral π(µ) as

    π(µ) = (1/(2πi)) ∫_0^{2π} γ′(θ)/(γ(θ) − µ) dθ =: ∫_0^{2π} f_µ(θ) dθ,    (6)

where

    f_µ(θ) = (1/(2π)) [(S e^{iθ} − S^{-1} e^{-iθ})/(S + S^{-1})] / [(S e^{iθ} + S^{-1} e^{-iθ})/(S + S^{-1}) − µ].

3.1 Gaussian quadrature

As in [14], we use m Gauss quadrature nodes θ_i^{(G)} on the interval [0, π] and another m nodes θ_i^{(G)} on [π, 2π], all with corresponding quadrature weights ω_i^{(G)}. The quadrature approximation for π(µ) is then given by

    π(µ) ≈ Σ_{i=1}^{2m} ω_i^{(G)} f_µ(θ_i^{(G)}) =: ρ_m^{(G)}(µ).

We then define the mapped Gauss nodes and weights as

    z_i^{(G)} = (S e^{iθ_i^{(G)}} + S^{-1} e^{-iθ_i^{(G)}})/(S + S^{-1}),
    w_i^{(G)} = (ω_i^{(G)}/(2π)) (S e^{iθ_i^{(G)}} − S^{-1} e^{-iθ_i^{(G)}})/(S + S^{-1}),

and then write

    ρ_m^{(G)}(µ) = Σ_{i=1}^{2m} w_i^{(G)}/(z_i^{(G)} − µ).    (7)

By construction, the nodes and weights both appear in complex conjugate pairs. Specifically, for i = 1, ..., m, z_i = z̄_{m+i} and w_i = w̄_{m+i}. Note that, for real µ,

    w̄_i/(z̄_i − µ) = conj( w_i/(z_i − µ) ).

Therefore,

    w_{m+i}/(z_{m+i} − µ) = conj( w_i/(z_i − µ) ),    i = 1, ..., m,

which means we can rewrite Equation (7) as

    ρ_m^{(G)}(µ) = 2 Re( Σ_{i=1}^{m} w_i^{(G)}/(z_i^{(G)} − µ) ).    (8)

Analogously, for real symmetric matrices A, B, and a real matrix Q, we can write

    ρ_m^{(G)}(M)Q := Σ_{i=1}^{2m} w_i^{(G)} (z_i^{(G)} B − A)^{-1} B Q = 2 Re( Σ_{i=1}^{m} w_i^{(G)} (z_i^{(G)} B − A)^{-1} B Q ).

Hence, in the strictly real case, we can perform the quadrature step of FEAST using only m of the 2m quadrature nodes. This means the required number of linear system solves is now mp instead of 2mp, where p is the number of columns of Q.

3.2 Trapezoidal quadrature

The construction of the trapezoidal approximant ρ_m^{(T)} is similar to the Gaussian case, but we begin with the trapezoid rule to estimate ∫_0^{2π} f_µ(θ) dθ. We use equispaced quadrature nodes θ_i^{(T)} = π(i − 1/2)/m and equal weights ω_i^{(T)} = π/m for i = 1, ..., 2m. Defining the mapped nodes z_i^{(T)} and weights w_i^{(T)} from θ_i^{(T)} and ω_i^{(T)} in the same way as in the Gaussian case, we can now write

    ρ_m^{(T)}(µ) = Σ_{i=1}^{2m} w_i^{(T)}/(z_i^{(T)} − µ).    (9)

In this case, note that for i = 1, ..., m, z_i^{(T)} = z̄_{2m+1−i}^{(T)} and w_i^{(T)} = w̄_{2m+1−i}^{(T)}. Therefore, we can use the same reasoning as we did in the Gaussian case to write

    ρ_m^{(T)}(µ) = 2 Re( Σ_{i=1}^{m} w_i^{(T)}/(z_i^{(T)} − µ) )    (10)

for all real µ.
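The trapezoidal construction above can be sketched in a few lines (illustrative numpy code, not the report's Matlab implementation). With a large S the contour is essentially the unit circle; the resulting ρ_m^{(T)} is close to 1 inside [−1, 1] and tiny well outside it, and the code also checks the half-sum identity of Equation (10):

```python
import numpy as np

def trapezoid_nodes_weights(m, S):
    """Mapped trapezoidal nodes z_i and weights w_i on the ellipse Gamma_S."""
    i = np.arange(1, 2 * m + 1)
    theta = np.pi * (i - 0.5) / m            # equispaced angles, omega_i = pi/m
    denom = S + 1.0 / S
    z = (S * np.exp(1j * theta) + np.exp(-1j * theta) / S) / denom
    # w_i = omega_i * gamma'(theta_i) / (2*pi*1j)
    w = (S * np.exp(1j * theta) - np.exp(-1j * theta) / S) / (2 * m * denom)
    return z, w

def rho(mu, z, w):
    return np.sum(w / (z - mu))

m, S = 8, 1e8                  # S -> infinity: essentially the unit circle
z, w = trapezoid_nodes_weights(m, S)

inside = rho(0.5, z, w)        # close to 1 inside the interval
outside = rho(3.0, z, w)       # close to 0 well outside
half = 2 * np.real(np.sum(w[:m] / (z[:m] - 0.5)))   # Equation (10), upper-half nodes only

assert abs(inside.real - 1) < 1e-3 and abs(inside.imag) < 1e-9
assert abs(outside) < 1e-3
assert abs(half - inside.real) < 1e-9
```

The first m angles lie in (0, π), so the first m nodes are exactly the upper-half-plane nodes paired with their conjugates in the second half, which is what makes the 2·Re(·) half-sum valid for real µ.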

The analogous result also holds for the matrix case, assuming real symmetric A, B, and real Q.

Figure 1: Gaussian quadrature with (half) number of quadrature points m = 3, 5, 8, and shape parameters S = ∞, 2, 1.25. Top figures show quadrature nodes for m = 3, and bottom figures show the value of ρ(µ) for positive µ only (as ρ is symmetric about µ = 0). Smaller values of S result in more rapid decay outside the interval, at the expense of added oscillations inside the interval.

Figure 2: ρ_m^{(T)}(µ) for the interval (−1, 1) with S = ∞, 2, 1.25, and with (half) number of nodes equal to 3, 5, 8. Plots show the results of the integration for real µ. The function is even, so we only show positive µ values. Smaller values of S result in more rapid decay outside the interval, at the expense of added oscillations inside the interval.

4 Convergence of the FEAST method

We now describe how the choice of quadrature function ρ affects convergence of the desired eigenpairs. This will also shed light on how to select a search space dimension p for a particular FEAST instance. We ignore the presence of any errors in the computation of ρ(M)Q(k). For convergence analysis of FEAST with linear solver errors, we refer the reader to [22].

Assume that our search space has dimension p, and that the eigenvalues λ_j, j = 1, ..., n, of M are ordered such that ρ(λ_1) ≥ ρ(λ_2) ≥ ... ≥ ρ(λ_n). Let x_j denote the eigenvector corresponding to λ_j. Let Q(k) denote the subspace span(Q(k)), and let q denote the number of eigenvalues of M in the interval of interest [λ−, λ+]. The following result was proven in [22]:

Theorem 1 (Tang and Polizzi, 2014). At iteration k of FEAST with quadrature rule ρ, for each j = 1, 2, ..., q, there is a vector s_j in the subspace Q(k) such that

    ||x_j − s_j||_B ≤ α (ρ(λ_{p+1})/ρ(λ_j))^k,

where α is a constant. In particular,

    ||(I − P(k)) x_j||_B ≤ α (ρ(λ_{p+1})/ρ(λ_j))^k,

where P(k) is the B-orthogonal projector onto the space Q(k).

Note that, for the standard eigenvalue problem, the B-norm reduces to the 2-norm. In this case, Theorem 1 tells us that the distance between an eigenvector x_j and the span of the Ritz vectors is attenuated by a factor of ρ(λ_{p+1})/ρ(λ_j) after each FEAST iteration.

Selecting a search space size

From Theorem 1, we see that FEAST will converge more quickly the smaller the value of ρ(λ_{p+1}). For a given quadrature function ρ, this value will be monotonically nonincreasing as the dimension p of our search space is increased. But, because increasing p requires solving additional linear systems at each iteration, there is a trade-off between the FEAST convergence rate and the amount of work per iteration. In the original FEAST paper [14] and in [22], it was suggested that one should generally use p = 1.5q, where q is the number of eigenpairs in the search interval. (This, of course, requires an a priori estimate of q, but techniques to address this problem have been developed in [12].)
The reasoning behind this recommendation was that, when ρ(µ) is an 8-point Gaussian quadrature function over a circular contour for the interval [−1, 1], ρ(1.5) ≈ 10^{-3}. Therefore, if the spectrum of M is approximately evenly distributed, we would expect to see ||(I − P(k)) x_j||_B → 0 at a rate of 10^{-3k}.
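The order of magnitude of this filter value can be reproduced directly from the construction of Section 3.1. The following illustrative numpy sketch (with S = 10^8 standing in for the circular contour S = ∞) builds the mapped Gauss nodes and weights and evaluates Equation (8):

```python
import numpy as np

def gauss_nodes_weights(m, S):
    """Mapped Gauss nodes/weights for the contour Gamma_S (Section 3.1)."""
    x, wt = np.polynomial.legendre.leggauss(m)
    theta_up = (x + 1) * np.pi / 2                              # m Gauss nodes on [0, pi]
    theta = np.concatenate([theta_up, 2 * np.pi - theta_up])    # mirrored onto [pi, 2pi]
    omega = np.tile(wt * np.pi / 2, 2)
    denom = S + 1.0 / S
    z = (S * np.exp(1j * theta) + np.exp(-1j * theta) / S) / denom
    w = omega * (S * np.exp(1j * theta) - np.exp(-1j * theta) / S) / (2 * np.pi * denom)
    return z, w

z, w = gauss_nodes_weights(8, 1e8)       # 8-point rule, (almost) circular contour
rho = lambda mu: 2 * np.real(np.sum(w[:8] / (z[:8] - mu)))   # Equation (8)

assert abs(rho(0.0) - 1) < 1e-10         # filter value ~1 inside the interval
assert 1e-5 < abs(rho(1.5)) < 5e-2       # small at mu = 1.5, of order 10^-3
```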

However, Güttel et al. [9] later explored the use of different quadrature functions with elliptical contours, instead of just circular ones. From Figures 1 and 2, we see that for both quadrature types, functions based on elliptical contours decay more quickly outside the interval of interest, at the expense of some oscillation and early decay within the interval. We would expect, and this was verified experimentally in [9], that by using elliptical contours we can achieve a desirable convergence rate with a smaller search space than we could with circular contours.

5 FEAST with iterative linear solvers

The main work of FEAST is in the solution of the linear systems to approximate the spectral projector. For standard problems with real A, these equations are of the form

    (z_i I − A) y_{i,j} = q_j,    i = 1, ..., m,    j = 1, ..., p.    (11)

If A is large and sparse, we would like to be able to solve these equations with iterative solvers. However, the matrices will generally have eigenvalues near the contour points, making the systems ill-conditioned. Additionally, the shifts z_i are complex, meaning that the systems will be non-Hermitian even if A is symmetric.

5.1 Iterative solver candidates

For each of the p right-hand side vectors q_j, we must solve m shifted systems (with complex shifts). If we use a Krylov solver, we can solve these more efficiently, as Krylov subspaces are invariant under shifts: namely, if V_s is a basis for an s-dimensional Krylov subspace and H_s := V_s^H (−A) V_s, then V_s^H (z_i I − A) V_s = z_i I + H_s, where H_s is upper Hessenberg.

Symmetric case

When A is symmetric, H_s is tridiagonal, and therefore so is z_i I + H_s. Thus, for a given right-hand side q_j, each shifted system reduces to a tridiagonal matrix, all with the same Krylov basis vectors V_s. This means that even though our matrices are non-Hermitian, we can still formulate a three-term recurrence relation between the basis vectors v_j [16]. Hence, we can solve these systems with the MINRES method [23].
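The shift invariance used here is easy to verify numerically. In this small illustrative sketch (arbitrary symmetric matrix and complex shift, not tied to the experiments below), the Krylov basis built from A also spans the Krylov subspace of z I − A, which is why a single basis can serve all m shifted systems:

```python
import numpy as np

rng = np.random.default_rng(3)
n, s = 30, 4
C = rng.standard_normal((n, n))
A = (C + C.T) / 2
b = rng.standard_normal(n)
z = 0.7 + 0.2j                      # arbitrary complex shift

def krylov_basis(M, v, s):
    """Columns spanning span{v, Mv, ..., M^{s-1}v}, normalized at each step."""
    cols = [v / np.linalg.norm(v)]
    for _ in range(s - 1):
        w = M @ cols[-1]
        cols.append(w / np.linalg.norm(w))
    return np.column_stack(cols)

K_A = krylov_basis(A.astype(complex), b.astype(complex), s)
K_shift = krylov_basis(z * np.eye(n) - A, b.astype(complex), s)

# The shifted Krylov vectors lie in the span of the unshifted basis
Q, _ = np.linalg.qr(K_A)
residual = K_shift - Q @ (Q.conj().T @ K_shift)
assert np.linalg.norm(residual) < 1e-8
```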
Non-symmetric case

When A is non-symmetric [21], we can still use the same Krylov subspace for the shifted systems, but MINRES is no longer applicable. We can alternatively use the GMRES method. The GMRES-DR-Sh method of [4] is one approach for taking advantage of shifted systems with multiple right-hand sides, although in its original implementation it cannot be used with preconditioning.

5.2 Iterative solvers with preconditioning

Ideally, we would like to use a preconditioner P to accelerate the convergence of the linear solvers we have just described.

MINRES: In general, we can precondition MINRES as long as the preconditioner P is symmetric positive definite. However, matters are complicated for our problem because the systems are shifted by a complex value z_i. Assume we have factored P = P^{1/2} P^{1/2} (with P^{1/2} symmetric), and consider the split preconditioned system:

    P^{-1/2} (z_i I − A) P^{-1/2} (P^{1/2} y_{i,j}) = P^{-1/2} q_j.

If (z_i I − A) were real symmetric, it would follow that the iteration matrix P^{-1/2} (z_i I − A) P^{-1/2} would be real symmetric as well. However, in our case the shift z_i is complex, which means that the preconditioned matrix, while still being symmetric, will be non-Hermitian. With the complex elements of the system not necessarily restricted to a shift along the diagonal, standard MINRES may not be applicable, as the Krylov basis vectors could no longer be expressed with a three-term recurrence. There are approaches that could be used to overcome this: for example, we could use preconditioning if we used the MINRES variant for complex symmetric systems developed by Choi [2], but an investigation of this is beyond the scope of this report.

GMRES: When we do not precondition GMRES, we can easily take advantage of the shifted systems. Since the Krylov subspaces are invariant under shifts, we can construct a single Krylov basis for each right-hand side. Matters are more complicated if we wish to use preconditioning. When we precondition the shifted systems, they will no longer be shifted by the identity; thus, their Krylov spaces are no longer equivalent, and reusing information efficiently between the different systems is not as straightforward. There are preconditioned methods that can improve efficiency by taking advantage of related linear systems and multiple right-hand sides (see, for example, the work of Parks et al. [13]), but an exploration of these methods is left as a subject for future work.

6 Numerical experiments

All experiments focus on the standard eigenvalue problem with real symmetric A. We use MINRES-FEAST to denote FEAST with MINRES linear solvers, and GMRES-FEAST to denote the use of GMRES.
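The preconditioned-GMRES approach can be sketched with scipy's incomplete-LU factorization (here scipy's spilu stands in for the ILUTP preconditioner used in our experiments, and the test matrix and shift are illustrative choices, not those of Section 6). Each shifted matrix z_i I − A gets its own factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Sparse symmetric test matrix: 2D Laplacian on a 20 x 20 grid (n = 400)
N = 20
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N))
A = (sp.kron(sp.identity(N), T) + sp.kron(T, sp.identity(N))).tocsc()
n = A.shape[0]

z = 1.0 + 0.5j                        # a complex contour point near the spectrum
C = (z * sp.identity(n) - A).tocsc()  # shifted matrix: complex and non-Hermitian

# Incomplete LU of the shifted matrix, wrapped as a preconditioner M ~ C^{-1}
ilu = spilu(C, drop_tol=1e-3, fill_factor=20)
M = LinearOperator((n, n), matvec=ilu.solve, dtype=complex)

rng = np.random.default_rng(4)
b = rng.standard_normal(n).astype(complex)
x, info = gmres(C, b, M=M, restart=50, maxiter=200)

assert info == 0                      # converged
assert np.linalg.norm(C @ x - b) <= 1e-3 * np.linalg.norm(b)
```

Because the preconditioner depends on the shift, the factorization cannot be shared across the m contour points; this is the trade-off, discussed above, between preconditioning and exploiting the shifted structure.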
All code is implemented in Matlab. We use the Parallel Computing Toolbox with 4 workers in the parallel pool to solve the p different right-hand sides in Equation (11). Experiments were run on a commodity PC with a 3.60 GHz Intel Core processor and 16 GB of memory. We check for convergence of FEAST by measuring the residual norms of the computed Ritz pairs (λ, x) that have λ ∈ [λ−, λ+]. Specifically, we say the method has converged if ||Ax − λx||_2 < tol for all such eigenpairs (with ||x||_2 = 1), where tol is a user-specified residual tolerance. The following experiments all use the same fixed value of tol.

6.1 Test matrices

We begin with a brief description of the three real symmetric matrices used in the following experiments. We use the Trefethen_2000 matrix, the Andrews matrix, and the c-big matrix from the Schenk_IBMNA group, all of which were obtained from the University of Florida Sparse Matrix Collection [5]. Table 1 summarizes some properties of the matrices, and sparsity patterns are shown in Figure 3.

Table 1: Summary of the different test matrices used in our numerical experiments.

    Matrix                 Trefethen_2000      Andrews         c-big
    n                      2,000               60,000          345,241
    Nonzeros               41,906              760,154         2,340,859
    Full eigen-range       [1.12, 1.74e+04]    [0.0, 36.49]    —
    nnz(L+U), with AMD     1,699,…             …,019,…         …,569,235

6.2 GMRES-FEAST vs. MINRES-FEAST

In this experiment we explore the performance of FEAST with MINRES, GMRES, and GMRES preconditioned with ILUTP using a drop tolerance of 0.01. We use no restarting with either GMRES solver. This experiment is performed on the symmetric positive definite 2,000 × 2,000 Trefethen_2000 matrix. For our experiment, we find the q = 20 eigenpairs with λ ∈ [31.2, 113.5]. We use a search space dimension p = 26 with the same random initial subspace Q(0) for all methods. All linear solvers use the same fixed tolerance. We use 8-point Gaussian quadrature with shape parameter S = 2.0. According to Theorem 1, in the absence of linear solver errors this should give us a known convergence factor ρ(λ_{p+1})/ρ(λ_j) for the slowest-converging eigenpair (note that we cannot generally compute this value in practice, as it requires knowledge of the eigenvalues of the matrix).

Results

In all cases, FEAST took three iterations to converge. Table 2 summarizes the number of linear solver iterations and computational time required for each FEAST iteration.

Table 2: MINRES-FEAST vs. GMRES-FEAST, for finding 20 eigenpairs of the Trefethen_2000 matrix. MINRES and GMRES refer to unpreconditioned solvers, with Krylov subspaces recycled between shifted systems, and GMRES-ILUTP refers to GMRES preconditioned with ILUTP (τ = 0.01), with all systems solved independently. Solver iterations denotes the total number of linear solver iterations used for all m·p = 208 linear system solves at each FEAST iteration. Total time for GMRES-ILUTP includes factorization time.

    Solver iterations    Iteration 1    Iteration 2    Iteration 3    Total
      MINRES             166,…          …,213          36,…           …,835
      GMRES              146,…          …,305          27,…           …,777
      GMRES-ILUTP        1,487          1,513          1,509          4,509

    Time (s)
      MINRES             …              …              …              …
      GMRES              …              …              …              …
      GMRES-ILUTP        …              …              …              …

Figure 3: Sparsity patterns of the matrices (a) Trefethen_2000; (b) (upper left quarter of) Andrews; and (c) c-big.

At the first iteration, when all solvers have the same random right-hand side vectors, MINRES requires an average of 801 iterations per linear system solve. For GMRES without preconditioning, the average is 706 iterations per linear system solve; however, due to orthogonalization costs, these iterations are much more expensive, and the computational time increases substantially. We get much better results when we precondition GMRES, instead of taking advantage of the shifts as we do in the unpreconditioned version. Computing the ILUTP factorizations takes a total of 0.05 seconds for all eight systems, and the resulting factorizations L + U have between 2,500 and 2,800 nonzero entries. By preconditioning, we go from an average of 706 iterations per linear system to an average of 7 iterations.

6.3 Preconditioned GMRES-FEAST for large matrices

From the previous experiment, we saw that we gained more computational savings by preconditioning the linear systems than by exploiting symmetry (as with MINRES) or taking advantage of equivalent Krylov subspaces across shifted systems (as with MINRES and unpreconditioned GMRES). We now explore the scalability of preconditioned GMRES-FEAST by testing it on two large matrices: the Andrews matrix (size 60,000) and the c-big matrix (size 345,241). In both cases, we find 100 eigenpairs (extreme for Andrews and interior for c-big) using a random initial subspace of size p = 130. We use 8-point trapezoidal quadrature with a fixed shape parameter S. All shifted matrices are preconditioned with ILUTP and all solver tolerances are set to the same fixed value. In both cases, convergence of all desired eigenpairs is achieved in three iterations. A summary of the results is given in Table 3. Iteration counts for both the FEAST algorithm and the linear solvers suggest that our approach is effective for large matrices.

Table 3: Results of FEAST for finding 100 eigenpairs of the matrices Andrews (size 60,000) and c-big (size 345,241).

    Matrix                                    Andrews (60,000 × 60,000)    c-big (345,241 × 345,241)
    Experimental setting
      Interval [λ−, λ+]                       [0.0, 1.3]                   [45.2, 50.0]
      ILUTP drop tolerance                    5.0e−3                       1e−3
    Results
      Average nnz(L+U) (over all shifts)      2,767,412                    5,697,775
      Total ILUTP factorization time (s)      1,…                          …
      Average GMRES iterations per LS solve   …                            …
      Max GMRES iterations for any LS solve   …                            …
      FEAST iterations to convergence         3                            3
      Average time per FEAST iteration (s)    8,…                          …,578.3

7 Conclusions

In this report we explored the use of iterative solvers within the FEAST method to compute eigenpairs of sparse symmetric matrices. We conclude that using FEAST with preconditioned GMRES is a promising approach, and we demonstrated the effectiveness of the method by computing eigenpairs of matrices of dimensions up to the hundreds of thousands. There are several possible ways we might employ iterative solvers even more effectively within FEAST.
We observed in our experiments that MINRES iterations are cheap, but that without preconditioning, the iteration counts tend to be rather high. If we apply a complex symmetric version of MINRES, as was developed in [2], we could then use MINRES solvers with preconditioning.

Within FEAST, we solve a few (often eight) different systems, for several right-hand sides. Our preconditioned approach does not attempt to exploit any relationship between the different systems, and none of the methods explored here have taken advantage of the fact that each system is being solved repeatedly for multiple right-hand sides. Krylov subspace recycling methods, particularly those that can be used in conjunction with preconditioning, such as in [13], represent a promising area of future research.

In performing the numerical experiments, we observed that we require tight linear solver tolerances (generally on the order of 10^{-9} or smaller) to obtain reliable FEAST convergence. Further research is needed to determine whether there may be ways to reduce the need for very accurate linear system solves. This may also mean that, under some circumstances (such as when we find we need a very tight solver tolerance), we gain little from using iterative instead of direct solvers. A question for future research is to determine more precisely the circumstances under which a particular FEAST instance would benefit most from the use of iterative solvers.

A limitation of this work is that it does not take full advantage of one of the key benefits of FEAST, which is that FEAST is inherently parallelizable: we can parallelize the solutions of the separate linear systems (as we have done here), or we can partition the interval of interest and run FEAST on each segment in parallel (see [1, 9]), or both. We have made limited use of parallelization here, but could achieve better results with more computing resources and more sophisticated parallelization schemes. Another possible direction for future work in this area would be the use of parallel iterative solvers.

References

[1] H. M. Aktulga, L. Lin, C. Haine, E. G. Ng, and C. Yang. Parallel eigenvalue calculation based on multiple shift-invert Lanczos and contour integral based spectral projection method. Parallel Comput., 40(7), 2014.
[2] S. T. Choi. Minimal residual methods for complex symmetric, skew symmetric, and skew Hermitian systems. CoRR.
[3] M. Crouzeix, B. Philippe, and M. Sadkane. The Davidson method. SIAM J. Sci. Comput., 15(1):62-76, 1994.
[4] D. Darnell, R. B. Morgan, and W. Wilcox. Deflated GMRES for systems with multiple shifts and multiple right-hand sides. Linear Algebra and its Applications, 429(10), 2008.
[5] T. A. Davis and Y. Hu. The University of Florida sparse matrix collection. ACM Trans. Math. Softw., 38(1):1:1-1:25, 2011.
[6] H. Fang and Y. Saad. A filtered Lanczos procedure for extreme and interior eigenvalue problems. SIAM Journal on Scientific Computing, 34(4):A2220-A2246, 2012.
[7] J. G. F. Francis. The QR transformation: a unitary analogue to the LR transformation, Part 1. The Computer Journal, 4(3):265-271, 1961.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, MD, USA, 1996.
[9] S. Güttel, E. Polizzi, P. T. P. Tang, and G. Viaud. Zolotarev quadrature rules and load balancing for the FEAST eigensolver. SIAM Journal on Scientific Computing, 37(4):A2100-A2122, 2015.
[10] J. Kestyn, E. Polizzi, and P. T. P. Tang. FEAST eigensolver for non-Hermitian problems. CoRR.
[11] L. Krämer, E. Di Napoli, M. Galgon, B. Lang, and P. Bientinesi. Dissecting the FEAST algorithm for generalized eigenproblems. CoRR.
[12] E. Di Napoli, E. Polizzi, and Y. Saad. Efficient estimation of eigenvalue counts in an interval. CoRR.
[13] M. L. Parks, E. de Sturler, G. Mackey, D. D. Johnson, and S. Maiti. Recycling Krylov subspaces for sequences of linear systems. SIAM Journal on Scientific Computing, 28(5):1651-1674, 2006.
[14] E. Polizzi. A density matrix-based algorithm for solving eigenvalue problems. CoRR.
[15] E. Polizzi. FEAST eigenvalue solver v3.0: User guide. CoRR.
[16] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[17] Y. Saad. Numerical Methods for Large Eigenvalue Problems. SIAM, 2011.
[18] T. Sakurai and H. Sugiura. A projection method for generalized eigenvalue problems using numerical integration. Journal of Computational and Applied Mathematics, 159(1):119-128, 2003.
[19] T. Sakurai and H. Tadano. CIRR: a Rayleigh-Ritz type method with contour integral for generalized eigenvalue problems. Hokkaido Math. J., 36(4), 2007.
[20] G. L. G. Sleijpen and H. A. Van der Vorst. A Jacobi-Davidson iteration method for linear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 17(2):401-425, 1996.
[21] P. T. P. Tang, J. Kestyn, and E. Polizzi. A new highly parallel non-Hermitian eigensolver. In Proceedings of the High Performance Computing Symposium, HPC '14, San Diego, CA, USA, 2014. Society for Computer Simulation International.
[22] P. T. P. Tang and E. Polizzi. FEAST as a subspace iteration eigensolver accelerated by approximate spectral projection. SIAM Journal on Matrix Analysis and Applications, 35(2):354-390, 2014.
[23] H. A. Van der Vorst. Iterative Krylov Methods for Large Linear Systems. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, UK, 2003.


Developing an Improved Shift-and-Invert Arnoldi Method Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 93-9466 Vol. 5, Issue (June 00) pp. 67-80 (Prevously, Vol. 5, No. ) Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) Developng an

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

1 GSW Iterative Techniques for y = Ax

1 GSW Iterative Techniques for y = Ax 1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn

More information

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13 CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)

Some Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS) Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Relaxation Methods for Iterative Solution to Linear Systems of Equations

Relaxation Methods for Iterative Solution to Linear Systems of Equations Relaxaton Methods for Iteratve Soluton to Lnear Systems of Equatons Gerald Recktenwald Portland State Unversty Mechancal Engneerng Department gerry@pdx.edu Overvew Techncal topcs Basc Concepts Statonary

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system.

= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system. Chapter Matlab Exercses Chapter Matlab Exercses. Consder the lnear system of Example n Secton.. x x x y z y y z (a) Use the MATLAB command rref to solve the system. (b) Let A be the coeffcent matrx and

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

Feb 14: Spatial analysis of data fields

Feb 14: Spatial analysis of data fields Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s

More information

Appendix B. The Finite Difference Scheme

Appendix B. The Finite Difference Scheme 140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton

More information

DUE: WEDS FEB 21ST 2018

DUE: WEDS FEB 21ST 2018 HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant

More information

Finding The Rightmost Eigenvalues of Large Sparse Non-Symmetric Parameterized Eigenvalue Problem

Finding The Rightmost Eigenvalues of Large Sparse Non-Symmetric Parameterized Eigenvalue Problem Fndng he Rghtmost Egenvalues of Large Sparse Non-Symmetrc Parameterzed Egenvalue Problem Mnghao Wu AMSC Program mwu@math.umd.edu Advsor: Professor Howard Elman Department of Computer Scences elman@cs.umd.edu

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Limited Dependent Variables

Limited Dependent Variables Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages

More information

Chapter 12. Ordinary Differential Equation Boundary Value (BV) Problems

Chapter 12. Ordinary Differential Equation Boundary Value (BV) Problems Chapter. Ordnar Dfferental Equaton Boundar Value (BV) Problems In ths chapter we wll learn how to solve ODE boundar value problem. BV ODE s usuall gven wth x beng the ndependent space varable. p( x) q(

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

Least squares cubic splines without B-splines S.K. Lucas

Least squares cubic splines without B-splines S.K. Lucas Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

The Exact Formulation of the Inverse of the Tridiagonal Matrix for Solving the 1D Poisson Equation with the Finite Difference Method

The Exact Formulation of the Inverse of the Tridiagonal Matrix for Solving the 1D Poisson Equation with the Finite Difference Method Journal of Electromagnetc Analyss and Applcatons, 04, 6, 0-08 Publshed Onlne September 04 n ScRes. http://www.scrp.org/journal/jemaa http://dx.do.org/0.46/jemaa.04.6000 The Exact Formulaton of the Inverse

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

CSCE 790S Background Results

CSCE 790S Background Results CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each

More information

Joint Statistical Meetings - Biopharmaceutical Section

Joint Statistical Meetings - Biopharmaceutical Section Iteratve Ch-Square Test for Equvalence of Multple Treatment Groups Te-Hua Ng*, U.S. Food and Drug Admnstraton 1401 Rockvlle Pke, #200S, HFM-217, Rockvlle, MD 20852-1448 Key Words: Equvalence Testng; Actve

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Quantum Mechanics I - Session 4

Quantum Mechanics I - Session 4 Quantum Mechancs I - Sesson 4 Aprl 3, 05 Contents Operators Change of Bass 4 3 Egenvectors and Egenvalues 5 3. Denton....................................... 5 3. Rotaton n D....................................

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010 Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton

More information

Norms, Condition Numbers, Eigenvalues and Eigenvectors

Norms, Condition Numbers, Eigenvalues and Eigenvectors Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

Numerical Properties of the LLL Algorithm

Numerical Properties of the LLL Algorithm Numercal Propertes of the LLL Algorthm Frankln T. Luk a and Sanzheng Qao b a Department of Mathematcs, Hong Kong Baptst Unversty, Kowloon Tong, Hong Kong b Dept. of Computng and Software, McMaster Unv.,

More information

CHAPTER 14 GENERAL PERTURBATION THEORY

CHAPTER 14 GENERAL PERTURBATION THEORY CHAPTER 4 GENERAL PERTURBATION THEORY 4 Introducton A partcle n orbt around a pont mass or a sphercally symmetrc mass dstrbuton s movng n a gravtatonal potental of the form GM / r In ths potental t moves

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

Eigenvalues of Random Graphs

Eigenvalues of Random Graphs Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the

More information

Perron Vectors of an Irreducible Nonnegative Interval Matrix

Perron Vectors of an Irreducible Nonnegative Interval Matrix Perron Vectors of an Irreducble Nonnegatve Interval Matrx Jr Rohn August 4 2005 Abstract As s well known an rreducble nonnegatve matrx possesses a unquely determned Perron vector. As the man result of

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

[7] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Clis, New Jersey, (1962).

[7] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Clis, New Jersey, (1962). [7] R.S. Varga, Matrx Iteratve Analyss, Prentce-Hall, Englewood ls, New Jersey, (962). [8] J. Zhang, Multgrd soluton of the convecton-duson equaton wth large Reynolds number, n Prelmnary Proceedngs of

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k.

Case A. P k = Ni ( 2L i k 1 ) + (# big cells) 10d 2 P k. THE CELLULAR METHOD In ths lecture, we ntroduce the cellular method as an approach to ncdence geometry theorems lke the Szemeréd-Trotter theorem. The method was ntroduced n the paper Combnatoral complexty

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

This model contains two bonds per unit cell (one along the x-direction and the other along y). So we can rewrite the Hamiltonian as:

This model contains two bonds per unit cell (one along the x-direction and the other along y). So we can rewrite the Hamiltonian as: 1 Problem set #1 1.1. A one-band model on a square lattce Fg. 1 Consder a square lattce wth only nearest-neghbor hoppngs (as shown n the fgure above): H t, j a a j (1.1) where,j stands for nearest neghbors

More information

Representation theory and quantum mechanics tutorial Representation theory and quantum conservation laws

Representation theory and quantum mechanics tutorial Representation theory and quantum conservation laws Representaton theory and quantum mechancs tutoral Representaton theory and quantum conservaton laws Justn Campbell August 1, 2017 1 Generaltes on representaton theory 1.1 Let G GL m (R) be a real algebrac

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
