Eigenvalue Analysis of a Block Red-Black Gauss-Seidel Preconditioner Applied to the Hermite Collocation Discretization of Poisson's Equation

Stephen H. Brill
Department of Mathematics and Computer Science, Boise State University, Boise, Idaho, USA

George F. Pinder
Department of Civil and Environmental Engineering, University of Vermont, Burlington, Vermont, USA

This is a preprint of an article published in Numerical Methods for Partial Differential Equations, Vol. 17, No. 3, May 2001. © 2001 John Wiley & Sons, Inc.

Received June 2000.

This paper is concerned with the numerical solution of Poisson's equation with Dirichlet boundary conditions, defined on the unit square, discretized by Hermite collocation with uniform mesh. In [1], it was demonstrated that the Bi-CGSTAB method of van der Vorst [2] with block Red-Black Gauss-Seidel (RBGS) preconditioner is an efficient method to solve this problem. In this paper, we derive analytic formulae for the eigenvalues that control the rate at which the Bi-CGSTAB/RBGS method converges. These formulae, which depend upon the location of the collocation points, can be utilized to determine where the collocation points should be placed in order to make the Bi-CGSTAB/RBGS method converge as quickly as possible. Furthermore, using the optimal location of the collocation points can result in significant time savings for fixed accuracy and fixed problem size. © 2001 John Wiley & Sons, Inc.

Keywords: Hermite collocation, Bi-CGSTAB method, Red-Black, eigenvalue formulae

I. INTRODUCTION

The Bi-CGSTAB method of van der Vorst [2], combined with a block Red-Black Gauss-Seidel (RBGS) preconditioner, was shown in [1] to be an efficient method of solving

general linear partial differential equations in two spatial dimensions with Dirichlet and/or Neumann boundary conditions, discretized by Hermite collocation. We study herein the specific case of Hermite collocation applied to Poisson's equation with Dirichlet boundary conditions on a uniform mesh, solved by Bi-CGSTAB/RBGS.

We derive analytical formulae for the eigenvalues that control the rate at which Bi-CGSTAB/RBGS converges. Because these eigenvalues depend on the location of the collocation points, we are motivated to investigate whether the speed of convergence can be enhanced by changing the collocation point location. We find that the collocation points can be positioned in an optimal way to maximize the speed of convergence.

This paper is organized as follows. We first provide an overview of preliminary material. We then produce a lengthy analysis that culminates in formulae for the pertinent eigenvalues. This is followed by a discussion of locating the collocation points in an optimal way to accelerate the rate of convergence. Finally, numerical experiments indicate that placing the collocation points optimally results in significant savings in solving time for fixed accuracy and fixed problem size.

II. PRELIMINARY MATERIAL

Details and derivations of this introductory material can be found in [1]. We wish to solve Poisson's equation

    $\nabla^2 u = H(x, y)$    (2.1)

with Dirichlet boundary conditions, on the unit square $S = [0,1] \times [0,1]$, with a uniform mesh of $m^2$ square finite elements ($m$ must be even), discretized by Hermite collocation. Let each of these square finite elements be described in a local coordinate system as

    $0 \le x \le 1, \qquad 0 \le y \le 1.$    (2.2)

In order to define a well-posed problem, we require four collocation points per finite element. With reference to (2.2), we set these collocation points to be at coordinates $(\xi, \xi)$, $(\xi, 1-\xi)$, $(1-\xi, \xi)$, $(1-\xi, 1-\xi)$, where $0 < \xi < 1/2$. It is well known, given certain smoothness conditions, that to minimize discretization error, one chooses the collocation points within each finite element to coincide with the points of Gaussian quadrature [3]. This is equivalent to selecting $\xi = (3-\sqrt{3})/6$, from which we obtain $O(h^4)$ discretization error, where $h = 1/m$. If $\xi \ne (3-\sqrt{3})/6$, then the discretization error is $O(h^2)$. We initially use the Gaussian value of $\xi = (3-\sqrt{3})/6$ in the eigenvalue analysis that follows. Later, we will generalize our analysis to allow $\xi$ to assume any value in the interval $(0, 1/2)$.
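The Gaussian choice can be verified directly: with the element convention (2.2), the two-point Gauss-Legendre nodes on $[0,1]$ are $\xi$ and $1-\xi$ with $\xi = (3-\sqrt3)/6$, and this is the only symmetric node pair for which the two-point rule is exact through cubics. A minimal numerical check (not from the paper; plain NumPy):

```python
import numpy as np

# Two-point Gauss-Legendre nodes on [0, 1]: xi and 1 - xi with xi = (3 - sqrt(3))/6.
xi = (3 - np.sqrt(3)) / 6
nodes = np.array([xi, 1 - xi])
weights = np.array([0.5, 0.5])

# Exactness through degree 3 underlies the O(h^4) error order quoted above;
# any other symmetric node pair is exact only through degree 1, matching O(h^2).
for k in range(6):
    quad = np.sum(weights * nodes**k)
    exact = 1.0 / (k + 1)  # integral of x^k over [0, 1]
    print(k, np.isclose(quad, exact))  # True for k = 0..3, False for k = 4, 5
```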

We utilize the Red-Black numbering of equations and unknowns described in [1] to discretize (2.1) via Hermite collocation, obtaining the matrix equation

    $Ax = b$,    (2.3)

whose block structure is

$$
\left[\begin{array}{ccccc|cccc}
A_F &        &         &         &     & B_F &        &         &         \\
    & A_1    &         &         &     & C_1 & B_1    &         &         \\
    &        & \ddots  &         &     &     & \ddots & \ddots  &         \\
    &        &         & A_{m-3} &     &     &        & C_{m-4} & B_{m-3} \\
    &        &         &         & A_L &     &        &         & C_L     \\
\hline
C_F & B_1    &         &         &     & A_2 &        &         &         \\
    & \ddots & \ddots  &         &     &     & \ddots &         &         \\
    &        & C_{m-3} & B_L     &     &     &        &         & A_{m-2}
\end{array}\right]
\left[\begin{array}{c} v_F \\ v_1 \\ \vdots \\ v_{m-3} \\ v_L \\ v_2 \\ \vdots \\ v_{m-2} \end{array}\right]
=
\left[\begin{array}{c} b_F \\ b_1 \\ \vdots \\ b_{m-3} \\ b_L \\ b_2 \\ \vdots \\ b_{m-2} \end{array}\right],
\qquad (2.4)
$$

which we abbreviate

    $\begin{bmatrix} R & U \\ L & B \end{bmatrix}\begin{bmatrix} v_R \\ v_B \end{bmatrix} = \begin{bmatrix} b_R \\ b_B \end{bmatrix}$.    (2.5)

Our RBGS preconditioner is

    $P = \begin{bmatrix} R & 0 \\ L & B \end{bmatrix}$.
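Applying $P^{-1}$ never requires forming it explicitly: because $P$ is block lower triangular, a preconditioner solve is one "red" solve with $R$ followed by one "black" solve with $B$. A small self-contained sketch (random stand-in blocks rather than the collocation matrices, so shapes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
R = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # "red" diagonal block
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # "black" diagonal block
L = 0.1 * rng.standard_normal((n, n))              # red-to-black coupling

def rbgs_solve(r_rhs, b_rhs):
    """Solve P [x_R; x_B] = [r_rhs; b_rhs], P = [[R, 0], [L, B]],
    by block forward substitution."""
    x_r = np.linalg.solve(R, r_rhs)
    x_b = np.linalg.solve(B, b_rhs - L @ x_r)
    return x_r, x_b

# check against a dense solve with the assembled P
P = np.block([[R, np.zeros((n, n))], [L, B]])
rhs = rng.standard_normal(2 * n)
x_r, x_b = rbgs_solve(rhs[:n], rhs[n:])
print(np.allclose(np.concatenate([x_r, x_b]), np.linalg.solve(P, rhs)))  # True
```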

III. EIGENVALUE ANALYSIS

As reported in [4], "the rate at which preconditioned conjugate gradient methods (like Bi-CGSTAB) converge depends on the global eigenvalue distribution [of $P^{-1}A$] more than anything else." ($P$ is normally chosen so that $P^{-1}A \approx I$, where $I$ is the identity matrix of appropriate size.) We are thus motivated to find analytical formulae for the eigenvalues of $P^{-1}A$, buoyed by the knowledge that analytical formulae for the eigenvalues associated with solving (2.3) via the block Jacobi, Gauss-Seidel, and SOR (successive overrelaxation) methods were determined in [6]. Indeed, we use results reported in [6] in our discussion below, as well as using the general approach of [6] as a model for determining the eigenvalues of $P^{-1}A$.

We will use the term spectrum of a matrix to refer to the set whose entries are the eigenvalues of the matrix and denote the spectrum by $\sigma$. We thus seek $\sigma(P^{-1}A)$. At times, we will want to consider the vector whose entries are those of the set $\sigma(P^{-1}A)$. We will use the same notation, i.e., $\sigma(P^{-1}A)$, for both the vector and the set. We expect that this slight abuse of notation will not be confusing.

A. Reformulation of the Problem

To make the problem of finding eigenvalue formulae tractable, we introduce two new matrices. First, we replace matrix $A$ by the matrix $AK$, where $K$ is a diagonal matrix whose nontrivial entries are non-negative integer powers of $m$. Introduction of matrix $K$ in this regard is equivalent to implementing the scaling procedure introduced in [5] and utilized in [6] and [7].

Secondly, we note that all the blocks of matrix $A$ with numbered subscripts in (2.4) have the same size, namely $4m \times 4m$. However, the blocks with lettered subscripts have different sizes. $A_F$ and $A_L$ are $2m \times 2m$, $B_F$ and $C_L$ are $2m \times 4m$, and $B_L$ and $C_F$ are $4m \times 2m$. In order to obtain a matrix where all the blocks have the same size, a similarity transformation which permutes the rows and columns of the matrix in (2.4) is performed such that the structure of each block is altered but the overall block structure in (2.4) is maintained. The resulting matrix, which we label (3.1), retains the overall block structure of (2.4), with every nonzero block now assembled from four $2m \times 2m$ submatrices $Y_1$, $Y_2$, $Y_3$, $Y_4$. Each $Y_i$ is banded, its rows built from the four entries $a_{i,1}$, $a_{i,2}$, $a_{i,3}$, $a_{i,4}$. For any $\xi \in (0, 1/2)$, the entries $a_{i,j}$, $i, j = 1, 2, 3, 4$, are explicit polynomials in $\xi$ scaled by powers of $m$; we refer to this set of entries as (3.2). Note the symmetry $a_{i,j} = a_{j,i}$. For the Gaussian case $\xi = (3-\sqrt3)/6$, these entries reduce to those given in [6] and [7].

Now, let $W$ be the permutation matrix by which one obtains (3.1) from $A$ via $W^{-1}AW$. Recalling matrix $K$ above, let $\bar A = W^{-1}AKW$ replace $A$. Similarly, replace $P$ with $\bar P = W^{-1}PKW$. Then we obtain $\bar P^{-1}\bar A = W^{-1}K^{-1}P^{-1}AKW$. That is, $\bar P^{-1}\bar A$ is a similarity transformation of $P^{-1}A$, and we may thus perform our eigenvalue analysis on $\bar P^{-1}\bar A$.

Considering the structure of (3.1), we may abbreviate $\bar A$ by

    $\bar A = \begin{bmatrix} \bar R & \bar U \\ \bar L & \bar B \end{bmatrix}$

and $\bar P$ by

    $\bar P = \begin{bmatrix} \bar R & 0 \\ \bar L & \bar B \end{bmatrix}$.

Because $\bar P$ is a block matrix, its inverse is computable [8] as

    $\bar P^{-1} = \begin{bmatrix} \bar R^{-1} & 0 \\ -\bar B^{-1}\bar L\bar R^{-1} & \bar B^{-1} \end{bmatrix}$;

therefore

    $\bar P^{-1}\bar A = \begin{bmatrix} I & \bar R^{-1}\bar U \\ 0 & I - \bar B^{-1}\bar L\bar R^{-1}\bar U \end{bmatrix}$,

where $I$ represents the identity matrix of appropriate size. Recall we want $\bar P^{-1}\bar A$ to be "close" to $I$. This is clearly equivalent to

    $I - \bar P^{-1}\bar A = \begin{bmatrix} 0 & -\bar R^{-1}\bar U \\ 0 & \bar B^{-1}\bar L\bar R^{-1}\bar U \end{bmatrix}$

being "close" to the "null" matrix (i.e., the matrix whose entries are all zero). The null matrix has all its eigenvalues equal to zero. Since $I - \bar P^{-1}\bar A$ may be viewed as a block upper-triangular matrix, its spectrum is given by the union of the spectra of those matrices on its diagonal blocks, namely the null matrix and $J = \bar B^{-1}\bar L\bar R^{-1}\bar U$. We therefore expect the fastest convergence when the eigenvalues of $J$ are clustered near the origin of the complex plane.

Finally, we make one more change to the formulation above. We note that the eigenvalues of $\bar L\bar R^{-1}\bar U\bar B^{-1}$ and those of $J$ are identical (because the former is obtained from $J$ by a similarity transformation), and choose to perform our analysis on this matrix, which we continue to denote by $J$.
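The block computation above is easy to confirm numerically for generic blocks: the spectrum of $I - P^{-1}A$ consists of zeros together with the spectrum of $J$. A sketch with random, well-conditioned stand-in blocks (illustrative only, not the collocation matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
R = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, n))
L = 0.1 * rng.standard_normal((n, n))

A = np.block([[R, U], [L, B]])
P = np.block([[R, np.zeros((n, n))], [L, B]])

# sigma(I - P^{-1}A) should be n zeros plus sigma(J), J = B^{-1} L R^{-1} U
lhs = np.linalg.eigvals(np.eye(2 * n) - np.linalg.solve(P, A))
J = np.linalg.solve(B, L @ np.linalg.solve(R, U))
rhs = np.concatenate([np.zeros(n), np.linalg.eigvals(J)])

# multiset comparison: every eigenvalue in rhs is (numerically) present in lhs
dist = np.abs(lhs[None, :] - rhs[:, None]).min(axis=1).max()
print(dist < 1e-10)  # True
```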

B. The eigenvalues of J for the case $\xi = (3-\sqrt3)/6$

Because $\bar R$ and $\bar B$ are both block diagonal, their inverses are easily computed. Noting that the inverse of a block matrix of the form $\begin{bmatrix} Y_1 & Y_2 \\ Y_2 & Y_1 \end{bmatrix}$ is expressible in terms of $Y_1$, $Y_2$, and their inverses, we see that $\bar R^{-1}$ and $\bar B^{-1}$ retain the block structure of $\bar R$ and $\bar B$. $J$ is thus seen to be the $2m \times 2m$ matrix, block tridiagonal with $2 \times 2$ blocks, whose interior block rows follow the pattern $[\,B_0 \;\; B_1 \;\; B_2\,]$ with

    $B_0 = \begin{bmatrix} Q^2 & QS \\ 0 & 0 \end{bmatrix}$, $\quad B_1 = \begin{bmatrix} S^2 & SQ \\ SQ & S^2 \end{bmatrix}$, $\quad B_2 = \begin{bmatrix} 0 & 0 \\ QS & Q^2 \end{bmatrix}$,

and whose first and last block rows are modified accordingly; we label this matrix (3.3). (The blocks $B_0$, $B_1$, $B_2$ reappear in (3.9) below.) Here

    $S = \tfrac{1}{2}\left(Y_3Y_1^{-1} + Y_4Y_2^{-1}\right)$    (3.4)

and

    $Q = \tfrac{1}{2}\left(Y_3Y_1^{-1} - Y_4Y_2^{-1}\right)$.    (3.5)

At this point we employ the strategy of [6], namely to determine $\sigma(J)$ for the case where $S$ and $Q$ are real scalars, and then use this result to determine $\sigma(J)$ when $S$ and $Q$ are $2m \times 2m$ matrices.

To compute the eigenvalues $\mu$ and eigenvectors $z$ of $J$, we set

    $(J - \mu I)z = 0$,    (3.6)

i.e., writing $z = (z_{1,1},\, z_{1,2},\, z_{2,1},\, z_{2,2},\, \dots,\, z_{m,1},\, z_{m,2})^T$ and spelling out (3.6) row by row,

$$
\left[\begin{array}{cccccc}
S^2-\mu & QS      & SQ      & Q^2    &        &        \\
SQ      & S^2-\mu & QS      & Q^2    &        &        \\
Q^2     & QS      & S^2-\mu & SQ     &        &        \\
        &         & \ddots  & \ddots & \ddots &        \\
        &         & SQ      & Q^2    & S^2-\mu & QS
\end{array}\right]
\left[\begin{array}{c} z_{1,1} \\ z_{1,2} \\ z_{2,1} \\ \vdots \\ z_{m,2} \end{array}\right] = 0.
\qquad(3.7)
$$

To apply Theorem 8.3 in [9] (as was done successfully in [6]), we must manipulate the first and last rows in (3.7) so that each set of two rows of the resulting matrix follows the identical pattern. Doing so introduces the ghost unknowns $z_{0,1}, z_{0,2}$ and $z_{m+1,1}, z_{m+1,2}$, and the right side of the resulting system vanishes except for the two entries

    $b_1 = Q^2(z_{0,1} + z_{1,1}) + QS(z_{0,2} + z_{1,2})$ and $b_2 = Q^2(z_{m,1} + z_{m+1,1}) + QS(z_{m,2} + z_{m+1,2})$.    (3.8)

To easily apply Theorem 8.3 in [9], it is convenient to have the vector on the right side of (3.8) be identically zero. We therefore set $b_1 = b_2 = 0$, obtaining for arbitrary even $m$ the homogeneous matrix difference equations

    $B_0 z_{k-1} + (B_1 - \mu I)z_k + B_2 z_{k+1} = 0$,    (3.9)

$k = 1, 2, 3, \dots, m$, where $B_0$, $B_1$, $B_2$ are the $2 \times 2$ blocks given with (3.3) and $z_k = \begin{bmatrix} z_{k,1} \\ z_{k,2} \end{bmatrix}$, with boundary conditions

    $b_1 = Q^2(z_{0,1} + z_{1,1}) + QS(z_{0,2} + z_{1,2}) = 0$,
    $b_2 = Q^2(z_{m,1} + z_{m+1,1}) + QS(z_{m,2} + z_{m+1,2}) = 0$.    (3.10)

It is clear that the problem of determining the eigenvalues (and eigenvectors) of $J$ is equivalent to solving the boundary value problem (3.9), (3.10), which we now proceed to do. With respect to Theorem 8.3 in [9], we form the matrix polynomial $L(\lambda)$ that corresponds to (3.9), namely

    $L(\lambda) = B_2\lambda^2 + (B_1 - \mu I)\lambda + B_0 = \begin{bmatrix} (S^2-\mu)\lambda + Q^2 & SQ(1+\lambda) \\ SQ\lambda(1+\lambda) & (S^2-\mu)\lambda + Q^2\lambda^2 \end{bmatrix}$    (3.11)

and compute its determinant

    $\det(L(\lambda)) = -\lambda\left\{Q^2\mu\lambda^2 - \left[(S^2-Q^2)^2 - 2S^2\mu + \mu^2\right]\lambda + Q^2\mu\right\}$.    (3.12)

Then (from Theorem 8.3 in [9]), the general solution of (3.9) is given by

    $z_k = X_F J_F^k w$.    (3.13)

Here $(X_F, J_F)$ is a Jordan pair (see [9]) of the matrix polynomial $L(\lambda)$ in (3.11) and $w \in \mathbb{C}^n$, where $n$ is the degree (in $\lambda$) of $\det(L(\lambda))$.
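The roots of $\det(L(\lambda)) = 0$ can also be obtained numerically by the standard companion linearization of the quadratic matrix polynomial, which makes a useful cross-check on (3.12): for sample scalar values (hypothetical numbers, chosen only for illustration) one finds the root $\lambda = 0$ together with a reciprocal pair, in agreement with the sum/product relations derived below.

```python
import numpy as np
from scipy.linalg import eig

S, Q, mu = 1.3, 0.7, 0.4  # hypothetical scalar values
B0 = np.array([[Q**2, Q*S], [0.0, 0.0]])
B1 = np.array([[S**2, S*Q], [S*Q, S**2]])
B2 = np.array([[0.0, 0.0], [Q*S, Q**2]])
C = B1 - mu * np.eye(2)

# companion linearization of L(lam) = B0 + C*lam + B2*lam^2
I2, Z2 = np.eye(2), np.zeros((2, 2))
lam = eig(np.block([[Z2, I2], [-B0, -C]]),
          np.block([[I2, Z2], [Z2, B2]]), right=False)
lam = lam[np.isfinite(lam)]        # singular B2 contributes infinite eigenvalues
finite = np.sort_complex(lam)
print(finite)                      # one root at 0 and a pair lam1, lam2
nonzero = finite[np.abs(finite) > 1e-10]
print(np.prod(nonzero))            # lam1 * lam2 = 1
```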

We consider two separate cases: $\mu = 0$ and $\mu \ne 0$. If $\mu = 0$, then

    $L(\lambda) = \begin{bmatrix} S^2\lambda + Q^2 & SQ(1+\lambda) \\ SQ\lambda(1+\lambda) & S^2\lambda + Q^2\lambda^2 \end{bmatrix}$

and $\det(L(\lambda)) = (S^2 - Q^2)^2\lambda^2$. Thus the only eigenvalue of $L(\lambda)$, i.e., zero of $\det(L(\lambda))$, is $\lambda = 0$, which is a double eigenvalue. The Jordan chain (see [9]) associated with the eigenvalue $\lambda = 0$ is seen to be of length two and it thus forms a canonical set (see [9]). Its components, using the definition in [9], are easily seen to be the columns of

    $X_F = \begin{bmatrix} 1 & 1 \\ -Q/S & -S/Q \end{bmatrix}$,

while the matrix $J_F$ is

    $J_F = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$,

and $w = [w_1\ \ w_2]^T$. Applying (3.13) with $k = 0, 1, 2, \dots$ and the definitions of $X_F$ and $J_F$ given above, we see that

    $z_0 = \begin{bmatrix} z_{0,1} \\ z_{0,2} \end{bmatrix} = \begin{bmatrix} w_1 + w_2 \\ -(Q/S)w_1 - (S/Q)w_2 \end{bmatrix}$, $\quad z_1 = \begin{bmatrix} z_{1,1} \\ z_{1,2} \end{bmatrix} = \begin{bmatrix} w_2 \\ -(Q/S)w_2 \end{bmatrix}$, $\quad z_k = 0$, $k \ge 2$.    (3.14)

Now, recall the boundary condition $b_1 = Q^2(z_{0,1} + z_{1,1}) + QS(z_{0,2} + z_{1,2}) = 0$ from (3.10). Assuming that $S \ne \pm Q$ (a most reasonable assumption in light of (3.4) and (3.5)) and using the values of $z_{0,1}$, $z_{0,2}$, $z_{1,1}$, and $z_{1,2}$ from (3.14), we conclude that $w_2 = 0$. But then $z_k = 0$ for all $k$, which means that $z$ in (3.6) is a zero eigenvector, which is impermissible. The case $\mu = 0$ is therefore eliminated from consideration.

Thus $\mu \ne 0$. In this case, $\det(L(\lambda))$ is given by (3.12). Setting $\det(L(\lambda)) = 0$ to determine the eigenvalues of $L(\lambda)$ gives $\lambda = 0$ or $\lambda = \lambda_1$ or $\lambda = \lambda_2$, where $\lambda_1$ and $\lambda_2$ are obtained from the quadratic formula. Using the well-known formulae for the sum and product of the roots of a quadratic equation, we obtain

    $\lambda_1 + \lambda_2 = \dfrac{(S^2 - Q^2)^2 - 2S^2\mu + \mu^2}{Q^2\mu}$    (3.15)

and

    $\lambda_1\lambda_2 = 1$.    (3.16)

We note that the Jordan chain corresponding to the eigenvalue $\lambda = 0$ is $\begin{bmatrix} 1 \\ -Q/S \end{bmatrix}$ or, equivalently, $\begin{bmatrix} S \\ -Q \end{bmatrix}$. We now consider two separate cases, namely $\lambda_1 \ne \lambda_2$ and $\lambda_1 = \lambda_2$. If $\lambda_1 \ne \lambda_2$, then the Jordan chain corresponding to $\lambda_i$ is seen to be $\begin{bmatrix} 1 \\ \omega_i \end{bmatrix}$, $i = 1, 2$, where

    $\omega_i = -\dfrac{(S^2-\mu)\lambda_i + Q^2}{SQ(1+\lambda_i)}$.    (3.17)

So we obtain, in this case,

    $X_F = \begin{bmatrix} S & 1 & 1 \\ -Q & \omega_1 & \omega_2 \end{bmatrix}$, $\quad J_F = \begin{bmatrix} 0 & & \\ & \lambda_1 & \\ & & \lambda_2 \end{bmatrix}$,

and $w = [w_0\ \ w_1\ \ w_2]^T$. We now find $w$ to satisfy the boundary conditions (3.10). Using (3.13) for $k = 0, 1, m, m+1$ together with (3.10), we obtain the equation

    $E\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = 0$,

where

    $E = \begin{bmatrix} Q(\omega_1+\lambda_1)+S(1+\omega_1\lambda_1) & Q(\omega_2+\lambda_2)+S(1+\omega_2\lambda_2) \\ \lambda_1^m\left[Q(\omega_1+\lambda_1)+S(1+\omega_1\lambda_1)\right] & \lambda_2^m\left[Q(\omega_2+\lambda_2)+S(1+\omega_2\lambda_2)\right] \end{bmatrix}$.

If $E$ is nonsingular, then $w_1 = w_2 = 0$, which implies (from (3.13)) that $z_k = 0$ for all $k$. Thus $z$ in (3.6) is a zero eigenvector, which is impermissible. Therefore, $E$ is singular, so its determinant is zero, which leads to the equation

    $(\lambda_1^m - \lambda_2^m)\left[Q(\omega_1+\lambda_1)+S(1+\omega_1\lambda_1)\right]\left[Q(\omega_2+\lambda_2)+S(1+\omega_2\lambda_2)\right] = 0$.    (3.18)

If $Q(\omega_i + \lambda_i) + S(1 + \omega_i\lambda_i) = 0$, $i = 1, 2$, then using (3.17) and solving for $\lambda_i$ yields

    $\lambda_i = \dfrac{(S+Q)(S-Q)^2 - S\mu}{Q\mu}$,    (3.19)

$i = 1, 2$. Since $\lambda_i$ (which is non-zero) is a solution of $\det(L(\lambda)) = 0$, we can conclude from (3.12) that

    $Q^2\mu\lambda_i^2 - \left[(S^2-Q^2)^2 - 2S^2\mu + \mu^2\right]\lambda_i + Q^2\mu = 0$.    (3.20)

Substitution of (3.19) into (3.20) yields a cubic equation in $\mu$, whose solutions are $\mu = (S-Q)(S+Q)$ or $\mu = (S-Q)^2$, where the former solution is of multiplicity two. If $\mu = (S-Q)^2$, then (3.19) reduces to $\lambda_1 = \lambda_2 = 1$, which contradicts the assumption $\lambda_1 \ne \lambda_2$. If, on the other hand, $\mu = (S-Q)(S+Q)$, then (3.19) reduces to $\lambda_1 = \lambda_2 = -1$, which also contradicts the assumption $\lambda_1 \ne \lambda_2$. Thus, with respect to (3.18), we obtain $\lambda_1^m - \lambda_2^m = 0$. Recalling (3.16) and the assumption $\lambda_1 \ne \lambda_2$, we conclude

    $\lambda_1 = e^{i\theta}$    (3.21)

and

    $\lambda_2 = e^{-i\theta}$,    (3.22)

where $\theta = k\pi/m$, $k = 1, 2, 3, \dots, m-1$. If we now add the equation obtained by substituting (3.21) into (3.20) to the equation obtained by substituting (3.22) into (3.20), we obtain a quadratic equation in $\mu$ whose coefficients are all real. Solving this equation for $\mu$ yields

    $\mu = S^2 + Q^2\cos\theta \pm Q\sqrt{2S^2(1+\cos\theta) - Q^2\sin^2\theta}$,    (3.23)

$\theta = k\pi/m$, $k = 1, 2, 3, \dots, m-1$.

What we have shown so far is this: if $S$ and $Q$ are scalars defining the $2m \times 2m$ matrix $J$ in (3.3), then $2m-2$ of the $2m$ eigenvalues of $J$ are given by (3.23). The remaining two eigenvalues of $J$ arise from the case $\lambda_1 = \lambda_2$ and will be determined below.

If $\lambda_1 = \lambda_2$, then (3.15) becomes

    $2\lambda_{1,2}\,Q^2\mu = (S^2-Q^2)^2 - 2S^2\mu + \mu^2$,    (3.24)

where $\lambda_{1,2} = \lambda_1 = \lambda_2$. By (3.16), we must have $\lambda_1 = \lambda_2 = 1$ or $\lambda_1 = \lambda_2 = -1$. If $\lambda_1 = \lambda_2 = 1$, then solving (3.24) for $\mu$ yields $\mu = (S \pm Q)^2$. We now consider these two cases separately.

If $\mu = (S+Q)^2$, then, using the definition of Jordan pair in [9], we obtain

    $X_F = \begin{bmatrix} S & 1 & 0 \\ -Q & 1 & \frac{S+Q}{2S} \end{bmatrix}$, $\quad J_F = \begin{bmatrix} 0 & & \\ & 1 & 1 \\ & & 1 \end{bmatrix}$,

and $w = [w_0\ \ w_1\ \ w_2]^T$. Using (3.13) with these values and (3.10), we obtain a homogeneous $2 \times 2$ linear system for $w_1$ and $w_2$ whose coefficient matrix is built from $S$, $Q$, and $m$. Unless $S = 0$, this matrix is nonsingular, which implies $w_1 = w_2 = 0$. Using (3.13), we see that $z_k = 0$ for all $k$, which implies that $z$ is a zero eigenvector, which is impermissible. We thus eliminate the possibility $\mu = (S+Q)^2$.

If $\mu = (S-Q)^2$, then, again using the definition of Jordan pair from [9], we obtain

    $X_F = \begin{bmatrix} S & 1 & 0 \\ -Q & -1 & -\frac{S-Q}{2S} \end{bmatrix}$, $\quad J_F = \begin{bmatrix} 0 & & \\ & 1 & 1 \\ & & 1 \end{bmatrix}$,

and $w = [w_0\ \ w_1\ \ w_2]^T$. Using (3.13) with these values and (3.10), we find that $w_2 = 0$ and that $w_1 \ne 0$ is arbitrary. We conclude that $\lambda = 1$ provides the eigenvalue $\mu = (S-Q)^2$ of $J$ with corresponding eigenvector $[1, -1, 1, -1, \dots]^T$.

If $\lambda_1 = \lambda_2 = -1$, then solving (3.24) for $\mu$ yields the solution of multiplicity two $\mu = (S-Q)(S+Q)$. Once more using the definition of Jordan pair in [9], we obtain for this case

    $X_F = \begin{bmatrix} S & 1 & 0 \\ -Q & 0 & 1 \end{bmatrix}$, $\quad J_F = \begin{bmatrix} 0 & & \\ & -1 & \\ & & -1 \end{bmatrix}$,

and $w = [w_0\ \ w_1\ \ w_2]^T$. Using (3.13) with these values and (3.10), we find that $w_1 = w_2$ is arbitrary but non-zero. We thus obtain eigenvalue $\mu = (S-Q)(S+Q)$ and corresponding eigenvector $[1, 1, -1, -1, \dots]^T$ of $J$ from $\lambda = -1$. We have therefore proved

Lemma 3.1. Let $J$ be the $2m \times 2m$ matrix defined in (3.3) by the real scalars $S$ and $Q$. Then the eigenvalues $\mu$ of $J$ are given by

    $\mu = (S-Q)(S+Q)$,
    $\mu = (S-Q)^2$,
    $\mu = S^2 + Q^2\cos\theta \pm Q\sqrt{2S^2(1+\cos\theta) - Q^2\sin^2\theta}$,

where $\theta = k\pi/m$, $k = 1, 2, 3, \dots, m-1$.

Before we consider the general case where $S$ and $Q$ are matrices, we require another lemma:

Lemma 3.2. Let $S$ and $Q$ be as defined in (3.4) and (3.5), respectively. Then $SQ = QS$.

Proof. Using Lemma 5.1 in [6], we are given the existence of a nonsingular matrix $X$ and explicit diagonal matrices $D$ and $\bar D$ such that

    $X^T\left(Y_4Y_2^{-1}\right)X^{-T} = D$    (3.25)

and

    $X^T\left(Y_3Y_1^{-1}\right)X^{-T} = \bar D$.    (3.26)

Thus $Y_4Y_2^{-1}$ and $Y_3Y_1^{-1}$ have the same complete set of eigenvectors and must therefore commute [10], i.e.,

    $Y_4Y_2^{-1}\,Y_3Y_1^{-1} = Y_3Y_1^{-1}\,Y_4Y_2^{-1}$.    (3.27)

But this is precisely the condition that is required to show that $SQ = QS$.
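The commuting argument in the proof is elementary but worth seeing concretely: two matrices diagonalized by the same complete set of eigenvectors always commute. A quick numerical illustration with arbitrary stand-in data (not the collocation $Y_i$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned eigenvectors
D1 = np.diag(rng.standard_normal(n))
D2 = np.diag(rng.standard_normal(n))

# two matrices sharing the complete eigenvector set encoded by X^T,
# in the spirit of (3.25)-(3.26): Y = X^{-T} D X^{T}
Y4 = np.linalg.solve(X.T, D1 @ X.T)
Y3 = np.linalg.solve(X.T, D2 @ X.T)
print(np.allclose(Y4 @ Y3, Y3 @ Y4))  # True: they commute
```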

Now that we have established that $S$ and $Q$ commute, we see that we can use the analysis culminating in Lemma 3.1 to determine the eigenvalues of $J$ for the case where $S$ and $Q$ are defined as in (3.4) and (3.5). Let us begin by recalling (3.11):

    $L(\lambda) = \begin{bmatrix} (S^2-\mu I)\lambda + Q^2 & SQ(1+\lambda) \\ SQ\lambda(1+\lambda) & (S^2-\mu I)\lambda + Q^2\lambda^2 \end{bmatrix}$,

where we are making use of the fact that $S$ and $Q$ are commuting matrices. As above, let us now consider

    $0 = \det(L(\lambda))$.    (3.28)

From our work above, we know the solutions of (3.28), namely $\lambda = 0$, $\lambda = \pm 1$, and $\lambda = e^{\pm i\theta_k}$, $k = 1, 2, \dots, m-1$. We now exploit this knowledge to compute the eigenvalues $\mu$ of $J$. If we use the values $\lambda = e^{i\theta_k}$, $\theta_k = k\pi/m$, $k = 1, 2, \dots, m-1$, in (3.28), we can easily show that

    $0 = \det\begin{bmatrix} S^2 + Q^2e^{i\theta_k} - \mu I & SQ(1+e^{i\theta_k}) \\ SQ(1+e^{-i\theta_k}) & S^2 + Q^2e^{-i\theta_k} - \mu I \end{bmatrix}$.    (3.29)

Finding the values $\mu$ that satisfy (3.29) is equivalent to finding the eigenvalues $\mu$ of the matrix

    $M_k = \begin{bmatrix} S^2 + Q^2e^{i\theta_k} & SQ(1+e^{i\theta_k}) \\ SQ(1+e^{-i\theta_k}) & S^2 + Q^2e^{-i\theta_k} \end{bmatrix}$.

To eliminate the complex numbers (i.e., the $e^{\pm i\theta_k}$'s), we perform the similarity transformation $T_k = R_k^{-1}M_kR_k$, where $R_k$ is an explicit $2 \times 2$ block matrix built from $Q$, $SQ$, and $\sin\theta_k$. We thus seek the eigenvalues $\mu$ of

    $T_k = \begin{bmatrix} (S-Q)(S - Q\cos\theta_k) & (S-Q)Q\sin\theta_k \\ (S+Q)Q\sin\theta_k & (S+Q)(S + Q\cos\theta_k) \end{bmatrix}$,

$\theta_k = k\pi/m$, $k = 1, 2, \dots, m-1$. The characterization of all the eigenvalues $\mu$ of $J$ is found in the following lemma, the proof of which is, except for the obvious notational differences, analogous to that given in Lemma 4.1 in [6]:

Lemma 3.3. Let $J$ be the matrix defined in (3.3) by the matrices $S$ and $Q$ defined in (3.4) and (3.5). Then

    $\sigma(J) = \bigcup_{k=1}^{m-1}\sigma(T_k)\ \cup\ \sigma\!\left(S^2 - Q^2\right)\ \cup\ \sigma\!\left((S-Q)^2\right)$.    (3.30)

Now that we have characterized the spectrum of $J$ as the union of spectra of other matrices, we now determine these latter spectra, namely those of $T_k$, $k = 1, 2, \dots, m-1$; $S^2 - Q^2$; and $(S-Q)^2$. We first make some observations and definitions. Let $\tilde Y_3 = Y_3Y_1^{-1}$ and let $\tilde Y_4 = Y_4Y_2^{-1}$. With respect to (3.4) and (3.5), we see that

    $S + Q = \tilde Y_3$ and $S - Q = \tilde Y_4$.

Using these definitions, trigonometric identities, and (3.27), we can show

    $T_k = \begin{bmatrix} \tilde Y_4\tilde Y_3\sin^2\frac{\theta_k}{2} + \tilde Y_4^2\cos^2\frac{\theta_k}{2} & \frac{1}{2}\tilde Y_4(\tilde Y_3-\tilde Y_4)\sin\theta_k \\ \frac{1}{2}\tilde Y_3(\tilde Y_3-\tilde Y_4)\sin\theta_k & \tilde Y_4\tilde Y_3\sin^2\frac{\theta_k}{2} + \tilde Y_3^2\cos^2\frac{\theta_k}{2} \end{bmatrix}$.

We now use (3.25) and (3.26) to compute the similarity transformation

    $\begin{bmatrix} X^T & \\ & X^T \end{bmatrix} T_k \begin{bmatrix} X^{-T} & \\ & X^{-T} \end{bmatrix} = \begin{bmatrix} D\bar D\sin^2\frac{\theta_k}{2} + D^2\cos^2\frac{\theta_k}{2} & \frac{1}{2}D(\bar D - D)\sin\theta_k \\ \frac{1}{2}\bar D(\bar D - D)\sin\theta_k & D\bar D\sin^2\frac{\theta_k}{2} + \bar D^2\cos^2\frac{\theta_k}{2} \end{bmatrix}$,    (3.31)

where $D$ and $\bar D$ are the diagonal matrices given explicitly in Lemma 5.1 of [6]. We write

    $D = \mathrm{diag}(d_1, d_2, \dots, d_{2m})$ and $\bar D = \mathrm{diag}(\bar d_1, \bar d_2, \dots, \bar d_{2m})$.    (3.32)
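Before block-diagonalizing (3.31), note that the $T_k$ route is easy to sanity-check in the scalar case, where $\sigma(T_k)$ must reproduce the pair (3.23). A short numerical check with hypothetical scalar values:

```python
import numpy as np

S, Q, theta = 1.2, 0.5, np.pi / 5  # hypothetical scalar values
Tk = np.array([[(S - Q) * (S - Q * np.cos(theta)), (S - Q) * Q * np.sin(theta)],
               [(S + Q) * Q * np.sin(theta), (S + Q) * (S + Q * np.cos(theta))]])
direct = np.sort(np.linalg.eigvals(Tk).real)

rad = np.sqrt(2 * S**2 * (1 + np.cos(theta)) - Q**2 * np.sin(theta)**2)
formula = np.sort([S**2 + Q**2 * np.cos(theta) - Q * rad,
                   S**2 + Q**2 * np.cos(theta) + Q * rad])
print(np.allclose(direct, formula))  # True: sigma(T_k) matches (3.23)
```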

If we then permute the rows and columns of (3.31) in an obvious way, we find that $T_k$ is similar to $\mathrm{diag}(D_{k,1}, D_{k,2}, \dots, D_{k,2m})$, where

    $D_{k,i} = \begin{bmatrix} d_i\bar d_i\sin^2\frac{\theta_k}{2} + d_i^2\cos^2\frac{\theta_k}{2} & \frac{1}{2}d_i(\bar d_i - d_i)\sin\theta_k \\ \frac{1}{2}\bar d_i(\bar d_i - d_i)\sin\theta_k & d_i\bar d_i\sin^2\frac{\theta_k}{2} + \bar d_i^2\cos^2\frac{\theta_k}{2} \end{bmatrix}$.

Since each $D_{k,i}$ is a $2 \times 2$ matrix, the eigenvalues of each are easily determined:

    $\sigma(D_{k,i}) = \left\{\mu : \mu = \dfrac{\Lambda_{k,i} + 2d_i\bar d_i \pm \sqrt{\Lambda_{k,i}^2 + 4d_i\bar d_i\Lambda_{k,i}}}{2}\right\}$,

where $\Lambda_{k,i} = \frac{1}{2}(d_i - \bar d_i)^2(1 + \cos\theta_k)$, $\theta_k = k\pi/m$, $k = 1, 2, \dots, m-1$, $i = 1, 2, \dots, 2m$.

To find $\sigma(S^2 - Q^2)$, we note that $S^2 - Q^2 = \tilde Y_4\tilde Y_3$ and $X^T\tilde Y_4\tilde Y_3X^{-T} = D\bar D$, which is a diagonal matrix. Therefore,

    $\sigma(S^2 - Q^2) = \left\{d_i\bar d_i\right\}_{i=1}^{2m}$.

To find $\sigma((S-Q)^2)$, we note that $(S-Q)^2 = \tilde Y_4^2$, which is similar to $D^2$, which is a diagonal matrix. Therefore, $\sigma((S-Q)^2) = \left\{d_i^2\right\}_{i=1}^{2m}$.

We may now state:

Theorem 3.4. Let $J$ be the matrix defined in (3.3) with $S$ and $Q$ defined as in (3.4) and (3.5), respectively. Let $\xi = (3-\sqrt3)/6$ in (3.2). Then

    $\sigma(J) = \left\{\mu : \mu = d_i^2\right\}\ \cup\ \left\{\mu : \mu = d_i\bar d_i\right\}\ \cup\ \left\{\mu : \mu = \dfrac{\Lambda_{k,i} + 2d_i\bar d_i \pm \sqrt{\Lambda_{k,i}^2 + 4d_i\bar d_i\Lambda_{k,i}}}{2}\right\}$,

where $\Lambda_{k,i} = \frac{1}{2}(d_i - \bar d_i)^2(1 + \cos\theta_k)$, $\theta_k = k\pi/m$, $k = 1, 2, \dots, m-1$, $i = 1, 2, \dots, 2m$, and where [6]

    $\{d_i\}_{i=1}^{2m} = \left\{\alpha_0^+, \alpha_1^+, \alpha_1^-, \alpha_2^+, \alpha_2^-, \dots, \alpha_{m-1}^+, \alpha_{m-1}^-, \alpha_m^+\right\}$,
    $\{\bar d_i\}_{i=1}^{2m} = \left\{\beta_0^-, \beta_1^-, \beta_1^+, \beta_2^-, \beta_2^+, \dots, \beta_{m-1}^-, \beta_{m-1}^+, \beta_m^-\right\}$,

with $\alpha_j^\pm$ and $\beta_j^\pm$ explicit algebraic functions (involving $\sqrt3$ and auxiliary radicals $q_j$) of $\cos\varphi_j$, $\varphi_j = j\pi/m$, $j = 0, 1, \dots, m$, as given in Lemma 5.1 of [6].
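Given the diagonal values $d_i$ and $\bar d_i$, assembling $\sigma(J)$ from Theorem 3.4 is mechanical. The sketch below does exactly that for stand-in numbers (the paper obtains the $d_i$, $\bar d_i$ analytically); it also exhibits the eigenvalue count, $4m^2$ in total when $\{d_i\}$ has $2m$ entries:

```python
import numpy as np

def sigma_J(d, dbar, m):
    """Assemble sigma(J) per Theorem 3.4 from given diagonals d_i, dbar_i
    (stand-in values below; the paper supplies these analytically)."""
    d, dbar = np.asarray(d, float), np.asarray(dbar, float)
    eigs = list(d**2) + list(d * dbar)
    for k in range(1, m):  # theta_k = k*pi/m, k = 1, ..., m-1
        lam = 0.5 * (d - dbar)**2 * (1 + np.cos(k * np.pi / m))
        root = np.sqrt(lam**2 + 4 * d * dbar * lam + 0j)  # may be complex
        eigs += list((lam + 2 * d * dbar + root) / 2)
        eigs += list((lam + 2 * d * dbar - root) / 2)
    return np.array(eigs)

spec = sigma_J([0.9, 1.1, 0.7, 1.3], [0.8, 1.0, 0.6, 1.2], m=2)
print(spec.size)  # 16 = 4 m^2 eigenvalues for m = 2 (2m = 4 diagonal values)
```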

C. The eigenvalues of J for the case of general $\xi \in (0, 1/2)$

The result in Theorem 3.4 is for the case $\xi = (3-\sqrt3)/6$. If we seek an analogous result for arbitrary $\xi \in (0, 1/2)$, we note that we must consider only three things. The first (and obvious) one is that the entries of matrix $A$ are now given by (3.2) for arbitrary $\xi$ (as opposed to the specific value $\xi = (3-\sqrt3)/6$). The second is to check whether we obtain the result $w_1 = w_2$, where $w_1$ and $w_2$ are given in Lemmas 5.1 and 5.2 in [6], which a long and tedious calculation does indeed confirm. The third is to determine how $\alpha_j^\pm$ associates with $\beta_j^\pm$ (see Lemma 5.1 in [6]). In this case, another long and tedious calculation provides that $\alpha_j^+$ associates with $\beta_j^-$ and that $\alpha_j^-$ associates with $\beta_j^+$ for all values of $\xi \in (0, 1/2)$. We can therefore state, for arbitrary $\xi \in (0, 1/2)$, the result analogous to Theorem 3.4:

Theorem 3.5. Let $J$ be the matrix defined in (3.3) with $S$ and $Q$ defined as in (3.4) and (3.5), respectively. Let $\xi$ in (3.2) be an arbitrary number in the interval $(0, 1/2)$. Then

    $\sigma(J) = \left\{\mu : \mu = d_i^2\right\}\ \cup\ \left\{\mu : \mu = d_i\bar d_i\right\}\ \cup\ \left\{\mu : \mu = \dfrac{\Lambda_{k,i} + 2d_i\bar d_i \pm \sqrt{\Lambda_{k,i}^2 + 4d_i\bar d_i\Lambda_{k,i}}}{2}\right\}$,

where $\Lambda_{k,i} = \frac{1}{2}(d_i - \bar d_i)^2(1 + \cos\theta_k)$, $\theta_k = k\pi/m$, $k = 1, 2, \dots, m-1$, $i = 1, 2, \dots, 2m$; $c_j = \cos\varphi_j$, $\varphi_j = j\pi/m$, $j = 0, 1, \dots, m$;

    $\{d_i\}_{i=1}^{2m} = \left\{\alpha_0^+, \alpha_1^+, \alpha_1^-, \dots, \alpha_{m-1}^+, \alpha_{m-1}^-, \alpha_m^+\right\}$,
    $\{\bar d_i\}_{i=1}^{2m} = \left\{\beta_0^-, \beta_1^-, \beta_1^+, \dots, \beta_{m-1}^-, \beta_{m-1}^+, \beta_m^-\right\}$,

and where $\alpha_j^\pm$, $\beta_j^\pm$, and the auxiliary radicals $q_j$ are explicit (if lengthy) polynomial and rational expressions in $\xi$ and $c_j$ that reduce to the expressions of Theorem 3.4 when $\xi = (3-\sqrt3)/6$.

IV. SHIFTING THE COLLOCATION POINTS TO MINIMIZE $\|\sigma(I - P^{-1}A)\|_2$

In this section we investigate the possibility of increasing the speed at which the Bi-CGSTAB/RBGS algorithm converges by using values for $\xi$ other than $(3-\sqrt3)/6$, acknowledging that any increase in convergence rate will come at the expense of losing the $O(h^4)$ accuracy provided by the Gaussian value $\xi = (3-\sqrt3)/6$. In brief, we show that the collocation points can be located in an optimal fashion that results in significant time savings for fixed accuracy and fixed problem size.

Recall the discussion above where we stated that we expect the fastest convergence of the preconditioned Bi-CGSTAB algorithm to occur when the eigenvalues are clustered about the origin of the complex plane. In order to measure the clustering, we consider the vector norm $\|\sigma(I - P^{-1}A)\|_2 = \|\sigma(J)\|_2$. In light of the geometric interpretation of the 2-norm, we may say that optimal clustering about the origin of the complex plane is equivalent to minimizing $\|\sigma(J)\|_2$ as a function of $\xi$, the parameter which controls collocation point location.

In the literature (e.g., [4] and [11]), the condition number $\kappa_\gamma(P^{-1}A)$ is often used as a measure of how close $P^{-1}A$ is to the identity matrix $I$, with rapid convergence occurring when $P^{-1}A \approx I$. The condition number is defined

    $\kappa_\gamma(M) = \|M\|_\gamma\,\|M^{-1}\|_\gamma$,

where $\gamma$ indicates a given matrix norm. The four most common matrix norms correspond to $\gamma = 1, 2, \infty, F$ (see [11] for the definitions of these norms). Because these matrix norms satisfy the consistency [11] or submultiplicative [12] property, the minimum value that $\kappa_\gamma(M)$ can attain is unity (which occurs when $M = I$). Thus if we were able to achieve $P = A$, then we would have $\|\sigma(J)\|_2 = 0$ (the minimum value a vector norm can attain) and $\kappa_\gamma(P^{-1}A) = 1$ (the minimum value a condition number can attain). We are therefore motivated to examine the relationships between the value of $\xi$ that produces the fastest convergence of Bi-CGSTAB/RBGS, the value of $\xi$ that minimizes $\|\sigma(J)\|_2$, and the value of $\xi$ that minimizes each of the four condition numbers.

In Figure 1, we summarize results obtained for Poisson's equation (2.1) for the case $m = 10$. Convergence is defined by the 2-norm of the residual vector being less than $10^{-8}$. For $\xi = 0.01, 0.02, \dots, 0.49$, we solved Poisson's equation using Bi-CGSTAB/RBGS, computed $\|\sigma(J)\|_2 = \|\sigma(I - P^{-1}A)\|_2$ using the eigenvalue formulae derived above, and computed the condition numbers using Matlab. With reference to Figure 1, we see that the general behavior in all six graphs is similar: i.e., as $\xi$ increases, the curves rapidly decrease to a minimum, then increase gradually. Upon more careful examination, we see that the value of $\xi$ that minimizes each of $\|\sigma(I - P^{-1}A)\|_2$, $\kappa_F(P^{-1}A)$, and $\kappa_2(P^{-1}A)$ corresponds well to the value of $\xi$ that minimizes the number of iterations required for convergence of Bi-CGSTAB/RBGS. The condition numbers using the 1- and $\infty$-norms are less effective in predicting the value of $\xi$ that produces the fastest convergence of Bi-CGSTAB/RBGS.

In Figure 2, we repeat our study for $m = 20$. The results are similar, except that now the condition number using the 2-norm also is a poor predictor of optimal convergence, leaving only $\|\sigma(I - P^{-1}A)\|_2$ and $\kappa_F(P^{-1}A)$ as effective predictors for the value of $\xi$ that produces optimal convergence of Bi-CGSTAB/RBGS.

It is worth noting that computing $\|\sigma(I - P^{-1}A)\|_2$ is easy, since we have formulas for the eigenvalues of $J$. Furthermore, finding the value of $\xi$ that minimizes $\|\sigma(I - P^{-1}A)\|_2$ for any given value of $m$ is also easy if we use the MINOS optimization software (see [13] for documentation). Using MINOS, we determined the value of $\xi$ that minimizes $\|\sigma(I - P^{-1}A)\|_2$ for Poisson's equation for a range of even values of $m$. The results are given in Figure 3.
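Any bounded one-dimensional minimizer would serve in place of MINOS here, since the objective $\xi \mapsto \|\sigma(I - P^{-1}A)\|_2$ is cheap to evaluate from the eigenvalue formulae. A sketch of the optimization step (the objective below is a dummy stand-in with a minimum inside $(0, 1/2)$, purely to show the mechanics):

```python
from scipy.optimize import minimize_scalar

def sigma_norm(xi):
    # stand-in for ||sigma(I - P^{-1}A)||_2 evaluated via Theorem 3.5;
    # a dummy profile is used here purely for illustration
    return (xi - 0.054)**2 + 0.1

res = minimize_scalar(sigma_norm, bounds=(0.01, 0.49), method="bounded")
print(res.x)  # minimizing xi, here ~0.054 by construction
```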

In examining Figure 3, we see that the optimal value of $\xi$ is relatively insensitive to problem size $m$ for sufficiently large $m$. This is an important result, for it means that a separate optimization for each value of $m$ is not necessary each time we wish to solve Poisson's equation as quickly as possible. A qualitative explanation for the insensitivity of optimal $\xi$ with respect to $m$ follows.

FIG. 1. Comparison of numerical results, condition numbers, and $\|\sigma(I - P^{-1}A)\|_2$ for Poisson's equation for $m = 10$. (Panels: iterations until convergence; $\|\sigma(I - P^{-1}A)\|_2$; $\kappa_F(P^{-1}A)$; $\kappa_1(P^{-1}A)$; $\kappa_2(P^{-1}A)$; $\kappa_\infty(P^{-1}A)$; each plotted against $\xi$.)

With respect to Theorem 3.5, we see that there are eight flavors of eigenvalues. Four of them are $(\alpha^+)^2$, $(\alpha^-)^2$, $\alpha^+\beta^-$, and $\alpha^-\beta^+$. The remaining four are of the form

    $\mu = \dfrac{\Lambda + 2d\bar d \pm \sqrt{\Lambda^2 + 4d\bar d\,\Lambda}}{2}$.

This represents four additional flavors since $\Lambda$ may be formed by combining $\alpha^+$ and $\beta^-$ or by combining $\alpha^-$ and $\beta^+$ and, for each form of $\Lambda$, we either add or subtract the radical. To summarize:

    flavor number | characterization                                  | number of eigenvalues per flavor
    1             | $(\alpha^+)^2$                                    | $m+1$
    2             | $(\alpha^-)^2$                                    | $m-1$
    3             | $\alpha^+\beta^-$                                 | $m+1$
    4             | $\alpha^-\beta^+$                                 | $m-1$
    5             | $\Lambda$ with $\alpha^+$, $\beta^-$, and $+\surd$ | $(m+1)(m-1)$
    6             | $\Lambda$ with $\alpha^+$, $\beta^-$, and $-\surd$ | $(m+1)(m-1)$
    7             | $\Lambda$ with $\alpha^-$, $\beta^+$, and $+\surd$ | $(m-1)(m-1)$
    8             | $\Lambda$ with $\alpha^-$, $\beta^+$, and $-\surd$ | $(m-1)(m-1)$

FIG. 2. Comparison of numerical results, condition numbers, and $\|\sigma(I - P^{-1}A)\|_2$ for Poisson's equation for $m = 20$.

Figures 4 through 11 show the eigenvalues associated with each of the eight flavors for $\xi = 0.01$, $0.05$, $0.1$, $0.15$, $0.2$, and $0.4$ and $m = 10$. The fact that we use the specific value $m = 10$ should not obfuscate what occurs in the general case, for the eigenvalues are merely values of continuous functions (i.e., the eight flavors) evaluated at the discrete points defined by $\varphi_j$ ($j = 0, 1, \dots, m$) and $\theta_k$ ($k = 1, 2, \dots, m-1$), where $\varphi_j = j\pi/m$ and $\theta_k = k\pi/m$ (see Theorem 3.5).

For very small $\xi$, all flavors have a large real part. As $\xi$ increases to $0.05$, all eigenvalues associated with all flavors cluster tightly around the origin, with the exception of those of flavor 7, which has a few large real eigenvalues. As $\xi$ increases further, we see that the eigenvalues of flavors 1, 2, 3, 5, and 6 remain tightly clustered about the origin, while the eigenvalues of flavor 7 gain more outliers and the eigenvalues of flavors 4 and 8 disperse significantly. This behavior is clearly not a function of $m$, but rather of the eight flavors. This explains why the optimal value of $\xi$ (i.e., the value corresponding to greatest clustering) is approximately 0.054, irrespective of problem size $m$.

FIG. 3. Optimal value of $\xi$ for various values of $m$ for Poisson's equation.

We also compare the ease of finding the value of $\xi$ that minimizes condition numbers as compared to minimizing $\|\sigma(I - P^{-1}A)\|_2$. Assuming sufficiently large $m$, the value of $\xi$ that minimizes $\|\sigma(I - P^{-1}A)\|_2$ is seen a priori to be about 0.054. On the other hand, computing condition numbers, let alone minimizing them with respect to $\xi$, is an exceedingly difficult task. Indeed, both [11] and [12] report on methods which estimate only the order of magnitude of condition numbers. However, it is evident that if we wish to use condition numbers to determine the optimal value of $\xi$, we require more precise information than mere orders of magnitude. And, as is readily seen from Figures 1 and 2, the insensitivity of optimal $\xi$ with respect to $m$ that we find for $\|\sigma(I - P^{-1}A)\|_2$ does not hold for the condition numbers, negating the possibility of a priori optimality.

Finally, we examine the issue of the tradeoff between increasing the speed of convergence of Bi-CGSTAB/RBGS using the optimal value of $\xi$ and the commensurate loss of accuracy. To address this question, we solved Poisson's equation with Dirichlet boundary conditions for two cases in which the analytical solution is known. Let $v$ be the vector whose entries are the values of $u$, the solution of Poisson's equation, at the interior mesh points of our finite element domain. Since the analytical solution is known, we had the Bi-CGSTAB/RBGS iterations stop when

    $\|v_{\mathrm{analytical}} - v_{\mathrm{numerical}}\| < \epsilon$,

where $\epsilon$ is a given tolerance. For all reported run times, we compiled an average of five individual runs. We solved both of these PDEs with problem sizes $m = 10, 20, 30, 40$. For each of these values of $m$, we ran our code using both the Gaussian value of $\xi = (3-\sqrt3)/6$ and the optimal value of $\xi$ as determined from the MINOS optimization software.

FIG. 4. Eigenvalues of flavor 1 for Poisson's equation for various values of $\xi$ for $m = 10$. (Panels: $\xi = 0.01$, $0.05$, $0.1$, $0.15$, $0.2$, $0.4$.)

For $\epsilon$, we used values of the form

    $\epsilon = 10^{-j}$,    (4.1)

$j = 2, 3, 4, 5, 6$. Because the Gaussian value of $\xi = (3-\sqrt3)/6$ provides $O(h^4)$ accuracy while the optimal value of $\xi$ provides only $O(h^2)$ accuracy, there are sufficiently large values of $j$ in (4.1) for which Bi-CGSTAB/RBGS converges when using the Gaussian value of $\xi$ but for which it fails to converge when using the optimal value of $\xi$. In the numerical experiments, the analytical solutions used are

    $u(x, y) = \sin x\,\cos y$    (4.2)

and

    $u(x, y) = \exp(x + y)$.    (4.3)

The results are given in Tables I and II, respectively. In Tables I and II, the first column is problem size $m$, while the second column is the corresponding optimal value of $\xi$, as determined from MINOS. Column three gives the tolerance $\epsilon$ from (4.1). Columns four and five (respectively, six and seven) give the average run time (in seconds) and number of iterations required to reach convergence using the optimal value of $\xi$ (respectively, the Gaussian value of $\xi$). The eighth and final column gives the "improvement" one achieves using the optimal value of $\xi$ instead of its Gaussian value.

TABLE I. Results for Poisson's equation with analytical solution (4.2). Columns: $m$; $\xi_{\mathrm{opt}}$; $\log_{10}\epsilon$; time and iterations for $\xi_{\mathrm{opt}}$; time and iterations for $\xi = (3-\sqrt3)/6$; % improvement.

TABLE II. Results for Poisson's equation with analytical solution (4.3). Columns: $m$; $\xi_{\mathrm{opt}}$; $\log_{10}\epsilon$; time and iterations for $\xi_{\mathrm{opt}}$; time and iterations for $\xi = (3-\sqrt3)/6$; % improvement.

FIG. 5. Eigenvalues of flavor 2 for Poisson's equation for various values of $\xi$ for $m = 10$.

Improvement is defined

    $\mathrm{improvement} = \dfrac{\mathrm{time(Gaussian)} - \mathrm{time(optimal)}}{\mathrm{time(Gaussian)}} \times 100\%$.

In examining Tables I and II, we see, for fixed accuracy and fixed problem size, that the convergence using the optimal value of $\xi$ is always faster than that using the Gaussian value of $\xi$. The value of the "improvement" is never less than 10% and is as high as more than 30%. The only times when the optimal value of $\xi$ fails to do better than the Gaussian value is when $\epsilon$ is so small that convergence with the optimal value of $\xi$ is unattainable.
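The improvement column is computed directly from the two timings; a one-line helper (illustrative timings only):

```python
def improvement(time_gaussian, time_optimal):
    """Percent improvement of the optimal-xi run over the Gaussian-xi run."""
    return (time_gaussian - time_optimal) / time_gaussian * 100.0

print(improvement(10.0, 7.0))  # 30.0, i.e., a 30% improvement
```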

FIG. 6. Eigenvalues of flavor 3 for Poisson's equation for various values of $\xi$ for $m = 10$.

V. SUMMARY

We derived analytical formulae for the eigenvalues that govern the rate at which the Bi-CGSTAB/RBGS method converges to the solution of the matrix equation arising from the Hermite collocation discretization of Poisson's equation. The eigenvalue formulae depend upon collocation point location, which can be chosen optimally to minimize the number of iterations required to converge to a predetermined tolerance. The optimal location of the collocation points is insensitive to problem size $m$ for sufficiently large $m$. Results of numerical experiments indicate significant speedup when we use the optimal collocation point location as compared to the Gaussian location. Additionally, although the issue was not addressed in this work, our preconditioner makes our method of solution amenable to parallel processing.

REFERENCES

1. Brill, S. H. Hermite Collocation Solution of Partial Differential Equations via Preconditioned Krylov Methods. Numerical Methods for Partial Differential Equations, in press.
2. van der Vorst, H. A. (1992). Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems. SIAM J. Sci. Stat. Comput., 13:631-644.
3. Prenter, P. M. and Russell, R. D. (1976). Orthogonal Collocation for Elliptic Partial Differential Equations. SIAM J. Numer. Anal., 13.
4. Saad, Y. and Schultz, M. H. (1985). Parallel Implementation of Preconditioned Conjugate Gradient Methods. Research Report YALEU/DCS/RR-45, Department of Computer Science, Yale University, New Haven, Connecticut.
5. Dyksen, W. R. and Rice, J. R. (1986). The Importance of Scaling for the Hermite Bicubic Collocation Equations. SIAM J. Sci. Stat. Comput., 7.
6. Lai, Y.-L., Hadjidimos, A., Houstis, E. N., and Rice, J. R. (1995). On the Iterative Solution of Hermite Collocation Equations. SIAM J. Matrix Anal. Appl., 16:254-277.

FIG. 7. Eigenvalues of flavor 4 for Poisson's equation for various values of $\xi$ for $m = 10$.

7. Papatheodorou, T. S. (1983). Block AOR Iteration for Nonsymmetric Matrices. Mathematics of Computation, 41.
8. Cottle, R. W. (1974). Manifestations of the Schur Complement. Linear Algebra and its Applications, 8.
9. Gohberg, I., Lancaster, P., and Rodman, L. (1982). Matrix Polynomials. Academic Press, New York.
10. Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem. Oxford University Press, London.
11. Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations, Third ed. The Johns Hopkins University Press, Baltimore.
12. Watkins, D. S. (1991). Fundamentals of Matrix Computations. John Wiley & Sons, Inc., New York.
13. Murtagh, B. A. and Saunders, M. A. (1987). MINOS 5.1 User's Guide. Technical Report SOL 83-20R, Department of Operations Research, Stanford University, Stanford, California.

FIG. 8. Eigenvalues of flavor 5 for Poisson's equation for various values of $\xi$ for $m = 10$.

FIG. 9. Eigenvalues of flavor 6 for Poisson's equation for various values of $\xi$ for $m = 10$.

FIG. 10. Eigenvalues of flavor 7 for Poisson's equation for various values of $\xi$ for $m = 10$.

FIG. 11. Eigenvalues of flavor 8 for Poisson's equation for various values of $\xi$ for $m = 10$.


More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 21: Sensitivity of Eigenvalues and Eigenvectors; Conjugate Gradient Method Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Gradient Method Based on Roots of A

Gradient Method Based on Roots of A Journal of Scientific Computing, Vol. 15, No. 4, 2000 Solving Ax Using a Modified Conjugate Gradient Method Based on Roots of A Paul F. Fischer 1 and Sigal Gottlieb 2 Received January 23, 2001; accepted

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

Direct and Iterative Solution of the Generalized Dirichlet-Neumann Map for Elliptic PDEs on Square Domains

Direct and Iterative Solution of the Generalized Dirichlet-Neumann Map for Elliptic PDEs on Square Domains Direct and Iterative Solution of the Generalized Dirichlet-Neumann Map for Elliptic PDEs on Square Domains A.G.Sifalakis 1, S.R.Fulton, E.P. Papadopoulou 1 and Y.G.Saridakis 1, 1 Applied Mathematics and

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS YOUSEF SAAD University of Minnesota PWS PUBLISHING COMPANY I(T)P An International Thomson Publishing Company BOSTON ALBANY BONN CINCINNATI DETROIT LONDON MADRID

More information

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication. CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

A Comparison of Solving the Poisson Equation Using Several Numerical Methods in Matlab and Octave on the Cluster maya

A Comparison of Solving the Poisson Equation Using Several Numerical Methods in Matlab and Octave on the Cluster maya A Comparison of Solving the Poisson Equation Using Several Numerical Methods in Matlab and Octave on the Cluster maya Sarah Swatski, Samuel Khuvis, and Matthias K. Gobbert (gobbert@umbc.edu) Department

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY

QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY D.D. Olesky 1 Department of Computer Science University of Victoria Victoria, B.C. V8W 3P6 Michael Tsatsomeros Department of Mathematics

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht

IDR(s) Master s thesis Goushani Kisoensingh. Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht IDR(s) Master s thesis Goushani Kisoensingh Supervisor: Gerard L.G. Sleijpen Department of Mathematics Universiteit Utrecht Contents 1 Introduction 2 2 The background of Bi-CGSTAB 3 3 IDR(s) 4 3.1 IDR.............................................

More information

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS

Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS Chapter 5 HIGH ACCURACY CUBIC SPLINE APPROXIMATION FOR TWO DIMENSIONAL QUASI-LINEAR ELLIPTIC BOUNDARY VALUE PROBLEMS 5.1 Introduction When a physical system depends on more than one variable a general

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Introduction and Stationary Iterative Methods

Introduction and Stationary Iterative Methods Introduction and C. T. Kelley NC State University tim kelley@ncsu.edu Research Supported by NSF, DOE, ARO, USACE DTU ITMAN, 2011 Outline Notation and Preliminaries General References What you Should Know

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

LS.1 Review of Linear Algebra

LS.1 Review of Linear Algebra LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information