
Peter Deuflhard

A Study of Lanczos-Type Iterations for Symmetric Indefinite Linear Systems

Preprint SC 93-6 (March 1993)

Contents

0. Introduction
1. Basic Recursive Structure
2. Algorithm Design Principles
   2.1 Iterative Error Minimization
   2.2 Truncated Coefficient Approximation
   2.3 Iterative Error Reduction
3. Compact Iteration Schemes
   3.1 Residual Projection
   3.2 Residual Orthogonality
   3.3 Blended Iteration Schemes

Abstract. The Lanczos iteration for symmetric indefinite linear systems seems to have been well-known for quite a while. However, in order to modify it with the aim of improved performance, the present paper studies certain aspects in terms of an adjoint scalar three-term recurrence. Thus, at least a different view is opened. Moreover, an alternative 3n-implementation in terms of the Euclidean orthogonal basis has been found that easily permits generalizations. The study is understood as a start-off for further numerical investigations and experiments.

0. Introduction

This paper deals with the numerical solution of linear $n$-systems

$ A x = b, \qquad x, b \in \mathbb{R}^n, $

where $A$ is a symmetric $(n,n)$-matrix, in general indefinite. For sufficiently large systems, an iterative method will be the method of choice. Since $A$ is symmetric, an orthogonal basis of $\mathbb{R}^n$ can be constructed via three-term recurrences, an observation first exploited by C. Lanczos [7]. As a consequence, compact algorithms with fixed array storage $qn$, $q = O(1)$, can be designed. However, utmost care must be taken to achieve numerical stability. Even the algorithm originally proposed by Lanczos turned out to be unstable. The most successful candidate among the stable algorithms is the conjugate gradient method, which, for positive definite $A$, minimizes the energy norm of the error in each step, a concept, however, which does not carry over to the general indefinite case. An extension to the indefinite case, aiming at minimization of the Euclidean norm of the error, has been proposed by Fridman [5]. Unfortunately, this algorithm is also numerically unstable. A numerically stable alternative using Givens rotations has been designed by Paige and Saunders [8] in a classical numerical linear algebra framework.

The purpose of the present paper is to revisit the topic mainly from the side of its recursive structure, without fixing the choice of norm too early. Thus the iterative features of a whole class of algorithms can be viewed in a more general framework, such as compact storage realization, iterative convergence and numerically stable implementation. In Section 1, a few simple technical results are exposed, which may nevertheless not be folklore. On this basis, construction principles for the design of algorithms are discussed, such as approximate Krylov subspace error minimization, error reduction or residual orthogonality. Compact implementations and their numerical stability properties are discussed in Section 3, together with a 3n-implementation that permits blending of the Lanczos iteration with other possible iterations.

1. Basic Recursive Structure

Given an initial guess $x_0$ of the unknown solution $x$, let $\Delta x := x - x_0$ denote the correction to be computed. Given additionally the initial residual

$ r_0 := b - A x_0, $

then the symmetric indefinite linear system to be solved arises as

$ A\,\Delta x = r_0, \qquad A^T = A. $  (1.1)

We assume that $A$ is nonsingular (and will test for this hypothesis in the course of computation by means of the algorithms to be studied). For a given inner product $\langle\cdot,\cdot\rangle$, which will be fixed later, let $\{v_j\}$ define an orthogonal basis such that

$ \langle v_i, v_j\rangle = \gamma_j\,\delta_{ij}. $  (1.2)

We dispose about normalization such that

$ v_j = p_j(A)\,r_0 = A^{j-1} r_0 + \dots $

in terms of a sequence of polynomials $\{p_j\}$ with leading coefficient 1. Then the sequence $\{v_j\}$ can be computed recursively via Schmidt orthogonalization

$ v_1 = r_0, \quad v_2 = A v_1 - \alpha_1 v_1, \quad v_{k+1} = A v_k - \alpha_k v_k - \beta_k^2 v_{k-1}, \quad k = 2,\dots,n-1, $  (1.3)

where

$ \alpha_k = \langle v_k, A v_k\rangle/\gamma_k, \qquad \beta_k^2 = \gamma_k/\gamma_{k-1}. $

The $\{v_1,\dots,v_k\}$ span a Krylov subspace

$ K_k(A, r_0) := \operatorname{span}\{r_0, A r_0, \dots, A^{k-1} r_0\}. $

Let

$ \bar v_j := v_j/\sqrt{\gamma_j}, \qquad j = 1,\dots,n, $

denote the associated orthonormal basis. In order to introduce matrix notation, let $V_k := [\bar v_1,\dots,\bar v_k]$ denote orthogonal $(n,k)$-matrices such that

$ V_k^T V_k = I_k, \quad k = 1,\dots,n, \qquad V_n V_n^T = I_n. $

In this notation, (1.3) can be written as

$ A V_k = V_k T_k + \beta_{k+1}\,\bar v_{k+1} e_k^T $  (1.4)

in terms of the symmetric tridiagonal $(k,k)$-matrices

$ T_k = V_k^T A V_k, \qquad T_k = \begin{pmatrix} \alpha_1 & \beta_2 & & \\ \beta_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_k \\ & & \beta_k & \alpha_k \end{pmatrix}. $  (1.5)
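The recurrence (1.3) translates directly into a short loop. The following is a minimal NumPy sketch for the Euclidean inner product; the function name and the array layout are illustrative and not taken from the paper.

```python
import numpy as np

def lanczos_unnormalized(A, r0, m):
    """Sketch of the Schmidt orthogonalization (1.3), Euclidean inner product:
        v_1 = r_0,  v_{k+1} = A v_k - alpha_k v_k - beta_k^2 v_{k-1},
        alpha_k = <v_k, A v_k>/gamma_k,  beta_k^2 = gamma_k/gamma_{k-1},
        gamma_k = <v_k, v_k>.
    Returns the unnormalized basis vectors and the recurrence coefficients."""
    n = r0.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    gamma = np.zeros(m)
    V[:, 0] = r0
    gamma[0] = r0 @ r0
    for k in range(m):
        w = A @ V[:, k]
        alpha[k] = (V[:, k] @ w) / gamma[k]
        if k + 1 == m:
            break
        w = w - alpha[k] * V[:, k]
        if k > 0:
            w = w - (gamma[k] / gamma[k - 1]) * V[:, k - 1]   # beta_k^2 * v_{k-1}
        V[:, k + 1] = w
        gamma[k + 1] = w @ w
    return V, alpha, gamma
```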

With the decomposition (1.4) for $k = n$, the original system (1.1) to be solved now arises as

$ T_n\,\xi_n = \|r_0\|\,e_1, \qquad \xi_n := V_n^T \Delta x. $  (1.6)

In componentwise notation we thus obtain the solution in the form

$ \xi_j := \langle \bar v_j, \Delta x\rangle, \qquad j = 1,\dots,n, $  (1.7)

$ \Delta x = \sum_{j=1}^{n} \xi_j \bar v_j. $  (1.8)

As for the formal solution of (1.6), we apply rational Cholesky decomposition

$ T_k = B_k D_k B_k^T, \qquad k = 1,\dots,n, $  (1.9)

with

$ B_k = \begin{pmatrix} 1 & & & \\ b_2 & 1 & & \\ & \ddots & \ddots & \\ & & b_k & 1 \end{pmatrix}, \qquad D_k = \operatorname{diag}(d_1,\dots,d_k). $

Comparing entries of $B_k$, $D_k$ and $T_k$ then yields the forward recursions

$ d_1 = \alpha_1, \qquad d_{j+1} = \alpha_{j+1} - \beta_{j+1}^2/d_j, \qquad j = 1,\dots,n-1, $  (1.10)

and

$ b_{j+1} = \beta_{j+1}/d_j, \qquad j = 1,\dots,n-1. $

By Sylvester's theorem of inertia [6], the sign distribution of the sequence $\{d_j\}$ equals the one of the eigenvalues of $A$. In particular, since $A$ is assumed to be nonsingular, none of the $d_1,\dots,d_n$ will vanish. On the other hand, if any of the $d_j$ vanishes in the course of computation, then $A$ must be singular. For this reason, utmost care must be taken, whenever these $d_j$ are used in actual computation, to treat the "nearly singular" case.

Upon rechanging normalization from $\bar v_j$ to $v_j$, the coefficients $\xi_j$ will change to

$ \zeta_j = \frac{\langle v_j, \Delta x\rangle}{\langle v_j, v_j\rangle} = \langle v_j, \Delta x\rangle/\gamma_j, \qquad j = 1,\dots,n, $

in terms of the expansion

$ \Delta x = \sum_{j=1}^{n} \zeta_j v_j. $  (1.11)
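The forward recursions (1.10), together with the inertia argument, can be sketched in a few lines; the singularity tolerance below is an arbitrary illustrative choice.

```python
import numpy as np

def rational_cholesky_pivots(alpha, beta2):
    """Forward recursions (1.10) for T = B D B^T:
        d_1 = alpha_1,   d_{j+1} = alpha_{j+1} - beta_{j+1}^2 / d_j,
        b_{j+1} = beta_{j+1} / d_j.
    Zero-based storage: alpha[i] holds alpha_{i+1}, beta2[i] holds beta_{i+1}^2
    (beta2[0] is unused).  By Sylvester's theorem of inertia the signs of the
    d_j reproduce the eigenvalue signs of A; a (nearly) vanishing d_j signals a
    (nearly) singular matrix."""
    n = len(alpha)
    d = np.zeros(n)
    b = np.zeros(n)                                   # b[0] unused
    d[0] = alpha[0]
    for i in range(1, n):
        if abs(d[i - 1]) < 1e-12 * max(abs(alpha[i - 1]), 1.0):
            raise RuntimeError("nearly singular pivot d_j encountered")
        d[i] = alpha[i] - beta2[i] / d[i - 1]
        b[i] = np.sqrt(beta2[i]) / d[i - 1]
    return d, b
```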

Accordingly, the tridiagonal system arises as

$ \bar T_n\,\zeta_n = e_1 $  (1.12)

with

$ \bar T_n := \begin{pmatrix} \alpha_1 & \beta_2^2 & & \\ 1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_n^2 \\ & & 1 & \alpha_n \end{pmatrix}. $

In order to construct an iterative method, define

$ \Delta x_k := x_k - x_0, \qquad r_k := b - A x_k, \qquad k = 0,\dots,n, $  (1.13)

which implies the iterative updates

$ x_k = x_{k-1} + \zeta_k v_k, \qquad r_k = r_{k-1} - \zeta_k A v_k, \qquad k = 1,\dots,n. $  (1.14)

Note that

$ x_n = x, \qquad r_n = 0. $  (1.15)

Obviously, once the coefficients $\{\zeta_j\}$ can be computed recursively, the updates (1.14) can be performed easily.

Lemma 1. Let $r_0 \neq 0$. In the notation just introduced, the coefficients $\{\zeta_j\}$ satisfy the (adjoint) three-term recurrence

$ \zeta_{j-1} + \alpha_j \zeta_j + \beta_{j+1}^2 \zeta_{j+1} = 0 $  (1.16)

with $\zeta_0 = -1$. At most one component $\zeta_q$ of $\{\zeta_j\}$ can vanish. Moreover, the iterative residuals can be represented as

$ r_k = \beta_{k+1}^2 \zeta_{k+1}\, v_k - \zeta_k\, v_{k+1}, \qquad k = 0,\dots,n-1. $  (1.17)

Proof. In order to show (1.16) above, just recall (1.12):

$ \bar T_n\,\zeta_n = e_1 = (1, 0, \dots, 0)^T. $

Herein, rows 2 up to $(n-1)$ anyway represent (1.16). Row 1 is included by defining $\zeta_0 = -1$. Row $n$ is included, since $\gamma_{n+1} = \langle v_{n+1}, v_{n+1}\rangle = 0$, which implies $\beta_{n+1}^2 = \gamma_{n+1}/\gamma_n = 0$, so that the last row degenerates. With the homogeneous three-term recurrence (1.16) valid, no more than one single component $\zeta_q$ is allowed to vanish, since otherwise all components must vanish, which is a contradiction to the condition $r_0 \neq 0$.

Finally, the representation for the residuals is easily obtained:

$ r_k = r_0 - A\,\Delta x_k = v_1 - \sum_{j=1}^{k} \zeta_j A v_j = -\sum_{j=1}^{k-1}\bigl(\zeta_{j-1} + \alpha_j\zeta_j + \beta_{j+1}^2\zeta_{j+1}\bigr)v_j - (\zeta_{k-1} + \alpha_k\zeta_k)\,v_k - \zeta_k\, v_{k+1} = \beta_{k+1}^2\zeta_{k+1}\, v_k - \zeta_k\, v_{k+1}. $

With $\zeta_0 = -1$ and the recurrence (1.16) given, a forward recursion for $x_k$, $k = 1, 2, \dots$ can in principle be constructed, once the coefficient

$ \zeta_1 = \frac{\langle r_0, \Delta x\rangle}{\langle r_0, r_0\rangle} = \langle v_1, \Delta x\rangle/\gamma_1 $  (1.18)

is computationally available. For the Euclidean inner product, (1.18) is just the definition, which cannot be evaluated, unless the solution $\Delta x$ is already known. If, however, $A$ is not only symmetric but also positive definite, then $\langle\cdot,\cdot\rangle$ can be specified as the well-known energy product

$ \langle u, v\rangle_A := \langle u, A v\rangle. $  (1.19)

As a consequence, one then obtains

$ \zeta_1 = \frac{\langle r_0, \Delta x\rangle_A}{\langle r_0, r_0\rangle_A} = \frac{r_0^T r_0}{r_0^T A r_0}, $  (1.20)

which can actually be evaluated. This property leads to the conjugate gradient method. Assume, however, that the inner product is specified such that $\zeta_1$ cannot be directly evaluated. Then the dependence of the iterative solutions $x_k$ on $\zeta_1$ is of interest.

Lemma 2. Let $\{\sigma_j\}$, $\{\tau_j\}$ denote two solutions of the adjoint three-term recurrence (1.16) with starting values

$ \sigma_0 = -1, \ \sigma_1 = 0 \qquad\text{and}\qquad \tau_0 = 0, \ \tau_1 = 1. $  (1.21)

Then the Krylov subspace solution can be represented as

$ \Delta x_k = \sum_{j=1}^{k} \zeta_j v_j, \qquad \zeta_j = \sigma_j + \zeta_1 \tau_j, $  (1.22)

or, equivalently,

$ \Delta x_k = y_k + \zeta_1 z_k $

in terms of

$ y_k = \sum_{j=1}^{k} \sigma_j v_j, \qquad z_k = \sum_{j=1}^{k} \tau_j v_j. $

The sequences $\{\sigma_j\}$, $\{\tau_j\}$ or, equivalently, the corrections $y_k$, $z_k$, $k = 1,\dots,n$, are linearly independent. Moreover, one has

$ \sigma_j \neq 0, \quad \tau_j \neq 0, \qquad j = 2,\dots,n. $  (1.23)

In particular, the $\tau$-sequence can be obtained from

$ \tau_{k+1} = -\frac{\gamma_k}{\gamma_{k+1}}\, d_k\, \tau_k, \qquad k = 1,\dots,n-1, $  (1.24)

with $\{d_k\}$ as defined in (1.10).

Proof. The adjoint three-term recurrence for the coefficients $\zeta_j$ has two linearly independent solutions. With $\zeta_0 = -1$ given and $\zeta_1$ to be used as a parameter, the selection of the solutions $\{\sigma_j\}$, $\{\tau_j\}$ by $\sigma_0 = -1$, $\sigma_1 = 0$ and $\tau_0 = 0$, $\tau_1 = 1$ is natural. Both sequences already have one vanishing element, which implies that a further element cannot vanish, unless the whole sequence vanishes, which would be a contradiction. As for the linear independence, let

$ D(k, k+1) := \sigma_k \tau_{k+1} - \tau_k \sigma_{k+1} $  (1.25)

denote the associated Casorati determinant, compare e.g. [2] or [3]. Then insertion of (1.16) yields

$ \beta_{k+1}^2 D(k, k+1) = -\sigma_k(\tau_{k-1} + \alpha_k \tau_k) + \tau_k(\sigma_{k-1} + \alpha_k \sigma_k) = D(k-1, k). $

Hence, with $\beta_{k+1}^2 = \gamma_{k+1}/\gamma_k$, the invariance relation

$ \gamma_{k+1} D(k, k+1) = \dots = \gamma_1 D(0, 1) = -\gamma_1 $  (1.26)

is directly obtained. As a consequence,

$ D(k, k+1) \neq 0, \qquad k = 0,\dots,n-1, $

which implies linear independence of the sequences $\{\sigma_j\}$, $\{\tau_j\}$ and the corrections $y_k$, $z_k$ for all $k = 1,\dots,n$. Finally, with (1.16) for $\{\tau_j\}$, i.e.

$ \tau_{j-1} + \alpha_j \tau_j + \beta_{j+1}^2 \tau_{j+1} = 0, \qquad j = 1,\dots,n-1, $

and $\tau_j \neq 0$ for $j = 1,\dots,n$, we obtain

$ \frac{\tau_{j-1}}{\tau_j} + \alpha_j + \beta_{j+1}^2\,\frac{\tau_{j+1}}{\tau_j} = 0, \qquad j = 1,\dots,n-1. $

Upon comparing this continued fraction representation with the rational recursion (1.10), one immediately observes equality of these recursions for the choice

$ d_j = -\beta_{j+1}^2\,\frac{\tau_{j+1}}{\tau_j} $

and, in addition, consistency of the starting values, since $-\beta_2^2\,\tau_2/\tau_1 = \tau_0 + \alpha_1\tau_1 = \alpha_1 = d_1$.
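The two fundamental solutions of Lemma 2 are obtained by running the adjoint recurrence (1.16) forward from the two sets of starting values (1.21). A minimal sketch with one-based storage; all names are illustrative:

```python
import numpy as np

def adjoint_fundamental_solutions(alpha, beta2, n):
    """Two solutions (sigma_j), (tau_j) of the adjoint three-term recurrence (1.16),
        zeta_{j-1} + alpha_j zeta_j + beta_{j+1}^2 zeta_{j+1} = 0,
    with starting values (1.21): sigma_0 = -1, sigma_1 = 0 and tau_0 = 0, tau_1 = 1.
    One-based storage: alpha[j] ~ alpha_j (j = 1..n-1), beta2[j] ~ beta_j^2
    (j = 2..n); index 0 is unused."""
    sigma = np.zeros(n + 1)
    tau = np.zeros(n + 1)
    sigma[0], sigma[1] = -1.0, 0.0
    tau[0], tau[1] = 0.0, 1.0
    for j in range(1, n):
        sigma[j + 1] = -(sigma[j - 1] + alpha[j] * sigma[j]) / beta2[j + 1]
        tau[j + 1] = -(tau[j - 1] + alpha[j] * tau[j]) / beta2[j + 1]
    # The Casorati determinants D(k, k+1) = sigma_k tau_{k+1} - tau_k sigma_{k+1}
    # satisfy the invariance (1.26), gamma_{k+1} D(k, k+1) = -gamma_1, which can
    # serve as a consistency check of the recursion.
    return sigma, tau
```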

2. Algorithm Design Principles

The problem stated in the preceding section is the efficient recursive computation of the Krylov subspace solutions for increasing index $k$ with the tridiagonal matrices

$ T_k\,\xi^k = \|r_0\|\,e_1, \qquad \Delta x_k = V_k\,\xi^k, $  (2.1)

wherein $T_k$ is the symmetric tridiagonal matrix from (1.5) and

$ \gamma_j := \langle v_j, v_j\rangle, \qquad \alpha_j := \langle v_j, A v_j\rangle/\gamma_j. $

For $A$ symmetric and positive definite, which implies the same for $T_k$, the corrections $\Delta x_k$ are just those obtained from the conjugate gradient method. Following [8], this can be seen (in our notation) starting from (1.6) and (1.9):

$ \Delta x_k = V_k\,\xi^k = V_k T_k^{-1}\|r_0\|\,e_1 = V_k B_k^{-T} D_k^{-1} B_k^{-1}\|r_0\|\,e_1. $

At this point, a splitting into pure forward recursions is done by introducing a new basis $\{p_j\}$ by

$ V_k B_k^{-T} =: P_k = [p_1,\dots,p_k]. $

With $b_j = \beta_j/d_{j-1}$, $j = 2,\dots,n$, this can be written as

$ \bar v_1 = p_1 = r_0/\|r_0\|, \qquad \bar v_j = \frac{\beta_j}{d_{j-1}}\,p_{j-1} + p_j, \qquad j = 2,\dots,n. $  (2.2)

For this basis, we obtain

$ P_k^T A P_k = B_k^{-1} V_k^T A V_k B_k^{-T} = D_k, $  (2.3)

which means that the $\{p_j\}$ are $A$-orthogonal and

$ d_j = \langle p_j, A p_j\rangle > 0, \qquad j = 1,\dots,n. $  (2.4)

So the energy product $\langle\cdot,\cdot\rangle_A = \langle\cdot, A\cdot\rangle$ is the appropriate inner product. Then the associated coefficients, say $\tilde\xi^k = (\tilde\xi_1,\dots,\tilde\xi_k)$, can be obtained from

$ B_k D_k\,\tilde\xi^k = \|r_0\|\,e_1 $  (2.5)

by pure forward recursion.
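For illustration, a hypothetical sketch of this forward-recursion form of the conjugate gradient method for symmetric positive definite $A$ and the Euclidean inner product: the orthonormal Lanczos vectors are converted into the $A$-orthogonal directions via (2.2), the pivots follow from (2.4), and the coefficients from (2.5) by forward substitution. Variable names and the termination test are not from the paper.

```python
import numpy as np

def cg_via_forward_recursions(A, b, x0, m):
    """Sketch of the splitting (2.2)-(2.5), Euclidean inner product, spd A:
    p_j = v_j - (beta_j/d_{j-1}) p_{j-1},  d_j = <p_j, A p_j>,
    eta_1 = ||r_0||/d_1,  eta_j = -beta_j eta_{j-1}/d_j  (from B D eta = ||r_0|| e_1),
    x_j = x_{j-1} + eta_j p_j."""
    r0 = b - A @ x0
    nrm = np.linalg.norm(r0)
    v, v_prev = r0 / nrm, np.zeros_like(r0)   # orthonormal Lanczos vectors
    beta = 0.0                                 # off-diagonal beta_j
    p = np.zeros_like(r0)
    d = 1.0
    eta = 0.0
    x = x0.copy()
    for j in range(1, m + 1):
        p = v - (beta / d) * p                 # (2.2); for j = 1 this is just v
        Ap = A @ p
        d_new = p @ Ap                         # (2.4)
        eta = nrm / d_new if j == 1 else -beta * eta / d_new
        x = x + eta * p
        d = d_new
        # one more step of the orthonormal Lanczos recurrence
        w = A @ v
        a = v @ w
        w = w - a * v - beta * v_prev
        beta_new = np.linalg.norm(w)
        if beta_new < 1e-14:
            break
        v_prev, v, beta = v, w / beta_new, beta_new
    return x
```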

For $A$ symmetric indefinite, however, we still have $d_j \neq 0$ for all $j$, but an energy product can no longer be defined. Rather, the Euclidean inner product or other suitable choices need to be investigated. For this reason, various principles for constructing such algorithms are now discussed in the light of the results of Section 1.

2.1 Iterative Error Minimization

For a given iterate $x_k = x_0 + \Delta x_k$, let

$ \varepsilon_k := \langle x_k - x, x_k - x\rangle^{1/2} = \|x_k - x\| $  (2.6)

denote the error in the norm induced by the given inner product $\langle\cdot,\cdot\rangle$. Let this inner product be such that $\zeta_1$ can be actually evaluated. Then (2.1) leads to the well-known Galerkin condition

$ \langle \Delta x_k, x_k - x\rangle = 0, $  (2.7)

which then directly induces the minimization property

$ \|x_k - x\| = \min_{y \in x_0 + K_k(A,r_0)} \|y - x\| $  (2.8)

and the associated reduction property

$ \varepsilon_{k+1}^2 = \varepsilon_k^2 - \|x_{k+1} - x_k\|^2 \le \varepsilon_k^2. $  (2.9)

As a consequence of (2.8), the ultimate solution is obtained at the final step, i.e.

$ x_n = x. $  (2.10)

For $A$ positive definite, this situation can be constructed by choosing the energy product

$ \langle\cdot,\cdot\rangle := \langle\cdot, A\cdot\rangle $  (2.11)

as inner product.

As a consequence, $\sqrt{\kappa_2(A)}$, with $\kappa_2(A)$ the spectral condition number of $A$, governs the convergence behavior of the iterates. Unfortunately, for $A$ indefinite, (2.11) would no longer define an inner product. Therefore, even though the formal extension of the cg-method can be defined, its convergence properties are no longer satisfactory and, in addition, its traditional implementation may just fail at intermediate steps. In this situation, Fridman [5] had suggested to modify the Krylov subspaces by choosing

$ v_1 = A r_0, $  (2.12)

which then implies that

$ \zeta_1 = \langle v_1, \Delta x\rangle/\gamma_1 = \langle r_0, A\,\Delta x\rangle/\gamma_1 = \langle r_0, r_0\rangle/\gamma_1. $  (2.13)

Upon looking back to the proof of Lemma 1, it can be seen that the above choice means a switch to the normal equations

$ A^2\,\Delta x = A r_0 $  (2.14)

and, as a consequence, to Krylov subspaces of the form $K_k(A^2, A r_0)$. This is not what is wanted here. Hence, other possibilities need to be discussed.

2.2 Truncated Coefficient Approximation

The short-hand notation (2.1) hides the fact that the components of the vectors $\xi^k$ change fully with each increase of $k$. In componentwise notation, we have to write

$ \Delta x_k = \sum_{j=1}^{k} \zeta_j^k v_j. $  (2.15)

At this point, Paige and Saunders [8] decided to exploit the tridiagonal matrices $T_k$ by Givens transformations from the right, thus reducing the entries to be changed. Here, however, we will resort to Lemma 2 above. From this, the coefficients $\zeta_j^k$ can be seen to have a fairly simple structure.

Lemma 3. The coefficients $\zeta_j^k$ in terms of $\{\sigma_j\}$, $\{\tau_j\}$ as defined in Lemma 2, with $\Delta x_k$ as defined in (2.15), may be expressed by

$ \zeta_j^k = \sigma_j + \lambda_k \tau_j, \qquad j = 1,\dots,k, $  (2.16)

$ \lambda_k = -\frac{\sigma_{k+1}}{\tau_{k+1}}, \qquad k = 1,\dots,n-1; \qquad \lambda_n = \zeta_1. $  (2.17)

The associated residuals $r_k$ are mutually orthogonal with respect to $\langle\cdot,\cdot\rangle$ and can be represented as

$ r_k = -\hat b_k\, v_{k+1}, \qquad k = 0,\dots,n-1, $  (2.18)

with

$ \hat b_k := \sigma_k + \lambda_k \tau_k = -\frac{\gamma_1}{\gamma_{k+1}\tau_{k+1}}, \qquad k = 0,\dots,n-1. $  (2.19)

Proof. Let $\zeta^k := (\zeta_1^k,\dots,\zeta_k^k)^T$ with $\zeta_j^k$ as in (2.15). Then $\Delta x_k$ can be defined via

$ \bar T_k\,\zeta^k = e_1. $  (2.20)

In view of Lemma 2, $\zeta^k$ is split according to

$ \zeta_j^k = \sigma_j + \lambda_k \tau_j, \qquad j = 1,\dots,k, \quad k = 1,\dots,n, $

which is just (2.16). Let $\sigma^k := (\sigma_1,\dots,\sigma_k)^T$, $\tau^k := (\tau_1,\dots,\tau_k)^T$; then the equivalent equations to (2.20) are easily verified to be

$ \bar T_k\,\sigma^k = e_1 - \beta_{k+1}^2\sigma_{k+1}\, e_k, $  (2.21)

$ \bar T_k\,\tau^k = -\beta_{k+1}^2\tau_{k+1}\, e_k. $  (2.22)

Now with (2.16) and the combination of (2.20), (2.21), and (2.22) we have

$ 0 = \bar T_k(\sigma^k + \lambda_k\tau^k) - \bar T_k\,\zeta^k = -\beta_{k+1}^2(\sigma_{k+1} + \lambda_k\tau_{k+1})\,e_k. $

From this, (2.17) follows immediately. For $k = n$, we have $\lambda_n = \zeta_1$, since $x_n = x$. Finally, the residual representation (1.17) can be employed to obtain

$ r_k = r_0 - A\,\Delta x_k = r_0 - A(y_k + \lambda_k z_k) = \beta_{k+1}^2(\sigma_{k+1} + \lambda_k\tau_{k+1})\,v_k - (\sigma_k + \lambda_k\tau_k)\,v_{k+1}. $

In the notation above this boils down to

$ r_k = -\hat b_k\,v_{k+1}, \qquad \hat b_k = \sigma_k + \lambda_k\tau_k. $

Insertion of (2.17) and use of the Casorati determinant relation (1.26) then yields

$ \hat b_k = \frac{\sigma_k\tau_{k+1} - \tau_k\sigma_{k+1}}{\tau_{k+1}} = \frac{D(k,k+1)}{\tau_{k+1}} = \frac{\gamma_1 D(0,1)}{\gamma_{k+1}\tau_{k+1}} = -\frac{\gamma_1}{\gamma_{k+1}\tau_{k+1}}. $

This completes the proof of the lemma.

Note that (2.16) is a representation both for the cg-iterates (including the indefinite case) and the iterates obtained from SYMMLQ due to [8]. The residual orthogonality is well-known, but usually proved for the spd-case only. In Section 3.2 below, we will give an efficient algorithm to compute the iterates based on the orthogonal residuals. Unfortunately, these iterates do not reduce any error norm, if $A$ is indefinite. For this reason, a further design principle is investigated next.

2.3 Iterative Error Reduction

As was shown in Section 2.2 before, the fact that $\lambda_k \neq \zeta_1$ for $k < n$ leads to the consequence that the error $x_k - x$ is not reduced systematically in the norm $\|\cdot\| = \langle\cdot,\cdot\rangle^{1/2}$. It might therefore be of interest to study a correction of the type (2.15), but with

$ \zeta_j^k := \sigma_j + \theta_k \tau_j, \qquad j = 1,\dots,k, $  (2.23)

for arbitrary choice of $\{\theta_k\}$. The question of interest will then be: can the coefficients $\theta_k$ be determined in such a way that at least (2.7) and (2.9) can be saved, if not the full minimization property (2.8)?

Lemma 4. Consider general iterates $x_k = x_0 + \Delta x_k$ with $\Delta x_k$ defined by (2.15) and (2.23). Let $\|\cdot\|$ be the norm induced by the specified inner product $\langle\cdot,\cdot\rangle$. Then the choice

$ \theta_k = -\frac{\langle y_k, z_k\rangle}{\langle z_k, z_k\rangle} $  (2.24)

guarantees that

$ \varepsilon_{k+1}^2 = \varepsilon_k^2 - \|x_{k+1} - x_k\|^2 \le \varepsilon_k^2. $

The denominator in (2.24) never vanishes.

Proof. Starting from (2.23), we obtain the error expansion

$ x_k - x = \Delta x_k - \Delta x = \sum_{j=1}^{k}(\sigma_j + \theta_k\tau_j)v_j - \sum_{j=1}^{n}(\sigma_j + \zeta_1\tau_j)v_j = (\theta_k - \zeta_1)\sum_{j=1}^{k}\tau_j v_j - \sum_{j=k+1}^{n}(\sigma_j + \zeta_1\tau_j)v_j. $

Hence, with (2.15) and the orthogonality of the $\{v_j\}$,

$ \langle x_k - x, \Delta x_k\rangle = (\theta_k - \zeta_1)\sum_{j=1}^{k}\gamma_j\tau_j(\sigma_j + \theta_k\tau_j). $

There are two choices that make this inner product vanish: either $\theta_k = \zeta_1$, the case already discussed in Section 2.1, or

$ \sum_{j=1}^{k}\gamma_j\tau_j\sigma_j + \theta_k\sum_{j=1}^{k}\gamma_j\tau_j^2 = 0. $  (2.25)

Upon introducing the definitions of $y_k$, $z_k$ from Lemma 2, the result (2.24) is directly confirmed. Finally note that

$ \langle z_1, z_1\rangle = \tau_1^2\gamma_1 = \gamma_1 > 0, $

which implies $\langle z_k, z_k\rangle > 0$ for all $k$.

For actual computation, (2.25) will be used to define $\{\theta_k\}$ recursively:

$ \varphi_1 := 0, \quad \psi_1 := \gamma_1, \quad \theta_1 = 0; $

$ k = 1, 2, \dots: \qquad \varphi_{k+1} = \varphi_k + \gamma_{k+1}\tau_{k+1}\sigma_{k+1}, \qquad \psi_{k+1} = \psi_k + \gamma_{k+1}\tau_{k+1}^2, \qquad \theta_{k+1} = -\varphi_{k+1}/\psi_{k+1}. $  (2.26)

Finally, note that this choice of $\theta_k$ implies the following orthogonal projection property

$ \Delta x_k = y_k - \frac{\langle y_k, z_k\rangle}{\langle z_k, z_k\rangle}\,z_k = \Bigl(I - \frac{z_k z_k^T}{z_k^T z_k}\Bigr)y_k, $  (2.27)

written, for ease of notation, in terms of the Euclidean inner product.
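The recursion (2.26) amounts to updating two scalar accumulators. A minimal sketch; the accumulator names phi and psi are introduced here purely for illustration.

```python
import numpy as np

def error_reducing_thetas(sigma, tau, gamma, n):
    """Recursion (2.26) for theta_k = -<y_k, z_k>/<z_k, z_k> of Lemma 4.
    phi accumulates sum_j gamma_j tau_j sigma_j, psi accumulates sum_j gamma_j tau_j^2.
    One-based storage: sigma[j], tau[j], gamma[j] hold the quantities with index j."""
    theta = np.zeros(n + 1)
    phi, psi = 0.0, 0.0
    for k in range(1, n + 1):
        phi += gamma[k] * tau[k] * sigma[k]
        psi += gamma[k] * tau[k] ** 2
        theta[k] = -phi / psi
    return theta
```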

3. Compact Iteration Schemes

In the positive definite case, an efficient compact iteration scheme with essentially 3n array storage is the well-known conjugate gradient method. The aim of this section is to construct a comparable scheme for the indefinite case. One trick in the cg-method certainly is the introduction of the residuals

$ r_k := b - A x_k = r_0 - A\,\Delta x_k $  (3.1)

as the actual variables, in order to avoid computational use of the ill-conditioned $n$-system of three-term recurrences (1.3) for the basis elements $\{v_j\}$. This trick is now analyzed for the indefinite case.

3.1 Residual Projection

The result (1.17) of Lemma 1 readily leads to two different ways of representing the coefficients $\{\zeta_j\}$. First, we have

$ \langle v_k, r_k\rangle = \beta_{k+1}^2\zeta_{k+1}\gamma_k = \gamma_{k+1}\zeta_{k+1}, \qquad k = 1, 2,\dots,n-1. $  (3.2)

Second, we obtain

$ \langle v_{k+1}, r_k\rangle = -\zeta_k\gamma_{k+1}, \qquad k = 0,\dots,n-1. $

For $k = 0$, the latter representation leads to $\zeta_0 = -1$, as required. For $k > 0$, however, this representation is just an identity, since the actual computation of $r_k$ involves $\Delta x_k$, which, in turn, requires the knowledge of $\zeta_k$. Therefore, only (3.2) remains useful for actual computation. Since (3.2) starts with $\zeta_2$, any approach based on this formula will only be executable, if $\zeta_1$ is known, in agreement with the structural results of Lemma 1. Suppose therefore that $\zeta_1$ is available; then (1.17) inspires the following outline of an algorithm:

k = 0:  $v_1 := r_0$;  $x_1 := x_0 + \zeta_1 v_1$;  $\gamma_1 = \langle v_1, v_1\rangle$

k = 1, ..., n-1:

  $r_k := r_{k-1} - \zeta_k A v_k$

  $[\gamma_{k+1}\zeta_{k+1}] := \langle v_k, r_k\rangle$

  if $\zeta_k \neq 0$ then $v_{k+1} := \bigl([\gamma_{k+1}\zeta_{k+1}]\,v_k/\gamma_k - r_k\bigr)/\zeta_k$  (3.3)

  $\gamma_{k+1} := \langle v_{k+1}, v_{k+1}\rangle$;  $\zeta_{k+1} := [\gamma_{k+1}\zeta_{k+1}]/\gamma_{k+1}$

The critical step in this algorithm is certainly (3.3) for $\zeta_k = 0$. Fortunately, vanishing components of $\{\zeta_j\}$ can only occur once, see Lemma 1. So a type of "look-ahead strategy" can be derived, in principle. For details of such strategies, see e.g. [4] or [1]. Unfortunately, however, "nearly vanishing" components of $\{\zeta_j\}$ cannot generally be excluded, with the consequence that a look-ahead strategy of variable length must be developed for emergency situations. Moreover, upon inserting (3.2) into (3.3), any new basis element is seen to come from a residual projection, since

$ v_{k+1} = \frac{1}{\zeta_k}\Bigl(\frac{\langle v_k, r_k\rangle}{\langle v_k, v_k\rangle}\,v_k - r_k\Bigr) = -\frac{1}{\zeta_k}\Bigl(I - \frac{v_k v_k^T}{v_k^T v_k}\Bigr) r_k. $  (3.4)

(Once more, for ease of writing, the Euclidean inner product has been used.) As a consequence, either numerical instability or frequent restart under a condition such as

$ |\zeta_{k+1}| < \varepsilon $  (3.5)

will occur. For this reason, this type of algorithm is not pursued further here.
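For illustration, the outline (3.3) together with a restart criterion of type (3.5) may be sketched as follows (Euclidean inner product, coefficient zeta_1 assumed to be available; names and the default tolerance are illustrative):

```python
import numpy as np

def residual_projection_sketch(A, r0, zeta1, m, eps=1e-8):
    """Sketch of the outline (3.3).  Each new basis vector is obtained from the
    residual projection (3.4); the iteration is broken off (restarted in
    practice) once |zeta_{k+1}| < eps, cf. (3.5).  Returns the correction."""
    v = r0.copy()                               # v_1 = r_0
    gamma = v @ v                               # gamma_1
    zeta = zeta1                                # zeta_1 (assumed known)
    dx = zeta * v                               # correction after the first step
    r = r0.copy()                               # r_0
    for k in range(1, m):
        r = r - zeta * (A @ v)                  # r_k = r_{k-1} - zeta_k A v_k
        prod = v @ r                            # [gamma_{k+1} zeta_{k+1}] = <v_k, r_k>
        v = ((prod / gamma) * v - r) / zeta     # v_{k+1}, the projection (3.4)
        gamma = v @ v                           # gamma_{k+1}
        zeta = prod / gamma                     # zeta_{k+1}
        if abs(zeta) < eps:                     # restart criterion of type (3.5)
            break
        dx = dx + zeta * v                      # x_{k+1} = x_k + zeta_{k+1} v_{k+1}
    return dx
```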

3.2 Residual Orthogonality

Upon recalling the general iteration for

$ \Delta x_k = \sum_{j=1}^{k} \zeta_j^k v_j, \qquad \zeta_j^k = \sigma_j + \theta_k\tau_j, \qquad j = 1,\dots,k, $  (3.6)

an alternative simple idea to exploit the residuals as actual variables can be derived.

Lemma 3 above shows that, for

$ \theta_k := \lambda_k = -\frac{\sigma_{k+1}}{\tau_{k+1}}, $  (3.7)

the iterates $\hat x_k := x_0 + \Delta x_k$ give rise to orthogonal residuals of the form

$ \hat r_k = -\hat b_k\,v_{k+1}, \qquad k = 0,\dots,n-1, \qquad \hat b_k := -\frac{\gamma_1}{\gamma_{k+1}\tau_{k+1}}, \qquad k = 0,\dots,n-1. $

For actual computation of the $\{\hat b_k\}$ note that, with (1.24),

$ \frac{\hat b_k}{\hat b_{k-1}} = \frac{\gamma_k\tau_k}{\gamma_{k+1}\tau_{k+1}} = -\frac{1}{d_k}, \qquad k = 1,\dots,n. $  (3.8)

This recursion is started with $\hat b_0 = -1$, which implies $\hat b_1 = 1/\alpha_1$. In a similar way, the $\{\lambda_k\}$ can be obtained recursively, starting from

$ \lambda_1 = 1/\alpha_1. $  (3.9)

Now, define $\hat\delta_1 := \lambda_1$ and

$ \hat\delta_k := \lambda_k - \lambda_{k-1}, \qquad k = 2,\dots,n. $  (3.10)

Then, with (1.26), we have

$ \hat\delta_k = -\frac{\sigma_{k+1}}{\tau_{k+1}} + \frac{\sigma_k}{\tau_k} = \frac{D(k, k+1)}{\tau_k\tau_{k+1}} = -\frac{\gamma_1}{\gamma_{k+1}\tau_k\tau_{k+1}} = \frac{\hat b_k}{\tau_k}. $  (3.11)

Hence, for $k = 1,\dots,n-1$,

$ \frac{\hat\delta_{k+1}}{\hat\delta_k} = \frac{\gamma_{k+1}\tau_k}{\gamma_{k+2}\tau_{k+2}}, $

which, by insertion of (1.24), yields

$ \hat\delta_{k+1} = \hat\delta_k\,\frac{\gamma_{k+1}}{\gamma_k d_k d_{k+1}}, \qquad k = 1,\dots,n-1. $  (3.12)

The orthogonality of the residuals can be exploited to derive an alternative recursive scheme in terms of the basis $\{v_j\}$. The derivation requires some rather straightforward calculations, which we summarize in the following lemma.

Lemma 5. Let $\{\hat x_k\}$ denote the extended cg-iterates, $\{\hat r_k\}$ the associated residuals, and $d\hat x_k := \hat x_k - \hat x_{k-1}$ the iterative corrections. Moreover, define the expressions

$ \hat\gamma_k := \langle \hat r_{k-1}, \hat r_{k-1}\rangle, \qquad \hat d_k := d_k/\hat\gamma_k, \qquad \hat\alpha_k := \langle \hat r_{k-1}, A\hat r_{k-1}\rangle/\hat\gamma_k. $  (3.13)

Then the iterates can be computed by the compact scheme ($k = 1,\dots,n-1$)

$ \hat r_k := \hat r_{k-1} - A\,d\hat x_k, \qquad d\hat x_{k+1} := \frac{\hat d_k}{\hat d_{k+1}}\,d\hat x_k + \frac{1}{\hat\gamma_{k+1}\hat d_{k+1}}\,\hat r_k, \qquad \hat x_{k+1} := \hat x_k + d\hat x_{k+1}, $  (3.14)

wherein

$ \hat d_k = \sum_{j=1}^{k} (-1)^{k-j}\,\hat\alpha_j/\hat\gamma_j. $  (3.15)

The iteration is started with

$ \hat r_0 := b - A x_0, \qquad d\hat x_1 = \hat r_0/\hat\alpha_1, \qquad \hat x_1 = x_0 + d\hat x_1. $

Proof. For general iterates $\{x_k\}$ we obtain

$ dx_{k+1} = x_{k+1} - x_k = \sum_{j=1}^{k+1}(\sigma_j + \theta_{k+1}\tau_j)v_j - \sum_{j=1}^{k}(\sigma_j + \theta_k\tau_j)v_j = (\sigma_{k+1} + \theta_k\tau_{k+1})\,v_{k+1} + (\theta_{k+1} - \theta_k)\,z_{k+1}. $  (3.16)

The special iterates $\{\hat x_k\}$ are characterized by $\theta_k = \lambda_k$, which yields

$ d\hat x_{k+1} = \hat\delta_{k+1}\,z_{k+1}. $

From this, we directly calculate

$ d\hat x_{k+1} = \hat\delta_{k+1}(z_k + \tau_{k+1} v_{k+1}) = \frac{\hat\delta_{k+1}}{\hat\delta_k}\,d\hat x_k - \frac{\hat\delta_{k+1}\tau_{k+1}}{\hat b_k}\,\hat r_k. $

Insertion of (3.11), (3.12) and (1.24) yields

$ \frac{\hat\delta_{k+1}\tau_{k+1}}{\hat b_k} = \frac{\hat b_{k+1}}{\hat b_k} = -\frac{1}{d_{k+1}}. $

Once more with (3.12), we thus have

$ d\hat x_{k+1} = \frac{\gamma_{k+1}}{\gamma_k d_k d_{k+1}}\,d\hat x_k + \frac{1}{d_{k+1}}\,\hat r_k. $

At this point, the continued fraction representation (1.10) can be used, in principle, which reads

$ d_1 = \alpha_1, \qquad d_{k+1} = \alpha_{k+1} - \frac{\gamma_{k+1}}{\gamma_k d_k}, \qquad k = 1,\dots,n-1. $

Note that, due to (2.18),

$ \hat\gamma_{k+1} = \hat b_k^2\,\gamma_{k+1}, \qquad \hat\alpha_{k+1} = \alpha_{k+1}. $

Upon introducing $\hat d_k$, we arrive at the linear recursion

$ \hat d_k = \hat\alpha_k/\hat\gamma_k - \hat d_{k-1}, $  (3.17)

which can be solved in closed form to confirm (3.15). Finally, with (3.8) it is seen that

$ \frac{\hat\gamma_{k+1}}{\hat\gamma_k} = \frac{\gamma_{k+1}}{\gamma_k d_k^2}, \qquad\text{hence}\qquad \frac{\gamma_{k+1}}{\gamma_k d_k d_{k+1}} = \frac{\hat\gamma_{k+1}}{\hat\gamma_k}\cdot\frac{d_k}{d_{k+1}} = \frac{\hat d_k}{\hat d_{k+1}} \qquad\text{and}\qquad \frac{1}{d_{k+1}} = \frac{1}{\hat\gamma_{k+1}\hat d_{k+1}}. $
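A minimal sketch of the compact scheme (3.13)-(3.15) for the Euclidean inner product; the restart safeguard of (3.20)-(3.23) below is omitted, and all names are illustrative.

```python
import numpy as np

def compact_indefinite_cg(A, b, x0, m):
    """Sketch of the compact 3n-scheme (3.14), with the pivots obtained from
    the linear recursion (3.17) (equivalently, the alternating sum (3.15)):
        gamma_hat_k = <r_{k-1}, r_{k-1}>,
        alpha_hat_k = <r_{k-1}, A r_{k-1}> / gamma_hat_k,
        d_hat_k     = alpha_hat_k/gamma_hat_k - d_hat_{k-1},
        dx_{k+1}    = (d_hat_k/d_hat_{k+1}) dx_k + r_k/(gamma_hat_{k+1} d_hat_{k+1})."""
    r = b - A @ x0                                    # r_hat_0
    gamma = r @ r                                     # gamma_hat_1
    alpha = (r @ (A @ r)) / gamma                     # alpha_hat_1
    d_hat = alpha / gamma                             # d_hat_1
    dx = r / alpha                                    # d x_hat_1
    x = x0 + dx
    for k in range(1, m):
        r = r - A @ dx                                # r_hat_k
        gamma = r @ r                                 # gamma_hat_{k+1}
        alpha = (r @ (A @ r)) / gamma                 # alpha_hat_{k+1}
        d_hat_new = alpha / gamma - d_hat             # (3.17)
        dx = (d_hat / d_hat_new) * dx + r / (gamma * d_hat_new)   # (3.14)
        x = x + dx
        d_hat = d_hat_new
    return x
```

In the symmetric positive definite case this sketch reproduces the usual cg iterates, here organized around the pivots of the rational Cholesky decomposition.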

Remark. For the sake of clarity, the connection of the recursive scheme (3.14) with the usual cg-scheme is described. For this purpose, just combine (2.2), which defines the basis $\{p_j\}$ from the basis $\{v_j\}$, with the residual representation (2.18). By means of the simple scaling transformation

$ \bar p_k := -\hat b_{k-1}\sqrt{\gamma_k}\;p_k, $  (3.18)

one arrives at

$ \bar p_1 = \hat r_0, \qquad \bar p_{k+1} = \hat r_k + \frac{\hat\gamma_{k+1}}{\hat\gamma_k}\,\bar p_k, \qquad k = 1,\dots,n-1, $  (3.19)

which is the usual formula, see e.g. [6] or [3]. In the standard cg-scheme, the $\{d_k\}$ are computed via (2.4), whereas here formula (3.15) is employed.

In order to actually evaluate $\{\hat d_k\}$ via (3.15), define

$ S_k := \sum_{j=1}^{k} (-1)^{j-1}\,\hat\alpha_j/\hat\gamma_j =: S_k^+ - S_k^-, $  (3.20)

wherein $S_k^+$, $S_k^-$ represent all positive or all negative terms, respectively, in the sum, so that

$ \hat d_k = (-1)^{k-1}\bigl(S_k^+ - S_k^-\bigr), \qquad S_k^+, S_k^- > 0. $  (3.21)

The relative condition number of this summation is then

$ \kappa_k := \frac{S_k^+ + S_k^-}{|S_k^+ - S_k^-|}. $  (3.22)

Restart of the iteration (3.14) will therefore be necessary whenever the requirement

$ \varepsilon\,\kappa_k < 1 $  (3.23)

is violated, for some suitably chosen default value $\varepsilon$.
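The relative condition number (3.22) of the alternating sum can be monitored with a few lines (illustrative names; the choice of the threshold in (3.23) is left open, as in the text):

```python
def summation_condition(alpha_hat, gamma_hat, k):
    """Relative condition number (3.22) of the alternating sum (3.15)/(3.20):
        kappa_k = (S_k^+ + S_k^-) / |S_k^+ - S_k^-|,
    where S_k^+ and S_k^- collect the positive and negative terms of
    S_k = sum_{j<=k} (-1)^(j-1) alpha_hat_j/gamma_hat_j (one-based storage).
    A restart of the compact scheme is indicated once eps * kappa_k is no
    longer small, cf. (3.23)."""
    s_plus, s_minus = 0.0, 0.0
    for j in range(1, k + 1):
        term = (-1) ** (j - 1) * alpha_hat[j] / gamma_hat[j]
        if term >= 0.0:
            s_plus += term
        else:
            s_minus -= term
    return (s_plus + s_minus) / abs(s_plus - s_minus)
```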

3.3 Blended Iteration Schemes

The above considerations led to an alternative compact scheme for the cg-iteration, especially designed for the symmetric indefinite case. It can, however, also just be used to compute the necessary intermediate terms for any other iteration characterized by

$ \Delta x_k = \sum_{j=1}^{k}(\sigma_j + \theta_k\tau_j)\,v_j $

with $\theta_k \neq \lambda_k$. For the associated general iteration recall from (3.16) that

$ dx_{k+1} = (\sigma_{k+1} + \theta_k\tau_{k+1})\,v_{k+1} + \delta_{k+1}\,z_{k+1} $

with $\delta_{k+1} := \theta_{k+1} - \theta_k$. In the frame of Lemma 5 just observe that

$ v_{k+1} = -\frac{\hat r_k}{\hat b_k}, \qquad z_{k+1} = \frac{d\hat x_{k+1}}{\hat\delta_{k+1}}. $  (3.24)

Therefore, we must just replace the $\hat x_k$-update by an $x_k$-update of the form

$ x_{k+1} = x_k + \frac{\delta_{k+1}}{\hat\delta_{k+1}}\,d\hat x_{k+1} - \frac{\sigma_{k+1} + \theta_k\tau_{k+1}}{\hat b_k}\,\hat r_k. $  (3.25)

For computational realization, note that (1.24) is equivalent to

$ \tau_{k+1} = -\frac{\tau_k}{\hat d_k\,\hat\gamma_{k+1}} $  (3.26)

and (2.19) can be used to derive

$ \hat b_k = -\frac{\tau_{k+1}\hat\gamma_{k+1}}{\hat\gamma_1} = \frac{\tau_k}{\hat d_k\,\hat\gamma_1}. $  (3.27)

Moreover, $\hat\delta_{k+1}$ can be expressed by

$ \hat\delta_{k+1} = \frac{1}{\hat\gamma_1\,\hat d_{k+1}}. $  (3.28)

Finally, the factor before $\hat r_k$ should be evaluated in the form

$ \frac{\sigma_{k+1} + \theta_k\tau_{k+1}}{\hat b_k} = -(\theta_k - \lambda_k)\,\frac{\hat\gamma_1}{\hat\gamma_{k+1}}, $

which then assures that $x_k = \hat x_k$ arises also numerically for $\theta_k = \lambda_k$ and $\theta_{k+1} = \lambda_{k+1}$.

References

[1] R. Bank, T. Chan: A Composite Step Bi-Conjugate Gradient Method for Nonsymmetric Linear Systems. Lecture given at Oberwolfach (July 1992).
[2] L. Brand: Difference and Differential Equations. New York: Wiley (1986).
[3] P. Deuflhard, A. Hohmann: Numerische Mathematik. Eine algorithmisch orientierte Einführung. Kap. 6: Drei-Term-Rekursionen. Verlag de Gruyter, Berlin, New York (1991).
[4] R.W. Freund, M.H. Gutknecht, N.M. Nachtigal: An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices. Part I. Tech. Rep., RIACS, NASA Ames Research Center (1990).
[5] V.M. Fridman: The Method of Minimum Iteration with Minimum Errors for a System of Linear Algebraic Equations with a Symmetrical Matrix. Zh. Vych. Mat. 2, pp. 341-342 (1962).
[6] G.H. Golub, C.F. van Loan: Matrix Computations. The Johns Hopkins University Press (1989).
[7] C. Lanczos: Solution of systems of linear equations by minimized iterations. J. Res. NBS 49, pp. 33-53 (1952).
[8] C.C. Paige, M.A. Saunders: Solution of Sparse Indefinite Systems of Linear Equations. SIAM J. Numer. Anal. 12, pp. 617-629 (1975).
