Convergence of Inner-Iteration GMRES Methods for Least Squares Problems

ISSN 1346-5597

NII Technical Report

Convergence of Inner-Iteration GMRES Methods for Least Squares Problems

Keiichi Morikuni and Ken Hayami

NII-2012-005E Aug. 2012

CONVERGENCE OF INNER-ITERATION GMRES METHODS FOR LEAST SQUARES PROBLEMS

KEIICHI MORIKUNI AND KEN HAYAMI

Abstract. We develop a general convergence theory for the generalized minimal residual method for least squares problems preconditioned with inner iterations. The inner iterations are performed by stationary iterative methods. We also present theoretical justifications for using specific inner iterations such as the Jacobi and SOR-type methods. The theory is improved particularly in the rank-deficient case. We analyse the spectrum of the preconditioned coefficient matrix, and characterize it by the spectral radius of the iteration matrix for the inner iterations. The analysis is supported by numerical experiments.

Key words. least squares problems, iterative methods, preconditioner, inner-outer iteration, GMRES method, stationary iterative method, rank-deficient problem

AMS subject classifications. 65F08, 65F10, 65F20, 65F50

1. Introduction. Consider solving least squares problems

    min_{x∈R^n} ‖b − Ax‖_2,    (1.1)

where A ∈ R^{m×n} is not necessarily of full rank and b ∈ R^m is not necessarily in R(A), the range of A. The least squares problem (1.1) is equivalent to the normal equations

    A^T A x = A^T b.    (1.2)

In addition, applying B ∈ R^{n×m}, we may transform the problem (1.1) to equivalent problems [6, Theorems 3.1, 3.11].

Theorem 1.1. min_{x∈R^n} ‖b − Ax‖_2 = min_{z∈R^m} ‖b − ABz‖_2 holds for all b ∈ R^m if and only if R(AB) = R(A).

Theorem 1.2. min_{x∈R^n} ‖b − Ax‖_2 and min_{x∈R^n} ‖Bb − BAx‖_2 are equivalent for all b ∈ R^m if and only if R(B^T BA) = R(A).

Thus, the original problem (1.1) may be reduced to least squares problems with a square matrix AB or BA. Based on these transformations, the generalized minimal residual method (GMRES) [12] was extended to deal with least squares problems (1.1) in [6]. The right- and left-preconditioned GMRES for least squares problems were called AB- and BA-GMRES, respectively.
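The equivalence of (1.1) and (1.2) is easy to check numerically. The following is a minimal NumPy sketch, not the authors' code; the matrix sizes and random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n))   # full column rank with probability 1
b = rng.standard_normal(m)        # b need not lie in R(A)

# Solution of the least squares problem (1.1)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Solution of the normal equations (1.2): A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(np.allclose(x_ls, x_ne))   # the two formulations agree
```

For a full-column-rank A the normal equations have a unique solution, which coincides with the least squares solution; in the rank-deficient case (1.2) is still consistent but has infinitely many solutions.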
Sufficient conditions under which these methods determine a least squares solution without breakdown for arbitrary b, for overdetermined, underdetermined, and rank-deficient problems, were shown. In [9], these methods were preconditioned with several iterations of stationary iterative methods such as variants of the Jacobi overrelaxation (JOR) and successive overrelaxation (SOR) methods, which may be considered as inner iterations. The

This work was supported by the Grants-in-Aid for Scientific Research (C) of the Ministry of Education, Culture, Sports, Science and Technology, Japan.

Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies, Sokendai, Hitotsubashi, Chiyoda-ku, Tokyo, Japan (morikuni@nii.ac.jp).

National Institute of Informatics, and Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies, Sokendai, Hitotsubashi, Chiyoda-ku, Tokyo, Japan (hayami@nii.ac.jp).

Cimmino-NE and Cimmino-NR methods are mathematically equivalent to JOR applied to AA^T u = b with x = A^T u and to A^T A x = A^T b, respectively. The normal-error (NE-)SOR and normal-residual (NR-)SOR methods are mathematically equivalent to SOR applied to AA^T u = b with x = A^T u and to A^T A x = A^T b, respectively [9], [11], [1]. Krylov subspace methods preconditioned with inner iterations for solving linear systems of equations were described in [9] and references therein.

We assumed that A should be of full column rank for the convergence theory for BA-GMRES with the Cimmino-NR and NR-SOR inner iterations in [9], but numerical experiments in [9] showed that these methods actually converge also for rank-deficient problems. In this paper, we give theoretical justifications for the convergence also in the rank-deficient case.

The outline of the paper is as follows. In Section 2, we introduce AB- and BA-GMRES, give their new convergence theory, and analyse the spectrum of the preconditioned matrix. In Section 3, we give a framework of inner-iteration preconditioning for these methods, and main results on sufficient conditions in terms of the inner iterations for the convergence. In Section 4, we analyse the spectrum of the preconditioned matrix with inner iterations. In Section 5, we give the main conclusions of this paper.

Throughout this paper, we use bold letters for column vectors. e_j denotes the jth column of an identity matrix. We denote quantities related to the kth inner iteration with a superscript with brackets, e.g., x^(k), and for outer iterations with a subscript without brackets, e.g., x_k. (a, b) denotes the inner product a^T b between real vectors a and b.

2. GMRES methods for least squares problems. We first give an explanation about AB-GMRES and BA-GMRES. AB-GMRES applies GMRES to min_{u∈R^m} ‖b − ABu‖_2 with x = Bu, whereas BA-GMRES applies GMRES to min_{x∈R^n} ‖Bb − BAx‖_2.
Concerning the convergence of AB-GMRES and BA-GMRES, we have the following [6, Corollaries 3.8, 3.19].

Theorem 2.1. If R(B^T) = R(A) and R(B) = R(A^T), then AB-GMRES determines a least squares solution of min_{x∈R^n} ‖b − Ax‖_2 for all b ∈ R^m and all x_0 ∈ R^n without breakdown.

Here we say AB-GMRES breaks down at some step k if dim AB(K_k(AB, r_0)) < dim K_k(AB, r_0) or dim K_k(AB, r_0) < k, where K_k(AB, r_0) = span{r_0, ABr_0, ..., (AB)^{k−1} r_0} is the Krylov subspace of dimension k [2]. The breakdown causes a division by zero in the algorithm.

Theorem 2.2. If R(B^T) = R(A) and R(B) = R(A^T), then BA-GMRES determines a least squares solution of min_{x∈R^n} ‖b − Ax‖_2 for all b ∈ R^m and all x_0 ∈ R^n without breakdown.

We say BA-GMRES breaks down at some step k if dim BA(K_k(BA, Br_0)) < dim K_k(BA, Br_0) or dim K_k(BA, Br_0) < k [2].

Note that R(B) = R(A^T) gives R(AB) = R(A) (cf. Theorem 1.1), and R(B^T) = R(A) gives R(B^T BA) = R(A) (cf. Theorem 1.2).
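A simple matrix B satisfying both range conditions of Theorems 2.1 and 2.2 is B = A^T. The sketch below, our illustration rather than the paper's experiment, checks the statement of Theorem 1.2 for this choice on a rank-deficient example: the left-transformed problem min ‖Bb − BAx‖_2 has the same minimum-norm least squares solution as the original problem.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 9, 6, 4
# rank-deficient A = U V^T of rank r
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

B = A.T   # satisfies R(B) = R(A^T) and R(B^T) = R(A)

# minimum-norm least squares solutions of the original and transformed problems
x1 = np.linalg.lstsq(A, b, rcond=None)[0]
x2 = np.linalg.lstsq(B @ A, B @ b, rcond=None)[0]

print(np.allclose(x1, x2))
```

With B = A^T, the transformed system is just the normal equations, so this also illustrates why inner iterations on A^T Ax = A^T b (Section 3) are a natural source of preconditioners.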

On the other hand, the convergence of the standard GMRES method for least squares problems min_{x̃∈R^N} ‖b̃ − Ãx̃‖_2 with Ã ∈ R^{N×N} is given as follows [5, Theorem 2.8].

Theorem 2.3. GMRES determines a solution of min_{x̃∈R^N} ‖b̃ − Ãx̃‖_2 without breakdown for all b̃ ∈ R(Ã) and all x̃_0 ∈ R^N if and only if R(Ã) ∩ N(Ã) = {0}. Here, N(Ã) is the null space of Ã and x̃_0 is the initial approximate solution for GMRES.

Applying this theorem to the case of AB- and BA-GMRES, we obtain the following.

Theorem 2.4. Suppose that R(B) = R(A^T). Then, AB-GMRES determines a solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R(A) and x_0 ∈ R^n if and only if R(A) ∩ N(B) = {0}.

Proof. Substitute AB, u, and b into Ã, x̃, and b̃, respectively, in Theorem 2.3. R(B) = R(A^T) gives R(AB) = R(AA^T) = R(A) and N(AB) = R(B^T A^T)^⊥ = R(B^T B)^⊥ = R(B^T)^⊥ = N(B), where S^⊥ is the orthogonal complement of a subspace S. Theorem 1.1 completes the proof.

We remark that this theorem is restricted to the consistent case b ∈ R(A). A similar theorem holds for BA-GMRES.

Theorem 2.5. Suppose that R(B^T) = R(A). Then, BA-GMRES determines a solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R^m and x_0 ∈ R^n if and only if N(A) ∩ R(B) = {0}.

Proof. Substitute BA, x, and Bb into Ã, x̃, and b̃, respectively, in Theorem 2.3. R(B^T) = R(A) gives N(BA) = R(A^T B^T)^⊥ = R(A^T A)^⊥ = R(A^T)^⊥ = N(A) and R(BA) = R(BB^T) = R(B). Hence, "for all Bb ∈ R(BA) = R(B)" is equivalent to "for all b ∈ R^m". Therefore, Theorem 1.2 completes the proof.

In contrast to BA-GMRES, the condition b ∈ R(AB) = R(A) is required for AB-GMRES since the preconditioned system min_{u∈R^m} ‖b − ABu‖_2 is inconsistent for b ∉ R(AB).

Since the BA-GMRES algorithm will be treated in Section 3, it is given in the following. For the AB-GMRES algorithm, see [6].

Algorithm 2.6. BA-GMRES method.
1. Let x_0 be the initial approximate solution.
2. r_0 = b − Ax_0, r̃_0 = Br_0, β = ‖r̃_0‖_2, v_1 = r̃_0/β
3. For k = 1, 2, ... until convergence, Do
4.   w_k = BAv_k
5.   For i = 1, 2, ..., k, Do
6.     h_{i,k} = (w_k, v_i), w_k = w_k − h_{i,k} v_i
7.   EndDo
8.   h_{k+1,k} = ‖w_k‖_2, v_{k+1} = w_k/h_{k+1,k}
9. EndDo
10. y_k = arg min_{y∈R^k} ‖βe_1 − H̄_k y‖_2, x_k = x_0 + [v_1, v_2, ..., v_k] y_k

Here, H̄_k = {h_{i,j}} ∈ R^{(k+1)×k}.

2.1. Spectrum of the preconditioned matrix. We describe how BA-GMRES depends on the spectrum of the preconditioned matrix. Assume R(B^T) = R(A). Then, Bb ∈ R(BA) = R(B) holds, and min_{x∈R^n} ‖Bb − BAx‖_2 is equivalent to BAx = Bb. Let r = rank A, Q_1 ∈ R^{n×r} such that R(Q_1) = R(BA), and Q_2 ∈ R^{n×(n−r)}

such that R(Q_2) = R(BA)^⊥, and Q = [Q_1, Q_2], where the columns of Q are orthonormal. Then, GMRES applied to BAx = Bb is equivalent to GMRES applied to

    [A_11  A_12] [x_1]   [b_1]
    [  0     0 ] [x_2] = [ 0 ],

where A_11 = Q_1^T(BA)Q_1 ∈ R^{r×r}, A_12 = Q_1^T(BA)Q_2 ∈ R^{r×(n−r)}, x_1 = Q_1^T x, x_2 = Q_2^T x, b_1 = Q_1^T Bb, and b_2 = Q_2^T Bb = 0 since Bb ∈ R(BA).

As shown in [5], if x_0 ∈ R(BA) = R(B), then the R(BA) component of GMRES applied to BAx = Bb is equivalent to GMRES applied to A_11 x_1 = b_1. On the other hand, in the R(BA)^⊥ component, x_2 remains equal to its initial value for all iterates x_k. Now, note the following.

Theorem 2.7. A_11 is nonsingular if and only if R(BA) ∩ N(BA) = {0}.

Proof. See [5, Theorem 2.3].

Theorem 2.8. Assume R(BA) ∩ N(BA) = {0}. Then, λ ≠ 0 is an eigenvalue of BA if and only if λ is an eigenvalue of A_11.

Proof. Let Q = [Q_1, Q_2] ∈ R^{n×n} be as given above. Then,

    BAu = λu, u ≠ 0
    ⟺ Q^T(BA)QQ^T u = λQ^T u, u ≠ 0
    ⟺ [A_11  A_12; 0  0][u_1; u_2] = λ[u_1; u_2], u = [u_1; u_2] ≠ 0
    ⟺ A_11 u_1 + A_12 u_2 = λu_1, 0 = λu_2, u = [u_1; u_2] ≠ 0.

Hence, if λ ≠ 0 is an eigenvalue of BA, then we have u_2 = 0, A_11 u_1 = λu_1, and u_1 ≠ 0, so that λ is an eigenvalue of A_11. On the other hand, if λ ≠ 0 is an eigenvalue of A_11, then by setting u_2 = 0, we can show that λ is an eigenvalue of BA.

Assume R(BA) ∩ N(BA) = {0}, equivalently R(B) ∩ N(A) = {0}. Then, A_11 is nonsingular, and its eigenvalues are all nonzero and correspond to the nonzero eigenvalues of BA. Therefore, the convergence behavior of GMRES applied to BAx = Bb may be explained by the (nonzero) eigenvalues of A_11 (or BA).

3. BA-GMRES preconditioned by stationary iterative methods as inner iterations. Instead of applying B explicitly as in Algorithm 2.6, consider using inner iterations as follows [9].

Algorithm 3.1. BA-GMRES method with the inner-iteration preconditioning.
1. Let x_0 be the initial approximate solution.
2. r_0 = b − Ax_0
3. Apply l steps of a stationary iterative method to A^T Az = A^T r_0 to obtain z^(l) = B^(l) r_0.
4. β = ‖z^(l)‖_2, v_1 = z^(l)/β
5. For k = 1, 2, ...
until convergence, Do
6.   u_k = Av_k
7.   Apply l steps of a stationary iterative method to A^T Ay = A^T u_k to obtain z_k^(l) = B^(l) u_k.
8.   For i = 1, 2, ..., k, Do
9.     h_{i,k} = (z_k^(l), v_i), z_k^(l) = z_k^(l) − h_{i,k} v_i

10.   EndDo
11.   h_{k+1,k} = ‖z_k^(l)‖_2, v_{k+1} = z_k^(l)/h_{k+1,k}
12. EndDo
13. y_k = arg min_{y∈R^k} ‖βe_1 − H̄_k y‖_2, x_k = x_0 + [v_1, v_2, ..., v_k] y_k

Here, B^(l) denotes the preconditioning matrix for l inner iterations. Similarly, we can obtain AB-GMRES with stationary inner iterations.

In lines 3 and 7 in Algorithm 3.1, stationary iterative methods are applied to the normal equations. We now introduce a stationary iterative method for the normal equations A^T Az = A^T c. Consider the splitting A^T A = M − N, where M is nonsingular. Then, consider a class of iterative methods of the form

    z^(l) = M^{-1} N z^(l−1) + M^{-1} A^T c.

Let H = M^{-1} N = I − M^{-1} A^T A be the iteration matrix. In practice, there is no need to form A^T A, M^{-1}, and N explicitly, as will be seen in the Cimmino-NR and NR-SOR methods [11] in Section 3.1.

Here, we define the following, e.g., [8].

Definition 3.2. A matrix C is called semi-convergent if lim_{i→∞} C^i exists.

Hensel [7], Oldenburger [10], and Tanabe [13] showed the following.

Theorem 3.3. The following are equivalent.
1. C is semi-convergent.
2. For any eigenvalue λ of C, either (a) |λ| < 1 or (b) λ = 1 and index(I − C) = 1 holds.

Here, index(C) denotes the smallest nonnegative integer i such that R(C^i) = R(C^{i+1}). Thus, index(C) is equal to the size of the largest Jordan block corresponding to the zero eigenvalue of C.

3.1. Convergence theory. We first give an explicit expression for the preconditioned matrix B^(l)A for BA-GMRES with l inner iterations. Assume that the initial approximate solution for the inner iteration is z^(0) = 0. Then, the lth iterate for the inner iteration is

    z^(l) = Hz^(l−1) + M^{-1} A^T c = Σ_{j=0}^{l−1} H^j M^{-1} A^T c.    (3.1)

Hence, if we define the preconditioning matrix by

    B^(l) = Σ_{j=0}^{l−1} H^j M^{-1} A^T,    (3.2)

we have z^(l) = B^(l) c. Let C^(l) = Σ_{j=0}^{l−1} H^j M^{-1}. Then, B^(l) = C^(l) A^T. Hence, the preconditioned matrix is expressed as B^(l)A = C^(l) A^T A. We obtain the following.

Lemma 3.4.
Let A ∈ R^{m×n}, A^T A = M − N, where M is nonsingular, H = M^{-1}N, and B^(l) be given by (3.2). Assume that H is semi-convergent. Then, index(B^(l)A) ≤ 1 for all l ≥ 1.

Proof. Let J = S^{-1}(I − H)S be the Jordan canonical form of I − H. Assume that H is semi-convergent. Then, from Theorem 3.3, index(I − H) = index(J) ≤ 1. Without loss of generality, we denote J as follows:

    J = [Ĵ  0; 0  0_{n−r}] ∈ C^{n×n},  Ĵ = diag(J_1, J_2, ..., J_{s−1}) ∈ C^{r×r},  J_s = 0_{n−r},    (3.3)

    J_i = diag(J_{i1}, J_{i2}, ..., J_{is_i}) ∈ C^{n_i×n_i}  (i = 1, 2, ..., s−1),    (3.4)

    J_{ij} = [λ_i  1;  λ_i  ⋱;  ⋱  1;  λ_i] ∈ C^{n_{ij}×n_{ij}},  Σ_{j=1}^{s_i} n_{ij} = n_i,  Σ_{i=1}^{s−1} n_i = r,    (3.5)

where r = rank A, 0_{n−r} is the zero matrix of order n − r, Ĵ has no eigenvalues equal to zero, s is the number of distinct eigenvalues of I − H, and λ_i is a nonzero eigenvalue of I − H. Since Ĵ is nonsingular,

    B^(l)A = S [Σ_{j=0}^{l−1} (I − J)^j] J S^{-1} = S [I_r − (I_r − Ĵ)^l  0; 0  0_{n−r}] S^{-1}.

Since λ_i = 1 − ν(H), where ν(H) is an eigenvalue of H with |ν(H)| < 1, we have |1 − λ_i| = |ν(H)| < 1. Each eigenvalue of I_r − (I_r − Ĵ)^l has the form μ_i = 1 − (1 − λ_i)^l. If μ_i = 0, then (1 − λ_i)^l = 1, which contradicts |1 − λ_i| < 1. Hence, I_r − (I_r − Ĵ)^l is nonsingular. Therefore, index(B^(l)A) ≤ 1 for all l ≥ 1.

Lemma 3.5. Using the notations and the assumption of Lemma 3.4, R(B^(l)T) = R(A) holds for all l ≥ 1.

Proof. If C^(l) is nonsingular, then R(B^(l)T) = R(A C^(l)T) = R(A). Hence, we show that C^(l) is nonsingular. Assume that H is semi-convergent. Then, we have

    C^(l) = Σ_{j=0}^{l−1} (I − SJS^{-1})^j M^{-1} = S [Ĵ^{-1}(I_r − (I_r − Ĵ)^l)  0; 0  l I_{n−r}] S^{-1} M^{-1}.

As in Lemma 3.4, I_r − (I_r − Ĵ)^l is nonsingular. Hence, C^(l) is nonsingular for all l ≥ 1. Therefore, we have R(B^(l)T) = R(A) for all l ≥ 1.

Hence, we obtain the main result.

Theorem 3.6. Assume that H is semi-convergent. Then, BA-GMRES with the inner-iteration preconditioning of the form (3.1) determines a least squares solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R^m and all x_0 ∈ R^n.

Proof. Assume that H is semi-convergent. Then, from Lemma 3.4, we have index(B^(l)A) ≤ 1, or equivalently R(B^(l)A) ∩ N(B^(l)A) = {0}.
Moreover, since R(B^(l)T) = R(A) from Lemma 3.5, we have R(B^(l)A) = R(B^(l)B^(l)T) = R(B^(l)) and N(B^(l)A) = R(A^T B^(l)T)^⊥ = R(A^T A)^⊥ = R(A^T)^⊥ = N(A). Hence, Theorem 2.5 completes the proof.
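The matrix B^(l) of (3.2) can be formed explicitly for a small rank-deficient example, which makes the telescoping identity B^(l)A = Σ_{j=0}^{l−1} H^j (I − H) = I − H^l easy to observe. The following NumPy sketch is our illustration under stated assumptions: it uses the Cimmino-NR splitting M = D/ω, and the choice ω = 1/ρ(D^{-1/2}A^T A D^{-1/2}) is just one admissible value in the interval given by Dax's condition below.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 10, 7, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r < n

AtA = A.T @ A
d = np.diag(AtA)                       # D = diag(A^T A); A has no zero columns
Dh = np.diag(1.0 / np.sqrt(d))
omega = 1.0 / np.max(np.linalg.eigvalsh(Dh @ AtA @ Dh))  # inside (0, 2/rho)
M = np.diag(d) / omega                 # Cimmino-NR splitting A^T A = M - N
H = np.eye(n) - np.linalg.solve(M, AtA)  # iteration matrix H = I - M^{-1} A^T A

l = 4
# B^(l) = sum_{j=0}^{l-1} H^j M^{-1} A^T, as in (3.2)
Bl = sum(np.linalg.matrix_power(H, j) for j in range(l)) @ np.linalg.solve(M, A.T)
BlA = Bl @ A

# index(B^(l) A) <= 1 iff rank(B^(l) A) = rank((B^(l) A)^2)
print(np.linalg.matrix_rank(BlA), np.linalg.matrix_rank(BlA @ BlA))
```

Since M^{-1}A^T A = I − H, the sum telescopes, so `BlA` equals I − H^l up to rounding; this identity is the starting point of the spectral analysis in Section 4.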

We remark that this theorem holds whether A is of full rank or rank-deficient, and whether A is overdetermined or underdetermined, i.e., unconditionally with respect to A.

As for the convergence of AB-GMRES with inner iterations, it follows from Theorem 2.4 that a similar convergence theorem holds, which may be shown via lemmas similar to Lemmas 3.4 and 3.5.

Now, we consider applying Theorem 3.6 for BA-GMRES preconditioned with specific inner-iteration methods as follows. The inner-iteration preconditioning matrices for Cimmino-NR, NR-SOR, and NR-SSOR [9] are respectively obtained from (3.2) by setting

    M = ω^{-1} D                                        : Cimmino-NR,
    M = ω^{-1} (D + ωL)                                 : NR-SOR,
    M = ω^{-1} (2 − ω)^{-1} (D + ωL) D^{-1} (D + ωL^T)  : NR-SSOR,

where A^T A = L + D + L^T, L is a strictly lower triangular matrix, D is a diagonal matrix, and ω is the relaxation parameter. The following are the algorithms for these methods for inner iterations.

Algorithm 3.7. Cimmino-NR method.
1. Let z^(0) = 0 and r^(0) = c.
2. For k = 0, 1, ..., l − 1, Do
3.   d^(k) = D^{-1} A^T r^(k)
4.   z^(k+1) = z^(k) + ωd^(k)
5.   r^(k+1) = r^(k) − ωAd^(k)
6. EndDo

Algorithm 3.8. NR-SOR method.
1. Let z^(0) = 0 and r = c.
2. For k = 0, 1, ..., l − 1, Do
3.   For j = 1, 2, ..., n, Do
4.     d_j^(k) = (r, a_j)/‖a_j‖_2^2
5.     z_j^(k+1) = z_j^(k) + ωd_j^(k)
6.     r = r − ωd_j^(k) a_j
7.   EndDo
8. EndDo

Algorithm 3.9. NR-SSOR method.
1. Let z^(0) = 0 and r = c.
2. For k = 0, 1, ..., l − 1, Do
3.   For j = 1, 2, ..., n, Do
4.     d_j^(k) = (r, a_j)/‖a_j‖_2^2
5.     z_j^(k+1/2) = z_j^(k) + ωd_j^(k)
6.     r = r − ωd_j^(k) a_j
7.   EndDo
8.   For j = n, n − 1, ..., 1, Do
9.     d_j^(k+1/2) = (r, a_j)/‖a_j‖_2^2
10.    z_j^(k+1) = z_j^(k+1/2) + ωd_j^(k+1/2)
11.    r = r − ωd_j^(k+1/2) a_j
12.  EndDo
13. EndDo

Here, a_j is the jth column of A. According to Dax [4], the iteration matrix H is semi-convergent for Cimmino-NR with 0 < ω < 2/ρ(D^{-1/2} A^T A D^{-1/2}), for NR-SOR with 0 < ω < 2, and for NR-SSOR with 0 < ω < 2, where ρ(C) is the spectral radius of C. Here, we assume A has no zero columns. Hence, from Theorem 3.6, these methods can serve as the inner iterations for BA-GMRES.

Theorem 3.10.
Assume that the relaxation parameter for Cimmino-NR satisfies 0 < ω < 2/ρ(D^{-1/2} A^T A D^{-1/2}). Then, BA-GMRES with the Cimmino-NR inner-iteration preconditioning determines a solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R^m and all x_0 ∈ R^n.

Theorem 3.11. Assume that the relaxation parameter for NR-SOR satisfies 0 < ω < 2. Then, BA-GMRES with the NR-SOR inner-iteration preconditioning determines a solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R^m and all

x_0 ∈ R^n.

Theorem 3.12. Assume that the relaxation parameter for NR-SSOR satisfies 0 < ω < 2. Then, BA-GMRES with the NR-SSOR inner-iteration preconditioning determines a solution of min_{x∈R^n} ‖b − Ax‖_2 without breakdown for all b ∈ R^m and all x_0 ∈ R^n.

We omit similar convergence theorems and their proofs for AB-GMRES preconditioned with the Cimmino-NE, NE-SOR, and NE-SSOR inner iterations [9].

4. Spectrum of the preconditioned matrix with inner iterations. Next, we analyse the spectrum of the preconditioned matrix for BA-GMRES with l inner iterations. The preconditioned matrix may be expressed as B^(l)A = I − H^l. Let ν be an eigenvalue of H. Then, there exists v ≠ 0 such that Hv = νv, or (I − H^l)v = (1 − ν^l)v. Hence, B^(l)A has an eigenvalue μ = 1 − ν^l.

Assume that H is semi-convergent. Let r = rank A. Then, from Theorem 3.3, H has r eigenvalues such that |ν| < 1 and n − r eigenvalues such that ν = 1. For ν = 1, we have μ = 0. For |ν| < 1, we obtain |μ − 1| = |ν|^l ≤ ρ̃(H)^l < 1, where ρ̃(H) denotes the largest modulus of the eigenvalues of H different from 1. This means that r eigenvalues of B^(l)A lie inside the circle of radius ρ̃(H)^l with center at 1, and these eigenvalues approach 1 as l increases. The remaining n − r eigenvalues are zero.

We demonstrate the above observation for a test matrix called Maragal_3 [3] of size 1,690 × 860 with 18,391 nonzero elements (nonzero density 1.27 %), and rank 613. Figure 4.1 shows the spectrum of the preconditioned matrix B^(l)A with the NR-SOR inner iterations for l = 1, 2, 4, and 8. The relaxation parameter was set to ω = 1. The computations were done using Matlab 2011b. As the number of inner iterations l increased, the eigenvalues of B^(l)A approached 1.

From the discussion in Section 2.1, we see that if H is semi-convergent, then the convergence of BA-GMRES depends on the eigenvalues of H not equal to 1, but not on the eigenvalues equal to 1.

5. Conclusions. We considered applying inner-iteration preconditioning to GMRES methods for least squares problems and gave a general convergence theory for the methods.
Theoretical justifications for the convergence were given also for specific inner-iteration methods like NR-SOR. We have reinforced the previous theory particularly for the rank-deficient case. The spectrum of the preconditioned matrix was analysed and characterized using the spectral radius of the iteration matrix for the inner iterations. Numerical experiments were done to examine the analysis.
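The eigenvalue clustering described in Section 4 can be reproduced in a few lines. The sketch below is ours, not the paper's MATLAB experiment: it uses the Cimmino-NR splitting rather than NR-SOR for brevity, a small random rank-deficient matrix rather than Maragal_3, and an admissible ω chosen from Dax's interval. It checks that every nonzero eigenvalue μ = 1 − ν^l of B^(l)A lies within distance γ^l of 1, where γ is the largest modulus among the eigenvalues of H different from 1.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 12, 8, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-deficient

AtA = A.T @ A
d = np.diag(AtA)
Dh = np.diag(1.0 / np.sqrt(d))
omega = 1.0 / np.max(np.linalg.eigvalsh(Dh @ AtA @ Dh))  # admissible omega
H = np.eye(n) - omega * (AtA / d[:, None])  # Cimmino-NR: H = I - omega D^{-1} A^T A

nu = np.linalg.eigvals(H)
core = nu[np.abs(nu - 1) > 1e-8]   # eigenvalues of H with |nu| < 1
gamma = np.max(np.abs(core))       # radius governing the clustering

for l in (1, 2, 4, 8):
    mu = 1 - core**l               # nonzero eigenvalues of B^(l) A = I - H^l
    print(l, np.max(np.abs(mu - 1)) <= gamma**l + 1e-12)
```

Doubling l squares the clustering radius γ^l, which is consistent with the tightening spectra visible in Figure 4.1 as l goes from 1 to 8.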

Fig. 4.1. Spectrum of the preconditioned matrix B^(l)A with NR-SOR inner iterations for Maragal_3, plotted as Im[μ(B^(l)A)] versus Re[μ(B^(l)A)]. Upper left: l = 1, upper right: l = 2, lower left: l = 4, lower right: l = 8. [Plot data not recoverable from the text.]

REFERENCES

[1] Å. Björck and T. Elfving, Accelerated projection methods for computing pseudoinverse solutions of systems of linear equations, BIT, 19 (1979).
[2] P. N. Brown and H. F. Walker, GMRES on (nearly) singular systems, SIAM J. Matrix Anal. Appl., 18 (1997).
[3] T. Davis, The University of Florida Sparse Matrix Collection, available online at The University of Florida.
[4] A. Dax, The convergence of linear stationary iterative processes for solving singular unstructured systems of linear equations, SIAM Review, 32 (1990).
[5] K. Hayami and M. Sugihara, A geometric view of Krylov subspace methods on singular systems, Numer. Linear Algebra Appl., 18 (2011).
[6] K. Hayami, J.-F. Yin, and T. Ito, GMRES methods for least squares problems, SIAM J. Matrix Anal. Appl., 31 (2010).
[7] K. Hensel, Über Potenzreihen von Matrizen, J. Reine Angew. Math., 155 (1926) (in German).
[8] C. D. Meyer and R. J. Plemmons, Convergent powers of a matrix with applications to iterative methods for singular linear systems, SIAM J. Numer. Anal., 14 (1977).
[9] K. Morikuni and K. Hayami, Inner-iteration Krylov subspace methods for least squares problems, NII Technical Report, NII-2011-001E (2011) (revised, submitted).
[10] R. Oldenburger, Infinite powers of matrices and characteristic roots, Duke Math. J., 6 (1940).
[11] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, 2003.
[12] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat.
Comput., 7 (1986).
[13] K. Tanabe, Characterization of linear stationary iterative processes for solving a singular system of linear equations, Numer. Math., 22 (1974).


More information

Weaker hypotheses for the general projection algorithm with corrections

Weaker hypotheses for the general projection algorithm with corrections DOI: 10.1515/auom-2015-0043 An. Şt. Univ. Ovidius Constanţa Vol. 23(3),2015, 9 16 Weaker hypotheses for the general projection algorithm with corrections Alexru Bobe, Aurelian Nicola, Constantin Popa Abstract

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

On the Hermitian solutions of the

On the Hermitian solutions of the Journal of Applied Mathematics & Bioinformatics vol.1 no.2 2011 109-129 ISSN: 1792-7625 (print) 1792-8850 (online) International Scientific Press 2011 On the Hermitian solutions of the matrix equation

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

4.6 Iterative Solvers for Linear Systems

4.6 Iterative Solvers for Linear Systems 4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices

Chapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU

OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative methods ffl Krylov subspace methods ffl Preconditioning techniques: Iterative methods ILU Preconditioning Techniques for Solving Large Sparse Linear Systems Arnold Reusken Institut für Geometrie und Praktische Mathematik RWTH-Aachen OUTLINE ffl CFD: elliptic pde's! Ax = b ffl Basic iterative

More information

A NOTE ON THE JORDAN CANONICAL FORM

A NOTE ON THE JORDAN CANONICAL FORM A NOTE ON THE JORDAN CANONICAL FORM H. Azad Department of Mathematics and Statistics King Fahd University of Petroleum & Minerals Dhahran, Saudi Arabia hassanaz@kfupm.edu.sa Abstract A proof of the Jordan

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Splitting Iteration Methods for Positive Definite Linear Systems

Splitting Iteration Methods for Positive Definite Linear Systems Splitting Iteration Methods for Positive Definite Linear Systems Zhong-Zhi Bai a State Key Lab. of Sci./Engrg. Computing Inst. of Comput. Math. & Sci./Engrg. Computing Academy of Mathematics and System

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Comparison results between Jacobi and other iterative methods

Comparison results between Jacobi and other iterative methods Journal of Computational and Applied Mathematics 169 (2004) 45 51 www.elsevier.com/locate/cam Comparison results between Jacobi and other iterative methods Zhuan-De Wang, Ting-Zhu Huang Faculty of Applied

More information

Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes

Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov Processes Elena Virnik, TU BERLIN Algebraic Multigrid Preconditioners for Computing Stationary Distributions of Markov

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

EXAMPLES OF CLASSICAL ITERATIVE METHODS

EXAMPLES OF CLASSICAL ITERATIVE METHODS EXAMPLES OF CLASSICAL ITERATIVE METHODS In these lecture notes we revisit a few classical fixpoint iterations for the solution of the linear systems of equations. We focus on the algebraic and algorithmic

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 26, No. 4, pp. 1001 1021 c 2005 Society for Industrial and Applied Mathematics BREAKDOWN-FREE GMRES FOR SINGULAR SYSTEMS LOTHAR REICHEL AND QIANG YE Abstract. GMRES is a

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER

SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER SOLVING ILL-POSED LINEAR SYSTEMS WITH GMRES AND A SINGULAR PRECONDITIONER LARS ELDÉN AND VALERIA SIMONCINI Abstract. Almost singular linear systems arise in discrete ill-posed problems. Either because

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS

MATHEMATICAL ENGINEERING TECHNICAL REPORTS MATHEMATICAL ENGINEERING TECHNICAL REPORTS Combinatorial Relaxation Algorithm for the Entire Sequence of the Maximum Degree of Minors in Mixed Polynomial Matrices Shun SATO (Communicated by Taayasu MATSUO)

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS REVISTA DE LA UNIÓN MATEMÁTICA ARGENTINA Vol. 53, No. 1, 2012, 61 70 SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS JIAN-LEI LI AND TING-ZHU HUANG Abstract. Recently,

More information

PRODUCT OF OPERATORS AND NUMERICAL RANGE

PRODUCT OF OPERATORS AND NUMERICAL RANGE PRODUCT OF OPERATORS AND NUMERICAL RANGE MAO-TING CHIEN 1, HWA-LONG GAU 2, CHI-KWONG LI 3, MING-CHENG TSAI 4, KUO-ZHONG WANG 5 Abstract. We show that a bounded linear operator A B(H) is a multiple of a

More information

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS

ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS BIT Numerical Mathematics 6-3835/3/431-1 $16. 23, Vol. 43, No. 1, pp. 1 18 c Kluwer Academic Publishers ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS T. K. JENSEN and P. C. HANSEN Informatics

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS

ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS YOUSEF SAAD University of Minnesota PWS PUBLISHING COMPANY I(T)P An International Thomson Publishing Company BOSTON ALBANY BONN CINCINNATI DETROIT LONDON MADRID

More information

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS

10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 10.6 ITERATIVE METHODS FOR DISCRETIZED LINEAR EQUATIONS 769 EXERCISES 10.5.1 Use Taylor expansion (Theorem 10.1.2) to give a proof of Theorem 10.5.3. 10.5.2 Give an alternative to Theorem 10.5.3 when F

More information

Goal: to construct some general-purpose algorithms for solving systems of linear Equations

Goal: to construct some general-purpose algorithms for solving systems of linear Equations Chapter IV Solving Systems of Linear Equations Goal: to construct some general-purpose algorithms for solving systems of linear Equations 4.6 Solution of Equations by Iterative Methods 4.6 Solution of

More information

A generalization of the Gauss-Seidel iteration method for solving absolute value equations

A generalization of the Gauss-Seidel iteration method for solving absolute value equations A generalization of the Gauss-Seidel iteration method for solving absolute value equations Vahid Edalatpour, Davod Hezari and Davod Khojasteh Salkuyeh Faculty of Mathematical Sciences, University of Guilan,

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

arxiv: v1 [math.na] 26 Dec 2013

arxiv: v1 [math.na] 26 Dec 2013 General constraint preconditioning iteration method for singular saddle-point problems Ai-Li Yang a,, Guo-Feng Zhang a, Yu-Jiang Wu a,b a School of Mathematics and Statistics, Lanzhou University, Lanzhou

More information

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems

The upper Jacobi and upper Gauss Seidel type iterative methods for preconditioned linear systems Applied Mathematics Letters 19 (2006) 1029 1036 wwwelseviercom/locate/aml The upper Jacobi upper Gauss Seidel type iterative methods for preconditioned linear systems Zhuan-De Wang, Ting-Zhu Huang School

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

Residual iterative schemes for largescale linear systems

Residual iterative schemes for largescale linear systems Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

Weak Monotonicity of Interval Matrices

Weak Monotonicity of Interval Matrices Electronic Journal of Linear Algebra Volume 25 Volume 25 (2012) Article 9 2012 Weak Monotonicity of Interval Matrices Agarwal N. Sushama K. Premakumari K. C. Sivakumar kcskumar@iitm.ac.in Follow this and

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Numerical Linear Algebra And Its Applications

Numerical Linear Algebra And Its Applications Numerical Linear Algebra And Its Applications Xiao-Qing JIN 1 Yi-Min WEI 2 August 29, 2008 1 Department of Mathematics, University of Macau, Macau, P. R. China. 2 Department of Mathematics, Fudan University,

More information

FEM and Sparse Linear System Solving

FEM and Sparse Linear System Solving FEM & sparse system solving, Lecture 7, Nov 3, 2017 1/46 Lecture 7, Nov 3, 2015: Introduction to Iterative Solvers: Stationary Methods http://people.inf.ethz.ch/arbenz/fem16 Peter Arbenz Computer Science

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 19: More on Arnoldi Iteration; Lanczos Iteration Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 17 Outline 1

More information

Dragan S. Djordjević. 1. Introduction

Dragan S. Djordjević. 1. Introduction UNIFIED APPROACH TO THE REVERSE ORDER RULE FOR GENERALIZED INVERSES Dragan S Djordjević Abstract In this paper we consider the reverse order rule of the form (AB) (2) KL = B(2) TS A(2) MN for outer generalized

More information

Pseudoinverse & Orthogonal Projection Operators

Pseudoinverse & Orthogonal Projection Operators Pseudoinverse & Orthogonal Projection Operators ECE 174 Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 48 The Four

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Zhong-Zhi Bai State Key Laboratory of Scientific/Engineering Computing Institute of Computational Mathematics

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

A note on the unique solution of linear complementarity problem

A note on the unique solution of linear complementarity problem COMPUTATIONAL SCIENCE SHORT COMMUNICATION A note on the unique solution of linear complementarity problem Cui-Xia Li 1 and Shi-Liang Wu 1 * Received: 13 June 2016 Accepted: 14 November 2016 First Published:

More information