CHAPTER 5. Basic Iterative Methods
Solve Ax = f, where A is large and sparse (and nonsingular). Let A be split as A = M - N, in which M is nonsingular and solving systems of the form Mz = r is much easier than solving Ax = f.

Iteration:
    M x_{k+1} = N x_k + f,   x_0 := arbitrary,
i.e.,
    x_{k+1} = M^{-1} N x_k + M^{-1} f = H x_k + g.

Convergence: the exact solution satisfies x = Hx + g. Then if δx_k = x_k - x, we have δx_{k+1} = H δx_k, i.e.,
    δx_1 = H δx_0,   δx_2 = H δx_1 = H^2 δx_0, ...,
or
    δx_k = H^k δx_0,
so
    ‖δx_k‖ = ‖H^k δx_0‖ ≤ ‖H^k‖ ‖δx_0‖ ≤ ‖H‖^k ‖δx_0‖.
Thus, a sufficient condition for convergence (lim_{k→∞} H^k = 0) is ‖H‖ < 1.

Necessary condition: let x_0 = 0; then
    x_{k+1} = (I + H + H^2 + ⋯ + H^k) g,
and lim_{k→∞} (Σ_{j=0}^{k} H^j) converges iff ρ(H) < 1. In this case x_k → (I - H)^{-1} g.

Going back to ‖δx_k‖ ≤ ‖H‖^k ‖δx_0‖: since ρ(H) ≤ ‖H‖ ≤ ρ(H) + ε for a suitably chosen norm (any ε > 0), we have
    ‖δx_k‖ ≤ [ρ(H) + ε]^k ‖δx_0‖.
If δx_0 is an eigenvector of H corresponding to the eigenvalue of maximum modulus, i.e., H δx_0 = µ δx_0 with ρ(H) = |µ|, then
    δx_k = H^k δx_0 = µ^k δx_0   and   ‖δx_k‖ = ρ^k(H) ‖δx_0‖.

Now, to reduce the error by a factor of 10^{-m}, m > 0, we need
    ρ^k(H) ≤ 10^{-m},   or   10^m ≤ [1/ρ(H)]^k,
i.e.,
    m ≤ k log_{10}[1/ρ(H)] = k R,
where R is called the asymptotic rate of convergence of the iterative scheme, and
    k ≥ m/R = m / (-log_{10} ρ(H))
is the number of iterations needed to reduce the error by a factor of 10^{-m}.

Example: Let ρ(H) = 0.9 and m = 6; then k ≥ 6 / log_{10}(1/0.9) ≈ 131 iterations. However, if ρ(H) = 0.1, then k ≥ 6 / log_{10}(10) = 6 iterations.

Algorithm: x_{k+1} = M^{-1} N x_k + M^{-1} f; but A = M - N, or N = M - A, so
    x_{k+1} = M^{-1}(M - A) x_k + M^{-1} f = x_k + M^{-1}(f - A x_k) = x_k + M^{-1} r_k.

Summary:
    x := arbitrary
    r := f - A x
    while ‖r‖ ≥ tolerance
        solve M d = r
        x := x + d
        r := f - A x
    end
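As a concrete illustration of the loop above, the following Python/NumPy sketch implements the generic splitting iteration; the test matrix, right-hand side, and tolerance are arbitrary illustrative choices, and M is taken as the diagonal of A (the Jacobi splitting).

import numpy as np

def splitting_iteration(A, f, solve_M, tol=1e-8, max_iter=500):
    # Stationary iteration for A x = f with the splitting A = M - N.
    # solve_M(r) must return the solution d of M d = r.
    x = np.zeros_like(f, dtype=float)     # x := arbitrary (here x_0 = 0)
    r = f - A @ x                         # r := f - A x
    r0 = np.linalg.norm(r)
    k = 0
    while np.linalg.norm(r) > tol * r0 and k < max_iter:
        d = solve_M(r)                    # solve M d = r
        x = x + d                         # x := x + d
        r = f - A @ x                     # r := f - A x
        k += 1
    return x, k

# Jacobi splitting (M = diag(A)) on a small illustrative system.
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
f = np.array([3., 2., 3.])
x, its = splitting_iteration(A, f, solve_M=lambda r: r / np.diag(A))
print(its, x, np.linalg.norm(f - A @ x))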
Example: Solve Ax = f with
    A = [  4 -1  0
          -1  4 -1
           0 -1  4 ],
using the Jacobi splitting A = M - N with M = D = 4I, N = M - A, and x_0^T = (0, 0, 0). Then
    x_{k+1} = H x_k + g,   H = M^{-1} N,   g = M^{-1} f = f/4,
and the eigenvalues of H are
    λ_k(H) = (2/4) cos(kπ/4),   k = 1, 2, 3,
i.e.,
    λ(H) := √2/4, 0, -√2/4,
so
    ρ(H) = √2/4 ≈ 0.354   and   R = -log_{10}(√2/4) ≈ 0.45.
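The eigenvalues, spectral radius, and rate quoted in the example above are easy to confirm numerically; the following short sketch (Python/NumPy) is one way to do it.

import numpy as np

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
D = np.diag(np.diag(A))
H = np.linalg.solve(D, D - A)            # Jacobi iteration matrix H = M^{-1} N

# Eigenvalues of H against the formula (2/4) cos(k*pi/4), k = 1, 2, 3.
print(np.sort(np.linalg.eigvals(H).real))
print(np.sort([0.5 * np.cos(k * np.pi / 4) for k in (1, 2, 3)]))

rho = max(abs(np.linalg.eigvals(H)))     # spectral radius, = sqrt(2)/4
R = -np.log10(rho)                       # asymptotic rate of convergence, ~0.45
print(rho, R)
print(np.ceil(6 / R))                    # iterations to reduce the error by 10^{-6}: ~14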
Normally, the stopping criterion is either ‖r_k‖ < tol_1, or ‖r_k‖ / ‖r_0‖ ≤ tol_2, where tol_1 and tol_2 are tolerances to be determined by the user. A stopping criterion based on the error in the solution is given by the following theorem.

Theorem: Let A = M - N, with H = M^{-1} N and ‖H‖ < 1. Then
    ‖x - x_k‖ ≤ (‖H‖ / (1 - ‖H‖)) ‖x_k - x_{k-1}‖,   k = 1, 2, 3, ...

Proof:
    x_{k+1} = H x_k + g,   g = M^{-1} f,
    x_k = H x_{k-1} + g;
subtracting, we get
    x_{k+1} - x_k = H (x_k - x_{k-1}),
and by recursion,
    x_{m+k+1} - x_{m+k} = H^{m+1} (x_k - x_{k-1}).
Since
    x_{k+p} - x_k = (x_{k+p} - x_{k+p-1}) + (x_{k+p-1} - x_{k+p-2}) + ⋯ + (x_{k+1} - x_k)
                  = Σ_{m=0}^{p-1} (x_{k+m+1} - x_{k+m}),
we have
    ‖x_{k+p} - x_k‖ ≤ Σ_{m=0}^{p-1} ‖H‖^{m+1} ‖x_k - x_{k-1}‖.
But
    Σ_{m=0}^{p-1} ‖H‖^{m+1} = ‖H‖ [1 + ‖H‖ + ‖H‖^2 + ⋯ + ‖H‖^{p-1}],
and
    (1 - ‖H‖)(1 + ‖H‖ + ⋯ + ‖H‖^{p-1}) = 1 - ‖H‖^p,
i.e.,
    ‖x_{k+p} - x_k‖ ≤ ‖H‖ (1 - ‖H‖^p) / (1 - ‖H‖) ‖x_k - x_{k-1}‖.
Thus, since ‖H‖ < 1, letting p → ∞ we get
    ‖x - x_k‖ ≤ (‖H‖ / (1 - ‖H‖)) ‖x_k - x_{k-1}‖.
Example: Let
    A = [ 2 -1 ; -3 4 ],   f = (1, 1)^T,
so that the exact solution is x = (1, 1)^T, and take the Jacobi splitting
    M = [ 2 0 ; 0 4 ],   N = M - A = [ 0 1 ; 3 0 ],
so that
    H = M^{-1} N = [ 0 1/2 ; 3/4 0 ],   ‖H‖_∞ = 3/4 < 1.
Starting from x_0 = 0, the iterates are
    x_3 = (13/16, 23/32)^T,   x_4 = (55/64, 55/64)^T,
so the theorem gives
    ‖x - x_4‖_∞ ≤ (3/4)/(1 - 3/4) ‖x_4 - x_3‖_∞ = 3 · (9/64) = 27/64.
Since x = (1, 1)^T, the actual error is ‖x - x_4‖_∞ = 9/64, or 1/3 of the upper bound above.
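These numbers are easy to reproduce; the sketch below (Python/NumPy, using the data as written in the example above) reruns the iteration and evaluates both sides of the bound.

import numpy as np

A = np.array([[2., -1.], [-3., 4.]])
f = np.array([1., 1.])
M = np.diag(np.diag(A)); N = M - A
H = np.linalg.solve(M, N)                 # H = [[0, 1/2], [3/4, 0]]
g = np.linalg.solve(M, f)

x = np.zeros(2)
iterates = []
for k in range(4):                        # compute x_1, ..., x_4 from x_0 = 0
    x = H @ x + g
    iterates.append(x.copy())
x3, x4 = iterates[2], iterates[3]
x_exact = np.linalg.solve(A, f)           # (1, 1)

Hnorm = np.linalg.norm(H, np.inf)                          # 3/4
bound = Hnorm / (1 - Hnorm) * np.linalg.norm(x4 - x3, np.inf)
err = np.linalg.norm(x_exact - x4, np.inf)
print(err, bound, err / bound)            # 9/64, 27/64, 1/3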
Acceleration of the Iterative Scheme

With A = M - N, the original scheme is
    x_{k+1} = x_k + M^{-1} r_k,   r_k = f - A x_k.
From (I - H)x = g, with H = M^{-1} N and g = M^{-1} f, the original scheme may also be expressed as
    x_{k+1} = H x_k + g = x_k + [g - (I - H) x_k] = x_k + r̂_k.
We can accelerate this basic scheme via
    x_{k+1} = x_k + α r̂_k      (**)
in which α is a scalar and r̂_k = M^{-1} r_k. Iteration (**) can be written as
    x_{k+1} = (I - αG) x_k + α g,
where G = I - H. Necessary condition for convergence: ρ(I - αG) < 1.

Objective: choose α so that ρ(I - αG) is minimized. Let the eigenvalues of G lie in [λ_min, λ_max], and let λ_k(I - αG) = µ_k; thus
    1 - αλ_max ≤ µ_k ≤ 1 - αλ_min.
Note that if λ_min < 0, then for some k, |µ_k| > 1 and the iteration diverges. Therefore, we assume that λ_min > 0, i.e., that the basic scheme is convergent. Hence 0 < λ_min ≤ λ_max. Since µ_k = 1 - αλ_k, the extreme values of µ trace the straight lines 1 - αλ_min and 1 - αλ_max as functions of α.

[Figure: the lines µ = 1 - αλ_min and µ = 1 - αλ_max plotted against α; the optimal α lies at their crossing, between 1/λ_max and 1/λ_min.]

For α optimal, we must have
    -(1 - αλ_max) = 1 - αλ_min,
so
    α_optimal = 2 / (λ_min + λ_max)
and
    ρ(I - α_optimal G) = 1 - 2λ_min / (λ_min + λ_max) = (λ_max - λ_min) / (λ_min + λ_max).
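The optimal α and the resulting spectral radius can be illustrated with a short script. This is a sketch only: the s.p.d. test matrix is an arbitrary choice, and the Gauss-Seidel splitting is used because its G = M^{-1}A happens to have real eigenvalues in (0, 1] here, so the assumption λ_min > 0 holds.

import numpy as np

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
M = np.tril(A)                            # M = D - L (Gauss-Seidel splitting)
G = np.linalg.solve(M, A)                 # G = I - H = M^{-1} A
lam = np.sort(np.linalg.eigvals(G).real)
lam_min, lam_max = lam[0], lam[-1]        # 0.5 and 1.0 for this matrix

alpha_opt = 2.0 / (lam_min + lam_max)     # 4/3
rho_basic = np.max(np.abs(1.0 - lam))     # alpha = 1 (unaccelerated): 0.5
rho_opt = (lam_max - lam_min) / (lam_max + lam_min)   # 1/3
print(alpha_opt, rho_basic, rho_opt)      # accelerated radius < basic radius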
Diagonally Dominant Matrices

* A (n × n) is strictly diagonally dominant if:
    |a_ii| > Σ_{j≠i} |a_ij|,   i = 1, ..., n.

* A is weakly diagonally dominant if:
    |a_ii| ≥ Σ_{j≠i} |a_ij|,   i = 1, ..., n.

* A is irreducibly diagonally dominant if:
  (i) there does not exist a permutation P for which
        P^T A P = [ B_11 B_12 ; 0 B_22 ],
      where B_11 and B_22 are square matrices (i.e., A is irreducible), and
  (ii) |a_ii| ≥ Σ_{j≠i} |a_ij|, with strict inequality holding for at least one row k.

Gerschgorin's Theorems

1. Let A ∈ C^{n×n}; then each eigenvalue of A lies in one of the disks in the complex plane
    |λ - a_ii| ≤ Σ_{j≠i} |a_ij|.

Proof: Consider the eigenvalue problem Ax = λx, where |x_k| = max_{1≤j≤n} |x_j|. Since (λI - A)x = 0, from the k-th row
    (λ - a_kk) x_k = Σ_{j≠k} a_kj x_j,
so
    |λ - a_kk| = (1/|x_k|) | Σ_{j≠k} a_kj x_j | ≤ Σ_{j≠k} |a_kj| |x_j| / |x_k|,
or
    |λ - a_kk| ≤ Σ_{j≠k} |a_kj|.
Example: For a 3 × 3 matrix A whose first and third Gerschgorin disks have radius 5, the eigenvalues (one real eigenvalue and a complex-conjugate pair with imaginary parts ±3.885) are all contained in the union of the three disks.

2. If k Gerschgorin disks are disjoint from the rest, then exactly k eigenvalues lie in the union of those k disks.

Example: For a 3 × 3 matrix with off-diagonal entries 1/2 and eigenvalues λ(A) ≈ 0.5, 1.5, 5.0, the disk about the diagonal entry 5 is disjoint from the other two disks; it contains exactly one eigenvalue, while the remaining two eigenvalues lie in the union of the two overlapping disks.
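Gerschgorin's first theorem is easy to check numerically; the sketch below uses an arbitrary 3 × 3 test matrix (an illustrative choice, not the matrices of the examples above) and verifies that every eigenvalue falls inside at least one disk.

import numpy as np

# Arbitrary 3x3 test matrix (illustrative choice).
A = np.array([[ 4.0, 1.0, -1.0],
              [ 0.5, 1.0,  0.5],
              [-1.0, 0.5, -3.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # r_i = sum_{j != i} |a_ij|

# Every eigenvalue must lie in at least one disk |z - a_ii| <= r_i.
for lam in np.linalg.eigvals(A):
    in_some_disk = np.any(np.abs(lam - centers) <= radii + 1e-12)
    print(lam, in_some_disk)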
Theorem: If A is strictly diagonally dominant or irreducibly diagonally dominant, then it is nonsingular.

Proof:
(a) If A is strictly diagonally dominant, then zero is not contained in the union of the Gerschgorin disks, since |a_ii| > Σ_{j≠i} |a_ij| for every i.

(b) If A is irreducibly diagonally dominant, suppose A is singular, i.e., there exists x ≠ 0 such that Ax = 0. Let m be a row of A for which
    |a_mm| > Σ_{j≠m} |a_mj|.
Let J be the set of indices
    J = {k : |x_k| ≥ |x_i|, i = 1, 2, ..., n, and |x_k| > |x_j| for some j, 1 ≤ j ≤ n}.
Clearly, J is not empty, for otherwise this would imply |x_1| = |x_2| = ⋯ = |x_n| ≠ 0,
contradicting |a_mm| > Σ_{j≠m} |a_mj| (via the m-th row of Ax = 0). Now, for any k ∈ J, we have
    |a_kk| ≤ Σ_{j≠k} |a_kj| (|x_j| / |x_k|).
Thus, to maintain the property of diagonal dominance, we must have a_kj = 0 whenever |x_j|/|x_k| < 1, i.e., for k ∈ J and j ∉ J. But, since A is irreducible, this is a contradiction. Q.E.D.

Theorem: If A is strictly diagonally dominant, or irreducibly diagonally dominant, then the associated Jacobi and Gauss-Seidel iterations converge for any initial iterate x_0.

Proof: Let A = D - L - U, where D is diagonal and L, U are strictly lower and upper triangular, respectively.

(a) Strictly diagonally dominant case:

(i) Jacobi:
    A = M_J - N_J,   M_J = D,   N_J = L + U,
    H_J = M_J^{-1} N_J = M_J^{-1}(M_J - A) = I - M_J^{-1} A = I - D^{-1} A,
i.e., H_J has zero diagonal entries and off-diagonal entries -a_ij/a_ii:
    H_J = [  0           -a_12/a_11  ...  -a_1n/a_11
            -a_21/a_22    0          ...  -a_2n/a_22
              ...                          ...
            -a_n1/a_nn   -a_n2/a_nn  ...    0        ].
Thus, from the Gerschgorin theorems,
the Gerschgorin disks of H_J are all centered at the origin with radii Σ_{j≠i} |a_ij|/|a_ii| < 1, so
    ρ(H_J) < 1,
as well as ‖H_J‖_∞ < 1, a sufficient condition for convergence.

(ii) Gauss-Seidel: H_GS = (D - L)^{-1} U. Consider the eigenvalue problem
    H_GS w = λ w,   or   U w = λ (D - L) w.
Also, let w be scaled such that for some m, |w_m| = 1 and |w_i| ≤ 1 for all i. Then, from the m-th row,
    -Σ_{j>m} a_mj w_j = λ [ a_mm w_m + Σ_{j<m} a_mj w_j ],
i.e.,
    λ = -Σ_{j>m} a_mj w_j / ( a_mm w_m + Σ_{j<m} a_mj w_j ),
so
    |λ| ≤ Σ_{j>m} |a_mj| / ( |a_mm| - Σ_{j<m} |a_mj| ) = σ_2 / (d - σ_1) = σ_2 / ( σ_2 + (d - σ_1 - σ_2) ),
where d = |a_mm|, σ_2 = Σ_{j>m} |a_mj|, σ_1 = Σ_{j<m} |a_mj|, and d > σ_1 + σ_2. Hence, |λ| < 1 and Gauss-Seidel converges.

(b) Irreducibly diagonally dominant case:

(i) Jacobi: From the Gerschgorin theorems, we can only show that ρ(M_J^{-1} N_J) ≤ 1. Suppose ρ(H_J) = 1, where H_J = M_J^{-1} N_J; then (H_J - λI) is singular for some λ with |λ| = 1. Hence, for this λ, M_J^{-1}(N_J - λ M_J) is singular. Since A = M_J - N_J is irreducibly diagonally dominant, for any scalar α with |α| = 1 the matrix (α M_J - N_J) is also irreducibly diagonally dominant, i.e., (α M_J - N_J) is nonsingular. Then (λ M_J - N_J) is nonsingular for |λ| = 1, a contradiction if λ is an eigenvalue of H_J. Consequently, ρ(H_J) < 1.

(ii) Gauss-Seidel: The proof that ρ(H_GS) < 1 is identical to that of Jacobi above.
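The theorem is easy to check numerically for any strictly diagonally dominant matrix; the test matrix below is an arbitrary illustrative choice.

import numpy as np

# A strictly diagonally dominant (nonsymmetric) test matrix -- arbitrary choice.
A = np.array([[ 5., -1.,  2.],
              [ 1.,  4., -2.],
              [-1.,  2.,  6.]])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)           # A = D - L - U, with L and U strictly triangular
U = -np.triu(A,  1)

H_J  = np.linalg.solve(D,     L + U)    # Jacobi iteration matrix
H_GS = np.linalg.solve(D - L, U)        # Gauss-Seidel iteration matrix

rho = lambda B: max(abs(np.linalg.eigvals(B)))
print(rho(H_J), rho(H_GS))              # both < 1, as the theorem guarantees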
Property A

A matrix A is 2-cyclic, or has property A, if there is a permutation P such that
    P A P^T = [ D_1 E ; F D_2 ],
where D_1 and D_2 are diagonal. For example, a tridiagonal matrix has property A with the permutation P given by
    P^T = [ e_1, e_3, e_5, ... ; e_2, e_4, e_6, ... ]
(odd-numbered unknowns first, then the even-numbered ones).

Let A = [ D_1 E ; F D_2 ] with D_1, D_2 diagonal.

(i) Jacobi:
    H_J = [ D_1 0 ; 0 D_2 ]^{-1} [ 0 -E ; -F 0 ] = [ 0 -D_1^{-1}E ; -D_2^{-1}F 0 ].
Now consider the eigenvalue problem H_J (x, y)^T = λ_J (x, y)^T, which gives
    (D_2^{-1} F)(D_1^{-1} E) y = λ_J^2 y.
For convergence we must have |λ_J| < 1.

(ii) Gauss-Seidel:
    H_GS = [ D_1 0 ; F D_2 ]^{-1} [ 0 -E ; 0 0 ] = [ 0 -D_1^{-1}E ; 0 (D_2^{-1}F)(D_1^{-1}E) ].
Hence,
    λ(H_GS) := 0, together with the eigenvalues λ_GS of (D_2^{-1} F)(D_1^{-1} E),
and for convergence we must have |λ_GS[(D_2^{-1} F)(D_1^{-1} E)]| < 1. Thus,
    λ_GS = λ_J^2,
i.e., Gauss-Seidel converges twice as fast as Jacobi for matrices with property A.
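For a tridiagonal (property-A) matrix the relation λ_GS = λ_J^2, and hence ρ(H_GS) = ρ(H_J)^2, can be verified directly; the matrix below is an illustrative choice.

import numpy as np

n = 6
A = np.diag(4.0 * np.ones(n)) + np.diag(-1.0 * np.ones(n - 1), 1) \
                              + np.diag(-1.0 * np.ones(n - 1), -1)

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A,  1)

H_J  = np.linalg.solve(D,     L + U)
H_GS = np.linalg.solve(D - L, U)

rho = lambda B: max(abs(np.linalg.eigvals(B)))
print(rho(H_GS), rho(H_J) ** 2)         # equal: rho(H_GS) = rho(H_J)^2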
Positive Definite Systems

Theorem: Let A be symmetric, with A = M - N, in which M is nonsingular and S = M^T + N is positive definite. Then H = M^{-1} N = I - M^{-1} A is convergent (i.e., ρ(H) < 1) iff A is positive definite.

Proof (sufficiency): Let λ, u be an eigenpair of H, i.e., Hu = λu, or (I - M^{-1}A)u = λu. Hence,
    A u = (1 - λ) M u,                                               (1)
and, since u*Au ≠ 0 (so λ ≠ 1), taking conjugates and using the symmetry of A,
    1/(1 - λ) = u*Mu / u*Au   and   1/(1 - λ̄) = u*M^T u / u*Au.      (2)
From (2), using M^T = A + N^T, we have
    (1/(1 - λ̄)) (u*Au) = u*(A + N^T) u = u*Au + u*N^T u,
or
    ( λ̄/(1 - λ̄) ) (u*Au) = u*N^T u.                                  (3)
From (2), we also get
    ( 1/(1 - λ) ) (u*Au) = u*Mu.                                     (4)
Adding (3) and (4), we get
    [ λ̄/(1 - λ̄) + 1/(1 - λ) ] (u*Au) = u*(M + N^T) u = u*S^T u,
i.e.,
    [ (1 - |λ|^2) / ((1 - λ)(1 - λ̄)) ] (u*Au) = u*S^T u.
Thus, if λ = α + iβ, with i = √(-1), then
    [ (1 - |λ|^2) / γ ] (u*Au) = u*S^T u,
where γ = (1 - α)^2 + β^2 > 0. As a result, if A is s.p.d. and S is s.p.d., then |λ| < 1 for every eigenvalue λ of H, i.e., ρ(H) < 1.
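A numerical illustration of the theorem (a sketch; the s.p.d. matrix and the splitting are arbitrary choices): for the Gauss-Seidel splitting of an s.p.d. matrix, S = M^T + N = D is s.p.d., and ρ(H) < 1 as predicted.

import numpy as np

# An arbitrary s.p.d. test matrix.
B = np.random.default_rng(0).standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)

# Gauss-Seidel splitting: M = D - L, N = L^T (A symmetric).
M = np.tril(A)
N = M - A

S = M.T + N                          # here S = D (the diagonal of A), s.p.d.
H = np.linalg.solve(M, N)

print(np.all(np.linalg.eigvalsh((S + S.T) / 2) > 0))   # S is s.p.d.
print(max(abs(np.linalg.eigvals(H))))                   # rho(H) < 1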
Observation 1: Let A = D - L - L^T be s.p.d. For Jacobi, M = D (with e_j^T D e_j > 0 since A is s.p.d.), and
    S = M^T + N = D + L + L^T,
so Jacobi converges if S = D + L + L^T (= 2D - A) is s.p.d.

Observation 2: For Gauss-Seidel, M = D - L and N = L^T, so
    S = M^T + N = D - L^T + L^T = D,
which is s.p.d.; hence Gauss-Seidel converges for s.p.d. systems.

Symmetric Gauss-Seidel

Let A be s.p.d. and given by A = D - L - L^T, where D is diagonal and L is strictly lower triangular. The iteration for solving Ax = f is
    (D - L) x_{k+1/2} = L^T x_k + f,
    (D - L^T) x_{k+1} = L x_{k+1/2} + f.
Let
    L̄ = D^{-1/2} L D^{-1/2},   y_k = D^{1/2} x_k.
Then the symmetric Gauss-Seidel iteration is given by
    (I - L̄) y_{k+1/2} = L̄^T y_k + D^{-1/2} f,
    (I - L̄^T) y_{k+1} = L̄ y_{k+1/2} + D^{-1/2} f,
or
    (I - L̄^T) y_{k+1} = L̄ (I - L̄)^{-1} L̄^T y_k + [ I + L̄ (I - L̄)^{-1} ] D^{-1/2} f.
Let z_{k+1} = (I - L̄^T) y_{k+1}. Then
    z_{k+1} = L̄ (I - L̄)^{-1} L̄^T (I - L̄^T)^{-1} z_k + h = K z_k + h.
Since L̄ is strictly lower triangular (its diagonal elements are all zero), L̄^n = 0, so
    L̄ (I - L̄)^{-1} = L̄ (I + L̄ + L̄^2 + ⋯ + L̄^{n-1}) = (I + L̄ + L̄^2 + ⋯ + L̄^{n-1}) - I = (I - L̄)^{-1} - I.
Thus, the iteration matrix K is given by
    K = [ (I - L̄)^{-1} - I ] L̄^T (I - L̄^T)^{-1} = (I - L̄)^{-1} L̄ L̄^T (I - L̄^T)^{-1},
i.e., K = B B^T with B = (I - L̄)^{-1} L̄, so K is symmetric positive semidefinite and its eigenvalues λ(K) ≥ 0 are all real. Also,
    I - K = I - (I - L̄)^{-1} L̄ L̄^T (I - L̄^T)^{-1}
          = (I - L̄)^{-1} [ (I - L̄)(I - L̄^T) - L̄ L̄^T ] (I - L̄^T)^{-1}
          = (I - L̄)^{-1} [ I - L̄ - L̄^T ] (I - L̄^T)^{-1}
          = (I - L̄)^{-1} [ D^{-1/2} A D^{-1/2} ] (I - L̄^T)^{-1},
and D^{-1/2} A D^{-1/2} is s.p.d., so (I - K) is s.p.d., i.e., λ(K) < 1. Hence
    0 ≤ λ(K) < 1.
Thus, symmetric Gauss-Seidel converges for s.p.d. systems.
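The two-half-step iteration and the bound 0 ≤ λ(K) < 1 can be checked with a short script; the s.p.d. test matrix and right-hand side are arbitrary illustrative choices.

import numpy as np

# Arbitrary s.p.d. test matrix A = D - L - L^T.
n = 5
A = np.diag(3.0 * np.ones(n)) + np.diag(-1.0 * np.ones(n - 1), 1) \
                              + np.diag(-1.0 * np.ones(n - 1), -1)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)

f = np.ones(n)
x = np.zeros(n)
for k in range(50):                        # symmetric Gauss-Seidel sweeps
    x_half = np.linalg.solve(D - L,   L.T @ x      + f)   # forward sweep
    x      = np.linalg.solve(D - L.T, L   @ x_half + f)   # backward sweep
print(np.linalg.norm(A @ x - f))           # residual -> 0

# Eigenvalues of the symmetrized iteration matrix K lie in [0, 1).
Dh = np.diag(1.0 / np.sqrt(np.diag(A)))
Lb = Dh @ L @ Dh                           # L-bar = D^{-1/2} L D^{-1/2}
I = np.eye(n)
K = np.linalg.solve(I - Lb, Lb @ Lb.T) @ np.linalg.inv(I - Lb.T)
print(np.sort(np.linalg.eigvals(K).real))  # all in [0, 1)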