COURSE Iterative methods for solving linear systems
- Primrose McBride
- 5 years ago
Because of round-off errors, direct methods become less efficient than iterative methods for large systems (with a large number of unknowns). An iterative scheme for linear systems consists of converting the system

    Ax = b    (1)

to an equivalent form

    x = c - Bx.

After an initial guess x^(0), the sequence of approximations x^(0), x^(1), ..., x^(k), ... of the solution is generated by computing

    x^(k) = c - Bx^(k-1),   for k = 1, 2, 3, ....
4.3.1. Jacobi iterative method

Consider the n x n linear system

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n,

where we assume that the diagonal entries a_11, a_22, ..., a_nn are all nonzero. We begin our iterative scheme by solving each equation for one of the variables:

    x_1 = -u_12 x_2 - ... - u_1n x_n + c_1
    x_2 = -u_21 x_1 - u_23 x_3 - ... - u_2n x_n + c_2
    ...
    x_n = -u_n1 x_1 - ... - u_{n,n-1} x_{n-1} + c_n,

where u_ij = a_ij / a_ii and c_i = b_i / a_ii, i = 1, ..., n.
Let x^(0) = (x_1^(0), x_2^(0), ..., x_n^(0)) be an initial approximation of the solution. The (k+1)-st approximation is, for k = 0, 1, 2, ...:

    x_1^(k+1) = -u_12 x_2^(k) - ... - u_1n x_n^(k) + c_1
    x_2^(k+1) = -u_21 x_1^(k) - u_23 x_3^(k) - ... - u_2n x_n^(k) + c_2
    ...
    x_n^(k+1) = -u_n1 x_1^(k) - ... - u_{n,n-1} x_{n-1}^(k) + c_n.

An algorithmic form:

    x_i^(k) = ( b_i - sum_{j=1, j != i}^{n} a_ij x_j^(k-1) ) / a_ii,   i = 1, 2, ..., n,  for k >= 1.

The iterative process is terminated when a convergence criterion is satisfied. Stopping criteria:

    ||x^(k) - x^(k-1)|| < eps   or   ||x^(k) - x^(k-1)|| / ||x^(k)|| < eps,

with eps > 0 a prescribed tolerance.
Example 1. Solve the following system using the Jacobi iterative method. Use eps = 10^(-3) and x^(0) = (0, 0, 0, 0) as the starting vector.

    7x_1 -  x_2 + 2x_3         = 17
     x_1 - 9x_2 + 3x_3 -   x_4 = 13
    2x_1        + 10x_3 +  x_4 = 15
     x_1 -  x_2 +  x_3 + 6x_4  = 10.

These equations can be rearranged to give

    x_1 = (17 + x_2 - 2x_3) / 7
    x_2 = (-13 + x_1 + 3x_3 - x_4) / 9
    x_3 = (15 - 2x_1 - x_4) / 10
    x_4 = (10 - x_1 + x_2 - x_3) / 6

and, for example,

    x_1^(1) = (17 + x_2^(0) - 2x_3^(0)) / 7
    x_2^(1) = (-13 + x_1^(0) + 3x_3^(0) - x_4^(0)) / 9
    x_3^(1) = (15 - 2x_1^(0) - x_4^(0)) / 10
    x_4^(1) = (10 - x_1^(0) + x_2^(0) - x_3^(0)) / 6.
Substituting x^(0) = (0, 0, 0, 0) into the right-hand side of each of these equations gives

    x_1^(1) = 17/7  ~  2.428571
    x_2^(1) = -13/9 ~ -1.444444
    x_3^(1) = 15/10 =  1.5
    x_4^(1) = 10/6  ~  1.666667,

and so x^(1) ~ (2.428571, -1.444444, 1.5, 1.666667). The Jacobi iterative process is

    x_1^(k+1) = (17 + x_2^(k) - 2x_3^(k)) / 7
    x_2^(k+1) = (-13 + x_1^(k) + 3x_3^(k) - x_4^(k)) / 9
    x_3^(k+1) = (15 - 2x_1^(k) - x_4^(k)) / 10
    x_4^(k+1) = (10 - x_1^(k) + x_2^(k) - x_3^(k)) / 6,   k = 0, 1, 2, ....

We obtain a sequence that converges to the exact solution (2, -1, 1, 1); the stopping criterion is first satisfied at x^(9) (with, e.g., x_4^(9) = 1.00067).
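The componentwise Jacobi process above translates directly into a short program. The sketch below (Python; the function name `jacobi` and the infinity-norm in the stopping test are our own choices) runs the iteration on the system of Example 1:

```python
def jacobi(A, b, x0, eps=1e-3, max_iter=100):
    """Jacobi iteration: x_i^(k) = (b_i - sum_{j != i} a_ij x_j^(k-1)) / a_ii."""
    n = len(A)
    x_old = list(x0)
    for k in range(1, max_iter + 1):
        # every component is computed from the PREVIOUS iterate x_old
        x_new = [(b[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # stopping criterion: ||x^(k) - x^(k-1)||_inf < eps
        if max(abs(x_new[i] - x_old[i]) for i in range(n)) < eps:
            return x_new, k
        x_old = x_new
    return x_old, max_iter

A = [[7, -1, 2, 0],
     [1, -9, 3, -1],
     [2, 0, 10, 1],
     [1, -1, 1, 6]]
b = [17, 13, 15, 10]

x, k = jacobi(A, b, [0, 0, 0, 0])
print(x, k)   # x is close to the exact solution (2, -1, 1, 1)
```

Note that two vectors (`x_old` and `x_new`) are kept, since every component of the new iterate depends only on the old one.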
4.3.2. Gauss-Seidel iterative method

Almost the same as the Jacobi method, except that each x-value is improved using the most recent approximations of the other variables. For an n x n system, the (k+1)-st approximation is

    x_1^(k+1) = -u_12 x_2^(k) - ... - u_1n x_n^(k) + c_1
    x_2^(k+1) = -u_21 x_1^(k+1) - u_23 x_3^(k) - ... - u_2n x_n^(k) + c_2
    ...
    x_n^(k+1) = -u_n1 x_1^(k+1) - ... - u_{n,n-1} x_{n-1}^(k+1) + c_n,

with k = 0, 1, 2, ... and u_ij = a_ij / a_ii, c_i = b_i / a_ii, i = 1, ..., n (as in the Jacobi method). Algorithmic form:

    x_i^(k) = ( b_i - sum_{j=1}^{i-1} a_ij x_j^(k) - sum_{j=i+1}^{n} a_ij x_j^(k-1) ) / a_ii
for each i = 1, 2, ..., n, and for k >= 1. Stopping criteria:

    ||x^(k) - x^(k-1)|| < eps   or   ||x^(k) - x^(k-1)|| / ||x^(k)|| < eps,

with eps > 0 a prescribed tolerance.

Remark 2. Because the new values can immediately be stored in the locations that held the old values, the storage requirement for x with the Gauss-Seidel method is half that of the Jacobi method, and the rate of convergence is faster.

Example 3. Solve the following system using the Gauss-Seidel iterative method. Use eps = 10^(-3) and x^(0) = (0, 0, 0, 0) as the starting vector.

    7x_1 -  x_2 + 2x_3         = 17
     x_1 - 9x_2 + 3x_3 -   x_4 = 13
    2x_1        + 10x_3 +  x_4 = 15
     x_1 -  x_2 +  x_3 + 6x_4  = 10.
We have

    x_1 = (17 + x_2 - 2x_3) / 7
    x_2 = (-13 + x_1 + 3x_3 - x_4) / 9
    x_3 = (15 - 2x_1 - x_4) / 10
    x_4 = (10 - x_1 + x_2 - x_3) / 6,

and, for example,

    x_1^(1) = (17 + x_2^(0) - 2x_3^(0)) / 7
    x_2^(1) = (-13 + x_1^(1) + 3x_3^(0) - x_4^(0)) / 9
    x_3^(1) = (15 - 2x_1^(1) - x_4^(0)) / 10
    x_4^(1) = (10 - x_1^(1) + x_2^(1) - x_3^(1)) / 6,
which provide the following Gauss-Seidel iterative process:

    x_1^(k+1) = (17 + x_2^(k) - 2x_3^(k)) / 7
    x_2^(k+1) = (-13 + x_1^(k+1) + 3x_3^(k) - x_4^(k)) / 9
    x_3^(k+1) = (15 - 2x_1^(k+1) - x_4^(k)) / 10
    x_4^(k+1) = (10 - x_1^(k+1) + x_2^(k+1) - x_3^(k+1)) / 6,

for k = 0, 1, 2, .... Substituting x^(0) = (0, 0, 0, 0) into the right-hand side of each of these equations gives

    x_1^(1) = 17/7 ~ 2.428571
    x_2^(1) = (-13 + 2.428571)/9 ~ -1.174603
    x_3^(1) = (15 - 2(2.428571))/10 ~ 1.014286
    x_4^(1) = (10 - 2.428571 - 1.174603 - 1.014286)/6 ~ 0.897090,
and so x^(1) ~ (2.428571, -1.174603, 1.014286, 0.897090). A similar procedure generates a sequence that converges to the exact solution (2, -1, 1, 1); the stopping criterion is first satisfied at x^(5) (with, e.g., x_1^(5) = 2.00005 and x_2^(5) = -1.00030).
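A sketch of the Gauss-Seidel sweep in Python (names of our own choosing). The vector x is overwritten in place, which is exactly the storage saving noted above:

```python
def gauss_seidel(A, b, x0, eps=1e-3, max_iter=100):
    """Gauss-Seidel: new values x_j^(k), j < i, are used as soon as available."""
    n = len(A)
    x = list(x0)          # updated in place during each sweep
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))          # updated x_j^(k)
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))   # old x_j^(k-1)
            new = (b[i] - s1 - s2) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < eps:    # ||x^(k) - x^(k-1)||_inf < eps
            return x, k
    return x, max_iter

A = [[7, -1, 2, 0],
     [1, -9, 3, -1],
     [2, 0, 10, 1],
     [1, -1, 1, 6]]
b = [17, 13, 15, 10]

x, k = gauss_seidel(A, b, [0, 0, 0, 0])
print(x, k)   # converges toward (2, -1, 1, 1), in fewer sweeps than Jacobi
```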
4.3.3. Relaxation method

In case of convergence, the Gauss-Seidel method is faster than the Jacobi method. Convergence can be improved further using the relaxation method (SOR method; SOR = Successive Over-Relaxation). Algorithmic form of the method:

    x_i^(k) = (omega / a_ii) ( b_i - sum_{j=1}^{i-1} a_ij x_j^(k) - sum_{j=i+1}^{n} a_ij x_j^(k-1) ) + (1 - omega) x_i^(k-1)

for each i = 1, 2, ..., n, and for k >= 1.

For 0 < omega < 1 the procedure is called an under-relaxation method; it can be used to obtain convergence for some systems that are not convergent by the Gauss-Seidel method. For omega > 1 the procedure is called an over-relaxation method; it can be used to accelerate convergence for systems that are convergent by the Gauss-Seidel method.
By Kahan's theorem, the method can converge only for 0 < omega < 2.

Remark 4. For omega = 1, the relaxation method is the Gauss-Seidel method.

Example 5. Solve the following system using the relaxation iterative method. Use eps = 10^(-3), x^(0) = (1, 1, 1) and omega = 1.25:

    4x_1 + 3x_2        = 24
    3x_1 + 4x_2 -  x_3 = 30
         -  x_2 + 4x_3 = -24.

We have

    x_1^(k) = -0.25 x_1^(k-1) - 0.9375 x_2^(k-1) + 7.5
    x_2^(k) = -0.9375 x_1^(k) - 0.25 x_2^(k-1) + 0.3125 x_3^(k-1) + 9.375
    x_3^(k) =  0.3125 x_2^(k) - 0.25 x_3^(k-1) - 7.5,

for k >= 1. The solution is (3, 4, -5).
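The relaxation step can be sketched as follows (Python, names of our own choosing). Setting omega = 1 reproduces the Gauss-Seidel sweep, as in Remark 4:

```python
def sor(A, b, x0, omega, eps=1e-3, max_iter=200):
    """SOR: blend the Gauss-Seidel update with the previous value of x_i."""
    n = len(A)
    x = list(x0)
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))          # updated values
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))   # old values
            new = omega * (b[i] - s1 - s2) / A[i][i] + (1 - omega) * x[i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < eps:
            return x, k
    return x, max_iter

A = [[4, 3, 0],
     [3, 4, -1],
     [0, -1, 4]]
b = [24, 30, -24]

x, k = sor(A, b, [1, 1, 1], omega=1.25)
print(x, k)   # converges toward the solution (3, 4, -5)
```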
4.3.4 The matrix formulations of the iterative methods

Split the matrix A into the sum A = D + L + U, where D is the diagonal of A, L the strictly lower triangular part of A, and U the strictly upper triangular part of A. That is,

    D = diag(a_11, a_22, ..., a_nn),
    L = the matrix with entries a_ij for i > j and zeros elsewhere,
    U = the matrix with entries a_ij for i < j and zeros elsewhere.

The system Ax = b can then be written as (D + L + U)x = b.
The Jacobi method in matrix form is given by

    D x^(k) = -(L + U) x^(k-1) + b,

the Gauss-Seidel method in matrix form is given by

    (D + L) x^(k) = -U x^(k-1) + b,

and the relaxation method in matrix form is given by

    (D + omega L) x^(k) = ((1 - omega) D - omega U) x^(k-1) + omega b.

Convergence of the iterative methods

Remark 6. The convergence (or divergence) of the iterative process in the Jacobi and Gauss-Seidel methods does not depend on the initial guess, but only on the character of the matrices themselves. However, in case of convergence, a good first guess will make for a relatively small number of iterations.
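The matrix forms can be checked against the componentwise versions. The sketch below (pure Python, names of our own choosing) implements the Gauss-Seidel form (D + L) x^(k) = -U x^(k-1) + b; since D + L is lower triangular, each step is a forward substitution:

```python
def split(A):
    """Return D + L (lower triangular incl. diagonal) and U (strictly upper)."""
    n = len(A)
    DL = [[A[i][j] if j <= i else 0 for j in range(n)] for i in range(n)]
    U  = [[A[i][j] if j >  i else 0 for j in range(n)] for i in range(n)]
    return DL, U

def forward_sub(T, rhs):
    """Solve T x = rhs for lower triangular T by forward substitution."""
    n = len(T)
    x = [0.0] * n
    for i in range(n):
        x[i] = (rhs[i] - sum(T[i][j] * x[j] for j in range(i))) / T[i][i]
    return x

def gauss_seidel_matrix(A, b, x0, eps=1e-3, max_iter=100):
    """Iterate (D + L) x^(k) = -U x^(k-1) + b."""
    DL, U = split(A)
    n = len(A)
    x = list(x0)
    for k in range(1, max_iter + 1):
        rhs = [b[i] - sum(U[i][j] * x[j] for j in range(n)) for i in range(n)]
        x_new = forward_sub(DL, rhs)
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new, k
        x = x_new
    return x, max_iter

A = [[7, -1, 2, 0],
     [1, -9, 3, -1],
     [2, 0, 10, 1],
     [1, -1, 1, 6]]
b = [17, 13, 15, 10]
x, k = gauss_seidel_matrix(A, b, [0, 0, 0, 0])
print(x, k)   # same iterates as the componentwise Gauss-Seidel sweep
```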
A sufficient condition for convergence:

Theorem 7 (Convergence Theorem). If A is strictly diagonally dominant, then the Jacobi, Gauss-Seidel and relaxation methods converge for any choice of the starting vector x^(0).

Example 8. Consider a system of equations in x_1, x_2, x_3 whose coefficient matrix is, for instance,

    A = [ 3  -1   1
          2   4   0
          1  -2   6 ].

This matrix is strictly diagonally dominant, since

    |a_11| = |3| = 3 > |-1| + |1| = 2,
    |a_22| = |4| = 4 > |2| + |0| = 2,
    |a_33| = |6| = 6 > |1| + |-2| = 3.

Hence, if the Jacobi or Gauss-Seidel method is used to solve the system of equations, it will converge for any choice of the starting vector x^(0).
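Strict diagonal dominance is easy to test programmatically. A minimal check (our own helper, not part of the notes), tried on a strictly diagonally dominant matrix and on one that is not:

```python
def strictly_diagonally_dominant(A):
    """True iff |a_ii| > sum_{j != i} |a_ij| holds for every row i."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

A = [[3, -1, 1],
     [2, 4, 0],
     [1, -2, 6]]
print(strictly_diagonally_dominant(A))              # True: 3 > 2, 4 > 2, 6 > 3
print(strictly_diagonally_dominant([[1, 2],
                                    [3, 4]]))       # False: 1 > 2 fails in row 1
```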
Example 9. Consider the linear system

    4x_1 +  x_2 = 3
    2x_1 + 5x_2 = 1.

a) Perform two iterations of the Jacobi method on this system, beginning with the vector x^(0) = [3, ].

b) Perform two iterations of the Gauss-Seidel method on this system, beginning with the same vector.

(The solutions of the system are x_1 = 7/9 and x_2 = -1/9.)
5. Numerical methods for solving nonlinear equations in R

Let f : Omega -> R, Omega ⊂ R. Consider the equation

    f(x) = 0,   x ∈ Omega.    (2)

We attach to this equation a mapping F : D -> D, D ⊂ Omega^n. Let (x_0, ..., x_{n-1}) ∈ D. Using F and the numbers x_0, x_1, ..., x_{n-1} we construct iteratively the sequence

    x_0, x_1, ..., x_{n-1}, x_n, ...    (3)

with

    x_i = F(x_{i-n}, ..., x_{i-1}),   i = n, n+1, ....    (4)

The problem consists in choosing F and x_0, ..., x_{n-1} ∈ D such that the sequence (3) converges to the solution of equation (2).
Definition 10. The procedure of approximating the solution of equation (2) by the elements of the sequence (3), computed as in (4), is called an F-method. The numbers x_0, x_1, ..., x_{n-1} are called the starting points, and the k-th element of the sequence (3) is called an approximation of k-th order of the solution.

If the set of starting points has only one element, then the F-method is a one-step method; if it has more than one element, then the F-method is a multistep method.

Definition 11. If the sequence (3) converges to the solution of equation (2), then the F-method is convergent; otherwise it is divergent.
Definition 12. Let alpha ∈ Omega be a solution of equation (2) and let x_0, x_1, ..., x_{n-1}, x_n, ... be the sequence generated by a given F-method. The number p having the property

    lim_{x_i -> alpha} (alpha - F(x_{i-n+1}, ..., x_i)) / (alpha - x_i)^p = C != 0,   C constant,

is called the order of the F-method.

We construct some classes of F-methods based on interpolation procedures. Let alpha ∈ Omega be a solution of equation (2) and V(alpha) a neighborhood of alpha. Assume that f has an inverse on V(alpha) and denote g := f^(-1). Since f(alpha) = 0, it follows that

    alpha = g(0).

This way, the approximation of the solution alpha is reduced to the approximation of g(0).
Definition 13. The approximation of g by means of an interpolating method, and of alpha by the value of the interpolant of g at the point zero, is called the inverse interpolation procedure.

5.1. One-step methods

Let F be a one-step method, i.e., for a given x_i we have x_{i+1} = F(x_i).

Remark 14. If p = 1, the convergence condition is |F'(x)| < 1. If p > 1, there always exists a neighborhood of alpha where the F-method converges.

All information on f is given at a single point, the starting value, so we are led to Taylor interpolation.

Theorem 15. Let alpha be a solution of equation (2), V(alpha) a neighborhood of alpha, x_i ∈ V(alpha), and let f fulfill the necessary continuity conditions.
Then we have the following method, denoted F_m^T, for approximating alpha:

    F_m^T(x_i) = x_i + sum_{k=1}^{m-1} ((-1)^k / k!) [f(x_i)]^k g^(k)(f(x_i)),    (5)

where g = f^(-1).

Proof. There exists g = f^(-1) ∈ C^m[V(0)]. Let y_i = f(x_i) and consider the Taylor interpolation formula

    g(y) = (T_{m-1} g)(y) + (R_{m-1} g)(y),

with

    (T_{m-1} g)(y) = sum_{k=0}^{m-1} (1/k!) (y - y_i)^k g^(k)(y_i)

and R_{m-1} g the corresponding remainder. Since alpha = g(0) and g ~ T_{m-1} g, it follows (using g(y_i) = x_i and (0 - y_i)^k = (-1)^k y_i^k) that

    alpha ~ (T_{m-1} g)(0) = x_i + sum_{k=1}^{m-1} ((-1)^k / k!) y_i^k g^(k)(y_i).
Hence,

    x_{i+1} := F_m^T(x_i) = x_i + sum_{k=1}^{m-1} ((-1)^k / k!) [f(x_i)]^k g^(k)(f(x_i))

is an approximation of alpha, and F_m^T is an approximation method for the solution alpha.

Concerning the order of the method F_m^T we state:

Theorem 16. If g = f^(-1) satisfies the condition g^(m)(0) != 0, then ord(F_m^T) = m.

Remark 17. We have an upper bound for the absolute error in approximating alpha by x_{i+1}:

    |alpha - F_m^T(x_i)| <= (1/m!) |f(x_i)|^m M_m g,   with M_m g = sup_{y ∈ V(0)} |g^(m)(y)|.
Particular cases.

1) Case m = 2.

    F_2^T(x_i) = x_i - f(x_i)/f'(x_i).

This method is called Newton's method (the tangent method). Its order is 2.

2) Case m = 3.

    F_3^T(x_i) = x_i - f(x_i)/f'(x_i) - (1/2) [f(x_i)/f'(x_i)]^2 (f''(x_i)/f'(x_i)),

with ord(F_3^T) = 3. So this method converges faster than F_2^T.

3) Case m = 4.

    F_4^T(x_i) = x_i - f(x_i)/f'(x_i) - f^2(x_i) f''(x_i) / (2[f'(x_i)]^3)
                     + (f'(x_i) f'''(x_i) - 3[f''(x_i)]^2) f^3(x_i) / (3! [f'(x_i)]^5).
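The gain from a higher-order method is easy to observe. The sketch below applies F_2^T (Newton) and F_3^T to f(x) = x^2 - 2, whose root is sqrt(2); the test function and the hand-coded derivatives are our own choices:

```python
import math

# f(x) = x^2 - 2, with f' and f'' coded by hand
f   = lambda x: x * x - 2
fp  = lambda x: 2 * x
fpp = lambda x: 2.0

def F2(x):
    """Newton's method (case m = 2), order 2."""
    return x - f(x) / fp(x)

def F3(x):
    """Case m = 3, order 3."""
    t = f(x) / fp(x)
    return x - t - 0.5 * t * t * (fpp(x) / fp(x))

x2 = x3 = 1.5
for _ in range(2):          # two steps of each method from the same start
    x2, x3 = F2(x2), F3(x3)
print(abs(x2 - math.sqrt(2)), abs(x3 - math.sqrt(2)))
```

After the same number of steps, the error of F_3^T is several orders of magnitude smaller than that of F_2^T, as the orders 3 and 2 predict.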
Remark 18. The higher the order of a method, the faster it converges. Still, this doesn't mean that a higher-order method is more efficient (consider the computational requirements). On the contrary, the most efficient are the methods of relatively low order, due to their low complexity (methods F_2^T and F_3^T).

Newton's method

According to Remark 14, there always exists a neighborhood of alpha where the F_2^T method is convergent. Choosing x_0 in such a neighborhood allows approximating alpha with a prescribed error eps by the terms of the sequence

    x_{i+1} = F_2^T(x_i) = x_i - f(x_i)/f'(x_i),   i = 0, 1, ....

If alpha is a solution of equation (2) and x_{n+1} = F_2^T(x_n), then for the approximation error Remark 17 gives

    |alpha - x_{n+1}| <= (1/2) [f(x_n)]^2 M_2 g.
Lemma 19. Let alpha ∈ (a, b) be a solution of equation (2) and let x_n ∈ (a, b). Then

    |alpha - x_n| <= (1/m_1) |f(x_n)|,   with m_1 = min_{a <= x <= b} |f'(x)|.

Proof. We use the mean value formula

    f(alpha) - f(x_n) = f'(xi)(alpha - x_n),

with xi in the interval determined by alpha and x_n. From f(alpha) = 0 and |f'(x)| >= m_1 for x ∈ (a, b), it follows that

    |f(x_n)| >= m_1 |alpha - x_n|,   that is,   |alpha - x_n| <= (1/m_1) |f(x_n)|.

In practical applications the following evaluation is more useful:
Lemma 20. If f ∈ C^1[a, b] and F_2^T is convergent, then there exists n_0 ∈ N such that

    |alpha - x_n| <= |x_n - x_{n-1}|,   n > n_0.

Remark 21. The starting value is chosen randomly. If, after a fixed number of iterations, the required precision is not achieved, i.e., the condition

    |x_n - x_{n-1}| <= eps

does not hold for a prescribed positive eps, the computation has to be started over with a new starting value.

A modified form of Newton's method uses the same value of the derivative during the whole computation:

    x_{k+1} = x_k - f(x_k)/f'(x_0),   k = 0, 1, ....

It is very useful because it doesn't require the computation of f' at x_j, j = 1, 2, ..., but the order is no longer equal to 2.
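A sketch of the modified method (Python, names of our own choosing; the test function x^2 - 2 is our own choice). The derivative is evaluated once, at the starting point, and reused in every step:

```python
import math

f  = lambda x: x * x - 2      # test function of our own choosing, root sqrt(2)
fp = lambda x: 2 * x

def modified_newton(x0, eps=1e-10, max_iter=1000):
    """x_{k+1} = x_k - f(x_k)/f'(x_0): the derivative is frozen at x_0."""
    d = fp(x0)                # computed once, reused in every step
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / d
        if abs(x_new - x) < eps:
            return x_new, k
        x = x_new
    return x, max_iter

x, k = modified_newton(1.5)
print(x, k)   # approaches sqrt(2), but only linearly
```

Convergence is linear rather than quadratic, so more iterations are needed; the saving is one derivative evaluation per step.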
Another way of obtaining Newton's method. We start with x_0 as an initial guess, sufficiently close to alpha. The next approximation x_1 is the point at which the tangent line to f at (x_0, f(x_0)) crosses the Ox-axis. The value x_1 is much closer to the root alpha than x_0. We write the equation of the tangent line at (x_0, f(x_0)):

    y - f(x_0) = f'(x_0)(x - x_0).

If x = x_1 is the point where this line intersects the Ox-axis, then y = 0, and solving for x_1 gives

    -f(x_0) = f'(x_0)(x_1 - x_0),   i.e.,   x_1 = x_0 - f(x_0)/f'(x_0).

By repeating the process using the tangent line at (x_1, f(x_1)), we obtain for x_2

    x_2 = x_1 - f(x_1)/f'(x_1).
For the general case we have

    x_{n+1} = x_n - f(x_n)/f'(x_n),   n >= 0.    (6)

[Figure: successive tangent lines at x_0, x_1, x_2, ... approaching the root alpha.]

The algorithm: Let x_0 be the initial approximation.

    for n = 0, 1, ..., ITMAX:
        x_{n+1} <- x_n - f(x_n)/f'(x_n).
A stopping criterion is

    |f(x_n)| <= eps   or   |x_{n+1} - x_n| <= eps   or   |x_{n+1} - x_n| / |x_{n+1}| <= eps,

where eps is a specified tolerance value.

Example 22. Use Newton's method to compute a root of

    x^3 - x^2 - 1 = 0,

to an accuracy of 10^(-4). Use x_0 = 1.
Sol. The derivative of f is f'(x) = 3x^2 - 2x. Using x_0 = 1 gives f(1) = -1 and f'(1) = 1, and so the first Newton iterate is

    x_1 = 1 - (-1)/1 = 2,

with f(2) = 3 and f'(2) = 8. The next iterate is

    x_2 = 2 - 3/8 = 1.625.

Continuing in this manner, we obtain a sequence of approximations which converges to the root alpha ~ 1.465571.
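The computation can be reproduced with a short program (names of our own choosing), using |x_{n+1} - x_n| <= eps as the stopping criterion:

```python
# Newton's method for f(x) = x^3 - x^2 - 1, starting from x_0 = 1
f  = lambda x: x**3 - x**2 - 1
fp = lambda x: 3 * x**2 - 2 * x

def newton(x0, eps=1e-4, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |x_{n+1} - x_n| <= eps."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / fp(x)
        if abs(x_new - x) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

x, k = newton(1.0)
# hand iterates: x_1 = 1 - (-1)/1 = 2, x_2 = 2 - 3/8 = 1.625, ...
print(x, k)   # root near 1.465571
```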
More informationTsung-Ming Huang. Matrix Computation, 2016, NTNU
Tsung-Ming Huang Matrix Computation, 2016, NTNU 1 Plan Gradient method Conjugate gradient method Preconditioner 2 Gradient method 3 Theorem Ax = b, A : s.p.d Definition A : symmetric positive definite
More informationESTIMATING THE OPTIMAL EXTRAPOLATION PARAMETER FOR EXTRAPOLATED ITERATIVE METHODS WHEN SOLVING SEQUENCES OF LINEAR SYSTEMS. A Thesis.
ESTIMATING THE OPTIMAL EXTRAPOLATION PARAMETER FOR EXTRAPOLATED ITERATIVE METHODS WHEN SOLVING SEQUENCES OF LINEAR SYSTEMS A Thesis Presented to The Graduate Faculty of The University of Akron In Partial
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationPHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.
PHYS 410/555 Computational Physics Solution of Non Linear Equations (a.k.a. Root Finding) (Reference Numerical Recipes, 9.0, 9.1, 9.4) We will consider two cases 1. f(x) = 0 1-dimensional 2. f(x) = 0 d-dimensional
More informationChapter 7. Iterative methods for large sparse linear systems. 7.1 Sparse matrix algebra. Large sparse matrices
Chapter 7 Iterative methods for large sparse linear systems In this chapter we revisit the problem of solving linear systems of equations, but now in the context of large sparse systems. The price to pay
More informationExact and Approximate Numbers:
Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.
More informationKasetsart University Workshop. Multigrid methods: An introduction
Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available
More informationScientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1
Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û
More informationNumerical Analysis: Solving Nonlinear Equations
Numerical Analysis: Solving Nonlinear Equations Mirko Navara http://cmp.felk.cvut.cz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office 104a
More information10.34: Numerical Methods Applied to Chemical Engineering. Lecture 7: Solutions of nonlinear equations Newton-Raphson method
10.34: Numerical Methods Applied to Chemical Engineering Lecture 7: Solutions of nonlinear equations Newton-Raphson method 1 Recap Singular value decomposition Iterative solutions to linear equations 2
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider
More informationCache Oblivious Stencil Computations
Cache Oblivious Stencil Computations S. HUNOLD J. L. TRÄFF F. VERSACI Lectures on High Performance Computing 13 April 2015 F. Versaci (TU Wien) Cache Oblivious Stencil Computations 13 April 2015 1 / 19
More informationMath 4329: Numerical Analysis Chapter 03: Newton s Method. Natasha S. Sharma, PhD
Mathematical question we are interested in numerically answering How to find the x-intercepts of a function f (x)? These x-intercepts are called the roots of the equation f (x) = 0. Notation: denote the
More informationBackground. Background. C. T. Kelley NC State University tim C. T. Kelley Background NCSU, Spring / 58
Background C. T. Kelley NC State University tim kelley@ncsu.edu C. T. Kelley Background NCSU, Spring 2012 1 / 58 Notation vectors, matrices, norms l 1 : max col sum... spectral radius scaled integral norms
More informationSolution of Nonlinear Equations
Solution of Nonlinear Equations In many engineering applications, there are cases when one needs to solve nonlinear algebraic or trigonometric equations or set of equations. These are also common in Civil
More informationPART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435
PART I Lecture Notes on Numerical Solution of Root Finding Problems MATH 435 Professor Biswa Nath Datta Department of Mathematical Sciences Northern Illinois University DeKalb, IL. 60115 USA E mail: dattab@math.niu.edu
More informationHence a root lies between 1 and 2. Since f a is negative and f(x 0 ) is positive The root lies between a and x 0 i.e. 1 and 1.
The Bisection method or BOLZANO s method or Interval halving method: Find the positive root of x 3 x = 1 correct to four decimal places by bisection method Let f x = x 3 x 1 Here f 0 = 1 = ve, f 1 = ve,
More informationCSC 576: Linear System
CSC 576: Linear System Ji Liu Department of Computer Science, University of Rochester September 3, 206 Linear Equations Consider solving linear equations where A R m n and b R n m and n could be extremely
More informationFEM and Sparse Linear System Solving
FEM & sparse system solving, Lecture 7, Nov 3, 2017 1/46 Lecture 7, Nov 3, 2015: Introduction to Iterative Solvers: Stationary Methods http://people.inf.ethz.ch/arbenz/fem16 Peter Arbenz Computer Science
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationToday s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn
Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear
More informationMATH 1372, SECTION 33, MIDTERM 3 REVIEW ANSWERS
MATH 1372, SECTION 33, MIDTERM 3 REVIEW ANSWERS 1. We have one theorem whose conclusion says an alternating series converges. We have another theorem whose conclusion says an alternating series diverges.
More informationPartitioned Methods for Multifield Problems
Partitioned Methods for Multifield Problems Joachim Rang, 10.5.2017 10.5.2017 Joachim Rang Partitioned Methods for Multifield Problems Seite 1 Contents Blockform of linear iteration schemes Examples 10.5.2017
More informationNumerical Analysis: Solving Systems of Linear Equations
Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office
More informationMath 577 Assignment 7
Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the
More informationAP Calculus Testbank (Chapter 9) (Mr. Surowski)
AP Calculus Testbank (Chapter 9) (Mr. Surowski) Part I. Multiple-Choice Questions n 1 1. The series will converge, provided that n 1+p + n + 1 (A) p > 1 (B) p > 2 (C) p >.5 (D) p 0 2. The series
More informationCOURSE Numerical integration of functions
COURSE 6 3. Numerical integration of functions The need: for evaluating definite integrals of functions that has no explicit antiderivatives or whose antiderivatives are not easy to obtain. Let f : [a,
More information
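The Jacobi scheme described above — solve each equation for its diagonal unknown, iterate x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k))/a_ii, and stop when ‖x^(k) − x^(k−1)‖/‖x^(k)‖ < ε — can be sketched as follows. This is an illustrative implementation, not the course's own code; the function name `jacobi` and the small 2×2 test system are assumptions chosen for demonstration.

```python
import numpy as np

def jacobi(A, b, x0=None, eps=1e-6, max_iter=200):
    """Jacobi iteration for Ax = b with nonzero diagonal entries a_ii.

    Returns the approximate solution and the number of iterations used.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)

    D = np.diag(A)            # diagonal entries a_ii (assumed nonzero)
    R = A - np.diagflat(D)    # off-diagonal part of A

    for k in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii, all i at once
        x_new = (b - R @ x) / D
        # relative stopping criterion ||x^(k) - x^(k-1)|| / ||x^(k)|| < eps
        if np.linalg.norm(x_new - x) / max(np.linalg.norm(x_new), 1e-30) < eps:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example on a strictly diagonally dominant system (which guarantees
# convergence of the Jacobi method): 4x1 + x2 = 1, x1 + 3x2 = 2.
x, k = jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], eps=1e-10)
```

Note that diagonal dominance of the example system is what makes the iteration converge here; as in the course, the only structural requirement for the scheme to be *defined* is that every a_ii is nonzero.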