Math 5630: Iterative Methods for Systems of Equations
Hung Phan, UMass Lowell
March 22, 2018


1 Linear Systems

Consider the system
$$4x - y + z = 7$$
$$4x - 8y + z = -21$$
$$-2x + y + 5z = 15.$$
Solving the first equation for $x$, the second for $y$, and the third for $z$, we obtain
$$x = \tfrac{1}{4}(7 + y - z), \quad y = \tfrac{1}{8}(21 + 4x + z), \quad z = \tfrac{1}{5}(15 + 2x - y).$$

Jacobi Iteration: Given $(x_k, y_k, z_k)$, the next iterate is defined by
$$x_{k+1} = \tfrac{1}{4}(7 + y_k - z_k), \quad y_{k+1} = \tfrac{1}{8}(21 + 4x_k + z_k), \quad z_{k+1} = \tfrac{1}{5}(15 + 2x_k - y_k),$$
which can easily be written in matrix form
$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{bmatrix} = M \begin{bmatrix} x_k \\ y_k \\ z_k \end{bmatrix} + c.$$

Gauss-Seidel Iteration: Given $(x_k, y_k, z_k)$, the next iterate is defined by
$$x_{k+1} = \tfrac{1}{4}(7 + y_k - z_k), \quad y_{k+1} = \tfrac{1}{8}(21 + 4x_{k+1} + z_k), \quad z_{k+1} = \tfrac{1}{5}(15 + 2x_{k+1} - y_{k+1}).$$

To derive the matrix form of the Gauss-Seidel iteration, suppose the system is
$$a_{11}x + a_{12}y + a_{13}z = b_1$$
$$a_{21}x + a_{22}y + a_{23}z = b_2$$
$$a_{31}x + a_{32}y + a_{33}z = b_3.$$
Then Gauss-Seidel is
$$a_{11}x_{k+1} = -a_{12}y_k - a_{13}z_k + b_1$$
$$a_{22}y_{k+1} = -a_{21}x_{k+1} - a_{23}z_k + b_2$$
$$a_{33}z_{k+1} = -a_{31}x_{k+1} - a_{32}y_{k+1} + b_3.$$
We rewrite this as
$$a_{11}x_{k+1} = -a_{12}y_k - a_{13}z_k + b_1$$
$$a_{21}x_{k+1} + a_{22}y_{k+1} = -a_{23}z_k + b_2$$
$$a_{31}x_{k+1} + a_{32}y_{k+1} + a_{33}z_{k+1} = b_3.$$
This means
$$\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{bmatrix} = -\begin{bmatrix} 0 & a_{12} & a_{13} \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_k \\ y_k \\ z_k \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$$

In general, consider the linear system $Ax = b$. We decompose $A = L + D + U$, where $L$, $D$, $U$ are the strictly lower triangular, diagonal, and strictly upper triangular portions of $A$. Then the Jacobi method can be derived as
$$Dx = -(L+U)x + b \quad\Longrightarrow\quad x_{k+1} = -D^{-1}(L+U)x_k + D^{-1}b,$$
and the Gauss-Seidel method can be derived as
$$(L+D)x = -Ux + b \quad\Longrightarrow\quad x_{k+1} = -(L+D)^{-1}Ux_k + (L+D)^{-1}b.$$
In Matlab/Julia, we can obtain $L$ and $U$ by L=tril(A,-1) and U=triu(A,1).
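To make the splitting concrete, here is a minimal Julia sketch of both iterations; the function names `jacobi` and `gauss_seidel`, the tolerance, and the iteration cap are illustrative choices, not part of the notes.

```julia
using LinearAlgebra

# Jacobi: x_{k+1} = -D^{-1}(L+U) x_k + D^{-1} b
function jacobi(A, b; x0 = zeros(length(b)), tol = 1e-10, maxiter = 1000)
    D = Diagonal(A)               # diagonal part D
    LU = A - Matrix(D)            # L + U (everything off the diagonal)
    x = copy(x0)
    for k in 1:maxiter
        xnew = D \ (b - LU * x)   # solve D x_{k+1} = -(L+U) x_k + b
        norm(xnew - x, Inf) < tol && return xnew
        x = xnew
    end
    return x
end

# Gauss-Seidel: (L+D) x_{k+1} = -U x_k + b
function gauss_seidel(A, b; x0 = zeros(length(b)), tol = 1e-10, maxiter = 1000)
    LD = LowerTriangular(A)       # L + D
    U = triu(A, 1)                # strictly upper triangular part
    x = copy(x0)
    for k in 1:maxiter
        xnew = LD \ (b - U * x)   # one sweep by forward substitution
        norm(xnew - x, Inf) < tol && return xnew
        x = xnew
    end
    return x
end

A = [4.0 -1.0 1.0; 4.0 -8.0 1.0; -2.0 1.0 5.0]
b = [7.0, -21.0, 15.0]
jacobi(A, b)        # ≈ [2.0, 4.0, 3.0]
gauss_seidel(A, b)  # ≈ [2.0, 4.0, 3.0]
```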

Successive Over-Relaxation (SOR) with parameter $\omega \in (1, 2)$: at each entry, this method interpolates between the Gauss-Seidel entry and the old entry as follows:
$$a_{11}\tilde{x}_k = -a_{12}y_k - a_{13}z_k + b_1, \quad\text{and}\quad x_{k+1} = (1-\omega)x_k + \omega\tilde{x}_k,$$
which implies
$$a_{11}x_{k+1} = (1-\omega)a_{11}x_k - \omega a_{12}y_k - \omega a_{13}z_k + \omega b_1.$$
Next,
$$a_{22}\tilde{y}_k = -a_{21}x_{k+1} - a_{23}z_k + b_2, \quad\text{and}\quad y_{k+1} = (1-\omega)y_k + \omega\tilde{y}_k,$$
which implies
$$\omega a_{21}x_{k+1} + a_{22}y_{k+1} = (1-\omega)a_{22}y_k - \omega a_{23}z_k + \omega b_2.$$
Deriving a similar formula for $z_{k+1}$, we can write
$$\begin{bmatrix} a_{11} & 0 & 0 \\ \omega a_{21} & a_{22} & 0 \\ \omega a_{31} & \omega a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{bmatrix} = \begin{bmatrix} (1-\omega)a_{11} & -\omega a_{12} & -\omega a_{13} \\ 0 & (1-\omega)a_{22} & -\omega a_{23} \\ 0 & 0 & (1-\omega)a_{33} \end{bmatrix}\begin{bmatrix} x_k \\ y_k \\ z_k \end{bmatrix} + \omega\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$$
Dividing by $\omega$ and writing $x_k$ as a vector, we obtain the SOR equation
$$\left(\tfrac{1}{\omega}D + L\right)x_{k+1} = \left(\left(\tfrac{1}{\omega} - 1\right)D - U\right)x_k + b.$$
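Here is a correspondingly minimal Julia sketch of SOR in exactly this matrix form; the function name `sor`, the stopping rule, and the choice $\omega = 1.1$ are illustrative assumptions, not from the notes.

```julia
using LinearAlgebra

# SOR: (D/ω + L) x_{k+1} = ((1/ω - 1) D - U) x_k + b
function sor(A, b, ω; x0 = zeros(length(b)), tol = 1e-10, maxiter = 1000)
    D = Diagonal(A)
    L = tril(A, -1)                 # strictly lower part
    U = triu(A, 1)                  # strictly upper part
    Q = LowerTriangular(D / ω + L)  # left-hand side matrix
    N = (1/ω - 1) * D - U           # right-hand side matrix
    x = copy(x0)
    for k in 1:maxiter
        xnew = Q \ (N * x + b)      # one SOR sweep by forward substitution
        norm(xnew - x, Inf) < tol && return xnew
        x = xnew
    end
    return x
end

A = [4.0 -1.0 1.0; 4.0 -8.0 1.0; -2.0 1.0 5.0]
b = [7.0, -21.0, 15.0]
sor(A, b, 1.1)   # expected ≈ [2.0, 4.0, 3.0]; with ω = 1.0 this reduces to Gauss-Seidel
```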

2 Strictly Diagonally Dominant Matrices

Definition 1 A matrix $A \in \mathbb{R}^{N\times N}$ is said to be strictly diagonally dominant if
$$|a_{kk}| > \sum_{i \neq k} |a_{ki}| \quad\text{for } k = 1, 2, \ldots, N.$$

Theorem 2 Suppose that $A \in \mathbb{R}^{N\times N}$ is strictly diagonally dominant. Then $A$ is invertible.

Proof. Suppose to the contrary that there exists $x = (x_1, \ldots, x_N) \in \mathbb{R}^N \setminus \{0\}$ such that $Ax = 0$. Let $j$ be an index with largest absolute value $|x_j|$, that is, $|x_j| = \max_i |x_i| > 0$. The $j$-th row of $Ax = 0$ reads as
$$0 = a_{j1}x_1 + a_{j2}x_2 + \cdots + a_{jN}x_N \quad\Longrightarrow\quad a_{jj}x_j = -\sum_{i \neq j} a_{ji}x_i.$$
It follows that
$$0 < |a_{jj}||x_j| = \Big|\sum_{i \neq j} a_{ji}x_i\Big| \leq \sum_{i \neq j} |a_{ji}||x_i| \leq \Big(\sum_{i \neq j} |a_{ji}|\Big)\max_i |x_i| < |a_{jj}||x_j|,$$
which is a contradiction. Thus, $A$ is invertible. ∎

Theorem 3 Suppose $A = [a_{ij}] \in \mathbb{R}^{N\times N}$ is strictly diagonally dominant and $b = (b_i) \in \mathbb{R}^N$. Then the Jacobi iterates converge to the unique solution of $Ax = b$ for any starting point.

Proof. We use the matrix form of the Jacobi iterates $x_{k+1} = Mx_k + c$, where
$$M := -D^{-1}(L+U), \quad c := D^{-1}b.$$
We will show that $\|M\|_\infty < 1$. Indeed, each absolute row sum of $M$ is smaller than 1; for example, row 1 of $M$ reads as
$$M_1 = -\tfrac{1}{a_{11}}\begin{bmatrix} 0, & a_{12}, & \ldots, & a_{1N} \end{bmatrix} \quad\Longrightarrow\quad \|M_1\|_1 = \tfrac{1}{|a_{11}|}\sum_{i \neq 1} |a_{1i}| < 1.$$
This implies $\|M\|_\infty < 1$. ∎

Theorem 4 Suppose $A = [a_{ij}] \in \mathbb{R}^{N\times N}$ is strictly diagonally dominant and $b = (b_i) \in \mathbb{R}^N$. Then the Gauss-Seidel iterates converge to the unique solution of $Ax = b$ for any starting point.

Proof. The Gauss-Seidel iteration in matrix form reads as $x_{k+1} = Mx_k + c$, where
$$M = -(D+L)^{-1}U, \quad c = (D+L)^{-1}b.$$
We will show that $\|M\|_\infty < 1$. Take $x \in \mathbb{R}^N \setminus \{0\}$ and let $y = Mx$, i.e., $(D+L)y = -Ux$. Considering any row $j$, we have
$$a_{jj}y_j + \sum_{k<j} a_{jk}y_k = -\sum_{k>j} a_{jk}x_k.$$
Now let $j$ be such that $\|y\|_\infty = |y_j|$. Then the left- and right-hand sides satisfy
$$|\mathrm{LHS}| \geq |a_{jj}||y_j| - \sum_{k<j}|a_{jk}||y_k| \geq |a_{jj}||y_j| - \sum_{k<j}|a_{jk}||y_j| = \Big(|a_{jj}| - \sum_{k<j}|a_{jk}|\Big)\|y\|_\infty,$$
$$|\mathrm{RHS}| \leq \sum_{k>j}|a_{jk}||x_k| \leq \Big(\sum_{k>j}|a_{jk}|\Big)\|x\|_\infty.$$
It follows that
$$\|y\|_\infty \leq \frac{\sum_{k>j}|a_{jk}|}{|a_{jj}| - \sum_{k<j}|a_{jk}|}\,\|x\|_\infty \leq \gamma\,\|x\|_\infty, \quad\text{where}\quad \gamma = \max_j \frac{\sum_{k>j}|a_{jk}|}{|a_{jj}| - \sum_{k<j}|a_{jk}|} < 1.$$
This proves $\|M\|_\infty < 1$. ∎
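Theorems 3 and 4 are easy to check numerically: form the two iteration matrices and compute their infinity norms. A small sketch under the same splitting conventions as above (the helper name `is_sdd` is my own):

```julia
using LinearAlgebra

# Strict diagonal dominance: |a_kk| > Σ_{i≠k} |a_ki|, i.e. 2|a_kk| > Σ_i |a_ki|
is_sdd(A) = all(2 * abs(A[k, k]) > sum(abs, A[k, :]) for k in axes(A, 1))

A = [4.0 -1.0 1.0; 4.0 -8.0 1.0; -2.0 1.0 5.0]
D = Diagonal(A); L = tril(A, -1); U = triu(A, 1)

M_jacobi = -(D \ (L + U))        # Jacobi iteration matrix -D^{-1}(L+U)
M_gs     = -((D + L) \ U)        # Gauss-Seidel iteration matrix -(D+L)^{-1}U

is_sdd(A)                        # true
opnorm(M_jacobi, Inf)            # 0.625 for this A, consistent with Theorem 3
opnorm(M_gs, Inf)                # 0.5 for this A, consistent with Theorem 4
```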

3 Positive Definite Matrices

Definition 5 A complex matrix $A = [a_{ij}] \in \mathbb{C}^{N\times N}$ is said to be Hermitian if $A^* = A$, where $A^* := [\overline{a_{ji}}]$ is the conjugate transpose. In the formula, $\overline{a_{ij}}$ is the complex conjugate, i.e., if $a_{ij} = x + iy \in \mathbb{C}$ then $\overline{a_{ij}} = x - iy$.

Theorem 6 Let $A \in \mathbb{C}^{N\times N}$ be Hermitian and positive definite, and let $\omega \in (0, 2)$. Then the SOR method for solving $Ax = b$ converges to the unique solution for any starting point.

Proof. Recall that the SOR method is written as
$$\left(\tfrac{1}{\omega}D + L\right)x_{k+1} = \left(\left(\tfrac{1}{\omega}-1\right)D - U\right)x_k + b.$$
Define $Q = \tfrac{1}{\omega}D + L$ and $G = Q^{-1}\left(\left(\tfrac{1}{\omega}-1\right)D - U\right)$. We will show that the spectral radius satisfies $\rho(G) < 1$. Let $(\lambda, x)$ be a (possibly complex) eigenpair of $G$. Define $y := (I - G)x = (1-\lambda)x$. Notice that
$$I - G = Q^{-1}\Big[Q - \Big(\tfrac{1}{\omega}-1\Big)D + U\Big] = Q^{-1}A.$$
So $y = Q^{-1}Ax$, i.e., $Qy = Ax$. Next,
$$(Q - A)y = Ax - AQ^{-1}Ax = A(I - Q^{-1}A)x = AGx = \lambda Ax.$$
It follows that
$$\langle Qy, y\rangle = \langle Ax, y\rangle = \langle Ax, (1-\lambda)x\rangle, \quad\text{and}\quad \langle y, (Q-A)y\rangle = \langle y, \lambda Ax\rangle = (1-\lambda)\overline{\lambda}\,\langle x, Ax\rangle.$$
So
$$\tfrac{1}{\omega}\langle Dy, y\rangle + \langle Ly, y\rangle = (1-\overline{\lambda})\langle Ax, x\rangle, \qquad \Big(\tfrac{1}{\omega}-1\Big)\langle Dy, y\rangle - \langle y, Uy\rangle = (1-\lambda)\overline{\lambda}\,\langle x, Ax\rangle.$$
Since $\langle Ly, y\rangle = \langle y, Uy\rangle$ (because $U = L^*$ when $A$ is Hermitian), adding the two equations gives
$$\Big(\tfrac{2}{\omega} - 1\Big)\langle Dy, y\rangle = \big(1 - |\lambda|^2\big)\langle Ax, x\rangle.$$
Since $x \neq 0$ and $A$ is invertible, $y = Q^{-1}Ax \neq 0$. So $\langle Dy, y\rangle > 0$, $\langle Ax, x\rangle > 0$, and $\tfrac{2}{\omega} - 1 > 0$. Thus, we have $1 - |\lambda|^2 > 0$, which means $|\lambda| < 1$. The proof is complete. ∎
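Theorem 6 can be probed numerically: form $G$ for a Hermitian positive definite $A$ and several values of $\omega \in (0,2)$, and check that the spectral radius stays below 1. A minimal Julia sketch (the test matrix and function name are my own choices):

```julia
using LinearAlgebra

# Spectral radius of the SOR iteration matrix G = Q^{-1}((1/ω - 1)D - U)
function sor_spectral_radius(A, ω)
    D = Diagonal(A); L = tril(A, -1); U = triu(A, 1)
    Q = D / ω + L
    G = Q \ ((1/ω - 1) * D - U)
    maximum(abs, eigvals(G))     # ρ(G)
end

A = [4.0 1.0 0.0; 1.0 4.0 1.0; 0.0 1.0 4.0]   # symmetric positive definite
[sor_spectral_radius(A, ω) for ω in (0.5, 1.0, 1.5, 1.9)]  # all < 1, per Theorem 6
```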

4 Picard Iterates for Nonlinear Systems

Suppose we solve the system
$$\begin{cases} x = f(x, y) \\ y = g(x, y) \end{cases}$$
by the Picard iterates
$$\begin{cases} x_{k+1} = f(x_k, y_k) \\ y_{k+1} = g(x_k, y_k). \end{cases}$$
Suppose $f$ and $g$ are continuously differentiable around a fixed point $(\bar{x}, \bar{y})$. Then by the mean value theorem,
$$\begin{bmatrix} x_{k+1} - \bar{x} \\ y_{k+1} - \bar{y} \end{bmatrix} = \begin{bmatrix} f(x_k, y_k) - f(\bar{x}, \bar{y}) \\ g(x_k, y_k) - g(\bar{x}, \bar{y}) \end{bmatrix} = \begin{bmatrix} f_x(u_1, v_1) & f_y(u_1, v_1) \\ g_x(u_2, v_2) & g_y(u_2, v_2) \end{bmatrix}\begin{bmatrix} x_k - \bar{x} \\ y_k - \bar{y} \end{bmatrix},$$
where $(u_1, v_1)$ and $(u_2, v_2)$ are points on the segment connecting $(\bar{x}, \bar{y})$ and $(x_k, y_k)$. From this equation, we easily derive the following result.

Theorem 7 Let $(\bar{x}, \bar{y})$ be a fixed point of $\begin{cases} x = f(x,y) \\ y = g(x,y), \end{cases}$ where $f$ and $g$ are continuously differentiable. Suppose that for all points $(u_1, v_1), (u_2, v_2)$ around $(\bar{x}, \bar{y})$, the matrix
$$\begin{bmatrix} f_x(u_1, v_1) & f_y(u_1, v_1) \\ g_x(u_2, v_2) & g_y(u_2, v_2) \end{bmatrix}$$
has infinity norm less than 1. Then for any starting point $(x_0, y_0)$ close to $(\bar{x}, \bar{y})$, the Picard iterates converge to $(\bar{x}, \bar{y})$.

Definition 8 (Jacobian matrix) The matrix
$$\begin{bmatrix} f_x(x,y) & f_y(x,y) \\ g_x(x,y) & g_y(x,y) \end{bmatrix}, \quad\text{or equivalently,}\quad \begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\[2pt] \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{bmatrix},$$
is called the Jacobian matrix of the pair of functions $(f, g)$.

Example 9 Use Picard iteration to solve the system of nonlinear equations
$$x^2 - 2x - y + 0.5 = 0$$
$$x^2 + 4y^2 - 4 = 0.$$

Solution. The problem is to find the intersection of the parabola $y = x^2 - 2x + 0.5$ and the ellipse $\frac{x^2}{4} + y^2 = 1$. There are two solutions: one near $(-0.2, 1)$ and one near $(1.9, 0.3)$. First, convert the system to
$$x = \frac{x^2 - y + 0.5}{2}, \quad y = \frac{-x^2 - 4y^2 + 8y + 4}{8},$$
and use the iterates
$$x_{k+1} = \frac{x_k^2 - y_k + 0.5}{2}, \quad y_{k+1} = \frac{-x_k^2 - 4y_k^2 + 8y_k + 4}{8}.$$
Then with the starting point $(0, 1)$, we find the solution $(-0.2222146, 0.9938084)$. But this iterate cannot detect the second solution. To see why this is the case, we define
$$f(x,y) = \frac{x^2 - y + 0.5}{2} \quad\text{and}\quad g(x,y) = \frac{-x^2 - 4y^2 + 8y + 4}{8},$$
and consider the Jacobian matrix of $(f, g)$:
$$J = \begin{bmatrix} x & -\tfrac{1}{2} \\[2pt] -\tfrac{x}{4} & -y + 1 \end{bmatrix}.$$
Consider the neighborhood $\Omega = \{-0.4 \leq x \leq 0,\ 0.8 \leq y \leq 1.2\}$ around the first solution. There we have
$$|f_x| + |f_y| = |x| + 0.5 \leq 0.9 < 1, \qquad |g_x(x,y)| + |g_y(x,y)| = \tfrac{|x|}{4} + |{-y} + 1| \leq 0.1 + 0.2 < 1.$$
This implies that for all $(x,y) \in \Omega$, the infinity norm of the Jacobian satisfies $\|J\|_\infty < 1$. By Theorem 7, the Picard iterates converge to a solution for every starting point in $\Omega$. On the other hand, if we consider the neighborhood $\Omega' = \{1.8 \leq x \leq 2.2,\ 0.2 \leq y \leq 0.4\}$ around the second solution, then we check that
$$|f_x| + |f_y| = |x| + 0.5 \geq 1.8 + 0.5 > 1.$$
This means the Picard iterates might not converge for starting points in $\Omega'$. In order to find the second solution, we will use the Newton method below.
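Before moving on, here is a minimal Julia sketch of the Picard iterates for Example 9; the function name `picard` and the tolerance are illustrative choices.

```julia
# Picard iteration for x = f(x, y), y = g(x, y), specialized to Example 9
f(x, y) = (x^2 - y + 0.5) / 2
g(x, y) = (-x^2 - 4y^2 + 8y + 4) / 8

function picard(x, y; tol = 1e-8, maxiter = 10_000)
    for k in 1:maxiter
        xnew, ynew = f(x, y), g(x, y)   # update both components from old values
        if max(abs(xnew - x), abs(ynew - y)) < tol
            return (xnew, ynew)
        end
        x, y = xnew, ynew
    end
    return (x, y)
end

picard(0.0, 1.0)   # ≈ (-0.2222146, 0.9938084), as in the text
```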

5 Newton Method for Nonlinear Systems

Consider the system
$$\begin{cases} 0 = f(x, y) \\ 0 = g(x, y). \end{cases}$$
To develop the Newton method for solving this system, we consider small changes of the functions near a point $(x_0, y_0)$:
$$f(x, y) \approx f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0),$$
$$g(x, y) \approx g(x_0, y_0) + g_x(x_0, y_0)(x - x_0) + g_y(x_0, y_0)(y - y_0).$$
If $(x, y)$ is a solution, then
$$J(x_0, y_0)\begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \approx -\begin{bmatrix} f(x_0, y_0) \\ g(x_0, y_0) \end{bmatrix}, \quad\text{where}\quad J(x_0, y_0) := \begin{bmatrix} f_x(x_0, y_0) & f_y(x_0, y_0) \\ g_x(x_0, y_0) & g_y(x_0, y_0) \end{bmatrix}.$$
If the Jacobian matrix is nonsingular, then the Newton method can be described as follows:
$$\text{Given } \begin{bmatrix} x_k \\ y_k \end{bmatrix}, \text{ solve } J(x_k, y_k)\begin{bmatrix} \Delta x_k \\ \Delta y_k \end{bmatrix} = -\begin{bmatrix} f(x_k, y_k) \\ g(x_k, y_k) \end{bmatrix} \text{ for } \begin{bmatrix} \Delta x_k \\ \Delta y_k \end{bmatrix};$$
$$\text{set } \begin{bmatrix} x_{k+1} \\ y_{k+1} \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \end{bmatrix} + \begin{bmatrix} \Delta x_k \\ \Delta y_k \end{bmatrix}, \text{ then repeat.}$$

Theorem 10 Consider the system $\begin{cases} 0 = f(x,y) \\ 0 = g(x,y). \end{cases}$ If the Jacobian of $(f, g)$ is invertible in a neighborhood of a solution, then Newton's method converges when the starting point is close to the solution.

Example 11 Use the Newton method to solve the system of nonlinear equations
$$f(x,y) = x^2 - 2x - y + 0.5 = 0$$
$$g(x,y) = x^2 + 4y^2 - 4 = 0.$$

Solution. The Jacobian is
$$J = \begin{bmatrix} 2x - 2 & -1 \\ 2x & 8y \end{bmatrix}.$$
To check whether $J$ is invertible, we consider the determinant $\det J = 8y(2x - 2) + 2x$. For the solution near $(-0.2, 1)$, we have
$$\det J \approx \det J(-0.2, 1) = 8\big(2(-0.2) - 2\big) + 2(-0.2) = -19.6 \neq 0.$$
For the solution near $(1.9, 0.3)$, we have
$$\det J \approx \det J(1.9, 0.3) = 8(0.3)\big(2(1.9) - 2\big) + 2(1.9) = 8.12 \neq 0.$$
So we can apply the Newton method near both solutions. The Newton iterates read as
$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \end{bmatrix} - J(x_k, y_k)^{-1}\begin{bmatrix} f(x_k, y_k) \\ g(x_k, y_k) \end{bmatrix}.$$
Using the Newton method with $(x_0, y_0) = (0, 1)$, we obtain the solution $(-0.2222146, 0.9938084)$. Using the Newton method with $(x_0, y_0) = (2, 0)$, we obtain the solution $(1.900677, 0.311219)$.
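For completeness, a minimal Julia sketch of the Newton iterates for Example 11; the names `F`, `J`, and `newton` and the stopping rule are my own choices.

```julia
using LinearAlgebra

# Newton's method for the 2x2 nonlinear system of Example 11
F(v) = [v[1]^2 - 2v[1] - v[2] + 0.5,
        v[1]^2 + 4v[2]^2 - 4]
J(v) = [2v[1]-2  -1.0;
        2v[1]    8v[2]]

function newton(v; tol = 1e-12, maxiter = 50)
    for k in 1:maxiter
        δ = J(v) \ (-F(v))      # solve J Δ = -F for the Newton step
        v = v + δ
        norm(δ, Inf) < tol && return v
    end
    return v
end

newton([0.0, 1.0])   # ≈ [-0.2222146, 0.9938084]
newton([2.0, 0.0])   # ≈ [1.900677, 0.311219]
```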