SyDe312 (Winter 2005) Unit 1 - Solutions (continued)

March 2005

Chapter 6 - Linear Systems

Problem 6.6-2b

Iterative solution by the Jacobi and Gauss-Seidel iteration methods.

Given: b = [0, -7, 7]^T, x^(0) = [0, 0, 0]^T, and the system

    9x_1 +   x_2 +   x_3 =  0
    2x_1 + 10x_2 +  3x_3 = -7
    3x_1 +  4x_2 + 11x_3 =  7

Solving each equation for its diagonal unknown:

    x_1 = (1/9)(0 - x_2 - x_3)
    x_2 = (1/10)(-7 - 2x_1 - 3x_3)
    x_3 = (1/11)(7 - 3x_1 - 4x_2)

Jacobi method:

    x_1^(k+1) = (1/9)(-x_2^(k) - x_3^(k))
    x_2^(k+1) = (1/10)(-7 - 2x_1^(k) - 3x_3^(k))
    x_3^(k+1) = (1/11)(7 - 3x_1^(k) - 4x_2^(k))

Gauss-Seidel method:

    x_1^(k+1) = (1/9)(-x_2^(k) - x_3^(k))
    x_2^(k+1) = (1/10)(-7 - 2x_1^(k+1) - 3x_3^(k))
    x_3^(k+1) = (1/11)(7 - 3x_1^(k+1) - 4x_2^(k+1))

For example, beginning with the initial guess x^(0) = [0, 0, 0]^T, in the Jacobi method we get

    x_1^(1) = (1/9)(0 - 0 - 0) = 0
    x_2^(1) = (1/10)(-7 - 2(0) - 3(0)) = -0.7
    x_3^(1) = (1/11)(7 - 3(0) - 4(0)) = 7/11 = 0.63636

and with the Gauss-Seidel method:

    x_1^(1) = (1/9)(0 - 0 - 0) = 0
    x_2^(1) = (1/10)(-7 - 2(0) - 3(0)) = -0.7
    x_3^(1) = (1/11)(7 - 3(0) - 4(-0.7)) = 9.8/11 = 0.89091

Iterating this procedure until ||x - x^(k)||_inf <= 0.5 x 10^-4, where x = [0, -1, 1]^T is the exact solution, we get the tables below. "Error" is ||x - x^(k)||_inf and "Ratio" is the ratio of successive errors.

 k   x_1^k        x_2^k      x_3^k     Error     Ratio
 1   0.0         -0.70000    0.63636   3.64E-1   0.364
 2   7.0707E-3   -0.89091    0.89091   1.09E-1   0.300
 3  -6.67E-9     -0.96869    0.95840   4.16E-2   0.381
 4   1.1427E-3   -0.98752    0.98861   1.25E-2   0.300
 5  -1.2142E-4   -0.99681    0.99515   4.85E-3   0.389
 6   1.8468E-4   -0.99852    0.99887   1.48E-3   0.305
 7  -3.9246E-5   -0.99970    0.99941   5.88E-4   0.398
 8   3.19E-5     -0.99982    0.99990   1.84E-4   0.313
 9  -9.535E-6    -0.99998    0.99992   7.58E-5   0.411
10   5.8346E-6   -0.99998    0.99999   2.46E-5   0.325

Table 1: Problem 6.6-2b Jacobi Iteration

 k   x_1^k        x_2^k      x_3^k     Error     Ratio
 1   0.0         -0.70000    0.89091   3.00E-1   3.00E-1
 2  -2.1212E-2   -0.96303    0.99234   3.70E-2   1.23E-1
 3  -3.2568E-3   -0.99705    0.99982   3.26E-3   8.81E-2
 4  -3.0722E-4   -0.99988    1.00004   3.07E-4   9.43E-2
 5  -1.7557E-5   -1.00001    1.00001   1.76E-5   5.72E-2

Table 2: Problem 6.6-2b Gauss-Seidel Iteration
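The tables can be reproduced with a short MATLAB script. This is a minimal sketch (the variable names, loop bound, and output formatting are mine, not the textbook's); it runs the Jacobi sweep, and the comment notes the one change needed for Gauss-Seidel:

xs = [0; -1; 1];                 % true solution, used for the Error column
x  = [0; 0; 0];
errold = norm(xs - x, inf);      % ||x - x^(0)|| = 1
for k = 1:50
    % Jacobi: the whole right-hand side is evaluated with the old iterate
    x = [ ( 0 -   x(2) -   x(3)) / 9;
          (-7 - 2*x(1) - 3*x(3)) / 10;
          ( 7 - 3*x(1) - 4*x(2)) / 11 ];
    % (for Gauss-Seidel, update x(1), x(2), x(3) one at a time instead)
    err = norm(xs - x, inf);
    fprintf('%2d  %10.5f %10.5f %10.5f  %9.2e  %6.3f\n', k, x, err, err/errold);
    if err <= 0.5e-4, break; end
    errold = err;
end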

Problem 6.6-3

In case (a), for the system

     x_1 + 4x_2 = 1
    4x_1 +  x_2 = 0,

for which ||M||_inf = 4 > 1, the iteration methods do not converge. Starting from x^(0) = [0, 0]^T:

 k   x_1^k     x_2^k
 1   1              0
 2   1             -4
 3   17            -4
 4   17           -68
 5   273          -68
 6   273        -1092
 7   4369       -1092
 8   4369      -17476
 9   69905     -17476
10   69905    -279620

Table 3: Problem 6.6-3a Jacobi Iteration

 k   x_1^k        x_2^k
 1   1                -4
 2   17              -68
 3   273           -1092
 4   4369         -17476
 5   69905       -279620
 6   1118481    -4473924
 7   17895697  -71582788
 8   2.86E+8   -1.1453E+9
 9   4.58E+9   -1.8325E+10
10   7.33E+10  -2.93E+11

Table 4: Problem 6.6-3a Gauss-Seidel Iteration

So the Gauss-Seidel method diverges more rapidly.

In case (b), for the system with the equations in the opposite order,

    4x_1 +  x_2 = 0
     x_1 + 4x_2 = 1,

the coefficient matrix is diagonally dominant and both methods converge. (The exact solution is x = [-1/15, 4/15]^T = [-0.0667, 0.2667]^T.)

 k   x_1^k      x_2^k     Error
 1    0.00000   0.25000   6.67E-2
 2   -0.06250   0.25000   1.67E-2
 3   -0.06250   0.26563   4.17E-3
 4   -0.06641   0.26563   1.04E-3
 5   -0.06641   0.26660   2.60E-4
 6   -0.06665   0.26660   6.51E-5
 7   -0.06665   0.26666   1.63E-5
 8   -0.06667   0.26666   4.07E-6

Table 5: Problem 6.6-3b Jacobi Iteration

 k   x_1^k      x_2^k     Error
 1    0.00000   0.25000   6.67E-2
 2   -0.06250   0.26563   4.17E-3
 3   -0.06641   0.26660   2.60E-4
 4   -0.06665   0.26666   1.63E-5
 5   -0.06667   0.26667   1.02E-6

Table 6: Problem 6.6-3b Gauss-Seidel Iteration

So the Gauss-Seidel method converges faster.
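The contrast between the two orderings is visible directly from the Jacobi iteration matrix M = -D^(-1)(A - D). A small MATLAB check (my own sketch, using the matrices as written above):

Aa = [1 4; 4 1];                       % case (a) ordering
Ab = [4 1; 1 4];                       % case (b) ordering
Da = diag(diag(Aa));  Ma = Da \ (Da - Aa);
Db = diag(diag(Ab));  Mb = Db \ (Db - Ab);
norm(Ma, inf)                          % 4:    the iteration diverges
norm(Mb, inf)                          % 0.25: the iteration converges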

Problem 6.6-4

A = [  4  -1  -1   0
      -1   4   0  -1
      -1   0   4  -1
       0  -1  -1   4 ],    b = [5, -3, -7, 9]^T,    x^(0) = [0, 0, 0, 0]^T.

(The exact solution is x = [1, 0, -1, 2]^T.)

[The following solutions use the notation of the textbook for the iteration matrices etc. You can easily translate to relate these to the matrices defined in the lecture notes.]

Recall that in Jacobi:

N = [ a_11                ]
    [      a_22           ]
    [            ...      ]
    [                a_nn ]

and P = N - A. In Gauss-Seidel:

N = [ a_11                     ]
    [ a_21  a_22               ]
    [  :          ...          ]
    [ a_n1  a_n2  ...   a_nn   ],

P = N - A, and M = N^(-1)P.

 k   x_1^k     x_2^k     x_3^k     x_4^k    Error     Ratio
 1   1.2500   -0.7500   -1.7500   2.2500   7.50E-1   0.375
 2   0.6250    0.1250   -0.8750   1.6250   3.75E-1   0.5
 3   1.0625   -0.1875   -1.1875   2.0625   1.88E-1   0.5
 4   0.9063    0.0313   -0.9688   1.9063   9.38E-2   0.5
 5   1.0156   -0.0469   -1.0469   2.0156   4.69E-2   0.5
 6   0.9766    0.0078   -0.9922   1.9766   2.34E-2   0.5
 7   1.0039   -0.0117   -1.0117   2.0039   1.17E-2   0.5
 8   0.9941    0.0020   -0.9980   1.9941   5.86E-3   0.5
 9   1.0010   -0.0029   -1.0029   2.0010   2.93E-3   0.5
10   0.9985    0.0005   -0.9995   1.9985   1.46E-3   0.5
11   1.0002   -0.0007   -1.0007   2.0002   7.32E-4   0.5
12   0.9996    0.0001   -0.9999   1.9996   3.66E-4   0.5
13   1.0001   -0.0002   -1.0002   2.0001   1.83E-4   0.5
14   0.9999    0.0000   -1.0000   1.9999   9.16E-5   0.5
15   1.0000    0.0000   -1.0000   2.0000   4.58E-5   0.5

Table 7: Problem 6.6-4 Jacobi Iteration

For the Jacobi method, N = 4I and

M = N^(-1)P = (1/4) [ 0  1  1  0
                      1  0  0  1
                      1  0  0  1
                      0  1  1  0 ],    ||M||_inf = 2/4 = 0.5.

This is consistent with the Ratio column in the above table.

Problem 6.6-4b

For the Gauss-Seidel method,

M = N^(-1)P = [ 0   1/4    1/4    0
                0   1/16   1/16   1/4
                0   1/16   1/16   1/4
                0   1/32   1/32   1/8 ],    ||M||_inf = 1/4 + 1/4 = 0.5.
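Both norms can be reproduced with a few MATLAB commands. A sketch, using the A reconstructed above:

A = [4 -1 -1 0; -1 4 0 -1; -1 0 4 -1; 0 -1 -1 4];
% Jacobi splitting
N = diag(diag(A));  P = N - A;  M = N \ P;
norm(M, inf)                    % 0.5
% Gauss-Seidel splitting
N = tril(A);        P = N - A;  M = N \ P;
norm(M, inf)                    % 0.5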

 k   x_1^k      x_2^k      x_3^k      x_4^k    Error     Ratio
 1   1.2500   -4.38E-1   -1.44E+0   1.7813   4.38E-1   0.219
 2   0.7813   -1.09E-1   -1.11E+0   1.9453   2.19E-1   0.5
 3   0.9453   -2.73E-2   -1.03E+0   1.9863   5.47E-2   0.25
 4   0.9863   -6.84E-3   -1.01E+0   1.9966   1.37E-2   0.25
 5   0.9966   -1.71E-3   -1.00E+0   1.9991   3.42E-3   0.25
 6   0.9991   -4.27E-4   -1.00E+0   1.9998   8.54E-4   0.25
 7   0.9998   -1.07E-4   -1.00E+0   1.9999   2.14E-4   0.25
 8   0.9999   -2.67E-5   -1.00E+0   2.0000   5.34E-5   0.25
 9   1.0000   -6.68E-6   -1.00E+0   2.0000   1.34E-5   0.25

Table 8: Problem 6.6-4 Gauss-Seidel Iteration

The actual convergence rate is better than 0.5. Keep in mind that ||M|| only gives an upper bound on the convergence or divergence rate.

Problem 6.6-5

[ 4  1  1 ] [ x_1 ]   [ b_1 ]
[ 1  2  0 ] [ x_2 ] = [ b_2 ]
[ 0  2  3 ] [ x_3 ]   [ b_3 ]

6.6-5a

det(A) = 4(2*3 - 0*2) - 1(1*3 - 0*0) + 1(1*2 - 2*0) = 24 - 3 + 2 = 23

Since det(A) != 0, for any right-hand side vector b the system has a unique solution x = [x_1, x_2, x_3]^T.

6.6-5b

(1) Jacobi method:

N = [ 4  0  0        N^(-1) = [ 1/4   0    0
      0  2  0                    0   1/2   0
      0  0  3 ],                 0    0   1/3 ],

P = N - A = [  0  -1  -1
              -1   0   0
               0  -2   0 ]

M = N^(-1)P = [   0   -1/4  -1/4
                -1/2    0     0
                  0   -2/3    0  ],    ||M||_inf = 2/3.

Since ||M||_inf < 1, the method converges.

(2) Gauss-Seidel method:

N = [ 4  0  0        N^(-1) = [  1/4    0    0
      1  2  0                   -1/8   1/2   0
      0  2  3 ],                 1/12  -1/3  1/3 ],

P = N - A = [ 0  -1  -1
              0   0   0
              0   0   0 ],

M = N^(-1)P = [ 0  -1/4   -1/4
                0   1/8    1/8
                0  -1/12  -1/12 ],    ||M||_inf = 1/2.

Since ||M||_inf < 1, the method converges.
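The same norm computations in MATLAB (a short sketch using the A above; tril extracts the lower triangular part used by Gauss-Seidel):

A = [4 1 1; 1 2 0; 0 2 3];
Nj = diag(diag(A));  Mj = Nj \ (Nj - A);   % Jacobi iteration matrix
Ng = tril(A);        Mg = Ng \ (Ng - A);   % Gauss-Seidel iteration matrix
norm(Mj, inf)                              % 2/3
norm(Mg, inf)                              % 1/2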

Problem 6.6-6

Consider the n x n tridiagonal matrix

A_n = [  2 -1             ]
      [ -1  2 -1          ]
      [       ...         ]
      [         -1  2 -1  ]
      [            -1  2  ].

The iteration method N x^(k+1) = b + P x^(k) will converge, for all vectors b and all initial guesses x^(0), if and only if all eigenvalues λ of M = N^(-1)P satisfy |λ| < 1.

(1) Jacobi method

For n = 5:

n = 5;
A = [2,-1,0,0,0; -1,2,-1,0,0; 0,-1,2,-1,0; 0,0,-1,2,-1; 0,0,0,-1,2];
for i = 1:n
    for j = 1:n
        N_Jacobi(i,j) = (i==j)*A(i,j);
    end
end
P = N_Jacobi - A;
M = inv(N_Jacobi)*P;
lambda = eig(M)

Here N = 2I, P = N - A_5 is zero except for 1's on the sub- and superdiagonals, and

M = N^(-1)P = [  0   1/2   0    0    0
                1/2   0   1/2   0    0
                 0   1/2   0   1/2   0
                 0    0   1/2   0   1/2
                 0    0    0   1/2   0  ].

The eigenvalues of M, using eig(M), are:

λ_1 = -0.866, λ_2 = -0.5, λ_3 = 0, λ_4 = 0.5, λ_5 = 0.866

Likewise for n = 10, the eigenvalues of M are

λ = ±0.9595, ±0.8413, ±0.6549, ±0.4154, ±0.1423.

For n = 20, the eigenvalues of M are

λ = ±0.9888, ±0.9556, ±0.9010, ±0.8262, ±0.7331, ±0.6235, ±0.5000, ±0.3653, ±0.2225, ±0.0747.

We see that for n = 5, 10, 20 all |λ_i| < 1, so the Jacobi method will converge.

(2) Gauss-Seidel method

Again for n = 5, we have:

for i = 1:n
    for j = 1:n
        N_GS(i,j) = (i>=j)*A(i,j);
    end
end
P = N_GS - A;
M = inv(N_GS)*P;
lambda = eig(M)

M = N^(-1)P = [ 0  0.5     0       0      0
                0  0.25    0.5     0      0
                0  0.125   0.25    0.5    0
                0  0.0625  0.125   0.25   0.5
                0  0.0313  0.0625  0.125  0.25 ]

The eigenvalues of M, using eig(M), are:

λ_1 = 0, λ_2 = 0.75, λ_3 = 0.25, λ_4 = 0, λ_5 = 0

Likewise for n = 10, the nonzero eigenvalues of M are

λ = 0.9206, 0.7077, 0.4288, 0.1726, 0.0203,

together with five zero eigenvalues. For n = 20, the nonzero eigenvalues of M are approximately

λ = 0.9778, 0.9131, 0.8117, 0.6827, 0.5374, 0.3887, 0.2500, 0.1335, 0.0495, 0.0056,

together with ten eigenvalues that are zero in exact arithmetic (eig returns these as small spurious real values or complex-conjugate pairs caused by rounding).

So for n = 5, 10, 20 all |λ_i| < 1, and the Gauss-Seidel method will converge.
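The three cases can also be run in one loop. This sketch is my own wrapper around the code above; it prints the spectral radius max|λ_i| for both splittings:

for n = [5 10 20]
    A  = 2*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1);
    Mj = diag(diag(A)) \ (diag(diag(A)) - A);   % Jacobi
    Mg = tril(A) \ (tril(A) - A);               % Gauss-Seidel
    fprintf('n = %2d: rho(Jacobi) = %.4f, rho(GS) = %.4f\n', ...
            n, max(abs(eig(Mj))), max(abs(eig(Mg))));
end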

Likewise for n =, the eigenvalues of M are: λ = λ =.96 λ 3 =.777 λ 4 =.488 λ 5 =.76 λ 6 =.3 λ 7 = λ 8 = λ 9 = λ = For n =, the eigenvalues of M are: λ = λ =.9778 λ 3 =.93 λ 4 =.87 λ 5 =.687 λ 6 =.5374 λ 7 =.3887 λ 8 =.5 λ 9 =.335 λ =.495 λ =.68 λ =.5 +.46i λ 3 =.5.46i λ 4 =.8 +.7i λ 5 =.8.7i λ 6 =.43 +.6i λ 7 =.43.6i λ 8 =.78 +.3i λ 9 =.78 +.3i λ = So for n = 5,,, λ i <, so Gauss-Seidel method will converge. Problem 6.6-4 Note: this is slightly different from the method described in the lecture notes, but equivalent. Careful with the sign of the correction vector. The residual correction method: A ɛ x = b where A ɛ = A + ɛb. A = B = Let N = A and P = N A ɛ = ɛb, so M = N P = A ( ɛb) M = A ( ɛb) = ɛ. A B) = ɛ.5.5.5.5..5.5 /5.5 = ɛ. = ɛ In order to ensure the convergence of the residual correction method, α <.

 k   x_1^k       x_2^k       x_3^k       Error    Ratio
 1   1.1e+0      1.0e+0      9.0e-1      1.0e-1   9.99e-2
 2   1.0e+0      1.0e+0      1.0e+0      1.0e-2   1.0e-1
 3   9.995e-1    1.0e+0      1.0005e+0   5.0e-4   5.0e-2
 4   1.0e+0      9.9995e-1   1.0e+0      5.0e-5   1.0e-1
 5   1.0e+0      1.0e+0      1.0e+0      2.5e-6   5.0e-2

Table 9: Problem 6.6-14, ε = 0.1

In this case we are not given the vector b, but the true answer x = [1, 1, 1]^T. Let x^(0) = [0, 0, 0]^T and repeat the procedure until ||x - x^(k)|| <= 10^-5.

 k   x_1^k      x_2^k      x_3^k       Error    Ratio
 1   1.2e+0     1.0e+0     8.0e-1      2.0e-1   1.6667e-1
 2   1.0e+0     1.04e+0    1.0e+0      4.0e-2   2.0e-1
 3   9.96e-1    1.0e+0     1.004e+0    4.0e-3   1.0e-1
 4   1.0e+0     9.992e-1   1.0e+0      8.0e-4   2.0e-1
 5   1.0e+0     1.0e+0     9.9992e-1   8.0e-5   1.0e-1
 6   1.0e+0     1.0e+0     1.0e+0      1.6e-5   2.0e-1
 7   1.0e+0     1.0e+0     1.0e+0      1.6e-6   1.0e-1

Table 10: Problem 6.6-14, ε = 0.2

 k   x_1^k       x_2^k       x_3^k       Error      Ratio
 1   1.5e+0      1.0e+0      5.0e-1      5.0e-1     3.3333e-1
 2   1.0e+0      1.25e+0     1.0e+0      2.5e-1     5.0e-1
 3   9.375e-1    1.0e+0      1.0625e+0   6.25e-2    2.5e-1
 4   1.0e+0      9.6875e-1   1.0e+0      3.125e-2   5.0e-1
 5   1.0078e+0   1.0e+0      9.9219e-1   7.8125e-3  2.5e-1
 6   1.0e+0      1.0039e+0   1.0e+0      3.9063e-3  5.0e-1
 7   9.9902e-1   1.0e+0      1.0e+0      9.7656e-4  2.5e-1
 8   1.0e+0      9.9951e-1   1.0e+0      4.8828e-4  5.0e-1
 9   1.0e+0      1.0e+0      9.9988e-1   1.2207e-4  2.5e-1
10   1.0e+0      1.0e+0      1.0e+0      6.1035e-5  5.0e-1
11   9.9998e-1   1.0e+0      1.0e+0      1.5259e-5  2.5e-1
12   1.0e+0      9.9999e-1   1.0e+0      7.6294e-6  5.0e-1

Table 11: Problem 6.6-14, ε = 0.5

 k   x_1^k      x_2^k      x_3^k      Error     Ratio
 1   1.8E+0     1.0E+0     2.0E-1     8.0E-1    4.44E-1
 2   1.0E+0     1.64E+0    1.0E+0     6.4E-1    8.0E-1
 3   7.44E-1    1.0E+0     1.26E+0    2.56E-1   4.0E-1
 4   1.0E+0     7.95E-1    1.0E+0     2.05E-1   8.0E-1
 5   1.08E+0    1.0E+0     9.18E-1    8.19E-2   4.0E-1
 6   1.0E+0     1.07E+0    1.0E+0     6.55E-2   8.0E-1
 7   9.74E-1    1.0E+0     1.03E+0    2.62E-2   4.0E-1
 8   1.0E+0     9.79E-1    1.0E+0     2.10E-2   8.0E-1
 9   1.0E+0     1.0E+0     9.92E-1    8.39E-3   4.0E-1
10   1.0E+0     1.0E+0     1.0E+0     6.71E-3   8.0E-1
11   9.97E-1    1.0E+0     1.0E+0     2.68E-3   4.0E-1
12   1.0E+0     9.98E-1    1.0E+0     2.15E-3   8.0E-1
13   1.0E+0     1.0E+0     9.99E-1    8.59E-4   4.0E-1
14   1.0E+0     1.0E+0     1.0E+0     6.87E-4   8.0E-1
15   1.0E+0     1.0E+0     1.0E+0     2.75E-4   4.0E-1
16   1.0E+0     1.0E+0     1.0E+0     2.20E-4   8.0E-1
17   1.0E+0     1.0E+0     1.0E+0     8.80E-5   4.0E-1
18   1.0E+0     1.0E+0     1.0E+0     7.04E-5   8.0E-1
19   1.0E+0     1.0E+0     1.0E+0     2.81E-5   4.0E-1
20   1.0E+0     1.0E+0     1.0E+0     2.25E-5   8.0E-1
21   1.0E+0     1.0E+0     1.0E+0     9.01E-6   4.0E-1

Table 12: Problem 6.6-14, ε = 0.8

The number of iterations increases as ε increases, consistent with expectations.

Problem 7.1-1b

For A = [  2  36
          36  23 ],

the characteristic polynomial is

f(λ) = λ^2 - 25λ - 1250 = (λ - 50)(λ + 25),    λ_1 = 50, λ_2 = -25.

Using (A - λI)v = 0, we get:

An eigenvector corresponding to λ_1 = 50: v^(1) = [3, 4]^T.

An eigenvector corresponding to λ_2 = -25: v^(2) = [4, -3]^T.

Problem 7.1-2

For the symmetric matrix

A = [  -7  13  -16
       13 -10   13
      -16  13   -7 ],

the eigenvalues and eigenvectors are

λ_1 = -36, λ_2 = 9, λ_3 = 3,

v^(1) = [1/√3, -1/√3, 1/√3]^T,  v^(2) = [1/√2, 0, -1/√2]^T,  v^(3) = [1/√6, 2/√6, 1/√6]^T,

and U can be

U = [v^(1), v^(2), v^(3)] = [  1/√3   1/√2   1/√6
                              -1/√3    0     2/√6
                               1/√3  -1/√2   1/√6 ].

Each column of U is one of the eigenvectors of A. The eigenvectors are normalized to get the columns of U, which is supposed to be an orthogonal matrix. We get U^T A U = D = diag(λ_1, λ_2, λ_3) and U^T U = U U^T = I.

A = [-7, 13, -16; 13, -10, 13; -16, 13, -7];
[U, lam] = eig(A)

The result might be

lam = diag(-36, 3, 9),   U = [ 0.5774  0.4082   0.7071
                              -0.5774  0.8165   0.0000
                               0.5774  0.4082  -0.7071 ] = [v^(1), v^(3), v^(2)],

that is, eig returns the eigenvalues in the order -36, 3, 9. Run the command

U'*A*U

in order to see U^T A U = diag(-36, 3, 9),

and also the commands U *U and U*U, in order to see U T U = UU T = I. Problem 7. - 4 [ ] For, eigenvalues and eigenvectors are: f(λ) = λ λ + = (λ ) : λ =, v () = [, ] T λ =, v () = [, ] T For [ ], e-values and e-vectors are: f(λ) = λ λ + = (λ ) : λ =, v () = [, ] T In case b, this eigenvector and its multiples are the only eigenvectors of the matrix. It does NOT contradict Theorem 7..4, because the matrix is not symmetric. 4