Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 2018. Homework 3, Due: Tuesday, July 3, 2018

1. (Vector and Matrix Norms) Show that the l_1 vector norm ||x||_1 = |x_1| + ... + |x_n| satisfies the three properties of a vector norm:

(a) ||x||_1 ≥ 0 for x ∈ R^n, and ||x||_1 = 0 if and only if x = 0;

(b) ||λx||_1 = |λ| ||x||_1 for λ ∈ R and x ∈ R^n;

(c) ||x + y||_1 ≤ ||x||_1 + ||y||_1 for x, y ∈ R^n.

2. (Pivoting)

(a) Prove that the matrix ( ) does not have an LU decomposition. Hint: assume that such a decomposition exists, and then show that this leads to a contradiction.

(b) Does the system ( ) (x, y)^T = (a, b)^T have a unique solution for all a, b ∈ R? (Why?)

(c) How can you modify the system in part (b) so that LU decomposition applies?

3. (Partial Pivoting) Consider the linear system Ax = b, where A is the following matrix:

        ( 5 2   )
    A = ( 3     )
        ( 3 6   )

(a) Using the partial pivoting technique, determine the P, L, U decomposition of the matrix A, such that PA = LU. (Show EACH STEP in the decomposition.)

(b) Use the P, L, U decomposition found in (a) to find the solution to

    Ax = ( 2 )
         (   )
         ( 2 )

(Show ALL relevant steps.)

(c) Use the P, L, U decomposition found in (a) to find the solution to

    Ax = (   )
         (   )
         ( 5 )

(Show ALL relevant steps.)
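The pivoting questions above turn on one fact: a zero in the leading pivot position stops Doolittle elimination cold, while a row interchange (PA = LU) repairs it. A minimal sketch in Python (the course tooling is MATLAB; the 2 x 2 matrix below is a hypothetical example chosen for illustration, not necessarily the matrix intended in the problem):

```python
# Why a zero leading pivot defeats plain LU factorization, and how a
# row interchange (PA = LU) fixes it. The matrix A is an illustrative choice.

def lu_no_pivot(A):
    """Doolittle LU without pivoting; returns (L, U), or None on a zero pivot."""
    n = len(A)
    U = [row[:] for row in A]                              # working copy of A
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for k in range(n - 1):
        if U[k][k] == 0.0:          # zero pivot: elimination cannot proceed
            return None
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # multiplier m_ik
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[0.0, 1.0],
     [1.0, 1.0]]

print(lu_no_pivot(A))     # None: the (1,1) entry is zero, so no LU this way

# Swapping the two rows (left-multiplying by P = [[0,1],[1,0]]) repairs it:
PA = [A[1], A[0]]
L, U = lu_no_pivot(PA)
print(L, U)               # now PA = LU with unit lower-triangular L
```

The same mechanism is what the partial-pivoting problem automates: at each elimination step, rows are interchanged so the pivot is the largest available entry in magnitude.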

4. (Partial Pivoting: MATLAB program) Write a program to find the LU decomposition of a given n x n matrix A using partial pivoting. The program should return the updated matrix A and the pivot vector p. In MATLAB, name your file mylu.m, the first few lines of which should be as follows:

    function [a,p]=mylu(a)
    %
    [n n]=size(a);
    p=(1:n)';
    (your code here!)

The code above sets n equal to the dimension of the matrix and initializes the pivot vector p. Make sure to store the multipliers m_ij in the proper matrix entries. For more help on function m-files, see pages 9-13 of the MATLAB Primer by Kermit Sigmon, available from the course webpage. You should experiment with a few small matrices to make sure your code is correct. Check that the matrices resulting from the LU decomposition satisfy PA = LU. As a test of your code, execute the following statements in MATLAB:

    >> diary mylu.txt
    >> format short e
    >> type mylu.m
    >> a=[2 2 -3; 3  -2; 6 8  ];
    >> [a,p]=mylu(a)
    >> diary off

Print and hand in the text file containing your program.

5. (a) Consider the matrix

        ( 2 3   )
    A = ( 4 2   )
        (     5 )

Compute ||A||_∞, and find a vector x such that ||A||_∞ = ||Ax||_∞ / ||x||_∞.

(b) Find an example of a 2 x 2 matrix A such that ||A||_∞ = 1 but ρ(A) = 0. This shows that the spectral radius ρ(A) = max{ |λ| : λ is an eigenvalue of A } does not define a matrix norm.

6. Consider the matrix, right-hand side vector, and two approximate solutions

    A = ( 1.2969  0.8648 ),   b = ( 0.8642 ),   x_1 = (  0.9911 ),   x_2 = (   )
        ( 0.2161  0.1441 )        ( 0.1440 )          ( -0.4870 )          (   )

(a) Show that x = (2, -2)^T is the exact solution of Ax = b.

(b) Compute the error and residual vectors for x_1 and x_2.

(c) Find ||A||_∞, ||A^{-1}||_∞, and cond_∞(A) (you may use MATLAB for this calculation).

(d) In class we proved a theorem relating the condition number of A, the relative error, and the relative residual. Check this result for the two approximate solutions x_1 and x_2.

7. (LU factorization)

(a) Write a program that takes the output A and p from problem #4, along with a right-hand side b, and computes the solution of Ax = b by performing the forward and backward substitution steps. If you are using MATLAB, name your m-file lusolve.m. The first line of your code lusolve.m should be as follows:

    function x=lusolve(a,p,b)
    (your code here!)

Turn in a copy of your code.

(b) The famous Hilbert matrices are given by H_ij = 1/(i + j - 1). The n x n Hilbert matrix H_n is easily produced in MATLAB using hilb(n). Assume the true solution of H_n x = b for a given n is x = (1, ..., 1)^T. Hence the right-hand side b is simply the vector of row sums of H_n, and b is easily computed in MATLAB using b=sum(hilb(n)')'. Use your codes mylu.m and lusolve.m to solve the system H_n x = b for n = 5, 10, 15, 20. For each n, using the ∞-norm, compute the relative error and the relative residual. Discuss what is happening here. You may find it useful to look at the cond command in MATLAB.

8. (Iterative Methods: Analysis) Recall that an n x n matrix A is said to be strictly diagonally dominant if

    sum_{j=1, j≠i}^{n} |a_ij| < |a_ii|   for i = 1, ..., n.

Note that the strict inequality implies that each diagonal entry a_ii is non-zero. Suppose that A is strictly diagonally dominant.

(a) Show that the Jacobi iteration matrix satisfies ||B_J||_∞ < 1 and, therefore, Jacobi iteration converges in this case.

(b) For a 2 x 2 matrix A, show that the Gauss-Seidel iteration matrix also satisfies ||B_GS||_∞ < 1 and, hence, Gauss-Seidel iteration converges as well.

9. (from Mathews & Fink 2004) The Rockmore Corp. is considering the purchase of a new computer and will choose either the DoGood 74 or the MightDo. They test both computers' ability to solve the linear system

    34x + 55y - 21 = 0
    55x + 89y - 34 = 0

The DoGood 74 computer gives x = -0.11 and y = 0.45, and its check for accuracy is found by substitution:

    34(-0.11) + 55(0.45) - 21 = 0.01
    55(-0.11) + 89(0.45) - 34 = 0.00

The MightDo computer gives x = -0.99 and y = 1.01, and its check for accuracy is found by substitution:

    34(-0.99) + 55(1.01) - 21 = 0.89
    55(-0.99) + 89(1.01) - 34 = 1.44

Which computer gave the better answer? Why?

Suggested / Additional problems for Math 529 / Phys 528 students:

10. (Special Matrices) Consider the matrix

        ( b     )
        (   4   )
        (     5 )

(a) For what values of b will this matrix be positive definite? (Hint: the theorem on page 25 on leading principal submatrices may be useful.)

(b) For what values of b will this matrix be strictly diagonally dominant? (Recall that an n x n matrix A is said to be strictly diagonally dominant if

    sum_{j=1, j≠i}^{n} |a_ij| < |a_ii|   for i = 1, ..., n.

Note that the strict inequality implies that each diagonal entry a_ii is nonzero.)

11. Consider a linear system with matrix

    A = ( 2   )
        (   4 )

(a) Write down the iteration matrices B_J and B_GS for Jacobi's method and Gauss-Seidel.

(b) Find the l_∞ norm and spectral radius of the iteration matrix for Jacobi and for Gauss-Seidel. (Recall that the spectral radius of a matrix can be calculated by finding the roots of its characteristic polynomial.)

(c) Which of the two iterative methods will converge for an arbitrary starting point x^(0)? Why?

(d) Write a program to calculate and plot the spectral radius of B_SOR(ω) for the parameter ω in the range (0, 2) in increments of 0.1. Provide the code and the plot. Based on inspection of the graph, what value of ω will lead to the fastest convergence?

(e) Use the theorem on page 234 of Bradie to calculate analytically the optimal relaxation parameter ω for SOR. Does it match the value predicted in part (d)?

12. (Matrix Norms)

(a) Prove that if ||A|| < 1, then I - A is invertible and ||(I - A)^{-1}|| ≥ 1 / (1 + ||A||).

(b) Suppose that A ∈ R^{n x n} is invertible, B is an estimate of A^{-1}, and AB = I + E. Show that the relative error in B, ||A^{-1} - B|| / ||A^{-1}||, is bounded by ||E|| (using an arbitrary matrix norm).

13. (Cholesky decomposition) Cholesky decomposition can be used for symmetric positive definite matrices (see pages 25-27 of the textbook).

(a) Compute the Cholesky decomposition of the matrix

        (  6 28    )
        ( 28 53    )
        (       29 )

(b) Construct an algorithm to perform forward and backward substitution on the system Ax = b, given a Cholesky decomposition A = LL^T of the coefficient matrix. How many arithmetic operations are required by the algorithm?

(c) Solve the system Ax = b with b = ( 8, 2, 38 )^T and the above matrix A by using the Cholesky decomposition and then performing forward and backward substitution.
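The point of the conditioning exercise in problem 6 can be previewed numerically: the approximate solution (0.9911, -0.4870) produces a residual on the order of 10^-8 even though it is far from the exact solution (2, -2). A minimal check in Python (the assignment's own computations are meant for MATLAB; the entries used here follow the standard version of this classic ill-conditioned 2 x 2 example):

```python
# Residual vs. error for the ill-conditioned 2x2 system of problem 6:
# a tiny residual does not guarantee a small error when cond(A) is large.

A = [[1.2969, 0.8648],
     [0.2161, 0.1441]]
b = [0.8642, 0.1440]
x_exact = [2.0, -2.0]
x1 = [0.9911, -0.4870]

def residual(A, b, x):
    """r = b - A x, computed row by row."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

r = residual(A, b, x1)                       # residual of the approximation
err = [xe - xa for xe, xa in zip(x_exact, x1)]  # error of the approximation

print(max(abs(t) for t in r))    # about 1e-8: x1 looks excellent by its residual
print(max(abs(t) for t in err))  # about 1.5: x1 is badly wrong all the same
```

This is exactly the gap that the theorem in part (d) quantifies: the relative error can exceed the relative residual by a factor as large as cond(A).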
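The claim in problem 8, that strict diagonal dominance forces ||B_J||_∞ < 1 and hence convergence of Jacobi iteration, can be sanity-checked numerically. The 3 x 3 matrix and right-hand side below are illustrative assumptions, not data from the assignment (sketched in Python rather than the course's MATLAB):

```python
# Jacobi iteration on a strictly diagonally dominant matrix. The Jacobi
# iteration matrix B_J = -D^{-1}(L+U) then has ||B_J||_inf < 1, so each
# sweep contracts the error. Matrix and b are illustrative choices.

A = [[10.0, 2.0, 1.0],
     [ 1.0, 8.0, 2.0],
     [ 2.0, 3.0, 9.0]]
b = [13.0, 11.0, 14.0]       # row sums, so the exact solution is (1, 1, 1)

n = len(A)

# inf-norm of B_J: max over rows of sum_{j != i} |a_ij| / |a_ii|
bj_norm = max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
              for i in range(n))
print(bj_norm)               # 0.5555... (< 1, as the dominance argument predicts)

x = [0.0, 0.0, 0.0]
for _ in range(60):          # Jacobi sweep: x_i <- (b_i - sum_{j!=i} a_ij x_j)/a_ii
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
         for i in range(n)]

print(x)                     # converges to (1, 1, 1)
```

Since the error after k sweeps is bounded by ||B_J||_∞^k times the initial error, dominance gives a concrete, a priori convergence rate.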
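In the Rockmore Corp. problem (problem 9), both substitution checks are quick to automate, and the exact solution (x, y) = (-1, 1) can be confirmed the same way. A small Python sketch of the check:

```python
# Substitution check for the Rockmore Corp. system
#   34x + 55y - 21 = 0
#   55x + 89y - 34 = 0
# Note det(A) = 34*89 - 55*55 = 1 while the entries are of size ~10^2,
# so the system is ill conditioned: a small residual says little about
# the actual error in (x, y).

def residuals(x, y):
    """Left-hand sides of both equations at the candidate point (x, y)."""
    return (34 * x + 55 * y - 21, 55 * x + 89 * y - 34)

print(residuals(-0.11, 0.45))   # DoGood: roughly (0.01, 0.00), tiny residual
print(residuals(-0.99, 1.01))   # MightDo: roughly (0.89, 1.44), larger residual
print(residuals(-1.0, 1.0))     # exact solution: (0.0, 0.0)
```

Comparing each candidate with the exact solution (-1, 1), rather than with the residuals, is the heart of the question being asked.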
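The two triangular solves requested in the Cholesky problem (parts (b)-(c)) follow the same forward/backward pattern as lusolve.m, with L^T playing the role of U. A sketch in Python; the SPD matrix and right-hand side here are hypothetical stand-ins chosen so the factor comes out in small integers, not necessarily the assignment's data:

```python
import math

# Cholesky factorization A = L L^T followed by the two triangular solves
# L y = b (forward substitution) and L^T x = y (back substitution).
# The SPD matrix and b below are illustrative choices.

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def solve_chol(L, b):
    """Solve L y = b forward, then L^T x = y backward; return x."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[16.0, 28.0,  0.0],
     [28.0, 53.0, 10.0],
     [ 0.0, 10.0, 29.0]]
b = [44.0, 91.0, 39.0]       # row sums, so the solution should be (1, 1, 1)

L = cholesky(A)
print(L)                     # [[4.0, 0.0, 0.0], [7.0, 2.0, 0.0], [0.0, 5.0, 2.0]]
print(solve_chol(L, b))      # [1.0, 1.0, 1.0]
```

For the operation count in part (b), note that each triangular solve costs about n^2 flops, versus about n^3/3 for the factorization itself.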