AMSC/CMSC 466 Problem set 3

1. Problem 1 of KC, p. 180, parts (a), (b) and (c). Do part (a) by hand, with and without pivoting. Use MATLAB to check your answer. Use the command A\b to get the solution, and the command [L,U,P] = lu(A) to verify the factorization. Then do parts (b) and (c) using MATLAB to find the solution x and to find the factorization.

2. Consider the linear system Ax = b where A is the matrix

       [ 6    2     2  ]
   A = [ 2   2/3   1/3 ]
       [ 1    2    -1  ]

and b = (-2, 1, 0).

a) Verify that the exact solution is x_1 = 2.6, x_2 = -3.8, x_3 = -5.0.

b) Using four-digit floating-point decimal arithmetic with rounding, solve this system by Gaussian elimination without pivoting. In performing the arithmetic operations, remember to round to four digits after each operation. In particular, use this rounding when entering the elements of the matrix, and in computing the multipliers. Call the computed solution y.

c) Repeat part b) using partial pivoting. Call the computed solution z. Compare the solutions y and z with the exact solution x.

3. Problem 32, page 160 of KC. Use the algorithm given in class for the Cholesky factorization. You may check your answer with MATLAB.

4. The n × n Hilbert matrix is H_n = [h_ij] where

   h_ij = 1/(i + j - 1),   1 ≤ i, j ≤ n.

This matrix is nonsingular and the inverse may be computed explicitly: H_n^{-1} = [a_ij], where

   a_ij = (-1)^(i+j) (n+i-1)! (n+j-1)! / [ (i+j-1) ((i-1)!(j-1)!)^2 (n-i)! (n-j)! ].

The MATLAB functions hilb(n) and invhilb(n) give H_n and H_n^{-1}, respectively, using these formulas. If we set b_n = (1, 0, ..., 0), then the exact solution x_n of H_n x_n = b_n is the first column of H_n^{-1}.
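As a quick check of this setup, the following minimal MATLAB sketch (not required by the assignment; the variable names are illustrative) forms H_n, b_n, and the reference solution x_n with the built-in hilb and invhilb, and computes the quantities used in parts (a)-(d) below:

   n  = 5;                           % repeat with n = 10
   Hn = hilb(n);                     % Hilbert matrix H_n
   bn = [1; zeros(n-1,1)];           % b_n = (1, 0, ..., 0)
   Hinv = invhilb(n);
   xn = Hinv(:,1);                   % exact solution: first column of inv(H_n)
   yn = Hn \ bn;                     % computed solution, part (a)
   en = xn - yn;                     % error e_n, part (b)
   rn = bn - Hn*yn;                  % residual r_n, part (b)
   u  = eps/2;                       % unit roundoff
   [norm(en)/norm(xn), cond(Hn)*u]   % comparison asked for in part (b)
   [norm(rn)/(norm(Hn)*norm(yn)), u] % comparison asked for in part (d)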

(a) Using the MATLAB backslash command, solve for x_n for the cases n = 5 and n = 10. Call this computed result y_n.

(b) Let e_n = x_n - y_n, where x_n is the exact solution as given by the formula above, and let r_n = b_n - H_n y_n be the residual. Find the condition number of H_n in the 2-norm using the MATLAB command cond for n = 5 and n = 10. Verify that

   ||e_n|| / ||x_n|| ≤ cond(H_n) u,

where u = eps/2 is the unit roundoff and || · || is the 2-norm. The MATLAB command norm(v) computes the 2-norm of a vector v.

(c) How many accurate digits are there in the computed solutions in each case?

(d) Verify in each case that

   ||r_n|| / ( ||H_n|| ||y_n|| ) ≤ u.

5. Determine (by hand) the condition number, relative to the 1-norm, of the matrix

   A = [ 1  a
         a  1 ]

as a function of the parameter a. For what values of a does A become ill conditioned, assuming the unit roundoff is u = 10^(-7) and at least three significant digits are desired?

6. Let A be an n × n matrix, and let a_j be the jth column of A. Prove that

   cond(A) ≥ (max_j ||a_j||) / (min_j ||a_j||).

Hint: Use the fact that A e_j = a_j, where e_j is the jth column of the identity matrix, to derive bounds for ||A|| and ||A^{-1}||. Here || · || can be the 1-norm, the 2-norm or the infinity-norm.

7. Consider the problem of finding a solution of the boundary value problem

   -u''(x) = f(x)   for 0 < x < 1,                                   (1)

   u(0) = u(1) = 0,

for given f(x). The solution u can be found analytically in the form of an integral, but we shall use a finite difference approximation to calculate some numerical solutions of this problem. First we approximate the second derivative with a difference quotient:

   u''(x) ≈ [u(x + Δx) - 2u(x) + u(x - Δx)] / (Δx)^2,

so that the DE becomes

   -[u(x + Δx) - 2u(x) + u(x - Δx)] / (Δx)^2 ≈ f(x).

Now introduce grid points in the interval [0, 1]. Let the number of subintervals be n, setting Δx = 1/n and x_j = j Δx, j = 0, ..., n. We let u_j be the approximate value we wish to compute for u(x_j). The u_j are chosen to satisfy the system of linear equations

   -u_{j+1} + 2u_j - u_{j-1} = (Δx)^2 f(x_j),   j = 1, ..., J,

where J = n - 1 is the number of interior mesh points. The boundary conditions say that u_0 = u_{J+1} = 0, so we get the J × J system

   Tu = (Δx)^(-2) S u = f,

where S is the J × J tridiagonal matrix

       [  2  -1   0  ...   0 ]
       [ -1   2  -1  ...   0 ]
   S = [  0  -1   2  ...   0 ]
       [  .   .   .  ...  -1 ]
       [  0   0  ... -1    2 ]

u = (u_1, ..., u_J) and f = (f(x_1), ..., f(x_J)).

Write a short MATLAB program (script m-file) to solve this system with the given function f as input. You can write f as an inline function. The program should have n as an input parameter and should plot the solution u along with the function f on the interval [0, 1]. The program should be written to accommodate values of n as large as 200 (J = 199). Remember that in MATLAB an index in a vector always starts with one.

We shall take advantage of the fact that S is a sparse matrix to save on storage space. You will need to enter the matrix S as a sparse matrix using the sparse commands of MATLAB. Enter help sparse at the MATLAB prompt to get information. Note that S can be written S = E + D + E', where D = 2I and E has -1 on the superdiagonal and zeros elsewhere. D and E can each be entered as sparse matrices (see the sketch after part d) below). You can use the MATLAB command u = T\f to solve the system. You can also use the sequence of commands

   [L,U] = lu(T);
   v = L\f;
   u = U\v;

For this size problem, the LU factorization does not save much time.

a) For starters take f(x) = 1. The exact solution is u(x) = x(1 - x)/2. Note that the values of u at the points x_j satisfy the difference equation exactly. You can use this solution to see if your program is working correctly.

b) Now use the function f(x) = sin(πx) with exact solution u(x) = sin(πx)/π^2. Run your program with values of n = 10, 50, 100, 200. To see convergence of the numerical solutions to the exact solution, plot the numerical solutions for these values of n together on the same graph using the hold on command.

c) For each n of part b), compute the error

   e_n = max_j | u_j - sin(πx_j)/π^2 |.

According to what power of Δx = 1/n is e_n tending to zero as n increases?

d) Now try a function f which is continuous but not C^1, such as

   f(x) = { x/a,              0 ≤ x ≤ a,
          { (1 - x)/(1 - a),  a ≤ x ≤ 1.

How does the solution curve compare with f? Is it smoother, or does it have the same jump in the derivative at x = a? Again try n = 20, 50, 100, 200 and observe the convergence of the computed solutions.
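For reference, here is a minimal sketch of the sparse construction described above (a function handle is used in place of an inline function, and the right-hand side f shown is only an example; adapt names and values as needed):

   n  = 200;  J = n - 1;  dx = 1/n;
   x  = (1:J)' * dx;                   % interior grid points x_1, ..., x_J
   f  = @(x) sin(pi*x);                % example right-hand side
   D  = 2*speye(J);                    % D = 2I, stored as a sparse matrix
   E  = sparse(1:J-1, 2:J, -1, J, J);  % -1 on the superdiagonal
   S  = E + D + E';                    % tridiagonal matrix S
   T  = S / dx^2;                      % T = (Δx)^(-2) S
   u  = T \ f(x);                      % solve T u = f at the interior points
   xx = (0:n)' * dx;                   % full grid, including the endpoints
   plot(xx, [0; u; 0], xx, f(xx))      % solution (boundary values zero) and f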

8. Now consider the heat equation for u(x, t) with boundary and initial conditions:

   u_t(x, t) - u_xx(x, t) = 0,   0 < x < 1,  t > 0,

   u(0, t) = u(1, t) = 0,   t ≥ 0,

   u(x, 0) = f(x),   0 ≤ x ≤ 1.

Note that when u does not depend on t, the problem reduces to the one considered in problem 7. We will see that the time-dependent problem can be solved numerically by the same methods. We introduce a grid of mesh points in the (x, t) plane, setting x_j = j Δx and t_k = k Δt. Here Δx = 1/n, with n being the number of subintervals, and J = n - 1 is the number of interior mesh points. We approximate the time derivative u_t with the difference quotient

   u_t(x, t) ≈ [u(x, t + Δt) - u(x, t)] / Δt

and u_xx with the difference quotient

   u_xx(x, t) ≈ [u(x + Δx, t) - 2u(x, t) + u(x - Δx, t)] / (Δx)^2.

Then we use u_{j,k} to denote the values we will compute to approximate u(x_j, t_k). We will use an implicit scheme to calculate the u_{j,k} because it is more stable (resistant to error). We center the second difference quotient at the point (x_j, t_{k+1}), which yields the system of equations

   [u_{j,k+1} - u_{j,k}] / Δt - [u_{j+1,k+1} - 2u_{j,k+1} + u_{j-1,k+1}] / (Δx)^2 = 0

for j = 1, ..., J and k = 0, 1, 2, .... The boundary condition u(0, t) = u(1, t) = 0 says that we must have u_{0,k} = u_{n,k} = u_{J+1,k} = 0 for all k. Let the vector u_k = (u_{1,k}, ..., u_{J,k}). Then the system can be written compactly as

   S u_{k+1} = u_k,                                                  (2)

where now the J × J matrix S is again tridiagonal,

       [ 1+2ρ   -ρ     0   ...    0   ]
       [  -ρ   1+2ρ   -ρ   ...    0   ]
   S = [   0    -ρ   1+2ρ  ...    0   ]
       [   .     .     .   ...   -ρ   ]
       [   0     0    ...  -ρ   1+2ρ  ]

Here ρ = Δt/(Δx)^2. The initial condition u(x, 0) = f(x) tells us that the first vector in the sequence is u_0 = (f(x_1), ..., f(x_J)). Then we find the vectors u_k by repeatedly solving the system (2). In fact we have u_k = S^(-k) u_0, but this is not a very fast nor accurate way of calculating the u_k.

a) Write a MATLAB program which solves the system (2) for K time steps and plots out the snapshot of the solution at time t = K Δt. The number of spatial subintervals n and the number of time steps K should be input parameters. The initial data f(x) can be written as an inline function. Enter the sparse matrix S using the techniques of problem 7 (see the sketch after this part). You will need to write a for loop over k = 1, ..., K. You could use the short loop

   for k = 1:K
       u = S\u;
   end

However, this means you are recomputing the LU factorization over and over again, at each iteration. Now we can take advantage of the LU factorization of S: factor S once before the loop and then use the iteration

   [L,U] = lu(S);
   for k = 1:K
       v = L\u;
       u = U\v;
   end
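As referenced in part a), one way to enter this S as a sparse matrix (a sketch reusing the superdiagonal matrix E from problem 7; it assumes n, J, dx, dt, and the function handle f have already been defined) is:

   rho = dt/dx^2;                               % ρ = Δt/(Δx)^2
   E   = sparse(1:J-1, 2:J, -1, J, J);          % -1 on the superdiagonal, as in problem 7
   S   = speye(J) + rho*(2*speye(J) + E + E');  % diagonal 1+2ρ, off-diagonals -ρ
   u   = f((1:J)'*dx);                          % u_0 = (f(x_1), ..., f(x_J))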

b) To begin, use f(x) = sin(πx). The exact solution is

   u(x, t) = exp(-π^2 t) sin(πx).

Check your code to see if it is working correctly with this example.

c) Now watch the convergence of computed solutions to the exact solution. Fix n = 100. First use Δt = .01 and K = 10, which computes the solution at t = .1. Next use Δt = .005 and K = 20. Superimpose the graphs of the two runs using the hold on command. Finally use Δt = .0025 and K = 40. What happens to the graphs? They all represent the computed solution at time t = .1. For each run, compute

   max_{1 ≤ j ≤ J} | u_{j,K} - u(x_j, .1) |.

How is the error converging to zero?

d) Now use initial data f(x) = x(.33 - x)(x - 1). Plot f with a vector of points x = 0:.01:1 in red with the command plot(x,f(x),'r'). Then enter hold on. Next run your program with a time step Δt = .01 and K = 1, 2, 3, 4, 5, 10, plotting all the curves on the same graph. How does the solution evolve? What happens to the hot spot, and what happens to the cool spot?

Hand in programs (script files and function files), diaries, and plots. Be sure to label plots carefully. You can put comment lines in the scripts and m-files to indicate what is being done. You can edit the diaries to make additional comments. Above all, make sure your work is well organized and understandable.