MATH 434/534 Theoretical Assignment 3 Solution

Chapter 4

No. 4.0 Answer True or False to the following. Give reasons for your answers.

(a) If a backward stable algorithm is applied to a computational problem, the solution will be accurate.

False. See "Conditioning, Stability, and Accuracy" on page 60. When a (backward) stable algorithm is applied to a well-conditioned problem, the computed solution should be near the exact solution. However, if a (backward) stable algorithm is applied to an ill-conditioned problem, accuracy is not guaranteed.

(b) A backward stable algorithm produces a good approximation to an exact solution.

False. The same reason as in the previous part: without well-conditioning of the problem, backward stability alone does not guarantee accuracy.

(c) Well-conditioning is a good property of an algorithm.

False. Conditioning is a property of the problem itself, not of an algorithm. It describes how much the solution of the problem changes when the input data are perturbed.

(d) Cancellation is always bad.

False. Cancellation only highlights earlier errors. The subtraction itself is performed rather accurately; what cancellation does is reveal the errors made in earlier computations, or even those already present in the data entering the subtraction.

(e) If the zeros of a polynomial are all distinct, then they must be well-conditioned.

False. Consider the Wilkinson polynomial of degree 20,

$$p(x) = (x - 1)(x - 2) \cdots (x - 20) = x^{20} - 210x^{19} + \cdots.$$

The roots of $p(x)$ are $1, 2, \ldots, 20$, all distinct, yet the Wilkinson polynomial is ill-conditioned. Read "The Wilkinson Polynomial" on page 59; a numerical sketch follows part (i) below.

(f) An efficient algorithm is necessarily a stable algorithm.

False. Gaussian elimination without pivoting is efficient, but it is not stable in general.

(g) Backward errors relate the errors to the data of the problem.

True. Backward errors relate the errors to the data of the problem rather than to the problem's solution.

(h) A backward stable algorithm applied to a well-conditioned problem produces an accurate solution.

True. See "Conditioning, Stability, and Accuracy" on page 60.

(i) Stability analysis of an algorithm is performed by means of perturbation analysis.

False. A stability analysis is performed by round-off error analysis of the algorithm. A perturbation analysis, by contrast, determines whether a given problem is well-conditioned or ill-conditioned: it studies whether small perturbations in the data cause small or large changes in the solution.
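To make part (e) concrete, here is a minimal NumPy sketch (ours, not part of the original solution) that applies Wilkinson's classic perturbation of $2^{-23}$ to the $x^{19}$ coefficient and prints the computed roots:

```python
import numpy as np

# Coefficients of Wilkinson's polynomial p(x) = (x - 1)(x - 2)...(x - 20).
coeffs = np.poly(np.arange(1, 21))    # [1, -210, ...] in descending powers

perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23            # Wilkinson's classic perturbation of x^19

print(np.sort_complex(np.roots(perturbed)))
# Several roots move far from the integers 1..20 and some acquire sizable
# imaginary parts, although the x^19 coefficient changed only in roughly
# its 10th significant digit.
```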

(j) A symmetric matrix must be well-conditioned.

False. Consider the $n \times n$ Hilbert matrix, which is symmetric:

$$A = \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \cdots & \frac{1}{n+1} \\ \vdots & & & & \vdots \\ \frac{1}{n} & \frac{1}{n+1} & \cdots & & \frac{1}{2n-1} \end{pmatrix}.$$

For $n = 10$, $\mathrm{Cond}_2(A) = 1.6025 \times 10^{13}$, $\mathrm{Cond}_\infty(A) = 3.5353 \times 10^{13}$, and $\mathrm{Cond}_1(A) = 3.5353 \times 10^{13}$. Therefore $A$ is ill-conditioned.

(k) If the determinant of a matrix A is small, then it must be close to a singular matrix.

False. Here "close to singular" means that a matrix is theoretically nonsingular, but a small perturbation will make the matrix singular. Consider, for example, the diagonal matrix $A = 0.1\,I$ of order 30. Its determinant is $10^{-30}$, which is very small; however, $A$ is perfectly nonsingular, and its condition number is 1.

(l) One must perform a large amount of computation to obtain a large round-off error.

False. We can obtain a large round-off error in a single computation.

No. 4.1 Show that the floating point computations of the sum, product, and division of two numbers are backward stable.

Let $x$ and $y$ be floating point numbers, and let $\mu$ denote the machine unit (unit roundoff). First consider the sum of $x$ and $y$:

$$fl(x + y) = (x + y)(1 + \delta) = x(1 + \delta) + y(1 + \delta) = \hat{x} + \hat{y}.$$

Since $|\delta| \le \mu$, both $\hat{x}$ and $\hat{y}$ are close to $x$ and $y$, respectively: the computed sum is the exact sum of slightly perturbed data. Therefore, the sum of two floating point numbers is backward stable.

Next, we look at the product of $x$ and $y$. In a similar way, we can see that

$$fl(xy) = (xy)(1 + \delta) = \left(x\sqrt{1 + \delta}\right)\left(y\sqrt{1 + \delta}\right) = \hat{x}\hat{y}.$$

Similarly, since $|\delta| \le \mu$, $\hat{x}$ and $\hat{y}$ are close to $x$ and $y$. Therefore, the product of two floating point numbers is backward stable.

Finally, we look at the division of $x$ by $y$, where $y \neq 0$:

$$fl(x/y) = (x/y)(1 + \delta) = \frac{x(1 + \delta)}{y} = \frac{\hat{x}}{y}.$$

Again, since $|\delta| \le \mu$, $\hat{x}$ is close to $x$. Therefore, the division of two floating point numbers is backward stable.
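As a concrete check of the sum case of No. 4.1, the following Python sketch (ours) recovers $\delta$ for one addition exactly using the fractions module and confirms that $fl(x + y)$ is the exact sum of the perturbed data $x(1 + \delta)$ and $y(1 + \delta)$; the inputs 0.1 and 0.2 are arbitrary:

```python
from fractions import Fraction

x, y = 0.1, 0.2
s = x + y                              # fl(x + y): one rounded float64 addition

exact = Fraction(x) + Fraction(y)      # exact rational sum of the stored inputs
delta = (Fraction(s) - exact) / exact  # delta in fl(x + y) = (x + y)(1 + delta)

print(float(delta))                    # ~9.3e-17, below mu = 2**-53 ~ 1.1e-16
# Backward view: the computed sum is the EXACT sum of slightly perturbed data.
print(Fraction(x) * (1 + delta) + Fraction(y) * (1 + delta) == Fraction(s))  # True
```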

No. 4.2 Are the following floating point computations backward stable? Give reasons for your answer in each case.

(a) $fl(x + 1)$  (b) $fl(x(y + z))$

(a) From No. 4.1(a), it is backward stable.

(b) The inner product of two vectors, $x^T y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$, is backward stable when computed in floating point: the computed inner product $fl(x^T y)$ is the exact inner product of slightly perturbed vectors (see Example 4.8 on page 55). Taking the two vectors $x = (x, x)$ and $y = (y, z)$, we have $fl(xy + xz) = fl(x(y + z))$. Therefore, $fl(x(y + z))$ is backward stable.

Alternatively, let $x$, $y$, and $z$ be floating point numbers, and consider the computation directly:

$$fl(x(y + z)) = \{x \cdot fl(y + z)\}(1 + \delta_1) = x(y + z)(1 + \delta_2)(1 + \delta_1) = x(y + z)(1 + \delta_1 + \delta_2 + \delta_1\delta_2) \approx x(y + z)(1 + \delta_3),$$

where $\delta_3 = \delta_1 + \delta_2$. Since $\delta_1$ and $\delta_2$ are small, $\delta_1\delta_2$ is neglected.

No. 4.3 Show that the roots of the following polynomials are ill-conditioned, and give reasons for your answers.

(a) $x^3 - 3x^2 + 3x - 1$  (c) $(x - 1)(x - 0.99)(x - 2)$

(a) Consider the problem of solving $x^3 - 3x^2 + 3x - 1 = 0$. The exact roots of this equation are $x = 1, 1, 1$. Let $p(x) = x^3 - 3x^2 + 3x - 1$, and assume the coefficient of the second term is perturbed by $10^{-5}$, giving the perturbed polynomial $f(x) = x^3 - 3.00001x^2 + 3x - 1$. The roots of $f(x)$ are

$x_1 = 1.02185714622108$, $x_2 = 0.98907642688946 + 0.01838999238619i$, and $x_3 = 0.98907642688946 - 0.01838999238619i$.

The relative errors in $x_1$, $x_2$, and $x_3$ are $0.02185714622108$, $0.02138962995158$, and $0.02138962995158$, respectively, while the relative error in the data is only $3.333333333355171 \times 10^{-6}$. Therefore, a small perturbation in the data causes a large difference between the roots of $p(x)$ and the roots of the perturbed polynomial $f(x)$: the roots are ill-conditioned.

(c) Let $p(x) = (x - 1)(x - 0.99)(x - 2) = x^3 - 3.99x^2 + 4.97x - 1.98$. The roots of $p(x)$ are $x = 1$, $x = 0.99$, and $x = 2$. Now perturb the coefficient of the second term by $10^{-5}$, and let $f(x) = x^3 - (3.99 + 10^{-5})x^2 + 4.97x - 1.98$. The roots of this perturbed polynomial are

$x_1 = 1.99996039448653$, $x_2 = 1.00091841077417$, and $x_3 = 0.98911119473931$.

The relative errors in $x_1$, $x_2$, and $x_3$ are $0.01980275673574 \times 10^{-3}$, $0.91841077416599 \times 10^{-3}$, and $0.89778309160866 \times 10^{-3}$, respectively, while the relative error in the data is $2.506265664176820 \times 10^{-6}$. Again, a small perturbation in the data causes a much larger change in the roots, so the roots of $p(x)$ are ill-conditioned.
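The perturbation experiment of No. 4.3(a) is easy to reproduce; this NumPy sketch (ours, using the companion-matrix root finder np.roots) prints the roots of the original and the perturbed polynomial:

```python
import numpy as np

p = [1, -3, 3, -1]          # p(x) = (x - 1)^3, triple root at x = 1
f = [1, -3.00001, 3, -1]    # coefficient of x^2 perturbed by 1e-5

print(np.roots(p))          # three computed roots clustered at 1
print(np.roots(f))          # ~1.02186 and 0.98908 +/- 0.01839i

print(1e-5 / 3)             # relative error in the data: ~3.33e-6
# The roots move by ~2e-2: nearly four orders of magnitude more than
# the perturbation in the data.
```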

No. 4.4 Work out the flop-counts for the following simple matrix operations.

(i) Multiplication of matrices $A$ and $B$ of orders $n \times m$ and $m \times p$, respectively.

(vi) Computation of the matrix $A = \dfrac{uv^T}{u^T v}$, where $u$ and $v$ are $m$-vectors (column vectors).

(vii) Computation of the matrix $B = A - uv^T$, where $A$ and $B$ are two $n \times n$ matrices and $u$ and $v$ are two column vectors.

(i) Let $A_i$ be the $i$th row vector of the matrix $A$, $A_i = (a_{i1}, a_{i2}, \ldots, a_{im})$, and let $B_j$ be the $j$th column vector of the matrix $B$, $B_j = (b_{1j}, b_{2j}, \ldots, b_{mj})^T$. Then

$$A_{n \times m} B_{m \times p} = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_n \end{pmatrix} \begin{pmatrix} B_1 & B_2 & \cdots & B_p \end{pmatrix} = \begin{pmatrix} A_1 B_1 & A_1 B_2 & \cdots & A_1 B_p \\ A_2 B_1 & A_2 B_2 & \cdots & A_2 B_p \\ \vdots & & & \vdots \\ A_n B_1 & A_n B_2 & \cdots & A_n B_p \end{pmatrix}.$$

Let $C = AB$. Then each entry of $C$ can be expressed as $C_{ij} = A_i B_j = \sum_{k=1}^{m} a_{ik} b_{kj}$. To compute $C_{ij}$, $m$ multiplications and $m - 1$ additions are carried out, so the flop-count for one entry is $2m - 1$. Since $C$ has $n \times p$ entries, the total flop-count for the multiplication of $A_{n \times m}$ and $B_{m \times p}$ is $np(2m - 1)$. (A quick numerical check of this count follows.)
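As a sanity check of the $np(2m - 1)$ count in part (i), here is a small Python sketch (ours) that forms the product with an explicit triple loop while counting flops:

```python
import numpy as np

def matmul_flops(A, B):
    """Triple-loop product C = A @ B, counting floating point operations."""
    n, m = A.shape
    _, p = B.shape
    C = np.zeros((n, p))
    flops = 0
    for i in range(n):
        for j in range(p):
            s = A[i, 0] * B[0, j]        # 1 multiplication
            flops += 1
            for k in range(1, m):
                s += A[i, k] * B[k, j]   # 1 multiplication + 1 addition
                flops += 2
            C[i, j] = s
    return C, flops

A, B = np.random.rand(4, 5), np.random.rand(5, 3)
C, flops = matmul_flops(A, B)
print(np.allclose(C, A @ B))         # True
print(flops, 4 * 3 * (2 * 5 - 1))    # 108 108
```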

(vi) First consider the flop-count for computing the outer product $uv^T$:

$$uv^T = \begin{pmatrix} u_1 v_1 & u_1 v_2 & \cdots & u_1 v_m \\ u_2 v_1 & u_2 v_2 & \cdots & u_2 v_m \\ \vdots & & & \vdots \\ u_m v_1 & u_m v_2 & \cdots & u_m v_m \end{pmatrix}.$$

The flop-count for computing $uv^T$ is $m^2$ (one multiplication per entry). Now consider the flop-count for computing the inner product $u^T v$:

$$u^T v = (u_1, u_2, \ldots, u_m) \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{pmatrix} = u_1 v_1 + u_2 v_2 + \cdots + u_m v_m.$$

So the flop-count for computing the inner product $u^T v$ is $2m - 1$. Since each entry of $A = \dfrac{uv^T}{u^T v}$ is expressed as $A_{ij} = \dfrac{1}{u^T v}\, u_i v_j$, the divisions contribute another $m^2$ flops. The total flop-count is therefore $m^2 + 2m - 1 + m^2 = 2m^2 + 2m - 1$.

(vii) From part (vi), the flop-count for computing $uv^T$ is $n^2$ (here $u$ and $v$ are $n$-vectors). Let $a_{ij}$ and $b_{ij}$ be the entries of $A$ and $B$, respectively. Since $b_{ij} = a_{ij} - u_i v_j$ for each entry of $B$, that is,

$$B = \begin{pmatrix} a_{11} - u_1 v_1 & a_{12} - u_1 v_2 & \cdots & a_{1n} - u_1 v_n \\ a_{21} - u_2 v_1 & a_{22} - u_2 v_2 & \cdots & a_{2n} - u_2 v_n \\ \vdots & & & \vdots \\ a_{n1} - u_n v_1 & a_{n2} - u_n v_2 & \cdots & a_{nn} - u_n v_n \end{pmatrix},$$

the subtractions contribute another $n^2$ flops. Therefore, the total flop-count is $n^2 + n^2 = 2n^2$.

No. 4.5 Develop an algorithm to compute the following matrix products. Your algorithm should take advantage of the special structure of the matrices in each case. Give the flop-count and show the storage requirement in each case.

(a) $A$ and $B$ are both lower triangular matrices. (b) $A$ is arbitrary and $B$ is lower triangular.

(a) Let $A$ and $B$ both be $n \times n$ lower triangular matrices, that is,

$$A = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & \cdots & a_{n,n-1} & a_{nn} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} b_{11} & 0 & \cdots & 0 \\ b_{21} & b_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ b_{n1} & \cdots & b_{n,n-1} & b_{nn} \end{pmatrix}.$$

Let $C = AB$. Since the product of two lower triangular matrices is lower triangular, $C$ is lower triangular. Letting $c_{ij}$ denote the entries of $C$, the following is the algorithm for this computation (a runnable sketch follows the flop-count below):

For $i, j = 1, 2, \ldots, n$:
  If $i < j$: $c_{ij} = 0$ (entries in the upper part of $C$).
  If $i = j$: $c_{ii} = a_{ii} b_{ii}$ (diagonal entries of $C$).
  If $i > j$: $c_{ij} = \sum_{k=j}^{i} a_{ik} b_{kj}$ (entries in the lower part of $C$).

The flop-count for the $n$th row of $C$ is $\sum_{k=1}^{n} (2k - 1) = n^2$. The flop-count for the $(n-1)$th row is $\sum_{k=1}^{n-1} (2k - 1) = (n-1)^2$, for the $(n-2)$th row it is $\sum_{k=1}^{n-2} (2k - 1) = (n-2)^2$, and so on; in general, the flop-count for the $(n - l)$th row of $C$ is $\sum_{k=1}^{n-l} (2k - 1) = (n - l)^2$, where $1 \le l \le n - 1$. Therefore, the total flop-count is

$$\sum_{l=0}^{n-1} (n - l)^2 = n^2 + \sum_{l=1}^{n-1} (n - l)^2 = n^2 + \frac{n(n-1)(2n-1)}{6} = \frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n.$$
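Here is a runnable sketch (ours) of the part (a) algorithm. On storage, which the problem also asks about: only the $n(n+1)/2$ lower-triangle entries of each of $A$, $B$, and $C$ actually need to be kept; the sketch uses full arrays only for simplicity.

```python
import numpy as np

def tril_matmul(A, B):
    """C = A @ B for lower triangular A, B: c_ij = sum_{k=j..i} a_ik * b_kj."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):   # only the lower part (i >= j) is nonzero
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(j, i + 1))
    return C

n = 5
A, B = np.tril(np.random.rand(n, n)), np.tril(np.random.rand(n, n))
print(np.allclose(tril_matmul(A, B), A @ B))   # True
```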

(b) Let $A$ be an $n \times n$ arbitrary matrix and $B$ an $n \times n$ lower triangular matrix, so that

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & \cdots & a_{n,n-1} & a_{nn} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} b_{11} & 0 & \cdots & 0 \\ b_{21} & b_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ b_{n1} & \cdots & b_{n,n-1} & b_{nn} \end{pmatrix}.$$

Let $C = AB$ and let $c_{ij}$ denote the entries of $C$. Since $b_{kj} = 0$ for $k < j$, the following is the algorithm for this computation (a runnable sketch follows below):

For $i, j = 1, 2, \ldots, n$: $c_{ij} = \sum_{k=j}^{n} a_{ik} b_{kj}$.

Since each entry of the first column of $C$ takes $2n - 1$ flops, the flop-count for the first column is $n(2n - 1)$. Each entry of the second column takes $2n - 3$ flops, so the second column costs $n(2n - 3)$. Each entry of the third column takes $2n - 5$ flops, so the third column costs $n(2n - 5)$, and so on; in general, the $l$th column of $C$ costs $n(2n - 2l + 1)$ flops. Therefore, the total flop-count is

$$\sum_{l=1}^{n} n(2n - 2l + 1) = \sum_{l=1}^{n} (2n^2 - 2nl + n) = n^3.$$
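And a runnable sketch (ours) of the part (b) algorithm, under the same conventions:

```python
import numpy as np

def matmul_tril_B(A, B):
    """C = A @ B for lower triangular B: c_ij = sum_{k=j..n-1} a_ik * b_kj."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(j, n))
    return C

n = 5
A, B = np.random.rand(n, n), np.tril(np.random.rand(n, n))
print(np.allclose(matmul_tril_B(A, B), A @ B))   # True
```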

No. 4.6 A square matrix $A = (a_{ij})$ is said to be a band matrix of bandwidth $2k + 1$ if $a_{ij} = 0$ whenever $|i - j| > k$. Develop an algorithm to compute the product $C = AB$, where $A$ is arbitrary and $B$ is a band matrix of bandwidth 3, taking advantage of the structure of the matrix $B$. Overwrite $A$ with $AB$ and give the flop-count.

Let $A$ and $B$ be $n \times n$ matrices. Setting the bandwidth $2k + 1 = 3$, we have $k = 1$. By the definition of a band matrix, $B$ is tridiagonal:

$$B = \begin{pmatrix} b_{11} & b_{12} & 0 & \cdots & 0 \\ b_{21} & b_{22} & b_{23} & \cdots & 0 \\ 0 & b_{32} & \ddots & \ddots & \vdots \\ \vdots & & \ddots & b_{n-1,n-1} & b_{n-1,n} \\ 0 & \cdots & 0 & b_{n,n-1} & b_{nn} \end{pmatrix}.$$

Now let $c_{ij}$ be an entry of the matrix $C = AB$, with $A$ arbitrary and $B$ as above. First consider the first column of $C$. Since only the first two entries of the first column of $B$ are nonzero, the first column of $C$ can be expressed as

$$c_{i1} = \sum_{k=1}^{2} a_{ik} b_{k1}.$$

Similarly, the $n$th column of $C$ is

$$c_{in} = \sum_{k=n-1}^{n} a_{ik} b_{kn},$$

and for the column indices $j = 2, 3, \ldots, n - 1$ we have

$$c_{ij} = \sum_{k=j-1}^{j+1} a_{ik} b_{kj}.$$

Therefore, the following is the algorithm for this computation (a runnable sketch follows below):

For $i = 1, 2, \ldots, n$:
  For $j = 1$: $c_{i1} = \sum_{k=1}^{2} a_{ik} b_{k1}$.
  For $j = 2, 3, \ldots, n - 1$: $c_{ij} = \sum_{k=j-1}^{j+1} a_{ik} b_{kj}$.
  For $j = n$: $c_{in} = \sum_{k=n-1}^{n} a_{ik} b_{kn}$.

Each element of the first and the $n$th columns takes 3 flops, so these two columns cost $6n$ flops in total. Since each of the remaining entries takes 5 flops, the entries outside the first and last columns cost $5n(n - 2)$ flops. Therefore, the total flop-count for computing $C$ is $6n + 5n(n - 2) = 5n^2 - 4n$.
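Finally, a runnable sketch (ours) that overwrites $A$ with $AB$ as the problem requests. Since $c_{ij}$ depends on $a_{i,j-1}$, $a_{ij}$, and $a_{i,j+1}$, each row of $A$ is copied before being overwritten, so the overwrite needs only $O(n)$ extra storage:

```python
import numpy as np

def overwrite_with_AB_tridiag(A, B):
    """Overwrite A with A @ B, where B is tridiagonal (bandwidth 3)."""
    n = A.shape[0]
    for i in range(n):
        row = A[i, :].copy()   # old row i is needed for every column j
        for j in range(n):
            lo, hi = max(j - 1, 0), min(j + 1, n - 1)
            A[i, j] = sum(row[k] * B[k, j] for k in range(lo, hi + 1))
    return A

n = 6
B = (np.diag(np.random.rand(n)) + np.diag(np.random.rand(n - 1), 1)
     + np.diag(np.random.rand(n - 1), -1))
A = np.random.rand(n, n)
expected = A @ B                  # reference result, taken before overwriting
print(np.allclose(overwrite_with_AB_tridiag(A, B), expected))   # True
```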