Lecture 9. Errors in Solving Linear Systems. J. Chaudhry (Zeb), Department of Mathematics and Statistics, University of New Mexico


Lecture 9: Errors in Solving Linear Systems. J. Chaudhry (Zeb), Department of Mathematics and Statistics, University of New Mexico. Math/CS 375.

What we'll do: norms and condition number; error and residual of a linear system; the MATLAB backslash operator.

Geometric Interpretation of Singularity

Consider a 2x2 system describing two lines that intersect:

    y = -2x + 6
    y = -(1/2)x + 1

The matrix form of this system is

    [ 2    1 ] [x1]   [ 6 ]
    [ 1/2  1 ] [x2] = [ 1 ]

The equations for two parallel but non-intersecting lines are

    [ 2  1 ] [x1]   [ 6 ]
    [ 2  1 ] [x2] = [ 5 ]

Here the coefficient matrix is singular (rank(A) = 1), and the system is inconsistent.
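The singular and nonsingular cases above can be distinguished numerically. A minimal sketch (my own illustration, not from the slides; the names are my choosing) using the 2x2 determinant ad - bc:

```python
# Illustrative sketch: distinguish the nonsingular and singular coefficient
# matrices above via the 2x2 determinant ad - bc.
def det2(A):
    """Determinant of a 2x2 matrix stored as [[a, b], [c, d]]."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

intersecting = [[2.0, 1.0], [0.5, 1.0]]  # distinct slopes: unique intersection
parallel     = [[2.0, 1.0], [2.0, 1.0]]  # identical rows: rank 1, singular

print(det2(intersecting))  # 1.5 (nonsingular)
print(det2(parallel))      # 0.0 (singular)
```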

Geometric Interpretation of Singularity

The equations for two parallel and coincident lines are

    [ 2  1 ] [x1]   [ 6 ]
    [ 2  1 ] [x2] = [ 6 ]

The equations for two nearly parallel lines are

    [ 2    1 ] [x1]   [  6  ]
    [ 2+δ  1 ] [x2] = [ 6+δ ]

Geometric Interpretation of Singularity

[Figure: four plots of line pairs illustrating the cases: (1) A and b consistent, A nonsingular; (2) A and b consistent, A singular; (3) A and b inconsistent, A singular; (4) A and b consistent, A ill-conditioned.]

Effect of Perturbations to b

Consider the solution of a 2x2 system where

    b = [  1  ]
        [ 2/3 ]

One expects that the exact solutions to

    Ax = [  1  ]   and   Ax = [   1    ]
         [ 2/3 ]              [ 0.6667 ]

will be different. Should these solutions be a lot different or a little different?

Norms

Vectors:

    ||x||_p = ( |x1|^p + |x2|^p + ... + |xn|^p )^(1/p)

    ||x||_1 = |x1| + |x2| + ... + |xn| = sum_{i=1}^{n} |x_i|

    ||x||_inf = max( |x1|, |x2|, ..., |xn| ) = max_i |x_i|
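The vector norm definitions above can be written out directly. A plain-Python sketch (my own illustration, not from the slides):

```python
# Sketch of the vector norms defined above.
def pnorm(x, p):
    """The p-norm (|x1|^p + ... + |xn|^p)^(1/p)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def maxnorm(x):
    """The infinity norm max_i |x_i|."""
    return max(abs(xi) for xi in x)

x = [3.0, -4.0]
print(pnorm(x, 1))  # 7.0  (1-norm: 3 + 4)
print(pnorm(x, 2))  # 5.0  (2-norm: sqrt(9 + 16))
print(maxnorm(x))   # 4.0  (infinity norm)
```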

Matrices:

    ||A|| = max_{x ≠ 0} ||Ax|| / ||x||

    ||A||_p = max_{x ≠ 0} ||Ax||_p / ||x||_p

    ||A||_1 = max_{1≤j≤n} sum_{i=1}^{m} |a_ij|     (maximum absolute column sum)

    ||A||_inf = max_{1≤i≤m} sum_{j=1}^{n} |a_ij|   (maximum absolute row sum)

These matrix norms satisfy the so-called consistency relation: for a matrix A and vector x,

    ||Ax|| <= ||A|| ||x||
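The 1-norm and infinity norm have the explicit column-sum and row-sum formulas above, which makes them easy to compute directly. A sketch of my own (not from the slides), including a spot check of the consistency relation:

```python
# Sketch: matrix 1-norm and infinity norm from the column-sum and row-sum
# formulas, plus a spot check of ||Ax|| <= ||A|| ||x||.
def mat_one_norm(A):
    """Maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def mat_inf_norm(A):
    """Maximum absolute row sum."""
    return max(sum(abs(aij) for aij in row) for row in A)

A = [[2.0, 1.0], [0.5, 1.0]]
print(mat_one_norm(A))  # 2.5  (columns sum to 2.5 and 2.0)
print(mat_inf_norm(A))  # 3.0  (rows sum to 3.0 and 1.5)

# Consistency in the infinity norm: ||Ax|| <= ||A|| ||x||
x = [1.0, -2.0]
Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
assert max(abs(v) for v in Ax) <= mat_inf_norm(A) * max(abs(v) for v in x)
```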

Effect of Perturbations to b

Perturb b with δb such that

    ||δb|| / ||b|| << 1

The perturbed system is

    A(x + δx_b) = b + δb

The perturbations satisfy

    A δx_b = δb

Analysis shows (see the next two slides for the proof) that

    ||δx_b|| / ||x|| <= ||A|| ||A^-1|| ||δb|| / ||b||

Thus, the effect of the perturbation is small if ||A|| ||A^-1|| is small:

    if ||A|| ||A^-1|| ~ 1, then ||δx_b|| / ||x|| << 1
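The bound can be checked numerically on a small system. A sketch of my own (not from the slides; the 2x2 system and perturbation are hand-built for illustration):

```python
# Numeric spot check of the bound
#   ||dx_b|| / ||x|| <= ||A|| ||A^-1|| ||db|| / ||b||
# in the infinity norm, using a small hand-built 2x2 system.
def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule (assumes a nonzero determinant)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def vnorm(v):
    return max(abs(vi) for vi in v)

def rownorm(A):
    return max(sum(abs(v) for v in row) for row in A)

A    = [[2.0, 1.0], [0.5, 1.0]]
Ainv = [[1.0 / 1.5, -1.0 / 1.5], [-0.5 / 1.5, 2.0 / 1.5]]  # A^-1 (det = 1.5)

b  = [6.0, 1.0]
db = [1e-4, -1e-4]                     # small perturbation of b
x  = solve2(A, b)
xp = solve2(A, [b[0] + db[0], b[1] + db[1]])
dx = [xpi - xi for xpi, xi in zip(xp, x)]

lhs = vnorm(dx) / vnorm(x)
rhs = rownorm(A) * rownorm(Ainv) * vnorm(db) / vnorm(b)
assert lhs <= rhs   # the perturbation bound holds
```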

Effect of Perturbations to b (Proof)

Let x + δx_b be the exact solution to the perturbed system

    A(x + δx_b) = b + δb        (1)

Expand:

    Ax + A δx_b = b + δb

Subtract Ax from the left side and b from the right side, since Ax = b:

    A δx_b = δb

Left-multiply by A^-1:

    δx_b = A^-1 δb              (2)

Effect of Perturbations to b (Proof, p. 2)

Take the norm of equation (2):

    ||δx_b|| = ||A^-1 δb||

Applying the consistency requirement of matrix norms:

    ||δx_b|| <= ||A^-1|| ||δb||        (3)

Similarly, Ax = b gives b = Ax, and

    ||b|| <= ||A|| ||x||               (4)

Rearrangement of equation (4) yields

    1 / ||x|| <= ||A|| / ||b||         (5)

Effect of Perturbations to b (Proof)

Multiply equation (3) by equation (5) to get

    ||δx_b|| / ||x|| <= ||A|| ||A^-1|| ||δb|| / ||b||        (6)

Summary: If x + δx_b is the exact solution to the perturbed system

    A(x + δx_b) = b + δb

then

    ||δx_b|| / ||x|| <= ||A|| ||A^-1|| ||δb|| / ||b||

Effect of Perturbations to A

Perturb A with δA such that

    ||δA|| / ||A|| << 1

The perturbed system is

    (A + δA)(x + δx_A) = b

Analysis shows that

    ||δx_A|| / ||x + δx_A|| <= ||A|| ||A^-1|| ||δA|| / ||A||

Thus, the effect of the perturbation is small if ||A|| ||A^-1|| is small:

    if ||A|| ||A^-1|| ~ 1, then ||δx_A|| / ||x + δx_A|| << 1

Effect of Perturbations to both A and b

Perturb both A with δA and b with δb such that

    ||δA|| / ||A|| << 1   and   ||δb|| / ||b|| << 1

The perturbed system is

    (A + δA)(x + δx) = b + δb

Analysis shows that

    ||δx|| / ||x + δx|| <= [ ||A|| ||A^-1|| / (1 - ||A|| ||A^-1|| ||δA||/||A||) ] ( ||δA||/||A|| + ||δb||/||b|| )

Thus, the effect of the perturbation is small if ||A|| ||A^-1|| is small:

    if ||A|| ||A^-1|| ~ 1, then ||δx|| / ||x + δx|| << 1

Condition number of A

The condition number

    κ(A) = ||A|| ||A^-1||

indicates the sensitivity of the solution to perturbations in A and b. The condition number can be measured with any p-norm, and is always in the range

    1 <= κ(A) <= ∞

κ(A) is a mathematical property of A: any algorithm will produce a solution that is sensitive to perturbations in A and b if κ(A) is large. In exact arithmetic a matrix is either singular or nonsingular, and κ(A) = ∞ for a singular matrix. κ(A) indicates how close A is to being numerically singular; a matrix with large κ is said to be ill-conditioned.
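For a 2x2 matrix the condition number can be computed from the closed-form inverse. A sketch of my own (not from the slides), in the infinity norm:

```python
# Sketch: kappa(A) = ||A|| ||A^-1|| in the infinity norm for a 2x2 matrix,
# computed from the closed-form inverse.
def inv2(A):
    """Inverse of a 2x2 matrix (assumes a nonzero determinant)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def rownorm(A):
    """Infinity norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

def cond_inf(A):
    return rownorm(A) * rownorm(inv2(A))

print(cond_inf([[2.0, 1.0], [0.5, 1.0]]))         # about 5: well conditioned
print(cond_inf([[2.0, 1.0], [2.0 + 1e-8, 1.0]]))  # huge (~1e9): nearly singular
```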

Computational Stability

In practice, applying Gaussian elimination with partial pivoting and back substitution to Ax = b gives the exact solution, x̂, to the nearby problem

    (A + E) x̂ = b,   where   ||E|| <= ε_m ||A||

"Gaussian elimination with partial pivoting and back substitution gives exactly the right answer to nearly the right question." — Trefethen and Bau

Computational Stability

An algorithm that gives the exact answer to a problem that is near to the original problem is said to be backward stable. Algorithms that are not backward stable tend to amplify roundoff errors present in the original data. As a result, the solution produced by an algorithm that is not backward stable is not necessarily the solution to a problem that is close to the original problem.

Gaussian elimination without partial pivoting is not backward stable for arbitrary A. If A is symmetric and positive definite, then Gaussian elimination without pivoting is backward stable.

The Residual

Let x̂ be the numerical solution to Ax = b. In general x̂ ≠ x (where x is the exact solution) because of roundoff. The residual measures how close x̂ is to satisfying the original equation:

    r = b - A x̂

It is not hard to show that

    ||x̂ - x|| / ||x|| <= κ(A) ||r|| / ||b||

A small ||r|| does not guarantee a small ||x̂ - x||. If κ(A) is large, the x̂ returned by Gaussian elimination and back substitution (or any other solution method) is not guaranteed to be anywhere near the true solution to Ax = b.
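The gap between residual and error is easy to see on an ill-conditioned example. A sketch of my own (not from the slides; the system and the poor approximation are hand-built):

```python
# Sketch: for an ill-conditioned system a tiny residual does not imply a
# small error. Here x = [1, 1] is exact, xhat is far from it, yet the
# residual r = b - A*xhat is on the order of 1e-4.
A    = [[1.0, 1.0], [1.0, 1.0001]]
b    = [2.0, 2.0001]
xhat = [2.0, 0.0]                      # poor approximate solution
x    = [1.0, 1.0]                      # exact solution

r = [bi - sum(aij * xj for aij, xj in zip(row, xhat))
     for row, bi in zip(A, b)]
err = max(abs(xh - xi) for xh, xi in zip(xhat, x))

print(max(abs(v) for v in r))  # about 1e-4: small residual
print(err)                     # 1.0: large error
```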

Rules of Thumb

Applying Gaussian elimination with partial pivoting and back substitution to Ax = b yields a numerical solution x̂ such that the residual vector r = b - A x̂ is small even if κ(A) is large.

If A and b are stored to machine precision ε_m, the numerical solution to Ax = b by any variant of Gaussian elimination is correct to d digits, where

    d ≈ -log10(ε_m) - log10(κ(A))

Rules of Thumb

    d ≈ -log10(ε_m) - log10(κ(A))

Example: MATLAB computations have ε_m ≈ 2.2 × 10^-16. For a system with κ(A) ≈ 10^10, the elements of the solution vector will have

    d ≈ -log10(2.2 × 10^-16) - log10(10^10) ≈ 15 - 10 = 5

correct (decimal) digits.
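The rule-of-thumb arithmetic above, spelled out (a sketch of mine using the slide's values):

```python
import math

# Rule of thumb: d = -log10(eps_m) - log10(kappa), values from the slide.
eps_m = 2.2e-16   # MATLAB machine precision
kappa = 1e10      # condition number of the example system
d = -math.log10(eps_m) - math.log10(kappa)
print(d)  # about 5.66, i.e. roughly 5 correct decimal digits
```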

Summary of Limits to Numerical Solution of Ax = b

1. κ(A) indicates how close A is to being numerically singular.
2. If κ(A) is large, A is ill-conditioned, and even the best numerical algorithms will produce a solution x̂ that cannot be guaranteed to be close to the true solution x.
3. In practice, Gaussian elimination with partial pivoting and back substitution produces a solution with a small residual r = b - A x̂ even if κ(A) is large.

The Backslash Operator

Consider the scalar equation

    5x = 20  =>  x = (5)^-1 · 20

The extension to a system of equations is, of course,

    Ax = b  =>  x = A^-1 b

where A^-1 b is the formal solution to Ax = b. In MATLAB notation the system is solved with

    x = A\b

The Backslash Operator

Given an n × n matrix A and an n × 1 vector b, the \ operator performs a sequence of tests on the matrix A. MATLAB attempts to solve the system with the method that gives the least roundoff and the fewest operations. When A is an n × n matrix:

1. MATLAB examines A to see if it is a permutation of a triangular system. If so, the appropriate triangular solve is used.
2. MATLAB examines A to see if it appears to be symmetric and positive definite. If so, MATLAB attempts a Cholesky factorization and two triangular solves.
3. If the Cholesky factorization fails, or if A does not appear to be symmetric, MATLAB attempts an LU factorization and two triangular solves.