
Numerical Analysis FMN011
Lecture 4
Carmen Arévalo, Lund University
carmen@maths.lth.se

Linear Systems

$Ax = b$: $A$ is an $n \times n$ matrix, $b$ is a given $n$-vector, $x$ is the unknown solution $n$-vector.

An $n \times n$ matrix $A$ is non-singular (invertible) if it has any one of the following equivalent properties:

- $A$ has an inverse
- $\det(A) \neq 0$
- $\operatorname{rank}(A) = n$
- The unique solution of $Ax = 0$ is $x = 0$
- The system $Ax = b$ has a unique solution

If $A$ is singular, then the system $Ax = b$ has either infinitely many solutions or no solution at all.

Solving Triangular Linear Systems

Upper triangular matrix: $a_{ij} = 0$ if $i > j$.

$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\
0 & a_{22} & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\
0 & 0 & a_{33} & \cdots & a_{3,n-1} & a_{3n} \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
0 & 0 & 0 & \cdots & 0 & a_{nn}
\end{pmatrix}$$

Back substitution

function x = back(A,b)
% x = back(A,b)
% Performs back substitution on the system Ax = b.
% A is assumed to be in upper triangular form.
n = length(b);
x(n,1) = b(n)/A(n,n);
for i = n-1:-1:1
    x(i,1) = (b(i) - A(i,i+1:n)*x(i+1:n,1))/A(i,i);
end
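A quick usage sketch (the matrix and right-hand side below are made-up values, not from the slides):

A = [2 1 -1; 0 3 2; 0 0 4];   % made-up upper triangular test matrix
b = [1; 7; 8];
x = back(A, b)                % returns [1; 1; 2], and agrees with A\b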

Forward Substitution

Lower triangular matrix: $a_{ij} = 0$ if $i < j$.

function x = forward(A,b)
% x = forward(A,b)
% Performs forward substitution on the system Ax = b.
% A is assumed to be in lower triangular form.
n = length(b);
x(1,1) = b(1)/A(1,1);
for i = 2:n
    x(i,1) = (b(i) - A(i,1:i-1)*x(1:i-1,1))/A(i,i);
end
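A matching sketch for forward (again with made-up values):

L = [4 0 0; 2 3 0; -1 1 2];   % made-up lower triangular test matrix
b = [8; 7; 3];
x = forward(L, b)             % returns [2; 1; 2], and agrees with L\b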

Computational complexity of substitution

$$\sum_{k=1}^{N} k = \frac{N(N+1)}{2}$$

Back and forward substitution have the same number of FLOPs (a FLOP is one sum, difference, multiplication, division, or square root).

Number of FLOPs:

$$1 + \sum_{i=2}^{n} \big[(i-1) + i\big] = 1 + \sum_{i=1}^{n-1} (2i+1) = n^2$$

Elementary transformations

Equivalent systems have the same solution. Operations on equations that yield an equivalent system:

- Interchanges (the order of equations can be changed)
- Scaling (an equation can be multiplied by a nonzero constant)
- Replacement (a multiple of another equation can be added)

Operations on rows that yield an equivalent system:

- Row interchanges
- Multiplication of a row by a nonzero constant
- $\text{row}_r \leftarrow \text{row}_r - m_{rp}\,\text{row}_p$

Gaussian elimination

Converts a system into one with an upper triangular matrix by means of elementary transformations.

$$\begin{aligned}
2x_1 + 4x_2 - 6x_3 &= -4 \\
x_1 + 5x_2 + 3x_3 &= 10 \\
3x_1 + 9x_2 + 6x_3 &= 15
\end{aligned}$$

Augmented matrix (pivot 2):

$$\left(\begin{array}{rrr|r}
2 & 4 & -6 & -4 \\
1 & 5 & 3 & 10 \\
3 & 9 & 6 & 15
\end{array}\right)
\qquad m_{21} = 1/2, \quad m_{31} = 3/2$$

The $m_{ij}$ are called multipliers. Replace rows 2 and 3 (new pivot 3, $m_{32} = 1$):

$$\left(\begin{array}{rrr|r}
2 & 4 & -6 & -4 \\
0 & 3 & 6 & 12 \\
0 & 3 & 15 & 21
\end{array}\right)$$

Replace row 3:

$$\left(\begin{array}{rrr|r}
2 & 4 & -6 & -4 \\
0 & 3 & 6 & 12 \\
0 & 0 & 9 & 9
\end{array}\right)$$

The system matrix is now upper triangular.

Solving a system of equations

With an upper triangular matrix, solve by back substitution. To solve a system:

1. Perform a Gaussian elimination
2. Perform a back substitution

In the previous example:

$$x_3 = 9/9 = 1, \qquad x_2 = (12 - 6 \cdot 1)/3 = 2, \qquad x_1 = (-4 - 4 \cdot 2 + 6 \cdot 1)/2 = -3$$

Pivoting: if during the process a pivot is zero, exchange rows. If this is not possible because all elements below it are also zero, the system is singular.

Operation count for Gauss elimination

$$\sum_{k=1}^{N} k^2 = \frac{N(N+1)(2N+1)}{6}$$

for j = 1:n-1
    for i = j+1:n
        m = a(i,j)/a(j,j);
        b(i) = b(i) - m*b(j);
        for k = j+1:n
            a(i,k) = a(i,k) - m*a(j,k);
        end
    end
end
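As a sketch (not part of the original slides), the elimination loop above can be combined with the back function defined earlier to solve the worked example end to end:

% Elimination followed by back substitution on the example system.
a = [2 4 -6; 1 5 3; 3 9 6];
b = [-4; 10; 15];
n = length(b);
for j = 1:n-1
    for i = j+1:n
        m = a(i,j)/a(j,j);
        b(i) = b(i) - m*b(j);
        for k = j+1:n
            a(i,k) = a(i,k) - m*a(j,k);
        end
    end
end
% The loop never zeroes the subdiagonal entries of a, but back only
% reads on and above the diagonal, so they are harmless.
x = back(a, b)   % returns [-3; 2; 1]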

Number of FLOPs for the elimination step:

$$\sum_{j=1}^{n-1} (n-j)\big(3 + 2(n-j)\big) = \frac{2}{3}n^3 + \frac{1}{2}n^2 - \frac{7}{6}n$$

Properties of vector norms

1. $\|x\| > 0$ if $x \neq 0$
2. $\|cx\| = |c|\,\|x\|$ for any scalar $c$
3. $\|x + y\| \leq \|x\| + \|y\|$

Vector norms

The $p$-norm of a vector:

$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$$

- 1-norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$
- 2-norm: $\|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}$ (Euclidean)
- $\infty$-norm: $\|x\|_\infty = \max_i |x_i|$

$$\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2 \leq n\,\|x\|_\infty$$
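A small MATLAB sketch (not from the slides) that evaluates these norms with the built-in norm and illustrates the chain of inequalities; the vector is a made-up example:

x = [3; -4; 12];
n1   = norm(x, 1);     % 19
n2   = norm(x, 2);     % 13
ninf = norm(x, inf);   % 12
% chain for n = 3: ninf <= n2 <= n1 <= sqrt(3)*n2 <= 3*ninf,
% i.e. 12 <= 13 <= 19 <= 22.5 <= 36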

Matrix norms induced by vector norms

$$\|A\| = \max_{\|x\| = 1} \|Ax\|$$

Properties:

1. $\|A\| > 0$ if $A \neq 0$
2. $\|cA\| = |c|\,\|A\|$ for any scalar $c$
3. $\|A + B\| \leq \|A\| + \|B\|$
4. $\|AB\| \leq \|A\|\,\|B\|$

Norms of some interesting matrices

$$\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|, \qquad \|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$$

- $\|0\| = 0$
- $\|I\| = 1$
- $\|P\| = 1$ if $P$ is a permutation of the rows of $I$
- $\|D\| = \max_i |d_i|$ if $D$ is diagonal with entries $d_i$
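The column-sum and row-sum formulas can be checked directly in MATLAB (a sketch with a made-up matrix):

A = [1 -2; 3 4];
max(sum(abs(A), 1))   % largest column sum: 6, equals norm(A, 1)
max(sum(abs(A), 2))   % largest row sum: 7, equals norm(A, inf)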

Ill conditioning

$Ax = b$ is ill conditioned if small perturbations in the coefficients of $A$ or $b$ produce large changes in $x$.

$$\begin{aligned}
34x_1 + 55x_2 &= 21 \\
55x_1 + 89x_2 &= 34
\end{aligned}
\qquad x_1 = -1, \quad x_2 = 1$$

Add a 1% error to entry $a_{21}$:

$$\begin{aligned}
34x_1 + 55x_2 &= 21 \\
55.55x_1 + 89x_2 &= 34
\end{aligned}
\qquad \hat{x}_1 = 0.03419, \quad \hat{x}_2 = 0.3607$$

The relative error (in max-norm) is

$$\frac{\|x - \hat{x}\|_\infty}{\|x\|_\infty} = \frac{|-1 - 0.034188|}{1} = 103.4\%$$

while

$$\begin{aligned}
34(0.034188) + 55(0.36068) &= 21 \\
55(0.034188) + 89(0.36068) &= 33.9812
\end{aligned}$$

so the relative residual is $\|b - A\hat{x}\|_\infty / \|b\|_\infty \approx 0.05\%$.

The system is ill conditioned ($A$ is ill conditioned). A small residual does not imply a small error!
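The whole example can be reproduced in a few MATLAB lines (a sketch using the built-in backslash solver):

A = [34 55; 55 89];  b = [21; 34];
x = A \ b;                           % exact solution [-1; 1]
Ap = A;  Ap(2,1) = 55.55;            % 1% perturbation of a21
xhat = Ap \ b;                       % approximately [0.0342; 0.3607]
norm(x - xhat, inf)/norm(x, inf)     % relative error   ~ 1.03
norm(b - A*xhat, inf)/norm(b, inf)   % relative residual ~ 5.5e-4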

Error magnification factor

The error magnification factor for $Ax = b$ is the ratio between the relative error norm and the relative residual norm:

$$\text{emf} = \frac{\|x - \hat{x}\| / \|x\|}{\|b - A\hat{x}\| / \|b\|}$$

It tells us how much the error is affected by the size of the residual. The maximum possible error magnification factor, over all right-hand sides $b$, is called the condition number.

Condition number

Condition number of a matrix relative to a norm $\|\cdot\|_p$:

$$\kappa_p(A) = \|A\|_p\,\|A^{-1}\|_p$$

If $\kappa(A) \approx 10^k$, about $k$ significant digits will be lost in solving $Ax = b$. In the previous example $k = 4$, so we need input data with at least 5 correct significant digits.

- $\kappa(A) \geq 1$
- $\kappa(I) = 1$
- $\kappa(P) = 1$ if $P$ is a permutation of the rows of $I$
- $\kappa(cA) = \kappa(A)$ for any scalar $c \neq 0$
- $\kappa(D) = \max_i |d_i| / \min_i |d_i|$ for diagonal $D$
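For the example matrix, the condition number can be checked with MATLAB's cond (a quick sketch):

A = [34 55; 55 89];          % det(A) = 1, so inv(A) = [89 -55; -55 34]
cond(A, inf)                 % norm(A,inf)*norm(inv(A),inf) = 144*144
                             % = 20736, on the order of 10^4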

Swamping

Exact solution:

$$\begin{pmatrix} 10^{-20} & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}
\qquad y = \frac{1 - 4 \cdot 10^{-20}}{1 - 2 \cdot 10^{-20}} \approx 1, \quad x = \frac{2}{1 - 2 \cdot 10^{-20}} \approx 2$$

With double precision, eliminating without pivoting gives $y = 1$, $x = 0$.

After row exchange:

$$\begin{pmatrix} 1 & 2 \\ 10^{-20} & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \end{pmatrix}
\qquad y = 1, \quad x = 2$$

Multipliers should be small (pivots should be large).
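A sketch of the swamping effect in double precision, with the naive elimination coded by hand; MATLAB's backslash pivots and is not affected:

A = [1e-20 1; 1 2];  b = [1; 4];
m = A(2,1)/A(1,1);                        % huge multiplier: 1e20
y = (b(2) - m*b(1))/(A(2,2) - m*A(1,2));  % 2 - 1e20 rounds to -1e20, so y = 1
x = (b(1) - A(1,2)*y)/A(1,1)              % swamped: x = 0 instead of 2
A \ b                                     % partial pivoting recovers [2; 1]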