Numerical Analysis FMN011
Lecture 4
Carmen Arévalo, Lund University
carmen@maths.lth.se
Linear Systems

Ax = b, where A is an n × n matrix, b is a given n-vector, and x is the unknown solution n-vector.

An n × n matrix A is non-singular (invertible) if it has any one of the following equivalent properties:
- A has an inverse
- $\det(A) \neq 0$
- $\operatorname{rank}(A) = n$
- The unique solution of Ax = 0 is x = 0
- The system Ax = b has a unique solution

If A is singular, then the system Ax = b has either infinitely many solutions or no solution at all.
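These equivalences can be sketched on a small 2 × 2 case (an illustrative example of my own, not from the slides): a nonzero determinant guarantees a unique solution, which Cramer's rule produces directly.

```python
# Hypothetical 2x2 illustration: A is nonsingular iff det(A) != 0,
# and then Ax = b has a unique solution (here via Cramer's rule).
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def solve2(A, b):
    d = det2(A)
    if d == 0:
        raise ValueError("A is singular: no unique solution")
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / d
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / d
    return [x1, x2]

A = [[2.0, 1.0], [1.0, 3.0]]   # det = 5 != 0, so A is nonsingular
b = [3.0, 4.0]
x = solve2(A, b)               # unique solution [1.0, 1.0]
```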
Solving Triangular Linear Systems

Upper triangular matrix: $a_{ij} = 0$ if $i > j$.

$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1,n-1} & a_{1n} \\
0 & a_{22} & a_{23} & \cdots & a_{2,n-1} & a_{2n} \\
0 & 0 & a_{33} & \cdots & a_{3,n-1} & a_{3n} \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
0 & 0 & 0 & \cdots & 0 & a_{nn}
\end{pmatrix}$$
Back substitution

function x = back(A,b)
% x = back(A,b)
% Performs back substitution on the system Ax=b.
% A is assumed to be upper triangular.
n = length(b);
x(n,1) = b(n)/A(n,n);
for i = n-1:-1:1
    x(i,1) = (b(i) - A(i,i+1:n)*x(i+1:n,1))/A(i,i);
end
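As a cross-check of the MATLAB routine, here is a Python transcription of the same back-substitution loop (names and list-based representation are my own):

```python
def back(A, b):
    """Back substitution for an upper triangular system Ax = b.
    A is a list of row lists, b a list; returns x as a list."""
    n = len(b)
    x = [0.0] * n
    x[n-1] = b[n-1] / A[n-1][n-1]
    for i in range(n-2, -1, -1):
        # subtract the already-computed unknowns, then divide by the pivot
        s = sum(A[i][j] * x[j] for j in range(i+1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

On the triangular system produced later in this lecture, `back([[2,4,-6],[0,3,6],[0,0,9]], [-4,12,9])` returns `[-3.0, 2.0, 1.0]`.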
Forward Substitution

Lower triangular matrix: $a_{ij} = 0$ if $i < j$.

function x = forward(A,b)
% x = forward(A,b)
% Performs forward substitution on the system Ax=b.
% A is assumed to be lower triangular.
n = length(b);
x(1,1) = b(1)/A(1,1);
for i = 2:n
    x(i,1) = (b(i) - A(i,1:i-1)*x(1:i-1,1))/A(i,i);
end
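The analogous Python sketch of forward substitution (again, names are mine) runs the loop top-down instead of bottom-up:

```python
def forward(A, b):
    """Forward substitution for a lower triangular system Ax = b."""
    n = len(b)
    x = [0.0] * n
    x[0] = b[0] / A[0][0]
    for i in range(1, n):
        # subtract contributions of x_1 .. x_{i-1}, divide by the pivot
        s = sum(A[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / A[i][i]
    return x
```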
Computational complexity of substitution

Useful identity: $\sum_{k=1}^{N} k = \frac{N(N+1)}{2}$

Back and forward substitution have the same number of FLOPs (a FLOP is a sum, difference, multiplication, division, or square root).

Number of FLOPs:
$$1 + \sum_{i=2}^{n} \left[(i-1) + i\right] = 1 + \sum_{i=2}^{n} (2i - 1) = n^2$$
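The n² count can be verified by simply tallying the operations the back-substitution loop performs (a sketch; the counter and its breakdown are my own):

```python
def back_sub_flops(n):
    """Count FLOPs performed by back substitution on an n x n
    upper triangular system."""
    flops = 1                        # the division computing x_n
    for i in range(n - 1, 0, -1):    # rows n-1 down to 1 (1-indexed)
        k = n - i                    # length of the dot product
        flops += k                   # multiplications
        flops += k - 1               # additions inside the dot product
        flops += 1                   # subtraction from b_i
        flops += 1                   # division by the diagonal entry
    return flops

for n in (1, 2, 5, 10, 50):
    assert back_sub_flops(n) == n * n
```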
Elementary transformations

Equivalent systems have the same solution. Operations on equations that yield an equivalent system:
- Interchanges (the order of the equations can be changed)
- Scaling (an equation can be multiplied by a nonzero constant)
- Replacement (a multiple of another equation can be added to an equation)

Operations on rows that yield an equivalent system:
- Row interchanges
- Multiplication of a row by a nonzero constant
- Replacement: $\text{row}_r \leftarrow \text{row}_r - m_{rp}\,\text{row}_p$
Gaussian elimination

Converts a system into one with an upper triangular matrix by means of elementary transformations.

$$\begin{aligned}
2x_1 + 4x_2 - 6x_3 &= -4 \\
x_1 + 5x_2 + 3x_3 &= 10 \\
3x_1 + 9x_2 + 6x_3 &= 15
\end{aligned}$$

Augmented matrix, with pivot $a_{11} = 2$:

$$\left(\begin{array}{ccc|c}
2 & 4 & -6 & -4 \\
1 & 5 & 3 & 10 \\
3 & 9 & 6 & 15
\end{array}\right)
\qquad m_{21} = 1/2,\quad m_{31} = 3/2$$
The $m_{ij}$ are called multipliers.

Replace rows 2 and 3 (new pivot $a_{22} = 3$, $m_{32} = 1$):

$$\left(\begin{array}{ccc|c}
2 & 4 & -6 & -4 \\
0 & 3 & 6 & 12 \\
0 & 3 & 15 & 21
\end{array}\right)$$

Replace row 3:

$$\left(\begin{array}{ccc|c}
2 & 4 & -6 & -4 \\
0 & 3 & 6 & 12 \\
0 & 0 & 9 & 9
\end{array}\right)$$

The system matrix is now upper triangular.
Solving a system of equations

With an upper triangular matrix, solve by back substitution. To solve a system:
1. Perform Gaussian elimination
2. Perform back substitution

In the previous example:
$$x_3 = 9/9 = 1, \quad
x_2 = (12 - 6 \cdot 1)/3 = 2, \quad
x_1 = (-4 - 4 \cdot 2 + 6 \cdot 1)/2 = -3$$

Pivoting: If during the process a pivot is zero, exchange rows. If this is not possible because all elements below the pivot are also zero, the system is singular.
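The two-step procedure can be sketched end to end in Python on the worked example (function name and data layout are my own; no pivoting, matching the slides):

```python
def gauss_solve(A, b):
    """Gaussian elimination (no pivoting) followed by back
    substitution. A: list of row lists, b: list; inputs are copied."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    # elimination: zero out column j below the pivot a_jj
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]          # multiplier m_ij
            b[i] -= m * b[j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    # back substitution on the resulting upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# the worked example from this lecture
A = [[2.0, 4.0, -6.0], [1.0, 5.0, 3.0], [3.0, 9.0, 6.0]]
b = [-4.0, 10.0, 15.0]
x = gauss_solve(A, b)   # -> [-3.0, 2.0, 1.0]
```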
Operation count for Gaussian elimination

Useful identity: $\sum_{k=1}^{N} k^2 = \frac{N(N+1)(2N+1)}{6}$

for j = 1:n-1
    for i = j+1:n
        m = a(i,j)/a(j,j);
        b(i) = b(i) - m*b(j);
        for k = j+1:n
            a(i,k) = a(i,k) - m*a(j,k);
        end
    end
end
Number of FLOPs for the elimination step:
$$\sum_{j=1}^{n-1} (n-j)\left(3 + 2(n-j)\right) = \frac{2}{3}n^3 + \frac{1}{2}n^2 - \frac{7}{6}n$$
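This formula can be checked by counting operations in the elimination loop directly (a sketch of my own: each (i, j) pair costs 1 division for the multiplier, 2 FLOPs updating b(i), and 2(n−j) FLOPs in the inner k-loop):

```python
def elimination_flops(n):
    """Count FLOPs performed by the Gaussian elimination loop."""
    flops = 0
    for j in range(1, n):              # j = 1 .. n-1 (1-indexed)
        for i in range(j + 1, n + 1):  # i = j+1 .. n
            flops += 3 + 2 * (n - j)   # 1 div + 2 for b + inner loop
    return flops

# compare against (2/3)n^3 + (1/2)n^2 - (7/6)n = (4n^3 + 3n^2 - 7n)/6
for n in (2, 5, 10, 20):
    assert elimination_flops(n) == (4*n**3 + 3*n**2 - 7*n) // 6
```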
Properties of vector norms
1. $\|x\| > 0$ if $x \neq 0$
2. $\|cx\| = |c|\,\|x\|$ for any scalar c
3. $\|x + y\| \le \|x\| + \|y\|$
Vector norms

The p-norm of a vector:
$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$$

1-norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$
2-norm (Euclidean): $\|x\|_2 = \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2}$
∞-norm: $\|x\|_\infty = \max_i |x_i|$

$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2 \le n\,\|x\|_\infty$$
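The three norms and the inequality chain can be checked numerically (a sketch; the vector is an arbitrary example of my own):

```python
import math

def vector_norms(x):
    """Return the 1-, 2-, and infinity-norms of a vector x."""
    one = sum(abs(v) for v in x)
    two = math.sqrt(sum(v * v for v in x))
    inf = max(abs(v) for v in x)
    return one, two, inf

x = [3.0, -4.0, 1.0]
n = len(x)
one, two, inf = vector_norms(x)   # 8.0, sqrt(26) ~ 5.10, 4.0
# the chain: inf <= two <= one <= sqrt(n)*two <= n*inf
assert inf <= two <= one <= math.sqrt(n) * two <= n * inf
```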
Matrix norms induced by vector norms

$$\|A\| = \max_{\|x\| = 1} \|Ax\|$$

Properties:
1. $\|A\| > 0$ if $A \neq 0$
2. $\|cA\| = |c|\,\|A\|$ for any scalar c
3. $\|A + B\| \le \|A\| + \|B\|$
4. $\|AB\| \le \|A\|\,\|B\|$
Norms of some interesting matrices

$$\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}| \;\text{(max column sum)}, \qquad
\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}| \;\text{(max row sum)}$$

$\|0\| = 0$
$\|I\| = 1$
$\|P\| = 1$ if P is a permutation of the rows of I
$\|D\| = \max_i |d_i|$ for a diagonal matrix D with entries $d_i$
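The column-sum and row-sum formulas are easy to implement directly (a sketch with an arbitrary example matrix of my own):

```python
def one_norm(A):
    """||A||_1: maximum absolute column sum."""
    rows, cols = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(rows)) for j in range(cols))

def inf_norm(A):
    """||A||_inf: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[1.0, -2.0], [3.0, 4.0]]
# column sums |1|+|3| = 4 and |-2|+|4| = 6  ->  one_norm(A) = 6
# row sums 3 and 7                          ->  inf_norm(A) = 7
I = [[1.0, 0.0], [0.0, 1.0]]   # identity: both norms equal 1
```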
Ill conditioning

Ax = b is ill conditioned if small perturbations in the coefficients of A or b produce large changes in x.

$$\begin{aligned}
34x_1 + 55x_2 &= 21 \\
55x_1 + 89x_2 &= 34
\end{aligned}
\qquad x_1 = -1,\; x_2 = 1$$

Add a 1% error to entry $a_{21}$:

$$\begin{aligned}
34x_1 + 55x_2 &= 21 \\
55.55x_1 + 89x_2 &= 34
\end{aligned}
\qquad \hat{x}_1 = 0.03419,\; \hat{x}_2 = 0.3607$$
The relative error (in the max-norm) is $|-1 - 0.034188| / 1 = 103.4\%$.

$$\begin{aligned}
34(0.034188) + 55(0.36068) &= 21 \\
55(0.034188) + 89(0.36068) &= 33.9812
\end{aligned}$$

The relative residual, $\|b - A\hat{x}\| / \|b\|$, is about 0.05%.

The system is ill conditioned (A is ill conditioned). A small residual does not imply a small error!
Error magnification factor

The error magnification factor for Ax = b is the ratio between the relative error norm and the relative residual norm:

$$\text{emf} = \frac{\|x - \hat{x}\| / \|x\|}{\|b - A\hat{x}\| / \|b\|}$$

It tells us how much the error is affected by the size of the residual. The maximum possible error magnification factor, over all right-hand sides b, is called the condition number.
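For the ill-conditioned system above, the emf can be computed directly in the max-norm (a sketch of my own; the perturbed solution is recomputed by Cramer's rule rather than taken from the rounded slide values):

```python
A = [[34.0, 55.0], [55.0, 89.0]]
b = [21.0, 34.0]
x_true = [-1.0, 1.0]

# solve the perturbed system (a21 = 55.55) by Cramer's rule
Ap = [[34.0, 55.0], [55.55, 89.0]]
d = Ap[0][0] * Ap[1][1] - Ap[0][1] * Ap[1][0]
x_hat = [(b[0] * Ap[1][1] - Ap[0][1] * b[1]) / d,
         (Ap[0][0] * b[1] - b[0] * Ap[1][0]) / d]

def inf_vec(v):
    return max(abs(t) for t in v)

# residual of x_hat in the ORIGINAL system
r = [b[i] - sum(A[i][j] * x_hat[j] for j in range(2)) for i in range(2)]
rel_err = inf_vec([x_true[i] - x_hat[i] for i in range(2)]) / inf_vec(x_true)
rel_res = inf_vec(r) / inf_vec(b)
emf = rel_err / rel_res   # large (order of 10^3), far above 1
```

Note that the emf here is large but still below the condition number of A, consistent with the condition number being the maximum possible magnification.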
Condition number

Condition number of a matrix relative to a norm $\|\cdot\|_p$:
$$\kappa_p(A) = \|A\|_p \, \|A^{-1}\|_p$$

If $\kappa(A) \approx 10^k$, about k significant digits are lost in solving Ax = b. In the previous example k = 4, so we need an input with at least 5 correct significant digits.

- $\kappa(A) \ge 1$
- $\kappa(I) = 1$
- $\kappa(P) = 1$ if P is a permutation of the rows of I
- $\kappa(cA) = \kappa(A)$ for any nonzero scalar c
- $\kappa(D) = \dfrac{\max_i |d_i|}{\min_i |d_i|}$ for a diagonal matrix D
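For the 2 × 2 example, the ∞-norm condition number can be computed by hand since det(A) = 34·89 − 55² = 1, making the inverse exact (a sketch of my own):

```python
def inf_norm(A):
    """||A||_inf: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[34.0, 55.0], [55.0, 89.0]]
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det = 1.0
A_inv = [[ A[1][1] / d, -A[0][1] / d],
         [-A[1][0] / d,  A[0][0] / d]]      # [[89, -55], [-55, 34]]

kappa = inf_norm(A) * inf_norm(A_inv)       # 144 * 144 = 20736 ~ 10^4
```

κ ≈ 2·10⁴ matches the slide's k = 4: about four significant digits are lost.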
Swamping

Exact solution of
$$\begin{pmatrix} 10^{-20} & 1 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} =
\begin{pmatrix} 1 \\ 4 \end{pmatrix}:
\qquad y = \frac{1 - 4 \cdot 10^{-20}}{1 - 2 \cdot 10^{-20}} \approx 1,
\quad x = \frac{2}{1 - 2 \cdot 10^{-20}} \approx 2$$

With double precision (no pivoting): y = 1, x = 0.

After a row exchange,
$$\begin{pmatrix} 1 & 2 \\ 10^{-20} & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} =
\begin{pmatrix} 4 \\ 1 \end{pmatrix}:
\qquad y = 1,\; x = 2$$

Multipliers should be small (pivots should be large).
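The swamping effect is reproducible in IEEE double precision (a sketch of my own): the huge multiplier 10²⁰ swamps the entries of row 2, and the computed x collapses to 0, while exchanging the rows first gives the correct answer.

```python
def solve2_no_pivot(A, b):
    """2x2 Gaussian elimination WITHOUT pivoting, in doubles."""
    m = A[1][0] / A[0][0]            # multiplier
    a22 = A[1][1] - m * A[0][1]      # swamped when m is huge
    b2 = b[1] - m * b[0]
    y = b2 / a22
    x = (b[0] - A[0][1] * y) / A[0][0]
    return x, y

A = [[1e-20, 1.0], [1.0, 2.0]]
b = [1.0, 4.0]

x, y = solve2_no_pivot(A, b)                 # swamping: x = 0.0, y = 1.0
# exchange the rows so the large pivot comes first
xp, yp = solve2_no_pivot([A[1], A[0]], [b[1], b[0]])   # x = 2.0, y = 1.0
```

With m = 10²⁰, the terms 2 and 4 in row 2 are below one ulp of 10²⁰ and vanish in the subtraction; after pivoting the multiplier is 10⁻²⁰ and nothing significant is lost.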