Linear System of Equations


Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situations are to be simulated. Example: computing the forces in a TRUSS. Equilibrium at each joint gives one equation per force component, with direction-cosine coefficients such as 0.7071 and 0.6585 multiplying the member forces F1, F2, ..., F14: 14 equations with 14 unknowns. Methods for numerically solving ordinary and partial differential equations also depend on linear systems of equations. Number of unknowns = number of equations, i.e., an n*n system.

Linear System of Equations

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

or, in matrix form, A x = b, where A is the n*n coefficient matrix, x is the column vector of unknowns, and b is the column vector of right-hand sides.

Matrix Notation A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]). An individual entry of a matrix is an element (example: a_ij, the element in row i and column j).

Matrix Notation A horizontal set of elements is called a row and a vertical set of elements is called a column. The first subscript of an element indicates the row while the second indicates the column. The size of a matrix is given as m rows by n columns, or simply m by n (or m x n). 1 x n matrices are row vectors. m x 1 matrices are column vectors.

Special Matrices Matrices where m = n are called square matrices. There are a number of special forms of square matrices:

Symmetric: a_ij = a_ji for all i and j
Diagonal: nonzero elements only on the main diagonal (a_11, a_22, ..., a_nn)
Identity: a diagonal matrix whose diagonal elements are all 1
Upper triangular: all elements below the main diagonal are zero
Lower triangular: all elements above the main diagonal are zero
Banded: all nonzero elements lie in a band centered on the main diagonal (e.g., a tridiagonal matrix)

Matrix Operations Two matrices are considered equal if and only if every element in the first matrix is equal to the corresponding element in the second. This means the two matrices must be the same size. A = B if a_ij = b_ij for all i and j. Matrix addition and subtraction are performed by adding or subtracting the corresponding elements; this requires that the two matrices be the same size. C = A + B means c_ij = a_ij + b_ij. Multiplication by a scalar is performed by multiplying each element by the same scalar: D = gA means d_ij = g*a_ij.

Matrix Multiplication The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated using:

c_ij = sum over k = 1..n of a_ik * b_kj
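As a quick check of this formula, here is a minimal MATLAB sketch (the matrix values are illustrative) that forms the product element by element and can be compared against MATLAB's built-in multiplication:

A = [1 2; 3 4];                 % illustrative 2x2 matrices
B = [5 6; 7 8];
[m,p] = size(A); [p2,n] = size(B);
if p ~= p2, error('inner dimensions must agree'); end
C = zeros(m,n);
for i = 1:m
    for j = 1:n
        for k = 1:p
            C(i,j) = C(i,j) + A(i,k)*B(k,j);   % accumulate inner product of row i of A and column j of B
        end
    end
end
% C now equals the built-in product A*B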

Matrix Inverse and Transpose The inverse of a square, nonsingular matrix [A] is the matrix which, when multiplied by [A], yields the identity matrix: [A][A]^-1 = [A]^-1[A] = [I]. The transpose of a matrix involves transforming its rows into columns and its columns into rows: (A^T)_ij = a_ji.

Solving With MATLAB MATLAB provides two direct ways to solve systems of linear algebraic equations [A]{x}={b}: left division, x = A\b, and matrix inversion, x = inv(A)*b. Note: the matrix inverse is less efficient than left division and also only works for square, nonsingular systems. A short comparison follows.
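For example, using the same 3*3 system that appears in the Gauss elimination code below:

A = [10 -7 0; -3 2 6; 5 -1 5];
b = [7; 4; 6];
x1 = A\b;        % left division: Gauss elimination with partial pivoting
x2 = inv(A)*b;   % explicit inverse: more flops, generally less accurate
norm(x1-x2)      % tiny for this well-conditioned system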

Gauss Elimination method Forward elimination Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. Continue this process with the second row to remove the second coefficient from the third row and beyond. Stop when an upper triangular matrix remains. Back substitution Starting with the last row, solve for the unknown, then substitute that value into the next highest row. Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.
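For instance, for the small system

2 x1 +   x2 = 5
4 x1 + 3 x2 = 11

forward elimination subtracts 2 times row 1 from row 2 (multiplier = 4/2 = 2), leaving the upper triangular system

2 x1 + x2 = 5
       x2 = 1

and back substitution then gives x2 = 1, followed by x1 = (5 - 1)/2 = 2.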

Gauss Elimination method Number of flops for the Gauss elimination method:

n      Total flops   Forward elimination   Back substitution   2n^3/3       % due to elimination
10     805           705                   100                 667          87.58%
100    681550        671550                10000               666667       98.53%
1000   6.68*10^8     6.67*10^8             10^6                6.67*10^8    99.85%

As the system gets larger, the cost of computing increases rapidly: the flop count grows by nearly three orders of magnitude for every order-of-magnitude increase in the number of equations. Most of the effort is incurred in the forward elimination step.

Gauss Elimination method

clc; close all; format long e;
% A = coefficient matrix
% b = right-hand-side vector
% x = solution vector
A=[10 -7 0; -3 2 6; 5 -1 5];
b=[7; 4; 6];
[m,n]=size(A);
if m~=n
    error('Matrix A must be square');
end
Nb=n+1;
Aug=[A b];   % augmented matrix, combines A and b
%----forward elimination-------
for k=1:n-1                      % from row 1 to n-1
    if abs(Aug(k,k))<eps
        error('Zero pivot encountered')
    end
    for j=k+1:n
        multiplier=Aug(j,k)/Aug(k,k);
        Aug(j,k:Nb)=Aug(j,k:Nb)-multiplier*Aug(k,k:Nb);
    end
end

Gauss Elimination method

%----Back substitution---------
x=zeros(n,1);
x(n)=Aug(n,Nb)/Aug(n,n);
for k=n-1:-1:1
    for j=k+1:n
        Aug(k,Nb)=Aug(k,Nb)-Aug(k,j)*x(j);   % computes b(k)-sum(a_kj*x_j)
    end
    x(k)=Aug(k,Nb)/Aug(k,k);
end

Cost of computing Example 1: Estimate the time required to carry out back substitution on a system of 500 equations in 500 unknowns, on a computer where elimination takes 1 second.

For elimination, approx. flops: fe = 2n^3/3 = 2*500^3/3; time needed: te = 1 sec.
For back substitution, approx. flops: fb = n^2 = 500^2; time needed: tb = ?
tb/te = fb/fe, so tb = te*fb/fe = 3/(2*500) = 0.003 sec.
Back substitution time = 0.003 sec. Total computation time = 1.003 sec.

Cost of computing Example 2: Assume that your computer can solve a 1000*1000 linear system Ax=b in 1 second by the Gauss elimination method. Estimate the time required to solve four systems of 2500 equations in 2500 unknowns with the same coefficient matrix, using the LU factorization method.

For Gauss elimination, approx. flops: fg = 2n^3/3 = 2*1000^3/3; time needed: tg = 1 sec.
For LU, approx. flops: fl = 2n^3/3 + 2kn^2 = 2*2500^3/3 + 2*4*2500^2 (factor once, then k = 4 forward/back solves).
Time needed: tl = tg*fl/fg = (2*2500^3/3 + 2*4*2500^2)/(2*1000^3/3) = 16 sec (approx.).
If we do the same 4 problems by Gauss elimination: tg = 4*(2*2500^3/3)/(2*1000^3/3) = 62.5 sec.
Time saved = 46.5 sec, about 74%!!!
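These estimates are easy to reproduce in MATLAB (a sketch; the flop counts are the approximations used above):

n   = 2500;                 % size of each of the four systems
fg  = 2*1000^3/3;           % flops the reference machine performs in 1 second
fl  = 2*n^3/3 + 2*4*n^2;    % LU: factor once, then 4 forward/back solves
tl  = fl/fg                 % approx. 16 seconds
tge = 4*(2*n^3/3)/fg        % four separate eliminations: 62.5 seconds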

Forward and backward error Definition: Let x_c be an approximate solution of the linear system Ax=b. The residual is the vector r = b - A*x_c.

The backward error = ||b - A*x_c||
The forward error = ||x_exact - x_c||
(the infinity norm is used here)

Error magnification factor = relative forward error / relative backward error
                           = ( ||x_exact - x_c|| / ||x_exact|| ) / ( ||b - A*x_c|| / ||b|| )
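A minimal MATLAB sketch of these definitions, using the ill-conditioned system examined a couple of slides below (the approximate solution x_c is the one discussed there):

A  = [1 1; 1.0001 1];  b = [2; 2.0001];
xe = [1; 1];                        % exact solution
xc = [-1; 3.0001];                  % approximate solution with a small residual
r  = b - A*xc;                      % residual vector
backward = norm(r,inf)              % backward error = 0.0001
forward  = norm(xe-xc,inf)          % forward error  = 2.0001
emf = (forward/norm(xe,inf))/(backward/norm(b,inf))   % approx. 40004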

Sources of error There are two major sources of error in Gauss elimination as we have discussed so far: ill-conditioning and swamping.

Ill-conditioning: sensitivity of the solution to the input data.

1. First we examine the effect of a small change in the coefficients. Consider, for example, the system Ax = b:

x1 + 0.99 x2 = 1.99
0.99 x1 + 0.98 x2 = 1.97

Exact solution: x_exact = (1, 1). Now we make a small change in the coefficient matrix A, replacing the coefficient 0.98 by 0.99. The new solution is

x_C = (2, -0.0101) (approx.)

A large change in the solution!

Sources of error: ill-conditioning 2. Next we examine the effect of a small change in b for the same coefficient matrix A. Feed the Ax=b solver two nearby right-hand sides:

b  = (1.99, 1.97)  -->  solution x_C  = (1, 1)
b' = (1.99, 1.98)  -->  solution x_C' = (100, -99)

Here we see that even though the inputs are close together, we get very distinct outputs. Therefore, for an ill-conditioned system, a small change in the input gives a large change in the output. How can we express this characteristic mathematically?

Sources of error: ill-conditioning Let us consider another example of an ill-conditioned system:

x1 + x2 = 2
1.0001 x1 + x2 = 2.0001

x_exact = (1, 1); a computed solution is x_C = (-1, 3.0001).

Backward error vector: r = b - A*x_C = (-0.0001, 0.0001)
Forward error vector: x_exact - x_C = (2, -2.0001)

||r|| = ||b - A*x_C|| = 0.0001;  ||x_exact - x_C|| = 2.0001

Relative backward error = 0.0001/2.0001 = 0.005%
Relative forward error = 2.0001/1 = 200.01%

For an ill-conditioned system, the error magnification factor is very high:

Error magnification factor = 2.0001/(0.0001/2.0001) = 40004.0001

Condition number Another quantity, known as the condition number, is also used to show that a system is ill-conditioned.

Definition: The condition number of a square matrix A, cond(A), is the maximum possible error magnification factor for Ax=b, over all right-hand sides b.

Theorem: The condition number of an n*n matrix A is cond(A) = ||A|| * ||A^-1||.

For an ill-conditioned system, the condition number is high. If the coefficient matrix A has t-digit precision, and cond(A) = 10^k, the solution vector x is accurate up to only (t-k) digits.

Example: the elements of a coefficient matrix A are accurate up to 5 digits, and cond(A) = 10^4. So the solution is accurate to (5-4) = 1 digit only!!!
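In MATLAB, cond computes this directly (a sketch, using the ill-conditioned example from the previous slide):

A = [1 1; 1.0001 1];
c = cond(A)       % approx. 4.0e+04, so k is about 4
% rule of thumb: with t = 16 digits of double precision, expect
% roughly 16 - 4 = 12 correct digits in the computed solution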

Sources of error: swamping A second significant source of error in the Gauss elimination method is swamping. It is much easier to fix. The Gauss elimination method discussed so far works well as long as the diagonal elements of the coefficient matrix A (the pivots) are nonzero. Note: the computation of the multipliers and the back substitution require divisions by the pivots. Consequently, the algorithm cannot be carried out if any of the pivots are zero or near zero. Let us consider our original example again,

10 x1 - 7 x2        = 7
-3 x1 + 2 x2 + 6 x3 = 4
 5 x1 -   x2 + 5 x3 = 6

and perturb it slightly:

10 x1 - 7 x2            = 7
-3 x1 + 2.099 x2 + 6 x3 = 3.901
 5 x1 -   x2 + 5 x3     = 6

Both systems have the exact solution x_exact = (0, -1, 1).

Sources of error: swamping Let us assume that the solution is to be computed on a hypothetical machine that does decimal floating-point arithmetic with five significant digits. The first step of elimination (R2 <- R2 + 0.3 R1, R3 <- R3 - 0.5 R1) produces

10 x1 - 7 x2          = 7
     -0.001 x2 + 6 x3 = 6.001
      2.5 x2   + 5 x3 = 2.5

The new pivot, -0.001, is quite small compared with the other elements.

Sources of error: swamping Without interchanging the rows, the multiplier is 2.5/(-0.001) = -2500, and R3 <- R3 + 2500 R2 gives, in five-digit arithmetic,

10 x1 - 7 x2          = 7
     -0.001 x2 + 6 x3 = 6.001
           15005 x3   = 15004

Finally, back substitution produces x_C = (-0.35, -1.5, 0.99993). But x_exact = (0, -1, 1)!

Sources of error: swamping So, where did things go wrong? There is no accumulation of rounding error caused by doing thousands of arithmetic operations, and the matrix is NOT close to singular. Actually, the difficulty comes from choosing a small pivot at the second step of the elimination. As a result, the multiplier is -2500, and the final equation involves coefficients thousands of times larger than those in the original problem. That means the original equation is overpowered/swamped. If the multipliers are all less than or equal to 1 in magnitude, then the computed solution can be proved to be satisfactory. Keeping the multipliers less than one in absolute value can be ensured by a process known as Pivoting.

Sources of error: Pivoting If we allow (partial) pivoting, i.e., interchange R2 and R3, we have

10 x1 - 7 x2          = 7
      2.5 x2 + 5 x3   = 2.5
     -0.001 x2 + 6 x3 = 6.001

The multiplier becomes -0.001/2.5 = -0.0004, and the computed solution is x_C = (0, -1, 1): close to the exact solution!!!

Pivoting Problems arise with Gauss elimination if a coefficient along the diagonal (the pivot element) is 0 (problem: division by 0) or close to 0 (problem: round-off error). One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element becomes the pivot element. This is called partial pivoting. If the columns to the right of the pivot element are also searched and columns switched, this is called complete pivoting.

Gauss Elimination with partial pivoting

clc; close all; format long e;
% A = coefficient matrix
% b = right-hand-side vector
% x = solution vector
A=[10 -7 0; -3 2 6; 5 -1 5];
b=[7; 4; 6];
[m,n]=size(A);
if m~=n, error('Matrix A must be square'); end
Nb=n+1;
Aug=[A b];   % augmented matrix, combines A and b
%----forward elimination-------
for k=1:n-1                      % from row 1 to n-1
    %-----------partial pivoting------------
    [big, big_pos]=max(abs(Aug(k:n,k)));
    pivot_pos=big_pos+k-1;
    if pivot_pos~=k
        Aug([k pivot_pos],:)=Aug([pivot_pos k],:);
    end
    %---------------------------------------
    for j=k+1:n
        multiplier=Aug(j,k)/Aug(k,k);
        Aug(j,k:Nb)=Aug(j,k:Nb)-multiplier*Aug(k,k:Nb);
    end
end

Gauss Elimination method

%----Back substitution---------
x=zeros(n,1);
x(n)=Aug(n,Nb)/Aug(n,n);
for k=n-1:-1:1
    for j=k+1:n
        Aug(k,Nb)=Aug(k,Nb)-Aug(k,j)*x(j);   % computes b(k)-sum(a_kj*x_j)
    end
    x(k)=Aug(k,Nb)/Aug(k,k);
end

Scaling Scaling is the operation of adjusting the coefficients of a set of equations so that they are all of the same order of magnitude. Example (coefficients of order 10^5 next to coefficients of order 1):

2 x1 + 100000 x2 = 100000
  x1 +        x2 = 2          x_exact = (1.00002, 0.99998) (approx.)

If we do pivoting without scaling, round-off error may occur: no rows are interchanged (|2| > |1|), and with three-significant-digit arithmetic the computed solution is x_C = (0, 1), whose first component is completely wrong. But if you scale the system (divide each row by its maximum coefficient) and then do the pivoting,

R1/100000:  0.00002 x1 + x2 = 1
R2:               x1 + x2 = 2     (the rows are now interchanged)

the same arithmetic gives x_C = (1, 1), correct to the working precision.

Scaling So, whenever the coefficients in one column are widely different from those in another column, scaling is beneficial. When all values are of about the same order of magnitude, scaling should be avoided, as the additional round-off error incurred during the scaling operation itself may adversely affect the accuracy. A sketch of the row-scaling step follows.
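A sketch of row scaling in MATLAB (uses elementwise division with implicit expansion, available in R2016b and later; the data are illustrative):

A = [2 100000; 1 1];  b = [100000; 2];
s  = max(abs(A),[],2);    % largest absolute coefficient in each row
As = A./s;                % scale each row so its largest entry is 1
bs = b./s;
x  = As\bs                % partial pivoting now picks the right pivot row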

Tridiagonal systems

function [xout]=tridiagonal(a,b,c,d)
% a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand side
N=length(b);
%---forward elimination------------
for k=2:N
    multiplier=a(k)/b(k-1);
    b(k)=b(k)-multiplier*c(k-1);
    d(k)=d(k)-multiplier*d(k-1);
end
%-----back substitution----------
x=zeros(1,N);
x(N)=d(N)/b(N);
for k=N-1:-1:1
    x(k)=(d(k)-c(k)*x(k+1))/b(k);
end
xout=x.';
%---------------------------------

Main:
format long e; clc; clear all; close all;
a=[0 -1 -1 -1];
b=[2.04 2.04 2.04 2.04];
c=[-1 -1 -1 0];
d=[40.8 0.8 0.8 200.8];
[xout]=tridiagonal(a,b,c,d)

Cholesky factorization For a symmetric (positive definite) coefficient matrix: a_ij = a_ji and A = A^T. The algorithm is based on the fact that such a matrix can be decomposed as A = U^T U. Then

Ax = b  =>  U^T U x = b  =>  U^T c = b, U x = c

(i) Solve U^T c = b for c; (ii) solve U x = c for x.

MATLAB: U = chol(A)
U is an upper triangular matrix such that U^T * U = A.
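A minimal sketch of the two triangular solves (the SPD matrix is illustrative):

A = [4 2; 2 3];  b = [6; 5];   % symmetric positive definite system
U = chol(A);                   % upper triangular, U'*U = A
c = U'\b;                      % (i) forward substitution:  U' c = b
x = U\c                        % (ii) back substitution:    U x = c, here x = [1; 1]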

Matrix inversion For a square matrix A there is another matrix A^-1, called the inverse of A, for which AA^-1 = I = A^-1A. The inverse can be computed in a column-by-column fashion by generating solutions with the unit vectors as the right-hand-side constants. Example: for a 3*3 system, use

b1 = (1, 0, 0)^T: the solution of A x = b1 gives the first column of A^-1
b2 = (0, 1, 0)^T: the solution of A x = b2 gives the second column of A^-1
b3 = (0, 0, 1)^T: the solution of A x = b3 gives the third column of A^-1

Matrix inversion Finally, the solution vectors x1, x2, x3 placed side by side form the inverse: A^-1 = [x1 x2 x3]. Our original system Ax = b is then solved as x = A^-1 * b.

Number of flops: counting both multiplications and additions, the total is approximately 2n^3, i.e., about 3 times the flops of Gauss elimination (2n^3/3), with more round-off error: computationally more expensive.

Note: a singular matrix (det A = 0) has no inverse.
MATLAB: xc = inv(A)*b

MATLAB \ operator It is useful to know that the specific algorithm used by MATLAB when the \ operator is invoked depends on the structure of the coefficient matrix A. To determine the structure of A and select the appropriate algorithm, MATLAB follows this precedence:

1. If A is banded, then banded solvers are used (e.g., the Thomas algorithm).
2. If A is an upper or lower triangular matrix, then the system is solved by a back substitution algorithm for an upper triangular matrix, or by a forward substitution algorithm for a lower triangular matrix.
3. If A is symmetric and has real positive diagonal elements, then a Cholesky factorization is attempted first.
4. If none of the above criteria is fulfilled, then a general triangular factorization is computed by Gauss elimination with partial pivoting (lu).
5. If A is not square, i.e., rectangular, QR factorization is used.

Note: points 1-5 are simplified in the context of our course.
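For example, a triangular coefficient matrix is recognized and solved by substitution alone (values illustrative):

L = [2 0 0; 1 3 0; 4 5 6];
b = [2; 7; 26];
x = L\b        % forward substitution only: x = [1; 2; 2]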

Iterative methods Iterative methods use an initial guess and refine it at each step, converging to the solution vector.

Jacobi method

function [xout,iter_number]=jacobi(A,b,max_iter,tol)
N=length(b);
d=diag(A);                 % takes the main-diagonal elements only
x=zeros(N,1);              % initial guess vector
for k=1:max_iter
    x=(b-(A-diag(d))*x)./d;
    maxresidual=norm(b-A*x,inf);
    if maxresidual<tol
        break;
    end
end
iter_number=k;
xout=x;
%----------------------------------------------------

Main:
format long e; clc; clear all; close all;
A=[3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b=[7.85; -19.3; 71.4];
max_iter=100; tol=1e-5;
[xc,iter_number]=jacobi(A,b,max_iter,tol);

Gauss-Seidel and Jacobi methods: graphical depiction (figure not reproduced here)

Gauss Seidel method

function [xout,iter_number]=gaussseidel(A,b,max_iter,tol)
N=length(b);
d=diag(diag(A));           % diagonal part of A
x=zeros(N,1);              % initial guess vector
U=triu(A,1);               % above the main diagonal
L=tril(A,-1);              % below the main diagonal
for k=1:max_iter
    bl=b-U*x;
    for j=1:N
        x(j)=(bl(j)-L(j,:)*x)./d(j,j);   % uses already-updated components of x
    end
    maxresidual=norm(b-A*x,inf);
    if maxresidual<tol
        break;
    end
end
iter_number=k;
xout=x;
%---------------------------------------------------------

SOR method

function [xout,iter_number]=sor(A,b,relax,max_iter,tol)
N=length(b);
d=diag(diag(A));           % diagonal part of A
x=zeros(N,1);              % initial guess vector
U=triu(A,1);               % above the main diagonal
L=tril(A,-1);              % below the main diagonal
for k=1:max_iter
    bl=relax*(b-U*x)+(1-relax)*d*x;
    for j=1:N
        x(j)=(bl(j)-relax*L(j,:)*x)./d(j,j);
    end
    maxresidual=norm(b-A*x,inf);
    if maxresidual<tol
        break;
    end
end
iter_number=k;
xout=x;

Sparse matrix computations Consider a coefficient matrix A in which no row has more than 4 nonzero entries. Since fewer than 4n of the n^2 potential entries are nonzero, we may call this matrix sparse.

Sparse matrix computations We want to solve a system of equations defined by such a sparse A for n = 10^5. What are the options? Treating the coefficient matrix A as a full matrix means storing n^2 = 10^10 elements, each as a double-precision floating-point number. One double-precision floating-point number requires 8 bytes (64 bits) of storage. So 10^10 elements will require 8*10^10 bytes = 80 gigabytes (approx.) of RAM. Not only is size the enemy, but so is time. The number of flops required by Gauss elimination will be of order n^3 = 10^15. If a PC runs on the order of a few GHz (10^9 cycles per second), an upper bound on floating-point operations per second is around 10^8. Therefore, the time required for Gauss elimination = 10^15 / 10^8 = 10^7 seconds. Note: 1 year = 3*10^7 seconds.

Sparse matrix computations So, it is clear that the Gauss elimination method for this problem is not an overnight computation. On the other hand, one step of an iterative method will require 2*4n = 8*10^5 (approx.) operations. If the solution needs on the order of 100 iterations of the Jacobi method, the total is about 10^8 operations. The time required on a modern PC for the Jacobi method will then be about 1 second (approx.) or less!!!
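A sketch of how such a problem is handled in MATLAB's sparse format (sizes and values illustrative; this A is symmetric positive definite, so pcg applies):

n = 1e5;                                   % 100,000 unknowns
e = ones(n,1);
A = spdiags([-e 2.04*e -e], -1:1, n, n);   % tridiagonal: only about 3n nonzeros stored
b = ones(n,1);
x  = A\b;                                  % banded/sparse direct solver
xc = pcg(A, b, 1e-8, 200);                 % or an iterative Krylov method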

MATLAB functions for iterative methods

Function    Method
pcg         Preconditioned conjugate gradients method
bicg        Biconjugate gradients method
cgs         Conjugate gradients squared method
bicgstab    Biconjugate gradients stabilized method
gmres       Generalized minimum residual method

Syntax: the same for all of the functions. For example, xc = pcg(A,b,tol,maxit);

Rank Deficiency Kronecker-Capelli theorem: a system has a SOLUTION if and only if its coefficient matrix (denoted by A) and its augmented matrix (denoted by Ab) have the same rank.

Let R = rank of the matrix A and n = number of unknowns.
If R = n, the system has a unique solution (full-rank system).
If R < n, the system has an infinite number of solutions (rank-deficient system): we can express the solution in terms of (n-R) free variables, as in an under-determined system (number of independent equations < number of unknowns).

Rank Deficiency Example 1 (a rank-deficient, inconsistent system; entries illustrative):

A=[1 2 -1; 2 4 -2; 1 -1 2];   % row 2 = 2*row 1, so rank(A) = 2
b=[5; 7; 1];
Ab=[A b];                     % augmented matrix
RA=rank(A);
RAb=rank(Ab);
xc=A\b

Solution:
RA = 2, RAb = 3 (ranks differ: no solution exists)
Warning: Matrix is singular to working precision.
xc contains NaN/Inf entries.

Rank Deficiency Example 2 (the same system, now solved with the pseudoinverse as well):

A=[1 2 -1; 2 4 -2; 1 -1 2];
b=[5; 7; 1];
Ab=[A b];                     % augmented matrix
RA=rank(A);
RAb=rank(Ab);
xc=A\b
xc=pinv(A)*b

Solution:
RA = 2, RAb = 3
Warning: Matrix is singular to working precision.
xc = (NaN/Inf entries from A\b)
xc =
  1.600000000000000e+00
  1.266666666666667e+00
  3.333333333333333e-01

pinv(A)*b returns the minimum-norm least-squares solution, which exists even though the system has no exact solution.