Linear System of Equations


Linear System of Equations

Linear systems are perhaps the most widely applied numerical procedure when real-world situations are to be simulated. Example: computing the forces in a TRUSS. Writing the force-balance (equilibrium) equations at the joints gives one linear equation per equilibrium condition in the unknown member forces F1, F2, F3, ..., with the direction cosines of the members as coefficients: 14 equations with 14 unknowns. Methods for numerically solving ordinary and partial differential equations also depend on linear systems of equations. Number of unknowns = number of equations, i.e. an n×n system.

Linear System of Equations

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

In matrix form this is A x = b, where A is the n×n coefficient matrix, x is the column vector of unknowns, and b is the column vector of right-hand-side constants.

Matrix Notation

A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]). An individual entry of a matrix is an element (example: a_23).

Matrix Notation

A horizontal set of elements is called a row and a vertical set of elements is called a column. The first subscript of an element indicates the row, while the second indicates the column. The size of a matrix is given as m rows by n columns, or simply m by n (or m×n). 1×n matrices are row vectors. m×1 matrices are column vectors.

Special Matrices

Matrices where m = n are called square matrices. There are a number of special forms of square matrices:

Symmetric (a_ij = a_ji), e.g. A = [5 1 2; 1 3 7; 2 7 8]
Diagonal: A = [a11 0 0; 0 a22 0; 0 0 a33]
Identity: A = [1 0 0; 0 1 0; 0 0 1]
Upper triangular: A = [a11 a12 a13; 0 a22 a23; 0 0 a33]
Lower triangular: A = [a11 0 0; a21 a22 0; a31 a32 a33]
Banded: A = [a11 a12 0 0; a21 a22 a23 0; 0 a32 a33 a34; 0 0 a43 a44]

Matrix Operations

Two matrices are considered equal if and only if every element in the first matrix is equal to the corresponding element in the second. This means the two matrices must be the same size. A = B if a_ij = b_ij for all i and j.

Matrix addition and subtraction are performed by adding or subtracting the corresponding elements. This also requires that the two matrices be the same size. C = A + B means c_ij = a_ij + b_ij.

Multiplication by a scalar is performed by multiplying each element by the same scalar: D = gA means d_ij = g a_ij.

Matrix Multiplication

The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated using

c_ij = sum over k = 1..n of a_ik b_kj,

where n is the number of columns of [A], which must equal the number of rows of [B].
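In MATLAB this is simply C = A*B, but the formula can be spelled out as the classic triple loop. A minimal sketch with hypothetical values (not part of the original slides):

% Triple-loop matrix product from the definition c_ij = sum_k a_ik*b_kj,
% checked against MATLAB's built-in product.
A = [1 2; 3 4; 5 6];        % 3x2 example (illustrative values)
B = [7 8; 9 10];            % 2x2; inner dimensions match
[m, p] = size(A);
n = size(B, 2);
C = zeros(m, n);
for i = 1:m
    for j = 1:n
        for k = 1:p
            C(i,j) = C(i,j) + A(i,k)*B(k,j);
        end
    end
end
max(max(abs(C - A*B)))      % should be exactly 0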

Matrix Inverse and Transpose

The inverse of a square, nonsingular matrix [A] is the matrix which, when multiplied by [A], yields the identity matrix: [A][A]^-1 = [A]^-1[A] = [I].

The transpose of a matrix involves transforming its rows into columns and its columns into rows: (a_ij)^T = a_ji.

Gauss Elimination method

Forward elimination: Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. Continue this process with the second row to remove the second coefficient from the third row and beyond. Stop when an upper triangular matrix remains.

Back substitution: Starting with the last row, solve for the unknown, then substitute that value into the next highest row. Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.

Gauss Elimination method

clc; close all; format long e;
% A = coefficient matrix
% b = right-hand-side vector
% x = solution vector
A = [10 -7 0; -3 2 6; 5 -1 5];
b = [7; 4; 6];
[m, n] = size(A);
if m ~= n
    error('Matrix A must be square');
end
Nb = n + 1;
Aug = [A b];                % augmented matrix, combines A and b
%---- forward elimination -------
for k = 1:n-1               % pivot rows 1 to n-1
    if abs(Aug(k,k)) < eps
        error('Zero pivot encountered')
    end
    for j = k+1:n
        multiplier = Aug(j,k)/Aug(k,k);
        Aug(j,k:Nb) = Aug(j,k:Nb) - multiplier*Aug(k,k:Nb);
    end
end

Gauss Elimination method

%---- back substitution ---------
x = zeros(n,1);
x(n) = Aug(n,Nb)/Aug(n,n);
for k = n-1:-1:1
    for j = k+1:n
        Aug(k,Nb) = Aug(k,Nb) - Aug(k,j)*x(j);  % computes b(k) - sum(a_kj*x_j)
    end
    x(k) = Aug(k,Nb)/Aug(k,k);
end
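A quick sanity check, not in the original slides, is to compare the result against MATLAB's built-in solver and inspect the residual:

% Verify the elimination result above.
x_ref = A\b;                                    % built-in solver
fprintf('max|x - x_ref| = %g\n', max(abs(x - x_ref)));
fprintf('max|b - A*x|   = %g\n', max(abs(b - A*x)));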

Gauss Elimination method

The execution time of Gauss elimination depends on the number of floating-point operations (flops) performed. The flop count for an n×n system is:

Forward elimination: 2n³/3 + O(n²)
Back substitution: n² + O(n)
Total: 2n³/3 + O(n²)

Gauss Elimination method

Number of flops for the Gauss elimination method:

n    | Total flops | Forward elimination | Back substitution | 2n³/3    | % due to elimination
10   | 805         | 705                 | 100               | 667      | 87.58%
100  | 681,550     | 671,550             | 10⁴               | 666,667  | 98.53%
1000 | 6.68×10⁸    | 6.67×10⁸            | 10⁶               | 6.67×10⁸ | 99.85%

As the system gets larger, the cost of computing increases rapidly: the flop count grows by nearly three orders of magnitude for every order-of-magnitude increase in the number of equations. Most of the effort is incurred in the forward elimination step.
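The cubic growth is easy to observe empirically. An illustrative timing experiment, not from the slides (absolute times depend on your machine), using MATLAB's backslash on random systems:

% Solve random n-by-n systems and watch the roughly cubic growth.
for n = [500 1000 2000 4000]
    A = rand(n);  b = rand(n,1);
    t = tic;  x = A\b;  fprintf('n = %4d: %8.3f s\n', n, toc(t));
end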

Cost of computing

Example 1: Estimate the time required to carry out back substitution on a system of 500 equations in 500 unknowns, on a computer where elimination takes 1 second.

For elimination, approx. flops: fe = 2n³/3 = 2*500³/3; time needed: te = 1 sec.
For back substitution, approx. flops: fb = n² = 500²; time needed: tb = ?

tb/te = fb/fe, so tb = te*fb/fe = 500²/(2*500³/3) = 3/(2*500) = 0.003 sec.

Back substitution time = 0.003 sec. Total computation time = 1.003 sec.

Cost of computing

Example 2: Assume that your computer can solve a 200*200 linear system Ax = b in 1 second by the Gauss elimination method. Estimate the time required to solve four systems of 500 equations in 500 unknowns with the same coefficient matrix, using the LU factorization method.

For Gauss elimination, approx. flops: fg = 2n³/3 = 2*200³/3; time needed: tg = 1 sec.
For LU, approx. flops: fl = 2n³/3 + 2kn² = 2*500³/3 + 2*4*500² (one factorization plus k = 4 substitution passes); time needed: tl = tg*fl/fg = (2*500³/3 + 2*4*500²)/(2*200³/3) = 16 sec.

If we do the same 4 problems by Gauss elimination: tg = 4*(2*500³/3)/(2*200³/3) = 62.5 sec.

Time saved = 46.5 sec!!!

Forward and backward error

Definition: Let x_c be an approximate solution of the linear system Ax = b, e.g. obtained by Gaussian elimination, and let x_exact denote the exact solution. The residual is the vector r = b - A x_c.

The backward error is ||b - A x_c||, the norm of the residual.
The forward error is ||x_exact - x_c||.

Error magnification factor = relative forward error / relative backward error
                           = (||x_exact - x_c|| / ||x_exact||) / (||b - A x_c|| / ||b||)
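These definitions translate directly into MATLAB. A sketch using the infinity norm (the norm choice is an assumption; the slides do not fix one) and the 2x2 example discussed two slides below:

% Backward error, forward error, and error magnification factor.
A = [1 1; 1.0001 1];  b = [2; 2.0001];
x_exact = [1; 1];  xc = [-1; 3.0001];        % approximate solution
r = b - A*xc;                                % residual
backward = norm(r, inf);                     % backward error
forward  = norm(x_exact - xc, inf);          % forward error
emf = (forward/norm(x_exact, inf)) / (backward/norm(b, inf))
                                             % magnification factor, ~40004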

Sources of error

There are two major sources of error in Gauss elimination as we have discussed so far:

ill-conditioning
swamping

Ill-conditioning is the sensitivity of the solution to the input data.

1. First we examine the effect of a small change in the coefficients. Consider a 2x2 system Ax = b whose exact solution x_exact is known. Now we make a small change in the coefficient matrix A, altering a single coefficient from 0.99 to 0.98, a change of about one percent. The new solution x_C is far from x_exact: a large change in the solution.

Sources of error: ill-conditioning

2. Next we examine the effect of a small change in b, for the same coefficient matrix A. Take two nearby right-hand sides b1 and b2, with entries differing only by a few percent, and feed them to an Ax = b solver:

b1 -> solver -> x_C1
b2 -> solver -> x_C2

Here we see that even though the inputs are close together, we get very distinct outputs. Therefore, for an ill-conditioned system, a small change in the input gives a large change in the output. How can we express this characteristic mathematically?

Sources of error: ill-conditioning

Let us consider another example of an ill-conditioned system:

x1 + x2 = 2
1.0001 x1 + x2 = 2.0001

with exact solution x_exact = (1, 1). Suppose the computed solution is x_C = (-1, 3.0001).

Backward error vector: r = b - A x_C = (-0.0001, 0.0001), so ||r|| = 0.0001.
Forward error vector: x_exact - x_C = (2, -2.0001), so ||x_exact - x_C|| = 2.0001.

Relative backward error = 0.0001/2.0001 = 0.005% (approx.).
Relative forward error = 2.0001/1 = 200.01% (approx.).

Error magnification factor = (2.0001/1)/(0.0001/2.0001) = 40004.0001.

For an ill-conditioned system, the error magnification factor is very high.

Condition number

Another quantity, the condition number, is also used to show that a system is ill-conditioned.

Definition: The condition number of a square matrix A, cond(A), is the maximum possible error magnification factor for solving Ax = b, over all right-hand sides b.

Theorem: The condition number of an n*n matrix A is cond(A) = ||A|| * ||A⁻¹||.

For an ill-conditioned system, the condition number is high. If the coefficient matrix A has t-digit precision and cond(A) = 10^k, the solution vector x is accurate only up to (t - k) digits.

Example: the elements of a coefficient matrix A are accurate up to 5 digits, and cond(A) = 10⁴. So the solution is accurate to (5 - 4) = 1 digit only!!!
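The theorem maps directly onto MATLAB's cond function. For the ill-conditioned 2x2 example above:

A = [1 1; 1.0001 1];
cond(A, inf)     % ||A||_inf * ||inv(A)||_inf, about 4.0004e4
cond(A)          % default 2-norm version, same order of magnitude

Note how the infinity-norm condition number, about 40004, matches the maximum error magnification factor found on the previous slide.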

Sources of error: swamping

A second significant source of error in the Gauss elimination method is swamping. It is much easier to fix.

The Gauss elimination method discussed so far works well as long as the diagonal elements of the coefficient matrix A (the pivots) are non-zero. Note: the computation of the multipliers and the back substitution require divisions by the pivots. Consequently, the algorithm cannot be carried out if any of the pivots are zero or near zero.

Let us consider our original example again,

10 x1 - 7 x2        = 7
-3 x1 + 2 x2 + 6 x3 = 4
 5 x1 -   x2 + 5 x3 = 6

and a slightly modified version,

10 x1 -     7 x2        = 7
-3 x1 + 2.099 x2 + 6 x3 = 3.901
 5 x1 -       x2 + 5 x3 = 6

whose exact solution is still x_exact = (0, -1, 1).

Sources of error: swamping

Let us assume that the solution is to be computed on a hypothetical machine that does decimal floating-point arithmetic with five significant digits. The first step of elimination (R2 <- R2 + 0.3 R1, R3 <- R3 - 0.5 R1) produces

10 x1 - 7 x2            = 7
       -0.001 x2 + 6 x3 = 6.001
          2.5 x2 + 5 x3 = 2.5

The new second pivot, -0.001, is quite small compared with the other elements.

Sources of error: swamping

Without interchanging the rows, the multiplier for the third row is 2.5/(-0.001) = -2500, and the step R3 <- R3 + 2500 R2 gives

10 x1 - 7 x2           = 7
       -0.001 x2 + 6 x3 = 6.001
             15005 x3   = 15005

In five-digit arithmetic, however, the right-hand side of the last equation rounds differently from its coefficient, and back substitution yields

x_C = (-0.35, -1.5, 0.99993)

But the exact solution is x_exact = (0, -1, 1).

Sources of error: swamping

So, where did things go wrong? There is no accumulation of rounding error caused by doing thousands of arithmetic operations, and the matrix is NOT close to singular.

The difficulty comes from choosing a small pivot at the second step of the elimination. As a result, the multiplier is -2500, and the final equation involves coefficients a few thousand times larger than those in the original problem: the equation is overpowered, or swamped.

If the multipliers are all less than or equal to 1 in magnitude, the computed solution can be proved to be satisfactory. Keeping the multipliers less than one in absolute value can be ensured by a process known as pivoting.

Sources of error: pivoting

If we allow (partial) pivoting, i.e. interchange R2 and R3 before the second elimination step, the multiplier becomes -0.001/2.5 = -0.0004 and the system reduces to

10 x1 - 7 x2          = 7
        2.5 x2 + 5 x3 = 2.5
             6.002 x3 = 6.002

giving x_C = (0, -1, 1). The solution is close to the exact one!!!

Ref: http://www.mathworks.com/moler/chapters.html

Pivoting

Problems arise with Gauss elimination if a coefficient along the diagonal (the pivot element) is 0 (problem: division by 0) or close to 0 (problem: round-off error).

One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element becomes the pivot element. This is called partial pivoting.

If, in addition, the columns to the right of the pivot element are searched and columns are switched as well, this is called complete pivoting.

Gauss Elimination with partial pivoting

clc; close all; format long e;
% A = coefficient matrix
% b = right-hand-side vector
% x = solution vector
A = [10 -7 0; -3 2 6; 5 -1 5];
b = [7; 4; 6];
[m, n] = size(A);
if m ~= n, error('Matrix A must be square'); end
Nb = n + 1;
Aug = [A b];                % augmented matrix, combines A and b
%---- forward elimination -------
for k = 1:n-1               % pivot rows 1 to n-1
    %----------- partial pivoting ------------
    [big, big_pos] = max(abs(Aug(k:n,k)));
    pivot_pos = big_pos + k - 1;
    if pivot_pos ~= k
        Aug([k pivot_pos],:) = Aug([pivot_pos k],:);  % swap rows
    end
    %-----------------------------------------
    for j = k+1:n
        multiplier = Aug(j,k)/Aug(k,k);
        Aug(j,k:Nb) = Aug(j,k:Nb) - multiplier*Aug(k,k:Nb);
    end
end

Gauss Elimination method with partial pivoting

%---- back substitution ---------
x = zeros(n,1);
x(n) = Aug(n,Nb)/Aug(n,n);
for k = n-1:-1:1
    for j = k+1:n
        Aug(k,Nb) = Aug(k,Nb) - Aug(k,j)*x(j);  % computes b(k) - sum(a_kj*x_j)
    end
    x(k) = Aug(k,Nb)/Aug(k,k);
end

Scaling

Scaling is the operation of adjusting the coefficients of a set of equations so that they are all of the same order of magnitude.

Example: consider a system with known exact solution x_exact in which the coefficients of one column are several orders of magnitude larger than the others. If we do pivoting without scaling, round-off error may occur, and the computed solution x_C differs noticeably from x_exact. But if we first scale the system (divide each row by its maximum coefficient) and then do the pivoting, the computed solution x_C agrees with the exact one.

Scaling

So, whenever the coefficients in one column are widely different from those in another column, scaling is beneficial. When all values are of about the same order of magnitude, scaling should be avoided, as the additional round-off error incurred during the scaling operation itself may adversely affect the accuracy.
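A minimal sketch of the scaling step, assuming each row of the augmented matrix is divided by that row's largest-magnitude coefficient in A (as the slide describes; implicit row-wise division requires MATLAB R2016b or newer):

% Scale rows of [A b] before applying partial pivoting.
s = max(abs(A), [], 2);     % largest |coefficient| in each row of A
Aug = [A b] ./ s;           % scaled augmented matrix
% ...then apply the partial-pivoting elimination shown earlier to Aug.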

Matrix inversion

If matrix A is square and nonsingular, there is another matrix A⁻¹, called the inverse of A, for which A A⁻¹ = I = A⁻¹ A.

The inverse can be computed in a column-by-column fashion by generating solutions with the unit vectors as the right-hand-side constants. Example: for a 3x3 system,

with b1 = (1, 0, 0)^T, the solution of A x1 = b1 provides the 1st column of A⁻¹;
with b2 = (0, 1, 0)^T, the solution of A x2 = b2 provides the 2nd column of A⁻¹;
with b3 = (0, 0, 1)^T, the solution of A x3 = b3 provides the 3rd column of A⁻¹.

Matrix inversion

Finally, A⁻¹ = [x1 x2 x3]. Since our original system is Ax = b, the solution is x = A⁻¹ b.

Remarks:
- Computing the inverse costs roughly three times the flops of a single Gauss elimination and incurs more round-off error; it is computationally more expensive.
- A singular matrix (det A = 0) has no inverse.
- MATLAB: xc = inv(A)*b
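The column-by-column recipe is a short loop in MATLAB. A sketch (in practice one would use A\b directly rather than forming the inverse):

% Build inv(A) one column at a time by solving A*x = e_j.
n = size(A, 1);
I = eye(n);
Ainv = zeros(n);
for j = 1:n
    Ainv(:, j) = A \ I(:, j);   % j-th column of the inverse
end
max(max(abs(Ainv*A - I)))       % should be ~0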

Iterative methods

Iterative methods use an initial guess and refine the guess at each step, converging to the solution vector.
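As a concrete instance, here is a minimal Jacobi iteration sketch. The slides depict the method only graphically; the test system, tolerance, and iteration cap below are assumptions for illustration. Jacobi converges for strictly diagonally dominant matrices such as this one.

% Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k).
A = [10 -1 2; -1 11 -1; 2 -1 10];   % hypothetical diagonally dominant system
b = [6; 25; -11];
D = diag(diag(A));                  % diagonal part of A
R = A - D;                          % off-diagonal part
x = zeros(size(b));                 % initial guess
for it = 1:100                      % iteration cap (assumed)
    x_new = D \ (b - R*x);
    if norm(x_new - x, inf) < 1e-8  % tolerance (assumed)
        x = x_new; break
    end
    x = x_new;
end
x                                   % compare with A\b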

Gauss-Seidel and Jacobi methods: graphical depiction (figure)

Solving With MATLAB

MATLAB provides two direct ways to solve systems of linear algebraic equations [A]{x} = {b}:

Left division: x = A\b
Matrix inversion: x = inv(A)*b

Note: the matrix inverse is less efficient than left division and also only works for square, non-singular systems.

Sparse matrix computations

Suppose no row of A has more than 4 non-zero entries. Then fewer than 4n of the n² potential entries are non-zero, and we may call the matrix sparse.

Sparse matrix computations

We want to solve a system of equations defined by A for n = 10⁵. What are the options?

Treating the coefficient matrix A as a full matrix means storing n² = 10¹⁰ elements, each as a double-precision floating-point number. One double-precision floating-point number requires 8 bytes (64 bits) of storage, so n² = 10¹⁰ elements will require 8×10¹⁰ bytes = 80 gigabytes (approx.) of RAM.

Not only is size the enemy, but so is time. The number of flops for Gauss elimination will be on the order of n³ = 10¹⁵. If a PC runs on the order of a few GHz (10⁹ cycles per second), an upper bound on floating-point operations per second is around 10⁸. Therefore, the time required for Gauss elimination is about 10¹⁵/10⁸ = 10⁷ seconds. Note: 1 year = 3×10⁷ seconds (approx.).

Sparse matrix computations

So it is clear that the Gauss elimination method for this problem is not even an overnight computation. On the other hand, one step of an iterative method requires only about 2*4n = 8×10⁵ operations. If the solution needs 100 iterations of the Jacobi method, the total is 8×10⁷ operations, and the time required on a modern PC will be about 1 second (approx.) or less!!!
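MATLAB's sparse storage makes the memory argument concrete. An illustrative sketch (the slides do not specify A; this banded test matrix has at most 4 non-zeros per row):

% Sparse storage for a banded matrix with <= 4 nonzeros per row.
n = 1e5;
e = ones(n, 1);
A = spdiags([e -4*e e e], [-1 0 1 2], n, n);
whos A                 % a few megabytes, versus ~80 GB for full storage
b = ones(n, 1);
x = A\b;               % sparse direct solve; fast for banded systems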