Systems of Linear Equations


Systems of Linear Equations

Last time, we found that solving equations such as Poisson's equation or Laplace's equation on a grid is equivalent to solving a system of linear equations. There are many other examples where systems of linear equations appear, such as eigenvalue problems. In this lecture, we look into different approaches to solving systems of linear equations (SLEs).

An SLE with n unknowns x_j and m equations, where the a_ij and b_i are known:

    a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + a_23 x_3 + ... + a_2n x_n = b_2
    a_31 x_1 + a_32 x_2 + a_33 x_3 + ... + a_3n x_n = b_3
    ...
    a_m1 x_1 + a_m2 x_2 + a_m3 x_3 + ... + a_mn x_n = b_m

Winter Semester 2006/7, Computational Physics I, Lecture 8

SLEs

If m < n, there is no unique solution for the x's (the system is underconstrained). If m > n, the system can be overconstrained, and usually the job is to find the best set {x} to represent the set of equations. We will consider here square matrices, m = n, with det A ≠ 0, in which case the equations are linearly independent and there is a unique solution.

Matrix representation: A x = b, with

    A = ( a_11 a_12 ... a_1n )        b = ( b_1 )
        ( a_21 a_22 ... a_2n )            ( b_2 )
        ( ...                )            ( ... )
        ( a_n1 a_n2 ... a_nn )            ( b_n )

SLEs

Note that if the matrix is triangular, the solution is easy. For a lower-triangular matrix,

    ( a_11  0    ...  0    ) ( x_1 )   ( b_1 )
    ( a_21  a_22 ...  0    ) ( x_2 ) = ( b_2 )
    ( ...                  ) ( ... )   ( ... )
    ( a_n1  a_n2 ... a_nn  ) ( x_n )   ( b_n )

we can solve one equation at a time, from the top:

    a_11 x_1 = b_1              =>  x_1 = b_1/a_11
    a_21 x_1 + a_22 x_2 = b_2   =>  x_2 = (b_2 - a_21 b_1/a_11) / a_22
    ...

The goal of the Gaussian elimination method is to bring A into such a triangular form.
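The forward-substitution idea above translates directly into code. The lecture's own snippets are Fortran; the following is an illustrative Python sketch (the function name and the small 2x2 example are mine, not the lecture's):

```python
def forward_substitute(a, b):
    """Solve the lower-triangular system a x = b one equation at a time:
    x_1 = b_1/a_11, then x_2 = (b_2 - a_21 x_1)/a_22, and so on."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(a[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / a[i][i]
    return x

# A small lower-triangular example:
a = [[2.0, 0.0],
     [1.0, 4.0]]
b = [4.0, 10.0]
print(forward_substitute(a, b))  # [2.0, 2.0]
```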

SLEs

A x = b:

    ( a_11 a_12 ... a_1n ) ( x_1 )   ( b_1 )
    ( a_21 a_22 ... a_2n ) ( x_2 ) = ( b_2 )
    ( ...                ) ( ... )   ( ... )
    ( a_n1 a_n2 ... a_nn ) ( x_n )   ( b_n )

Note:
1. Interchanging two rows of A together with the same rows of b gives the same set of equations.
2. The solution is not changed if a row is replaced by a linear combination of itself and another row, as long as the b's are treated in the same way. E.g.,

       a_ij -> k_i a_ij + k_i' a_i'j,   j = 1, ..., n
       b_i  -> k_i b_i  + k_i' b_i'

3. Interchanging two columns of A gives the same result if simultaneously the corresponding two rows of the vector x are interchanged.

Gauss Elimination Method

Define l_i1 = a_i1/a_11. Transform A by subtracting l_i1 a_1^t from row i, where a_1^t is the transpose of row 1:

    ( a_1^t )      ( a_1^t              )
    ( a_2^t )  ->  ( a_2^t - l_21 a_1^t )
    ( ...   )      ( ...                )
    ( a_n^t )      ( a_n^t - l_n1 a_1^t )

or A^(1) = L_1 A, where

    L_1 = (  1                 )
          ( -l_21  1           )
          ( -l_31  0  1        )
          ( ...                )
          ( -l_n1  0  ...  0 1 )

is the Frobenius matrix.

Gauss Elimination Method

The result is

    A^(1) = ( a_11  a_12     ...  a_1n     )
            ( 0     a_22^(1) ...  a_2n^(1) )
            ( ...                          )
            ( 0     a_n2^(1) ...  a_nn^(1) )

Now subtract l_i2 = a_i2^(1)/a_22^(1) times the second row from rows 3, ..., n:

    A^(2) = L_2 A^(1) = L_2 L_1 A,    L_2 = ( 1                    )
                                            ( 0   1                )
                                            ( 0  -l_32  1          )
                                            ( ...                  )
                                            ( 0  -l_n2  0  ... 0 1 )

Gauss Elimination Method

Now we have

    A^(2) = ( a_11  a_12      a_13      ...  a_1n     )
            ( 0     a_22^(1)  a_23^(1)  ...  a_2n^(1) )
            ( 0     0         a_33^(2)  ...  a_3n^(2) )
            ( ...                                     )
            ( 0     0         a_n3^(2)  ...  a_nn^(2) )

Keep going until we have an upper-triangular matrix, after (n-1) steps:

    A^(n-1) = L_{n-1} L_{n-2} ... L_1 A = U

    U = ( u_11  u_12  u_13  ...  u_1n )
        ( 0     u_22  u_23  ...  u_2n )
        ( 0     0     u_33  ...  u_3n )
        ( ...                         )
        ( 0     0     0     ...  u_nn )

with U x = c and c = L_{n-1} L_{n-2} ... L_1 b.

Gauss Elimination Method

Note that

    L_1 = (  1            )        L_1^{-1} = ( 1            )
          ( -l_21  1      )                   ( l_21  1      )
          ( -l_31  0  1   )                   ( l_31  0  1   )
          ( ...           )                   ( ...          )
          ( -l_n1  ...  1 )                   ( l_n1  ...  1 )

and similarly for the other L_i. So L_{n-1} L_{n-2} ... L_1 A = U implies

    A = L_1^{-1} L_2^{-1} ... L_{n-1}^{-1} U = L U,

with

    L = L_1^{-1} L_2^{-1} ... L_{n-1}^{-1} = ( 1                         )
                                             ( l_21  1                   )
                                             ( l_31  l_32  1             )
                                             ( ...                       )
                                             ( l_n1  l_n2 ... l_n,n-1  1 )

Gauss Elimination Method

I.e., the matrix A has been decomposed (LU decomposition) into a lower triangular matrix times an upper triangular matrix. The solution is now easy:

    A x = L U x = b

First solve L y = b, going from the top:

    y_1 = b_1
    y_2 = b_2 - l_21 y_1
    ...

Then solve U x = y, going from the bottom:

    x_n     = y_n / u_nn
    x_{n-1} = (y_{n-1} - u_{n-1,n} x_n) / u_{n-1,n-1}
    ...
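The two substitution sweeps can be sketched in code. The lecture codes in Fortran; this is an illustrative Python translation, and the function name and the small 2x2 factors in the example are mine:

```python
def lu_solve(l, u, b):
    """Solve A x = b given A = L U: first L y = b from the top (forward
    substitution), then U x = y from the bottom (back substitution)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # y_1 = b_1, y_2 = b_2 - l_21 y_1, ...
        y[i] = b[i] - sum(l[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # x_n = y_n/u_nn, then work upward
        s = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / u[i][i]
    return x

# A = [[2, 2], [1, 4]] factors as L = [[1, 0], [0.5, 1]], U = [[2, 2], [0, 3]]:
print(lu_solve([[1.0, 0.0], [0.5, 1.0]],
               [[2.0, 2.0], [0.0, 3.0]],
               [6.0, 9.0]))  # [1.0, 2.0]
```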

Gauss Elimination Method

Take a concrete example:

    ( 4 1 1 1 ) ( x_1 )   ( 1 )
    ( 1 4 1 1 ) ( x_2 ) = ( 1 )
    ( 1 1 4 1 ) ( x_3 )   ( 1 )
    ( 1 1 1 4 ) ( x_4 )   ( 1 )

Here is some code:

*     Loop over the rows. The index i is the row which is not touched in this step
      Do i=1,n-1
*        For step i, we modify all rows j>i
         Do j=i+1,n
*           Loop over the column elements in this row. Perform the linear
*           transformation on matrix A. Keep the upper-diagonal elements in A
*           as we go. Also build up the lower-diagonal matrix at the same time.
            Lji=A(j,i)/A(i,i)
            Do k=1,n
               A(j,k)=A(j,k)-Lji*A(i,k)
            Enddo
            L(j,i)=Lji
         Enddo
      Enddo

Gauss Elimination Method

    A^(1) = ( 4.00000 1.00000 1.00000 1.00000 )    L^(1) = ( 1.00000 0.00000 0.00000 0.00000 )
            ( 0.00000 3.75000 0.75000 0.75000 )            ( 0.25000 1.00000 0.00000 0.00000 )
            ( 0.00000 0.75000 3.75000 0.75000 )            ( 0.25000 0.00000 1.00000 0.00000 )
            ( 0.00000 0.75000 0.75000 3.75000 )            ( 0.25000 0.00000 0.00000 1.00000 )

    A^(2) = ( 4.00000 1.00000 1.00000 1.00000 )    L^(2) = ( 1.00000 0.00000 0.00000 0.00000 )
            ( 0.00000 3.75000 0.75000 0.75000 )            ( 0.25000 1.00000 0.00000 0.00000 )
            ( 0.00000 0.00000 3.60000 0.60000 )            ( 0.25000 0.20000 1.00000 0.00000 )
            ( 0.00000 0.00000 0.60000 3.60000 )            ( 0.25000 0.20000 0.00000 1.00000 )

    A^(3) = ( 4.00000 1.00000 1.00000 1.00000 )    L^(3) = ( 1.00000 0.00000 0.00000 0.00000 )
            ( 0.00000 3.75000 0.75000 0.75000 )            ( 0.25000 1.00000 0.00000 0.00000 )
            ( 0.00000 0.00000 3.60000 0.60000 )            ( 0.25000 0.20000 1.00000 0.00000 )
            ( 0.00000 0.00000 0.00000 3.50000 )            ( 0.25000 0.20000 0.16667 1.00000 )

Gauss Elimination Method

*     Now build up the solution
*     First the vector y
      Do i=1,n
         y(i)=b(i)
         Do j=1,i-1
            y(i)=y(i)-L(i,j)*y(j)
         Enddo
      Enddo
*
*     and now x
*
      Do i=n,1,-1
         x(i)=y(i)
         Do j=i+1,n
            x(i)=x(i)-A(i,j)*x(j)
         Enddo
         x(i)=x(i)/A(i,i)
      Enddo

    y = ( 1.00 )      x = ( 0.14 )
        ( 0.75 )          ( 0.14 )
        ( 0.60 )          ( 0.14 )
        ( 0.50 )          ( 0.14 )

It's good practice to check the result. Is Ax = b? Is A = LU?
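Both checks are a few lines in any language. Here is a Python sketch using the slide's own numbers (the helper names `matvec` and `matmul` are mine):

```python
def matvec(a, x):
    """Matrix-vector product A x."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def matmul(a, b):
    """Matrix-matrix product A B for square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
a = [[4.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
x = [1.0 / 7.0] * n                         # the solution found above
l = [[1.0, 0.0, 0.0, 0.0],                  # L^(3) from the previous slide
     [0.25, 1.0, 0.0, 0.0],
     [0.25, 0.2, 1.0, 0.0],
     [0.25, 0.2, 1.0 / 6.0, 1.0]]
u = [[4.0, 1.0, 1.0, 1.0],                  # A^(3) = U
     [0.0, 3.75, 0.75, 0.75],
     [0.0, 0.0, 3.6, 0.6],
     [0.0, 0.0, 0.0, 3.5]]

print(all(abs(v - 1.0) < 1e-12 for v in matvec(a, x)))  # True: A x = b
lu = matmul(l, u)
print(all(abs(lu[i][j] - a[i][j]) < 1e-12
          for i in range(n) for j in range(n)))          # True: A = L U
```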

Gauss-Jordan Elimination

The Gauss-Jordan method changes the matrix A into the unit matrix in one pass. First, use rule 2 from p. 4 to recast the equations. Normalize the first row:

    a_1j -> a'_1j = a_1j / a_11,   j = 1, 2, 3, 4

Then take the following linear combination:

    a'_ij = a_ij + k_i a'_1j = a_ij + k_i a_1j / a_11,   i = 2, 3, 4

and choose k_i such that

    a'_i1 = 0 = a_i1 + k_i a_11 / a_11 = a_i1 + k_i,   i = 2, 3, 4,   or   k_i = -a_i1

Gauss-Jordan Elimination

Work with a 4x4 matrix to be concrete:

    ( a_11 a_12 a_13 a_14 ) ( x_1 )   ( b_1 )
    ( a_21 a_22 a_23 a_24 ) ( x_2 ) = ( b_2 )
    ( a_31 a_32 a_33 a_34 ) ( x_3 )   ( b_3 )
    ( a_41 a_42 a_43 a_44 ) ( x_4 )   ( b_4 )

After the first step, our matrix looks like this:

    ( 1  a_12/a_11             a_13/a_11             a_14/a_11             ) ( x_1 )   ( b_1/a_11            )
    ( 0  a_22 - a_12 a_21/a_11 a_23 - a_13 a_21/a_11 a_24 - a_14 a_21/a_11 ) ( x_2 ) = ( b_2 - b_1 a_21/a_11 )
    ( 0  a_32 - a_12 a_31/a_11 a_33 - a_13 a_31/a_11 a_34 - a_14 a_31/a_11 ) ( x_3 )   ( b_3 - b_1 a_31/a_11 )
    ( 0  a_42 - a_12 a_41/a_11 a_43 - a_13 a_41/a_11 a_44 - a_14 a_41/a_11 ) ( x_4 )   ( b_4 - b_1 a_41/a_11 )

Now we move on and make the second column look like the unit matrix, and so on.

Gauss-Jordan Elimination

For the second column:

    1.  a''_2j = a'_2j / a'_22
    2.  a''_ij = a'_ij - a'_i2 a'_2j / a'_22,   i = 1, 3, 4
    3.  b''_i  = b'_i  - a'_i2 b'_2  / a'_22,   i = 1, 3, 4

Note that the first column is not affected:

    a''_21 = a'_21 / a'_22 = 0 / a'_22 = 0
    a''_i1 = a'_i1 - a'_i2 a'_21 / a'_22 = a'_i1 - a'_i2 0 / a'_22 = a'_i1,   i = 1, 3, 4

Gauss-Jordan Elimination

After these two sets of transformations, we have

    ( 1 0 a''_13 a''_14 ) ( x_1 )   ( b''_1 )
    ( 0 1 a''_23 a''_24 ) ( x_2 ) = ( b''_2 )
    ( 0 0 a''_33 a''_34 ) ( x_3 )   ( b''_3 )
    ( 0 0 a''_43 a''_44 ) ( x_4 )   ( b''_4 )

Keep going until we have

    ( 1 0 0 0 ) ( x_1 )   ( b_1^(4) )
    ( 0 1 0 0 ) ( x_2 ) = ( b_2^(4) )
    ( 0 0 1 0 ) ( x_3 )   ( b_3^(4) )
    ( 0 0 0 1 ) ( x_4 )   ( b_4^(4) )

so that the solution can be read off directly: x_i = b_i^(4).

Gauss-Jordan Elimination

The algorithm looks like this:
1. Loop over the n columns. We want to turn the columns one at a time into the unit matrix.
2. For column k, we find the linear transformation on the rows which gives the desired result (1 in row k, 0 in the other rows):
   a) Loop over the n rows and make the diagonal element a 1. For i=k, do this by dividing every element A(i,j) by A(k,k), where j is the column index and i is the row index. Also need to divide b(i) by A(k,k).
   b) For i ≠ k, make the element in column k a 0. We do this by subtracting A(i,k)*A(k,j)/A(k,k) from A(i,j). For b(i), we subtract A(i,k)*b(k)/A(k,k).

Test matrices:

    A_1 = ( 4 1 1 1 )    b_1 = ( 1 )      A_2 = ( 1/1 1/2 1/3 1/4 )    b_2 = ( 1 )
          ( 1 4 1 1 )          ( 1 )            ( 1/2 1/3 1/4 1/5 )          ( 1 )
          ( 1 1 4 1 )          ( 1 )            ( 1/3 1/4 1/5 1/6 )          ( 1 )
          ( 1 1 1 4 )          ( 1 )            ( 1/4 1/5 1/6 1/7 )          ( 1 )
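The steps above can be sketched in Python (the lecture's own code is Fortran; this version works on copies and, like the slide, does no pivoting yet):

```python
def gauss_jordan(a, b):
    """Reduce [A|b] to [I|x], one column at a time (no pivoting)."""
    n = len(b)
    a = [row[:] for row in a]           # work on copies
    b = b[:]
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]  # step a): make the diagonal element a 1
        b[k] /= piv
        for i in range(n):              # step b): zero the rest of column k
            if i != k:
                f = a[i][k]
                a[i] = [a[i][j] - f * a[k][j] for j in range(n)]
                b[i] -= f * b[k]
    return b

# Test matrix A_1: the solution should be x_i = 1/7 = 0.142857...
a1 = [[4.0 if i == j else 1.0 for j in range(4)] for i in range(4)]
print([round(v, 6) for v in gauss_jordan(a1, [1.0] * 4)])
# [0.142857, 0.142857, 0.142857, 0.142857]
```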

Gauss-Jordan Elimination

Start with A_1, augmented with b:

    ( 4.00 1.00 1.00 1.00 | 1.00 )
    ( 1.00 4.00 1.00 1.00 | 1.00 )
    ( 1.00 1.00 4.00 1.00 | 1.00 )
    ( 1.00 1.00 1.00 4.00 | 1.00 )

First iteration:                       Second iteration:

    ( 1.00 0.25 0.25 0.25 | 0.25 )         ( 1.00 0.00 0.20 0.20 | 0.20 )
    ( 0.00 3.75 0.75 0.75 | 0.75 )         ( 0.00 1.00 0.20 0.20 | 0.20 )
    ( 0.00 0.75 3.75 0.75 | 0.75 )         ( 0.00 0.00 3.60 0.60 | 0.60 )
    ( 0.00 0.75 0.75 3.75 | 0.75 )         ( 0.00 0.00 0.60 3.60 | 0.60 )

Third iteration:                       Fourth iteration:

    ( 1.00 0.00 0.00 0.17 | 0.17 )         ( 1.00 0.00 0.00 0.00 | 0.14 )
    ( 0.00 1.00 0.00 0.17 | 0.17 )         ( 0.00 1.00 0.00 0.00 | 0.14 )
    ( 0.00 0.00 1.00 0.17 | 0.17 )         ( 0.00 0.00 1.00 0.00 | 0.14 )
    ( 0.00 0.00 0.00 3.50 | 0.50 )         ( 0.00 0.00 0.00 1.00 | 0.14 )

Gauss-Jordan Elimination

We can use the same transformations to get the inverse of A:

    A A^{-1} = E

Now suppose the transformation of A to E consists of the following steps:

    T_4 T_3 T_2 T_1 A = E

Apply these to the equation above,

    T_4 T_3 T_2 T_1 A A^{-1} = T_4 T_3 T_2 T_1 E

or

    E A^{-1} = A^{-1} = T_4 T_3 T_2 T_1 E

Gauss-Jordan Elimination

Let's try it on the previous example. Starting from E:

    ( 1.00000 0.00000 0.00000 0.00000 )
    ( 0.00000 1.00000 0.00000 0.00000 )
    ( 0.00000 0.00000 1.00000 0.00000 )
    ( 0.00000 0.00000 0.00000 1.00000 )

After T_1:                                   After T_2:

    (  0.25000 0.00000 0.00000 0.00000 )         (  0.26667 -0.06667 0.00000 0.00000 )
    ( -0.25000 1.00000 0.00000 0.00000 )         ( -0.06667  0.26667 0.00000 0.00000 )
    ( -0.25000 0.00000 1.00000 0.00000 )         ( -0.20000 -0.20000 1.00000 0.00000 )
    ( -0.25000 0.00000 0.00000 1.00000 )         ( -0.20000 -0.20000 0.00000 1.00000 )

After T_3:                                   After T_4:

    (  0.27778 -0.05556 -0.05556 0.00000 )       (  0.28571 -0.04762 -0.04762 -0.04762 )
    ( -0.05556  0.27778 -0.05556 0.00000 )       ( -0.04762  0.28571 -0.04762 -0.04762 )
    ( -0.05556 -0.05556  0.27778 0.00000 )       ( -0.04762 -0.04762  0.28571 -0.04762 )
    ( -0.16667 -0.16667 -0.16667 1.00000 )       ( -0.04762 -0.04762 -0.04762  0.28571 )
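The same sweep, applied to E alongside A, can be sketched in Python (an illustrative translation; the function name is mine):

```python
def gj_inverse(a):
    """Gauss-Jordan inversion: apply every transformation T_k to E as well;
    when A has become E, the copy of E has become A^{-1}."""
    n = len(a)
    a = [row[:] for row in a]
    e = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]
        e[k] = [v / piv for v in e[k]]
        for i in range(n):
            if i != k:
                f = a[i][k]
                a[i] = [a[i][j] - f * a[k][j] for j in range(n)]
                e[i] = [e[i][j] - f * e[k][j] for j in range(n)]
    return e

# The slide's A_1 example: diagonal entries 2/7 = 0.28571, off-diagonal -1/21 = -0.04762
a1 = [[4.0 if i == j else 1.0 for j in range(4)] for i in range(4)]
inv = gj_inverse(a1)
print(round(inv[0][0], 5), round(inv[0][1], 5))  # 0.28571 -0.04762
```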

Gauss-Jordan Elimination

With A_2, augmented with b:

    ( 1.00000 0.50000 0.33333 0.25000 | 1.00 )
    ( 0.50000 0.33333 0.25000 0.20000 | 1.00 )
    ( 0.33333 0.25000 0.20000 0.16667 | 1.00 )
    ( 0.25000 0.20000 0.16667 0.14286 | 1.00 )

We start to see rounding issues:

    ( 1.00000 0.50000 0.33333 0.25000 | 1.00 )     ( 1.00000 0.00000 -0.16667 -0.20000 | -2.00 )
    ( 0.00000 0.08333 0.08333 0.07500 | 0.50 )     ( 0.00000 1.00000  1.00000  0.90000 |  6.00 )
    ( 0.00000 0.08333 0.08889 0.08333 | 0.67 )     ( 0.00000 0.00000  0.00556  0.00833 |  0.17 )
    ( 0.00000 0.07500 0.08333 0.08036 | 0.75 )     ( 0.00000 0.00000  0.00833  0.01286 |  0.30 )

    ( 1.00000 0.00000 0.00000  0.05000 |   3.00 )  ( 1.00000 0.00000 0.00000 0.00000 |   -4.00 )
    ( 0.00000 1.00000 0.00000 -0.60000 | -24.00 )  ( 0.00000 1.00000 0.00000 0.00000 |   59.99 )
    ( 0.00000 0.00000 1.00000  1.50000 |  30.00 )  ( 0.00000 0.00000 1.00000 0.00000 | -179.99 )
    ( 0.00000 0.00000 0.00000  0.00036 |   0.05 )  ( 0.00000 0.00000 0.00000 1.00000 |  139.99 )

Gauss-Jordan Elimination

As a check, we calculate the result for x for different initial matrices. Use matrices of the form A_2,

    a_ij = 1/(i + j - 1),

and try square matrices of different sizes; multiplying the solution back, A x should give us our initial vector b. We find that we start to run into numerical problems at n = m = 10.

Single precision calculation (all values should be 1):

    0.971252441  0.977539062  0.981124878  0.983856201  0.985870361
    0.987503052  0.988098145  0.989746094  0.990676880  0.991607666

Double precision calculation: all values are = 1.
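Python's floats are double precision, so the single-precision run above cannot be reproduced with the standard library alone. As an illustrative variation (the error measure and helper names are mine, not the lecture's), the sketch below contrasts exact rational arithmetic with doubles on the same n = 10 matrix, this time measuring the error in the solution itself:

```python
from fractions import Fraction

def gauss_jordan(a, b):
    """Plain Gauss-Jordan (no pivoting), generic over the number type."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]
        b[k] = b[k] / piv
        for i in range(n):
            if i != k:
                f = a[i][k]
                a[i] = [a[i][j] - f * a[k][j] for j in range(n)]
                b[i] = b[i] - f * b[k]
    return b

def hilbert_error(n, num):
    """Solve A x = b with a_ij = 1/(i+j-1) and b = A (1,...,1)^t, so the
    exact solution is x = (1,...,1); return the largest deviation from 1."""
    a = [[num(1) / num(i + j + 1) for j in range(n)] for i in range(n)]  # 0-based i, j
    b = [sum(row, num(0)) for row in a]
    x = gauss_jordan(a, b)
    return max(abs(v - 1) for v in x)

print(hilbert_error(10, Fraction) == 0)  # True: rational arithmetic is exact
print(hilbert_error(10, float))          # nonzero, far above machine epsilon
```

The ill-conditioning of this matrix, not a bug in the elimination, is what amplifies the rounding errors.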

Gauss-Jordan Elimination

Check: 100x100 matrix, double precision. All 100 components of the check vector come out between 0.999999987 and 0.999999997.

Looks pretty good. However, there are cases where this simple Gauss-Jordan technique does not work so well.

Pivoting

What if the diagonal element which we divide by is very small or zero? (The element we divide by is called the pivot.) This obviously causes problems. They are resolved by a technique called pivoting (exchanging rows and/or columns):

Partial pivoting - exchange rows in the region which has not yet been transformed to the unit matrix.

Full pivoting - exchange rows and columns.

These tricks are allowed according to the rules we stated on pages 4 and 5.

Partial Pivoting

The algorithm looks like this:
1. Loop over the n columns. We want to turn the columns one at a time into the unit matrix. Use the index k to specify which column element we want to turn into a 1.
2. For each value of k, we loop over the rows i ≥ k and look for the maximum value of |A(i,k)|. The row i which gives this maximum is swapped with row k. Use an index vector to keep track of the permutations: Ind(i) = row which is to be considered as the i-th row.
3. Find the linear transformation:
   a) Loop over the n rows and make the diagonal element a 1. For Ind(i)=k, do this by dividing every element A(Ind(i),j) by A(Ind(i),k), where j is the column index and Ind(i) is the row index. Also need to divide b(Ind(i)) by A(Ind(i),k).
   b) For Ind(i) ≠ k, make the element in column k a 0. We do this by subtracting A(Ind(i),k)*A(Ind(k),j)/A(Ind(k),k) from A(Ind(i),j). For b(Ind(i)), we subtract A(Ind(i),k)*b(Ind(k))/A(Ind(k),k).
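A Python sketch of the idea (for readability this version swaps the rows physically instead of keeping the index vector Ind(i); the two are equivalent):

```python
def gauss_jordan_pivot(a, b):
    """Gauss-Jordan with partial pivoting: before eliminating column k,
    bring in the not-yet-used row with the largest |A(i,k)|."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))   # pivot row
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]
        b[k] /= piv
        for i in range(n):
            if i != k:
                f = a[i][k]
                a[i] = [a[i][j] - f * a[k][j] for j in range(n)]
                b[i] -= f * b[k]
    return b

# A zero on the diagonal is fatal without pivoting but harmless with it:
a = [[0.0, 1.0],
     [2.0, 0.0]]
b = [3.0, 4.0]
print(gauss_jordan_pivot(a, b))  # [2.0, 3.0]
```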

Partial Pivoting

Example:

    ( 1.00000 0.50000 0.33333 0.25000 | 1.00 )     ( 1.00000 0.50000 0.33333 0.25000 | 1.00 )
    ( 0.50000 0.33333 0.25000 0.20000 | 1.00 )     ( 0.00000 0.08333 0.08333 0.07500 | 0.50 )
    ( 0.33333 0.25000 0.20000 0.16667 | 1.00 )     ( 0.00000 0.08333 0.08889 0.08333 | 0.67 )
    ( 0.25000 0.20000 0.16667 0.14286 | 1.00 )     ( 0.00000 0.07500 0.08333 0.08036 | 0.75 )

    ( 1.00000 0.00000 -0.16667 -0.20000 | -2.00 )  ( 1.00000 0.00000 0.00000  0.05714 |   4.00 )
    ( 0.00000 1.00000  1.00000  0.90000 |  6.00 )  ( 0.00000 1.00000 0.00000 -0.64286 | -30.00 )
    ( 0.00000 0.00000  0.00556  0.00833 |  0.17 )  ( 0.00000 0.00000 0.00000 -0.00024 |  -0.03 )
    ( 0.00000 0.00000  0.00833  0.01286 |  0.30 )  ( 0.00000 0.00000 1.00000  1.54286 |  36.00 )

    ( 1.00000 0.00000 0.00000 0.00000 |   -4.00 )
    ( 0.00000 1.00000 0.00000 0.00000 |   60.00 )
    ( 0.00000 0.00000 0.00000 1.00000 |  139.99 )
    ( 0.00000 0.00000 1.00000 0.00000 | -179.99 )

Exercises

1. Solve for x in Ax = b using the LU decomposition method, with

       a_ij = 1/(i + j + 1),   b_j = j

2. Do it again with the Gauss-Jordan method.

3. (More difficult.) Now try

       a_ij = |i - j|  for all i, j,   b_j = j

   You will need to use pivoting.