Motivation for the LU Decomposition: Factoring A = LU

To solve Ax = b, we can row reduce the augmented matrix

      [ a1 a2 ... ai ... an | b ]  ~  ...  ~  [ c1 c2 ... ci ... cn | d ]

until [ c1 c2 ... ci ... cn ] is in an echelon form (this is the forward part of the row reduction process). Then we can

      i) use back substitution to solve for x, or

      ii) continue with the second (backward) part of the row reduction process until [ c1 c2 ... ci ... cn ] is in reduced row echelon form, at which point it's easy to read off the solutions of Ax = b.

A computer might take the first approach; the second might be better for hand calculations. But both i) and ii) take about the same number of arithmetic operations (see the first paragraph about back substitution in Sec. 1.2 (p. 19) of the textbook).

Whether we use i) or ii) to solve Ax = b, it is sometimes necessary in applications to solve a large system Ax = b many times for the same A but a different b each time, perhaps a system with millions of variables solved thousands of times! For example, read the description of the aircraft design problem at the beginning of Chapter 2. It's too inefficient to row reduce [ a1 a2 ... ai ... an | b ] from scratch each time, even if a computer is doing the work.

One way to avoid these repeated row reductions is to factor A, once and for all, in a way that makes it relatively easy to solve Ax = b again each time b changes. This is the motivation for the LU decomposition (also called an LU factorization).

The LU Decomposition

An LU decomposition of an m x n matrix A is a factorization A = LU, where

      U is an echelon form of A, and
      L is a square unit lower triangular matrix.

For a square matrix, "lower triangular" means all 0's above the diagonal; "unit lower triangular" means, in addition, only 1's on the diagonal. A couple of observations:

      • U has the same shape, m x n, as A, because U is an echelon form of A.

      • If A and U happen to be square, then the echelon form U is automatically upper triangular. But even when U is not square, U has only 0's below the leading entry in each row, so U still resembles an upper triangular matrix.

      • Since L is square, L must be m x m for the product LU to make sense. Since L is unit lower triangular, L is always invertible (why? think about the reduced row echelon form of L).

It's not always possible to find an LU decomposition for A. But when it is possible, the number of steps needed to find L and U is not much different from the number of steps required to solve Ax = b a single time by row reduction.
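(A side remark for readers who compute: the following is a minimal NumPy/SciPy sketch, added by the editor rather than taken from the notes, of the "factor once, solve many times" idea just described. scipy.linalg.lu_factor and lu_solve are SciPy's built-in LU routines; the matrix here is just a random stand-in.)

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 1000))   # stand-in for a large coefficient matrix

    factorization = lu_factor(A)            # the expensive step, done only once

    for _ in range(5):                      # each new b costs only two triangular solves
        b = rng.standard_normal(1000)
        x = lu_solve(factorization, b)
        assert np.allclose(A @ x, b)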

And if we can factor A in this way, then it takes relatively few additional steps to solve Ax = b for x each time a new b is chosen. Why? If we can write Ax = LUx = b, then we can substitute y = Ux. The original matrix equation Ax = b is then replaced by two new ones:

      (*)   L y = b   and   y = U x

This is an improvement (as Example 1 will show) because:

      i) it's easy to solve Ly = b for y, because L is lower triangular;

      ii) after finding y, solving Ux = y is also easy, because U is already in echelon form; and

      iii) L and U remain the same whenever we change b.

Example 1  Find an LU decomposition of

      A = [ 1  2  2 ]
          [ 2  1  0 ]
          [ 0  4 -2 ]

and use it to solve the matrix equation

      [ 1  2  2 ]        [ 2 ]
      [ 2  1  0 ]  x  =  [ 1 ]
      [ 0  4 -2 ]        [ 1 ]

First, reduce A to an echelon form U. (It turns out that we need to be careful about which EROs we use during the row reduction, but for now just follow along; the discussion that comes later will explain what caution is necessary.) Each elementary row operation that we use corresponds to left multiplication by an elementary matrix, and we record those matrices at each step:

      A = [ 1  2  2 ]     [ 1  2  2 ]     [ 1  2   2   ]
          [ 2  1  0 ]  ~  [ 0 -3 -4 ]  ~  [ 0 -3  -4   ]  =  U   (an echelon form)
          [ 0  4 -2 ]     [ 0  4 -2 ]     [ 0  0 -22/3 ]

where the two steps correspond to

      E1 = [  1  0  0 ]          E2 = [ 1   0   0 ]
           [ -2  1  0 ]   and         [ 0   1   0 ]
           [  0  0  1 ]               [ 0  4/3  1 ]

so we get E2 E1 A = U. Then A = (E2 E1)^(-1) U = E1^(-1) E2^(-1) U, where

      E1^(-1) E2^(-1) = [ 1  0  0 ] [ 1   0   0 ]     [ 1   0    0 ]
                        [ 2  1  0 ] [ 0   1   0 ]  =  [ 2   1    0 ]
                        [ 0  0  1 ] [ 0 -4/3  1 ]     [ 0 -4/3   1 ]

In this example, E1^(-1) E2^(-1) is unit lower triangular, so we let L = E1^(-1) E2^(-1), and A = LU is the decomposition we want. (Is it just luck that E1^(-1) E2^(-1) turned out to be unit lower triangular? No, it's because we were careful about the steps in the row reduction: see the discussion below.)

To solve Ax = LUx = b, substitute y = Ux to get the two equations (*):

      L y = b:    [ 1   0    0 ]        [ 2 ]
                  [ 2   1    0 ]  y  =  [ 1 ]
                  [ 0 -4/3   1 ]        [ 1 ]

      y = U x:    [ 1  2   2   ]
                  [ 0 -3  -4   ]  x  =  y
                  [ 0  0 -22/3 ]

First solve the top equation for y. To do this, we can reduce [ L | b ] to reduced row echelon form, or we can use forward substitution: just like back substitution, but starting with the top equation. Either way we need only a few steps to get y, because L is lower triangular. Forward substitution gives:

      y1 = 2;   y2 = 1 - 2 y1 = -3;   y3 = 1 + (4/3) y2 = -3,

so

      y = [ y1 ]   [  2 ]
          [ y2 ] = [ -3 ]
          [ y3 ]   [ -3 ]

Then the second equation becomes

      [ 1  2   2   ]        [  2 ]
      [ 0 -3  -4   ]  x  =  [ -3 ]
      [ 0  0 -22/3 ]        [ -3 ]

which we can quickly solve with back substitution or further row reduction. Back substitution gives:

      (-22/3) x3 = -3,  so  x3 = (-3)(-3/22) = 9/22;
      -3 x2 = -3 + 4 x3 = -15/11,  so  x2 = 5/11;
      x1 = 2 - 2 x2 - 2 x3 = 3/11.

Therefore

      x = [ x1 ]   [ 3/11 ]
          [ x2 ] = [ 5/11 ]
          [ x3 ]   [ 9/22 ]
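(The two triangular solves in (*) are easy to automate. Here is a short sketch, the editor's own rather than the textbook's: hand-rolled forward and back substitution, applied to the L, U and b of Example 1. It reproduces the y and x found above.)

    import numpy as np

    def forward_substitution(L, b):
        # Solve L y = b, top equation first (L square lower triangular).
        n = len(b)
        y = np.zeros(n)
        for i in range(n):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        return y

    def back_substitution(U, y):
        # Solve U x = y, bottom equation first (U square upper triangular
        # with nonzero diagonal: the square invertible case).
        n = len(y)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
        return x

    L = np.array([[1, 0, 0], [2, 1, 0], [0, -4/3, 1]])
    U = np.array([[1, 2, 2], [0, -3, -4], [0, 0, -22/3]])
    A, b = L @ U, np.array([2.0, 1.0, 1.0])

    y = forward_substitution(L, b)
    x = back_substitution(U, y)
    assert np.allclose(y, [2, -3, -3])
    assert np.allclose(x, [3/11, 5/11, 9/22])
    assert np.allclose(A @ x, b)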

Example 1, continued: same coefficient matrix A but a different b

Now that we have factored A = LU, we can solve another equation

      A x = [ 1 ]
            [ 2 ]
            [ 0 ]

easily. (Compare with the work above: here only b has changed!)

      L y = b:    [ 1   0    0 ]        [ 1 ]
                  [ 2   1    0 ]  y  =  [ 2 ]
                  [ 0 -4/3   1 ]        [ 0 ]

      y = U x:    [ 1  2   2   ]
                  [ 0 -3  -4   ]  x  =  y
                  [ 0  0 -22/3 ]

By forward substitution, y1 = 1; y2 = 2 - 2(1) = 0; y3 = 0 + (4/3)(0) = 0, so y = [1, 0, 0]^T. Then the second equation becomes

      [ 1  2   2   ]        [ 1 ]
      [ 0 -3  -4   ]  x  =  [ 0 ]
      [ 0  0 -22/3 ]        [ 0 ]

and, by back substitution: x3 = 0; -3 x2 = 0 + 4(0), so x2 = 0; and x1 = 1 - 2(0) - 2(0) = 1. So x = [1, 0, 0]^T.

When does the method in Example 1 produce an LU decomposition?

We can always perform the steps illustrated in Example 1 to get a factorization A = ZU, where U is an echelon form of A and Z is a product of elementary matrices (in Example 1, Z = E1^(-1) E2^(-1)). But Z won't always turn out to be lower triangular, in which case Z might not deserve the name "L". When does the method work? (Compare the comments below to Example 1.)

Suppose (as in Example 1) that the only type of ERO used to row reduce A is "add a multiple of a row to a LOWER row." Let these EROs correspond to the elementary matrices E1, ..., Ep. Then Ep ... E1 A = U, and therefore A = E1^(-1) ... Ep^(-1) U. Each matrix Ei and its inverse Ei^(-1) will be unit lower triangular (why?). Then L = E1^(-1) ... Ep^(-1) is also lower triangular with 1's on its diagonal. (Convince yourself that a product of unit lower triangular matrices is a unit lower triangular matrix.)

However, note that:

      i) if a row rescaling ERO had been used, then the corresponding elementary matrix would not have only 1's on the diagonal, and therefore the product E1^(-1) ... Ep^(-1) might not be unit lower triangular;

      ii) if a row interchange ("swap") ERO had been used, then the corresponding elementary matrix would not be lower triangular, and therefore the product E1^(-1) ... Ep^(-1) might not be lower triangular.
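(A quick sanity check of that last claim, added by the editor: the matrices below are E1 and E2 from Example 1, and the product of their inverses does come out unit lower triangular. Illustrative only.)

    import numpy as np

    E1 = np.array([[1.0, 0, 0], [-2, 1, 0], [0, 0, 1]])   # add -2 (row 1) to row 2
    E2 = np.array([[1.0, 0, 0], [0, 1, 0], [0, 4/3, 1]])  # add 4/3 (row 2) to row 3

    L = np.linalg.inv(E1) @ np.linalg.inv(E2)              # E1^(-1) E2^(-1)
    assert np.allclose(L, np.tril(L))                      # lower triangular
    assert np.allclose(np.diag(L), 1.0)                    # 1's on the diagonal
    assert np.allclose(L, [[1, 0, 0], [2, 1, 0], [0, -4/3, 1]])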

To summarize: the method illustrated in Example 1 always produces an LU decomposition of A if

      (**)   the only EROs used to row reduce A to U are of the form
             "add a multiple of a row to a LOWER row."

In that case, if the EROs used correspond to multiplication by elementary matrices E1, ..., Ep, then Ep ... E1 A = U is in echelon form and

      (***)  A = LU, where L = (Ep ... E1)^(-1) = E1^(-1) ... Ep^(-1) is unit lower triangular.

Stopping to think, you can see that the restriction in (**) is not really as severe as it sounds. For example:

      • When you do row reduction systematically, according to the standard steps outlined in the text, you never add a multiple of a row to a HIGHER row. The reason for using this ERO is to create 0's below a pivot. In other words, adding multiples of rows only to lower rows doesn't actually constrain you at all: it's just standard procedure in row reductions.

      • "No row rescaling" is at most an inconvenience. When row reducing by hand, rescaling may be helpful to simplify the arithmetic, but rescaling is never necessary to reach an echelon form of A. (Of course, rescaling might be needed to proceed on to the REDUCED row echelon form; there, rescaling may be needed to convert leading entries into 1's.)

Side comment: if we did allow row rescaling, it would only mean that we might end up with a lower triangular L in which some diagonal entries are not 1. From an equation-solving point of view, this would not be a disaster. However, the text follows the rule that L should have only 1's on its diagonal, so we'll stick to that convention.

But (**) really does impose some restriction. For example,

      A = [ 0  2  0 ]
          [ 2  0  0 ]
          [ 0  0  2 ]

simply cannot be reduced to echelon form without swapping rows 1 and 2. Here the method of Example 1 simply will not work to produce an LU decomposition. However (see the last section of these notes), there is a work-around that is almost as good as an LU decomposition in such situations.

To find L more efficiently

If A is large, then row reducing A to echelon form might involve a very large number of elementary matrices Ei in the formula (***), so that using this formula to find L is too much work. Fortunately, there is a technique, illustrated in Example 2, for writing down L, entry by entry, as A is being row reduced. (The text gives a slightly different presentation and offers an explanation of why it works; it's really nothing more than careful bookkeeping. Since these notes are just a survey of LU decompositions, we will only write down the method.)

Assume A is m x n and that it can be row reduced to U following the rule (**). Then L will be a square m x m matrix, and you can write down L using these steps:

      • Put 1's on the diagonal of L and 0's everywhere above the diagonal.

      • Whenever you use a row operation "add c times row i to a lower row j," put -c in the (j,i) position of L. (Careful: notice the sign change on c, and put -c in the (j,i) position of L, not in the (i,j) position.)

      • When A has been reduced to echelon form, fill in any remaining empty positions of L with 0's.

If we apply these steps while row reducing in Example 1, here's what we get:

      • A was 3 x 3, so start with

            L = [ 1  0  0 ]
                [ *  1  0 ]
                [ *  *  1 ]

        (the *'s mark positions still to be filled in).

      • The row operations we used in Example 1 were:

            add -2 (row 1) to row 2, so put +2 in the (2,1) position of L: that is, L21 = 2;
            add 4/3 (row 2) to row 3, so put -4/3 in the (3,2) position of L: that is, L32 = -4/3.

        Now we have

            L = [ 1    0   0 ]
                [ 2    1   0 ]
                [ *  -4/3  1 ]

      • Fill in the rest of L with 0's, to get

            L = [ 1    0   0 ]
                [ 2    1   0 ]
                [ 0  -4/3  1 ]

        the same as we got by multiplying elementary matrices.
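(The bookkeeping above is exactly what a simple computer implementation does. Below is an editor-written Python sketch of the idea, not code from the text: it assumes the rule (**) suffices, i.e. every pivot it meets is nonzero and no swaps or rescalings are needed, and it stores each multiplier, with the sign flipped, directly into L.)

    import numpy as np

    def lu_by_bookkeeping(A):
        # Sketch only: assumes square A, reducible by rule (**) alone.
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        for i in range(n - 1):
            for j in range(i + 1, n):
                c = -U[j, i] / U[i, i]   # ERO: add c (row i) to the lower row j
                U[j, :] += c * U[i, :]
                L[j, i] = -c             # record -c in the (j, i) position of L
        return L, U

    A = np.array([[1.0, 2, 2], [2, 1, 0], [0, 4, -2]])   # Example 1 again
    L, U = lu_by_bookkeeping(A)
    assert np.allclose(L @ U, A)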

Example 2  In this example, A is a little larger and not square; but otherwise the calculations are quite parallel to those in Example 1. If possible, find an LU factorization of

      A = [  1  0   0   2   1   3 ]
          [  2  0   5   6  14  11 ]
          [ -1  0  -2  -2  -5  -5 ]

and use it to solve

      A x = b = [  1 ]
                [  2 ]
                [ -1 ]

Reducing A to an echelon form:

      A ~ [  1  0   0   2   1   3 ]      (add -2 (row 1) to row 2)
          [  0  0   5   2  12   5 ]
          [ -1  0  -2  -2  -5  -5 ]

        ~ [  1  0   0   2   1   3 ]      (add 1 (row 1) to row 3)
          [  0  0   5   2  12   5 ]
          [  0  0  -2   0  -4  -2 ]

        ~ [  1  0   0   2    1   3 ]     (add 2/5 (row 2) to row 3)
          [  0  0   5   2   12   5 ]  =  U
          [  0  0   0  4/5  4/5  0 ]

U is in echelon form, and no rescaling or row swaps were used in the row reduction. To find L: start with

      L = [ 1  0  0 ]
          [ *  1  0 ]
          [ *  *  1 ]

The row operations we used were

      add -2 (row 1) to (row 2)
      add  1 (row 1) to (row 3)
      add 2/5 (row 2) to (row 3)

so, flipping the sign of each multiplier,

      L = [  1    0   0 ]
          [  2    1   0 ]
          [ -1  -2/5  1 ]

and A = LU, as you can check by multiplying. We can solve Ax = b by writing

      L y = b:    [  1    0   0 ]        [  1 ]
                  [  2    1   0 ]  y  =  [  2 ]
                  [ -1  -2/5  1 ]        [ -1 ]

      y = U x:    [ 1  0  0   2    1   3 ]
                  [ 0  0  5   2   12   5 ]  x  =  y
                  [ 0  0  0  4/5  4/5  0 ]

The first equation gives

      y1 = 1;   y2 = 2 - 2 y1 = 0;   y3 = -1 + y1 + (2/5) y2 = 0,

so y = [1, 0, 0]^T, and then the second equation becomes

      [ 1  0  0   2    1   3 ]        [ 1 ]
      [ 0  0  5   2   12   5 ]  x  =  [ 0 ]
      [ 0  0  0  4/5  4/5  0 ]        [ 0 ]

Here columns 1, 3 and 4 are the pivot columns, so x2, x5 and x6 are free variables. Back substitution gives

      (4/5) x4 + (4/5) x5 = 0,                so  x4 = -x5;
      5 x3 = -2 x4 - 12 x5 - 5 x6 = -10 x5 - 5 x6,   so  x3 = -2 x5 - x6;
      x1 = 1 - 2 x4 - x5 - 3 x6 = 1 + x5 - 3 x6.

So the solution is

      x = [ x1 ]   [ 1 + x5 - 3 x6 ]   [ 1 ]        [ 0 ]        [  1 ]        [ -3 ]
          [ x2 ]   [      x2       ]   [ 0 ]        [ 1 ]        [  0 ]        [  0 ]
          [ x3 ] = [  -2 x5 - x6   ] = [ 0 ] + x2 * [ 0 ] + x5 * [ -2 ] + x6 * [ -1 ]
          [ x4 ]   [     -x5       ]   [ 0 ]        [ 0 ]        [ -1 ]        [  0 ]
          [ x5 ]   [      x5       ]   [ 0 ]        [ 0 ]        [  1 ]        [  0 ]
          [ x6 ]   [      x6       ]   [ 0 ]        [ 0 ]        [  0 ]        [  1 ]

Factoring A with slight variations on A = LU: LDU decompositions

Suppose we row reduce A according to (**) and get the factorization A = LU. If we loosen up and also allow rescaling during the row reduction of A, we can get a factorization A = LDU, where, as before, L is unit lower triangular; D is a (square) diagonal matrix; and U is an echelon form of A which (by rescaling) has only 1's in its pivot positions. Since L is unit lower triangular, this produces more symmetry in appearance between L and U. (For example: suppose A is square and invertible, and that A = LDU as described. Then L and U will both be square, with L unit lower triangular and U unit upper triangular: 0's below the diagonal and only 1's on the diagonal. Why must U have that form?) For U, this is an aesthetically nicer form but, frankly, it doesn't help much in solving the system of equations.

To illustrate the (easy) details: first factor A = LU. Create a diagonal matrix D as follows:

      If row i of U has a pivot position, let pi (which is nonzero) be the number in the pivot position. Factor pi out of row i of U, and use the number pi as the i-th diagonal entry of D.

      If row i of U has no pivot position (a row of 0's), use 0 as the i-th diagonal entry of D.

      Use 0 for all the other entries of D.

After factoring the pi's out of the rows of U, what's left is a new matrix U' that has only 1's in its pivot positions, and DU' = U (since multiplication on the left by a diagonal matrix just multiplies each row of U' by the corresponding diagonal entry of D). Therefore A = LDU'.

At this point, we abuse the notation and write U in place of U'. We do this only for neatness, since the rightmost matrix in this factorization is traditionally called U; just remember that the U in A = LDU usually isn't the same as the U in the original factorization A = LU.

A = LDU is cleverly called an LDU decomposition of A.
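(Here is a small editor-written sketch of the pivot-factoring step just described, assuming U is already in echelon form; rows of zeros get a 0 on D's diagonal, exactly as in the recipe.)

    import numpy as np

    def factor_out_pivots(U, tol=1e-12):
        m = U.shape[0]
        D = np.zeros((m, m))
        U_new = np.zeros(U.shape)
        for i in range(m):
            nonzero = np.flatnonzero(np.abs(U[i]) > tol)
            if nonzero.size > 0:            # row i has a pivot p
                p = U[i, nonzero[0]]
                D[i, i] = p
                U_new[i] = U[i] / p         # leaves a 1 in the pivot position
        return D, U_new                     # now U = D @ U_new

    U = np.array([[1.0, 0, 0, 2, 1, 3],
                  [0, 0, 5, 2, 12, 5],
                  [0, 0, 0, 0.8, 0.8, 0]])  # the U of Example 2 (4/5 = 0.8)
    D, U_new = factor_out_pivots(U)
    assert np.allclose(D @ U_new, U)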

E œ "!! # " #! # ) œ "!! # " # "!!! & # "# & Õ! " Õ # " Õ " "!!!! & & & P Y Ð old Y Ñ "!! # " œ # "! 0 5! # "#!! " & & " Õ # " Õ " 0 0 Õ!!! "! & & P H Y Ð new Y Ñ A Useful Related Decomposition Sometimes row interchanges are simply unavoidable to reduce E to echelon form. Other times, even when row interchanges can be avoided, they are desirable in computer computations to reduce roundoff errors (see the Numerical Note about partial pivoting in the text on p. 17). So what happens if row interchanges are used? We can still get a factorization similar to E œ PY and every bit as useful for solving equations. Here's how. Note: we will again not allow rescaling of rows; but the preceding example suggests a way you could bring those into the picture if you wanted to. 1) Look at how you will use to row reduce E to an echelon form Y and note what row interchanges will be used. Then go back to the beginning and rearrange rows in E: do all these row interchanges first. The result is rearranged E. Look at this in more detail: Suppose the row interchanges we use are represented by the elementary matrices I" ßÞÞÞßI5Þ Then the rearranged E is just I 5 ÞÞÞ I" E œ TE, where T œ I 5 ÞÞÞ I". Since T œ I 5 ÞÞÞ I" œ I 5 ÞÞÞ I" M we see that T is just the final result of when the same row interchanges are performed on M that is, T is just M with some rows rearranged. T is called a permutation matrix; the multiplication TE rearranges ( permutes ) the rows of E in exactly the same way that the rows of M were permuted to create T. A permutation matrix T is the same as a square matrix with exactly one " in each row and exactly one " in each column. Why? T is a product of invertible (elementary) matrices, so T is invertible and T œ I " ÞÞÞ I: is also a permutation matrix (why? what does each I3 do to M?). (If you know the steps to write down T, it's just as easy to actually write down the matrix X T. Or, perhaps you can convince yourself that T œ T Þ) Since T TE œ E, multiplication by T rearranges the rows of TE to restore the original matrix E.

2) We now have PA (the "rearranged A"), which can be row reduced to an echelon form without using any further row interchanges. Therefore we can use our previous method to get an LU decomposition of PA:

      PA = LU

3) Then A = (P^(-1) L) U. Here, P^(-1) L is a "permuted lower triangular" matrix, meaning a lower triangular matrix with some rows interchanged. MATLAB's help gives P^(-1) L the cute name "psychologically lower triangular" matrix, a term never used anywhere else to my knowledge.

This factorization is just as useful for equation solving as the LU decomposition. For example, we can solve Ax = (P^(-1) L) U x = b by first substituting y = Ux. It's still easy to solve (P^(-1) L) y = b, because P^(-1) L is psychologically lower triangular (see the illustration below); and then, knowing y, it's easy to solve Ux = y, because U is in echelon form.

For example, suppose (P^(-1) L) y = b is the equation

      [ 2  1  0  0 ] [ y1 ]     [ 0 ]
      [ 1  0  0  0 ] [ y2 ]  =  [ 1 ]
      [ 2  2  1  1 ] [ y3 ]     [ 1 ]
      [ 1  1  1  0 ] [ y4 ]     [ 0 ]
      (a permuted lower triangular matrix)

We can simply imitate forward substitution, but solve using the rows in the order that would (rearranged) make a lower triangular matrix. That is:

      y1 = 1,                              from the second row; then
      y2 = 0 - 2 y1 = -2,                  from the first row; then
      y3 = 0 - y1 - y2 = 1,                from the fourth row; then
      y4 = 1 - 2 y1 - 2 y2 - y3 = 2,       from the third row.

(In some books or applications, when you see "A = LU" it might even be assumed that L is permuted lower triangular rather than lower triangular. Be careful when reading other books to check what the author means.)

OR, to solve Ax = (P^(-1) L) U x = b, you can reason as follows. The equation PAx = Pb has exactly the same solutions as Ax = b, because: if PAx = Pb is true, then multiplying both sides by P^(-1) shows that x is a solution of Ax = b as well, and vice versa. If you think in terms of writing out the linear systems of equations: PAx = Pb is the same system of equations as Ax = b, but with the equations listed in some other, interchanged order determined by the permutation matrix P. Since we have an LU decomposition of PA, solving PAx = LUx = Pb for x is easy.
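(In code, the whole PA = LU solving scheme is a few lines. An editor-added SciPy sketch for the square invertible case; solve_triangular is SciPy's triangular solver, and the function name here is the editor's own, not from the notes.)

    import numpy as np
    from scipy.linalg import solve_triangular

    def solve_via_palu(P, L, U, b):
        # PA = LU, so Ax = b becomes L U x = P b:
        y = solve_triangular(L, P @ b, lower=True, unit_diagonal=True)  # L y = P b
        return solve_triangular(U, y, lower=False)                      # U x = y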

" " # " " " Example 3 Let E œ Ö ÙÞ " % % Õ! " " 1) On scratch paper, do enough steps row reducing E to echelon form ( no row rescaling allowed, and, following standard procedure, only adding multiples of rows to lower rows ) to see what row interchanges, if any, are necessary. It turns out that only one, interchanging rows 2 and 4 at a certain stage, will be enough. 2) So go back to the start and perform that row interchange first, creating " " #!! "! " TE œ Ö Ù, where T œ Ö Ù " % %!! "! Õ" " " Õ! 3) Row reduce TE to an echelon form Y, keeping track of the EROs used: " " # " " # " " # " " #! " " Ð"Ñ! " " Ð#Ñ! " " ÐÑ! " " Ö Ù µ Ö Ù µ Ö Ù µ Ö Ù " % %! 2! 2!! 1 Õ" " " Õ" " " Õ!! Õ!! " " # Ð%Ñ! " " µ Ö Ù œ Y!! 1 Õ!!! The row operations were (1) add row(1) to row 3 (2) add row(1) to row 4 (3) add 3 row(2) to row Ð%Ñ add 1*row(3) to row 4 Using the method described earlier to find P more efficiently we can write down P as we row reduce TE À 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 * 1 0 0 * 1 0 0 * 1 0 0 * 1 0 0 Ö Ù Ä Ö Ù Ä Ö Ù Ä Ö Ù * * 1 0 " * 1 0 " * 1 0 " 1 0 Õ * * * 1 Õ * * * 1 Õ" * * 1 Õ" 1

      ->  [ 1  0  0  0 ]      [ 1  0  0  0 ]
          [ *  1  0  0 ]  ->  [ 0  1  0  0 ]  =  L
          [ 1  3  1  0 ]      [ 1  3  1  0 ]
          [ 1  * -1  1 ]      [ 1  0 -1  1 ]

Then PA = LU:

      [ 1  1  2 ]     [ 1  0  0  0 ] [ 1  1  2 ]
      [ 0  1  1 ]  =  [ 0  1  0  0 ] [ 0  1  1 ]
      [ 1  4  4 ]     [ 1  3  1  0 ] [ 0  0 -1 ]
      [ 1  1  3 ]     [ 1  0 -1  1 ] [ 0  0  0 ]

To solve

      A x = [ 2 ]
            [ 3 ]
            [ 7 ]
            [ 2 ]

write PAx = LUx = Pb = [2, 2, 7, 3]^T (rows 2 and 4 of b interchanged, just as was done for A earlier). We then solve LUx = [2, 2, 7, 3]^T in the same way as before.

For those who use MATLAB

MATLAB has a command [L, U, P] = lu(A) ("lu" is lowercase in MATLAB). For beginners, anyway, I recommend using it only when A is square. It should give you:

      1) a square unit lower triangular L,
      2) U in echelon form, and
      3) a permutation matrix P for which PA = LU.

If it's possible simply to factor A = LU, then MATLAB might just give P = I, the identity matrix. However, sometimes MATLAB will permute some rows even when it's not mathematically necessary, because doing so, in large complicated problems, helps reduce roundoff errors.

If A is not square, then MATLAB still gives a result for which PA = LU is true, but the shapes of the matrices may vary from our discussion.
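(To close the example: an editor-added check, verifying the hand computation above and showing SciPy's counterpart of MATLAB's lu. Note the convention difference: scipy.linalg.lu returns factors with A = P L U, whereas MATLAB's [L, U, P] = lu(A) satisfies PA = LU.)

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[1.0, 1, 2], [1, 1, 3], [1, 4, 4], [0, 1, 1]])
    P = np.eye(4)[[0, 3, 2, 1]]                   # rows 2 and 4 of I interchanged
    L = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [1, 3, 1, 0], [1, 0, -1, 1]])
    U = np.array([[1.0, 1, 2], [0, 1, 1], [0, 0, -1], [0, 0, 0]])
    assert np.allclose(P @ A, L @ U)              # Example 3, verified

    P2, L2, U2 = lu(A)                            # SciPy: A = P2 @ L2 @ U2
    assert np.allclose(P2 @ L2 @ U2, A)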

Here's a small-scale illustration of another way in which the LU decomposition can be helpful in large, applied computations.

A band matrix is a square n x n matrix in which all the nonzero entries lie in a "band" that surrounds the main diagonal. For example, in the following matrix all nonzero entries are in a band that consists of one superdiagonal above the main diagonal and two subdiagonals below it. The number of superdiagonals in the band is called the upper bandwidth (1, in the example below), and the number of subdiagonals in the band is the lower bandwidth (2, in the example). The total number of diagonals in the band, (upper bandwidth) + (lower bandwidth) + 1, is called the bandwidth (4, in the example):

      [ 1  1  0  0  0 ]
      [ 2  1  4  0  0 ]
      [ 3  7  2  1  0 ]      (and all other entries = 0)
      [ 0  3  1  1  1 ]
      [ 0  0  1  2  4 ]

A diagonal matrix such as

      [ 3  0  0 ]
      [ 0  2  0 ]
      [ 0  0  6 ]

is an extreme example, for which bandwidth = 1.

Use of the term "band matrix" is a bit subjective: in fact, we could think of any matrix as a band matrix; if, say, all its entries are nonzero, then the matrix is just one that has the largest possible bandwidth. In practice, however, the term "band matrix" is used for a matrix in which the bandwidth is relatively small compared to the size of the matrix. Such a matrix is therefore sparse: a relatively large number of its entries are 0.

Look at the simple problem about steady-state heat flow in Exercise 33 of Section 1.1 in the text; a more complicated version of the problem occurs in Exercise 31 of Section 2.5.
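(For concreteness, an editor-written helper that measures the bandwidths of the 5 x 5 example above; nothing here is from the text.)

    import numpy as np

    def bandwidths(A):
        rows, cols = np.nonzero(A)
        upper = int((cols - rows).max(initial=0))   # farthest nonzero superdiagonal
        lower = int((rows - cols).max(initial=0))   # farthest nonzero subdiagonal
        return upper, lower, upper + lower + 1

    A = np.array([[1, 1, 0, 0, 0],
                  [2, 1, 4, 0, 0],
                  [3, 7, 2, 1, 0],
                  [0, 3, 1, 1, 1],
                  [0, 0, 1, 2, 4]])
    print(bandwidths(A))    # -> (1, 2, 4)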

Given the temperatures at the nodes around the edge of the plate, we want to approximate the temperatures x1, ..., x8 at the interior nodes numbered 1, ..., 8. As in Exercise 33 of Section 1.1, this leads to an equation Ax = b with a band matrix A:

      [  4 -1 -1  0  0  0  0  0 ]        [  5 ]
      [ -1  4  0 -1  0  0  0  0 ]        [ 15 ]
      [ -1  0  4 -1 -1  0  0  0 ]        [  0 ]
      [  0 -1 -1  4  0 -1  0  0 ]  x  =  [ 10 ]
      [  0  0 -1  0  4 -1 -1  0 ]        [  0 ]
      [  0  0  0 -1 -1  4  0 -1 ]        [ 10 ]
      [  0  0  0  0 -1  0  4 -1 ]        [ 20 ]
      [  0  0  0  0  0 -1 -1  4 ]        [ 30 ]

If we can find an LU decomposition of A, then L and U will also be band matrices (why? think about how you find L and U from A). Using the MATLAB command [L, U, P] = lu(A) gives an LU decomposition. Rounded to 4 decimal places, we get

  L = [  1.0000       0        0        0        0        0        0        0    ]
      [ -0.2500    1.0000      0        0        0        0        0        0    ]
      [ -0.2500   -0.0667   1.0000      0        0        0        0        0    ]
      [    0      -0.2667  -0.2857   1.0000      0        0        0        0    ]
      [    0         0     -0.2679  -0.0833   1.0000      0        0        0    ]
      [    0         0        0     -0.2917  -0.2921   1.0000      0        0    ]
      [    0         0        0        0     -0.2697  -0.0861   1.0000      0    ]
      [    0         0        0        0        0     -0.2948  -0.2931   1.0000  ]

  U = [ 4.0000   -1.0000  -1.0000     0        0        0        0        0      ]
      [   0       3.7500  -0.2500  -1.0000     0        0        0        0      ]
      [   0         0      3.7333  -1.0667  -1.0000     0        0        0      ]
      [   0         0        0      3.4286  -0.2857  -1.0000     0        0      ]
      [   0         0        0        0      3.7083  -1.0833  -1.0000     0      ]
      [   0         0        0        0        0      3.3919  -0.2921  -1.0000   ]
      [   0         0        0        0        0        0      3.7052  -1.0861   ]
      [   0         0        0        0        0        0        0      3.3868   ]

The LU decomposition can be used to solve Ax = b, as we showed earlier in these notes. The important thing in this example is not actually finding the solution, which turns out to be

      x = [  3.9569 ]
          [  6.5885 ]
          [  4.2392 ]
          [  7.3971 ]
          [  5.6029 ]
          [  8.7608 ]
          [  9.4115 ]
          [ 12.0431 ]

What IS interesting and useful to notice:

      • From earlier in these notes: if we need to solve Ax = b for many different vectors b, then factoring A = LU once saves time over and over again.

      • A new observation: for a very large n x n sparse band matrix A (perhaps thousands of rows and columns, but with a relatively narrow band of nonzero entries), we can store A in much less computer space than is needed for a general n x n matrix. We only need to store the entries in the band, plus two numbers for the upper and lower bandwidths; there is no need to waste storage space on all the other matrix entries, which are known to be 0's. Since L and U are also sparse band matrices, they too are relatively economical to store.

      • Using A, L and U can let us avoid computations that use A^(-1). Even when A is a sparse band matrix, A^(-1) may have no band structure at all and take up far more storage space than A, L and U. For instance, in the preceding example,

  A^(-1) =
      [ 0.2953  0.0866  0.0945  0.0509  0.0318  0.0227  0.0100  0.0082 ]
      [ 0.0866  0.2953  0.0509  0.0945  0.0227  0.0318  0.0082  0.0100 ]
      [ 0.0945  0.0509  0.3271  0.1093  0.1045  0.0591  0.0318  0.0227 ]
      [ 0.0509  0.0945  0.1093  0.3271  0.0591  0.1045  0.0227  0.0318 ]
      [ 0.0318  0.0227  0.1045  0.0591  0.3271  0.1093  0.0945  0.0509 ]
      [ 0.0227  0.0318  0.0591  0.1045  0.1093  0.3271  0.0509  0.0945 ]
      [ 0.0100  0.0082  0.0318  0.0227  0.0945  0.0509  0.2953  0.0866 ]
      [ 0.0082  0.0100  0.0227  0.0318  0.0509  0.0945  0.0866  0.2953 ]

which has no zero entries at all.
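(Finally, an editor-added sketch reproducing this computation in SciPy, under the stated reconstruction of the plate problem: nodes numbered down the columns of the 2 x 4 interior grid. It reproduces the solution above and confirms that the explicit inverse is completely dense.)

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = 4.0 * np.eye(8)
    for i, j in [(0, 1), (2, 3), (4, 5), (6, 7),                   # vertical neighbors
                 (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7)]:  # horizontal neighbors
        A[i, j] = A[j, i] = -1.0
    b = np.array([5.0, 15, 0, 10, 0, 10, 20, 30])

    x = lu_solve(lu_factor(A), b)
    print(np.round(x, 4))
    # approximately [3.9569 6.5885 4.2392 7.3971 5.6029 8.7608 9.4115 12.0431]

    print(np.count_nonzero(np.linalg.inv(A)))   # 64: every entry of A^(-1) is nonzero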