LU Decomposition: Factoring A = LU


Motivation

To solve Ax = b, we can row reduce the augmented matrix

    [ a1 a2 ... ai ... an | b ]  ~  ...  ~  [ c1 c2 ... ci ... cn | d ]

When we get to [ c1 c2 ... ci ... cn | d ] in an echelon form (this is the "forward" part of the row reduction process), then we can either

  i) use back substitution to solve for x, or

  ii) continue on with the second ("backward") part of the row reduction process until [ c1 c2 ... ci ... cn | d ] is in reduced row echelon form, at which point it's easy to read off the solutions of Ax = b.

A computer might take the first approach; the second might be better for hand calculations. But both i) and ii) take about the same number of arithmetic operations (see the first paragraph about back substitution in Sec. 1.2 (p. 19) of the textbook).

Whether we use i) or ii) to solve Ax = b, it is sometimes necessary in applications to solve a large system Ax = b many times for the same A but with b changing each time: perhaps a system with millions of variables, solved thousands of times! For example, read the description of the aircraft design problem at the beginning of Chapter 2. It's too inefficient to row reduce [ a1 a2 ... an | b ] each time, even if a computer is doing the work.

The general idea: the same EROs would be repeated to row reduce the coefficient matrix [ a1 a2 ... an ] to an echelon form U each time b is changed. To avoid the repetitions, these necessary row operations can be recorded in a matrix. This results in a factorization of A into the form A = ZU. Sometimes we can choose the EROs so that Z is a unit lower triangular matrix and, in that case, we will rename the matrix Z as L (for "lower"). The result is called an LU decomposition or LU factorization of A.

Terminology: A lower triangular matrix is a square matrix that has only 0 entries above the main diagonal, for example

    [ a  0  0 ]
    [ b  c  0 ]
    [ d  e  f ]

The matrix is unit lower triangular if it's lower triangular and has only 1's on the diagonal, for example

    [ 1  0  0 ]
    [ b  1  0 ]
    [ d  e  1 ]

The definitions for upper triangular and unit upper triangular are the same, except that the words "above the main diagonal" are replaced by "below the main diagonal."

Here's the more formal definition:

The LU Decomposition

An LU decomposition of an m x n matrix A is a factorization A = LU where

    U is an echelon form of A, and
    L is a square unit lower triangular matrix.

A couple of observations:

• U is an echelon form of A, so U must be the same shape as A: m x n.

• L must be square, m x m, so that all the matrices in A = LU have the right sizes. And since L is unit lower triangular, L must be invertible (why? how many pivot positions are there in the rref of L? Look at the examples above).

• If A and U happen to be square, then U, because it's an echelon form, will automatically be upper triangular. But even when U is not square, U has only 0's below the leading entry in each row, so U still resembles an upper triangular matrix.

It's not always possible to find an LU decomposition for A. But when it is possible, the number of steps needed to find L and U is not much different from the number of steps required to solve Ax = b (one time!) by row reduction. And if we can factor A in this way, then it takes relatively few additional steps to solve Ax = b for x each time a new b is chosen.

Why? If we can write Ax = LUx = b, then we can substitute Ux = y. Then the original matrix equation Ax = b is replaced by two new ones:

    Ly = b
    Ux = y          (*)

This is an improvement (as Example 1 will show) because:

  i) it's easy to solve Ly = b for y, because L is lower triangular;

  ii) after finding y, solving Ux = y is also easy, because U is already in echelon form; and

  iii) L and U remain the same whenever we change b.

Look at the details in the following example.

Example 1

Suppose Ax = b is the equation

    [ 1  -2   2 ]       [  1 ]
    [ 2  -1   0 ] x  =  [ -1 ]
    [ 0   4   2 ]       [ -1 ]

Here is an LU factorization of A (don't worry for now about where it came from):

    [ 1  -2   2 ]   [ 1   0   0 ] [ 1  -2    2   ]
    [ 2  -1   0 ] = [ 2   1   0 ] [ 0   3   -4   ]
    [ 0   4   2 ]   [ 0  4/3  1 ] [ 0   0   22/3 ]

To solve Ax = LUx = b:

  i) Substitute Ux = y to get the two equations (*):

    Ly = b:     [ 1   0   0 ]       [  1 ]
                [ 2   1   0 ] y  =  [ -1 ]
                [ 0  4/3  1 ]       [ -1 ]

    y = Ux:         [ 1  -2    2   ]
                y = [ 0   3   -4   ] x
                    [ 0   0   22/3 ]

  ii) First solve for y. To do this, we could further reduce [ L | b ] to reduced row echelon form, or we can use forward substitution: just like back substitution, but starting with the top equation, because that's where the simplest equation is. Either way we need only a few steps to get y, because L is lower triangular. Using forward substitution:

    y1 = 1
    y2 = -1 - 2y1 = -3
    y3 = -1 - (4/3)y2 = 3

so y = [ 1; -3; 3 ].

  iii) Substitute y into the second equation to get

    [ 1  -2    2   ]       [  1 ]
    [ 0   3   -4   ] x  =  [ -3 ]
    [ 0   0   22/3 ]       [  3 ]

which we can quickly solve with back substitution or further row reduction. Using back substitution we start at the bottom and work up:

    x3 = (3/22)(3) = 9/22
    x2 = (-3 + 4x3)/3 = -5/11
    x1 = 1 + 2x2 - 2x3 = -8/11

So

    x = [ -8/11; -5/11; 9/22 ].
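The two triangular solves are easy to carry out numerically. Here is a small NumPy/SciPy sketch (an illustration of mine, not part of the original notes) that checks the factorization above and then solves Ax = b by forward and back substitution; scipy.linalg.solve_triangular does the substitution for us.

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1., -2., 2.],
              [2., -1., 0.],
              [0.,  4., 2.]])
L = np.array([[1.,    0., 0.],
              [2.,    1., 0.],
              [0., 4./3., 1.]])
U = np.array([[1., -2.,  2.],
              [0.,  3., -4.],
              [0.,  0., 22./3.]])
b = np.array([1., -1., -1.])

assert np.allclose(L @ U, A)             # confirm A = LU

y = solve_triangular(L, b, lower=True)   # forward substitution: Ly = b
x = solve_triangular(U, y, lower=False)  # back substitution:    Ux = y

print(y)   # [ 1. -3.  3.]
print(x)   # [-8/11, -5/11, 9/22], i.e. about [-0.7273 -0.4545  0.4091]
assert np.allclose(A @ x, b)
```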

Example 2

For the matrix A of Example 1, how could you actually find the LU decomposition?

First, row reduce A to an echelon form U. (It turns out that we need to be careful about what EROs we use during the row reduction, but for now just follow along; the discussion that comes later will explain what caution is necessary.) Each elementary row operation that we use corresponds to left multiplication by an elementary matrix Ei, and we record those matrices at each step:

        [ 1  -2   2 ]     [ 1  -2   2 ]     [ 1  -2    2   ]
    A = [ 2  -1   0 ]  ~  [ 0   3  -4 ]  ~  [ 0   3   -4   ]  =  U (an echelon form)
        [ 0   4   2 ]     [ 0   4   2 ]     [ 0   0   22/3 ]

where the two steps correspond to

    E1 = [  1  0  0 ]          E2 = [ 1   0    0 ]
         [ -2  1  0 ]               [ 0   1    0 ]
         [  0  0  1 ]               [ 0  -4/3  1 ]

so we get E2 E1 A = U. Then

    A = (E2 E1)^(-1) U = E1^(-1) E2^(-1) U

with

    E1^(-1) = [ 1  0  0 ]      E2^(-1) = [ 1   0   0 ]
              [ 2  1  0 ]                [ 0   1   0 ]
              [ 0  0  1 ]                [ 0  4/3  1 ]

so

    A = E1^(-1) E2^(-1) U = [ 1   0   0 ] [ 1  -2    2   ]
                            [ 2   1   0 ] [ 0   3   -4   ]
                            [ 0  4/3  1 ] [ 0   0   22/3 ]

In this example, E1^(-1) E2^(-1) turns out to be unit lower triangular, so we're done: let L = E1^(-1) E2^(-1), and A = LU is the decomposition we want.

Is it just luck that E1^(-1) E2^(-1) turned out to be unit lower triangular? No, it's because of the particular steps we used in the row reduction: see the discussion below.
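A quick NumPy check (again an illustration, not from the notes) that the elementary matrices above behave as claimed:

```python
import numpy as np

A = np.array([[1., -2., 2.],
              [2., -1., 0.],
              [0.,  4., 2.]])

# E1: add -2*(row 1) to row 2;  E2: add -4/3*(row 2) to row 3
E1 = np.eye(3); E1[1, 0] = -2.
E2 = np.eye(3); E2[2, 1] = -4./3.

U = E2 @ E1 @ A                           # the echelon form reached by the two EROs
L = np.linalg.inv(E1) @ np.linalg.inv(E2)

print(U)   # upper triangular, last pivot 22/3
print(L)   # unit lower triangular: [[1,0,0],[2,1,0],[0,4/3,1]]
assert np.allclose(L @ U, A)
```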

Example 1 continued: same coefficient matrix A but a different b

Now that A = LU is factored, we can solve another equation

    Ax = [ 1; 2; 0 ]

easily. (Here only b has changed!)

    Ly = b:     [ 1   0   0 ]       [ 1 ]
                [ 2   1   0 ] y  =  [ 2 ]
                [ 0  4/3  1 ]       [ 0 ]

    Ux = y:     [ 1  -2    2   ]
                [ 0   3   -4   ] x = y
                [ 0   0   22/3 ]

By forward substitution

    y1 = 1
    y2 = 2 - 2(1) = 0
    y3 = 0 - (4/3)(0) = 0

so y = [ 1; 0; 0 ]. Then the second equation becomes

    [ 1  -2    2   ]       [ 1 ]
    [ 0   3   -4   ] x  =  [ 0 ]
    [ 0   0   22/3 ]       [ 0 ]

and, by back substitution, x3 = 0; x2 = (0 + 4(0))/3 = 0; x1 = 1 + 2(0) - 2(0) = 1, so x = [ 1; 0; 0 ].

When does the method used in Example 2 produce an LU decomposition?

We can always perform the steps illustrated in Example 2 to get a factorization A = ZU, where U is an echelon form of A and Z is a product of elementary matrices (in Example 2, Z = E1^(-1) E2^(-1)). But Z won't always turn out to be lower triangular (so Z might not deserve the name L). When does the method work?

Suppose (as in Example 2) that the only type of ERO used to row reduce A is "add a multiple of a row to a lower row." Let these EROs correspond to the elementary matrices E1, ..., Ep. Then Ep ... E1 A = U and therefore A = E1^(-1) ... Ep^(-1) U.

Each matrix Ei and its inverse Ei^(-1) will be unit lower triangular (why?). Then L = E1^(-1) ... Ep^(-1) is also lower triangular with 1's on its diagonal. (Convince yourself that a product of unit lower triangular matrices is a unit lower triangular matrix.)

Note that

  i) if a row rescaling ERO had been used, then the corresponding elementary matrix would not have only 1's on the diagonal, and therefore the product E1^(-1) ... Ep^(-1) might not be unit lower triangular;
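This reuse of the factorization is exactly what library LU routines are built for. A hedged SciPy sketch (mine, not the notes'): factor once with lu_factor, then call lu_solve for each new right-hand side.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1., -2., 2.],
              [2., -1., 0.],
              [0.,  4., 2.]])

lu, piv = lu_factor(A)   # one O(n^3) factorization (with partial pivoting)

for b in ([1., -1., -1.], [1., 2., 0.]):
    x = lu_solve((lu, piv), np.array(b))   # each extra solve is only O(n^2)
    print(x)
# first b gives [-8/11, -5/11, 9/22]; the second gives [1, 0, 0]
```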

  ii) if a row interchange ("swap") ERO had been used, then the corresponding elementary matrix would not be lower triangular, and therefore the product E1^(-1) ... Ep^(-1) might not be lower triangular.

To summarize: the method illustrated in Example 2 always produces an LU decomposition for A if the only EROs used to row reduce A to U are of the form

    "add a multiple of a row to a lower row."          (**)

In that case, if the EROs used correspond to multiplication by elementary matrices E1, ..., Ep, then Ep ... E1 A = U is in echelon form and

    A = LU, where L = (Ep ... E1)^(-1) = E1^(-1) ... Ep^(-1)          (***)

is unit lower triangular.

The restriction in (**) sounds very restrictive until you stop to think about it:

• When you do row reduction systematically, according to the standard steps as outlined in the text, you never add a multiple of a row to a higher row. The reason for using a row replacement ERO is to create 0's below a pivot. In other words, adding multiples of rows only to lower rows doesn't actually constrain you at all: it's just standard procedure in row reductions.

• "No row rescaling" can be an arithmetic inconvenience, but nothing more than an inconvenience. When row reducing by hand, rescaling a row may be helpful to simplify arithmetic, but rescaling is never necessary to get to an echelon form of A. (Of course, row rescaling might be needed to proceed on to the reduced row echelon form, because rescaling may be necessary to get 1's in the pivot positions.)

Side comment: if we did allow row rescaling, it would only mean that we could end up with a lower triangular L in which some diagonal entries are not 1's. From the point of view of solving Ax = b, this would not be a disaster. However, the text follows the rule that L should have only 1's on its diagonal, so we'll stick to that convention.

But (**) really does impose some restriction on us: for example,

    A = [ 0  2  0 ]
        [ 2  0  0 ]
        [ 0  0  2 ]

simply cannot be reduced to echelon form using only (**): a row interchange is necessary. Here the method of Example 2 simply will not work to produce an LU decomposition. However (see the last section of the online version of these notes), there is a work-around that is almost as good as an LU decomposition in such situations.
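For a matrix like this, library routines quietly fall back on the permuted factorization described at the end of these notes. A SciPy illustration (my sketch; note SciPy's convention is A = P L U):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 2., 0.],
              [2., 0., 0.],
              [0., 0., 2.]])

P, L, U = lu(A)      # SciPy's convention: A = P @ L @ U
print(P)             # a genuine row interchange shows up in P
print(L)             # unit lower triangular
print(U)             # echelon form
assert np.allclose(P @ L @ U, A)
```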

To find L more efficiently

If A is large, then row reducing A to echelon form might involve a very large number of elementary matrices Ei in the formula (***), so that using this formula to find L is too much work. Fortunately, there is a technique, illustrated in Example 3, to write down L, entry by entry, as you row reduce A. (The text gives a slightly different presentation, and offers an explanation of why it works. It's really nothing more than careful bookkeeping. Since these notes are just a survey about LU decompositions, we will only write down the method.)

Assume A is m x n and that it can be row reduced to U following the rule (**). Then L will be a square m x m unit lower triangular matrix. You can write down L using these steps (a runnable sketch of the recipe appears after the example below):

• Write down an "empty" m x m matrix L.

• Put 1's on the diagonal of L and 0's in each position above the diagonal.

• Whenever you perform a row operation "add c times row i to a lower row j," put -c in the (j,i) position of L. Careful: notice the sign change on c, and put -c in the (j,i) position of L, not in the (i,j) position.

• When A has been reduced to echelon form, stop. If there are any remaining empty positions in L, fill them with 0's.

If we apply these steps while row reducing in Example 2, here's what we'd get:

• A was 3 x 3, so start with

    L = [ 1  0  0 ]
        [ *  1  0 ]
        [ *  *  1 ]

• The row operations we used in Example 2 were:

    add -2 (row 1) to row 2, so put 2 in the (2,1) position of L: that is, L21 = 2

    add -4/3 (row 2) to row 3, so put 4/3 in the (3,2) position of L: that is, L32 = 4/3

  Now we have

    L = [ 1   0   0 ]
        [ 2   1   0 ]
        [ *  4/3  1 ]

• Fill in the rest of L with 0's, to get

    L = [ 1   0   0 ]
        [ 2   1   0 ]
        [ 0  4/3  1 ]

the same as we got by multiplying elementary matrices.
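The bookkeeping above is easy to mechanize. Here is a short Python sketch of the recipe (my illustration, with the hypothetical name lu_no_pivot); it assumes the rule (**) suffices, i.e. that a nonzero pivot is always available without row swaps.

```python
import numpy as np

def lu_no_pivot(A):
    """Return (L, U) with A = L U, using only 'add a multiple of a row
    to a lower row' EROs. Raises if a row interchange would be needed."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    U = A.copy()
    L = np.eye(m)
    row = 0
    for col in range(n):
        if row >= m:
            break
        if U[row, col] == 0:
            if np.any(U[row+1:, col] != 0):
                raise ValueError("row interchange needed: no LU under (**)")
            continue                      # no pivot in this column; move right
        for j in range(row + 1, m):
            c = -U[j, col] / U[row, col]  # ERO: add c*(pivot row) to row j
            U[j, :] += c * U[row, :]
            L[j, row] = -c                # bookkeeping: record -c in L
        row += 1
    return L, U

A = np.array([[1., -2., 2.], [2., -1., 0.], [0., 4., 2.]])
L, U = lu_no_pivot(A)
print(L)            # [[1,0,0],[2,1,0],[0,4/3,1]]
print(U)            # [[1,-2,2],[0,3,-4],[0,0,22/3]]
assert np.allclose(L @ U, A)
```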

Example 3

In this example, A is a little larger and not square. But otherwise, the calculations are parallel to the ones we did earlier. If possible, find an LU factorization of

    A = [ 1  0   0  2   1  3 ]
        [ 2  0   5  4  12  1 ]
        [ 1  0  -2  3  -2  4 ]

and use it to solve Ax = b = [ 1; 2; -1 ].

    A  ~  [ 1  0   0  2   1   3 ]  ~  [ 1  0   0  2   1   3 ]  ~  [ 1  0  0  2   1   3 ]
          [ 0  0   5  0  10  -5 ]     [ 0  0   5  0  10  -5 ]     [ 0  0  5  0  10  -5 ]  =  U
          [ 1  0  -2  3  -2   4 ]     [ 0  0  -2  1  -3   1 ]     [ 0  0  0  1   1  -1 ]

U is in echelon form, and we used only EROs of type (**). To find L, which must be 3 x 3, start with

    L = [ 1  0  0 ]
        [ *  1  0 ]
        [ *  *  1 ]

The row operations we used were

    add -2 (row 1) to (row 2)
    add -1 (row 1) to (row 3)
    add 2/5 (row 2) to (row 3)

so

    L = [ 1    0   0 ]
        [ 2    1   0 ]
        [ 1  -2/5  1 ]

Then

    A = LU = [ 1    0   0 ] [ 1  0  0  2   1   3 ]
             [ 2    1   0 ] [ 0  0  5  0  10  -5 ]
             [ 1  -2/5  1 ] [ 0  0  0  1   1  -1 ]

We can solve Ax = b = [ 1; 2; -1 ] by writing

    Ly = b:     [ 1    0   0 ]       [  1 ]
                [ 2    1   0 ] y  =  [  2 ]
                [ 1  -2/5  1 ]       [ -1 ]

    y = Ux:     [ 1  0  0  2   1   3 ]
                [ 0  0  5  0  10  -5 ] x = y
                [ 0  0  0  1   1  -1 ]

The first equation gives

    y1 = 1
    y2 = 2 - 2y1 = 0
    y3 = -1 - y1 + (2/5)y2 = -2

so y = [ 1; 0; -2 ], and then the second equation becomes

    [ 1  0  0  2   1   3 ]       [  1 ]
    [ 0  0  5  0  10  -5 ] x  =  [  0 ]
    [ 0  0  0  1   1  -1 ]       [ -2 ]

The pivots lie in columns 1, 3 and 4, so x2, x5 and x6 are free. Back substitution gives

    x4 = -2 - x5 + x6
    x3 = -2x5 + x6
    x1 = 1 - 2x4 - x5 - 3x6 = 5 + x5 - 5x6

So the solution is

        [ x1 ]   [ 5 + x5 - 5x6 ]   [  5 ]      [ 0 ]      [  1 ]      [ -5 ]
        [ x2 ]   [      x2      ]   [  0 ]      [ 1 ]      [  0 ]      [  0 ]
    x = [ x3 ] = [  -2x5 + x6   ] = [  0 ] + x2 [ 0 ] + x5 [ -2 ] + x6 [  1 ]
        [ x4 ]   [ -2 - x5 + x6 ]   [ -2 ]      [ 0 ]      [ -1 ]      [  1 ]
        [ x5 ]   [      x5      ]   [  0 ]      [ 0 ]      [  1 ]      [  0 ]
        [ x6 ]   [      x6      ]   [  0 ]      [ 0 ]      [  0 ]      [  1 ]
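A quick check of this rectangular factorization (my sketch, using the entries above):

```python
import numpy as np

L = np.array([[1.,    0., 0.],
              [2.,    1., 0.],
              [1., -2./5., 1.]])
U = np.array([[1., 0., 0., 2.,  1.,  3.],
              [0., 0., 5., 0., 10., -5.],
              [0., 0., 0., 1.,  1., -1.]])
A = L @ U
b = np.array([1., 2., -1.])

# particular solution from the worked example (free variables set to 0)
x = np.array([5., 0., 0., -2., 0., 0.])
assert np.allclose(A @ x, b)

# any choice of the free variables x2, x5, x6 also works:
x2, x5, x6 = 7., -1., 2.
x = np.array([5. + x5 - 5.*x6, x2, -2.*x5 + x6, -2. - x5 + x6, x5, x6])
assert np.allclose(A @ x, b)
print("both solutions check out")
```

MATERIAL BEYOND THIS POINT IN THESE NOTES IS OPTIONAL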

A Useful Related Decomposition

Sometimes row interchanges are simply unavoidable in reducing A to echelon form. Other times, even when row interchanges could be avoided, they are desirable in computer computations to reduce roundoff errors (see the Numerical Note about partial pivoting in the textbook, p. 17). What happens if row interchanges are used? We can still get a factorization that resembles A = LU and is every bit as useful for solving equations. Here's how. (Note: as before, all row replacement operations follow the rule (**), and we do not allow rescaling of rows.)

1) Look ahead (on scratch paper!) at what you need to do to reduce A to an echelon form, using (**) and, if necessary, row interchanges. Make note of what row interchanges will be used. Then go back to the beginning and rearrange the rows in A, that is, do all these row interchanges first. The result is "A rearranged."

Let's look at this in more detail. Suppose the row interchanges we use are represented by the elementary matrices E1, ..., Ek. Then the rearranged A is just Ek ... E1 A = PA, where P = Ek ... E1. Since P = Ek ... E1 = Ek ... E1 I, we see that P is just the final result when the same row interchanges are performed on I; that is, P is just I with some rows rearranged. P is called a permutation matrix; the multiplication PA rearranges ("permutes") the rows of A in exactly the same way that the rows of I were permuted to create P. A permutation matrix P is the same thing as a square matrix with exactly one 1 in each row and exactly one 1 in each column. (Why?)

P is a product of invertible (elementary) matrices, so P is invertible, and P^(-1) = E1^(-1) ... Ek^(-1) is also a permutation matrix (why? what does each Ei^(-1) do to I?). (If you know the steps used to write down P, it's just as easy to actually write down the matrix P^(-1). Or, perhaps you can convince yourself that P^(-1) = P^T.) Since P^(-1) P A = A, multiplication by P^(-1) rearranges the rows of PA to restore the original matrix A.

2) The matrix "A rearranged" is PA, and PA can be row reduced to an echelon form using only row replacements (**): there is no need for further row interchanges. Therefore we can use our previous method to find an LU decomposition of PA:

    PA = LU

3) Then A = (P^(-1) L) U. Here, P^(-1) L is a permuted lower triangular matrix, meaning lower triangular with some rows interchanged. MATLAB Help gives P^(-1) L the cute name "psychologically lower triangular" matrix, a term never used anywhere else to my knowledge.

This factorization is just as useful for equation solving as the LU decomposition. For example, we can solve Ax = (P^(-1) L) U x = b by first substituting y = Ux. It's still easy to solve (P^(-1) L) y = b, because P^(-1) L is psychologically lower triangular (see the illustration below), and
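This PA = LU factorization (equivalently, A = (P^(-1) L) U) is what numerical libraries actually compute. A hedged SciPy illustration: scipy.linalg.lu returns matrices with the convention A = P L U, so its P plays the role of our P^(-1). (The matrix here is the one from Example 4 below.)

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1., 1., 2.],
              [1., 1., 1.],
              [1., 4., 4.],
              [0., 1., 1.]])

P, L, U = lu(A)     # SciPy convention: A = P @ L @ U
assert np.allclose(P @ L @ U, A)

PA = P.T @ A        # P.T = P^(-1) for a permutation matrix
assert np.allclose(PA, L @ U)   # the rearranged matrix factors as LU
print(np.round(U, 4))
```

SciPy chooses its interchanges by partial pivoting, so its P need not be the same permutation found by hand below, but the relationship PA = LU holds either way.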

then, knowing y, it's easy to solve Ux = y because U is in echelon form.

For example, suppose (P^(-1) L) y = b is the equation

    [ 2  1  0  0 ] [ y1 ]     [ 0 ]
    [ 1  0  0  0 ] [ y2 ]  =  [ 1 ]
    [ 2  2  1  1 ] [ y3 ]     [ 1 ]
    [ 1  1  1  0 ] [ y4 ]     [ 0 ]

where the coefficient matrix is a permuted (unit) lower triangular matrix. We can simply imitate forward substitution, but work the rows in the order that (rearranged) would make a lower triangular matrix, starting with the simplest row, the one that involves only the variable y1. That is,

    y1 = 1, from the second row; then
    y2 = 0 - 2y1 = -2, from the first row; then
    y3 = 0 - y1 - y2 = 1, from the fourth row; then
    y4 = 1 - 2y1 - 2y2 - y3 = 2, from the third row.

(In some books or applications, when you see "A = LU" it might even be assumed that L is permuted lower triangular rather than lower triangular. Be careful when reading other books to check what the author means.)

OR you could solve Ax = (P^(-1) L) U x = b as follows. The equation PAx = Pb has exactly the same solutions as Ax = b, because:

• if PAx = Pb is true, then multiplying both sides by P^(-1) shows that x is also a solution of Ax = b, and

• if Ax = b is true, then multiplying by P shows that PAx = Pb is also true.

If you think in terms of writing out the linear systems of equations: PAx = Pb is the same system of equations as Ax = b, but with the equations listed in some other (interchanged) order determined by the permutation matrix P. Since we have an LU decomposition for PA, solving PAx = LUx = Pb for x is easy.

Example 4

Let

    A = [ 1  1  2 ]
        [ 1  1  1 ]
        [ 1  4  4 ]
        [ 0  1  1 ]

1) On scratch paper, do enough steps of row reducing A to echelon form (no row rescaling allowed and, following standard procedure, only adding multiples of rows to lower rows) to see what row interchanges, if any, are necessary. It turns out that only one, interchanging rows 2 and 4 at a certain stage, will be enough.
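Forward substitution "in the rearranged row order" is easy to express in code. A small sketch (mine): given the row order that makes the matrix lower triangular, solve in that order.

```python
import numpy as np

PL = np.array([[2., 1., 0., 0.],
               [1., 0., 0., 0.],
               [2., 2., 1., 1.],
               [1., 1., 1., 0.]])
b = np.array([0., 1., 1., 0.])

# Row order that makes PL lower triangular: rows 2, 1, 4, 3 (1-indexed).
order = [1, 0, 3, 2]

y = np.zeros(4)
for k, r in enumerate(order):   # k = how many y's are already known
    # row r involves only y[0..k]; PL[r, k] is its "diagonal" entry
    y[k] = (b[r] - PL[r, :k] @ y[:k]) / PL[r, k]

print(y)                        # [ 1. -2.  1.  2.]
assert np.allclose(PL @ y, b)
```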

2) So go back to the start and perform that row interchange first, creating

    PA = [ 1  1  2 ]                 [ 1  0  0  0 ]
         [ 0  1  1 ]    where P  =   [ 0  0  0  1 ]
         [ 1  4  4 ]                 [ 0  0  1  0 ]
         [ 1  1  1 ]                 [ 0  1  0  0 ]

3) Row reduce PA to an echelon form U, keeping track of the EROs used:

    [ 1  1  2 ]  (1)  [ 1  1  2 ]  (2)  [ 1  1  2 ]  (3)  [ 1  1  2 ]  (4)  [ 1  1  2 ]
    [ 0  1  1 ]   ~   [ 0  1  1 ]   ~   [ 0  1  1 ]   ~   [ 0  1  1 ]   ~   [ 0  1  1 ]  =  U
    [ 1  4  4 ]       [ 0  3  2 ]       [ 0  3  2 ]       [ 0  0 -1 ]       [ 0  0 -1 ]
    [ 1  1  1 ]       [ 1  1  1 ]       [ 0  0 -1 ]       [ 0  0 -1 ]       [ 0  0  0 ]

The row operations were

    (1) add -1 (row 1) to row 3
    (2) add -1 (row 1) to row 4
    (3) add -3 (row 2) to row 3
    (4) add -1 (row 3) to row 4

Using the method described earlier to "find L more efficiently," we can write down L as we row reduce PA:

    Start with        After (1), (2)     After (3), (4)     Fill in 0's
    [ 1  *  *  * ]    [ 1  *  *  * ]     [ 1  *  *  * ]     [ 1  0  0  0 ]
    [ *  1  *  * ]    [ *  1  *  * ]     [ *  1  *  * ]     [ 0  1  0  0 ]
    [ *  *  1  * ]    [ 1  *  1  * ]     [ 1  3  1  * ]     [ 1  3  1  0 ]  =  L
    [ *  *  *  1 ]    [ 1  *  *  1 ]     [ 1  *  1  1 ]     [ 1  0  1  1 ]

Then PA = LU:

    [ 1  1  2 ]   [ 1  0  0  0 ] [ 1  1  2 ]
    [ 0  1  1 ] = [ 0  1  0  0 ] [ 0  1  1 ]
    [ 1  4  4 ]   [ 1  3  1  0 ] [ 0  0 -1 ]
    [ 1  1  1 ]   [ 1  0  1  1 ] [ 0  0  0 ]

13 # # " ( To solve EB œ Ö Ùß write TEB œ PYB œ T, œ Ö Ù Ðrows 2,4 of, interchanged, as Õ( Õ" # ( for E earlier Ñ. We then solve PYB œ Ö Ù in the same way as before Õ" Factoring E with slight variations from E œ PY PHY decompositions Suppose we can row reduce E using only EROs described in (**) and get the factorization E œ PY. If we loosen up and then allow rescaling of the rows of Y, we can convert all the leading entries (pivot positions) in Y to "'s. This amounts to a factorization E œ PHY where, as before, P is unit lower triangular, H is a (square) diagonal matrix and Y is an echelon form of E which (by rescaling) has only "'s in its pivot positions. Since P is unit lower triangular, this produces more symmetry in appearance between P and Y. ( For example: suppose E is square and invertible, and that E œ LDU as described. Then P and Y will both be square, with P unit lower triangular and Y unit upper triangular! 's below the diagonal and only "'s on the diagonal. Why must Y have that form? ) For Y, this is an aesthetically nicer form, but frankly, it doesn't help much in solving a system of equations. To illustrate the (easy) details: first factor as E œ PY. Create a diagonal matrix H as follows: If Row of 3 Y has a pivot position: Let : 3Ð Á!Ñ be the number in the pivot position. Factor : 3 out of Row 3 of Y th (leaving a " in the pivot position) and use the number : 3 as the 3 diagonal entry in H. th Otherwise, make! the 3 diagonal element of H. w After factoring out the : 3's from the rows of Y, what's left is a new matrix Y that has only "'s in its pivot positions, and HY w œ Y (because multiplication on the left by a diagonal matrix just w w multiples each row of Y by the corresponding diagonal element of HÑÞ Therefore E œ PHY Þ At this point, we abuse the notation and write Y in place of Y w we do this only for neatness, since the rightmost matrix in this factorization is traditionally called Y : just remember that the Y in E œ PHY usually isn't the same as the Y in the original factorization E œ PY Þ E œ PHY is cleverly called an PHY matrices from Example 2. decomposition of EÞ Here's an illustration using the E œ "!! # " #! # ) œ "!! # " # "!!! & # "# & Õ! " Õ # " Õ " "!!!! & & &

Note for those who use MATLAB

MATLAB has a command [L, U, P] = lu(A) ("lu" is lowercase in MATLAB).

Unless you're very comfortable with MATLAB and the material, you should probably avoid confusion by only using the MATLAB command on square matrices A (see the technical note below). In that case, MATLAB should give you

    1) a square unit lower triangular L,
    2) U in echelon form, and
    3) a permutation matrix P

for which PA = LU.

If it's possible simply to factor A = LU, then MATLAB might just give P = I = the identity matrix. However, sometimes MATLAB will permute some rows even when it's not mathematically necessary (because doing so, in large complicated problems, will help reduce roundoff errors). See the Numerical Note about partial pivoting in the textbook, p. 17.

If A is not square, then MATLAB gives a result for which PA = LU is still true, but the shapes of the matrices may vary from our discussion.

Technical note from a department MATLAB expert: If A is m x n, then [L,U,P] = lu(A) should return an m x m unit lower triangular L, an m x n upper triangular U, and an m x m permutation P satisfying PA = LU. In fact, the dimensions of what MATLAB returns depend on whether m = n, m < n, or m > n. Evidently the algorithm performs Gaussian elimination with partial pivoting as long as it can on A, in place, then extracts the relevant parts of the changed A as L and U, with 1's inserted into L. P is stored as a vector but output as a matrix. Thus U will come out as m x n and P will be m x m, but L will be m x k with k = min(m, n), because its zero rows are superfluous. (V. Wickerhauser)
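For readers without MATLAB, SciPy exposes the same factorization. A sketch (mine); note the convention differs: SciPy returns matrices with A = P L U, so its P corresponds to the transpose of MATLAB's P.

```python
import numpy as np
from scipy.linalg import lu

A = np.random.rand(5, 3)            # works for non-square A too
P, L, U = lu(A)                     # here P is 5x5, L is 5x3, U is 3x3
assert np.allclose(P @ L @ U, A)
assert np.allclose(P.T @ A, L @ U)  # i.e., (P^T) A = L U, as in these notes
print(L.shape, U.shape, P.shape)
```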

Here's a small-scale illustration of another way in which the LU decomposition can be helpful in some large, applied computations.

A band matrix is a square n x n matrix in which all the nonzero entries lie in a "band" that surrounds the main diagonal. For example, in the following matrix all the nonzero entries are in a band that consists of one superdiagonal above the main diagonal and two subdiagonals below the main diagonal:

    [ 1  1  0  0  0 ]
    [ 2  1  4  0  0 ]
    [ 3  2  1  1  0 ]     (all other entries = 0)
    [ 0  1  2  1  4 ]
    [ 0  0  1  2  4 ]

The number of superdiagonals in the band is called the upper bandwidth (1, in the example); the number of subdiagonals in the band is the lower bandwidth (2, in the example); and the total number of diagonals in the band, (upper bandwidth) + (lower bandwidth) + 1, is called the bandwidth (4, in the example).

A diagonal matrix such as

    [ 3  0  0 ]
    [ 0  2  0 ]
    [ 0  0  6 ]

is an extreme example, one for which bandwidth = 1.

Use of the term "band matrix" is a bit subjective: in fact, we could think of any matrix as a band matrix; if, say, all entries are nonzero, then the matrix is just one that has the largest possible bandwidth. In practice, however, the term band matrix is used to refer to a matrix in which the bandwidth is relatively small compared to the size of the matrix. Therefore the matrix is sparse: a relatively large number of its entries are 0.

Look at the simple problem about steady heat flow in Exercises 33-34 in Section 1.1 of the text; a larger problem, set up the same way, occurs in Exercise 31, Section 2.5.
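The definitions translate directly into code. A tiny NumPy sketch (my helper name) that measures the bandwidths of the example above:

```python
import numpy as np

def bandwidths(A):
    """Return (lower, upper) bandwidth of a square matrix A."""
    i, j = np.nonzero(A)
    lower = int(np.max(i - j, initial=0))   # deepest subdiagonal used
    upper = int(np.max(j - i, initial=0))   # highest superdiagonal used
    return lower, upper

A = np.array([[1, 1, 0, 0, 0],
              [2, 1, 4, 0, 0],
              [3, 2, 1, 1, 0],
              [0, 1, 2, 1, 4],
              [0, 0, 1, 2, 4]])
lo, up = bandwidths(A)
print(lo, up, lo + up + 1)   # 2 1 4
```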

Given the temperatures at the nodes around the edge of the plate, we want to approximate the temperatures x1, ..., x8 at the interior nodes numbered 1, ..., 8. This leads, as in Exercise 33, Section 1.1, to an equation Ax = b with a band matrix A:

    [  4  -1  -1   0   0   0   0   0 ]       [  5 ]
    [ -1   4   0  -1   0   0   0   0 ]       [ 15 ]
    [ -1   0   4  -1  -1   0   0   0 ]       [  0 ]
    [  0  -1  -1   4   0  -1   0   0 ] x  =  [ 10 ]
    [  0   0  -1   0   4  -1  -1   0 ]       [  0 ]
    [  0   0   0  -1  -1   4   0  -1 ]       [ 10 ]
    [  0   0   0   0  -1   0   4  -1 ]       [ 20 ]
    [  0   0   0   0   0  -1  -1   4 ]       [ 30 ]

If we can find an LU decomposition of A, then L and U will also be band matrices (why? think about how you find L and U from A). Using the MATLAB command [L, U, P] = lu(A) gives an LU decomposition. Rounded to 4 decimal places, we get

    L = [  1.0000       0       0       0       0       0       0       0 ]
        [ -0.2500  1.0000       0       0       0       0       0       0 ]
        [ -0.2500 -0.0667  1.0000       0       0       0       0       0 ]
        [       0 -0.2667 -0.2857  1.0000       0       0       0       0 ]
        [       0       0 -0.2679 -0.0833  1.0000       0       0       0 ]
        [       0       0       0 -0.2917 -0.2921  1.0000       0       0 ]
        [       0       0       0       0 -0.2697 -0.0861  1.0000       0 ]
        [       0       0       0       0       0 -0.2948 -0.2931  1.0000 ]
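You can reproduce these numbers with SciPy (a sketch of mine; for this diagonally dominant matrix partial pivoting happens to perform no row swaps, so P should come out as the identity, and the assert makes that assumption explicit):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([
    [ 4., -1., -1.,  0.,  0.,  0.,  0.,  0.],
    [-1.,  4.,  0., -1.,  0.,  0.,  0.,  0.],
    [-1.,  0.,  4., -1., -1.,  0.,  0.,  0.],
    [ 0., -1., -1.,  4.,  0., -1.,  0.,  0.],
    [ 0.,  0., -1.,  0.,  4., -1., -1.,  0.],
    [ 0.,  0.,  0., -1., -1.,  4.,  0., -1.],
    [ 0.,  0.,  0.,  0., -1.,  0.,  4., -1.],
    [ 0.,  0.,  0.,  0.,  0., -1., -1.,  4.]])
b = np.array([5., 15., 0., 10., 0., 10., 20., 30.])

P, L, U = lu(A)
assert np.allclose(P, np.eye(8))            # no pivoting was actually needed
print(np.round(L, 4))                       # the band structure survives in L ...
print(np.round(U, 4))                       # ... and in U
print(np.round(np.linalg.solve(A, b), 4))   # [3.9569 6.5885 ... 12.0431]
```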

    U = [ 4.0000 -1.0000 -1.0000       0       0       0       0       0 ]
        [      0  3.7500 -0.2500 -1.0000       0       0       0       0 ]
        [      0       0  3.7333 -1.0667 -1.0000       0       0       0 ]
        [      0       0       0  3.4286 -0.2857 -1.0000       0       0 ]
        [      0       0       0       0  3.7083 -1.0833 -1.0000       0 ]
        [      0       0       0       0       0  3.3919 -0.2921 -1.0000 ]
        [      0       0       0       0       0       0  3.7052 -1.0861 ]
        [      0       0       0       0       0       0       0  3.3868 ]

The LU decomposition can be used to solve Ax = b, as we showed earlier in these notes. The important thing in this example is not actually finding the solution, which turns out to be

    x = [ 3.9569; 6.5885; 4.2392; 7.3971; 5.6029; 8.7608; 9.4115; 12.0431 ]

What is interesting and useful to notice:

• From earlier in these notes: if we need to solve Ax = b for different vectors b, then factoring A = LU saves time.

• A new observation is that for a very large n x n sparse band matrix A (perhaps thousands of rows and columns, but with a relatively narrow band of nonzero entries), we can store A in much less computer space than is needed for a general n x n matrix: we only need to store the entries in the upper and lower bands, plus two numbers for the upper and lower bandwidths; there is no need to waste storage space on all the other matrix entries, which are known to be 0's. Since L and U are also sparse band matrices, they are also relatively economical to store.

• Using A, L and U can let us avoid computations that use A^(-1). Even if A is a sparse band matrix, A^(-1) may have no band structure and take up a lot more storage space than A, L and U. For instance, in the preceding example,

18 E œ!þ#*&!þ!)''!þ!*%&!þ!&!*!þ!")!þ!##(!þ!"!!!þ!!)#!þ!)''!þ#*&!þ!&!*!þ!*%&!þ!##(!þ!")!þ!!)#!þ!"!!!þ!*%&!þ!&!*!þ#("!þ"!*!þ"!%&!þ!&*"!þ!")!þ!##(!þ!&!*!þ!*%&!þ"!*!þ#("!þ!&*"!þ"!%&!þ!##(!þ!")!þ!")!þ!##(!þ"!%&!þ!&*"!þ#(" !Þ!*%&!Þ!&!*!Þ! 227!Þ 0318!Þ 0591!Þ 1045!Þ 1093!Þ 3271!Þ 0509!Þ!*%& Ö Ù!Þ!"!!!Þ!!)#!Þ!")!Þ!##(!Þ!*%&!Þ!&!*!Þ#*&!Þ!)'' Õ!Þ!!)#!Þ!"!!!Þ!##(!Þ!")!Þ!&!*!Þ!*%&!Þ!)''!Þ#*&
