LU Decomposition: Factoring A = LU

Motivation

To solve Ax = b, we can row reduce the augmented matrix

    [ a1  a2 ... ai ... an | b ]  ~  ...  ~  [ c1  c2 ... ci ... cn  d ]

until we get to [ c1 c2 ... ci ... cn d ] in an echelon form (this is the forward part of the row reduction process). Then we can

    i) use back substitution to solve for x, or
    ii) continue on with the second (backward) part of the row reduction process until [ c1 c2 ... ci ... cn d ] is in reduced row echelon form, at which point it's easy to read off the solutions of Ax = b.

A computer might take the first approach; the second might be better for hand calculations. But both i) and ii) take about the same number of arithmetic operations (see the first paragraph about back substitution in Sec. 1.2 (p. 19) of the textbook).

Whether we use i) or ii) to solve Ax = b, it is sometimes necessary in applications to solve a large system Ax = b many times for the same A but with a different b each time, perhaps a system with millions of variables, solved thousands of times! For example, read the description of the aircraft design problem at the beginning of Chapter 2. It's too inefficient to row reduce [ a1 a2 ... ai ... an | b ] each time, even if a computer is doing the work.

One way to avoid these repeated row reductions is to factor A, once and for all, in a way that makes it relatively easy to solve Ax = b again each time b changes. This is the motivation for the LU decomposition (also called an LU factorization).

The LU Decomposition

An LU decomposition of an m x n matrix A is a factorization A = LU, where

    U is an echelon form of A, and
    L is a square unit lower triangular matrix.

For a square matrix, "lower triangular" means all 0's above the diagonal; "unit lower triangular" means only 1's on the diagonal. A couple of observations:

  • U has the same shape, m x n, as A, because U is an echelon form of A.
  • If A and U happen to be square, then the echelon form U is automatically upper triangular. But even when U is not square, U has only 0's below the leading entry in each row, so U still resembles an upper triangular matrix.
  • Since L is square, L must be m x m for the product LU to make sense. Since L is unit lower triangular, L is always invertible (why? think about the reduced row echelon form for L).

It's not always possible to find an LU decomposition for A. But when it is possible, the number of steps to find L and U is not much different from the number of steps required to solve Ax = b (one time!) by row reduction.
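Here is a minimal MATLAB sketch of that factor-once, solve-many workflow (an illustration added to these notes; it uses MATLAB's built-in lu command, discussed near the end of these notes, and the small matrix and right-hand sides are made up):

    % Factor A once, then reuse the factors for each new right-hand side.
    % (lu may also return a row permutation P; see the last sections below.)
    A  = [1 2 3; 2 1 0; 0 4 2];
    [L, U, P] = lu(A);          % P*A = L*U
    b1 = [2; 1; 3];
    b2 = [1; 2; 0];
    x1 = U \ (L \ (P*b1));      % two cheap triangular solves per b
    x2 = U \ (L \ (P*b2));      % no new row reduction needed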

And if we can factor A in this way, then it takes relatively few additional steps to solve Ax = b for x each time a new b is chosen. Why? If we can write Ax = LUx = b, then we can substitute Ux = y. The original matrix equation Ax = b is then replaced by two new ones:

    Ly = b
    Ux = y                                                          (*)

This is an improvement (as Example 1 will show) because:

    i) it's easy to solve Ly = b for y, because L is lower triangular;
    ii) after finding y, solving Ux = y is also easy, because U is already in echelon form; and
    iii) L and U remain the same whenever we change b.

Example 1  Find an LU decomposition of

    A = [ 1  2  3 ]
        [ 2  1  0 ]
        [ 0  4  2 ]

and use it to solve the matrix equation

    [ 1  2  3 ]       [ 2 ]
    [ 2  1  0 ] x  =  [ 1 ]
    [ 0  4  2 ]       [ 3 ]

First, reduce A to an echelon form U. (It turns out that we need to be careful about which EROs we use during the row reduction; for now, just follow along, and the discussion that comes later will explain what caution is necessary.) Each elementary row operation that we use corresponds to left multiplication by an elementary matrix, and we record those matrices at each step:

    A = [ 1  2  3 ]    [ 1  2  3 ]    [ 1  2  3 ]
        [ 2  1  0 ] ~  [ 0 -3 -6 ] ~  [ 0 -3 -6 ]  =  U  (an echelon form)
        [ 0  4  2 ]    [ 0  4  2 ]    [ 0  0 -6 ]

    first step:               second step:
    E1 = [ 1  0  0 ]          E2 = [ 1   0   0 ]
         [-2  1  0 ]               [ 0   1   0 ]
         [ 0  0  1 ]               [ 0  4/3  1 ]

so we get E2 E1 A = U. Then A = (E2 E1)^(-1) U = E1^(-1) E2^(-1) U:

    A = E1^(-1) E2^(-1) U = [ 1  0  0 ] [ 1   0   0 ]     [ 1   0    0 ]
                            [ 2  1  0 ] [ 0   1   0 ] U = [ 2   1    0 ] U
                            [ 0  0  1 ] [ 0 -4/3  1 ]     [ 0  -4/3  1 ]

In this example E1^(-1) E2^(-1) is unit lower triangular, so we let L = E1^(-1) E2^(-1), and A = LU is the decomposition we want. (Is it just luck that E1^(-1) E2^(-1) turned out to be unit lower triangular? No; it's because we were careful about the steps in the row reduction: see the discussion below.)
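The factorization can be checked numerically. The following MATLAB lines (an added illustration, with the matrices copied from Example 1) verify that E2 E1 A = U and that L = E1^(-1) E2^(-1):

    % A quick check of the factorization just computed.
    A  = [1 2 3; 2 1 0; 0 4 2];
    E1 = [1 0 0; -2 1 0; 0 0 1];     % adds -2*(row 1) to row 2
    E2 = [1 0 0; 0 1 0; 0 4/3 1];    % adds 4/3*(row 2) to row 3
    U  = E2*E1*A                      % the echelon form above
    L  = inv(E1)*inv(E2)              % = [1 0 0; 2 1 0; 0 -4/3 1]
    disp(norm(A - L*U))               % 0, up to roundoff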

To solve

    Ax = LUx = [ 1   0    0 ] [ 1  2  3 ]     [ 2 ]
               [ 2   1    0 ] [ 0 -3 -6 ] x = [ 1 ]
               [ 0  -4/3  1 ] [ 0  0 -6 ]     [ 3 ]

substitute Ux = y to get the two equations (*):

    Ly = b:    [ 1   0    0 ]     [ 2 ]
               [ 2   1    0 ] y = [ 1 ]
               [ 0  -4/3  1 ]     [ 3 ]

    y = Ux:    y = [ 1  2  3 ]
                   [ 0 -3 -6 ] x
                   [ 0  0 -6 ]

First solve the top equation for y. To do this, we can reduce [ L | b ] to reduced row echelon form, or we can use forward substitution: just like back substitution, but starting with the top equation. Either way, we need only a few steps to get y, because L is lower triangular. Using forward substitution gives

    y1 = 2;   y2 = 1 - 2 y1 = -3;   y3 = 3 + (4/3) y2 = -1,

so y = [ 2; -3; -1 ]. Then the second equation becomes

    [ 1  2  3 ]     [  2 ]
    [ 0 -3 -6 ] x = [ -3 ]
    [ 0  0 -6 ]     [ -1 ]

which we can quickly solve with back substitution or further row reduction. Using back substitution:

    x3 = (-1)/(-6) = 1/6;  then  -3 x2 - 6 x3 = -3  gives  x2 = 2/3;
    and  x1 = 2 - 2 x2 - 3 x3 = 1/6.

Therefore

    x = [ x1 ]   [ 1/6 ]
        [ x2 ] = [ 2/3 ]
        [ x3 ]   [ 1/6 ]
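Written out as loops, the two substitutions look like this in MATLAB (an added sketch using Example 1's L, U and b; the only assumption used is that L has 1's on its diagonal):

    L = [1 0 0; 2 1 0; 0 -4/3 1];
    U = [1 2 3; 0 -3 -6; 0 0 -6];
    b = [2; 1; 3];
    n = 3;
    y = zeros(n,1);                 % forward substitution: L y = b
    for i = 1:n
        y(i) = b(i) - L(i,1:i-1) * y(1:i-1);     % diagonal of L is 1
    end
    x = zeros(n,1);                 % back substitution: U x = y
    for i = n:-1:1
        x(i) = (y(i) - U(i,i+1:n) * x(i+1:n)) / U(i,i);
    end
    % x comes out [1/6; 2/3; 1/6], matching the hand computation above.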

Example 1, continued: same coefficient matrix A but a different b

Now that we have factored A = LU, we can easily solve another equation, say

    Ax = [ 1 ]
         [ 2 ]     (see above: here only b has changed!)
         [ 0 ]

The two equations (*) become

    [ 1   0    0 ]     [ 1 ]          [ 1  2  3 ]
    [ 2   1    0 ] y = [ 2 ]   and    [ 0 -3 -6 ] x = y
    [ 0  -4/3  1 ]     [ 0 ]          [ 0  0 -6 ]

By forward substitution, y1 = 1; y2 = 2 - 2(1) = 0; y3 = 0 + (4/3)(0) = 0, so y = [ 1; 0; 0 ]. Then the second equation becomes

    [ 1  2  3 ]     [ 1 ]
    [ 0 -3 -6 ] x = [ 0 ]
    [ 0  0 -6 ]     [ 0 ]

and, by back substitution, x3 = 0; then -3 x2 - 6 x3 = 0, so x2 = 0; and x1 = 1 - 2(0) - 3(0) = 1. So x = [ 1; 0; 0 ].

When does the method in Example 1 produce an LU decomposition?

We can always perform the steps illustrated in Example 1 to get a factorization A = ZU, where U is an echelon form of A and Z is a product of elementary matrices (in Example 1, Z = E1^(-1) E2^(-1)). But Z won't always turn out to be lower triangular, so Z might not deserve the name L. When does the method work? (Compare the comments below to Example 1.)

Suppose (as in Example 1) that the only type of ERO used to row reduce A is "add a multiple of a row to a lower row." Let these EROs correspond to the elementary matrices E1, ..., Ep. Then Ep ... E1 A = U, and therefore A = E1^(-1) ... Ep^(-1) U. Each matrix Ei and its inverse Ei^(-1) will be unit lower triangular (why?). Then L = E1^(-1) ... Ep^(-1) is also lower triangular with 1's on its diagonal. (Convince yourself that a product of unit lower triangular matrices is a unit lower triangular matrix.)

However, note that

    i) if a row rescaling ERO had been used, then the corresponding elementary matrix would not have only 1's on the diagonal, and therefore the product E1^(-1) ... Ep^(-1) might not be unit lower triangular;
    ii) if a row interchange ("swap") ERO had been used, then the corresponding elementary matrix would not be lower triangular, and therefore the product E1^(-1) ... Ep^(-1) might not be lower triangular.

To summarize: the method illustrated in Example 1 always produces an LU decomposition of A if the only EROs used to row reduce A to U are of the form "add a multiple of a row to a lower row."   (**)

In that case: if the EROs used correspond to multiplication by elementary matrices E1, ..., Ep, then Ep ... E1 A = U is in echelon form and

    A = LU, where L = (Ep ... E1)^(-1) = E1^(-1) ... Ep^(-1) is unit lower triangular.   (***)

Stopping to think, you can see that the restriction in (**) is not really as severe as it sounds. For example:

  • When you do row reduction systematically, according to the standard steps as outlined in the text, you never add a multiple of a row to a higher row. The reason for using this ERO is to create 0's below a pivot. In other words, adding multiples of rows only to lower rows doesn't actually constrain you at all: it's just standard procedure in row reductions.

  • "No row rescaling" is at most an inconvenience. When row reducing by hand, row rescaling may be helpful to simplify arithmetic, but rescaling is never necessary to get to an echelon form of A. (Of course, row rescaling might be needed to proceed on to the reduced row echelon form; to do that, rescaling may be needed to convert leading entries into 1's.)

    Side comment: if we did allow row rescaling, it would only mean that we could end up with a lower triangular L in which some diagonal entries are not 1. From an equation-solving point of view, this would not be a disaster. However, the text follows the rule that L should have only 1's on its diagonal, so we'll stick to that convention.

But (**) really does impose some restriction: for example,

    A = [ 0  2  0 ]
        [ 2  0  0 ]
        [ 0  0  2 ]

simply cannot be reduced to echelon form without swapping rows 1 and 2. Here the method in Example 1 simply will not work to produce an LU decomposition. However (see the last section of these notes), there is a work-around that is almost as good as an LU decomposition in such situations.
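A quick MATLAB experiment (added here; it uses the built-in lu command described later in these notes) shows how this matrix forces a row swap:

    % lu cannot avoid an interchange here: the returned P is not the
    % identity, and A = LU is impossible, but P*A = L*U still holds.
    A = [0 2 0; 2 0 0; 0 0 2];
    [L, U, P] = lu(A)
    disp(norm(P*A - L*U))      % 0: the "work-around" of the last section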

To find L more efficiently

If A is large, then row reducing A to echelon form might involve a very large number of elementary matrices Ei in the formula (***), so that using this formula to find L is too much work. Fortunately, there is a technique, illustrated in Example 2, for writing down L, entry by entry, as A is being row reduced. (The text gives a slightly different presentation, and offers an explanation of why it works. It's really nothing more than careful bookkeeping. Since these notes are just a survey about LU decompositions, we will only write down the method.)

Assume A is m x n and that it can be row reduced to U following the rule (**). Then L will be a square m x m matrix, and you can write down L using these steps:

  • Put 1's on the diagonal of L and 0's everywhere above the diagonal.
  • Whenever you use a row operation "add c times row i to a lower row j," put -c in the (j,i) position of L. (Careful: notice the sign change on c, and put -c in the (j,i) position in L, not in the (i,j) position.)
  • When A has been reduced to echelon form, fill in any remaining empty positions in L with 0's.

If we apply these steps while row reducing in Example 1, here's what we'd get:

  • A was 3 x 3, so start with

        L = [ 1  0  0 ]
            [ *  1  0 ]
            [ *  *  1 ]

  • The row operations we used in Example 1 were:

        add -2(row 1) to row 2, so put 2 in the (2,1) position of L: that is, L21 = 2;
        add 4/3(row 2) to row 3, so put -4/3 in the (3,2) position of L: that is, L32 = -4/3.

    Now we have

        L = [ 1    0   0 ]
            [ 2    1   0 ]
            [ *  -4/3  1 ]

  • Fill in the rest of L with 0's, to get

        L = [ 1    0   0 ]
            [ 2    1   0 ]
            [ 0  -4/3  1 ]

    the same L as we got by multiplying elementary matrices.
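These steps translate directly into code. The following MATLAB function (an added sketch; the name lu_nopivot is made up) row reduces A under rule (**), storing each multiplier's negative in L as described, and reports failure if a needed pivot is 0:

    function [L, U] = lu_nopivot(A)
        [m, n] = size(A);
        L = eye(m);
        U = A;
        row = 1;                               % current pivot row
        for col = 1:n
            if row > m, break; end
            if U(row, col) == 0
                if any(U(row+1:m, col) ~= 0)
                    error('rule (**) fails: a row swap would be needed');
                end
                continue;                      % no pivot in this column
            end
            for j = row+1:m
                c = -U(j, col) / U(row, col);  % "add c*(pivot row) to row j"
                U(j, :) = U(j, :) + c * U(row, :);
                L(j, row) = -c;                % note the sign change
            end
            row = row + 1;
        end
    end

Running [L, U] = lu_nopivot(A) on the matrices of Examples 1 and 2 reproduces the factorizations found by hand.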

Example 2

In this example, A is a little larger and not square; otherwise, the calculations are quite parallel to those in Example 1. If possible, find an LU factorization of

    A = [ 1  0  0  2   3  1 ]
        [ 2  0  5  4  16  7 ]
        [ 1  0  2  2   8  4 ]

and use it to solve Ax = b = [ 1; 2; 1 ].

    A ~ [ 1  0  0  2   3  1 ]  ~  [ 1  0  0  2   3  1 ]  ~  [ 1  0  0  2   3  1 ]
        [ 0  0  5  0  10  5 ]     [ 0  0  5  0  10  5 ]     [ 0  0  5  0  10  5 ]  =  U
        [ 1  0  2  2   8  4 ]     [ 0  0  2  0   5  3 ]     [ 0  0  0  0   1  1 ]

U is in echelon form, and no rescaling or row swaps were used in the row reduction. To find L, start with

    L = [ 1  0  0 ]
        [ *  1  0 ]
        [ *  *  1 ]

The row operations we used were

    add -2(row 1) to (row 2)
    add -1(row 1) to (row 3)
    add -2/5(row 2) to (row 3)

so

    L = [ 1   0   0 ]
        [ 2   1   0 ]
        [ 1  2/5  1 ]

Then

    A = [ 1  0  0  2   3  1 ]   [ 1   0   0 ] [ 1  0  0  2   3  1 ]
        [ 2  0  5  4  16  7 ] = [ 2   1   0 ] [ 0  0  5  0  10  5 ]  =  L U
        [ 1  0  2  2   8  4 ]   [ 1  2/5  1 ] [ 0  0  0  0   1  1 ]

We can solve Ax = b = [ 1; 2; 1 ] by writing

    Ly = b:    [ 1   0   0 ]     [ 1 ]
               [ 2   1   0 ] y = [ 2 ]
               [ 1  2/5  1 ]     [ 1 ]

    y = Ux:    y = [ 1  0  0  2   3  1 ]
                   [ 0  0  5  0  10  5 ] x
                   [ 0  0  0  0   1  1 ]

The first equation gives y1 = 1, y2 = 2 - 2 y1 = 0, and y3 = 1 - y1 - (2/5) y2 = 0, so y = [ 1; 0; 0 ]. Then the second equation becomes

    [ 1  0  0  2   3  1 ]     [ 1 ]
    [ 0  0  5  0  10  5 ] x = [ 0 ]
    [ 0  0  0  0   1  1 ]     [ 0 ]

which we can quickly solve with back substitution or further row reduction. The pivot variables are x1, x3, x5; the free variables are x2, x4, x6. Back substitution gives x5 = -x6, then x3 = -2 x5 - x6 = x6, then x1 = 1 - 2 x4 - 3 x5 - x6 = 1 - 2 x4 + 2 x6. So the solution is

    x = [ x1 ]   [ 1 - 2 x4 + 2 x6 ]   [ 1 ]      [ 0 ]      [ -2 ]      [  2 ]
        [ x2 ]   [       x2        ]   [ 0 ]      [ 1 ]      [  0 ]      [  0 ]
        [ x3 ] = [       x6        ] = [ 0 ] + x2 [ 0 ] + x4 [  0 ] + x6 [  1 ]
        [ x4 ]   [       x4        ]   [ 0 ]      [ 0 ]      [  1 ]      [  0 ]
        [ x5 ]   [      -x6        ]   [ 0 ]      [ 0 ]      [  0 ]      [ -1 ]
        [ x6 ]   [       x6        ]   [ 0 ]      [ 0 ]      [  0 ]      [  1 ]

Factoring A with slight variations from A = LU: LDU decompositions

Suppose we row reduce A according to (**) and get the factorization A = LU. If we loosen up and also allow rescaling during the row reduction of A, we can get a factorization A = LDU, where, as before, L is unit lower triangular, D is a (square) diagonal matrix, and U is an echelon form of A which (by rescaling) has only 1's in its pivot positions. Since L is unit lower triangular, this produces more symmetry in appearance between L and U. (For example: suppose A is square and invertible, and that A = LDU as described. Then L and U will both be square, with L unit lower triangular and U unit upper triangular: 0's below the diagonal and only 1's on the diagonal. Why must U have that form?) For U, this is an aesthetically nicer form but, frankly, it doesn't help much in solving the system of equations.

To illustrate the (easy) details: first factor A = LU. Create a diagonal matrix D as follows:

    If row i of U has a pivot position, let pi (which is nonzero) be the number in the pivot position. Factor pi out of row i of U, and use the number pi as the i-th diagonal entry of D.
    If row i of U has no pivot position (a row of 0's), use 0 as the i-th diagonal entry of D.
    Use 0 for all the other entries of D.

After factoring the pi's out of the rows of U, what's left is a new matrix U' that has only 1's in its pivot positions, and DU' = U (since multiplication on the left by a diagonal matrix just multiplies each row of U' by the corresponding diagonal element of D). Therefore A = LDU'. At this point, we abuse the notation and use U in place of U'; we do this only for neatness, since the rightmost matrix in this factorization is traditionally called U. Just remember that the U in A = LDU usually isn't the same as the U in the original factorization A = LU. A = LDU is cleverly called an LDU decomposition of A.
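As code, extracting D is only a few lines. This MATLAB sketch (an addition to the notes; the function name factor_out_pivots is made up) follows the rules above, given any U already in echelon form:

    function [D, Unew] = factor_out_pivots(U)
        [m, n] = size(U);
        D = zeros(m);
        Unew = U;
        for i = 1:m
            j = find(U(i, :), 1);        % column of first nonzero = pivot
            if isempty(j)
                D(i, i) = 0;             % zero row: 0 on the diagonal
            else
                D(i, i) = U(i, j);
                Unew(i, :) = U(i, :) / U(i, j);   % pivot becomes 1
            end
        end
    end

On Example 2's U this returns D = diag(1, 5, 1) and the rescaled U shown below.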

Here's an illustration using the matrices from Example 2:

    A = [ 1  0  0  2   3  1 ]   [ 1   0   0 ] [ 1  0  0  2   3  1 ]
        [ 2  0  5  4  16  7 ] = [ 2   1   0 ] [ 0  0  5  0  10  5 ]             (L, old U)
        [ 1  0  2  2   8  4 ]   [ 1  2/5  1 ] [ 0  0  0  0   1  1 ]

                                [ 1   0   0 ] [ 1  0  0 ] [ 1  0  0  2  3  1 ]
                              = [ 2   1   0 ] [ 0  5  0 ] [ 0  0  1  0  2  1 ]  (L, D, new U)
                                [ 1  2/5  1 ] [ 0  0  1 ] [ 0  0  0  0  1  1 ]

A Useful Related Decomposition

Sometimes row interchanges are simply unavoidable in reducing A to echelon form. Other times, even when row interchanges can be avoided, they are desirable in computer computations to reduce roundoff errors (see the Numerical Note about partial pivoting in the text on p. 17). So what happens if row interchanges are used? We can still get a factorization similar to A = LU, and every bit as useful for solving equations. Here's how. (Note: we will again not allow rescaling of rows, but the preceding example suggests a way you could bring rescalings into the picture if you wanted to.)

1) Look at the steps you will use to row reduce A to an echelon form U, and note what row interchanges will be used. Then go back to the beginning and rearrange the rows of A: do all these row interchanges first. The result is a rearranged A.

Look at this in more detail. Suppose the row interchanges we use are represented by the elementary matrices E1, ..., Ek. Then the rearranged A is just Ek ... E1 A = PA, where P = Ek ... E1. Since P = Ek ... E1 = Ek ... E1 I, we see that P is just the final result when those same row interchanges are performed on I; that is, P is just I with some rows rearranged. P is called a permutation matrix; the multiplication PA rearranges ("permutes") the rows of A in exactly the same way that the rows of I were permuted to create P. A permutation matrix P is the same thing as a square matrix with exactly one 1 in each row and exactly one 1 in each column. (Why?)

P is a product of invertible (elementary) matrices, so P is invertible, and P^(-1) = E1^(-1) ... Ek^(-1) is also a permutation matrix (why? what does each Ei^(-1) do to I?). If you know the steps used to write down P, it's just as easy to write down the matrix P^(-1). Or, perhaps you can convince yourself that P^(-1) = P^T. Since P^(-1) P A = A, multiplication by P^(-1) rearranges the rows of PA to restore the original matrix A.
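A small added MATLAB illustration of these facts, for the interchange of rows 1 and 2 of the 3 x 3 identity:

    P = eye(3);
    P = P([2 1 3], :);           % perform the interchange on I to get P
    A = [10 20 30; 40 50 60; 70 80 90];
    P * A                        % rows of A interchanged the same way
    P * P'                       % the identity, so P^(-1) = P' here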

2) We now have PA (the rearranged A), which can be row reduced to an echelon form without using any further row interchanges. Therefore we can use our previous method to get an LU decomposition of PA:

    PA = LU

3) Then A = (P^(-1) L) U. Here, P^(-1) L is a permuted lower triangular matrix, meaning lower triangular with some rows interchanged. MATLAB Help gives P^(-1) L the cute name "psychologically lower triangular matrix," a term never used anywhere else, to my knowledge.

This factorization is just as useful for equation solving as the LU decomposition. For example, we can solve Ax = (P^(-1) L) Ux = b by first substituting y = Ux. It's still easy to solve (P^(-1) L) y = b, because P^(-1) L is psychologically lower triangular (see the illustration below), and then, knowing y, it's easy to solve Ux = y, because U is in echelon form.

For example, suppose (P^(-1) L) y = b is the equation

    [ 2  1  0  0 ] [ y1 ]   [ 0 ]
    [ 1  0  0  0 ] [ y2 ] = [ 1 ]        (a permuted lower triangular matrix)
    [ 2  2  1  1 ] [ y3 ]   [ 1 ]
    [ 1  1  1  0 ] [ y4 ]   [ 0 ]

We can simply imitate forward substitution, but solve using the rows in the order that (rearranged) would make a lower triangular matrix. That is:

    y1 = 1, from the second row; then
    y2 = 0 - 2 y1 = -2, from the first row; then
    y3 = 0 - y1 - y2 = 1, from the fourth row; then
    y4 = 1 - 2 y1 - 2 y2 - y3 = 2, from the third row.

(In some books or applications, when you see A = LU it might even be assumed that L is permuted lower triangular rather than lower triangular. Be careful when reading other books to check what the author means.)

OR, to solve Ax = (P^(-1) L) Ux = b, you can reason as follows. The equation PAx = Pb has exactly the same solutions as Ax = b, because if PAx = Pb is true, then multiplying both sides by P^(-1) shows that x is a solution of Ax = b as well, and vice versa. If you think in terms of writing out the linear systems of equations: PAx = Pb is the same system of equations as Ax = b, but with the equations listed in some other interchanged order determined by the permutation matrix P. Since we have an LU decomposition for PA, solving PAx = LUx = Pb for x is easy.
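Forward substitution "in the rearranged row order" can be automated. In the added MATLAB sketch below, the made-up bookkeeping vector ord records which row of the permuted matrix plays the role of row k of a lower triangular matrix (for the 4 x 4 example above, ord = [2 1 4 3]):

    PL  = [2 1 0 0; 1 0 0 0; 2 2 1 1; 1 1 1 0];
    b   = [0; 1; 1; 0];
    ord = [2 1 4 3];              % row ord(k) determines y(k)
    y   = zeros(4,1);
    for k = 1:4
        r = ord(k);
        y(k) = (b(r) - PL(r, 1:k-1) * y(1:k-1)) / PL(r, k);
    end
    % y comes out [1; -2; 1; 2], matching the hand computation above.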

11 " " # " " " Example 3 Let E œ Ö ÙÞ " % % Õ! " " 1) On scratch paper, do enough steps row reducing E to echelon form ( no row rescaling allowed, and, following standard procedure, only adding multiples of rows to lower rows ) to see what row interchanges, if any, are necessary. It turns out that only one, interchanging rows 2 and 4 at a certain stage, will be enough. 2) So go back to the start and perform that row interchange first, creating " " #!! "! " TE œ Ö Ù, where T œ Ö Ù " % %!! "! Õ" " " Õ! 3) Row reduce TE to an echelon form Y, keeping track of the EROs used: " " # " " # " " # " " #! " " Ð"Ñ! " " Ð#Ñ! " " ÐÑ! " " Ö Ù µ Ö Ù µ Ö Ù µ Ö Ù " % %! 2! 2!! 1 Õ" " " Õ" " " Õ!! Õ!! " " # Ð%Ñ! " " µ Ö Ù œ Y!! 1 Õ!!! The row operations were (1) add row(1) to row 3 (2) add row(1) to row 4 (3) add 3 row(2) to row Ð%Ñ add 1*row(3) to row 4 Using the method described earlier to find P more efficiently we can write down P as we row reduce TE À * * * * Ö Ù Ä Ö Ù Ä Ö Ù Ä Ö Ù * * 1 0 " * 1 0 " * 1 0 " 1 0 Õ * * * 1 Õ * * * 1 Õ" * * 1 Õ" 1

    -> [ 1  0  0  0 ]    -> [ 1  0  0  0 ]
       [ *  1  0  0 ]       [ 0  1  0  0 ]  =  L
       [ 1  3  1  0 ]       [ 1  3  1  0 ]
       [ 1  *  1  1 ]       [ 1  0  1  1 ]

Then PA = LU:

    [ 1  1  2 ]   [ 1  0  0  0 ] [ 1  1  2 ]
    [ 0  1  1 ] = [ 0  1  0  0 ] [ 0  1  1 ]
    [ 1  4  4 ]   [ 1  3  1  0 ] [ 0  0 -1 ]
    [ 1  1  1 ]   [ 1  0  1  1 ] [ 0  0  0 ]

To solve

    Ax = [ 2 ]
         [ 1 ]
         [ 7 ]
         [ 2 ]

write PAx = LUx = Pb = [ 2; 2; 7; 1 ] (rows 2 and 4 of b interchanged, as was done for A earlier). We then solve LUx = [ 2; 2; 7; 1 ] in the same way as before.

For those who use MATLAB

MATLAB has a command

    [L, U, P] = lu(A)

(lu is lowercase in MATLAB). For beginners, anyway, I recommend only using it when A is square. It should give you

    1) a square unit lower triangular L,
    2) U in echelon form, and
    3) a permutation matrix P for which PA = LU.

If it's possible simply to factor A = LU, then MATLAB might just give P = I, the identity matrix. However, sometimes MATLAB will permute some rows even when it's not mathematically necessary (because doing so, in large complicated problems, helps reduce roundoff errors).

If A is not square, then MATLAB still gives a result for which PA = LU is true, but the shapes of the matrices may vary from our discussion.
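For instance, running lu on Example 1's matrix (an added illustration):

    % MATLAB may choose row swaps for numerical reasons even though none
    % are needed mathematically, so its L, U, P need not match the hand
    % computation; but P*A = L*U always holds, and the solution agrees.
    A = [1 2 3; 2 1 0; 0 4 2];
    [L, U, P] = lu(A);
    norm(P*A - L*U)              % 0, up to roundoff
    b = [2; 1; 3];
    x = U \ (L \ (P*b))          % [1/6; 2/3; 1/6], as in Example 1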

Here's a small-scale illustration of another way in which the LU decomposition can be helpful in some large, applied computations.

A band matrix is a square n x n matrix in which all the nonzero entries lie in a band that surrounds the main diagonal. For example, in the following matrix all nonzero entries are in a band that consists of one superdiagonal above the main diagonal and two subdiagonals below it. The number of superdiagonals in the band is called the upper bandwidth (1, in the example below), and the number of subdiagonals in the band is the lower bandwidth (2, in the example). The total number of diagonals in the band, (upper bandwidth) + (lower bandwidth) + 1, is called the bandwidth (4, in the example):

    [ 1  1  0  0  0 ]
    [ 2  1  4  0  0 ]
    [ 7  3  2  1  0 ]      (and all other entries = 0)
    [ 0  2  1  1  1 ]
    [ 0  0  1  2  4 ]

A diagonal matrix such as

    [ 3  0  0 ]
    [ 0  2  0 ]
    [ 0  0  6 ]

is an extreme example, one for which bandwidth = 1.

Use of the term band matrix is a bit subjective. In fact, we could think of any matrix as a band matrix: if, say, all entries are nonzero, then the matrix is just one that has the largest possible bandwidth. In practice, however, the term band matrix is used to refer to a matrix in which the bandwidth is relatively small compared to the size of the matrix. Therefore the matrix is sparse: a relatively large number of its entries are 0.

Look at the simple problem about steady heat flow in Exercise 33 in Section 1.1 of the text; a more complicated version of the problem then occurs in Exercise 31, Section 2.5.

Given the temperatures at the nodes around the edge of the plate, we want to approximate the temperatures x1, ..., x8 at the interior nodes numbered 1, ..., 8. This leads, as in Exercise 33 of Section 1.1, to an equation Ax = b with a band matrix A:

    [  4 -1 -1  0  0  0  0  0 ]       [  5 ]
    [ -1  4  0 -1  0  0  0  0 ]       [ 15 ]
    [ -1  0  4 -1 -1  0  0  0 ]       [  0 ]
    [  0 -1 -1  4  0 -1  0  0 ] x  =  [ 10 ]
    [  0  0 -1  0  4 -1 -1  0 ]       [  0 ]
    [  0  0  0 -1 -1  4  0 -1 ]       [ 10 ]
    [  0  0  0  0 -1  0  4 -1 ]       [ 20 ]
    [  0  0  0  0  0 -1 -1  4 ]       [ 30 ]

If we can find an LU decomposition of A, then L and U will also be band matrices. (Why? Think about how you find L and U from A.) Using the MATLAB command [L, U, P] = lu(A) gives an LU decomposition. Rounded to 4 decimal places, we get

    L = [  1.0000       0       0       0       0       0       0       0 ]
        [ -0.2500  1.0000       0       0       0       0       0       0 ]
        [ -0.2500 -0.0667  1.0000       0       0       0       0       0 ]
        [       0 -0.2667 -0.2857  1.0000       0       0       0       0 ]
        [       0       0 -0.2679 -0.0833  1.0000       0       0       0 ]
        [       0       0       0 -0.2917 -0.2921  1.0000       0       0 ]
        [       0       0       0       0 -0.2697 -0.0861  1.0000       0 ]
        [       0       0       0       0       0 -0.2948 -0.2931  1.0000 ]

    U = [ 4.0000 -1.0000 -1.0000       0       0       0       0       0 ]
        [      0  3.7500 -0.2500 -1.0000       0       0       0       0 ]
        [      0       0  3.7333 -1.0667 -1.0000       0       0       0 ]
        [      0       0       0  3.4286 -0.2857 -1.0000       0       0 ]
        [      0       0       0       0  3.7083 -1.0833 -1.0000       0 ]
        [      0       0       0       0       0  3.3919 -0.2921 -1.0000 ]
        [      0       0       0       0       0       0  3.7052 -1.0861 ]
        [      0       0       0       0       0       0       0  3.3868 ]

The LU decomposition can be used to solve Ax = b, as we showed earlier in these notes. The important thing in this example is not actually finding the solution, which turns out to be
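The computation can be reproduced with a few added MATLAB lines (a sketch; the diag-based construction below just rebuilds the matrix displayed above):

    A = 4*eye(8) - diag([1 0 1 0 1 0 1], 1) - diag([1 0 1 0 1 0 1], -1) ...
        - diag(ones(1,6), 2) - diag(ones(1,6), -2);
    b = [5; 15; 0; 10; 0; 10; 20; 30];
    [L, U, P] = lu(A);           % here P happens to be the identity
    x = U \ (L \ (P*b));         % matches the solution below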

    x = [  3.9569 ]
        [  6.5885 ]
        [  4.2392 ]
        [  7.3971 ]
        [  5.6029 ]
        [  8.7608 ]
        [  9.4115 ]
        [ 12.0431 ]

What's interesting and useful to notice:

  • From earlier in these notes: if we need to solve Ax = b for many different vectors b, then factoring A = LU saves time.

  • A new observation: for a very large n x n sparse band matrix A (perhaps thousands of rows and columns, but with a relatively narrow band of nonzero entries), we can store A in much less computer space than is needed for a general n x n matrix. We only need to store the entries in the upper and lower bands, plus two numbers for the upper and lower bandwidths; there is no need to waste storage space on all the other matrix entries, which are known to be 0's. Since L and U are also sparse band matrices, they are also relatively economical to store.

  • Using A, L and U can let us avoid computations that use A^(-1). Even if A is a sparse band matrix, A^(-1) may have no band structure and take up a lot more storage space than A, L, and U. For instance, in the preceding example,

    A^(-1) = [ 0.2953  0.0866  0.0945  0.0509  0.0318  0.0227  0.0100  0.0082 ]
             [ 0.0866  0.2953  0.0509  0.0945  0.0227  0.0318  0.0082  0.0100 ]
             [ 0.0945  0.0509  0.3271  0.1093  0.1045  0.0591  0.0318  0.0227 ]
             [ 0.0509  0.0945  0.1093  0.3271  0.0591  0.1045  0.0227  0.0318 ]
             [ 0.0318  0.0227  0.1045  0.0591  0.3271  0.1093  0.0945  0.0509 ]
             [ 0.0227  0.0318  0.0591  0.1045  0.1093  0.3271  0.0509  0.0945 ]
             [ 0.0100  0.0082  0.0318  0.0227  0.0945  0.0509  0.2953  0.0866 ]
             [ 0.0082  0.0100  0.0227  0.0318  0.0509  0.0945  0.0866  0.2953 ]

    which has no zero entries at all.
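A quick added check of the storage claim, counting nonzero entries with MATLAB's nnz (this block assumes A from the sketch above):

    S = sparse(A);                 % store only the nonzero entries
    [Ls, Us, Ps] = lu(S);
    [nnz(S), nnz(Ls), nnz(Us), nnz(inv(S))]
    % expect 28, 21, 21, 64: L and U stay inside the band; inv(A) is dense.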
