LINEAR ALGEBRA. Paul Dawkins

Size: px
Start display at page:

Download "LINEAR ALGEBRA. Paul Dawkins"

Transcription

1 LINEAR ALGEBRA Paul Dawkis

2 Table of Cotets Preface... ii Outlie... iii Systems of Equatios ad Matrices... Itroductio... Systems of Equatios... Solvig Systems of Equatios... 5 Matrices... 7 Matrix Arithmetic & Operatios... Properties of Matrix Arithmetic ad the Traspose Iverse Matrices ad Elemetary Matrices Fidig Iverse Matrices Special Matrices LU-Decompositio Systems Revisited... 8 Determiats Itroductio The Determiat Fuctio... 9 Properties of Determiats...00 The Method of Cofactors...07 Usig Row Reductio To Compute Determiats...5 Cramer s Rule... Euclidea -Space... 5 Itroductio...5 Vectors...6 Dot Product & Cross Product...40 Euclidea -Space...54 Liear Trasformatios...6 Examples of Liear Trasformatios...7 Vector Spaces... 8 Itroductio...8 Vector Spaces...8 Subspaces...9 Spa...0 Liear Idepedece... Basis ad Dimesio... Chage of Basis...9 Fudametal Subspaces...5 Ier Product Spaces...6 Orthoormal Basis...7 Least Squares...8 QR-Decompositio...9 Orthogoal Matrices...99 Eigevalues ad Eigevectors Itroductio...05 Review of Determiats...06 Eigevalues ad Eigevectors...5 Diagoalizatio Paul Dawkis i

3 Preface Here are my olie otes for my Liear Algebra course that I teach here at Lamar Uiversity. Despite the fact that these are my class otes they should be accessible to ayoe watig to lear Liear Algebra or eedig a refresher. These otes do assume that the reader has a good workig kowledge of basic Algebra. This set of otes is fairly self cotaied but there is eough Algebra type problems (arithmetic ad occasioally solvig equatios) that ca show up that ot havig a good backgroud i Algebra ca cause the occasioal problem. Here are a couple of warigs to my studets who may be here to get a copy of what happeed o a day that you missed.. Because I wated to make this a fairly complete set of otes for ayoe watig to lear Liear Algebra I have icluded some material that I do ot usually have time to cover i class ad because this chages from semester to semester it is ot oted here. You will eed to fid oe of your fellow class mates to see if there is somethig i these otes that was t covered i class.. I geeral I try to work problems i class that are differet from my otes. However, with a Liear Algebra course while I ca make up the problems off the top of my head there is o guaratee that they will work out icely or the way I wat them to. So, because of that my class work will ted to follow these otes fairly close as far as worked problems go. With that beig said I will, o occasio, work problems off the top of my head whe I ca to provide more examples tha just those i my otes. Also, I ofte do t have time i class to work all of the problems i the otes ad so you will fid that some sectios cotai problems that were t worked i class due to time restrictios.. Sometimes questios i class will lead dow paths that are ot covered here. I try to aticipate as may of the questios as possible i writig these otes up, but the reality is that I ca t aticipate all the questios. Sometimes a very good questio gets asked i class that leads to isights that I ve ot icluded here. You should always talk to someoe who was i class o the day you missed ad compare these otes to their otes ad see what the differeces are. 4. This is somewhat related to the previous three items, but is importat eough to merit its ow item. THESE NOTES ARE NOT A SUBSTITUTE FOR ATTENDING CLASS!! Usig these otes as a substitute for class is liable to get you i trouble. As already oted ot everythig i these otes is covered i class ad ofte material or isights ot i these otes is covered i class. 007 Paul Dawkis ii

4 Outlie Here is a listig ad brief descriptio of the material i this set of otes. Systems of Equatios ad Matrices Systems of Equatios I this sectio we ll itroduce most of the basic topics that we ll eed i order to solve systems of equatios icludig augmeted matrices ad row operatios. Solvig Systems of Equatios Here we will look at the Gaussia Elimiatio ad Gauss-Jorda Method of solvig systems of equatios. Matrices We will itroduce may of the basic ideas ad properties ivolved i the study of matrices. Matrix Arithmetic & Operatios I this sectio we ll take a look at matrix additio, subtractio ad multiplicatio. We ll also take a quick look at the traspose ad trace of a matrix. Properties of Matrix Arithmetic We will take a more i depth look at may of the properties of matrix arithmetic ad the traspose. Iverse Matrices ad Elemetary Matrices Here we ll defie the iverse ad take a look at some of its properties. We ll also itroduce the idea of Elemetary Matrices. Fidig Iverse Matrices I this sectio we ll develop a method for fidig iverse matrices. Special Matrices We will itroduce Diagoal, Triagular ad Symmetric matrices i this sectio. LU-Decompositios I this sectio we ll itroduce the LU-Decompositio a way of factorig certai kids of matrices. Systems Revisited Here we will revisit solvig systems of equatios. We will take a look at how iverse matrices ad LU-Decompositios ca help with the solutio process. We ll also take a look at a couple of other ideas i the solutio of systems of equatios. Determiats The Determiat Fuctio We will give the formal defiitio of the determiat i this sectio. We ll also give formulas for computig determiats of ad matrices. Properties of Determiats Here we will take a look at quite a few properties of the determiat fuctio. Icluded are formulas for determiats of triagular matrices. The Method of Cofactors I this sectio we ll take a look at the first of two methods form computig determiats of geeral matrices. Usig Row Reductio to Fid Determiats Here we will take a look at the secod method for computig determiats i geeral. Cramer s Rule We will take a look at yet aother method for solvig systems. This method will ivolve the use of determiats. Euclidea -space Vectors I this sectio we ll itroduce vectors i -space ad -space as well as some of the importat ideas about them. 007 Paul Dawkis iii

5 Dot Product & Cross Product Here we ll look at the dot product ad the cross product, two importat products for vectors. We ll also take a look at a applicatio of the dot product. Euclidea -Space We ll itroduce the idea of Euclidea -space i this sectio ad exted may of the ideas of the previous two sectios. Liear Trasformatios I this sectio we ll itroduce the topic of liear trasformatios ad look at may of their properties. Examples of Liear Trasformatios We ll take a look at quite a few examples of liear trasformatios i this sectio. Vector Spaces Vector Spaces I this sectio we ll formally defie vectors ad vector spaces. Subspaces Here we will be lookig at vector spaces that live iside of other vector spaces. Spa The cocept of the spa of a set of vectors will be ivestigated i this sectio. Liear Idepedece Here we will take a look at what it meas for a set of vectors to be liearly idepedet or liearly depedet. Basis ad Dimesio We ll be lookig at the idea of a set of basis vectors ad the dimesio of a vector space. Chage of Basis I this sectio we will see how to chage the set of basis vectors for a vector space. Fudametal Subspaces Here we will take a look at some of the fudametal subspaces of a matrix, icludig the row space, colum space ad ull space. Ier Product Spaces We will be lookig at a special kid of vector spaces i this sectio as well as defie the ier product. Orthoormal Basis I this sectio we will develop ad use the Gram-Schmidt process for costructig a orthogoal/orthoormal basis for a ier product space. Least Squares I this sectio we ll take a look at a applicatio of some of the ideas that we will be discussig i this chapter. QR-Decompositio Here we will take a look at the QR-Decompositio for a matrix ad how it ca be used i the least squares process. Orthogoal Matrices We will take a look at a special kid of matrix, the orthogoal matrix, i this sectio. Eigevalues ad Eigevectors Review of Determiats I this sectio we ll do a quick review of determiats. Eigevalues ad Eigevectors Here we will take a look at the mai sectio i this chapter. We ll be lookig at the cocept of Eigevalues ad Eigevectors. Diagoalizatio We ll be lookig at diagoalizable matrices i this sectio. 007 Paul Dawkis iv

6 Systems of Equatios ad Matrices Itroductio We will start this chapter off by lookig at the applicatio of matrices that almost every book o Liear Algebra starts off with, solvig systems of liear equatios. Lookig at systems of equatios will allow us to start gettig used to the otatio ad some of the basic maipulatios of matrices that we ll be usig ofte throughout these otes. Oce we ve looked at solvig systems of liear equatios we ll move ito the basic arithmetic of matrices ad basic matrix properties. We ll also take a look at a couple of other ideas about matrices that have some ice applicatios to the solutio to systems of equatios. Oe word of warig about this chapter, ad i fact about this complete set of otes for that matter, we ll start out i the first sectio or to doig a lot of the details i the problems, but towards the ed of this chapter ad ito the remaiig chapters we will leave may of the details to you to check. We start off by doig lots of details to make sure you are comfortable workig with matrices ad the various operatios ivolvig them. However, we will evetually assume that you ve become comfortable with the details ad ca check them o your ow. At that poit we will quit showig may of the details. Here is a listig of the topics i this chapter. Systems of Equatios I this sectio we ll itroduce most of the basic topics that we ll eed i order to solve systems of equatios icludig augmeted matrices ad row operatios. Solvig Systems of Equatios Here we will look at the Gaussia Elimiatio ad Gauss- Jorda Method of solvig systems of equatios. Matrices We will itroduce may of the basic ideas ad properties ivolved i the study of matrices. Matrix Arithmetic & Operatios I this sectio we ll take a look at matrix additio, subtractio ad multiplicatio. We ll also take a quick look at the traspose ad trace of a matrix. Properties of Matrix Arithmetic We will take a more i depth look at may of the properties of matrix arithmetic ad the traspose. Iverse Matrices ad Elemetary Matrices Here we ll defie the iverse ad take a look at some of its properties. We ll also itroduce the idea of Elemetary Matrices. Fidig Iverse Matrices I this sectio we ll develop a method for fidig iverse matrices. Special Matrices We will itroduce Diagoal, Triagular ad Symmetric matrices i this sectio. 007 Paul Dawkis

7 LU-Decompositios I this sectio we ll itroduce the LU-Decompositio a way of factorig certai kids of matrices. Systems Revisited Here we will revisit solvig systems of equatios. We will take a look at how iverse matrices ad LU-Decompositios ca help with the solutio process. We ll also take a look at a couple of other ideas i the solutio of systems of equatios. 007 Paul Dawkis

8 Systems of Equatios Let s start off this sectio with the defiitio of a liear equatio. Here are a couple of examples of liear equatios. 5 6x 8y+ 0z = 7x x = 9 I the secod equatio ote the use of the subscripts o the variables. This is a commo otatioal device that will be used fairly extesively here. It is especially useful whe we get ito the geeral case(s) ad we wo t kow how may variables (ofte called ukows) there are i the equatio. So, just what makes these two equatios liear? There are several mai poits to otice. First, the ukows oly appear to the first power ad there are t ay ukows i the deomiator of a fractio. Also otice that there are o products ad/or quotiets of ukows. All of these ideas are required i order for a equatio to be a liear equatio. Ukows oly occur i umerators, they are oly to the first power ad there are o products or quotiets of ukows. The most geeral liear equatio is, ax + ax + ax = b () where there are ukows, x, x,, x, ad a, a,, a, b are all kow umbers. Next we eed to take a look at the solutio set of a sigle liear equatio. A solutio set (or ofte just solutio) for () is a set of umbers t, t,, t so that if we set x = t, x = t,, x = t the () will be satisfied. By satisfied we mea that if we plug these umbers ito the left side of () ad do the arithmetic we will get b as a aswer. The first thig to otice about the solutio set to a sigle liear equatio that cotais at least two variables with o-zero coefficets is that we will have a ifiite umber of solutios. We will also see that while there are ifiitely may possible solutios they are all related to each other i some way. Note that if there is oe or less variables with o-zero coefficiets the there will be a sigle solutio or o solutios depedig upo the value of b. Let s fid the solutio sets for the two liear equatios give at the start of this sectio. Example Fid the solutio set for each of the followig liear equatios. 5 (a) 7x x = [Solutio] 9 (b) 6x 8y+ 0z = [Solutio] Solutio 5 (a) 7x x = 9 The first thig that we ll do here is solve the equatio for oe of the two ukows. It does t matter which oe we solve for, but we ll usually try to pick the oe that will mea the least 007 Paul Dawkis

9 amout (or at least simpler) work. I this case it will probably be slightly easier to solve for x so let s do that. 5 7x x = 9 5 7x = x 9 5 x = x 6 7 Now, what this tells us is that if we have a value for x the we ca determie a correspodig value for x. Sice we have a sigle liear equatio there is othig to restrict our choice of x ad so we we ll let x be ay umber. We will usually write this as x = t, where t is ay umber. Note that there is othig special about the t, this is just the letter that I usually use i these cases. Others ofte use s for this letter ad, of course, you could choose it to be just about aythig as log as it s ot a letter represetig oe of the ukows i the equatio (x i this case). Oce we ve chose x we ll write the geeral solutio set as follows, 5 x = t x = t 6 7 So, just what does this tell us as far as actual umber solutios go? We ll choose ay value of t ad plug i to get a pair of umbers x ad x that will satisfy the equatio. For istace pickig a couple of values of t completely at radom gives, t = 0: x =, x = 0 7 t = 7 : 5 x = ( 7) =, x = We ca easily check that these are i fact solutios to the equatio by pluggig them back ito the equatio. 5 t = 0: 7 ( 0) = t = 7 : 7( ) ( 7) = 9 So, for each case whe we plugged i the values we got for x ad x we got - out of the equatio as we were supposed to. Note that sice there a ifiite umber of choices for t there are i fact a ifiite umber of possible solutios to this liear equatio. [Retur to Problems] 007 Paul Dawkis 4

10 (b) 6x 8y+ 0z = We ll do this oe with a little less detail sice it works i essetially the same maer. The fact that we ow have three ukows will chage thigs slightly but ot overly much. We will first solve the equatio for oe of the variables ad agai it wo t matter which oe we chose to solve for. 0z = 6x+ 8y 4 z = x+ y I this case we will eed to kow values for both x ad y i order to get a value for z. As with the first case, there is othig i this problem to restrict out choices of x ad y. We ca therefore let them be ay umber(s). I this case we ll choose x = t ad y = s. Note that we chose differet letters here sice there is o reaso to thik that both x ad y will have exactly the same value (although it is possible for them to have the same value). The solutio set to this liear equatio is the, 4 x = t y = s z = t+ s So, if we choose ay values for t ad s we ca get a set of umber solutios as follows. 4 x= 0 y = z = ( 0) + ( ) = x= y = 5 z = + ( 5) = As with the first part if we take either set of three umbers we ca plug them ito the equatio to verify that the equatio will be satisfied. We ll do oe of them ad leave the other to you to check ( 5) + 0 = = 5 [Retur to Problems] The variables that we got to choose for values for ( x i the first example ad x ad y i the secod) are sometimes called free variables. We ow eed to start talkig about the actual topic of this sectio, systems of liear equatios. A system of liear equatios is othig more tha a collectio of two or more liear equatios. Here are some examples of systems of liear equatios. 007 Paul Dawkis 5

11 x+ y = 9 x y = 4x 5x + x = 9 6x + x = 9 x + 0x = 5x x = 7 7x x 4x = 5 x 0x =4 x x + x x + x = 4 5 x + x x + 9x = 0 4 7x + 0x + x + 6x 9x =7 4 5 As we ca see from these examples systems of equatio ca have ay umber of equatios ad/or ukows. The system may have the same umber of equatios as ukows, more equatios tha ukows, or fewer equatios tha ukows. A solutio set to a system with ukows, x, x,, x, is a set of umbers, t, t,, t, so that if we set x = t, x = t,, x = t the all of the equatios i the system will be satisfied. Or, i other words, the set of umbers t, t,, t is a solutio to each of the idividual equatios i the system. For example, x =, y = 5 is a solutio to the first system listed above, x+ y = 9 x y = because, ( ) + ( 5) = 9 & ( ) ( 5) = () However, x = 5, y = is ot a solutio to the system because, ( 5) + ( ) = 9 & ( 5) ( ) = We ca see from these calculatios that x = 5, y = is NOT a solutio to the first equatio, but it IS a solutio to the secod equatio. Sice this pair of umbers is ot a solutio to both of the equatios i () it is ot a solutio to the system. The fact that it s a solutio to oe of them is t material. I order to be a solutio to the system the set of umbers must be a solutio to each ad every equatio i the system. It is completely possible as well that a system will ot have a solutio at all. Cosider the followig system. x 4y = 0 x 4y = () It is clear (hopefully) that this system of equatios ca t possibly have a solutio. A solutio to this system would have to be a pair of umbers x ad y so that if we plugged them ito each equatio it will be a solutio to each equatio. However, sice the left side is idetical this would mea that we d eed a x ad a y so that x 4y is both 0 ad - for the exact same pair of umbers. This clearly ca t happe ad so () does ot have a solutio. 007 Paul Dawkis 6

12 Likewise, it is possible for a system to have more tha oe solutio, although we do eed to be careful here as we ll see. Let s take a look at the followig system. x+ y = 8 (4) 8x 4y = We ll leave it to you to verify that all of the followig are four of the ifiitely may solutios to the first equatio i this system. x= 0, y = 8 x=, y =, x= 4, y = 0 x= 5, y = 8 Recall from our work above that there will be ifiitely may solutios to a sigle liear equatio. We ll also leave it to you to verify that these four solutios are also four of the ifiitely may solutios to the secod equatio i (4). Let s ivestigate this a little more. Let s just fid the solutio to the first equatio (we ll worry about the secod equatio i a secod). Followig the work we did i Example we ca see that the ifiitely may solutios to the first equatio i (4) are x= t y = t+ 8, t is ay umber Now, if we also fid just the solutios to the secod equatio i (4) we get x= t y = t+ 8, t is ay umber These are exactly the same! So, this meas that if we have a actual umeric solutio (foud by choosig t above ) to the first equatio it will be guarateed to also be a solutio to the secod equatio ad so will be a solutio to the system (4). This meas that we i fact have ifiitely may solutios to (4). Let s take a look at the three systems we ve bee workig with above i a little more detail. This will allow us to see a couple of ice facts about systems. Sice each of the equatios i (),(), ad (4) are liear i two ukows (x ad y) the graph of each of these equatios is that of a lie. Let s graph the pair of equatios from each system o the same graph ad see what we get. 007 Paul Dawkis 7

13 From the graph of the equatios for system () we ca see that the two lies itersect at the poit (,5) ad otice that, as a poit, this is the solutio to the system as well. I other words, i this case the solutio to the system of two liear equatios ad two ukows is simply the itersectio poit of the two lies. Note that this idea is validated i the solutio to systems () ad (4). System () has o solutio ad we ca see from the graph of these equatios that the two lies are parallel ad hece will ever itersect. I system (4) we had ifiitely may solutios ad the graph of these equatios shows us that they are i fact the same lie, or i some ways they itersect at a ifiite umber of poits. Now, to this poit we ve bee lookig at systems of two equatios with two ukows but some of the ideas we saw above ca be exteded to geeral systems of equatios with m ukows. First, there is a ice geometric iterpretatio to the solutio of systems with equatios i two or three ukows. Note that the umber of equatios that we ve got wo t matter the iterpretatio will be the same. 007 Paul Dawkis 8

14 If we ve got a system of liear equatios i two ukows the the solutio to the system represets the poit(s) where all (ot some but ALL) the lies will itersect. If there is o solutio the the lies give by the equatios i the system will ot itersect at a sigle poit. Note i the o solutio case if there are more tha two equatios it may be that ay two of the equatios will itersect, but there wo t be a sigle poit were all of the lies will itersect. If we ve got a system of liear equatios i three ukows the the graphs of the equatios will be plaes i D-space ad the solutio to the system will represet the poit(s) where all the plaes will itersect. If there is o solutio the there are o poit(s) where all the plaes give by the equatios of the system will itersect. As with lies, it may be i this case that ay two of the plaes will itersect, but there wo t be ay poit where all of the plaes itersect at that poit. O a side ote we should poit out that lies ca itersect at a sigle poit or if the equatios give the same lie we ca thik of them as itersectig at ifiitely may poits. Plaes ca itersect at a poit or o a lie (ad so will have ifiitely may itersectio poits) ad if the equatios give the same plae we ca thik of the plaes as itersectig at ifiitely may places. We eed to be a little careful about the ifiitely may itersectio poits case. Whe we re dealig with equatios i two ukows ad there are ifiitely may solutios it meas that the equatios i the system all give the same lie. However, whe dealig with equatios i three ukows ad we ve got ifiitely may solutios we ca have oe of two cases. Either we ve got plaes that itersect alog a lie, or the equatios will give the same plae. For systems of equatios i more tha three variables we ca t graph them so we ca t talk about a geometric iterpretatio, but we ca still say that a solutio to such a system will represet the poit(s) where all the equatios will itersect eve if we ca t visualize such a itersectio poit. From the geometric iterpretatio of the solutio to two equatios i two ukows we kow that we have oe of three possible solutios. We will have either o solutio (the lies are parallel), oe solutio (the lies itersect at a sigle poit) or ifiitely may solutios (the equatios are the same lie). There is simply o other possible umber of solutios sice two lies that itersect will either itersect exactly oce or will be the same lie. It turs out that this is i fact the case for a geeral system. Theorem Give a system of equatios ad m ukows there will be oe of three possibilities for solutios to the system.. There will be o solutio.. There will be exactly oe solutio.. There will be ifiitely may solutios. If there is o solutio to the system we call the system icosistet ad if there is at least oe solutio to the system we call it cosistet. Now that we ve got some of the basic ideas about systems take care of we eed to start thikig about how to use liear algebra to solve them. Actually that s ot quite true. We re ot goig to do ay solvig util the ext sectio. I this sectio we just wat to get some of the basic otatio ad ideas ivolved i the solvig process out of the way before we actually start tryig to solve them. 007 Paul Dawkis 9

15 We re goig to start off with a simplified way of writig the system of equatios. For this we will eed the followig geeral system of equatios ad m ukows. a x + a x + + a x = b m m a x + a x + + a x = b m m a x + a x + + a x = b m m (5) I this system the ukows are x, x,, xm ad the a ij ad b i are kow umbers. Note as well how we ve subscripted the coefficiets of the ukows (the a ij). The first subscript, i, deotes the equatio that the subscript is i ad the secod subscript, j, deotes the ukow that it multiples. For istace, a 6 would be i the coefficiet of x 6 i the third equatio. Ay system of equatios ca be writte as a augmeted matrix. A matrix is just a rectagular array of umbers ad we ll be lookig at these i great detail i this course so do t worry too much at this poit about what a matrix is. Here is the augmeted matrix for the geeral system i (5). a a a m b a a am b a a am b Each row of the augmeted matrix cosists of the coefficiets ad costat o the right of the equal sig form a give equatio i the system. The first row is for the first equatio, the secod row is for the secod equatio etc. Likewise each of the first colums of the matrix cosists of the coefficiets from the ukows. The first colum cotais the coefficiets of x, the secod colum cotais the coefficiets of x, etc. The fial colum (the m+ st colum) cotais all the costats o the right of the equal sig. Note that the augmeted part of the ame arises because we tack the b i s oto the matrix. If we do t tack those o ad we just have a a a m a a a m a a am ad we call this the coefficiet matrix for the system. 007 Paul Dawkis 0

16 Example Write dow the augmeted matrix for the followig system. x 0x + 6x x = 4 x + 9x 5x = 4 4x + x 9x + x = 7 4 Solutio There really is t too much to do here other tha write dow the system Notice that the secod equatio did ot cotai a x ad so we cosider its coefficiet to be zero. Note as well that give a augmeted matrix we ca always go back to a system of equatios. Example For the give augmeted matrix write dow the correspodig system of equatios Solutio So sice we kow each row correspods to a equatio we have three equatios i the system. Also, the first two colums represet coefficiets of ukows ad so we ll have two ukows while the third colum cosists of the costats to the right of the equal sig. Here s the system that correspods to this augmeted matrix. 4x x = 5x 8x = 4 9x + x = There is oe fial topic that we eed to discuss i this sectio before we move oto actually solvig systems of equatio with liear algebra techiques. I the ext sectio where we will actually be solvig systems our mai tools will be the three elemetary row operatios. Each of these operatios will operate o a row (which should t be too surprisig give the ame ) i the augmeted matrix ad sice each row i the augmeted matrix correspods to a equatio these operatios have equivalet operatios o equatios. Here are the three row operatios, their equivalet equatio operatios as well as the otatio that we ll be usig to deote each of them. Row Operatio Equatio Operatio Notatio Multiply row i by the costat c Multiply equatio i by the costat c cr i Iterchage rows i ad j Iterchage equatios i ad j Ri R j Add c times row i to row j Add c times equatio i to equatio j R j + cri 007 Paul Dawkis

17 The first two operatios are fairly self explaatory. The third is also a fairly simple operatio however there are a couple thigs that we eed to make clear about this operatio. First, i this operatio oly row (equatio) j actually chages. Eve though we are multiplyig row (equatio) i by c that is doe i our heads ad the results of this multiplicatio are added to row (equatio) j. Also, whe we say that we add c time a row to aother row we really mea that we add correspodig etries of each row. Let s take a look at some examples of these operatios i actio. Example 4 Perform each of the idicated row operatios o give augmeted matrix (a) R [Solutio] (b) R [Solutio] (c) R R [Solutio] (d) R + 5R [Solutio] (e) R R [Solutio] Solutio I each of these we will actually perform both the row ad equatio operatio to illustrate that they are actually the same operatio ad that the ew augmeted matrix we get is i fact the correct oe. For referece purposes the system correspodig to the augmeted matrix give for this problem is, x+ 4x x = 6x x 4x = 0 7x + x x = 5 Note that at each part we will go back to the origial augmeted matrix ad/or system of equatios to perform the operatio. I other words, we wo t be usig the results of the previous part as a startig poit for the curret operatio. (a) R Okay, i this case we re goig to multiply the first row (equatio) by -. This meas that we will multiply each elemet of the first row by - or each of the coefficiets of the first equatio by -. Here is the result of this operatio x x + x = xx 4x = x+ x x = 5 [Retur to Problems] 007 Paul Dawkis

18 (b) R This is similar to the first oe. We will multiply each elemet of the secod row by oe-half or each coefficiet of the secod equatio by oe-half. Here are the results of this operatio. x+ 4x x = 4 5 x x x = x+ x x = 5 Do ot get excited about the fractio showig up. Fractios are goig to be a fact of life with much of the work that we re goig to be doig so get used to seeig them. Note that ofte i cases like this we will say that we divided the secod row by istead of multiplied by oe-half. [Retur to Problems] (c) R R I this case were just goig to iterchage the first ad third row or equatio x+ x x = xx 4x = 0 4 x+ 4x x = [Retur to Problems] (d) R + 5R Okay, we ow eed to work a example of the third row operatio. I this case we will add 5 times the third row (equatio) to the secod row (equatio). So, for the row operatio, i our heads we will multiply the third row times 5 ad the add each etry of the results to the correspodig etry i the secod row. Here are the idividual computatios for this operatio. st etry : = 4 ( )( ) ( )( ) ( )( ) + ( )( ) = d etry : + 5 = 4 rd etry : 4+ 5 =9 th 4 etry : For the correspodig equatio operatio we will multiply the third equatio by 5 to get, 5x+ 5x 5x = 5 the add this to the secod equatio to get, 4x + 4x 9x = 5 Puttig all this together gives ad rememberig that it s the secod row (equatio) that we re actually chagig here gives, 007 Paul Dawkis

19 4 x+ 4x x = x+ 4x 9x = x+ x x = 5 It is importat to remember that whe multiplyig the third row (equatio) by 5 we are doig it i our head ad do t actually chage the third row (equatio). [Retur to Problems] (e) R R I this case we ll ot go ito the detail that we did i the previous part. Most of these types of operatios are doe almost completely i our head ad so we ll do that here as well so we ca start gettig used to it. I this part we are goig to subtract times the secod row (equatio) from the first row (equatio). Here are the results of this operatio x+ 7x + x = xx 4x = x+ x x = 5 It is importat whe doig this work i our heads to be careful of mius sigs. I operatios such as this oe there are ofte a lot of them ad it easy to lose track of oe or more whe you get i a hurry. [Retur to Problems] Okay, we ve ot got most of the basics dow that we ll eed to start solvig systems of liear equatios usig liear algebra techiques so it s time to move oto the ext sectio. 007 Paul Dawkis 4

20 Solvig Systems of Equatios I this sectio we are goig to take a look at usig liear algebra techiques to solve a system of liear equatios. Oce we have a couple of defiitios out of the way we ll see that the process is a fairly simple oe. Well, it s fairly simple to write dow the process ayway. Applyig the process is fairly simple as well but for large systems ca take quite a few steps. So, let s get the defiitios out of the way. A matrix (ay matrix, ot just a augmeted matrix) is said to be i reduced row-echelo form if it satisfies all four of the followig coditios.. If there are ay rows of all zeros the they are at the bottom of the matrix.. If a row does ot cosist of all zeros the its first o-zero etry (i.e. the left most ozero etry) is a. This is called a leadig.. I ay two successive rows, either of which cosists of all zeroes, the leadig of the lower row is to the right of the leadig of the higher row. 4. If a colum cotais a leadig the all the other etries of that colum are zero. A matrix (agai ay matrix) is said to be i row-echelo form if it satisfies items of the reduced row-echelo form defiitio. Notice from these defiitios that a matrix that is i reduced row-echelo form is also i rowechelo form while a matrix i row-echelo form may or may ot be i reduced row-echelo form. Example The followig matrices are all i row-echelo form Noe of the matrices i the previous example are i reduced row-echelo form. The etries that are prevetig these matrices from beig i reduced row-echelo form are highlighted i red ad uderlied (for those without color priters...). I order for these matrices to be i reduced rowechelo form all of these highlighted etries would eed to be zeroes. Notice that we did t highlight the etries above the i the fifth colum of the third matrix. Sice this is ot a leadig (i.e. the leftmost o-zero etry) we do t eed the umbers above it to be zero i order for the matrix to be i reduced row-echelo form. 007 Paul Dawkis 5

21 Example The followig matrices are all i reduced row-echelo form I the secod matrix o the first row we have all zeroes i the etries. This is perfectly acceptable ad so do t worry about it. This matrix is i reduced row-echelo form, the fact that it does t have ay o-zero etries does ot chage that fact sice it satisfies the coditios. Also, i the secod matrix of the secod row otice that the last colum does ot have zeroes above the i that colum. That is perfectly acceptable sice the i that colum is ot a leadig for the fourth row. Notice from Examples ad that the oly real differece betwee row-echelo form ad reduced row-echelo form is that a matrix i row-echelo form is oly required to have zeroes below a leadig while a matrix i reduced row-echelo from must have zeroes both below ad above a leadig. Okay, let s ow start thikig about how to use liear algebra techiques to solve systems of liear equatios. The process is actually quite simple. To solve a system of equatios we will first write dow the augmeted matrix for the system. We will the use elemetary row operatios to reduce the augmeted matrix to either row-echelo form or to reduced row-echelo form. Ay further work that we ll eed to do will deped upo where we stop. If we go all the way to reduced row-echelo form the i may cases we will ot eed to do ay further work to get the solutio ad i those times where we do eed to do more work we will geerally ot eed to do much more work. Reducig the augmeted matrix to reduced rowechelo form is called Gauss-Jorda Elimiatio. If we stop at row-echelo form we will have a little more work to do i order to get the solutio, but it is geerally fairly simple arithmetic. Reducig the augmeted matrix to row-echelo form ad the stoppig is called Gaussia Elimiatio. At this poit we should work a couple of examples. Example Use Gaussia Elimiatio ad Gauss-Jorda Elimiatio to solve the followig system of liear equatios. x+ x x = 4 x+ x + x = x+ x = Solutio Sice we re asked to use both solutio methods o this system ad i order for a matrix to be i 007 Paul Dawkis 6

22 reduced row-echelo form the matrix must also be i row-echelo form. Therefore, we ll start off by puttig the augmeted matrix i row-echelo form, the stop to fid the solutio. This will be Gaussia Elimiatio. After doig that we ll go back ad pick up from row-echelo form ad further reduce the matrix to reduced row echelo form ad at this poit we ll have performed Gauss-Jorda Elimiatio. So, let s start off by gettig the augmeted matrix for this system. 4 0 As we go through the steps i this first example we ll mark the etry(s) that we re goig to be lookig at i each step i red so that we do t lose track of what we re doig. We should also poit out that there are may differet paths that we ca take to get this matrix ito row-echelo form ad each path may well produce a differet row-echelo form of the matrix. Keep this i mid as you work these problems. The path that you take to get this matrix ito row-echelo form should be the oe that you fid the easiest ad that may ot be the oe that the perso ext to you fids the easiest. Regardless of which path you take you are oly allowed to use the three elemetary row operatios that we looked i the previous sectio. So, with that out of the way we eed to make the leftmost o-zero etry i the top row a oe. I this case we could use ay three of the possible row operatios. We could divide the top row by - ad this would certaily chage the red - ito a oe. However, this will also itroduce fractios ito the matrix ad while we ofte ca t avoid them let s ot put them i before we eed to. Next, we could take row three ad add it to row oe, or we could take three times row ad add it to row oe. Either of these would also chage the red - ito a oe. However, this row operatio is the oe that is most proe to arithmetic errors so while it would work let s ot use it uless we eed to. This leaves iterchagig ay two rows. This is a operatio that wo t always work here to get a ito the spot we wat, but whe it does it will usually be the easiest operatio to use. I this case we ve already got a oe i the leftmost etry of the secod row so let s just iterchage the first ad secod rows ad we ll get a oe i the leftmost spot of the first row pretty much for free. Here is this operatio. 4 R R Now, the ext step we ll eed to take is chagig the two umbers i the first colum uder the leadig ito zeroes. Recall that as we move dow the rows the leadig MUST move off to the right. This meas that the two umbers uder the leadig i the first colum will eed to become zeroes. Agai, there are ofte several row operatios that ca be doe to do this. However, i most cases addig multiples of the row cotaiig the leadig (the first row i this case) oto the rows we eed to have zeroes is ofte the easiest. Here are the two row operatios that we ll do i this step. 007 Paul Dawkis 7

23 R + R 4 R R Notice that sice each operatio chaged a differet row we wet ahead ad performed both of them at the same time. We will ofte do this whe multiple operatios will all chage differet rows. We ow eed to chage the red 5 ito a oe. I this case we ll go ahead ad divide the secod row by 5 sice this wo t itroduce ay fractios ito the matrix ad it will give us the umber we re lookig for R Next, we ll use the third row operatio to chage the red -6 ito a zero so the leadig of the third row will move to the right of the leadig i the secod row. This time we ll be usig a multiple of the secod row to do this. Here is the work i this step R + R Notice that i both steps were we eeded to get zeroes below a leadig we added multiples of the row cotaiig the leadig to the rows i which we wated zeroes. This will always work i this case. It may be possible to use other row operatios, but the third ca always be used i these cases. The fial step we eed to get the matrix ito row-echelo form is to chage the red - ito a oe. To do this we do t really have a choice here. Sice we eed the leadig oe i the third row to be i the third or fourth colum (i.e. to the right of the leadig oe i the secod colum) we MUST retai the zeroes i the first ad secod colum of the third row. Iterchagig the secod ad third row would defiitely put a oe i the third colum of the third row, however, it would also chage the zero i the secod colum which we ca t allow. Likewise we could add the first row to the third row ad agai this would put a oe i the third colum of the third row, but this operatio would also chage both of the zeroes i frot of it which ca t be allowed. Therefore, our oly real choice i this case is to divide the third row by -. This will retai the zeroes i the first ad secod colum ad chage the etry i the third colum ito a oe. Note that this step will ofte itroduce fractios ito the matrix, but at this poit that ca t be avoided. Here is the work for this step. 0 6 R At this poit the augmeted matrix is i row-echelo form. So if we re goig to perform 007 Paul Dawkis 8

24 Gaussia Elimiatio o this matrix we ll stop ad go back to equatios. Doig this gives, x+ x + x = 0 6 x + x = x = At this poit solvig is quite simple. I fact we ca see from this that x =. Pluggig this ito the secod equatio gives x = 4. Fially, pluggig both of these ito the first equatio gives x =. Summarizig up the solutio to the system is, x = x = 4 x = This substitutio process is called back substitutio. Now, let s pick back up at the row-echelo form of the matrix ad further reduce the matrix ito reduced row-echelo form. The first step i doig this will be to chage the umbers above the leadig i the third row ito zeroes. Here are the operatios that will do that for us. R R R R The fial step is the to chage the red above the leadig oe i the secod row ito a zero. Here is this operatio R R We are ow i reduced row-echelo form so all we eed to do to perform Gauss-Jorda Elimiatio is to go back to equatios. 0 0 x = x = x = We ca see from this that oe of the ice cosequeces to Gauss-Jorda Elimiatio is that whe there is a sigle solutio to the system there is o work to be doe to fid the solutio. It is geerally give to us for free. Note as well that it is the same solutio as the oe that we got by usig Gaussia Elimiatio as we should expect. Before we proceed with aother example we eed to give a quick fact. As was poited out i this example there are may paths we could take to do this problem. It was also oted that the path we chose would affect the row-echelo form of the matrix. This will ot be true for the reduced row-echelo form however. There is oly oe reduced row-echelo form of a give matrix o matter what path we chose to take to get to that poit. If we kow ahead of time that we are goig to go to reduced row-echelo form for a matrix we will ofte take a differet path tha the oe used i the previous example. I the previous 007 Paul Dawkis 9

25 example we first got the matrix i row-echelo form by gettig zeroes uder the leadig s ad the wet back ad put the matrix i reduced row-echelo form by gettig zeroes above the leadig s. If we kow ahead of time that we re goig to wat reduced row-echelo form we ca just take care of the matrix i a colum by colum basis i the followig maer. We first get a leadig i the correct colum the istead of usig this to covert oly the umbers below it to zero we ca use it to covert the umbers both above ad below to zero. I this way oce we reach the last colum ad take care of it of course we will be i reduced row-echelo form. We should also poit out the differeces betwee Gauss-Jorda Elimiatio ad Gaussia Elimiatio. With Gauss-Jorda Elimiatio there is more matrix work that eeds to be performed i order to get the augmeted matrix ito reduced row-echelo form, but there will be less work required i order to get the solutio. I fact, if there s a sigle solutio the the solutio will be give to us for free. We will see however, that if there are ifiitely may solutios we will still have a little work to do i order to arrive at the solutio. With Gaussia Elimiatio we have less matrix work to do sice we are oly reducig the augmeted matrix to row-echelo form. However, we will always eed to perform back substitutio i order to get the solutio. Which method you use will probably deped o which you fid easier. Okay let s do some more examples. Sice we ve doe oe example i excruciatig detail we wo t be botherig to put as much detail ito the remaiig examples. All operatios will be show, but the explaatios of each operatio will ot be give. Example 4 Solve the followig system of liear equatios. x x + x = x+ x x = x x + x = Solutio First, the istructios to this problem did ot specify which method to use so we ll eed to make a decisio. No matter which method we chose we will eed to get the augmeted matrix dow to row-echelo form so let s get to that poit ad the see what we ve got. If we ve got somethig easy to work with we ll stop ad do Gaussia Elimiatio ad if ot we ll proceed to reduced row-echelo form ad do Gauss-Jorda Elimiatio. So, let s start with the augmeted matrix ad the proceed to put it ito row-echelo form ad agai we re ot goig to put i quite the detail i this example as we did with the first oe. So, here is the augmeted matrix for this system. ad here is the work to put it ito row-echelo form. R + R R R R Paul Dawkis 0

26 R R 8 R Okay, we re ow i row-echelo form. Let s go back to equatio ad see what we ve got. x x + x = x x = 0= Hmmmm. That last equatio does t look correct. We ve got a couple of possibilities here. We ve either just maaged to prove that 0= (ad we kow that s ot true), we ve made a mistake (always possible, but we have t i this case) or there s aother possibility we have t thought of yet. Recall from Theorem i the previous sectio that a system has oe of three possibilities for a solutio. Either there is o solutio, oe solutio or ifiitely may solutios. I this case we ve got o solutio. Whe we go back to equatios ad we get a equatio that just clearly ca t be true such as the third equatio above the we kow that we ve got ot solutio. Note as well that we did t really eed to do the last step above. We could have just as easily arrived at this coclusio by lookig at the secod to last matrix sice 0=8 is just as icorrect as 0=. So, to close out this problem, the official aswer that there is o solutio to this system. I order to see how a simple chage i a system ca lead to a totally differet type of solutio let s take a look at the followig example. Example 5 Solve the followig system of liear equatios. x x + x = x+ x x = x x + x =7 Solutio The oly differece betwee this system ad the previous oe is the -7 i the third equatio. I the previous example this was a. Here is the augmeted matrix for this system. 7 Now, sice this is essetially the same augmeted matrix as the previous example the first few steps are idetical ad so there is o reaso to show them here. After takig the same steps as above (we wo t eed the last step this time) here is what we arrive at Paul Dawkis

27 For some good practice you should go through the steps above ad make sure you arrive at this matrix. I this case the last lie coverts to the equatio 0= 0 ad this is a perfectly acceptable equatio because after all zero is i fact equal to zero! I other words, we should t get excited about it. At this poit we could stop covert the first two lies of the matrix to equatios ad fid a solutio. However, i this case it will actually be easier to do the oe fial step to go to reduced row-echelo form. Here is that step. 0 4 R R We are ow i reduced row-echelo form so let s covert to equatios ad see what we ve got. x+ x =4 x x = Okay, we ve got more ukows tha equatios ad i may cases this will mea that we have ifiitely may solutios. To see if this is the case for this example let s otice that each of the equatios has a x i it ad so we ca solve each equatio for the remaiig variable i terms of x as follows. x = 4 x x = + x So, we ca choose x to be ay value we wat to, ad hece it is a free variable (recall we saw these i the previous sectio), ad each choice of x will give us a differet solutio to the system. So, just like i the previous sectio whe we ll reame the x ad write the solutio as follows, x =4 t x = + t x = t t is ay umber We therefore get ifiitely may solutios, oe for each possible value of t ad sice t ca be ay real umber there are ifiitely may choices for t. Before movig o let s first address the issue of why we used Gauss-Jorda Elimiatio i the previous example. If we d used Gaussia Elimiatio (which we defiitely could have used) the system of equatios would have bee. x x + x = x x = To arrive at the solutio we d have to solve the secod equatio for x first ad the substitute this ito the first equatio before solvig for x. I my mid this is more work ad work that I m 007 Paul Dawkis

28 more likely to make a arithmetic mistake tha if we d just goe to reduced row-echelo form i the first place as we did i the solutio. There is othig wrog with usig Gaussia Elimiatio o a problem like this, but the back substitutio is defiitely more work whe we ve got ifiitely may solutios tha whe we ve got a sigle solutio. Okay, to this poit we ve worked othig but systems with the same umber of equatios ad ukows. We eed to work a couple of other examples where this is t the case so we do t get too locked ito this kid of system. Example 6 Solve the followig system of liear equatios. x 4x = 0 5x+ 8x =7 x+ x = Solutio So, let s start with the augmeted matrix ad reduce it to row-echelo form ad see if what we ve got is ice eough to work with or if we should go the extra step(s) to get to reduced row-echelo form. Let s start with the augmeted matrix Notice that this time i order to get the leadig i the upper left corer we re probably goig to just have to divide the row by ad deal with the fractios that will arise. Do ot go to great legths to avoid fractios, they are a fact of life with these problems ad so while it s okay to try to avoid them, sometimes it s just goig to be easier to deal with it ad work with them. So, here s the work for reducig the matrix to row-echelo form R + 5R R R R R R 8R Okay, we re i row-echelo form ad it looks like if we go back to equatios at this poit we ll eed to do oe quick back substitutio ivolvig umbers ad so we ll go ahead ad stop here at this poit ad do Gaussia Elimiatio. Here are the equatios we get from the row-echelo form of the matrix ad the back substitutio x x = x = + = 4 x = Paul Dawkis

29 So, the solutio to this system is, x = x = 4 Example 7 Solve the followig system of liear equatios. 7x + x x 4x + x = x x + x + x = 4 5 4x x 8x+ 0x5 = Solutio First, let s otice that we are guarateed to have ifiitely may solutios by the fact above sice we ve got more ukows tha equatios. Here s the augmeted matrix for this system I this example we ca avoid fractios i the first row simply by addig twice the secod row to the first to get our leadig i that row. So, with that as our iitial step here s the work that will put this matrix ito row-echelo form R R R + R R R R 4R R R + R R We are ow i row-echelo form. Notice as well that i several of the steps above we took advatage of the form of several of the rows to simplify the work somewhat ad i doig this we did several of the steps i a differet order tha we ve doe to this poit. Remember that there are o set paths to take through these problems! Because of the fractios that we ve got here we re goig to have some work to do regardless of whether we stop here ad do Gaussia Elimiatio or go the couple of extra steps i order to do Gauss-Jorda Elimiatio. So with that i mid let s go all the way to reduced row-echelo form so we ca say that we ve got aother example of that i the otes. Here s the remaiig work R R Paul Dawkis 4

30 R+ 4R We re ow i reduced row-echelo form ad so let s go back to equatios ad see what we ve got. 8 8 x x4 x5 = x = + x4 + x5 5 5 x = x x4 x5 = x = + x4 + x5 So, we ve got two free variables this time, x 4 ad x 5, ad otice as well that ulike ay of the other ifiite solutio cases we actually have a value for oe of the variables here. That will happe o occasio so do t worry about it whe it does. Here is the solutio for this system. 8 8 x = + t+ s x = x = + t+ s 5 5 x = t x = s s ad t are ay umbers 4 5 Now, with all the examples that we ve worked to this poit hopefully you ve gotte the idea that there really is t ay oe set path that you always take through these types of problems. Each system of equatios is differet ad so may eed a differet solutio path. Do t get too locked ito ay oe solutio path as that ca ofte lead to problems. Homogeeous Systems of Liear Equatios We ve got oe more topic that we eed to discuss briefly i this sectio. A system of liear equatios i m ukows i the form ax + ax + + a mxm = 0 ax + ax + + amxm = 0 a x+ ax+ + amxm = 0 is called a homogeeous system. The oe characteristic that defies a homogeeous system is the fact that all the equatios are set equal to zero ulike a geeral system i which each equatio ca be equal to a differet (probably o-zero) umber. Hopefully, it is clear that if we take x = 0 x = 0 x = 0 x m = 0 we will have a solutio to the homogeeous system of equatios. I other words, with a homogeeous system we are guarateed to have at least oe solutio. This meas that Theorem from the previous sectio ca the be reduced to the followig for homogeeous systems. 007 Paul Dawkis 5

31 Theorem Give a homogeeous system of equatios ad m ukows there will be oe of two possibilities for solutios to the system. 4. There will be exactly oe solutio, x = 0, x = 0, x = 0,, x m = 0. This solutio is called the trivial solutio. 5. There will be ifiitely may o-zero solutios i additio to the trivial solutio. Note that whe we say o-zero solutio i the above fact we mea that at least oe of the x i s i the solutio will ot be zero. It is completely possible that some of them will still be zero, but at least oe will ot be zero i a o-zero solutio. We ca make a further reductio to Theorem from the previous sectio if we assume that there are more ukows tha equatios i a homogeeous system as the followig theorem shows. Theorem Give a homogeeous system of liear equatios i m ukows if m> (i.e. there are more ukows tha equatios) there will be ifiitely may solutios to the system. 007 Paul Dawkis 6

32 Matrices I the previous sectio we used augmeted matrices to deote a system of liear equatios. I this sectio we re goig to start lookig at matrices i more geerality. A matrix is othig more tha a rectagular array of umbers ad each of the umbers i the matrix is called a etry. Here are some examples of matrices [ 0 9] [ ] The size of a matrix with rows ad m colums is deoted by m. I deotig the size of a matrix we always list the umber of rows first ad the umber of colums secod. Example Give the size of each of the matrices above. Solutio size : size : 9 0 I this matrix the umber of rows is equal to the umber of colums. Matrices that have the same umber of rows as colums are called square matrices. 4 size : 4 7 This matrix has a sigle colum ad is ofte called a colum matrix. [ 0 9] size : 5 This matrix has a sigle row ad is ofte called a row matrix. [ ] size : Ofte whe dealig with matrices we will drop the surroudig brackets ad just write Paul Dawkis 7

33 Note that sometimes colum matrices ad row matrices are called colum vectors ad row vectors respectively. We do eed to be careful with the word vector however as i later chapters the word vector will be used to deote somethig much more geeral tha a colum or row matrix. Because of this we will, for the most part, be usig the terms colum matrix ad row matrix whe eeded istead of the colum vector ad row vector. There are a lot of otatioal issues that we re goig to have to get used to i this class. First, upper case letters are geerally used to refer to matrices while lower case letters geerally are used to refer to umbers. These are geeral rules, but as you ll see shortly there are exceptios to them, although it will usually be easy to idetify those exceptios whe they happe. We will ofte eed to refer to specific etries i a matrix ad so we ll eed a otatio to take care of that. The etry i the i th row ad j th colum of the matrix A is deoted by, a OR A ij I the first otatio the lower case letter we use to deote the etries of a matrix will always match with the upper case letter we use to deote the matrix. So the etries of the matrix B will be deoted by b ij. I both of these otatios the first (left most) subscript will always give the row the etry is i ad the secod (right most) subscript will always give the colum the etry is i. So, c 49 will be the etry i the 4 th row ad 9 th colum of C (which is assumed to be a matrix sice it s a upper case letter ). Usig the lower case otatio we ca deote a geeral m matrix, A, as follows, a a a m a a am a a a m a a a m A OR A = = a a am a a am m We do t geerally subscript the size of the matrix as we did i the secod case, but o occasio it may be useful to make the size clear ad i those cases we ted to subscript it as show i the secod case. The otatio above for a geeral matrix is fairly cumbersome so we ve also got some much more compact otatio that we ll use whe we ca. Whe possible we ll use the followig to deote a geeral matrix. a ij a ij A m m The first two we ted to use whe we eed to talk about the geeral etry of a matrix (such as certai formulas) but do t really care what that etry is. Also, we ll deote the size if it s importat or eeded for whatever we re doig, but otherwise we ll ot bother with the size. The third otatio is really othig more tha the stadard otatio with the size deoted. We ll use this oly whe we eed to talk about a matrix ad the size is importat but the etries are t. We wo t ru ito this oe too ofte, but we will o occasio. We will be dealig extesively with colum ad row matrices i later chapters/sectios so we eed to take care of some otatio for those. There are the mai exceptio to the upper case/lower case covetio we adopted earlier for matrices ad their etries. Colum ad row 007 Paul Dawkis 8 ( ) ij

Column and row matrices tend to be denoted with a lower case letter that has either been bolded or has an arrow written over it, as follows,
\[ \mathbf{a} = \vec{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \qquad \mathbf{b} = \vec{b} = \begin{bmatrix} b_1 & b_2 & \cdots & b_m \end{bmatrix} \]
In written documents, such as this, column and row matrices tend to be in bold face, while on the chalkboard of a classroom they tend to get arrows written over them, since it's often difficult on a chalkboard to differentiate a letter that's in bold from one that isn't. Also, notice that with column and row matrices the entries are still denoted with lower case letters that match the letter that represents the matrix, and in this case, since there is either a single column or a single row, there was no reason to double subscript the entries.

Next we need to get a quick definition out of the way for square matrices. Recall that a square matrix is a matrix whose size is n x n (i.e. it has the same number of rows as columns). In a square matrix the entries a_{11}, a_{22}, ..., a_{nn} (the entries running from the upper left corner down to the lower right corner) are called the main diagonal.

The next topic that we need to discuss in this section is that of partitioned matrices and submatrices. Any matrix can be partitioned into smaller submatrices simply by adding in horizontal and/or vertical lines between selected rows and/or columns.

Example 2 Here are several partitions of a general 5 x 3 matrix.
(a)
\[ A = \left[ \begin{array}{cc|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \\ a_{51} & a_{52} & a_{53} \end{array} \right] = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \]
In this case we partitioned the matrix into four submatrices. Also notice that we simplified the matrix into a more compact form, and in this compact form we've mixed and matched some of our notation. The partitioned matrix can be thought of as a smaller matrix with four entries, except this time each of the entries is a matrix instead of a number, and so we used capital letters to represent the entries and subscripted each one with its location in the partitioned matrix.

Be careful not to confuse the location subscripts on each of the submatrices with the size of each submatrix.

In this case A_{11} is a 2 x 2 submatrix of A, A_{12} is a 2 x 1 submatrix of A, A_{21} is a 3 x 2 submatrix of A, and A_{22} is a 3 x 1 submatrix of A.

(b)
\[ A = \left[ \begin{array}{c|c|c} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \\ a_{51} & a_{52} & a_{53} \end{array} \right] = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \mathbf{c}_3 \end{bmatrix} \]
In this case we partitioned A into three column matrices, each representing one column in the original matrix. Again, note that we used the standard column matrix notation (the bold face letters) and subscripted each one with its location in the partitioned matrix. The c_i in the partitioned matrix are sometimes called the column matrices of A.

(c)
\[ A = \left[ \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ \hline a_{21} & a_{22} & a_{23} \\ \hline a_{31} & a_{32} & a_{33} \\ \hline a_{41} & a_{42} & a_{43} \\ \hline a_{51} & a_{52} & a_{53} \end{array} \right] = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \\ \mathbf{r}_4 \\ \mathbf{r}_5 \end{bmatrix} \]
Just as we can partition a matrix into each of its columns, as we did in the previous part, we can also partition a matrix into each of its rows. The r_i in the partitioned matrix are sometimes called the row matrices of A.

The previous example showed three of the many possible ways to partition up the matrix. There are, of course, many other ways to partition this matrix. We won't be partitioning up too many matrices here, but we will be doing it on occasion, so it's a useful idea to remember. Also note that when we do partition up a matrix into its column/row matrices we will generally put in the bars separating the columns/rows, as we've done here, to indicate that we've got a partitioned matrix.
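In software, partitioning is nothing more than slicing. Here is a short hedged sketch of mine (not from the notes) showing the three partitions above on a concrete 5 x 3 array:

```python
import numpy as np

A = np.arange(15).reshape(5, 3)   # a concrete 5x3 matrix to partition

# Four-block partition: split between rows 2/3 and columns 2/3.
A11, A12 = A[:2, :2], A[:2, 2:]   # 2x2 and 2x1 submatrices
A21, A22 = A[2:, :2], A[2:, 2:]   # 3x2 and 3x1 submatrices

# Column matrices and row matrices of A.
c1 = A[:, [0]]    # first column as a 5x1 column matrix
r3 = A[[2], :]    # third row as a 1x3 row matrix
print(A11.shape, A22.shape, c1.shape, r3.shape)   # (2, 2) (3, 1) (5, 1) (1, 3)
```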

To close out this section we're going to introduce a couple of special matrices that we'll see show up on occasion. The first matrix is the zero matrix. The zero matrix is pretty much what the name implies. It is an n x m matrix whose entries are all zeroes. The notation we'll use for the zero matrix is 0_{n x m} for a general zero matrix, or \(\mathbf{0}\) for a zero column or row matrix. Here are a couple of zero matrices just so we can say we have some in the notes.
\[ 0_{2 \times 2} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \qquad \mathbf{0} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} \qquad \mathbf{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \]
If the size of a column or row zero matrix is important we will sometimes subscript the size on those as well, just to make it clear what the size is. Also, if the size of a full zero matrix is not important or is implied from the problem we will drop the size from 0_{n x m} and just denote it by 0.

The second special matrix we'll look at in this section is the identity matrix. The identity matrix is a square n x n matrix usually denoted by I_n, or just I if the size is unimportant or clear from the context of the problem. The entries on the main diagonal of the identity matrix are all ones and all the other entries in the identity matrix are zeroes. Here are a couple of identity matrices.
\[ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
As we'll see, identity matrices will arise fairly regularly. Here is a nice theorem about the reduced row-echelon form of a square matrix and how it relates to the identity matrix.

Theorem 1 If A is an n x n matrix then the reduced row-echelon form of the matrix will either contain at least one row of all zeroes or it will be I_n, the identity matrix.

Proof: This is a simple enough theorem to prove that we may as well. Let's suppose that B is the reduced row-echelon form of the matrix. If B has at least one row of all zeroes we are done, so let's suppose that B does not have a row of all zeroes. This means that every row has a leading 1 in it. Now, we know that the leading 1 of a row must be to the right of the leading 1 of the row immediately above it. Because we are assuming that B is square and doesn't have any rows of all zeroes, we can actually locate each of the leading 1's in B.

First, let's suppose that the leading 1 in the first row is NOT b_{11} (i.e. b_{11} = 0). The next possible location of the leading 1 in the first row would then be b_{12}. So, let's suppose that this is where the leading 1 is. Upon assuming this we can say that B must have the following form.
\[ B = \begin{bmatrix} 0 & 1 & b_{13} & \cdots & b_{1n} \\ 0 & 0 & b_{23} & \cdots & b_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & b_{n3} & \cdots & b_{nn} \end{bmatrix} \]
Now, let's assume the best possible scenario happens. That is, the leading 1 of each of the lower rows is exactly one column to the right of the leading 1 above it. This, however, leads us to instant problems. Because our first leading 1 is in the second column, by the time we reach the (n-1)st row our leading 1 will be in the nth column, and this will in turn force the nth row to be a row of all zeroes, which contradicts our initial assumption. If you're not sure you believe this, consider the 4 x 4 case.

\[ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \]
Sure enough, a row of all zeroes in the 4th row. Now, we assumed the best possible scenario for the leading 1's in the lower rows and ran into problems. If the leading 1 jumps to the right 2 columns (or 3, or 4, etc.) we will run into the same kind of problem, only we'll end up with more than one row of all zeroes. Likewise, if the leading 1 in the first row is in any of b_{13}, b_{14}, ..., b_{1n} we will have the same problem.

So, in order to meet the assumption that we don't have any rows of all zeroes, we know that the leading 1 in the first row must be at b_{11}. Using a similar argument to that above we can see that if the leading 1 of any of the lower rows jumps to the right more than one column we will have a leading 1 in the nth column prior to hitting the nth row. This will in turn force at least the nth row to be a row of all zeroes, which will again contradict our initial assumption.

Therefore we know that the leading 1 in the first row is at b_{11}, and the only hope of not having a row of all zeroes at the bottom is to have the leading 1 of each row be exactly one column to the right of the leading 1 of the row above it. This means that the leading 1 in the second row must be at b_{22}, the leading 1 in the third row must be at b_{33}, etc. Eventually we'll hit the nth row, and in this row the leading 1 must be at b_{nn}.

Therefore the leading 1's of B must be on the diagonal, and because B is the reduced row-echelon form of A we also know that all the entries above and below the leading 1's must be zeroes. This, however, is exactly I_n. Therefore, if B does not have a row of all zeroes in it then we must have that B = I_n.
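For completeness, here is how the two special matrices look in NumPy (again, a small sketch of mine rather than anything from the notes):

```python
import numpy as np

Z = np.zeros((2, 3))   # a 2x3 zero matrix
I4 = np.eye(4)         # the 4x4 identity matrix

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
print(np.array_equal(A + Z, A))          # True: adding 0 changes nothing
print(np.array_equal(A @ np.eye(3), A))  # True: multiplying by I changes nothing
```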

Matrix Arithmetic & Operations

One of the biggest impediments that some people have in learning about matrices for the first time is trying to take everything that they know about arithmetic of real numbers and translate that over to matrices. As you will eventually see, much of what you know about arithmetic of real numbers will also be true here, but there are also a few ideas/facts that will no longer hold. To make matters worse, there are some rules of arithmetic of real numbers that will work occasionally with matrices but won't work in general. So, keep this in mind as you go through the next couple of sections and don't be too surprised when something doesn't quite work out as you expect it to.

This section is devoted mostly to developing the arithmetic of matrices, as well as introducing a couple of operations on matrices that don't really have an equivalent operation in real numbers. We will see some of the differences between arithmetic of real numbers and matrices mentioned above in this section. We will also see more of them in the next section when we delve into the properties of matrix arithmetic in more detail.

Okay, let's start off matrix arithmetic by defining just what we mean when we say that two matrices are equal.

Definition 1 If A and B are both n x m matrices then we say that A = B provided corresponding entries from each matrix are equal. Or, in other words, A = B provided a_{ij} = b_{ij} for all i and j. Matrices of different sizes cannot be equal.

Example 1 Consider the following matrices.
\[ A = \begin{bmatrix} 9 & 1 \\ 7 & -3 \end{bmatrix} \qquad B = \begin{bmatrix} 9 & b \\ 7 & -3 \end{bmatrix} \qquad C = \begin{bmatrix} 9 \\ 7 \end{bmatrix} \]
For these matrices we have that A ≠ C and B ≠ C since they are different sizes and so can't be equal. The fact that C is essentially the first column of both A and B is not important to determining equality in this case. The size of the two matrices is the first thing we should look at in determining equality. Next, A = B provided we have b = 1. If b ≠ 1 then we will have A ≠ B.

Next we need to move on to addition and subtraction of two matrices.

Definition 2 If A and B are both n x m matrices then A ± B is a new n x m matrix that is found by adding/subtracting corresponding entries from each matrix. Or, in other words,
\[ A \pm B = \left[ a_{ij} \pm b_{ij} \right] \]
Matrices of different sizes cannot be added or subtracted.

Example 2 For the following matrices perform the indicated operation, if possible.
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 6 & 2 \\ 1 & 4 \end{bmatrix} \qquad C = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \]
(a) A + B
(b) B - A
(c) A + C

Solution
(a) Both A and B are the same size and so we know the addition can be done in this case. Once we know the addition can be done there really isn't all that much to do here other than to just add the corresponding entries to get the result.
\[ A + B = \begin{bmatrix} 7 & 4 \\ 4 & 4 \end{bmatrix} \]
(b) Again, since A and B are the same size we can do the difference, and like the previous part there really isn't all that much to do. All that we need to be careful with is the order. Just like with real number arithmetic, B - A is different from A - B. So, in this case we'll subtract the entries of A from the entries of B.
\[ B - A = \begin{bmatrix} 5 & 0 \\ -2 & 4 \end{bmatrix} \]
(c) In this case, because A and C are different sizes, the addition can't be done. Likewise, A - C, C - A, B + C, C - B, and B - C can't be done for the same reason.

We now need to move into multiplication involving matrices. However, there are actually two kinds of multiplication to look at: Scalar Multiplication and Matrix Multiplication. Let's start with scalar multiplication.

Definition 3 If A is any matrix and c is any number then the product (or scalar multiple), cA, is a new matrix of the same size as A, and its entries are found by multiplying the original entries of A by c. In other words, cA = [c a_{ij}] for all i and j.

Note that in the field of Linear Algebra a number is often called a scalar, and hence the name scalar multiple, since we are multiplying a matrix by a scalar (number). From this point on we will generally call numbers scalars.

Before doing an example we need to get another quick definition out of the way. If A_1, A_2, ..., A_n are all matrices of the same size and c_1, c_2, ..., c_n are scalars, then the linear combination of A_1, A_2, ..., A_n with coefficients c_1, c_2, ..., c_n is,
\[ c_1 A_1 + c_2 A_2 + \cdots + c_n A_n \]
This may seem like a silly thing to define, but we'll be using linear combinations in quite a few places in this class and so we need to get used to seeing them.

Example 3 Given the matrices
\[ A = \begin{bmatrix} 1 & 2 \\ -1 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 7 & 0 \\ 2 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 5 & -3 \\ 4 & 2 \end{bmatrix} \]
compute 3A + 2B - C.

Solution So, we're really being asked to compute a linear combination here. We'll do that by first computing the scalar multiples and then performing the addition and subtraction. Note as well that in the case of the third scalar multiple we are going to consider the scalar to be a positive 1 and leave the minus sign out in front of the matrix. Here is the work for this problem.
\[ 3A + 2B - C = \begin{bmatrix} 3 & 6 \\ -3 & 0 \end{bmatrix} + \begin{bmatrix} 14 & 0 \\ 4 & 2 \end{bmatrix} - \begin{bmatrix} 5 & -3 \\ 4 & 2 \end{bmatrix} = \begin{bmatrix} 12 & 9 \\ -3 & 0 \end{bmatrix} \]

We now need to move into matrix multiplication. However, before we do the general case let's look at a special case first, since this will help with the general case. Suppose that we have the following two matrices,
\[ \mathbf{a} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \]
So, a is a row matrix and b is a column matrix, and they have the same number of entries. Then the product of a and b is defined to be,
\[ \mathbf{a}\mathbf{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n \]
It is important to note that this product can only be done if a and b have the same number of entries. If they have a different number of entries then this product is not defined.

Example 4 Compute ab given that,
\[ \mathbf{a} = \begin{bmatrix} 4 & 0 & -1 \end{bmatrix} \qquad \mathbf{b} = \begin{bmatrix} 3 \\ 2 \\ 8 \end{bmatrix} \]
Solution There is not really a whole lot to do here other than use the definition given above.
\[ \mathbf{a}\mathbf{b} = (4)(3) + (0)(2) + (-1)(8) = 4 \]
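Since the row-times-column product is just a sum of pairwise products, it is easy to sanity-check numerically. A quick hedged sketch of mine, using the numbers from Example 4:

```python
import numpy as np

a = np.array([4, 0, -1])   # the row matrix
b = np.array([3, 2, 8])    # the column matrix

# Sum of the pairwise products a1*b1 + a2*b2 + a3*b3.
print(np.dot(a, b))                       # 4
print(sum(x * y for x, y in zip(a, b)))   # the same thing spelled out: 4
```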

Now let's move on to general matrix multiplication.

Definition 4 If A is an n x p matrix and B is a p x m matrix then the product (or matrix multiplication) AB is a new matrix with size n x m whose (i, j)th entry is found by multiplying row i of A times column j of B.

So, just like with addition and subtraction, we need to be careful with the sizes of the two matrices we're dealing with. However, with multiplication we need to be a little more careful. This definition tells us that the product AB is only defined if A (i.e. the first matrix listed in the product) has the same number of columns as B (i.e. the second matrix listed in the product) has rows. If the number of columns of the first matrix listed is not the same as the number of rows of the second matrix listed then the product is not defined. An easy way to check that a product is defined is to write down the two matrices in the order that we want to multiply them and underneath them write down the sizes, as shown below.
\[ \underset{n \times p}{A} \ \underset{p \times m}{B} = \underset{n \times m}{AB} \]
If the two inner numbers are equal then the product is defined, and the size of the product will be given by the outside numbers.

Example 5 Compute AC and CA for the following two matrices, if possible.
\[ A = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix} \qquad C = \begin{bmatrix} 3 & 1 & 0 \\ 4 & 1 & 1 \\ 0 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix} \]
Solution Okay, let's first do AC. Here are the sizes for A and C.
\[ \underset{2 \times 4}{A} \ \underset{4 \times 3}{C} = \underset{2 \times 3}{AC} \]
So, the two inner numbers (4 and 4) are the same, and so the multiplication can be done, and we can see that the new size of the matrix is 2 x 3. Now, let's actually do the multiplication. We'll go through the first couple of entries in the product in detail and then do the remaining entries a little quicker.

To get the number in the first row and first column of AC we'll multiply the first row of A by the first column of C as follows,
\[ (2)(3) + (-1)(4) + (0)(0) + (4)(1) = 6 \]
If we next want the entry in the first row and second column of AC we'll multiply the first row of A by the second column of C as follows,
\[ (2)(1) + (-1)(1) + (0)(2) + (4)(1) = 5 \]
Okay, at this point let's stop and insert these into the product so we can make sure that we've got our bearings. Here's the product so far,
\[ \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix} \begin{bmatrix} 3 & 1 & 0 \\ 4 & 1 & 1 \\ 0 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 6 & 5 & \ast \\ \ast & \ast & \ast \end{bmatrix} \]

As we can see, we've got four entries left to compute. For these we'll give the row and column multiplications but leave it to you to make sure we used the correct row/column and put the result in the correct place. Here's the remaining work.
\[ (2)(0) + (-1)(1) + (0)(3) + (4)(1) = 3 \]
\[ (5)(3) + (8)(4) + (-3)(0) + (9)(1) = 56 \]
\[ (5)(1) + (8)(1) + (-3)(2) + (9)(1) = 16 \]
\[ (5)(0) + (8)(1) + (-3)(3) + (9)(1) = 8 \]
Here is the completed product.
\[ AC = \begin{bmatrix} 6 & 5 & 3 \\ 56 & 16 & 8 \end{bmatrix} \]
Now let's do CA. Here are the sizes for this product.
\[ \underset{4 \times 3}{C} \ \underset{2 \times 4}{A} = \text{N/A} \]
Okay, in this case the two inner numbers (3 and 2) are NOT the same, and so this product can't be done.

So, with this example we've now run across the first real difference between real number arithmetic and matrix arithmetic. When dealing with real numbers the order in which we write a product doesn't affect the actual result. For instance (2)(3) = 6 and (3)(2) = 6. We can flip the order and we get the same answer. With matrices, however, we will have to be very careful and pay attention to the order in which the product is written down. As this example has shown, the product AC could be computed while the product CA is not defined.

Now, do not take the previous example and assume that all products will work that way. It is possible for both AC and CA to be defined, as we'll see in the next example.

Example 6 Compute BD and DB for the given matrices, if possible.
\[ B = \begin{bmatrix} 0 & 8 \\ 1 & 6 \end{bmatrix} \qquad D = \begin{bmatrix} 2 & 1 \\ 3 & 0 \end{bmatrix} \]
Solution First, notice that both of these matrices are 2 x 2 matrices, and so both BD and DB are defined. Again, it's worth pointing out that this example differs from the previous example in that both the products are defined here rather than only one being defined, as in the previous example.

Also note that in both cases the product will be a new 2 x 2 matrix. In this example we're going to leave the work of verifying the products to you. It is good practice, so you should try and verify at least one of the following products.
\[ BD = \begin{bmatrix} 0 & 8 \\ 1 & 6 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 3 & 0 \end{bmatrix} = \begin{bmatrix} 24 & 0 \\ 20 & 1 \end{bmatrix} \qquad DB = \begin{bmatrix} 2 & 1 \\ 3 & 0 \end{bmatrix}\begin{bmatrix} 0 & 8 \\ 1 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 22 \\ 0 & 24 \end{bmatrix} \]
This example leads us to yet another difference (although it's related to the first) between real number arithmetic and matrix arithmetic. In this example both BD and DB were defined. Notice, however, that the products were definitely not the same. There is nothing wrong with this, so don't get excited about it when it does happen. Note, however, that this doesn't mean that the two products will never be the same. It is possible for them to be the same, and we'll see at least one case where the two products are the same in a couple of sections.

For the sake of completeness, if A is an n x p matrix and B is a p x m matrix then the entry in the ith row and jth column of AB is given by the following formula,
\[ \left(AB\right)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{ip}b_{pj} \]
This formula can be useful on occasion, but is really used mostly in proofs and in computer programs that compute the product of matrices.

On occasion it can be convenient to know a single row or a single column from a product and not the whole product itself. The following theorem tells us how to get our hands on just that.

Theorem 1 Assuming that A and B are appropriately sized so that AB is defined, then,
1. The ith row of AB is given by the matrix product: [ith row of A]B.
2. The jth column of AB is given by the matrix product: A[jth column of B].

Example 7 Compute the second row and third column of AC given the following matrices.
\[ A = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix} \qquad C = \begin{bmatrix} 3 & 1 & 0 \\ 4 & 1 & 1 \\ 0 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix} \]
Solution These are the matrices from Example 5, and so we can verify the results of using this fact once we're done. Let's find the second row first.

According to the theorem, this means we need to multiply the second row of A by C. Here is that work.
\[ \begin{bmatrix} 5 & 8 & -3 & 9 \end{bmatrix} \begin{bmatrix} 3 & 1 & 0 \\ 4 & 1 & 1 \\ 0 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 56 & 16 & 8 \end{bmatrix} \]
Sure enough, this is the correct second row of the product AC. Next, let's use the theorem to get the third column. This means that we'll need to multiply A by the third column of C. Here is that work.
\[ \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \end{bmatrix} \]
And sure enough, this also gives us the correct answer.

We can use this fact about how to get individual rows or columns of a product, as well as the idea of a partitioned matrix that we saw in the previous section, to derive a couple of new ways to find the product of two matrices. Let's start by assuming we've got two matrices A (size n x p) and B (size p x m), so we know the product AB is defined.

Now, for the first new way of finding the product, let's partition A into its row matrices as follows,
\[ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix} = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_n \end{bmatrix} \]
Now, from the theorem we know that the ith row of AB is [ith row of A]B, or r_i B. Using this idea the product AB can then be written as a new partitioned matrix as follows.
\[ AB = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_n \end{bmatrix} B = \begin{bmatrix} \mathbf{r}_1 B \\ \mathbf{r}_2 B \\ \vdots \\ \mathbf{r}_n B \end{bmatrix} \]
For the second new way of finding the product we'll partition B into its column matrices as,
\[ B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & \vdots & & \vdots \\ b_{p1} & b_{p2} & \cdots & b_{pm} \end{bmatrix} = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m \end{bmatrix} \]
We can then use the fact that the jth column of AB is given by A[jth column of B], and so the product AB can be written as a new partitioned matrix as follows.

\[ AB = A\begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_m \end{bmatrix} = \begin{bmatrix} A\mathbf{c}_1 & A\mathbf{c}_2 & \cdots & A\mathbf{c}_m \end{bmatrix} \]

Example 8 Use both of the new methods for computing products to find AC for the following matrices.
\[ A = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix} \qquad C = \begin{bmatrix} 3 & 1 & 0 \\ 4 & 1 & 1 \\ 0 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix} \]
Solution So, once again we know the answer to this, so we can use it to check our results against the answer from Example 5. First, let's use the row matrices of A. Here are the two row matrices of A,
\[ \mathbf{r}_1 = \begin{bmatrix} 2 & -1 & 0 & 4 \end{bmatrix} \qquad \mathbf{r}_2 = \begin{bmatrix} 5 & 8 & -3 & 9 \end{bmatrix} \]
and here are the rows of the product.
\[ \mathbf{r}_1 C = \begin{bmatrix} 6 & 5 & 3 \end{bmatrix} \qquad \mathbf{r}_2 C = \begin{bmatrix} 56 & 16 & 8 \end{bmatrix} \]
Putting these together gives,
\[ AC = \begin{bmatrix} \mathbf{r}_1 C \\ \mathbf{r}_2 C \end{bmatrix} = \begin{bmatrix} 6 & 5 & 3 \\ 56 & 16 & 8 \end{bmatrix} \]
and this is the correct answer.

Now let's compute the product using columns. Here are the three column matrices for C.
\[ \mathbf{c}_1 = \begin{bmatrix} 3 \\ 4 \\ 0 \\ 1 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 1 \\ 1 \\ 2 \\ 1 \end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix} 0 \\ 1 \\ 3 \\ 1 \end{bmatrix} \]
Here are the columns of the product.
\[ A\mathbf{c}_1 = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix}\begin{bmatrix} 3 \\ 4 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 6 \\ 56 \end{bmatrix} \]

\[ A\mathbf{c}_2 = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 16 \end{bmatrix} \qquad A\mathbf{c}_3 = \begin{bmatrix} 2 & -1 & 0 & 4 \\ 5 & 8 & -3 & 9 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \end{bmatrix} \]
Putting all this together as follows gives the correct answer.
\[ AC = \begin{bmatrix} A\mathbf{c}_1 & A\mathbf{c}_2 & A\mathbf{c}_3 \end{bmatrix} = \begin{bmatrix} 6 & 5 & 3 \\ 56 & 16 & 8 \end{bmatrix} \]

We can also write certain kinds of matrix products as a linear combination of column matrices. Consider A, an n x p matrix, and x, a p x 1 column matrix. We can easily compute this product directly as follows,
\[ A\mathbf{x} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1p}x_p \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2p}x_p \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{np}x_p \end{bmatrix} \]
Now, using matrix addition we can write the resultant n x 1 matrix as follows,
\[ \begin{bmatrix} a_{11}x_1 + \cdots + a_{1p}x_p \\ \vdots \\ a_{n1}x_1 + \cdots + a_{np}x_p \end{bmatrix} = \begin{bmatrix} a_{11}x_1 \\ \vdots \\ a_{n1}x_1 \end{bmatrix} + \begin{bmatrix} a_{12}x_2 \\ \vdots \\ a_{n2}x_2 \end{bmatrix} + \cdots + \begin{bmatrix} a_{1p}x_p \\ \vdots \\ a_{np}x_p \end{bmatrix} \]
Now, each of the p column matrices on the right above can also be rewritten as a scalar multiple as follows.
\[ \begin{bmatrix} a_{11}x_1 \\ \vdots \\ a_{n1}x_1 \end{bmatrix} + \cdots + \begin{bmatrix} a_{1p}x_p \\ \vdots \\ a_{np}x_p \end{bmatrix} = x_1\begin{bmatrix} a_{11} \\ \vdots \\ a_{n1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ \vdots \\ a_{n2} \end{bmatrix} + \cdots + x_p\begin{bmatrix} a_{1p} \\ \vdots \\ a_{np} \end{bmatrix} \]
Finally, the column matrices that are multiplied by the x_i's are nothing more than the column matrices of A. So, putting all this together gives us,
\[ A\mathbf{x} = x_1\begin{bmatrix} a_{11} \\ \vdots \\ a_{n1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ \vdots \\ a_{n2} \end{bmatrix} + \cdots + x_p\begin{bmatrix} a_{1p} \\ \vdots \\ a_{np} \end{bmatrix} = x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_p\mathbf{c}_p \]

where c_1, c_2, ..., c_p are the column matrices of A. Written in this manner we can see that Ax can be written as the linear combination of the column matrices of A, c_1, c_2, ..., c_p, with the entries of x, namely x_1, x_2, ..., x_p, as coefficients.

Example 9 Compute Ax directly and as a linear combination for the following matrices.
\[ A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix} \qquad \mathbf{x} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \]
Solution We'll leave it to you to verify that the direct computation of the product gives,
\[ A\mathbf{x} = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 9 \\ 7 \end{bmatrix} \]
Here is the linear combination method of computing the product.
\[ A\mathbf{x} = 2\begin{bmatrix} 4 \\ 2 \end{bmatrix} + 1\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 8 \\ 4 \end{bmatrix} + \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 9 \\ 7 \end{bmatrix} \]
This is the same result that we got by the direct computation.

Matrix multiplication also gives us a very nice and compact way of writing systems of equations. In fact, we even saw most of it as we introduced the above idea. Let's start out with a general system of n equations and m unknowns.

\[ \begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2m}x_m &= b_2 \\ &\ \ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m &= b_n \end{aligned} \]
Now, instead of thinking of these as a set of equations, let's think of each side as a column matrix of size n x 1 as follows,
\[ \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1m}x_m \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nm}x_m \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} \]
In the work above we saw that the left side of this can be written as the following matrix product,
\[ \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} \]
If we now denote the coefficient matrix by A, the column matrix containing the unknowns by x, and the column matrix containing the b_i's by b, we can write the system in the following matrix form,
\[ A\mathbf{x} = \mathbf{b} \]
In many of the sections to follow we'll write general systems of equations as Ax = b, given its compact nature, in order to save space.

Now that we've gotten the basics of matrix arithmetic out of the way, we need to introduce a couple of matrix operations that don't really have any equivalent operations with real numbers.

Definition 5 If A is an n x m matrix then the transpose of A, denoted by A^T, is an m x n matrix that is obtained by interchanging the rows and columns of A. So, the first row of A^T is the first column of A, the second row of A^T is the second column of A, etc. Likewise, the first column of A^T is the first row of A, the second column of A^T is the second row of A, etc.

On occasion you'll see the transpose defined as follows,
\[ A = \left[a_{ij}\right]_{n \times m} \qquad A^T = \left[a_{ji}\right]_{m \times n} \quad \text{for all } i \text{ and } j \]
Notice the difference in the subscripts. Under this definition, the entry in the ith row and jth column of A will be in the jth row and ith column of A^T. Notice that these two definitions are really the same definition; they just don't look like they are the same at first glance.

Definition 6 If A is a square matrix of size n x n then the trace of A, denoted by tr(A), is the sum of the entries on the main diagonal. Or,
\[ \operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} \]
If A is not square then the trace is not defined.

Example 10 Determine the transpose and trace (if it is defined) for each of the following matrices.
\[ A = \begin{bmatrix} 4 & 5 & 0 \\ -1 & 10 & 7 \end{bmatrix} \quad B = \begin{bmatrix} 9 & 1 & 5 \\ 0 & -3 & 2 \\ 7 & 4 & 1 \end{bmatrix} \quad C = \begin{bmatrix} 5 & 1 \\ 9 & 8 \\ 2 & 0 \end{bmatrix} \quad D = \begin{bmatrix} 5 \end{bmatrix} \quad E = \begin{bmatrix} 7 & 3 \\ 3 & 0 \end{bmatrix} \]
Solution There really isn't all that much to do here other than to go through the definitions. Note as well that the trace will only not be defined for A and C, since these matrices are not square.
\[ A^T = \begin{bmatrix} 4 & -1 \\ 5 & 10 \\ 0 & 7 \end{bmatrix} \qquad \operatorname{tr}(A): \text{not defined since } A \text{ is not square} \]
\[ B^T = \begin{bmatrix} 9 & 0 & 7 \\ 1 & -3 & 4 \\ 5 & 2 & 1 \end{bmatrix} \qquad \operatorname{tr}(B) = 9 + (-3) + 1 = 7 \]
\[ C^T = \begin{bmatrix} 5 & 9 & 2 \\ 1 & 8 & 0 \end{bmatrix} \qquad \operatorname{tr}(C): \text{not defined since } C \text{ is not square} \]
\[ D^T = \begin{bmatrix} 5 \end{bmatrix} \qquad \operatorname{tr}(D) = 5 \]
\[ E^T = \begin{bmatrix} 7 & 3 \\ 3 & 0 \end{bmatrix} \qquad \operatorname{tr}(E) = 7 + 0 = 7 \]
In the previous example note that D^T = D and that E^T = E. In these cases the matrix is called symmetric. So, in the previous example D and E are symmetric while A, B, and C are not symmetric.
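Both of these operations are one-liners in NumPy, which makes them handy for checking hand computations like Example 10. A small hedged sketch of mine, using the matrices from that example:

```python
import numpy as np

A = np.array([[4, 5, 0],
              [-1, 10, 7]])
print(A.T)              # the 3x2 transpose: rows and columns interchanged

B = np.array([[9, 1, 5],
              [0, -3, 2],
              [7, 4, 1]])
print(np.trace(B))      # 9 + (-3) + 1 = 7

E = np.array([[7, 3],
              [3, 0]])
print(np.array_equal(E, E.T))   # True: E is symmetric
```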

Properties of Matrix Arithmetic and the Transpose

In this section we're going to take a quick look at some of the properties of matrix arithmetic and of the transpose of a matrix. As mentioned in the previous section, most of the basic rules of real number arithmetic are still valid in matrix arithmetic. However, there are a few that are no longer valid in matrix arithmetic, as we'll be seeing.

We've already seen one of the real number properties that doesn't hold in matrix arithmetic. If a and b are two real numbers then we know by the commutative law for multiplication of real numbers that ab = ba (i.e. (2)(3) = (3)(2) = 6). However, if A and B are two matrices such that AB is defined, we saw an example in the previous section in which BA was not defined, as well as an example in which BA was defined and yet AB ≠ BA. In other words, we don't have a commutative law for matrix multiplication. Note that this doesn't mean that we'll never have AB = BA for some matrices A and B; it is possible for this to happen (as we'll see in the next section), we just can't guarantee that it will happen if both AB and BA are defined.

Now, let's take a quick look at the properties of real number arithmetic that are valid in matrix arithmetic.

Properties In the following set of properties a and b are scalars and A, B, and C are matrices. We'll assume that the sizes of the matrices in each property are such that the operation in that property is defined.
1. A + B = B + A (Commutative law for addition)
2. A + (B + C) = (A + B) + C (Associative law for addition)
3. A(BC) = (AB)C (Associative law for multiplication)
4. A(B ± C) = AB ± AC (Left distributive law)
5. (B ± C)A = BA ± CA (Right distributive law)
6. a(B ± C) = aB ± aC
7. (a ± b)C = aC ± bC
8. (ab)C = a(bC)
9. a(BC) = (aB)C = B(aC)

With real number arithmetic we didn't need both 4. and 5. since we've also got the commutative law for multiplication. However, since we don't have the commutative law for matrix multiplication we really do need both 4. and 5. Also, properties 6.-9. are simply distributive or associative laws for dealing with scalar multiplication.

Now, let's take a look at a couple of other ideas from real number arithmetic and see if they have equivalent ideas in matrix arithmetic. We'll start with the following idea. From real number arithmetic we know that 1·a = a·1 = a. Or, in other words, multiplying a number by 1 (one) doesn't change the number. The identity matrix will give the same result in matrix multiplication.

If A is an n x m matrix then we have,
\[ I_n A = A I_m = A \]
Note that we really do need different identity matrices on each side of A, and their sizes will depend upon the size of A.

Example 1 Consider the following matrix.
\[ A = \begin{bmatrix} 10 & 0 \\ -2 & 1 \\ 7 & 4 \end{bmatrix} \]
Then,
\[ I_3 A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 10 & 0 \\ -2 & 1 \\ 7 & 4 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ -2 & 1 \\ 7 & 4 \end{bmatrix} = A \]
\[ A I_2 = \begin{bmatrix} 10 & 0 \\ -2 & 1 \\ 7 & 4 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 10 & 0 \\ -2 & 1 \\ 7 & 4 \end{bmatrix} = A \]

Now, just like the identity matrix takes the place of the number 1 (one) in matrix multiplication, the zero matrix (denoted by 0 for a general matrix and \(\mathbf{0}\) for a column/row matrix) will take the place of the number 0 (zero) in most of the matrix arithmetic. Note that we said most of the matrix arithmetic. There are a couple of properties involving 0 in real numbers that are not necessarily valid in matrix arithmetic. Let's first start with the properties that are still valid.

Zero Matrix Properties In the following properties A is a matrix and 0 is the zero matrix sized appropriately for the indicated operation to be valid.
1. A + 0 = 0 + A = A
2. A - A = 0
3. 0 - A = -A
4. 0A = 0 and A0 = 0

Now, in real number arithmetic we know that if ab = ac and a ≠ 0 then we must have b = c (sometimes called the cancellation law). We also know that if ab = 0 then we have a = 0 and/or b = 0 (sometimes called the zero factor property). Neither of these properties of real number arithmetic is valid in general for matrix arithmetic.

Example 2 Consider the following three matrices.
\[ A = \begin{bmatrix} 4 & 0 \\ 2 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 3 \\ -2 & 5 \end{bmatrix} \]
We'll leave it to you to verify that,
\[ AB = \begin{bmatrix} 4 & 12 \\ 2 & 6 \end{bmatrix} = AC \]
Clearly A ≠ 0, and just as clearly B ≠ C, and yet we do have AB = AC. So, at least in this case, the cancellation law does not hold.

We should be careful and not read too much into the results of the previous example. The cancellation law will not be valid in general for matrix multiplication. However, there are times when a variation of the cancellation law will be valid, as we'll see in the next section.

Example 3 Consider the following two matrices.
\[ A = \begin{bmatrix} 3 & 6 \\ 1 & 2 \end{bmatrix} \qquad B = \begin{bmatrix} 2 & -4 \\ -1 & 2 \end{bmatrix} \]
We'll leave it to you to verify that,
\[ AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \]
So, we've got AB = 0 despite the fact that A ≠ 0 and B ≠ 0. So, in this case the zero factor property does not hold.

Now, again, we need to be careful. There are times when we will have a variation of the zero factor property; however there will be no zero factor property for the multiplication of any two random matrices.

The next topic that we need to take a look at is that of powers of matrices. At this point we'll just work with positive exponents. We'll need the next section before we can deal with negative exponents. Let's start off with the following definition.

Definition 1 If A is a square matrix then,
\[ A^0 = I \qquad A^n = \underbrace{A A \cdots A}_{n \text{ times}}, \quad n > 0 \]

We've also got several of the standard integer exponent properties that we are used to working with.

Properties of Matrix Exponents If A is a square matrix and n and m are integers then,
\[ A^n A^m = A^{n+m} \qquad \left(A^n\right)^m = A^{nm} \]
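One small pitfall if you want to experiment with powers on a computer: NumPy's `**` on arrays is elementwise, so a true matrix power needs `np.linalg.matrix_power`. A hedged sketch of mine, using the matrices from Example 3:

```python
import numpy as np

A = np.array([[3, 6],
              [1, 2]])
B = np.array([[2, -4],
              [-1, 2]])

print(A @ B)                          # the zero matrix, even though A, B != 0
A3 = np.linalg.matrix_power(A, 3)     # A*A*A, i.e. A cubed
print(np.array_equal(A3, A @ A @ A))  # True
```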

We can also talk about plugging matrices into polynomials using the following definition. If we have the polynomial,
\[ p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 \]
and A is a square matrix then,
\[ p(A) = a_n A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I \]
where the identity matrix on the constant term a_0 has the same size as A.

Example 4 Evaluate each of the following for the given matrix.
\[ A = \begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} \]
(a) A^2
(b) A^3
(c) p(A) where p(x) = -6x^3 + 10x - 9

Solution
(a) There really isn't much to do with this problem. We'll leave it to you to verify the multiplication.
\[ A^2 = AA = \begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix}\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} 64 & -18 \\ -30 & 16 \end{bmatrix} \]
(b) In this case we may as well take advantage of the fact that we've got the result from the first part already. Again, we'll leave it to you to verify the multiplication.
\[ A^3 = A^2 A = \begin{bmatrix} 64 & -18 \\ -30 & 16 \end{bmatrix}\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} -538 & 174 \\ 290 & -74 \end{bmatrix} \]
(c) In this case we'll need the result from the second part. Outside of that there really isn't much to do here.
\[ p(A) = -6A^3 + 10A - 9I = -6\begin{bmatrix} -538 & 174 \\ 290 & -74 \end{bmatrix} + 10\begin{bmatrix} -7 & 3 \\ 5 & 1 \end{bmatrix} - 9\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 3149 & -1014 \\ -1690 & 445 \end{bmatrix} \]
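Evaluating a polynomial at a matrix is mechanical enough that a short helper is worth sketching. This is my own illustration (the function name `poly_at` is mine, not NumPy's); it uses Horner's rule so the constant term picks up the identity matrix exactly as in the definition above:

```python
import numpy as np

def poly_at(coeffs, A):
    """Evaluate p(A) where coeffs = [a_n, ..., a_1, a_0], highest power first."""
    n = A.shape[0]
    result = np.zeros_like(A)
    for c in coeffs:
        # Horner's rule: multiply by A, then add c*I (same size as A).
        result = result @ A + c * np.eye(n, dtype=A.dtype)
    return result

A = np.array([[-7, 3],
              [5, 1]])
print(poly_at([-6, 0, 10, -9], A))   # p(x) = -6x^3 + 10x - 9; matches Example 4(c)
```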

The last topic in this section that we need to take care of is some quick properties of the transpose of a matrix.

Properties of the Transpose If A and B are matrices whose sizes are such that the given operations are defined and c is any scalar then,
1. (A^T)^T = A
2. (A ± B)^T = A^T ± B^T
3. (cA)^T = cA^T
4. (AB)^T = B^T A^T

The first three of these properties should be fairly obvious from the definition of the transpose. The fourth is a little trickier to see, but isn't that bad to verify.

Proof of #4: We know that the entry in the ith row and jth column of AB is given by,
\[ \left(AB\right)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{ip}b_{pj} \]
We also know that the entry in the ith row and jth column of (AB)^T is found simply by interchanging the subscripts i and j, and so it is,
\[ \left(\left(AB\right)^T\right)_{ij} = \left(AB\right)_{ji} = a_{j1}b_{1i} + a_{j2}b_{2i} + a_{j3}b_{3i} + \cdots + a_{jp}b_{pi} \]
Now, let's denote the entries of A^T and B^T as \(\bar{a}_{ij}\) and \(\bar{b}_{ij}\) respectively. Again, based on the definition of the transpose, we also know that,
\[ A^T = \left[\bar{a}_{ij}\right] = \left[a_{ji}\right] \qquad B^T = \left[\bar{b}_{ij}\right] = \left[b_{ji}\right] \]
and so from this we see that \(\bar{a}_{ij} = a_{ji}\) and \(\bar{b}_{ij} = b_{ji}\). Finally, the entry in the ith row and jth column of B^T A^T is given by,
\[ \left(B^T A^T\right)_{ij} = \bar{b}_{i1}\bar{a}_{1j} + \bar{b}_{i2}\bar{a}_{2j} + \bar{b}_{i3}\bar{a}_{3j} + \cdots + \bar{b}_{ip}\bar{a}_{pj} \]
Now, plugging in for \(\bar{a}_{ij}\) and \(\bar{b}_{ij}\) we get that,
\[ \left(B^T A^T\right)_{ij} = b_{1i}a_{j1} + b_{2i}a_{j2} + b_{3i}a_{j3} + \cdots + b_{pi}a_{jp} = a_{j1}b_{1i} + a_{j2}b_{2i} + \cdots + a_{jp}b_{pi} = \left(\left(AB\right)^T\right)_{ij} \]
So, just what have we done here? We've managed to show that the entry in the ith row and jth column of (AB)^T is equal to the entry in the ith row and jth column of B^T A^T. Therefore, since each of the entries is equal, the matrices must also be equal.

Note that #4 can be naturally extended to more than two matrices. For example,
\[ \left(ABC\right)^T = C^T B^T A^T \]
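Property 4 is the one people most often misremember (as (AB)^T = A^T B^T), so a numeric spot check is worthwhile. A quick hedged sketch of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
# Note that A.T @ B.T isn't even defined here (3x2 times 4x3), a good
# reminder that the order must reverse.
```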

Inverse Matrices and Elementary Matrices

Our main goal in this section is to define inverse matrices and to take a look at some nice properties involving matrices. We won't actually be finding any inverse matrices in this section. That is the topic of the next section.

We'll also take a quick look at elementary matrices which, as we'll see in the next section, we can use to help us find inverse matrices. Actually, that's not totally true. We'll use them to help us devise a method for finding inverse matrices, but we won't be explicitly using them to find the inverse.

So, let's start off with the definition of the inverse matrix.

Definition 1 If A is a square matrix and we can find another matrix of the same size, say B, such that
\[ AB = BA = I \]
then we call A invertible and we say that B is an inverse of the matrix A. If we can't find such a matrix B we call A a singular matrix.

Note that we only talk about inverse matrices for square matrices. Also note that if A is invertible it will on occasion be called non-singular. We should also point out that we could also say that B is invertible and that A is the inverse of B.

Before proceeding we need to show that the inverse of a matrix is unique, that is, for a given invertible matrix A there is exactly one inverse for the matrix.

Theorem 1 Suppose that A is invertible and that both B and C are inverses of A. Then B = C, and we will denote the inverse as A^{-1}.

Proof: Since B is an inverse of A we know that AB = I. Now multiply both sides of this by C to get C(AB) = CI = C. However, by the associative law of matrix multiplication we can also write C(AB) as C(AB) = (CA)B = IB = B. Therefore, putting these two pieces together we see that C = C(AB) = B, or C = B.

So, the inverse for a matrix is unique. To denote this fact we will denote the inverse of the matrix A as A^{-1} from this point on.

Example 1 Given the matrix A, verify that the indicated matrix is in fact the inverse.
\[ A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} \qquad A^{-1} = \frac{1}{10}\begin{bmatrix} 5 & 2 \\ 5 & 4 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} \]
Solution To verify that we do in fact have the inverse we'll need to check that
\[ AA^{-1} = A^{-1}A = I \]
This is easy enough to do, and so we'll leave it to you to verify the multiplication.

\[ AA^{-1} = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad A^{-1}A = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}\begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]

As the definition of an inverse matrix suggests, not every matrix will have an inverse. Here is an example of a matrix without an inverse.

Example 2 The matrix below does not have an inverse.
\[ B = \begin{bmatrix} 3 & -9 & 2 \\ 0 & 0 & 0 \\ -4 & 1 & 7 \end{bmatrix} \]
This is fairly simple to see. If B has an inverse then it must be a 3 x 3 matrix. So, let C be any 3 x 3 matrix,
\[ C = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \]
Now let's think about the product BC. We know that the 2nd row of BC can be found by looking at the following matrix multiplication,
\[ \text{2nd row of } BC = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix} \]
So, the second row of BC is [0 0 0], but if C is to be the inverse of B the product BC must be the identity matrix, and this means that the second row must in fact be [0 1 0].

Now, C was a general 3 x 3 matrix, and we've shown that the second row of BC is all zeroes, and hence the product will never be the identity matrix, and so B can't have an inverse and so is a singular matrix.

In the previous section we introduced the idea of matrix exponentiation. However, we needed to restrict ourselves to positive exponents. We can now take a look at negative exponents.

Definition 2 If A is a square matrix and n > 0 then,
\[ A^{-n} = \left(A^{-1}\right)^n = \underbrace{A^{-1}A^{-1}\cdots A^{-1}}_{n \text{ times}} \]
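In NumPy the inverse (when it exists) comes from `np.linalg.inv`, and a singular matrix raises an error. A hedged sketch of mine, using the matrices from the two examples above:

```python
import numpy as np

A = np.array([[4., -2.],
              [-5., 5.]])
Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))   # True: A A^{-1} = I
print(np.allclose(Ainv @ A, np.eye(2)))   # True: A^{-1} A = I

B = np.array([[3., -9., 2.],
              [0., 0., 0.],
              [-4., 1., 7.]])
try:
    np.linalg.inv(B)                      # a row of zeroes means no inverse
except np.linalg.LinAlgError as e:
    print("B is singular:", e)
```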

Example 3 Compute A^{-2} for the matrix,
\[ A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} \]
Solution From Example 1 we know that the inverse of A is,
\[ A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} \]
So, this is easy enough to compute.
\[ A^{-2} = \left(A^{-1}\right)^2 = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} \tfrac{7}{20} & \tfrac{9}{50} \\ \tfrac{9}{20} & \tfrac{13}{50} \end{bmatrix} \]

Next, let's take a quick look at some nice facts about the inverse matrix.

Theorem 2 Suppose that A and B are invertible matrices of the same size. Then,
(a) AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.
(b) A^{-1} is invertible and (A^{-1})^{-1} = A.
(c) For n = 0, 1, 2, ..., A^n is invertible and (A^n)^{-1} = A^{-n} = (A^{-1})^n.
(d) If c is any non-zero scalar then cA is invertible and (cA)^{-1} = (1/c)A^{-1}.
(e) A^T is invertible and (A^T)^{-1} = (A^{-1})^T.

Proof: Note that in each case, in order to prove that the given matrix is invertible, all we need to do is show that the inverse is what we claim it to be. Also, don't get excited about showing that the inverse is what we claim it to be. In these cases all we need to do is show that the product (both left and right product) of the given matrix and what we claim is the inverse is the identity matrix. That's it.

Also, do not get excited about the inverse notation. For example, in the first one we state that (AB)^{-1} = B^{-1}A^{-1}. Remember that (AB)^{-1} is just the notation that we use to denote the inverse of AB. This notation will not be used in the proof except in the final step to denote the inverse.

(a) Now, as suggested above, showing this is not really all that difficult. All we need to do is show that (AB)(B^{-1}A^{-1}) = I and (B^{-1}A^{-1})(AB) = I. Here is that work.

\[ (AB)\left(B^{-1}A^{-1}\right) = A\left(BB^{-1}\right)A^{-1} = AIA^{-1} = AA^{-1} = I \]
\[ \left(B^{-1}A^{-1}\right)(AB) = B^{-1}\left(A^{-1}A\right)B = B^{-1}IB = B^{-1}B = I \]
So, we've shown both, and so we now know that AB is in fact invertible (since we've found the inverse!) and that (AB)^{-1} = B^{-1}A^{-1}.

(b) Now, we know from the fact that A is invertible that
\[ AA^{-1} = A^{-1}A = I \]
But this is telling us that if we multiply A^{-1} by A on both sides then we'll get the identity matrix. But this is exactly what we need to show that A^{-1} is invertible and that its inverse is A.

(c) The best way to prove this part is by a proof technique called induction. However, there's a chance that a good many of you don't know that, and that isn't the point of this class. Luckily, for this part anyway, we can at least outline another way to prove this. To officially prove this part we'll need to show that (A^n)((A^{-1})^n) = ((A^{-1})^n)(A^n) = I. We'll show one of the products and leave the other to you to verify since the work is pretty much identical.
\[ \left(A^n\right)\left(\left(A^{-1}\right)^n\right) = \underbrace{AA\cdots A}_{n \text{ times}}\ \underbrace{A^{-1}A^{-1}\cdots A^{-1}}_{n \text{ times}} = \underbrace{AA\cdots A}_{n-1 \text{ times}}\left(AA^{-1}\right)\underbrace{A^{-1}\cdots A^{-1}}_{n-1 \text{ times}} \]
but AA^{-1} = I, so,
\[ = \underbrace{AA\cdots A}_{n-1 \text{ times}}\ \underbrace{A^{-1}A^{-1}\cdots A^{-1}}_{n-1 \text{ times}} = \cdots = AA^{-1} = I \]
Again, we'll leave the second product to you to verify, but the work is identical. After doing this product we can see that A^n is invertible and (A^n)^{-1} = A^{-n} = (A^{-1})^n.

(d) To prove this part we'll need to show that (cA)((1/c)A^{-1}) = ((1/c)A^{-1})(cA) = I. As with the last part we'll do half the work and leave the other half to you to verify.
\[ (cA)\left(\tfrac{1}{c}A^{-1}\right) = \left(c \cdot \tfrac{1}{c}\right)\left(AA^{-1}\right) = (1)(I) = I \]

Upon doing the second product we can see that cA is invertible and (cA)^{-1} = (1/c)A^{-1}.

(e) This part will require us to show that (A^T)(A^{-1})^T = (A^{-1})^T(A^T) = I, and in keeping with the tradition of the last couple parts we'll do the first one and leave the second one to you to verify. This one is a little tricky at first, but once you realize the correct formula to use it's not too bad.

Let's start with (A^T)(A^{-1})^T and then remember that (CD)^T = D^T C^T. Using this fact (backwards) on (A^T)(A^{-1})^T gives us,
\[ \left(A^T\right)\left(A^{-1}\right)^T = \left(A^{-1}A\right)^T = I^T = I \]
Note that we used the fact that I^T = I here, which we'll leave to you to verify. So, upon showing the second product we'll have that A^T is invertible and (A^T)^{-1} = (A^{-1})^T.

Note that the first part of this theorem can be easily extended to more than two matrices as follows,
\[ \left(ABC\right)^{-1} = C^{-1}B^{-1}A^{-1} \]

Now, in the previous section we saw that in general we don't have a cancellation law or a zero factor property. However, if we restrict ourselves just a little we can get variations of both of these.

Theorem 3 Suppose that A is an invertible matrix and that B, C, and D are matrices of the same size as A.
(a) If AB = AC then B = C
(b) If AD = 0 then D = 0

Proof:
(a) Since we know that A is invertible we know that A^{-1} exists, so multiply on the left by A^{-1} to get,
\[ A^{-1}AB = A^{-1}AC \quad\Rightarrow\quad IB = IC \quad\Rightarrow\quad B = C \]
(b) Again, we know that A^{-1} exists, so multiply on the left by A^{-1} to get,
\[ A^{-1}AD = A^{-1}0 \quad\Rightarrow\quad ID = 0 \quad\Rightarrow\quad D = 0 \]

Note that this theorem only required that A be invertible; it is completely possible that the other matrices are singular.
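Part (a) of Theorem 2 is another order-sensitive identity that is easy to confirm numerically. A quick hedged check of mine:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 3.],
              [0., 2.]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
print(np.allclose(lhs, rhs))                # True: (AB)^{-1} = B^{-1} A^{-1}
```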

Note as well, with the first one, that we've got to remember that matrix multiplication is not commutative, and so if we have AB = CA then there is no reason to think that B = C, even if A is invertible. Because we don't know that CA = AC we've got to leave this as is. Also, when we multiply both sides of the equation by A^{-1}, we've got to multiply each side on the left or each side on the right, which is again because we don't have the commutative law with matrix multiplication. So, if we tried the above proof on AB = CA we'd have,
\[ A^{-1}AB = A^{-1}CA \quad\Rightarrow\quad B = A^{-1}CA \qquad \text{OR} \qquad ABA^{-1} = CAA^{-1} \quad\Rightarrow\quad ABA^{-1} = C \]
In either case we don't have B = C.

Okay, it is now time to take a quick look at elementary matrices.

Definition 3 A square matrix is called an elementary matrix if it can be obtained by applying a single elementary row operation to the identity matrix of the same size.

Here are some examples of elementary matrices and the row operations that produced them.

Example 4 The following matrices are all elementary matrices. Also given is the row operation on the appropriately sized identity matrix that produced them.
\[ \begin{bmatrix} -9 & 0 \\ 0 & 1 \end{bmatrix} \qquad -9R_1 \text{ on } I_2 \]
\[ \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \qquad R_1 \leftrightarrow R_4 \text{ on } I_4 \]
\[ \begin{bmatrix} 1 & 0 & 7 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad R_1 + 7R_3 \text{ on } I_3 \]
\[ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad 1 \cdot R_2 \text{ on } I_3 \]
Note that the fourth example above shows that any identity matrix is also an elementary matrix, since we can think of arriving at that matrix by taking one times any row (not just the second as we used) of the identity matrix.
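Elementary matrices are easy to build in code: start from an identity and apply one row operation. Here is a hedged sketch of mine (the helper names are my own) that reproduces the operations used in Example 4; note that the rows are counted from 0:

```python
import numpy as np

def scale_row(n, i, c):
    """Elementary matrix for 'multiply row i by c'."""
    E = np.eye(n)
    E[i, :] *= c
    return E

def swap_rows(n, i, j):
    """Elementary matrix for 'interchange rows i and j'."""
    E = np.eye(n)
    E[[i, j], :] = E[[j, i], :]
    return E

def add_multiple(n, i, j, c):
    """Elementary matrix for 'add c times row j to row i'."""
    E = np.eye(n)
    E[i, :] += c * E[j, :]
    return E

print(scale_row(2, 0, -9))        # -9 R1 on I_2
print(swap_rows(4, 0, 3))         # R1 <-> R4 on I_4
print(add_multiple(3, 0, 2, 7))   # R1 + 7 R3 on I_3
```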

Here's a really nice theorem about elementary matrices that we'll be using extensively to develop a method for finding the inverse of a matrix.

Theorem 4 Suppose E is an elementary matrix that was found by applying an elementary row operation to I_n. Then if A is an n x m matrix, EA is the matrix that will result by applying the same row operation to A.

Example 5 For the following matrix perform the row operation R_2 + 4R_1 on it, then find the elementary matrix, E, for this operation and verify that EA will give the same result.
\[ A = \begin{bmatrix} 1 & 2 & 0 & -1 \\ 2 & 0 & 3 & 5 \\ 4 & 1 & 1 & 0 \end{bmatrix} \]
Solution Performing the row operation is easy enough.
\[ \begin{bmatrix} 1 & 2 & 0 & -1 \\ 2 & 0 & 3 & 5 \\ 4 & 1 & 1 & 0 \end{bmatrix} \xrightarrow{R_2 + 4R_1} \begin{bmatrix} 1 & 2 & 0 & -1 \\ 6 & 8 & 3 & 1 \\ 4 & 1 & 1 & 0 \end{bmatrix} \]
Now, we can find E simply by applying the same operation to I_3, and so we have,
\[ E = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]
We just need to verify that EA is the same matrix that we got above.
\[ EA = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 0 & -1 \\ 2 & 0 & 3 & 5 \\ 4 & 1 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 0 & -1 \\ 6 & 8 & 3 & 1 \\ 4 & 1 & 1 & 0 \end{bmatrix} \]
Sure enough, the same matrix as the theorem predicted.

Now, let's go back to Example 4 for a second and notice that we can apply a second row operation to get the given elementary matrix back to the original identity matrix.

Example 6 Give the operation that will take the elementary matrices from Example 4 back to the original identity matrix.
\[ \begin{bmatrix} -9 & 0 \\ 0 & 1 \end{bmatrix} \xrightarrow{-\frac{1}{9}R_1} I_2 \qquad \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_4} I_4 \]

\[ \begin{bmatrix} 1 & 0 & 7 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{R_1 - 7R_3} I_3 \qquad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \xrightarrow{1 \cdot R_2} I_3 \]

These kinds of operations are called inverse operations, and each row operation will have an inverse operation associated with it. The following table gives the inverse operation for each row operation.

Row operation                      Inverse operation
Multiply row i by c ≠ 0            Multiply row i by 1/c
Interchange rows i and j           Interchange rows i and j
Add c times row i to row j         Add -c times row i to row j

Now that we've got inverse operations, we can give the following theorem.

Theorem 5 Suppose that E is the elementary matrix associated with a particular row operation and that E_0 is the elementary matrix associated with the inverse operation. Then E is invertible and
\[ E^{-1} = E_0 \]

Proof: This is actually a really simple proof. Let's start with E_0 E. We know from Theorem 4 that this is the same as if we'd applied the inverse operation to E, but we also know that inverse operations will take an elementary matrix back to the original identity matrix. Therefore we have,
\[ E_0 E = I \]
Likewise, if we look at E E_0 this will be the same as applying the original row operation to E_0. However, if you think about it, this will only undo what the inverse operation did to the identity matrix, and so we also have,
\[ E E_0 = I \]
Therefore, we've proved that E E_0 = E_0 E = I, and so E is invertible and E^{-1} = E_0.

Now, suppose that we've got two matrices of the same size, A and B. If we can reach B by applying a finite number of row operations to A then we call the two matrices row equivalent. Note that this will also mean that we can reach A from B by applying the inverse operations in the reverse order.

Example 7 Consider
\[ A = \begin{bmatrix} -2 & 1 \\ 5 & 8 \end{bmatrix} \]
Then
\[ B = \begin{bmatrix} 2 & -1 \\ 9 & 6 \end{bmatrix} \]
is row equivalent to A because we reached B by first multiplying row 1 of A by -1 and then adding 2 times row 1 onto row 2. For the practice let's do these operations using elementary matrices. Here are the elementary matrices (and their inverses) for the operations on A.
\[ -1 \cdot R_1: \quad E_1 = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \qquad E_1^{-1} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \]
\[ R_2 + 2R_1: \quad E_2 = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \qquad E_2^{-1} = \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix} \]
Now, to reach B, Theorem 4 tells us that we need to multiply the left side of A by each of these in the same order as we applied the operations.
\[ E_2 E_1 A = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} -2 & 1 \\ 5 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 2 & -1 \\ 5 & 8 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 9 & 6 \end{bmatrix} = B \]
Sure enough we get B, as we should. Now, since A and B are row equivalent, this means that we should be able to get to A from B by applying the inverse operations in the reverse order. Let's see if that does in fact work.
\[ E_1^{-1} E_2^{-1} B = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 2 & -1 \\ 9 & 6 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & -1 \\ 5 & 8 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 5 & 8 \end{bmatrix} = A \]
So, we sure enough end up with the correct matrix, and again remember that each time we multiplied the left side by an elementary matrix, Theorem 4 tells us that is the same thing as applying the associated row operation to the matrix.
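The round trip in Example 7 can be checked in a few lines of code. A self-contained hedged sketch of mine, with the same matrices as the example:

```python
import numpy as np

A = np.array([[-2., 1.],
              [5., 8.]])
E1 = np.array([[-1., 0.], [0., 1.]])   # multiply row 1 by -1
E2 = np.array([[1., 0.], [2., 1.]])    # add 2 times row 1 to row 2

B = E2 @ E1 @ A                                     # forward: ops in order
back = np.linalg.inv(E1) @ np.linalg.inv(E2) @ B    # inverse ops, reverse order
print(B)
print(np.allclose(back, A))                         # True: we recover A
```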

Finding Inverse Matrices

In the previous section we introduced the idea of inverse matrices and elementary matrices. In this section we need to devise a method for actually finding the inverse of a matrix, and as we'll see, this method will, in some way, involve elementary matrices, or at least the row operations that they represent.

The first thing that we'll need to do is take care of a couple of theorems.

Theorem 1 If A is an n x n matrix then the following statements are equivalent.
(a) A is invertible.
(b) The only solution to the system Ax = 0 is the trivial solution.
(c) A is row equivalent to I_n.
(d) A is expressible as a product of elementary matrices.

Before we get into the proof let's say a couple of words about just what this theorem tells us and how we go about proving something like this. First, when we have a set of statements and we say that they are equivalent, what we're really saying is that either they are all true or they are all false. In other words, if you know one of these statements is true about a matrix A then they are all true for that matrix. Likewise, if one of these statements is false for a matrix A then they are all false for that matrix.

To prove a set of equivalent statements we need to prove a string of implications. This string has to be able to get from any one statement to any other through a finite number of steps. In this case we'll prove the following chain: (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a). By doing this, if we know one of them to be true/false then we can follow this chain to get to any of the others.

The actual proof will involve four parts, one for each implication. To prove a given implication we'll assume the statement on the left is true and show that this must in some way also force the statement on the right to be true. So, let's get going.

Proof:
(a) ⇒ (b): So we'll assume that A is invertible, and we need to show that this assumption also implies that Ax = 0 will have only the trivial solution. That's actually pretty easy to do. Since A is invertible we know that A^{-1} exists. So, start by assuming that x_0 is any solution to the system, plug this into the system, and then multiply (on the left) both sides by A^{-1} to get,
\[ A^{-1}A\mathbf{x}_0 = A^{-1}\mathbf{0} \quad\Rightarrow\quad I\mathbf{x}_0 = \mathbf{0} \quad\Rightarrow\quad \mathbf{x}_0 = \mathbf{0} \]
So, Ax = 0 has only the trivial solution, and we've managed to prove this implication.

(b) ⇒ (c): Here we're assuming that Ax = 0 will have only the trivial solution, and we'll need to show that A is row equivalent to I_n. Recall that two matrices are row equivalent if we can get from one to the other by applying a finite set of elementary row operations.

Let's start off by writing down the augmented matrix for this system.
\[ \left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 \end{array}\right] \]
Now, if we were going to solve this we would use elementary row operations to reduce this to reduced row-echelon form. Now we know that the solution to this system must be,
\[ x_1 = 0, \quad x_2 = 0, \quad \ldots, \quad x_n = 0 \]
by assumption. Therefore, we also know what the reduced row-echelon form of the augmented matrix must be, since that must give the above solution. The reduced row-echelon form of this augmented matrix must be,
\[ \left[\begin{array}{cccc|c} 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{array}\right] \]
Now, the entries in the last column do not affect the values of the entries in the first n columns, and so if we take the same set of elementary row operations and apply them to A we will get I_n, and so A is row equivalent to I_n, since we can get to I_n by applying a finite set of row operations to A. Therefore this implication has been proven.

(c) ⇒ (d): In this case we're going to assume that A is row equivalent to I_n, and we'll need to show that A can be written as a product of elementary matrices. So, since A is row equivalent to I_n, we know there is a finite set of elementary row operations that we can apply to A that will give us I_n. Let's suppose that these row operations are represented by the elementary matrices E_1, E_2, ..., E_k. Then by Theorem 4 of the previous section we know that applying each row operation to A is the same thing as multiplying the left side of A by each of the corresponding elementary matrices in the same order. So, we then know that we will have the following.
\[ E_k \cdots E_2 E_1 A = I_n \]
Now, by Theorem 5 from the previous section, we know that each of these elementary matrices is invertible and their inverses are also elementary matrices. So multiply the above equation (on the left) by E_k^{-1}, ..., E_2^{-1}, E_1^{-1} (in that order) to get,
\[ A = E_1^{-1}E_2^{-1}\cdots E_k^{-1} I_n = E_1^{-1}E_2^{-1}\cdots E_k^{-1} \]
So, we see that A is a product of elementary matrices, and this implication is proven.

(d) ⇒ (a): Here we'll be assuming that A is a product of elementary matrices, and we need to show that A is invertible. This is probably the easiest implication to prove.

First, A is a product of elementary matrices. Now, by Theorem 5 from the previous section we know each of these elementary matrices is invertible, and by Theorem 2(a), also from the previous section, we know that a product of invertible matrices is also invertible. Therefore, A is invertible, since it can be written as a product of invertible matrices, and we've proven this implication.

This theorem can actually be extended to include a couple more equivalent statements, but to do that we need another theorem.

Theorem 2 Suppose that A is a square matrix then,
(a) If B is a square matrix such that BA = I then A is invertible and A^{-1} = B.
(b) If B is a square matrix such that AB = I then A is invertible and A^{-1} = B.

Proof:
(a) This proof will need part (b) of Theorem 1. If we can show that Ax = 0 has only the trivial solution then by Theorem 1 we will know that A is invertible. So, let x_0 be any solution to Ax = 0. Plug this into the equation and then multiply both sides (on the left) by B.
\[ A\mathbf{x}_0 = \mathbf{0} \quad\Rightarrow\quad BA\mathbf{x}_0 = B\mathbf{0} \quad\Rightarrow\quad I\mathbf{x}_0 = \mathbf{0} \quad\Rightarrow\quad \mathbf{x}_0 = \mathbf{0} \]
So, this shows that any solution to Ax = 0 must be the trivial solution, and so by Theorem 1, if one statement is true they all are, and so A is invertible. We know from the previous section that inverses are unique, and because BA = I we must then also have A^{-1} = B.

(b) In this case let's let x_0 be any solution to Bx = 0. Then, multiplying both sides (on the left) of this by A, we can use a similar argument to that used in (a) to show that x_0 must be the trivial solution, and so B is an invertible matrix and in fact B^{-1} = A. Now, this isn't quite what we were asked to prove, but it does in fact give us the proof. Because B is invertible and its inverse is A (by the above work) we know that,
\[ AB = BA = I \]
but this is exactly what it means for A to be invertible and A^{-1} = B. So, we are done.

So, what's the big deal with this theorem? Well, recall from the last section that in order to show that a matrix, B, was the inverse of A we needed to show that AB = BA = I. In other words, we needed to show that both of these products were the identity matrix. Theorem 2 tells us that all we really need to do is show one of them and we get the other one for free.

What this theorem gives us is the ability to add two equivalent statements to Theorem 1. Here is the improved Theorem 1.

Theorem 3 If A is an n x n matrix then the following statements are equivalent.
(a) A is invertible.
(b) The only solution to the system Ax = 0 is the trivial solution.
(c) A is row equivalent to I_n.
(d) A is expressible as a product of elementary matrices.
(e) Ax = b has exactly one solution for every n x 1 matrix b.
(f) Ax = b is consistent for every n x 1 matrix b.

Note that (e) and (f) appear to be the same on the surface, but recall that consistent only says that there is at least one solution. If a system is consistent there may be infinitely many solutions. What this part is telling us is that if the system is consistent for any choice of b that we choose to put into the system, then we will in fact only get a single solution. If even one b gives infinitely many solutions then (f) is false, which in turn makes all the other statements false.

Okay, so how do we go about proving this? We've already proven that the first four statements are equivalent above, so there's no reason to redo that work. This means that all we need to do is prove that one of the original statements implies the two new statements and that these in turn imply one of the four original statements. We'll do this by proving the following implications: (a) ⇒ (e) ⇒ (f) ⇒ (a).

Proof:
(a) ⇒ (e): Okay, with this implication we'll assume that A is invertible, and we'll need to show that Ax = b has exactly one solution for every n x 1 matrix b. This is actually very simple to do. Since A is invertible we know that A^{-1} exists, so we'll do the following.
\[ A^{-1}A\mathbf{x} = A^{-1}\mathbf{b} \quad\Rightarrow\quad I\mathbf{x} = A^{-1}\mathbf{b} \quad\Rightarrow\quad \mathbf{x} = A^{-1}\mathbf{b} \]
So, if A is invertible we've shown that the solution to the system will be x = A^{-1}b, and since matrix multiplication is unique (i.e. we aren't going to get two different answers from the multiplication) the solution must also be unique, and so there is exactly one solution to the system.

(e) ⇒ (f): This implication is trivial. We'll start off by assuming that the system Ax = b has exactly one solution for every n x 1 matrix b, but that also means that the system is consistent for every n x 1 matrix b, and so we're done with the proof of this implication.

(f) ⇒ (a): Here we'll start off by assuming that Ax = b is consistent for every n x 1 matrix b, and we'll need to show that this implies A is invertible. So, if Ax = b is consistent for every n x 1 matrix b, it is consistent for the following n systems.

\[ A\mathbf{x} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad A\mathbf{x} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} \qquad \cdots \qquad A\mathbf{x} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \]
Since we know each of these systems has a solution, let x_1, x_2, ..., x_n be those solutions and form a new matrix, B, with these solutions as its columns. In other words,
\[ B = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \end{bmatrix} \]
Now let's take a look at the product AB. We know from the matrix arithmetic section that the ith column of AB will be given by Ax_i, and we know what each of these products will be, since x_i is a solution to one of the systems above. So, let's use all this knowledge to see what the product AB is.
\[ AB = \begin{bmatrix} A\mathbf{x}_1 & A\mathbf{x}_2 & \cdots & A\mathbf{x}_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I \]
So, we've shown that AB = I, but by Theorem 2 this means that A must be invertible, and so we're done with the proof.

Before proceeding, let's notice that part (c) of this theorem is also telling us that if we reduce A down to reduced row-echelon form then we'd have I_n. This can also be seen in the proof of the implication (b) ⇒ (c) in Theorem 1.

So, just how does this theorem help us to determine the inverse of a matrix? Well, first let's assume that A is in fact invertible, and so all the statements in Theorem 3 are true. Now, go back to the proof of the implication (c) ⇒ (d). In this proof we saw that there were elementary matrices, E_1, E_2, ..., E_k, so that we'd get the following,
\[ E_k \cdots E_2 E_1 A = I_n \]
Since we know A is invertible we know that A^{-1} exists, so multiply (on the right) each side of this by A^{-1} to get,
\[ E_k \cdots E_2 E_1 A A^{-1} = I_n A^{-1} \quad\Rightarrow\quad A^{-1} = E_k \cdots E_2 E_1 I_n \]
What this tells us is that we need to find a series of row operations that will reduce A to I_n and then apply the same set of operations to I_n; the result will be the inverse, A^{-1}.

Okay, all this is fine. We can write down a bunch of symbols to tell us how to find the inverse, but that doesn't always help to actually find the inverse. The work above tells us that we need to identify a series of elementary row operations that will reduce A to I_n and then apply those operations to I_n. Well, it turns out that we can do both of these steps simultaneously, and we don't need to mess around with the elementary matrices.

Let's start off by supposing that A is an invertible n x n matrix and then form the following new matrix.
\[ \left[\begin{array}{c|c} A & I_n \end{array}\right] \]
Note that all we did here was tack I_n onto the original matrix A. Now, if we apply a row operation to this it will be equivalent to applying it simultaneously to both A and to I_n. So, all we need to do is find a series of row operations that will reduce the A portion of this to I_n, making sure to apply the operations to the whole matrix. Once we've done this we will have,
\[ \left[\begin{array}{c|c} I_n & A^{-1} \end{array}\right] \]
provided A is in fact invertible, of course. We'll deal with singular matrices in a bit.

Let's take a look at a couple of examples.

Example 1 Determine the inverse of the following matrix given that it is invertible.
\[ A = \begin{bmatrix} 4 & -2 \\ -5 & 5 \end{bmatrix} \]
Solution Note that this is the matrix we looked at in Example 1 of the previous section. In that example we stated (and proved) that the inverse was,
\[ A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} \]
We can now show how we arrived at this for the inverse. We'll first form the new matrix.
\[ \left[\begin{array}{cc|cc} 4 & -2 & 1 & 0 \\ -5 & 5 & 0 & 1 \end{array}\right] \]
Next we'll find row operations that will convert the first two columns into I_2, and the third and fourth columns should then contain A^{-1}. Here is that work,
\[ \left[\begin{array}{cc|cc} 4 & -2 & 1 & 0 \\ -5 & 5 & 0 & 1 \end{array}\right] \xrightarrow{R_1 + R_2} \left[\begin{array}{cc|cc} -1 & 3 & 1 & 1 \\ -5 & 5 & 0 & 1 \end{array}\right] \xrightarrow{R_2 - 5R_1} \left[\begin{array}{cc|cc} -1 & 3 & 1 & 1 \\ 0 & -10 & -5 & -4 \end{array}\right] \]
\[ \xrightarrow{-R_1,\ -\frac{1}{10}R_2} \left[\begin{array}{cc|cc} 1 & -3 & -1 & -1 \\ 0 & 1 & \tfrac{1}{2} & \tfrac{2}{5} \end{array}\right] \xrightarrow{R_1 + 3R_2} \left[\begin{array}{cc|cc} 1 & 0 & \tfrac{1}{2} & \tfrac{1}{5} \\ 0 & 1 & \tfrac{1}{2} & \tfrac{2}{5} \end{array}\right] \]
So, the first two columns are in fact I_2, and in the third and fourth columns we've got the inverse,
\[ A^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{5} \\ \tfrac{1}{2} & \tfrac{2}{5} \end{bmatrix} \]
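The [A | I] procedure is exactly what a small Gauss-Jordan routine does, so it is a natural thing to code up. Here is a hedged sketch of mine (a simplified implementation with partial pivoting added for numerical safety, not the notes' hand procedure verbatim; `inverse_via_row_ops` is my own name):

```python
import numpy as np

def inverse_via_row_ops(A):
    """Reduce [A | I] until the left half is I; the right half is then A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # the augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # pick the best pivot row
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")     # no usable pivot in this column
        M[[col, pivot]] = M[[pivot, col]]              # interchange rows
        M[col] /= M[col, col]                          # scale to get a leading 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]             # zero out the rest of the column
    return M[:, n:]

A = np.array([[4, -2],
              [-5, 5]])
print(inverse_via_row_ops(A))   # [[0.5, 0.2], [0.5, 0.4]], matching Example 1
```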

Example 2 Determine the inverse of the following matrix given that it is invertible.
\[ C = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 3 \\ 1 & 0 & 8 \end{bmatrix} \]
Solution Okay, we'll first form the new matrix,
\[ \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 2 & 5 & 3 & 0 & 1 & 0 \\ 1 & 0 & 8 & 0 & 0 & 1 \end{array}\right] \]
and we'll use elementary row operations to reduce the first three columns to I_3, and then the last three columns will be the inverse of C. Here is that work.
\[ \xrightarrow{\substack{R_2 - 2R_1 \\ R_3 - R_1}} \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & -2 & 5 & -1 & 0 & 1 \end{array}\right] \xrightarrow{R_3 + 2R_2} \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & -1 & -5 & 2 & 1 \end{array}\right] \]
\[ \xrightarrow{-R_3} \left[\begin{array}{ccc|ccc} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{array}\right] \xrightarrow{\substack{R_2 + 3R_3 \\ R_1 - 3R_3}} \left[\begin{array}{ccc|ccc} 1 & 2 & 0 & -14 & 6 & 3 \\ 0 & 1 & 0 & 13 & -5 & -3 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{array}\right] \]
\[ \xrightarrow{R_1 - 2R_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -40 & 16 & 9 \\ 0 & 1 & 0 & 13 & -5 & -3 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{array}\right] \]
So, we've gotten the first three columns reduced to I_3, and that means the last three must be the inverse.
\[ C^{-1} = \begin{bmatrix} -40 & 16 & 9 \\ 13 & -5 & -3 \\ 5 & -2 & -1 \end{bmatrix} \]
We'll leave it to you to verify that CC^{-1} = C^{-1}C = I.

Okay, so far we've seen how to use this method to determine an inverse, but what happens if a matrix doesn't have an inverse? Well, it turns out that we can also use this method to determine that as well, and it generally doesn't take quite as much work as it does to actually find the inverse (if it exists, of course). Let's take a look at an example of that.

Example 3 Show that the following matrix does not have an inverse, i.e. show the matrix is singular.
\[ B = \begin{bmatrix} 1 & -1 & 2 \\ 2 & -1 & 3 \\ 1 & 0 & 1 \end{bmatrix} \]
Solution Okay, the problem statement says that the matrix is singular, but let's pretend that we didn't know that and work the problem as we did in the previous two examples. That means we'll need the new matrix,
\[ \left[\begin{array}{ccc|ccc} 1 & -1 & 2 & 1 & 0 & 0 \\ 2 & -1 & 3 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \]
Now, let's get started on getting the first three columns reduced to I_3.
\[ \xrightarrow{\substack{R_2 - 2R_1 \\ R_3 - R_1}} \left[\begin{array}{ccc|ccc} 1 & -1 & 2 & 1 & 0 & 0 \\ 0 & 1 & -1 & -2 & 1 & 0 \\ 0 & 1 & -1 & -1 & 0 & 1 \end{array}\right] \xrightarrow{R_3 - R_2} \left[\begin{array}{ccc|ccc} 1 & -1 & 2 & 1 & 0 & 0 \\ 0 & 1 & -1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 1 & -1 & 1 \end{array}\right] \]
At this point let's stop and examine the third row in a little more detail. In order for the first three columns to be I_3, the first three entries of the last row MUST be [0 0 1], which we clearly don't have. We could add a multiple of row 1 or row 2 to row 3 to get a 1 in the third spot, but that would in turn change at least one of the first two entries away from 0. That's a problem, since they must remain zeroes. In other words, there is no way to make the third entry in the third row a 1 without also changing one or both of the first two entries into something other than zero, and so we will never be able to make the first three columns into I_3.

So, there is no set of row operations that will reduce B to I_3, and hence B is NOT row equivalent to I_3. Now, go back to Theorem 3. This was a set of equivalent statements, and if one is false they are all false. We've just managed to show that part (c) is false, and that means that part (a) must also be false. Therefore, B must be a singular matrix.

The idea used in this last example to show that B was singular can be used in general. If, in the course of reducing the new matrix, we ever end up with a row in which all the entries to the left of the dashed line are zeroes, we will know that the matrix must be singular.

72 above method is ice i that it always works, but it ca be cumbersome to use so the followig formula ca help to make thigs go quicker for matrices. Theorem 4 The matrix a b A = c d will be ivertible if ad bc 0 ad sigular if ad bc = 0. If the matrix is ivertible its iverse will be, d b A = ad bc c a Let s do a quick example or two of this fact. Example 4 Use the fact to show that 4 A = 5 5 is a ivertible matrix ad fid its iverse. Solutio We ve already looked at this oe above, but let s do it here so we ca cotrast the work betwee the two methods. First, we eed, ad bc = =0 0 ( )( ) ( )( ) So, the matrix is i fact ivertible by the fact ad here is the iverse, A 5 5 = = Example 5 Determie if the followig matrix is sigular. 4 B = 6 Solutio Not much to do with this oe. ( 4)( ) ( )( 6) = 0 So, by the fact the matrix is sigular. If you d like to see a couple more example of fidig iverses check out the sectio o Special Matrices, there are a couple more examples there. 007 Paul Dawkis 67
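Theorem 4 is simple enough to turn directly into code. Here is a small sketch (ours, not from the notes) that applies the 2 x 2 formula and flags the singular case:

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 2x2 formula; fails when ad - bc = 0."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is singular")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2(2, 4, 5, 5))    # Example 4: [[-0.5, 0.4], [0.5, -0.2]]
# inverse_2x2(4, 2, 6, 3) would raise, since (4)(3) - (2)(6) = 0 (Example 5)
```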

Special Matrices

This section is devoted to a couple of special matrices that we could have talked about pretty much anywhere, but due to the desire to keep most of these sections as small as possible they just didn't fit in anywhere. However, we'll need a couple of these in the next section and so we now need to get them out of the way.

Diagonal Matrix
The first one that we're going to take a look at is a diagonal matrix. A square matrix is called diagonal if it has the following form.

    D = [ d_1   0   ...   0  ]
        [  0   d_2  ...   0  ]
        [ ...            ... ]
        [  0    0   ...  d_n ]

In other words, a diagonal matrix is any matrix in which the only potentially non-zero entries are on the main diagonal. Any entry off the main diagonal must be zero, and note that it is possible to have one or more of the main diagonal entries be zero.

We've also been dealing with a diagonal matrix already to this point if you think about it a little. The identity matrix is a diagonal matrix.

Here is a nice theorem about diagonal matrices.

Theorem 1 Suppose D is a diagonal matrix and d_1, d_2, ..., d_n are the entries on the main diagonal. If one or more of the d_i's are zero then the matrix is singular. On the other hand, if d_i ≠ 0 for all i then the matrix is invertible and the inverse is,

    D^{-1} = [ 1/d_1    0    ...    0   ]
             [   0    1/d_2  ...    0   ]
             [  ...                ...  ]
             [   0      0    ...  1/d_n ]

Proof: First, recall the theorem from the previous section. That theorem tells us that if D is row equivalent to the identity matrix then D is invertible, and if D is not row equivalent to the identity then D is singular.

If none of the d_i's are zero then we can reduce D to the identity simply by dividing each of the rows by its diagonal entry (which we can do since we've assumed none of them are zero), and so in this case D will be row equivalent to the identity. Therefore, in this case D is invertible. We'll leave it to you to verify that the inverse is what we claim it to be. You can either compute this directly using the method from the previous section or you can verify that D D^{-1} = D^{-1} D = I.

Now, suppose that at least one of the d_i is equal to zero. In this case we will have a row of all zeroes, and because D is a diagonal matrix all the entries above the main diagonal entry in this row will also be zero, and so there is no way for us to use elementary row operations to put a 1 into the main diagonal. So in this case D will not be row equivalent to the identity and hence must be singular.

Powers of diagonal matrices are also easy to compute. If D is a diagonal matrix and k is any integer then

    D^k = [ d_1^k    0    ...    0   ]
          [   0    d_2^k  ...    0   ]
          [  ...                ...  ]
          [   0      0    ...  d_n^k ]

(A quick numerical check of Theorem 1 and this powers formula is given below, after the general triangular forms.)

Triangular Matrix
The next kind of matrix we want to take a look at will be triangular matrices. In fact there are actually two kinds of triangular matrix. For an upper triangular matrix the matrix must be square, all the entries below the main diagonal are zero, and the main diagonal entries and the entries above it may or may not be zero. A lower triangular matrix is just the opposite. The matrix is still a square matrix, all the entries of a lower triangular matrix above the main diagonal are zero, and the main diagonal entries and those below it may or may not be zero. Here are the general forms of an upper and a lower triangular matrix.

    U = [ u_11  u_12  u_13  ...  u_1n ]
        [  0    u_22  u_23  ...  u_2n ]
        [ ...                     ... ]
        [  0     0     0    ...  u_nn ]     Upper Triangular

    L = [ l_11   0     0    ...   0   ]
        [ l_21  l_22   0    ...   0   ]
        [ ...                    ...  ]
        [ l_n1  l_n2  l_n3  ...  l_nn ]     Lower Triangular

In these forms the u_ij and l_ij may or may not be zero.
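Here is that quick check, a short NumPy sketch (our own, with made-up diagonal entries) of Theorem 1 and the powers formula:

```python
import numpy as np

d = np.array([5.0, -2.0, 4.0])     # nonzero main diagonal entries (our choice)
D = np.diag(d)

# Theorem 1: the inverse is the diagonal matrix of reciprocals.
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / d)))              # True

# Powers: D^k just raises each diagonal entry to the k-th power.
print(np.allclose(np.linalg.matrix_power(D, 3), np.diag(d ** 3)))   # True
```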

If we do not care whether the matrix is upper or lower triangular we will generally just call it triangular. Note as well that a diagonal matrix can be thought of as both an upper triangular matrix and a lower triangular matrix.

Here's a nice theorem about the invertibility of a triangular matrix.

Theorem 2 If A is a triangular matrix with main diagonal entries a_11, a_22, ..., a_nn, then if one or more of the a_ii's are zero the matrix will be singular. On the other hand, if a_ii ≠ 0 for all i then the matrix is invertible.

Here is the outline of the proof.

Proof Outline: First assume that a_ii ≠ 0 for all i. In this case we can divide each row by a_ii (since it's not zero) and that will put a 1 in the main diagonal entry for each row. Now use the third row operation to eliminate all the non-zero entries above the main diagonal entry for an upper triangular matrix, or below it for a lower triangular matrix. When done with these operations we will have reduced A to the identity matrix. Therefore, in this case A is row equivalent to the identity and so must be invertible.

Now assume that at least one of the a_ii is zero. In this case we can't get a 1 in the main diagonal entry just by dividing by a_ii as we did in the first case. Now, for a second let's suppose we have an upper triangular matrix. In this case we could use the third row operation, using one of the rows above this one, to get a 1 into the main diagonal entry; however, this will also put non-zero entries into the entries to the left of this as well. In other words, we're not going to be able to reduce A to the identity matrix. The same type of problem will arise if we've got a lower triangular matrix. In this case, A will not be row equivalent to the identity and so will be singular.

Here is another set of theorems about triangular matrices that we aren't going to prove.

Theorem 3
(a) The product of lower triangular matrices will be a lower triangular matrix.
(b) The product of upper triangular matrices will be an upper triangular matrix.
(c) The inverse of an invertible lower triangular matrix will be a lower triangular matrix.
(d) The inverse of an invertible upper triangular matrix will be an upper triangular matrix.

The proofs of these will pretty much follow from how products and inverses are found and so will be left to you to verify.
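Theorem 3 is also easy to check numerically. Here is a short sketch (the matrices are our own examples, not from the notes) confirming parts (b) and (d) for a pair of upper triangular matrices:

```python
import numpy as np

# Two invertible upper triangular matrices (nonzero main diagonals).
U1 = np.array([[2.0, 1.0, 4.0],
               [0.0, 3.0, 5.0],
               [0.0, 0.0, 1.0]])
U2 = np.array([[1.0, 2.0, 0.0],
               [0.0, 4.0, 1.0],
               [0.0, 0.0, 2.0]])

print(U1 @ U2)            # still upper triangular, illustrating part (b)
print(np.linalg.inv(U1))  # still upper triangular, illustrating part (d)
```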

The final kind of matrix that we want to look at in this section is that of a symmetric matrix. In fact we've already seen these in a previous section; we just didn't have the space to investigate them in more detail there, so we're going to do it here.

For completeness' sake we'll give the definition here again. Suppose that A is an n x m matrix; then A will be called symmetric if A = A^T.

Note that the first requirement for a matrix to be symmetric is that the matrix must be square. Since the size of A^T will be m x n there is no way A and A^T can be equal if A is not square, since they won't have the same size.

Example 1 The following matrices are all symmetric.

    A = [ ··· ]      B = [ ··· ]      C = [ ··· ]

We'll leave it to you to compute the transposes of each of these and verify that they are in fact symmetric.

Notice with the second matrix (B) above that you can always quickly identify a symmetric matrix by looking at the diagonals off the main diagonal. The diagonals right above and below the main diagonal consist of identical entries. Likewise, the diagonals two above and below the main diagonal consist of identical entries, and the diagonals that are three above and below the main diagonal are identical as well. This idea we see in the second matrix above will be true in any symmetric matrix.

Here is a nice set of facts about arithmetic with symmetric matrices.

Theorem 4 If A and B are symmetric matrices of the same size and c is any scalar then,
(a) A ± B is symmetric.
(b) cA is symmetric.
(c) A^T is symmetric.

Note that the product of two symmetric matrices is probably not symmetric. To see why this is, consider the following. Suppose both A and B are symmetric matrices of the same size; then,

    (AB)^T = B^T A^T = BA

Notice that we used one of the properties of transposes we found earlier in the first step and the fact that A and B are symmetric in the last step. So what this tells us is that unless A and B commute we won't have (AB)^T = AB and the product won't be symmetric. If A and B do commute then the product will be symmetric.

Now, if A is any n x m matrix then, because A^T will have size m x n, both A A^T and A^T A will be defined and in fact will be square matrices, where A A^T has size n x n and A^T A has size m x m.

Here are a couple of quick facts about symmetric matrices.

Theorem 5
(a) For any matrix A both A A^T and A^T A are symmetric.
(b) If A is an invertible symmetric matrix then A^{-1} is symmetric.
(c) If A is invertible then A A^T and A^T A are both invertible.

Proof:
(a) We'll show that A A^T is symmetric and leave the other to you to verify. To show that A A^T is symmetric we'll need to show that (A A^T)^T = A A^T. This is actually quite simple if we recall the various properties of transpose matrices that we've got.

    (A A^T)^T = (A^T)^T A^T = A A^T

(b) In this case all we need is a theorem from a previous section to show that (A^{-1})^T = A^{-1}. Here is the work,

    (A^{-1})^T = (A^T)^{-1} = A^{-1}

(c) If A is invertible then we also know that A^T is invertible, and since the product of invertible matrices is invertible, both A A^T and A^T A are invertible.

Let's finish this section with an example or two illustrating the results of some of the theorems above.

Example 2 Given the following matrices compute the indicated quantities.

    A = [ ··· ]   B = [ ··· ]   C = [ ··· ]   D = [ ··· ]   E = [ ··· ]

(a) AB
(b) C^{-1}
(c) D D^T
(d) E^{-1}

Solution
(a) AB
There really isn't much to do here other than the multiplication, and we'll leave it to you to verify the actual multiplication.

    AB = [ ··· ]

So, as suggested by Theorem 3 the product of upper triangular matrices is in fact an upper triangular matrix.

(b) C^{-1}
Here's the work for finding C^{-1}.

    [ C | I ]  ->  (row operations)  ->  [ I | C^{-1} ]

So, again as suggested by Theorem 3, the inverse of a lower triangular matrix is also a lower triangular matrix.

(c) D D^T
Here are the transpose and the product.

    D^T = [ ··· ]        D D^T = [ ··· ]

So, as suggested by Theorem 5, this product is symmetric even though D was not symmetric (or square for that matter).

(d) E^{-1}
Here is the work for finding E^{-1}. We form the new matrix [ E | I ] and row reduce until the first three columns are I.

Finishing the row reduction gives us the inverse,

    E^{-1} = [ ··· ]

and, as suggested by Theorem 5, the inverse is symmetric.
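Since this section leaned on Theorem 5 a couple of times, here is a quick numerical illustration of parts (a) and (b), a sketch with matrices of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [3.0, 5.0, 7.0]])          # a 2 x 3 matrix, not even square

AAT = A @ A.T                            # 2 x 2
ATA = A.T @ A                            # 3 x 3
print(np.allclose(AAT, AAT.T))           # True: A A^T is symmetric (part (a))
print(np.allclose(ATA, ATA.T))           # True: A^T A is symmetric (part (a))

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])               # an invertible symmetric matrix
Sinv = np.linalg.inv(S)
print(np.allclose(Sinv, Sinv.T))         # True: the inverse is symmetric (part (b))
```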

LU-Decomposition

In this section we're going to discuss a method for factoring a square matrix A into a product of a lower triangular matrix, L, and an upper triangular matrix, U. Such a factorization can be used to solve systems of equations, as we'll see in the next section when we revisit that topic.

Let's start the section out with a definition and a theorem.

Definition 1 If A is a square matrix and it can be factored as A = LU, where L is a lower triangular matrix and U is an upper triangular matrix, then we say that A has an LU-Decomposition of LU.

Theorem 1 If A is a square matrix and it can be reduced to a row-echelon form, U, without interchanging any rows, then A can be factored as A = LU where L is a lower triangular matrix.

We're not going to prove this theorem, but let's examine it in some detail and find a way of determining L. Let's start off by assuming that we've got a square matrix A and that we are able to reduce it to row-echelon form U without interchanging any rows. We know that each row operation that we used has a corresponding elementary matrix, so let's suppose that the elementary matrices corresponding to the row operations we used are E_1, E_2, ..., E_k.

We know from Theorem 4 in a previous section that multiplying these onto the left side of A, in the same order we applied the row operations, will be the same as actually applying the operations. So, this means that we've got,

    E_k ··· E_2 E_1 A = U

We also know that elementary matrices are invertible, so let's multiply each side by the inverses, E_k^{-1}, ..., E_2^{-1}, E_1^{-1}, in that order to get,

    A = E_1^{-1} E_2^{-1} ··· E_k^{-1} U

Now, it can be shown that, provided we avoid interchanging rows, the elementary row operations that we needed to reduce A to U will all have corresponding elementary matrices that are lower triangular matrices. We also know from the previous section that inverses of lower triangular matrices are lower triangular matrices and products of lower triangular matrices are lower triangular matrices. In other words, L = E_1^{-1} E_2^{-1} ··· E_k^{-1} is a lower triangular matrix, and so using this we get the LU-Decomposition for A of A = LU.

Let's take a look at an example of this.

Example 1 Determine an LU-Decomposition for the following matrix.

    A = [  3  6  -9 ]
        [  2  5  -3 ]
        [ -4  1  10 ]

Solution
So, first let's go through the row operations to get this into row-echelon form, and remember that we aren't allowed to do any interchanging of rows. Also, we'll do this step by step so that we can

keep track of the row operations that we used, since we're going to need to write down the elementary matrices that are associated with them eventually.

    (1/3)R1:  [  1  2  -3 ]    R2 - 2R1:  [  1  2  -3 ]    R3 + 4R1:  [ 1  2  -3 ]
              [  2  5  -3 ]               [  0  1   3 ]               [ 0  1   3 ]
              [ -4  1  10 ]               [ -4  1  10 ]               [ 0  9  -2 ]

    R3 - 9R2:  [ 1  2  -3 ]    -(1/29)R3:  [ 1  2  -3 ]
               [ 0  1   3 ]                [ 0  1   3 ]
               [ 0  0 -29 ]                [ 0  0   1 ]

Okay so, we've got our hands on U.

    U = [ 1  2  -3 ]
        [ 0  1   3 ]
        [ 0  0   1 ]

Now we need to get L. This is going to take a little more work. We'll need the elementary matrices for each of these operations, or more precisely their inverses. Recall that we can get the elementary matrix for a particular row operation by applying that operation to the appropriately sized identity matrix (3 x 3 in this case). Also recall that the inverse matrix can be found by applying the inverse operation to the identity matrix. Here are the elementary matrices and their inverses for each of the operations above.

    (1/3)R1:   E_1 = [ 1/3 0 0 ]    E_1^{-1} = [ 3 0 0 ]
                     [  0  1 0 ]               [ 0 1 0 ]
                     [  0  0 1 ]               [ 0 0 1 ]

    R2 - 2R1:  E_2 = [  1 0 0 ]     E_2^{-1} = [ 1 0 0 ]
                     [ -2 1 0 ]                [ 2 1 0 ]
                     [  0 0 1 ]                [ 0 0 1 ]

    R3 + 4R1:  E_3 = [ 1 0 0 ]      E_3^{-1} = [  1 0 0 ]
                     [ 0 1 0 ]                 [  0 1 0 ]
                     [ 4 0 1 ]                 [ -4 0 1 ]

    R3 - 9R2:   E_4 = [ 1  0 0 ]    E_4^{-1} = [ 1 0 0 ]
                      [ 0  1 0 ]               [ 0 1 0 ]
                      [ 0 -9 1 ]               [ 0 9 1 ]

    -(1/29)R3:  E_5 = [ 1 0   0   ]  E_5^{-1} = [ 1 0   0 ]
                      [ 0 1   0   ]             [ 0 1   0 ]
                      [ 0 0 -1/29 ]             [ 0 0 -29 ]

Okay, we can now compute L.

    L = E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1} E_5^{-1} = [  3  0   0 ]
                                                       [  2  1   0 ]
                                                       [ -4  9 -29 ]

Finally, we can verify that we've gotten an LU-Decomposition with a quick computation.

    [  3  0   0 ] [ 1  2  -3 ]   [  3  6  -9 ]
    [  2  1   0 ] [ 0  1   3 ] = [  2  5  -3 ] = A
    [ -4  9 -29 ] [ 0  0   1 ]   [ -4  1  10 ]

So we did all the work correctly.

That was a lot of work to determine L. There is an easier way to do it however. Let's start off with a general L with * in place of the potentially non-zero terms.

    L = [ *  0  0 ]
        [ *  *  0 ]
        [ *  *  * ]

Let's start with the main diagonal and go back and look at the operations that were required to get 1's on the diagonal when we were computing U. To get a 1 in the first row we had to multiply that row by 1/3. We didn't need to do anything to get a 1 in the second row, but for the sake of argument let's say that we actually multiplied that row by 1. Finally, we multiplied the third row by -1/29 to get a 1 in the main diagonal entry in that row.

Next go back and look at the L that we had for this matrix. The main diagonal entries are 3, 1, and -29. In other words, they are the reciprocals of the numbers we used in computing U. This will always be the case. The main diagonal of L, using this idea, is then,

    L = [  3  0   0 ]
        [  *  1   0 ]
        [  *  * -29 ]

Now, let's take a look at the two entries under the 3 in the first column. Again go back to the operations used to find U and take a look at the operations we used to get zeroes in these two spots. To get a zero in the second row we added -2R1 onto R2, and to get a zero in the third row we added 4R1 onto R3.

Again, go back to the L we found and notice that these two entries are 2 and -4. They are the negatives of the multiples of the first row that we added onto that particular row to get that entry to be zero. Filling these in we now arrive at,

    L = [  3  0   0 ]
        [  2  1   0 ]
        [ -4  * -29 ]

Finally, in determining U we added -9R2 onto R3 to get the entry in the third row and second column to be zero, and in the L we found this entry is 9. Again, it's the negative of the multiple of the second row we used to make this entry zero. This gives us the final entry in L.

    L = [  3  0   0 ]
        [  2  1   0 ]
        [ -4  9 -29 ]

This process we just went through will always work in determining L for our LU-Decomposition, provided we follow the process above to find U. In fact that is the one drawback to this process. We need to find U using exactly the same steps we used in this example. In other words, multiply/divide the first row by an appropriate scalar to get a 1 in the first column, then zero out the entries below that one. Next, multiply/divide the second row by an appropriate scalar to get a 1 in the main diagonal entry of the second row and then zero out all the entries below this. Continue in this fashion until you've dealt with all the columns. This will sometimes lead to some messy fractions.

Let's take a look at another example, and this time we'll use the procedure outlined above to find L instead of dealing with all the elementary matrices.

Example 2 Determine an LU-Decomposition for the following matrix.

    B = [ ··· ]

Solution
So, we first need to reduce B to row-echelon form without using row interchanges. Also, if we're going to use the process outlined above to find L, we'll need to do the reduction in the same manner as the first example. Here is that work.

    (row operations as in Example 1: scale each row to get a 1 on its main diagonal, then zero out the entries below it)

So, U is,

    U = [ ··· ]

Now, let's get L. Again, we'll start with a general L, and the main diagonal entries will be the reciprocals of the scalars we needed to multiply each row by to get a one in the main diagonal entry. Then, for the remaining entries, go back to the process and look for the multiple that was needed to get a zero in that spot; the entry of L will be the negative of that multiple. This gives us our final L,

    L = [ ··· ]

As a final check we can always do a quick multiplication to verify that we do in fact get B from this factorization, and it turns out that we do. So, it looks like we did all the work correctly.

We'll leave this section by pointing out a couple of facts about LU-Decompositions.

First, given a random square matrix, A, the only way we can guarantee that A will have an LU-Decomposition is if we can reduce it to row-echelon form without interchanging any rows. If we do have to interchange rows then there is a good chance that the matrix will NOT have an LU-Decomposition.

Second, notice that every time we've talked about an LU-Decomposition of a matrix we've used the word "an" and not "the" LU-Decomposition. This choice of words is intentional. As the choice suggests, there is no single unique LU-Decomposition for A.

To see that LU-Decompositions are not unique go back to the first example. In that example we computed the following LU-Decomposition.

    [  3  6  -9 ]   [  3  0   0 ] [ 1  2  -3 ]
    [  2  5  -3 ] = [  2  1   0 ] [ 0  1   3 ]
    [ -4  1  10 ]   [ -4  9 -29 ] [ 0  0   1 ]

However, we've also got the following LU-Decomposition.

    [  3  6  -9 ]   [  3  0  0 ] [ 1  2  -3  ]
    [  2  5  -3 ] = [  2  1  0 ] [ 0  1   3  ]
    [ -4  1  10 ]   [ -4  9  1 ] [ 0  0  -29 ]

This is clearly an LU-Decomposition since the first matrix is lower triangular and the second is upper triangular, and you should verify that upon multiplying they do in fact give the shown matrix.

If you would like to see a further example of an LU-Decomposition worked out, there is an example in the next section.
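The "easier way" described above translates almost line for line into code. Here is a sketch (our own function; no row interchanges are attempted, so it simply fails on a zero pivot) that produces the same L and U as Example 1:

```python
import numpy as np

def lu_decomposition(A):
    """LU-Decomposition following the process in this section: U is the
    row-echelon form with 1's on its main diagonal, L is lower triangular."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        if U[k, k] == 0:
            raise ValueError("zero pivot: a row interchange would be needed")
        L[k, k] = U[k, k]            # reciprocal of the scalar used on row k
        U[k, :] /= U[k, k]           # put a 1 in the main diagonal entry
        for i in range(k + 1, n):
            L[i, k] = U[i, k]        # negative of the multiple added to row i
            U[i, :] -= U[i, k] * U[k, :]
    return L, U

A = np.array([[3.0, 6.0, -9.0],
              [2.0, 5.0, -3.0],
              [-4.0, 1.0, 10.0]])
L, U = lu_decomposition(A)
print(L)                        # [[ 3.  0.   0.] [ 2.  1.   0.] [-4.  9. -29.]]
print(U)                        # [[ 1.  2.  -3.] [ 0.  1.   3.] [ 0.  0.   1.]]
print(np.allclose(L @ U, A))    # True
```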

Systems Revisited

We opened up this chapter talking about systems of equations and we spent a couple of sections on them; then we moved away from them and haven't really talked much about them since. It's time to come back to systems and see how some of the ideas we've been talking about since then can be used to help us solve systems. We'll also take a quick look at a couple of other ideas about systems that we didn't look at earlier.

First let's recall that any system of n equations and m unknowns,

    a_11 x_1 + a_12 x_2 + ... + a_1m x_m = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2m x_m = b_2
        ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = b_n

can be written in matrix form as follows.

    [ a_11  a_12  ...  a_1m ] [ x_1 ]   [ b_1 ]
    [ a_21  a_22  ...  a_2m ] [ x_2 ] = [ b_2 ]
    [  ...                  ] [ ... ]   [ ... ]
    [ a_n1  a_n2  ...  a_nm ] [ x_m ]   [ b_n ]

    A x = b

In the matrix form, A is called the coefficient matrix and each row contains the coefficients of the corresponding equation, x is a column matrix that contains all the unknowns from the system of equations, and finally b is a column matrix containing the constants on the right of the equal sign.

Now, let's see how inverses can be used to solve systems. First, we'll need to assume that the coefficient matrix is a square n x n matrix. In other words, there are the same number of equations as unknowns in our system. Let's also assume that A is invertible. In this case we actually saw, in the proof of a theorem in the section on finding inverses, that the solution to A x = b is unique (i.e. only a single solution exists) and that it's given by,

    x = A^{-1} b

So, if we've got the inverse of the coefficient matrix in hand (not always an easy thing to find of course) we can get the solution based on a quick matrix multiplication. Let's see an example of this.

Example 1 Use the inverse of the coefficient matrix to solve the following system.

    ( ··· three equations in x_1, x_2, x_3 ··· )

Solution
Okay, let's first write down the matrix form of this system.

    A x = b

Now, we found the inverse of the coefficient matrix back in Example 2 of the Finding Inverses section, so here are the coefficient matrix and its inverse.

    A = [ ··· ]        A^{-1} = [ ··· ]

The solution to the system in matrix form is then,

    x = A^{-1} b = [ ··· ]

Now, since each of the entries of x is one of the unknowns in the original system above, the solution to the original system is then,

    x_1 = ···    x_2 = ···    x_3 = ···

So, provided we have a square coefficient matrix that is invertible, and we just happen to have our hands on the inverse of the coefficient matrix, we can find the solution to the system fairly easily.

Next, let's look at how the topic of the previous section (LU-Decompositions) can be used to solve systems of equations. First let's recall how LU-Decompositions work. If we have a square matrix, A, (so we'll again be working with the same number of equations as unknowns) then, if we can reduce it to row-echelon form without using any row interchanges, we can write it as A = LU where L is a lower triangular matrix and U is an upper triangular matrix.

So, let's start with a system A x = b where the coefficient matrix, A, is square and has an LU-Decomposition of A = LU. Now, substitute this into the system for A to get,

    LU x = b

Next, let's just take a look at U x. This will be a column matrix, and let's call it y. So, we've got U x = y. So, just what does this do for us? Well, let's write the system in the following manner.

    L y = b        where        U x = y

As we'll see, it's very easy to solve L y = b for y, and once we know y it will be very easy to solve U x = y for x, which will be the solution to the original system. It's probably easiest to see how this method works with an example, so let's work one.

Example 2 Use the LU-Decomposition method to find the solution to the following system of equations.

    3x_1 + 6x_2 - 9x_3 = 0
    2x_1 + 5x_2 - 3x_3 = -4
    -4x_1 + x_2 + 10x_3 = 3

Solution
First let's write down the matrix form of the system.

    [  3  6  -9 ] [ x_1 ]   [  0 ]
    [  2  5  -3 ] [ x_2 ] = [ -4 ]
    [ -4  1  10 ] [ x_3 ]   [  3 ]

Now, we found an LU-Decomposition for this coefficient matrix in Example 1 of the previous section. From that example we see that,

    [  3  6  -9 ]   [  3  0   0 ] [ 1  2  -3 ]
    [  2  5  -3 ] = [  2  1   0 ] [ 0  1   3 ]
    [ -4  1  10 ]   [ -4  9 -29 ] [ 0  0   1 ]

According to the method outlined above, this means that we actually need to solve the following two systems, in order.

    [  3  0   0 ] [ y_1 ]   [  0 ]          [ 1  2  -3 ] [ x_1 ]   [ y_1 ]
    [  2  1   0 ] [ y_2 ] = [ -4 ]   then   [ 0  1   3 ] [ x_2 ] = [ y_2 ]
    [ -4  9 -29 ] [ y_3 ]   [  3 ]          [ 0  0   1 ] [ x_3 ]   [ y_3 ]

So, let's get started on the first one. Notice that we don't really need to do anything other than write down the equations that are associated with this system and solve using forward substitution. The first equation will give us y_1 for free, and once we know that, the second equation will give us y_2. Finally, with these two values in hand the third equation will give us y_3. Here is that work.

    3y_1 = 0                            y_1 = 0
    2y_1 + y_2 = -4                     y_2 = -4
    -4y_1 + 9y_2 - 29y_3 = 3            y_3 = -39/29

The second system that we need to solve is then,

    [ 1  2  -3 ] [ x_1 ]   [   0    ]
    [ 0  1   3 ] [ x_2 ] = [  -4    ]
    [ 0  0   1 ] [ x_3 ]   [ -39/29 ]

Again, notice that to solve this all we need to do is write down the equations and do back substitution. The third equation will give us x_3 for free, and plugging this into the second

equation will give us x_2, etc. Here's the work for this.

    x_1 + 2x_2 - 3x_3 = 0            x_1 = -119/29
    x_2 + 3x_3 = -4                  x_2 = 1/29
    x_3 = -39/29                     x_3 = -39/29

The solution to the original system is then shown above. Notice that while the final answers were a little messy the work was nothing more than a little arithmetic and wasn't terribly difficult.

Let's work one more of these, since there's a little more work involved in this than in the inverse matrix method of solving a system.

Example 3 Use the LU-Decomposition method to find a solution to the following system of equations.

    ( ··· three equations in x_1, x_2, x_3 ··· )

Solution
Once again, let's first get the matrix form of the system, A x = b. Now let's get an LU-Decomposition for the coefficient matrix. Here's the work that will reduce it to row-echelon form; remember that the result of this will be U.

    ( ··· row operations ··· )

So, U is then,

    U = [ ··· ]

Now, to get L, remember that we start off with a general lower triangular matrix, and on the main diagonal we put the reciprocal of the scalar used in the work above to get a one in that spot. Then, in the entries below the main diagonal, we put the negative of the multiple used to get a zero

in that spot above. L is then,

    L = [ ··· ]

We'll leave it to you to verify that A = LU. Now let's solve the system. This will mean we need to solve the following two systems,

    L y = b        then        U x = y

Here's the work for the first system: writing down the equations and forward substituting gives us y_1, then y_2, and finally y_3. Now let's get the actual solution by solving the second system, U x = y: writing down the equations and back substituting gives us x_3, then x_2, and finally x_1. So there's the solution to this system.

Before moving on to the next topic of this section we should probably address why we even bothered with this method. It seems like a lot of work to solve a system of equations, and when solving systems by hand it can be a lot of work. However, because the method for finding L and U is a fairly straightforward process, and once those are found the method for solving the system is also very straightforward, this is a perfect method for use in computer systems when programming the solution to systems. So, while it seems like a lot of work, it is a method that is very easy to program and so is a very useful method (a short sketch of the two substitution steps is given below).
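Here is that sketch: forward substitution down the rows of L, then back substitution up the rows of U (the function is ours; the L, U and b are the ones from Example 2):

```python
import numpy as np

def solve_via_lu(L, U, b):
    """Solve L y = b by forward substitution, then U x = y by back substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                   # top to bottom: y_1, then y_2, ...
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):       # bottom to top: x_n, then x_(n-1), ...
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[3.0, 0.0, 0.0], [2.0, 1.0, 0.0], [-4.0, 9.0, -29.0]])
U = np.array([[1.0, 2.0, -3.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
b = np.array([0.0, -4.0, 3.0])
print(solve_via_lu(L, U, b))   # [-119/29, 1/29, -39/29], matching Example 2
```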

The remaining topics in this section don't really rely on the previous sections the way the first part of this section has. Instead, we just need to look at a couple of ideas about solving systems that we didn't have room to put into the section on solving systems of equations.

First we want to take a look at the following scenario. Suppose that we need to solve a system of equations, only there are two or more sets of the b_i's that we need to look at. For instance, suppose we wanted to solve the systems of equations

    A x = b_1        A x = b_2        ...        A x = b_k

Again, the coefficient matrix is the same for all these systems and the only thing that is different is the b_i's. We could use any of the methods looked at so far to solve these systems. However, each of the methods we've looked at so far would require us to do each system individually, and that could potentially lead to a lot of work.

There is one method, however, that can be easily extended to solve multiple systems simultaneously provided they all have the same coefficient matrix. In fact the method is the very first one we looked at. In that method we solved systems by adding the column matrix b onto the coefficient matrix and then reducing it to row-echelon or reduced row-echelon form. For the systems above this would require working with the following augmented matrices.

    [ A | b_1 ]    [ A | b_2 ]    ...    [ A | b_k ]

However, if you think about it, almost the whole reduction process revolves around the columns in the augmented matrix that are associated with A and not the b column. So, instead of doing these individually, let's add all of them onto the coefficient matrix as follows.

    [ A | b_1  b_2  ...  b_k ]

All we need to do is reduce this to reduced row-echelon form and we'll have the answer to each of the systems. Let's take a look at an example of this.

Example 4 Find the solution to each of the following systems.

    ( ··· two systems of three equations sharing the same left-hand sides ··· )

Solution
So, we've got two systems with the same coefficient matrix, so let's form the matrix [ A | b_1 b_2 ]. Note that we'll leave the vertical bars in to make sure we remember the last two columns are really b's for the systems we're solving. Now, we just need to reduce this to reduced row-echelon form. Here is the work for that.

    ( ··· row reduction ··· )

Okay, the solution to the first system is in the fourth column, since that is the b for the first system, and likewise the solution to the second system is in the fifth column. Therefore, the solution to the first system is,

    x_1 = ···    x_2 = ···    x_3 = ···

and the solution to the second system is,

    x_1 = ···    x_2 = ···    x_3 = ···

The remaining topic to discuss in this section gives us a method for answering the following question. Given an n x m matrix A, determine all the n x 1 matrices, b, for which A x = b is consistent, that is, for which A x = b has at least one solution.

This is a question that can arise fairly often and so we should take a look at how to answer it. Of course, if A is invertible (and hence square) the answer is that A x = b is consistent for all b, as we saw in an earlier section. However, what if A isn't square or isn't invertible? The method we're going to look at doesn't really care about whether or not A is invertible, but it really should be pointed out that we do know the answer for invertible matrices.

It's easiest to see how these work with an example, so let's jump into one.

Example 5 Determine the conditions (if any) on b_1, b_2, and b_3 in order for the following system to be consistent.

    ( ··· three equations with right-hand sides b_1, b_2, b_3 ··· )

Solution
Okay, we're going to use the augmented matrix method we first looked at and reduce the matrix down to reduced row-echelon form. The final form will be a little messy because of the presence of the b_i's, but other than that the work is identical to what we've been doing to this point.

Here is the work.

    ( ··· row reduction carrying the b_i's along ··· )

Okay, just what does this all mean? Well, go back to equations and let's see what we've got: each of x_1, x_2, and x_3 comes out equal to some combination of b_1, b_2, and b_3.

So, what this says is that no matter what our choice of b_1, b_2, and b_3 we can find a solution using the general solution above, and in fact there will always be exactly one solution to the system for a given choice of b. Therefore, there are no conditions on b_1, b_2, and b_3 in order for the system to be consistent.

Note that the result of the previous example shouldn't be too surprising given that the coefficient matrix is invertible. Now, we need to see what happens if the coefficient matrix is singular (i.e. not invertible).

Example 6 Determine the conditions (if any) on b_1, b_2, and b_3 in order for the following system to be consistent.

    ( ··· three equations with right-hand sides b_1, b_2, b_3 ··· )

Solution
We'll do this one in the same manner as the previous one. So, convert to an augmented matrix and start the reduction process. As we'll see in this case, we won't need to go all the way to reduced row-echelon form to get the answer however.

    ( ··· row reduction ··· )

Okay, let's stop here and see what we've got. The last row corresponds to the following equation.

    0 = b_3 - 7b_2 - 9b_1

If the right side of this equation is NOT zero then this equation will not make any sense, and so the system won't have a solution. If, however, it is zero then this equation will not be a problem, and since we can take the first two rows and finish out the process to find a solution for any given values of b_1 and b_2, we'll have a solution. This then gives us the condition that we're looking for. In order for the system to have a solution, and hence be consistent, we must have

    b_3 = 7b_2 + 9b_1
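The "several right-hand sides at once" idea from earlier in this section is easy to experiment with numerically. Here is a short sketch (the matrix and right-hand sides are our own choices, not Example 4's) of solving two systems with the same coefficient matrix in one shot:

```python
import numpy as np

A = np.array([[3.0, 6.0, -9.0],
              [2.0, 5.0, -3.0],
              [-4.0, 1.0, 10.0]])        # one invertible coefficient matrix

B = np.array([[0.0, 1.0],
              [-4.0, 2.0],
              [3.0, -1.0]])              # b_1 and b_2 tacked on as columns

X = np.linalg.solve(A, B)                # both systems solved at once
print(np.allclose(A @ X, B))             # True: column j of X solves A x = b_j
```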

Determinants

Introduction

By this point in your mathematical career you should have run across functions. The functions that you've probably seen to this point have had the form f(x), where x is a real number and the output of the function is also a real number. Some examples of functions are f(x) = x^2 and f(x) = cos(x) sin(x).

Not all functions, however, need to take a real number as an argument. For instance, we could have a function f(X) that takes a matrix X and outputs a real number. In this chapter we are going to be looking at one such function, the determinant function. The determinant function is a function that will associate a real number with a square matrix.

The determinant function is a function that we won't be seeing all that often in the rest of this course, but it will show up on occasion.

Here is a listing of the topics in this chapter.

The Determinant Function: We will give the formal definition of the determinant in this section. We'll also give formulas for computing determinants of 2 x 2 and 3 x 3 matrices.

Properties of Determinants: Here we will take a look at quite a few properties of the determinant function. Included are formulas for determinants of triangular matrices.

The Method of Cofactors: In this section we'll take a look at the first of two methods for computing determinants of general matrices.

Using Row Reduction to Find Determinants: Here we will take a look at the second method for computing determinants in general.

Cramer's Rule: We will take a look at yet another method for solving systems. This method will involve the use of determinants.

The Determinant Function

We'll start off the chapter by defining the determinant function. This is not such an easy thing, however, as it involves some ideas and notation that you probably haven't run across to this point. So, before we actually define the determinant function we need to get some preliminaries out of the way.

First, a permutation of the set of integers {1, 2, ..., n} is an arrangement of all the integers in the list without omissions or repetitions. A permutation of {1, 2, ..., n} will typically be denoted by (i_1, i_2, ..., i_n), where i_1 is the first number in the permutation, i_2 is the second number in the permutation, etc.

Example 1 List all permutations of {1, 2}.

Solution
This one isn't too bad because there are only two integers in the list. We need to come up with all the possible ways to arrange these two numbers. Here they are.

    (1, 2)    (2, 1)

Example 2 List all the permutations of {1, 2, 3}.

Solution
This one is a little harder to do, but still isn't too bad. We need all the arrangements of these three numbers in which no number is repeated or omitted. Here they are.

    (1, 2, 3)    (1, 3, 2)    (2, 1, 3)    (2, 3, 1)    (3, 1, 2)    (3, 2, 1)

From this point on it can be somewhat difficult to find permutations for lists of numbers with more than 3 numbers in them. One way to make sure that you get all of them is to write down a permutation tree. At the top we list all the numbers in the list, and from each top number we branch out with each of the remaining numbers in the list. At the second level we again branch out with each of the numbers from the list not yet written down along that branch. Then each branch will represent a permutation of the given list of numbers.

    (permutation tree for {1, 2, 3})

As you can see, the number of permutations for a list will grow quickly as we add numbers to the list. In fact it can be shown that there are n! permutations of the list {1, 2, ..., n}, or of any list containing n distinct numbers, but we're going to be working with {1, 2, ..., n} so that's the one

we'll reference. So, the list {1, 2, 3, 4} will have 4! = (4)(3)(2)(1) = 24 permutations, the list {1, 2, 3, 4, 5} will have 5! = (5)(4)(3)(2)(1) = 120 permutations, etc.

Next we need to discuss inversions in a permutation. An inversion will occur in the permutation (i_1, i_2, ..., i_n) whenever a larger number precedes a smaller number. Note as well, we don't mean that the smaller number is immediately to the right of the larger number, but anywhere to the right of the larger number.

Example 3 Determine the number of inversions in each of the following permutations.
(a) (3, 1, 4, 2)
(b) (1, 2, 4, 3)
(c) (4, 3, 2, 1)
(d) (1, 2, 3, 4, 5)
(e) (2, 5, 4, 1, 3)

Solution
(a) (3, 1, 4, 2)
Okay, to count the number of inversions we will start at the leftmost number and count the number of numbers to its right that are smaller. We then move to the second number and do the same thing. We continue in this fashion until we get to the end. The total number of inversions is then the sum of all of these.

We'll do this first one in detail and then do the remaining ones much quicker. We'll mark the number we're looking at and to the side give the number of inversions for that particular number.

    (3, 1, 4, 2)    2 inversions for the 3
    (3, 1, 4, 2)    0 inversions for the 1
    (3, 1, 4, 2)    1 inversion for the 4

In the first case there are two numbers to the right of 3 that are smaller than 3, so there are two inversions there. In the second case we're looking at the smallest number in the list, and so there won't be any inversions there. Then with 4 there is one number to the right that is smaller than 4, and so we pick up another inversion. There is no reason to look at the last number in the permutation since there are no numbers to the right of it and so it won't introduce any inversions.

The permutation (3, 1, 4, 2) has a total of 3 inversions.

(b) (1, 2, 4, 3)
We'll do this one much quicker. There are 0 + 0 + 1 = 1 inversions in (1, 2, 4, 3). Note that each number in the sum above represents the number of inversions for the number in that position in the permutation.

(c) (4, 3, 2, 1)
There are 3 + 2 + 1 = 6 inversions in (4, 3, 2, 1).

(d) (1, 2, 3, 4, 5)
There are no inversions in (1, 2, 3, 4, 5).

(e) (2, 5, 4, 1, 3)
There are 1 + 3 + 2 + 0 = 6 inversions in (2, 5, 4, 1, 3).

Next, a permutation is called even if the number of inversions is even and odd if the number of inversions is odd.

Example 4 Classify as even or odd all the permutations of the following lists.
(a) {1, 2}
(b) {1, 2, 3}

Solution
(a) Here's a table giving all the permutations, the number of inversions in each and the classification.

    Permutation    # Inversions    Classification
    (1, 2)         0               even
    (2, 1)         1               odd

(b) We'll do the same thing here. We'll need these results later in the section.

    Permutation    # Inversions    Classification
    (1, 2, 3)      0               even
    (1, 3, 2)      1               odd
    (2, 1, 3)      1               odd
    (2, 3, 1)      2               even
    (3, 1, 2)      2               even
    (3, 2, 1)      3               odd

Alright, let's move back into matrices. We still have some definitions to get out of the way before we define the determinant function, but at least we're back dealing with matrices.

Suppose that we have an n x n matrix, A; then an elementary product from this matrix will be a product of n entries from A, no two of which can come from the same row or column.

Example 5 Find all the elementary products for,
(a) a 2 x 2 matrix
(b) a 3 x 3 matrix.

Solution
(a) a 2 x 2 matrix.
Okay, let's first write down the general 2 x 2 matrix.

    A = [ a_11  a_12 ]
        [ a_21  a_22 ]

Each elementary product will contain two terms, and since each term must come from a different row we know that each elementary product must have the form,

    a_1_  a_2_

All we need to do is fill in the column subscripts, and remember in doing so that they must come from different columns. There are really only two possible ways to fill in the blanks in the product above. The two ways of filling in the blanks are (1, 2) and (2, 1), and yes we did mean to use the permutation notation there, since that is exactly what we need. We will fill in the blanks with all the possible permutations of the list of column numbers, {1, 2} in this case.

So, the elementary products for a 2 x 2 matrix are

    a_11 a_22        a_12 a_21

(b) a 3 x 3 matrix.
Again, let's start off with a general 3 x 3 matrix for reference purposes.

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ]

Each of the elementary products in this case will involve three terms, and again since they must all come from different rows we can again write down the form they must take.

    a_1_  a_2_  a_3_

Again, each of the column subscripts will need to come from a different column, and like the 2 x 2 case we can get all the possible choices for these by filling in the blanks with all the possible permutations of {1, 2, 3}.

So, the elementary products of the 3 x 3 matrix are,

    a_11 a_22 a_33    a_11 a_23 a_32    a_12 a_21 a_33
    a_12 a_23 a_31    a_13 a_21 a_32    a_13 a_22 a_31

A general n x n matrix A will have n! elementary products of the form

    a_{1 i_1} a_{2 i_2} ··· a_{n i_n}

where (i_1, i_2, ..., i_n) ranges over all the permutations of {1, 2, ..., n}.

We can now take care of the final preliminary definition that we need for the determinant function. A signed elementary product from A will be the elementary product a_{1 i_1} a_{2 i_2} ··· a_{n i_n} multiplied by +1 if (i_1, i_2, ..., i_n) is an even permutation or multiplied by -1 if (i_1, i_2, ..., i_n) is an odd permutation.

Example 6 Find all the signed elementary products for,
(a) a 2 x 2 matrix
(b) a 3 x 3 matrix.

Solution
We listed out all the elementary products in Example 5 and we classified all the permutations used in them as even or odd in Example 4. So, all we need to do is put all this information together for each matrix.

(a) a 2 x 2 matrix.
Here are the signed elementary products for the 2 x 2 matrix.

    Elementary Product    Permutation       Signed Elementary Product
    a_11 a_22             (1, 2) - even      a_11 a_22
    a_12 a_21             (2, 1) - odd      -a_12 a_21

(b) a 3 x 3 matrix.
Here are the signed elementary products for the 3 x 3 matrix.

    Elementary Product    Permutation          Signed Elementary Product
    a_11 a_22 a_33        (1, 2, 3) - even      a_11 a_22 a_33
    a_11 a_23 a_32        (1, 3, 2) - odd      -a_11 a_23 a_32

    a_12 a_21 a_33        (2, 1, 3) - odd      -a_12 a_21 a_33
    a_12 a_23 a_31        (2, 3, 1) - even      a_12 a_23 a_31
    a_13 a_21 a_32        (3, 1, 2) - even      a_13 a_21 a_32
    a_13 a_22 a_31        (3, 2, 1) - odd      -a_13 a_22 a_31

Okay, we can now give the definition of the determinant function.

Definition 1 If A is a square matrix then the determinant function is denoted by det, and det(A) is defined to be the sum of all the signed elementary products of A.

Note that often we will call the number det(A) the determinant of A. Also, there is some alternate notation that is sometimes used for determinants. We will sometimes denote determinants as det(A) = |A|, and this is most often done with the actual matrix instead of the letter representing the matrix. For instance, for a 2 x 2 matrix A we will use any of the following to denote the determinant,

    det(A) = |A| = | a_11  a_12 |
                   | a_21  a_22 |

So, now that we have the definition of the determinant function in hand we can actually start writing down some formulas. We'll give the formulas for 2 x 2 and 3 x 3 matrices only, because for any matrix larger than that the formula becomes very long and messy, and at those sizes there are alternate methods for computing determinants that will be easier.

So, with that said, we've got all the signed elementary products for 2 x 2 and 3 x 3 matrices listed in Example 6, so let's write down the determinant function for these matrices.

First the determinant function for a 2 x 2 matrix.

    det(A) = | a_11  a_12 | = a_11 a_22 - a_12 a_21
             | a_21  a_22 |

Now the determinant function for a 3 x 3 matrix.

    det(A) = | a_11  a_12  a_13 |
             | a_21  a_22  a_23 |
             | a_31  a_32  a_33 |

           = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32
             - a_13 a_22 a_31 - a_12 a_21 a_33 - a_11 a_23 a_32

Okay, the formula for a 2 x 2 matrix isn't too bad, but the formula for a 3 x 3 is messy and would not be fun to memorize. Fortunately, there is an easy way to quickly derive both of these formulas.

Before we give this quick trick to derive the formulas we should point out that what we're going to do ONLY works for 2 x 2 and 3 x 3 matrices. There is no corresponding trick for larger matrices!

Okay, let's start with a 2 x 2 matrix. Let's examine the determinant below.

    | a_11  a_12 |
    | a_21  a_22 |

Notice the two diagonals that we can sketch on this determinant. The diagonal that runs from left to right covers the positive elementary product in the formula. Likewise, the diagonal that runs from right to left covers the negative elementary product. So, for a 2 x 2 matrix all we need to do is write down the determinant, sketch in the diagonals, multiply along the diagonals, then add the product if the diagonal runs from left to right and subtract the product if the diagonal runs from right to left.

Now let's take a look at a 3 x 3 matrix. There is a similar trick that will work here, but in order to get it to work we'll first need to tack copies of the first two columns onto the right side of the determinant as shown below.

    | a_11  a_12  a_13 |  a_11  a_12
    | a_21  a_22  a_23 |  a_21  a_22
    | a_31  a_32  a_33 |  a_31  a_32

With the addition of the two extra columns we can see that we've got three diagonals running in each direction and that each will cover one of the elementary products for this matrix. Also, the diagonals that run from left to right cover the positive elementary products and those that run from right to left cover the negative elementary products. So, as with the 2 x 2 matrix, we can quickly write down the determinant function formula here by simply multiplying along each diagonal and then adding the result if the diagonal runs left to right or subtracting it if the diagonal runs right to left.

Let's take a quick look at a couple of examples with numbers just to make sure we can do these.

Example 7 Compute the determinant of each of the following matrices.
(a) A = [ ··· ]  (a 2 x 2 matrix)
(b) B = [ ··· ]  (a 3 x 3 matrix)
(c) C = [ ··· ]  (a 3 x 3 matrix)

Solution
(a)
We don't really need to sketch in the diagonals for 2 x 2 matrices. The determinant is simply the product of the diagonal running left to right minus the product of the diagonal running from right to left. The only thing we need to worry about is paying attention to minus signs. It is easy to make a mistake with minus signs in these computations if you aren't paying attention.

(b)
Okay, with this one we'll copy the two columns over and sketch in the diagonals to make sure we've got the idea of these down. Now, just remember to add products along the left to right diagonals and subtract products along the right to left diagonals. Doing so gives

    det(B) = 467

(c)
We'll do this one with a little less detail. We'll copy the columns but not bother to actually sketch in the diagonals this time. For this matrix the products along the three left to right diagonals exactly cancel the products along the three right to left diagonals, and so

    det(C) = 0

As this example has shown, determinants of matrices can be positive, negative or zero.

It is again worth noting that there are no such tricks for computing determinants for matrices larger than 3 x 3.

In the remainder of this chapter we'll take a look at some properties of determinants, two alternate methods for computing them that are not restricted by the size of the matrix as the two quick tricks we saw in this section were, and an application of determinants.
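The definition we've been building up in this section can also be coded up directly, inversions and all. Here is a sketch (our own brute-force function; fine for small matrices even though it does n! work):

```python
from itertools import permutations

def det_by_definition(A):
    """Sum of all the signed elementary products, exactly as in Definition 1."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # Count the inversions to classify the permutation as even or odd.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = 1 if inversions % 2 == 0 else -1
        product = 1
        for row in range(n):
            product *= A[row][perm[row]]   # one entry from each row and column
        total += sign * product
    return total

print(det_by_definition([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```

For a 2 x 2 this reproduces a_11 a_22 - a_12 a_21, and for a 3 x 3 it reproduces the six-term formula above.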

Properties of Determinants

In this section we'll be taking a look at some of the basic properties of determinants, and towards the end of this section we'll have a nice test for the invertibility of a matrix. In this section we'll give a fair number of theorems (and prove a few of them) as well as examples illustrating the theorems. Any proofs that are omitted are generally more involved than we want to get into in this class.

Most of the theorems in this section will not help us to actually compute determinants in general. Most of these theorems are really more about how the determinants of different matrices relate to each other. We will take a look at a couple of theorems that will help show us how to find determinants for some special kinds of matrices, but we'll have to wait until the next two sections to start looking at how to compute determinants in general.

All of the determinants that we'll be computing in the examples in this section will be of a 2 x 2 or a 3 x 3 matrix. If you need a refresher on how to compute determinants of these kinds of matrices check out the examples in the previous section. We won't actually be showing any of that work here in this section.

Let's start with the following theorem.

Theorem 1 Let A be an n x n matrix and c be a scalar. Then,

    det(cA) = c^n det(A)

Proof: This is a really simple proof. From the definition of the determinant function in the previous section we know that the determinant is the sum of all the signed elementary products for the matrix. So, for cA we will sum signed elementary products that are of the form,

    (c a_{1 i_1})(c a_{2 i_2}) ··· (c a_{n i_n}) = c^n a_{1 i_1} a_{2 i_2} ··· a_{n i_n}

Recall that for scalar multiplication we multiply all the entries by c, and so we'll have a c on each entry as shown above. Also, as shown, we can factor all n of the c's out to get what we've shown above. Note that a_{1 i_1} a_{2 i_2} ··· a_{n i_n} is the signed elementary product for A.

Now, if we add all the signed elementary products for cA we can factor the c^n that is on each term out of the sum, and what we're left with is the sum of all the signed elementary products of A; in other words, we're left with det(A). So, we're done.

Here's a quick example to verify the results of this theorem.

Example 1 For the given matrix below compute both det(A) and det(2A).

    A = [ ··· ]

Solution
We'll leave it to you to verify all the details of this problem. First the scalar multiple,

    2A = [ ··· ]

The determinants,

    det(A) = 45        det(2A) = 360 = 8(45) = 2^3 det(A)

Now, let's investigate the relationship between det(A), det(B) and det(A+B). We'll start with the following example.

Example 2 Compute det(A), det(B) and det(A+B) for the following matrices.

    A = [ ··· ]        B = [ ··· ]

Solution
Here are all the determinants.

    det(A) = 8        det(B) = 6        det(A+B) = 69

Notice here that for this example we have det(A+B) ≠ det(A) + det(B). In fact this will generally be the case.

There is a very special case where we will get equality for the sum of determinants, but it doesn't happen all that often. Here is the theorem detailing this special case.

Theorem 2 Suppose that A, B, and C are all n x n matrices and that they differ by only a row, say the k-th row. Let's further suppose that the k-th row of C can be found by adding the corresponding entries from the k-th rows of A and B. Then in this case we will have that

    det(C) = det(A) + det(B)

The same result will hold if we replace the word row with column above.

Here is an example of this theorem.

Example 3 Consider the following three matrices.

    A = [ ··· ]        B = [ ··· ]        C = [ ··· ]

First, notice that we can write C as,

a matrix whose second row is the sum of the second rows of A and B. All three matrices differ only in the second row, and the second row of C can be found by adding the corresponding entries from the second rows of A and B. We'll leave it to you to compute the three determinants and verify that det(C) = det(A) + det(B), just as Theorem 2 guarantees.

Next, let's look at the relationship between the determinants of matrices and their products.

Theorem 3 If A and B are matrices of the same size then

    det(AB) = det(A) det(B)

This theorem can be extended out to as many matrices as we want. For instance,

    det(ABC) = det(A) det(B) det(C)

Let's check out an example of this.

Example 4 For the given matrices compute det(A), det(B), and det(AB).

    A = [ ··· ]        B = [ ··· ]

Solution
Here's the product of the two matrices.

    AB = [ ··· ]

Here are the determinants.

    det(A) = 4        det(B) = 84        det(AB) = (4)(84) = det(A) det(B)

Here is a theorem relating the determinants of a matrix and its inverse (provided the matrix is invertible of course).

Theorem 4 Suppose that A is an invertible matrix; then,

    det(A^{-1}) = 1 / det(A)

Proof: The proof of this theorem is a direct result of the previous theorem. Since A is invertible we know that A A^{-1} = I. So take the determinant of both sides and then use the previous theorem on the left side.

    det(A A^{-1}) = det(A) det(A^{-1}) = det(I)

Now, all that we need is to know that det(I) = 1, which you can prove using Theorem 8 below. This gives,

    det(A) det(A^{-1}) = 1        so        det(A^{-1}) = 1 / det(A)

Here's a quick example illustrating this.

Example 5 For the given matrix compute det(A) and det(A^{-1}).

    A = [  8  9 ]
        [ -2  5 ]

Solution
We'll leave it to you to verify that A is invertible and that its inverse is,

    A^{-1} = (1/58) [  5  -9 ]
                    [  2   8 ]

Here are the determinants for both of these matrices.

    det(A) = 58        det(A^{-1}) = 1/58 = 1/det(A)

The next theorem that we want to take a look at is a nice test for the invertibility of matrices. A matrix that is invertible is often called non-singular and a matrix that is not invertible is often called singular.

Theorem 5 A square matrix A is invertible if and only if det(A) ≠ 0.

Before doing an example of this, let's talk a little bit about the phrase "if and only if" that appears in this theorem. That phrase means that this is kind of like a two-way street. This theorem, because of the "if and only if" phrase, says that if we know that A is invertible then we will have det(A) ≠ 0. If, on the other hand, we know that det(A) ≠ 0 then we will also know that A is invertible.

Most theorems presented in these notes are not two-way streets, so to speak. They only work one way. If, however, we do have a theorem that does work both ways, you will always be able to identify it by the phrase "if and only if". Now let's work an example to verify this theorem.

Example 6 Compute the determinants of the following two matrices.

    C = [ ··· ]        B = [ ··· ]

Solution
We determined the invertibility of both of these matrices in the section on Finding Inverses, so we already know what the answers should be (at some level) for the determinants. In that section we determined that C was invertible, and so by Theorem 5 we know that det(C) should be non-zero. We also determined that B was singular (i.e. not invertible), and so we know by Theorem 5 that det(B) should be zero. Sure enough, computing the two determinants gives det(B) = 0 and a non-zero value for det(C), exactly as Theorem 5 predicts.

Here is a theorem relating the determinants of a matrix and its transpose.

Theorem 6 If A is a square matrix then,

    det(A) = det(A^T)

Here is an example that verifies the results of this theorem.

Example 7 Compute det(A) and det(A^T) for the following matrix.

    A = [ ··· ]

Solution
We'll leave it to you to verify that

    det(A) = det(A^T)

There are a couple of special cases of matrices that we can quickly find the determinant for, so let's take care of those at this point.

Theorem 7 If A is a square matrix with a row or column of all zeroes then det(A) = 0, and so A will be singular.

Proof: The proof here is fairly straightforward. The determinant is the sum of all the signed elementary products, and each of these will have a factor from each row and a factor from each column. So, in particular, it will have a factor from the row or column of all zeroes and hence will have a factor of zero, making the whole product zero. All of the products are zero, and upon summing them up we will also get zero for the determinant.

Note that in the following example we don't need to worry about the size of the matrix now, since this theorem gives us a value for the determinant. You might want to check the 2 x 2 and 3 x 3 to verify that the determinants are in fact zero. You also might want to come back and verify the other after the next section, where we'll learn methods for computing determinants in general.

Example 8 Each of the following matrices is singular, since each has a row or column of all zeroes.

    A = [ ··· ]        B = [ ··· ]        C = [ ··· ]

It is actually very easy to compute the determinant of any triangular (and hence any diagonal) matrix. Here is the theorem that tells us how to do that.

Theorem 8 Suppose that A is an n x n triangular matrix; then,

    det(A) = a_11 a_22 ··· a_nn

So, what this theorem tells us is that the determinant of any triangular matrix (upper or lower) or any diagonal matrix is simply the product of the entries from the matrix's main diagonal.

We won't do a formal proof here. We'll just give a quick outline.

Proof Outline: Since we know that the determinant is the sum of the signed elementary products, and each elementary product has a factor from each row and a factor from each column, because of the triangular nature of the matrix the only elementary product that won't have at least one zero in it is a_11 a_22 ··· a_nn. All the others will have at least one zero in them. Hence the determinant of the matrix must be det(A) = a_11 a_22 ··· a_nn.

Let's take the determinant of a couple of triangular matrices. You should verify the 2 x 2 and 3 x 3 matrices, and after the next section come back and verify the other.

Example 9 Compute the determinant of each of the following matrices.

    A = [ ··· ]        B = [ ··· ]        C = [ ··· ]

Solution
In each case the determinant is just the product of the main diagonal entries; for instance det(A) = (5)(3)(4) = 60, and det(C) = 0 since one of C's main diagonal entries is zero.

We have one final theorem to give in this section. In the Finding Inverses section we gave a theorem that listed several equivalent statements. Because of Theorem 5 above we can add a statement to that theorem, so let's do that. Here is the improved theorem.

Theorem 9 If A is an n x n matrix then the following statements are equivalent.
(a) A is invertible.
(b) The only solution to the system A x = 0 is the trivial solution.
(c) A is row equivalent to I_n.
(d) A is expressible as a product of elementary matrices.
(e) A x = b has exactly one solution for every n x 1 matrix b.
(f) A x = b is consistent for every n x 1 matrix b.
(g) det(A) ≠ 0
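Several of the theorems in this section are easy to spot-check numerically. Here is a quick sketch (the matrices are our own) exercising Theorems 1, 3, 4 and 6:

```python
import numpy as np

A = np.array([[2.0, 1.0, 4.0],
              [3.0, 5.0, 0.0],
              [1.0, 2.0, 2.0]])          # det(A) = 18, so A is invertible
B = np.array([[1.0, 0.0, 2.0],
              [4.0, 1.0, 3.0],
              [2.0, 2.0, 1.0]])
c, n = 2.0, 3

print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))             # Theorem 1
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))) # Theorem 3
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))     # Theorem 4
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                      # Theorem 6
```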

The Method of Cofactors

In this section we're going to examine one of the two methods that we're going to be looking at for computing the determinant of a general matrix. We'll also see how some of the ideas we're going to look at in this section can be used to determine the inverse of an invertible matrix.

So, before we actually give the method of cofactors we need to get a couple of definitions taken care of.

Definition 1 If A is a square matrix then the minor of a_ij, denoted by M_ij, is the determinant of the submatrix that results from removing the i-th row and j-th column of A.

Definition 2 If A is a square matrix then the cofactor of a_ij, denoted by C_ij, is the number (-1)^{i+j} M_ij.

Let's take a look at computing some minors and cofactors.

Example 1 For the following matrix compute the cofactors C_12, C_24, and C_32.

    A = [ ··· ]  (a 4 x 4 matrix)

Solution
In order to compute the cofactors we'll first need the minor associated with each cofactor. Remember that in order to compute the minor we will remove the i-th row and j-th column of A. So, to compute M_12 (which we'll need for C_12) we'll need to compute the determinant of the 3 x 3 submatrix we get by removing the 1st row and 2nd column of A. We'll leave it to you to verify the determinant computation. Now we can get the cofactor.

    C_12 = (-1)^{1+2} M_12 = -M_12

Let's now move on to the second cofactor. Here is the work for the minor.

The cofactor in this case is,

    C_24 = (-1)^{2+4} M_24 = M_24

Here is the work for the final cofactor.

    C_32 = (-1)^{3+2} M_32 = -M_32

Notice that the cofactor is really just ±M_ij, depending upon i and j. If the subscripts of the cofactor add to an even number then we leave the minor alone (i.e. no "-" sign) when writing down the cofactor. Likewise, if the subscripts on the cofactor sum to an odd number then we add a "-" to the minor when writing down the cofactor.

We can use this fact to derive a table that will allow us to quickly determine whether or not we should add a "-" onto the minor or leave it alone when writing down the cofactor. Let's start with C_11. In this case the subscripts sum to an even number, and so we don't tack a minus sign onto the minor. Now, let's move along the first row. The next cofactor would then be C_12, and in this case the subscripts add to an odd number, so we tack a minus sign onto the minor. For the next cofactor, C_13, we would leave the minor alone, and for the next, C_14, we'd tack a minus sign on, etc.

As you can see from this work, if we start at the leftmost entry of the first row we have a "+" in front of the minor, and then as we move across the row the signs alternate. If you think about it, this will also happen as we move down the first column. In fact, this will happen as we move across any row and down any column. We can summarize this idea in the following sign matrix, which will tell us if we should leave the minor alone (i.e. tack on a "+") or change its sign (i.e. tack on a "-") when writing down the cofactor.

    [ +  -  +  -  ··· ]
    [ -  +  -  +  ··· ]
    [ +  -  +  -  ··· ]
    [ ···             ]
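The two definitions and the sign pattern are easy to express in code. Here is a small sketch (our own helper, with a sample 3 x 3 matrix rather than the 4 x 4 from Example 1):

```python
import numpy as np

def cofactor(A, i, j):
    """C_ij = (-1)**(i + j) * M_ij, with rows and columns numbered from 1."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)   # M_ij = det of what is left

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(cofactor(A, 1, 2))   # -(det of [[4, 6], [7, 10]]) = -(40 - 42) = 2
```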

Okay, we can now talk about how to use cofactors to compute the determinant of a general square matrix. In fact there are two ways we can use cofactors, as the following theorem shows.

Theorem 1 If A is an n x n matrix.
(a) Choose any row, say row i, then,
$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$$
(b) Choose any column, say column j, then,
$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}$$

What this theorem tells us is that if we pick any row, all we need to do is go across that row and multiply each entry by its cofactor, add all these products up, and we'll have the determinant for the matrix. It also says that we could do the same thing only instead of going across any row we could move down any column. The process of moving across a row or down a column is often called a cofactor expansion.

Let's work some examples of this so we can see it in action.

Example 2 For the following matrix compute the determinant using the given cofactor expansions.
$$A = \begin{pmatrix} 4 & 2 & 1 \\ -2 & -6 & 3 \\ -7 & 5 & 0 \end{pmatrix}$$
(a) Expand along the first row.
(b) Expand along the third row.
(c) Expand along the second column.

Solution
First, notice that according to the theorem we should get the same result in all three parts.

(a) Expand along the first row.
Here is the cofactor expansion in terms of symbols for this part.
$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$$
Now, let's plug in for all the quantities. We will just plug in for the entries. For the cofactors we'll write down the minor and a "+" or a "-" depending on which sign each minor needs. We'll determine these signs by going to our sign matrix above, starting at the first entry in the particular row/column we're expanding along, and then as we move along that row or column we'll write down the appropriate sign. Here is the work for this expansion.
$$\det(A) = (4)(+)\begin{vmatrix} -6 & 3 \\ 5 & 0 \end{vmatrix} + (2)(-)\begin{vmatrix} -2 & 3 \\ -7 & 0 \end{vmatrix} + (1)(+)\begin{vmatrix} -2 & -6 \\ -7 & 5 \end{vmatrix} = 4(-15) - 2(21) + 1(-52) = -154$$

We'll leave it to you to verify the determinant computations.

(b) Expand along the third row.
We'll do this one without all the explanations.
$$\det(A) = a_{31}C_{31} + a_{32}C_{32} + a_{33}C_{33} = (-7)(+)\begin{vmatrix} 2 & 1 \\ -6 & 3 \end{vmatrix} + (5)(-)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix} + (0)(+)\begin{vmatrix} 4 & 2 \\ -2 & -6 \end{vmatrix} = -7(12) - 5(14) + 0 = -154$$
So, the same answer as the first part, which is good since that was supposed to happen. Notice that the signs for the cofactors in this case were the same as the signs in the first case. This is because the first and third rows of our sign matrix are identical. Also, notice that we didn't really need to compute the third cofactor since the third entry was zero. We did it here just to get one more example of a cofactor into the notes.

(c) Expand along the second column.
Let's take a look at the final expansion. In this one we're going down a column and notice from our sign matrix that this time we'll be starting the cofactor signs off with a "-", unlike the first two expansions.
$$\det(A) = a_{12}C_{12} + a_{22}C_{22} + a_{32}C_{32} = (2)(-)\begin{vmatrix} -2 & 3 \\ -7 & 0 \end{vmatrix} + (-6)(+)\begin{vmatrix} 4 & 1 \\ -7 & 0 \end{vmatrix} + (5)(-)\begin{vmatrix} 4 & 1 \\ -2 & 3 \end{vmatrix} = -2(21) - 6(7) - 5(14) = -154$$
Again, the same as the first two, as we expected.

There was another point to the previous problem apart from showing that the row or column we choose to expand along won't matter. Because we are allowed to expand along any row, that means unless the problem statement forces us to use a particular row or column we will get to choose the row/column to expand along. When choosing we should choose a row/column that will reduce the amount of work we've got to do, if possible. Comparing the parts of the previous example should suggest to us something we should be looking for in making this choice. In part (b) it was pointed out that we didn't really need to compute the third cofactor since the third entry in that row was zero. Choosing to expand along a row/column with zeroes in it will instantly cut back on the number of cofactors that we'll need to compute. So, when allowed to choose which row/column to expand along we should look for the one with the most zeroes.
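This expansion translates directly into a short recursive routine. Here is a sketch (the function name is ours) that expands along the first row and skips zero entries, exactly the shortcut just described; it reproduces the determinant from Example 2.

import numpy as np

def det_cofactor(A):
    # determinant via cofactor expansion along the first row
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        if A[0, j] == 0:       # zero entries contribute nothing
            continue
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[4.0, 2.0, 1.0],
              [-2.0, -6.0, 3.0],
              [-7.0, 5.0, 0.0]])
print(det_cofactor(A))   # -154.0, matching Example 2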

In the case of the previous example that means that the quickest expansions would be either the 3rd row or the 3rd column, since both of those have a zero in them and none of the other rows/columns do.

So, let's take a look at a couple more examples.

Example 3 Using a cofactor expansion compute the determinant of the given 4 x 4 matrix.

Solution
Since the row or column to use for the cofactor expansion was not given in the problem statement we get to choose which one we want to use. Recalling the brief discussion after the last example, we know that we want to choose the row/column with the most zeroes in it since that will mean we won't have to compute cofactors for each entry that is a zero. So, it looks like the second row would be a good choice for the expansion since it has two zeroes in it. Here is the expansion for this row. As with the previous expansions we'll explicitly give the "+" or "-" for the cofactors, and the minors as well, so you can see where everything in the expansion is coming from. We didn't bother to write down the minors M_22 and M_23 because of the zero entries in that row.

How we choose to compute the determinants for the first and last entries is up to us at this point. We could use a cofactor expansion on each of them or we could use the technique we learned in the first section of this chapter. Either way will get the same answer and we'll leave it to you to verify these determinants. The determinant for this matrix is,
det(A) = 88

Example 4 Using a cofactor expansion compute the determinant of the given 5 x 5 matrix.

Solution
This is a large matrix, but if you check out the third column we'll see that there is only one non-zero entry in that column and so that looks like a good column to do a cofactor expansion on. Here's the cofactor expansion for this matrix. Again, we explicitly added in the "+" and "-" signs

and won't bother to write down the minors for the zero entries. Expanding along the third column leaves just one term: the single non-zero entry in that column times its cofactor.

Now, in order to complete this problem we'll need to take the determinant of a 4 x 4 matrix, and the only way that we've got to do that is to once again do a cofactor expansion on it. In this case it looks like the third row will be the best option since it's got more zero entries than any other row or column. This time we'll just put in the terms that come from non-zero entries. Also, don't forget that there is still a coefficient in front of this determinant! Multiplying out the remaining 2 x 2 and 3 x 3 determinants then gives the determinant of B, and we'll leave that arithmetic to you.

This last example has shown one of the drawbacks to this method. Once the size of the matrix gets large there can be a lot of work involved in the method. Also, for anything larger than a 4 x 4 matrix you are almost assured of having to do cofactor expansions multiple times until the size of the matrix gets down to 2 x 2 and other methods can be used. There is a way to simplify things down somewhat, but we'll need the topic of the next section before we can show that.

Now let's move on to the final topic of this section. It turns out that we can also use cofactors to determine the inverse of an invertible matrix. To see how this is done we'll first need a quick definition.

Definition 3 Let A be an n x n matrix and C_ij be the cofactor of a_ij. The matrix of cofactors from A is,
$$\begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & & & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}$$
The adjoint of A is the transpose of the matrix of cofactors and is denoted by adj(A).

Example 5 Compute the adjoint of the following matrix.
$$A = \begin{pmatrix} 4 & 2 & 1 \\ -2 & -6 & 3 \\ -7 & 5 & 0 \end{pmatrix}$$

Solution
We need the cofactors for each of the entries from this matrix. This is the matrix from Example 2 and in that example we computed all the cofactors except for C_21 and C_23, so here are those computations.
$$C_{21} = (-1)^{2+1}\begin{vmatrix} 2 & 1 \\ 5 & 0 \end{vmatrix} = (-1)(-5) = 5 \qquad C_{23} = (-1)^{2+3}\begin{vmatrix} 4 & 2 \\ -7 & 5 \end{vmatrix} = (-1)(34) = -34$$
Here are the others from Example 2.
C_11 = -15, C_12 = -21, C_13 = -52, C_22 = 7, C_31 = 12, C_32 = -14, C_33 = -20

The matrix of cofactors is then,
$$\begin{pmatrix} -15 & -21 & -52 \\ 5 & 7 & -34 \\ 12 & -14 & -20 \end{pmatrix}$$
The adjoint is then,
$$\mathrm{adj}(A) = \begin{pmatrix} -15 & 5 & 12 \\ -21 & 7 & -14 \\ -52 & -34 & -20 \end{pmatrix}$$

We started this portion of this section off by saying that we were going to see how to use cofactors to determine the inverse of a matrix. Here is the theorem that will tell us how to do that.

Theorem 2 If A is an invertible matrix then
$$A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$$

Example 6 Use the adjoint matrix to compute the inverse of the following matrix.
$$A = \begin{pmatrix} 4 & 2 & 1 \\ -2 & -6 & 3 \\ -7 & 5 & 0 \end{pmatrix}$$

Solution
We've done most of the work for this problem already. In Example 2 we determined that det(A) = -154 and in Example 5 we found the adjoint to be
$$\mathrm{adj}(A) = \begin{pmatrix} -15 & 5 & 12 \\ -21 & 7 & -14 \\ -52 & -34 & -20 \end{pmatrix}$$
Therefore, the inverse of the matrix is,
$$A^{-1} = -\frac{1}{154}\begin{pmatrix} -15 & 5 & 12 \\ -21 & 7 & -14 \\ -52 & -34 & -20 \end{pmatrix} = \begin{pmatrix} \tfrac{15}{154} & -\tfrac{5}{154} & -\tfrac{6}{77} \\ \tfrac{3}{22} & -\tfrac{1}{22} & \tfrac{1}{11} \\ \tfrac{26}{77} & \tfrac{17}{77} & \tfrac{10}{77} \end{pmatrix}$$

You might want to verify this using the row reduction method we used in the previous chapter, for the practice.
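Theorem 2 translates directly into code. Here is a short sketch (the function name adjoint is ours) that builds the adjoint entry by entry and then checks the resulting inverse against NumPy's built-in one, using the matrix from Example 6.

import numpy as np

def adjoint(A):
    # transpose of the matrix of cofactors of A
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[4.0, 2.0, 1.0],
              [-2.0, -6.0, 3.0],
              [-7.0, 5.0, 0.0]])
A_inv = adjoint(A) / np.linalg.det(A)          # Theorem 2
print(np.allclose(A_inv, np.linalg.inv(A)))    # True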

Using Row Reduction To Compute Determinants

In this section we'll take a look at the second method for computing determinants. The idea in this section is to use row reduction on a matrix to get it down to row-echelon form. Since we're computing determinants we know that the matrix, A, we're working with will be square, and so the row-echelon form of the matrix will be an upper triangular matrix and we know how to quickly compute the determinant of a triangular matrix.

So, since we already know how to do row reduction, all we need to know before we can work some problems is how the row operations used in the row reduction process will affect the determinant. Before proceeding we should point out that there is a set of elementary column operations that mirror the elementary row operations. We can multiply a column by a scalar, c, we can interchange two columns, and we can add a multiple of one column onto another column. These operations could just as easily be used as row operations and so all the theorems in this section will make note of that. We'll just be using row operations, however, in our examples.

Here is the theorem that tells us how row or column operations will affect the value of the determinant of a matrix.

Theorem 1 Let A be a square matrix.
(a) If B is the matrix that results from multiplying a row or column of A by a scalar, c, then det(B) = c det(A)
(b) If B is the matrix that results from interchanging two rows or two columns of A then det(B) = -det(A)
(c) If B is the matrix that results from adding a multiple of one row of A onto another row of A, or adding a multiple of one column of A onto another column of A, then det(B) = det(A)

Notice that the row operation that we'll be using the most in the row reduction process will not change the determinant at all. The operations that we're going to need to worry about are the first two, and the second is easy enough to take care of. If we interchange two rows the determinant changes by a minus sign. We are going to have to be a little careful with the first one, however.

Let's check out an example of how this method works in order to see what's going on.

Example 1 Use row reduction to compute the determinant of the following matrix.
$$A = \begin{pmatrix} 4 & 12 \\ -7 & 5 \end{pmatrix}$$

Solution
There is of course no real reason to do row reduction on this matrix in order to compute the determinant. We can find it easily enough at this point. In fact, let's do that so we can check the results of our work after we do row reduction on this.
det(A) = (4)(5) - (12)(-7) = 104

Okay, now let's do this with row reduction to see what we've got. We need to reduce this down to row-echelon form, and while there are other ways to get a 1 into the first entry of the first row, let's just divide the first row by 4 since that's the one operation we're going

to need to be careful with. So, let's do the first operation and see what we've got.
$$A = \begin{pmatrix} 4 & 12 \\ -7 & 5 \end{pmatrix} \xrightarrow{\frac{1}{4}R_1} \begin{pmatrix} 1 & 3 \\ -7 & 5 \end{pmatrix} = B$$
So, we've called the result B, and let's see what the determinant of this matrix is.
det(B) = (1)(5) - (3)(-7) = 26 = (1/4) det(A)
So, the results of the theorem are verified for this step. The next step is then to convert the -7 into a zero. Let's do that and see what we get.
$$B \xrightarrow{R_2 + 7R_1} \begin{pmatrix} 1 & 3 \\ 0 & 26 \end{pmatrix} = C$$
According to the theorem, C should have the same determinant as B, and it does (you should verify this statement). The final step is to convert the 26 into a 1.
$$C \xrightarrow{\frac{1}{26}R_2} \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix} = D$$
Now, we've got the following,
det(D) = 1 = (1/26) det(C)
Once again the theorem is verified.

Now, just how does all of this help us to find the determinant of the original matrix? We could work our way backwards from det(D) and figure out what det(A) is. However, there is a way to modify our work above that will allow us to also get the answer once we reach row-echelon form. To see how we do this, let's go back to the first operation that we did, where we saw when we were done that we had,
det(B) = (1/4) det(A)    OR    det(A) = 4 det(B)
Written in another way this is,
$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} = 4\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix} = 4\det(B)$$
Notice that the determinants, when written in the matrix form, are pretty much what we originally wrote down when doing the row operation. Therefore, instead of writing down the row operation as we did above, let's just use this matrix form of the determinant and write the row operation as follows.
$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} \xrightarrow{\frac{1}{4}R_1} (4)\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix}$$

In going from the matrix on the left to the matrix on the right we performed the operation (1/4)R_1, and in the process we changed the value of the determinant. So, since we've got an equal sign here we need to also modify the determinant of the matrix on the right so that it will remain equal to the determinant of the matrix on the left. As shown above, we can do this by multiplying the matrix on the right by the reciprocal of the scalar we used in the row operation.

Let's complete this, and notice that in the second step we aren't going to change the value of the determinant since we're adding a multiple of the first row onto the second row, so we'll not change the value of the determinant on the right. In the final operation we divided the second row by 26 and so we'll need to multiply the determinant on the right by 26 to preserve the equality of the determinants. Here is the complete work for this problem using these ideas.
$$\det(A) = \begin{vmatrix} 4 & 12 \\ -7 & 5 \end{vmatrix} \xrightarrow{\frac{1}{4}R_1} (4)\begin{vmatrix} 1 & 3 \\ -7 & 5 \end{vmatrix} \xrightarrow{R_2 + 7R_1} (4)\begin{vmatrix} 1 & 3 \\ 0 & 26 \end{vmatrix} \xrightarrow{\frac{1}{26}R_2} (4)(26)\begin{vmatrix} 1 & 3 \\ 0 & 1 \end{vmatrix}$$
Okay, we're down to row-echelon form, so let's strip out all the intermediate steps and see what we've got.
$$\det(A) = (4)(26)\begin{vmatrix} 1 & 3 \\ 0 & 1 \end{vmatrix}$$
The matrix on the right is triangular and we know that determinants of triangular matrices are just the product of the main diagonal entries, and so the determinant of A is,
det(A) = (4)(26)(1)(1) = 104

Now, that was a lot of work to compute the determinant and in general we wouldn't use this method on a 2 x 2 matrix, but by doing it on one here it allowed us to investigate the method in detail without having to deal with a lot of steps.

There are a couple of issues to point out before we move into another, more complicated problem. First, we didn't do any row interchanges in the above example, but the theorem tells us that they will only change the sign on the determinant. So, if we do a row interchange in our work we'll just tack a minus sign onto the determinant.

Second, we took the matrix all the way down to row-echelon form, but if you stop to think about it there's really nothing special about that in this case. All we need to do is reduce the matrix to a triangular matrix and then use the fact that we can quickly find the determinant of any triangular matrix.
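The bookkeeping in this method (a sign flip for every interchange, a compensating factor for every scaling) is exactly what an implementation has to track. Here is a minimal sketch, assuming we only use interchanges and the third row operation so that the sign is the one thing to correct for; the function name is ours.

import numpy as np

def det_by_row_reduction(A):
    # determinant via reduction to triangular form, tracking sign flips
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # interchange rows if needed to get a non-zero pivot
        p = k + np.argmax(np.abs(U[k:, k]))
        if U[p, k] == 0:
            return 0.0              # a column of zeroes below: singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign            # Theorem 1(b)
        # the third row operation leaves the determinant unchanged, Theorem 1(c)
        for i in range(k + 1, n):
            U[i] -= (U[i, k] / U[k, k]) * U[k]
    return sign * np.prod(np.diag(U))

A = np.array([[4.0, 12.0], [-7.0, 5.0]])
print(det_by_row_reduction(A))   # 104.0, matching Example 1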

From this point on we'll not be going all the way to row-echelon form. We'll just make sure that we reduce the matrix down to a triangular matrix and then stop and compute the determinant.

Example 2 Use row reduction to compute the determinant of the given 3 x 3 matrix.

Solution
We'll do this one with less explanation. Just remember that if we interchange rows we tack a minus sign onto the determinant, and if we multiply a row by a scalar we'll need to multiply the new determinant by the reciprocal of the scalar. Interchanging the first two rows (to get a 1 into the upper left entry), clearing out the first column, factoring -10 out of the second row, and then clearing out the second column reduces the matrix to a triangular one with 1, 1 and 49/5 down its diagonal.

Okay, we've gotten the matrix down to triangular form, and so at this point we can stop and just take the determinant of that, making sure to keep the scalars that are multiplying it. Here is the final computation for this problem.
det(A) = -10(1)(1)(49/5) = -98

Example 3 Use row reduction to compute the determinant of the given 4 x 4 matrix.

Solution
Okay, there's going to be some work here, so let's get going on it.

Factoring -3 out of the first row, then using the third row operation to clear out the first column (R_2 + 4R_1 and R_4 - R_1), and then repeating this process on the remaining columns (factoring out the scalars that show up along the way) eventually reduces the matrix to a triangular one.

Okay, that was a lot of work, but we've gotten it into a form we can deal with. Multiplying the accumulated scalars by the product of the diagonal entries gives the determinant,
det(A) = -1404

Now, as the previous example has shown us, this method can be a lot of work, and it's work that, if we aren't paying attention, will make it easy to make a mistake. There is a method that we could have used here to significantly reduce our work and it's not even a new method.

Notice that with this method at each step we have a new determinant that needs computing. We continued down until we got a triangular matrix since that would be easy for us to compute. However, there's nothing keeping us from stopping at any step and using some other method for computing the determinant. In fact, if you look at our work, after the second step we've gotten a column with a 1 in the first entry and zeroes below it.

If we were in the previous section we'd just do a cofactor expansion along this column for this determinant. So, let's do that. No one ever said we couldn't mix the methods from this and the previous section in a problem.

Example 4 Use row reduction and a cofactor expansion to compute the determinant of the matrix in Example 3.

Solution
Okay, this new method says to use row reduction until we get a matrix that would be easy to do a cofactor expansion on. As noted earlier that means only doing the first two steps, which leave us with a factor of -3 out in front of a determinant whose first column has a 1 in the first entry and zeroes below it.

At this point we'll just do a cofactor expansion along the first column. Since only the first entry of that column is non-zero, the only term that survives is the 1 times its cofactor, a single 3 x 3 determinant. At this point we can use any method to compute the determinant of the new matrix, so we'll leave it to you to verify that
det(A) = (-3)(468) = -1404

There is one final idea that we need to discuss in this section before moving on.

Theorem 2 Suppose that A is a square matrix and that two of its rows are proportional or two of its columns are proportional. Then det(A) = 0.

When we say that two rows or two columns are proportional, that means that one of the rows (columns) is a scalar times another row (column) of the matrix. We're not going to prove this theorem, but if you think about it, it should make some sense. Let's suppose that two rows are proportional. So we know that one of the rows is a scalar multiple of another row. This means we can use the third row operation to make one of the rows all zero.

From Theorem 1 above we know that both of these matrices must have the same determinant, and from Theorem 7 from the Determinant Properties section we know that if a matrix has a row or column of all zeroes then that matrix is singular, i.e. its determinant is zero. Therefore both matrices must have a zero determinant.

Here is a quick example showing this.

Example 5 Show that the given matrix is singular.

Solution
We can use Theorem 2 above upon noticing that the third row is -2 times the first row. That's all we need to use this theorem. So, technically we've answered the question. However, let's go through the steps outlined above to also show that this matrix is singular. To do this we'd do one row reduction step, R_3 + 2R_1, to get the row of all zeroes into the matrix.

We know by Theorem 1 above that these two matrices have the same determinant. Then, because we see a row of all zeroes, we can invoke Theorem 7 from the Determinant Properties section to say that the determinant on the right must be zero, and so that matrix must be singular. Then, as we pointed out, these two matrices have the same determinant and so we've also got det(A) = 0 and so A is singular.

You might want to verify that this matrix is singular by computing its determinant with one of the other methods we've looked at, for the practice.

We've now looked at several methods for computing determinants, and as we've seen, each can be long and prone to mistakes. On top of that, for some matrices one method may work better than the other. So, when faced with a determinant you'll need to look at it and determine which method to use, and unless otherwise specified by the problem statement you should use the one that you find the easiest to use. Note that this may not be the method that somebody else chooses to use, but you shouldn't worry about that. You should use the method you are the most comfortable with.
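Theorem 2 is easy to see numerically: build a matrix whose third row is -2 times its first row, as in Example 5, and the determinant comes out zero up to floating-point round-off. The matrix here is our own sample, not the one from the notes.

import numpy as np

A = np.array([[4.0, -1.0, 2.0],
              [5.0, 0.0, 3.0],
              [-8.0, 2.0, -4.0]])   # third row = -2 times the first row

print(np.linalg.det(A))   # 0.0 up to round-off, so A is singular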

Cramer's Rule

In this section we're going to come back and take one more look at solving systems of equations. In this section we're actually going to be able to get a general solution to certain systems of equations. It won't work on all systems of equations and, as we'll see, if the system is too large it will probably be quicker to use one of the other methods that we've got for solving systems of equations.

So, let's jump into the method.

Theorem 1 Suppose that A is an n x n invertible matrix. Then the solution to the system Ax = b is given by,
$$x_1 = \frac{\det(A_1)}{\det(A)}, \quad x_2 = \frac{\det(A_2)}{\det(A)}, \quad \ldots, \quad x_n = \frac{\det(A_n)}{\det(A)}$$
where A_i is the matrix found by replacing the i-th column of A with b.

Proof: The proof of this is actually pretty simple. First, because we know that A is invertible, we know that the inverse exists and that det(A) != 0. We also know that the solution to the system can be given by,
x = A^{-1} b
From the section on cofactors we know how to define the inverse in terms of the adjoint of A. Using this gives us,
$$\mathbf{x} = A^{-1}\mathbf{b} = \frac{1}{\det(A)}\,\mathrm{adj}(A)\,\mathbf{b} = \frac{1}{\det(A)}\begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$
Recall that C_ij is the cofactor of a_ij. Also note that the subscripts on the cofactors above appear to be backwards, but they are correctly placed. Recall that we get the adjoint by first forming a matrix with C_ij in the i-th row and j-th column and then taking the transpose to get the adjoint.

Now, multiply out the matrices to get,
$$\mathbf{x} = \frac{1}{\det(A)}\begin{pmatrix} b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1} \\ b_1 C_{12} + b_2 C_{22} + \cdots + b_n C_{n2} \\ \vdots \\ b_1 C_{1n} + b_2 C_{2n} + \cdots + b_n C_{nn} \end{pmatrix}$$
The entry in the i-th row of x, which is x_i in the solution, is
$$x_i = \frac{b_1 C_{1i} + b_2 C_{2i} + \cdots + b_n C_{ni}}{\det(A)}$$

Next let's define,
$$A_i = \begin{pmatrix} a_{11} & \cdots & a_{1\,i-1} & b_1 & a_{1\,i+1} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2\,i-1} & b_2 & a_{2\,i+1} & \cdots & a_{2n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n\,i-1} & b_n & a_{n\,i+1} & \cdots & a_{nn} \end{pmatrix}$$
So, A_i is the matrix we get by replacing the i-th column of A with b. Now, if we were to compute the determinant of A_i by expanding along the i-th column, then each of the products would be one of the b_k's times the appropriate cofactor. Notice however that since the only difference between A_i and A is the i-th column, the cofactors we get by expanding A_i along the i-th column will be exactly the same as the cofactors we would get by expanding A along the i-th column. Therefore, the determinant of A_i is given by,
det(A_i) = b_1 C_1i + b_2 C_2i + ... + b_n C_ni
where C_ki is the cofactor of a_ki from the matrix A. Note however that this is exactly the numerator of x_i, and so we have,
x_i = det(A_i)/det(A)
as we wanted to prove.

Let's work a quick example to illustrate the method.

Example 1 Use Cramer's Rule to determine the solution to a system of three equations in three unknowns.

Solution
First let's put the system into matrix form, Ax = b, and verify that the coefficient matrix is invertible. For the system in this example,
det(A) = 87 != 0
So, the coefficient matrix is invertible and Cramer's Rule can be used on the system. We'll also need det(A) in a bit, so it's good that we now have it. Let's now write down the formulas for the solution to this system.

x_1 = det(A_1)/det(A)    x_2 = det(A_2)/det(A)    x_3 = det(A_3)/det(A)
where A_1 is the matrix formed by replacing the 1st column of A with b, A_2 is the matrix formed by replacing the 2nd column of A with b, and A_3 is the matrix formed by replacing the 3rd column of A with b.

We'll leave it to you to verify the three determinants det(A_1), det(A_2) and det(A_3); dividing each of them by det(A) then gives the solution to the system.

Now, the solution to this system had some somewhat messy numbers in it and that would have made the row reduction method prone to mistakes. However, since this solution required us to compute 4 determinants, as you can see, if your system gets too large this would be a very time consuming method to use. For example, a system with 5 equations and 5 unknowns would require us to compute six 5 x 5 determinants. At that point, regardless of how messy the final answers are, there is a good chance that the row reduction method would be easier.
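Cramer's Rule is only a few lines of code: each A_i is built by swapping b into the i-th column. Here is a sketch; the 3 x 3 system is our own sample, since the entries of the system in the example above are not reproduced here.

import numpy as np

def cramer(A, b):
    # solve A x = b by Cramer's Rule (A must be square and invertible)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b            # replace the i-th column of A with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 4.0]])   # sample system, our choice
b = np.array([3.0, 4.0, 2.0])
print(cramer(A, b))
print(np.linalg.solve(A, b))       # same answer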

Euclidean n-Space

Introduction

In this chapter we are going to start looking at the idea of a vector, and the ultimate goal of this chapter will be to define something called Euclidean n-space. In this chapter we'll be looking at some very specific examples of vectors so we can build up some of the ideas that surround them. We will reserve general vectors for the next chapter.

We will also be taking a quick look at the topic of linear transformations. Linear transformations are a very important idea in the study of Linear Algebra.

Here is a listing of the topics in this chapter.

Vectors - In this section we'll introduce vectors in 2-space and 3-space as well as some of the important ideas about them.

Dot Product & Cross Product - Here we'll look at the dot product and the cross product, two important products for vectors. We'll also take a look at an application of the dot product.

Euclidean n-Space - We'll introduce the idea of Euclidean n-space in this section and extend many of the ideas of the previous two sections.

Linear Transformations - In this section we'll introduce the topic of linear transformations and look at many of their properties.

Examples of Linear Transformations - We'll take a look at quite a few examples of linear transformations in this section.

Vectors

In this section we're going to start taking a look at vectors in 2-space (normal two dimensional space) and 3-space (normal three dimensional space). Later in this chapter we'll be expanding the ideas here to n-space, and we'll be looking at a much more general definition of a vector in the next chapter. However, if we start in 2-space and 3-space we'll be able to use a geometric interpretation that may help us understand some of the concepts we're going to be looking at.

So, let's start off with defining a vector in 2-space or 3-space. A vector can be represented geometrically by a directed line segment that starts at a point A, called the initial point, and ends at a point B, called the terminal point. Below is an example of a vector in 2-space.

Vectors are typically denoted with a boldface lower case letter. For instance, we could represent the vector above by v, w, a, or b, etc. Also, when we've explicitly given the initial and terminal points we will often represent the vector as,
$$\mathbf{v} = \overrightarrow{AB}$$
where the positioning of the upper case letters is important. The A is the initial point and so is listed first, while the terminal point, B, is listed second.

As we can see in the figure of the vector shown above, a vector imparts two pieces of information. A vector will have a direction and a magnitude (the length of the directed line segment). Two vectors with the same magnitude but different directions are different vectors, and likewise two vectors with the same direction but different magnitudes are different. Vectors with the same direction and same magnitude are called equivalent, and even though they may have different initial and terminal points we think of them as equal, so if v and u are two equivalent vectors we will write,
v = u
To illustrate this idea, all of the vectors in the image below (all in 2-space) are equivalent since they have the same direction and magnitude.

It is often difficult to really visualize a vector without a frame of reference, and so we will often introduce a coordinate system to the picture. For example, in 2-space, suppose that v is any vector whose initial point is at the origin of the rectangular coordinate system and whose terminal point is at the coordinates (v_1, v_2), as shown below.

In these cases we call the coordinates of the terminal point the components of v and write,
v = (v_1, v_2)

We can do a similar thing for vectors in 3-space. Before we get into that, however, let's make sure that you're familiar with all the concepts we might run across in dealing with 3-space. Below is a point in 3-space.

Just as a point in 2-space is described by a pair (x, y), we describe a point in 3-space by a triple (x, y, z). Next, if we take each pair of coordinate axes and look at the plane they form, we call these the coordinate planes and denote them as the xy-plane, yz-plane, and xz-plane respectively. Also note that if we take the general point and move it straight into one of the coordinate planes we get a new point where one of the coordinates is zero. For instance, in the xy-plane we have the point (x, y, 0), etc.

Just as in 2-space, suppose that we've got a vector v whose initial point is the origin of the coordinate system and whose terminal point is given by (v_1, v_2, v_3), as shown below.

Just as in 2-space we call (v_1, v_2, v_3) the components of v and write,
v = (v_1, v_2, v_3)

Before proceeding any further we should briefly talk about the notation we're using, because it can be confusing sometimes. We are using the notation (v_1, v_2, v_3) to represent both a point in 3-space and a vector in 3-space, as shown in the figure above. This is something you'll need to get used to. In this class (v_1, v_2, v_3) can be either a point or a vector, and we'll need to be careful and pay attention to the context of the problem, although in many problems it won't really matter. We'll be able to use it as a point or a vector as we need to. The same comment could be made for points/vectors in 2-space.

Now, let's get back to the discussion at hand and notice that the component form of the vector is really telling us how to get from the initial point of the vector to the terminal point of the vector. For example, let's suppose that v = (v_1, v_2) is a vector in 2-space with initial point A = (x_1, y_1). The first component of the vector, v_1, is the amount we have to move to the right (if v_1 is positive) or to the left (if v_1 is negative). The second component tells us how much to move up or down depending on the sign of v_2. The terminal point of v is then given by,
B = (x_1 + v_1, y_1 + v_2)
Likewise, if v = (v_1, v_2, v_3) is a vector in 3-space with initial point A = (x_1, y_1, z_1), the terminal point is given by,
B = (x_1 + v_1, y_1 + v_2, z_1 + v_3)
Notice as well that if the initial point is the origin then the terminal point will be B = (v_1, v_2, v_3), and we once again see that (v_1, v_2, v_3) can represent both a point and a vector.

This can all be turned around as well. Let's suppose that we've got two points in 2-space, A = (x_1, y_1) and B = (x_2, y_2). Then the vector with initial point A and terminal point B is given by,
$$\overrightarrow{AB} = (x_2 - x_1,\; y_2 - y_1)$$
Note that the order of the points is important. The components are found by subtracting the coordinates of the initial point from the coordinates of the terminal point. If we turned this around and wanted the vector with initial point B and terminal point A, we'd have,
$$\overrightarrow{BA} = (x_1 - x_2,\; y_1 - y_2)$$
Of course we can also do this in 3-space. Suppose that we want the vector that has an initial point of A = (x_1, y_1, z_1) and a terminal point of B = (x_2, y_2, z_2). This vector is given by,
$$\overrightarrow{AB} = (x_2 - x_1,\; y_2 - y_1,\; z_2 - z_1)$$
Let's see an example of this.

Example 1 Find the vector that starts at A = (4, -2, 9) and ends at B = (-7, 0, 6).

Solution
There really isn't much to do here other than use the formula above.
$$\mathbf{v} = \overrightarrow{AB} = (-7 - 4,\; 0 - (-2),\; 6 - 9) = (-11, 2, -3)$$
Here is a sketch showing the points and the vector.

Okay, it's now time to move into arithmetic of vectors. For each operation we'll look at both a geometric and a component interpretation. The geometric interpretation will help with understanding just what the operation is doing, and the component interpretation will help us to actually do the operation.

There are two quick topics that we first need to address in vector arithmetic. The first is the zero vector. The zero vector, denoted by 0, is a vector with no length. Because the zero vector has no length it is hard to talk about its direction, so by convention we say that the zero vector can have any direction that we need it to have in a given problem.

The next quick topic to discuss is the negative of a vector. If v is a vector then the negative of the vector, denoted by -v, is defined to be the vector with the same length as v but with the opposite direction to v, as shown below. We'll see how to compute the negative vector in a bit. Also note that sometimes the negative is called the additive inverse of the vector v.

Okay, let's start off the arithmetic with addition.

Definition 1 Suppose that v and w are two vectors. Then to find the sum of the two vectors, denoted v + w, we position w so that its initial point coincides with the terminal point of v. The new vector whose initial point is the initial point of v and whose terminal point is the terminal point of w will be the sum of the two vectors, or v + w.

Below are three sketches of what we've got here with addition of vectors in 2-space. In terms of components we have v = (v_1, v_2) and w = (w_1, w_2).

The sketch on the left matches the definition above. We first sketch in v and then sketch w starting where v left off. The resultant vector is then the sum. In the middle we have the sketch for w + v, and as we can see we get exactly the same resultant vector. From this we can see that we will have,
v + w = w + v
The sketch on the right merges the first two sketches into one and also adds in the components for each of the vectors. It's a little busy, but you can see that the coordinates of the sum are (v_1 + w_1, v_2 + w_2). Therefore, for the vectors in 2-space we can compute the sum of two vectors using the following formula.
v + w = (v_1 + w_1, v_2 + w_2)
Likewise, if we have two vectors in 3-space, say v = (v_1, v_2, v_3) and w = (w_1, w_2, w_3), then we'll have,
v + w = (v_1 + w_1, v_2 + w_2, v_3 + w_3)

Now that we've got addition and the negative of a vector out of the way we can do subtraction.

Definition 2 Suppose that we have two vectors v and w. Then the difference of w from v, denoted by v - w, is defined to be,
v - w = v + (-w)
If we make a sketch, in 2-space, for the summation form of the difference we get the following sketch.

Now, while this sketch shows us what the vector for the difference is as a summation, we generally like to have a sketch that relates to the two original vectors and not one of the vectors and the negative of the other. We can do this by recalling that any two vectors are equal if they have the same magnitude and direction. Upon recalling this we can pick up the vector representing the difference and move it as shown below.

Finally, if we were to go back to the original sketch and add in components for the vectors, we would see that in 2-space we can compute the difference as follows,
v - w = (v_1 - w_1, v_2 - w_2)
and if the vectors are in 3-space the difference is,
v - w = (v_1 - w_1, v_2 - w_2, v_3 - w_3)

Note that both addition and subtraction will extend naturally to more than two vectors.

The final arithmetic operation that we want to take a look at is scalar multiplication.

Definition 3 Suppose that v is a vector and c is a non-zero scalar (i.e. c is a number). Then the scalar multiple, cv, is the vector whose length is |c| times the length of v and is in the direction of v if c is positive and in the opposite direction of v if c is negative.

Here is a sketch of some scalar multiples of a vector v.

Note that we can see from this that scalar multiples are parallel. In fact, it can be shown that if v and w are two parallel vectors then there is a non-zero scalar c such that v = cw, or in other words the two vectors will be scalar multiples of each other.

It can also be shown that if v is a vector in either 2-space or 3-space then the scalar multiple can be computed as follows,
cv = (cv_1, cv_2)    OR    cv = (cv_1, cv_2, cv_3)

At this point we can give a formula for the negative of a vector. Let's examine the scalar multiple (-1)v. This is a vector whose length is the same as v, since |-1| = 1, and is in the opposite direction of v since the scalar is negative. Hence this vector represents the negative of v. In 3-space this gives,
-v = (-1)v = (-v_1, -v_2, -v_3)
and in 2-space we'll have,
-v = (-1)v = (-v_1, -v_2)

Before we move on to an example let's get some properties of vector arithmetic written down.

Theorem 1 If u, v, and w are vectors in 2-space or 3-space and c and k are scalars then,
(a) u + v = v + u
(b) u + (v + w) = (u + v) + w
(c) u + 0 = 0 + u = u
(d) u - u = u + (-u) = 0
(e) 1u = u
(f) (ck)u = c(ku) = k(cu)
(g) (c + k)u = cu + ku
(h) c(u + v) = cu + cv

The proofs of all these come directly from the component definition of the operations and so are left to you to verify.

At this point we should probably do a couple of examples of vector arithmetic to say that we've done that.

Example 2 Given the following vectors compute the indicated quantity.
a = (4, -6)    b = (-3, -7)    c = (-1, 5)
u = (-1, -2, 6)    v = (0, -4, -1)    w = (9, -2, -3)
(a) -w
(b) a + b
(c) a - c
(d) a - 3b + 10c
(e) 4u + v - 2w

Solution
There really isn't too much to these other than to compute the scalar multiples and then do the addition and/or subtraction. For the first three we'll include sketches so you can visualize what's going on with each operation.

(a) -w
-w = (-9, 2, 3)
Here is a sketch of this vector as well as w.

(b) a + b
a + b = (4 + (-3), -6 + (-7)) = (1, -13)
Here is a sketch of a and b as well as the sum.

(c) a - c
a - c = (4 - (-1), -6 - 5) = (5, -11)
Here is a sketch of a and c as well as the difference.

(d) a - 3b + 10c
a - 3b + 10c = (4, -6) - (-9, -21) + (-10, 50) = (3, 65)

(e) 4u + v - 2w
4u + v - 2w = (-4, -8, 24) + (0, -4, -1) - (18, -4, -6) = (-22, -8, 29)

There is one final topic that we need to discuss in this section. We are often interested in the length or magnitude of a vector, so we've got a name and notation to use when we're talking about the magnitude of a vector.

Definition 4 If v is a vector then the magnitude of the vector is called the norm of the vector and denoted by ||v||. Furthermore, if v is a vector in 2-space then,
$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2}$$
and if v is in 3-space we have,
$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2}$$

In the 2-space case the formula is fairly easy to see from a geometric perspective. Let's suppose that we have v = (v_1, v_2) and we want to find the magnitude (or length) of this vector. Let's consider the following sketch of the vector.

Since we know that the components of v are also the coordinates of the terminal point of the vector when its initial point is the origin (as it is here), we know the lengths of the sides of a right triangle as shown. Then, using the Pythagorean Theorem, we can find the length of the hypotenuse, but that is also the length of the vector. A similar argument can be done on the 3-space version.

From above we know that cv is a scalar multiple of v and that its length is |c| times the length of v, and so we have,
||cv|| = |c| ||v||
We can also get this from the definition of the norm. Here is the 3-space case; the 2-space argument is identical.
$$\|c\mathbf{v}\| = \sqrt{(cv_1)^2 + (cv_2)^2 + (cv_3)^2} = \sqrt{c^2\left(v_1^2 + v_2^2 + v_3^2\right)} = |c|\sqrt{v_1^2 + v_2^2 + v_3^2} = |c|\,\|\mathbf{v}\|$$

There is one norm that we'll be particularly interested in on occasion. Suppose v is a vector in 2-space or 3-space. We call v a unit vector if ||v|| = 1.

Let's compute a couple of norms.

Example 3 Compute the norms of the given vectors.
(a) v = (-5, 3, 9)
(b) j = (0, 1, 0)
(c) w = (3, -4) and (1/5)w

Solution
Not much to do with these other than to use the formula.

(a) ||v|| = sqrt((-5)^2 + (3)^2 + (9)^2) = sqrt(115)

(b) ||j|| = sqrt(0 + 1 + 0) = sqrt(1) = 1, so j is a unit vector!

(c) Okay, with this one we've got two norms to compute. Here is the first one.
||w|| = sqrt((3)^2 + (-4)^2) = sqrt(25) = 5
To get the second we'll first need,
(1/5)w = (3/5, -4/5)
and here is the norm using the fact that ||cv|| = |c| ||v||.
||(1/5)w|| = |1/5| ||w|| = (1/5)(5) = 1
As a check, let's also compute this using the formula for the norm.
||(1/5)w|| = sqrt((3/5)^2 + (-4/5)^2) = sqrt(9/25 + 16/25) = sqrt(1) = 1
Both methods get the same answer, as they should. Notice as well that w is not a unit vector but (1/5)w is a unit vector.
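These norm computations are one-liners with NumPy. The sketch below reproduces part (c) of Example 3, including the check that ||cw|| = |c| ||w||.

import numpy as np

w = np.array([3.0, -4.0])
print(np.linalg.norm(w))              # 5.0
print(np.linalg.norm(w / 5))          # 1.0, so (1/5)w is a unit vector
print(abs(1 / 5) * np.linalg.norm(w)) # 1.0, matching ||c v|| = |c| ||v||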

We now need to take a look at a couple of facts about the norm of a vector.

Theorem 2 Given a vector v in 2-space or 3-space, ||v|| >= 0. Also, ||v|| = 0 if and only if v = 0.

Proof: The proof of the first part comes directly from the definition of the norm. The norm is defined to be a square root, and by convention the value of a square root is always greater than or equal to zero, so a norm will always be greater than or equal to zero.

Now, for the second part, recall that when we say "if and only if" in a theorem statement we're saying that this is kind of a two way street. This statement is saying that if ||v|| = 0 then we must also have v = 0, and in reverse it's also saying that if v = 0 then we must also have ||v|| = 0.

To prove this we need to make each assumption and then prove that it will imply the other portion of the statement. We're only going to show the proof for the case where v is in 2-space. The proof for 3-space is identical. So, assume that v = (v_1, v_2) and let's start the proof by assuming that ||v|| = 0. Plugging into the formula for the norm gives,
$$0 = \sqrt{v_1^2 + v_2^2} \quad\Longrightarrow\quad v_1^2 + v_2^2 = 0$$
As shown, the only way we'll get zero out of a square root is if the quantity under the radical is zero. Now at this point we've got a sum of squares equaling zero. The only way this will happen is if the individual terms are zero. So, this means that,
v_1 = 0  and  v_2 = 0    so    v = (0, 0) = 0
So, if ||v|| = 0 we must have v = 0.

Next, let's assume that v = 0. In this case simply plug the components into the formula for the norm and a quick computation will show that ||v|| = 0, and so we're done.

Theorem 3 Given a non-zero vector v in 2-space or 3-space, define a new vector u = (1/||v||) v. Then u is a unit vector.

Proof: This is a really simple proof; just notice that u is a scalar multiple of v and take the norm of u.
||u|| = ||(1/||v||) v|| = |1/||v||| ||v||
Now we know that ||v|| > 0 because norms are always greater than or equal to zero, and a norm will only be zero if we have the zero vector. In this case we've explicitly assumed that we don't have the zero vector, and so we know the norm will be strictly positive. This will allow us to drop the absolute value bars on the norm when we do the computation. We can now do the following,
||u|| = (1/||v||) ||v|| = 1
So, u is a unit vector.

This theorem tells us that we can always turn a non-zero vector into a unit vector simply by dividing by the norm. Note as well that because all we're doing to compute this new unit vector is scalar multiplication by a positive number, this new unit vector will point in the same direction as the original vector.

Example 4 Given v = (-1, 2, -3) find a unit vector that,
(a) points in the same direction as v
(b) points in the opposite direction as v

Solution
(a) Now, as pointed out after the proof of the previous theorem, the unit vector computed in the theorem will point in the same direction as v, so all we need to do is compute the norm of v and then use the theorem to find a unit vector that will point in the same direction as v.
$$\|\mathbf{v}\| = \sqrt{(-1)^2 + (2)^2 + (-3)^2} = \sqrt{14}$$
$$\mathbf{u} = \frac{1}{\sqrt{14}}(-1, 2, -3) = \left(-\tfrac{1}{\sqrt{14}}, \tfrac{2}{\sqrt{14}}, -\tfrac{3}{\sqrt{14}}\right)$$

(b) We've done most of the work for this one. Since u is a unit vector that points in the same direction as v, its negative will be a unit vector that points in the opposite direction as v. So, here is the negative of u.
$$-\mathbf{u} = \left(\tfrac{1}{\sqrt{14}}, -\tfrac{2}{\sqrt{14}}, \tfrac{3}{\sqrt{14}}\right)$$

Finally, here is a sketch of all three of these vectors.
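Normalizing a vector in code mirrors Theorem 3 exactly. Here is a small sketch; the sample vector is our own, chosen so that its norm is sqrt(14) like the one in Example 4.

import numpy as np

v = np.array([1.0, 2.0, 3.0])       # sample vector with norm sqrt(14)
u = v / np.linalg.norm(v)           # unit vector in the same direction
print(np.linalg.norm(u))            # 1.0
print(np.linalg.norm(-u))           # 1.0, and -u points the opposite way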

Dot Product & Cross Product

In this section we're going to be taking a look at two special products of vectors, the dot product and the cross product. However, before we look at either one of them we need to get a quick definition out of the way.

Suppose that u and v are two vectors in 2-space or 3-space that are placed so that their initial points are the same. Then the angle between u and v is the angle theta that is formed by u and v such that 0 <= theta <= pi. Below are some examples of angles between vectors.

Notice that there are always two angles that are formed by the two vectors, and the one that we will always choose is the one that satisfies 0 <= theta <= pi. We'll be using this angle with both products.

So, let's get started by taking a look at the dot product. Of the two products we'll be looking at in this section, this is the one we're going to run across most often in later sections. We'll start with the definition.

Definition 1 If u and v are two vectors in 2-space or 3-space and theta is the angle between them, then the dot product, denoted by u . v, is defined as,
u . v = ||u|| ||v|| cos(theta)

Note that the dot product is sometimes called the scalar product or the Euclidean inner product.

Let's see a quick example or two of the dot product.

Example 1 Compute the dot product for the following pairs of vectors.
(a) u = (0, 0, 3) and v = (2, 0, 2), which makes the angle between them 45 degrees.
(b) u = (0, 1, 2) and v = (1, 2, -1), which makes the angle between them 90 degrees.

Solution
For reference purposes here is a sketch of the two sets of vectors.

(a) There really isn't too much to do here with this problem.
||u|| = 3    ||v|| = sqrt(8) = 2 sqrt(2)
u . v = (3)(2 sqrt(2)) cos(45) = (3)(2 sqrt(2))(sqrt(2)/2) = 6

(b) Nor is there a lot of work to do here.
||u|| = sqrt(5)    ||v|| = sqrt(6)
u . v = (sqrt(5))(sqrt(6)) cos(90) = (sqrt(5))(sqrt(6))(0) = 0

Now, there should be a question in everyone's mind at this point. Just how did we arrive at those angles above? They are the correct angles, but just how did we get them? That is the problem with this definition of the dot product. If you don't have the angle between two vectors you can't easily compute the dot product, and sometimes finding the correct angle is not the easiest thing to do.

Fortunately, there is another formula that we can use to compute the dot product that relies only on the components of the vectors and not the angle between them.

Theorem 1 Suppose that u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3) are two vectors in 3-space, then
u . v = u_1 v_1 + u_2 v_2 + u_3 v_3
Likewise, if u = (u_1, u_2) and v = (v_1, v_2) are two vectors in 2-space, then
u . v = u_1 v_1 + u_2 v_2

Proof: We'll just prove the 3-space version of this theorem. The 2-space version has a similar proof. Let's start out with the following figure.

So, these three vectors form a triangle and the lengths of the sides are ||u||, ||v||, and ||v - u||. Now, from the Law of Cosines we know that,
$$\|\mathbf{v} - \mathbf{u}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2\|\mathbf{v}\|\|\mathbf{u}\|\cos\theta$$
Now, plug in the definition of the dot product and solve for u . v.
$$\|\mathbf{v} - \mathbf{u}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2(\mathbf{u}\cdot\mathbf{v})$$
$$\mathbf{u}\cdot\mathbf{v} = \tfrac{1}{2}\left(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - \|\mathbf{v} - \mathbf{u}\|^2\right) \qquad (1)$$
Next, we know that v - u = (v_1 - u_1, v_2 - u_2, v_3 - u_3), and so we can compute ||v - u||^2. Note as well that because of the square on the norm we won't have a square root. We'll also do all of the multiplications.
$$\|\mathbf{v} - \mathbf{u}\|^2 = (v_1 - u_1)^2 + (v_2 - u_2)^2 + (v_3 - u_3)^2 = v_1^2 + v_2^2 + v_3^2 + u_1^2 + u_2^2 + u_3^2 - 2(v_1 u_1 + v_2 u_2 + v_3 u_3)$$
The first three terms of this are nothing more than the formula for ||v||^2 and the next three terms are the formula for ||u||^2. So, let's plug this into (1).
$$\mathbf{u}\cdot\mathbf{v} = \tfrac{1}{2}\Big(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - \big(\|\mathbf{v}\|^2 + \|\mathbf{u}\|^2 - 2(v_1 u_1 + v_2 u_2 + v_3 u_3)\big)\Big) = v_1 u_1 + v_2 u_2 + v_3 u_3$$
And we're done with the proof.

Before we work an example using this new (easier to use) formula, let's notice that if we rewrite the definition of the dot product as follows,

$$\cos\theta = \frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}, \qquad 0 \le \theta \le \pi$$
we now have a very easy way to determine the angle between any two vectors. In fact, this is how we got the angles between the vectors in the first example!

Example 2 Determine the angle between the following pairs of vectors.
(a) a = (9, -2)    b = (-4, -18)
(b) u = (3, -1, -6)    v = (4, 2, 0)

Solution
(a) Here are all the important quantities for this problem.
||a|| = sqrt(85)    ||b|| = sqrt(340)    a . b = (9)(-4) + (-2)(-18) = 0
The angle is then,
cos(theta) = 0    so    theta = 90 degrees

(b) The important quantities for this part are,
||u|| = sqrt(46)    ||v|| = sqrt(20)    u . v = (3)(4) + (-1)(2) + (-6)(0) = 10
The angle is then,
cos(theta) = 10 / (sqrt(46) sqrt(20)) = 0.3297    so    theta = 70.75 degrees
Note that we did need to use a calculator to get this result.

Twice now we've seen two vectors whose dot product is zero, and in both cases we've seen that the angle between them was 90 degrees, so the two vectors in question were perpendicular each time. Perpendicular vectors are called orthogonal, and as we'll see on occasion we often want to know if two vectors are orthogonal. The following theorem will give us a nice check for this.

Theorem 2 Two non-zero vectors, u and v, are orthogonal if and only if u . v = 0.

Proof: First suppose that u and v are orthogonal. This means that the angle between them is 90 degrees, and so from the definition of the dot product we have,
u . v = ||u|| ||v|| cos(90) = ||u|| ||v|| (0) = 0
and so we have u . v = 0.

Next suppose that u . v = 0. Then from the definition of the dot product we have,
0 = u . v = ||u|| ||v|| cos(theta)    so    cos(theta) = 0    so    theta = 90 degrees
and so the two vectors are orthogonal. Note that we used the fact that the two vectors are non-zero, and hence would have non-zero magnitudes, in determining that we must have cos(theta) = 0.
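The componentwise formula plus the rewritten definition give a two-line recipe for the angle between vectors. This sketch checks part (b) of Example 2:

import numpy as np

u = np.array([3.0, -1.0, -6.0])
v = np.array([4.0, 2.0, 0.0])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.degrees(np.arccos(cos_theta))
print(theta)    # roughly 70.75 degrees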

If we take the convention that the zero vector is orthogonal to any other vector, we can say that for any two vectors u and v they will be orthogonal provided u . v = 0. Using this convention means we don't need to worry about whether or not we have zero vectors.

Here are some nice properties of the dot product.

Theorem 3 Suppose that u, v, and w are three vectors that are all in 2-space or all in 3-space and that c is a scalar. Then,
(a) v . v = ||v||^2  (this implies that ||v|| = sqrt(v . v))
(b) u . v = v . u
(c) u . (v + w) = u . v + u . w
(d) c(u . v) = (cu) . v = u . (cv)
(e) v . v > 0 if v != 0
(f) v . v = 0 if and only if v = 0

We'll prove the first couple and leave the rest to you to prove, since they follow pretty much from either the definition of the dot product or the formula from Theorem 1. The proof of the last one is nearly identical to the proof of Theorem 2 in the previous section.

Proof:
(a) The angle between v and v is 0 since they are the same vector, and so by the definition of the dot product we've got,
v . v = ||v|| ||v|| cos(0) = ||v||^2
To get the second part just take the square root of both sides.

(b) This proof is going to seem tricky but it's really not that bad. Let's just look at the 3-space case. So, u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3), and the dot product u . v is
u . v = u_1 v_1 + u_2 v_2 + u_3 v_3
We can also compute v . u as follows,
v . u = v_1 u_1 + v_2 u_2 + v_3 u_3
However, since u_1 v_1 = v_1 u_1, etc. (they are just real numbers after all) these are identical, and so we've got u . v = v . u.

Example 3 Given u = (5, -2), v = (0, 7) and w = (-4, -10) compute the following.
(a) u . u and ||u||^2
(b) u . w
(c) (2u) . v and u . (2v)

Solution
(a) Okay, in this one we'll be verifying part (a) of the previous theorem. Note as well that because the norm is squared we'll not need to have the square root in the computation. Here are the computations for this part.

u . u = (5)(5) + (-2)(-2) = 25 + 4 = 29
||u||^2 = (5)^2 + (-2)^2 = 29
So, as the theorem suggested, we do have u . u = ||u||^2.

(b) Here's the dot product for this part.
u . w = (5)(-4) + (-2)(-10) = 0
So, it looks like u and w are orthogonal.

(c) In this part we'll be verifying part (d) of the previous theorem. Here are the computations for this part.
2u = (10, -4)    2v = (0, 14)
(2u) . v = (10)(0) + (-4)(7) = -28
u . (2v) = (5)(0) + (-2)(14) = -28
Again, we got the result that we should expect.

We now need to take a look at a very important application of the dot product. Let's suppose that u and a are two vectors in 2-space or 3-space, and let's suppose that they are positioned so that their initial points are the same. What we want to do is decompose the vector u into two components. One, which we'll denote v_1 for now, will be parallel to the vector a, and the other, denoted v_2 for now, will be orthogonal to a. See the image below for some examples of this kind of decomposition.

From these figures we can see how to actually construct the two pieces of our decomposition. Starting at u we drop a line straight down until it intersects a (or the line defined by a, as in the second case). The parallel vector v_1 is then the vector that starts at the initial point of u and ends where the perpendicular line intersects a. Finding v_2 is actually really simple provided we first have v_1. From the image we can see that we have,
v_1 + v_2 = u    so    v_2 = u - v_1

We now need to get some terminology and notation out of the way. The parallel vector, v_1, is called the orthogonal projection of u on a and is denoted by proj_a u. Note that sometimes proj_a u is called the vector component of u along a. The orthogonal vector, v_2, is called the vector component of u orthogonal to a.

The following theorem gives us formulas for computing both of these vectors.

Theorem 4 Suppose that u and a != 0 are both vectors in 2-space or 3-space, then
$$\mathrm{proj}_{\mathbf{a}}\,\mathbf{u} = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$
and the vector component of u orthogonal to a is given by,
$$\mathbf{u} - \mathrm{proj}_{\mathbf{a}}\,\mathbf{u} = \mathbf{u} - \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

Proof: First let v_1 = proj_a u. Then u - proj_a u will be the vector component of u orthogonal to a, and so all we need to do is show that the formula for v_1 is what we claimed it to be. To do this let's first note that since v_1 is parallel to a it must be a scalar multiple of a, since we know from the last section that parallel vectors are scalar multiples of each other. Therefore there is a scalar c such that v_1 = ca. Now, let's start with the following,
u = v_1 + v_2 = ca + v_2
Next take the dot product of both sides with a and distribute the dot product through the parenthesis.
u . a = (ca + v_2) . a = c(a . a) + v_2 . a
Now, a . a = ||a||^2 and v_2 . a = 0 because v_2 is orthogonal to a. Therefore this reduces to,
u . a = c ||a||^2    so    c = (u . a)/||a||^2
and so we get,
$$\mathbf{v}_1 = \mathrm{proj}_{\mathbf{a}}\,\mathbf{u} = \frac{\mathbf{u}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a}$$

We can also take a quick norm of proj_a u to get a nice formula for the magnitude of the orthogonal projection of u on a.
$$\|\mathrm{proj}_{\mathbf{a}}\,\mathbf{u}\| = \frac{|\mathbf{u}\cdot\mathbf{a}|}{\|\mathbf{a}\|^2}\,\|\mathbf{a}\| = \frac{|\mathbf{u}\cdot\mathbf{a}|}{\|\mathbf{a}\|}$$
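Theorem 4 is a one-line function in code. Here is a sketch, checked on the vectors from part (b) of Example 4 below; the final dot product confirms that the leftover component really is orthogonal to a.

import numpy as np

def proj(a, u):
    # orthogonal projection of u on a, Theorem 4
    return (np.dot(u, a) / np.dot(a, a)) * a

u = np.array([4.0, 0.0, -1.0])
a = np.array([3.0, 1.0, -5.0])

p = proj(a, u)
print(p)                  # (17/35) * a
print(np.dot(u - p, a))   # 0.0: the orthogonal component is perpendicular to a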

Let's work a quick example or two of orthogonal projections.

Example 4 Compute the orthogonal projection of u on a and the vector component of u orthogonal to a for each of the following.
(a) u = (1, 1)    a = (7, 2)
(b) u = (4, 0, -1)    a = (3, 1, -5)

Solution
There really isn't much to do here other than to plug into the formulas, so we'll leave it to you to verify the details.

(a) First,
u . a = 9    ||a||^2 = 53
Now the orthogonal projection of u on a is,
proj_a u = (9/53)(7, 2) = (63/53, 18/53)
and the vector component of u orthogonal to a is,
u - proj_a u = (1, 1) - (63/53, 18/53) = (-10/53, 35/53)

(b) First,
u . a = 17    ||a||^2 = 35
Now the orthogonal projection of u on a is,
proj_a u = (17/35)(3, 1, -5) = (51/35, 17/35, -17/7)
and the vector component of u orthogonal to a is,
u - proj_a u = (4, 0, -1) - (51/35, 17/35, -17/7) = (89/35, -17/35, 10/7)

We need to be very careful with the notation proj_a u. In this notation we are looking for the orthogonal projection of u (the second vector listed) on a (the vector that is subscripted). Let's do a quick example illustrating this.

Example 5 Given u = (4, -5) and a = (1, -1) compute,
(a) proj_a u
(b) proj_u a

Solution
(a) In this case we are looking for the component of u that is parallel to a, and so the orthogonal projection is given by,
proj_a u = ((u . a)/||a||^2) a
so let's get all the quantities that we need.
u . a = (4)(1) + (-5)(-1) = 9    ||a||^2 = (1)^2 + (-1)^2 = 2
The projection is then,

proj_a u = (9/2)(1, -1) = (9/2, -9/2)

(b) Here we are looking for the component of a that is parallel to u, and so the orthogonal projection is given by,
proj_u a = ((a . u)/||u||^2) u
so let's get the quantities that we need for this part.
a . u = u . a = 9    ||u||^2 = (4)^2 + (-5)^2 = 41
The projection is then,
proj_u a = (9/41)(4, -5) = (36/41, -45/41)

As this example has shown, we need to pay attention to the placement of the two vectors in the projection notation. Each part above was asking for something different and, as shown, we did in fact get different answers, so be careful.

It's now time to move into the second vector product that we're going to look at in this section. However, before we do that we need to introduce the idea of the standard unit vectors or standard basis vectors for 3-space. These vectors are defined as follows,
i = (1, 0, 0)    j = (0, 1, 0)    k = (0, 0, 1)
Each of these has a magnitude of 1 and so they are unit vectors. Also note that each one lies along one of the coordinate axes of 3-space and points in the positive direction, as shown below.

Notice that any vector in 3-space, say u = (u_1, u_2, u_3), can be written in terms of these three vectors as follows,
u = (u_1, u_2, u_3) = (u_1, 0, 0) + (0, u_2, 0) + (0, 0, u_3) = u_1(1, 0, 0) + u_2(0, 1, 0) + u_3(0, 0, 1) = u_1 i + u_2 j + u_3 k
So, for example, we can do the following,
(10, 4, -3) = 10i + 4j - 3k    (-1, 0, 2) = -i + 2k

Also note that if we define i = (1, 0) and j = (0, 1), these two vectors are the standard basis vectors for 2-space, and any vector in 2-space, say u = (u_1, u_2), can be written as,
u = (u_1, u_2) = u_1 i + u_2 j
We're not going to need the 2-space version of things here, but it was worth pointing out that there is a 2-space version since we'll need it down the road.

Okay, we are now ready to look at the cross product. The first thing that we need to point out here is that, unlike the dot product, this is only valid in 3-space. There are three different ways of defining it depending on how you want to do it. The following definition gives all three.

Definition 2 If u and v are two vectors in 3-space then the cross product, denoted by u x v, is defined in one of three ways.
(a) u x v = (u_2 v_3 - u_3 v_2, u_3 v_1 - u_1 v_3, u_1 v_2 - u_2 v_1) - Vector notation.
(b) $$\mathbf{u}\times\mathbf{v} = \left(\begin{vmatrix} u_2 & u_3 \\ v_2 & v_3 \end{vmatrix}, -\begin{vmatrix} u_1 & u_3 \\ v_1 & v_3 \end{vmatrix}, \begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix}\right)$$ - Using 2 x 2 determinants.
(c) $$\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}$$ - Using 3 x 3 determinants.

Note that all three of these definitions are equivalent, as you can check by computing the determinants in the second and third definitions and verifying that you get the same formula as in the first definition.

Notice that the cross product of two vectors is a new vector, unlike the dot product which gives a scalar. Make sure to keep these two products straight.

Let's take a quick look at an example of a cross product.

Example 6 Compute u x v for a pair of vectors in 3-space.

Solution
You can use any of the three definitions above to compute this cross product. We'll use the third one. If you don't remember how to compute determinants, you might want to go back and check out the first section of the Determinants chapter. In that section you'll find the formulas for computing determinants of both 2 x 2 and 3 x 3 matrices. Expanding the 3 x 3 determinant along its first row produces one term for each standard basis vector, and combining like terms gives the answer as a combination of i, j and k.

When we're using this definition of the cross product we'll always get the answer in terms of the

standard basis vectors. However, we can always go back to the component form we're used to.

Here is a theorem listing the main properties of the cross product.

Theorem 5 Suppose u, v, and w are vectors in 3-space and c is any scalar, then
(a) u x v = -(v x u)
(b) u x (v + w) = (u x v) + (u x w)
(c) (u + v) x w = (u x w) + (v x w)
(d) c(u x v) = (cu) x v = u x (cv)
(e) u x 0 = 0 x u = 0
(f) u x u = 0

The proofs of all these properties come directly from the definition of the cross product and so are left to you to verify.

There are also quite a few properties that relate the dot product and the cross product. Here is a theorem giving those properties.

Theorem 6 Suppose u, v, and w are vectors in 3-space, then
(a) u . (u x v) = 0
(b) v . (u x v) = 0
(c) ||u x v||^2 = ||u||^2 ||v||^2 - (u . v)^2 - This is called Lagrange's Identity
(d) u x (v x w) = (u . w)v - (u . v)w
(e) (u x v) x w = (u . w)v - (v . w)u

The proofs of all these properties come directly from the definition of the cross product and the dot product and so are left to you to verify.

The first two properties deserve some closer inspection. What they are saying is that, given two vectors u and v in 3-space, the cross product u x v is orthogonal to both u and v. The image below shows this idea.

As this figure shows, there are two directions in which the cross product could be orthogonal to u and v, and there is a nice way to determine which it will be. Take your right hand and cup your fingers so that they point in the direction of rotation that is shown in the figures (i.e. rotate u until it lies on top of v) and hold your thumb out. Your thumb will point in the direction of the cross product. This is often called the right-hand rule.

Notice that part (a) of Theorem 5 above also gives this same result. If we flip the order in which we take the cross product (which is really what we did in the figure above when we interchanged the letters) we get u x v = -(v x u). In other words, in one order we get a cross product that points in one direction, and if we flip the order we get a new cross product that points in the opposite direction from the first one.

Let's work a couple more cross products to verify some of the properties listed above and so we can say we've got a couple more examples in the notes.

Example 7 Given u = (3, -1, 4) and v = (2, 0, 1) compute each of the following.
(a) u x v and v x u
(b) u x u
(c) u . (u x v) and v . (u x v)

Solution
In the solutions to these problems we will be using the third definition above and we'll be setting up the determinant. We will not be showing the determinant computation, however; if you need a reminder on how to take determinants, go back to the first section in the Determinants chapter for a refresher.

(a) u x v and v x u
Let's compute u x v first.
$$\mathbf{u}\times\mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & -1 & 4 \\ 2 & 0 & 1 \end{vmatrix} = -\mathbf{i} + 5\mathbf{j} + 2\mathbf{k} = (-1, 5, 2)$$
Remember that we'll get the answers here in terms of the standard basis vectors, and these can always be put back into the standard vector notation that we've been using to this point, as we did above.

Now let's compute v x u.
$$\mathbf{v}\times\mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 2 & 0 & 1 \\ 3 & -1 & 4 \end{vmatrix} = \mathbf{i} - 5\mathbf{j} - 2\mathbf{k} = (1, -5, -2)$$
So, as part (a) of Theorem 5 suggested, we got u x v = -(v x u).

(b) u x u
Not much to do here other than do the cross product and note that part (f) of Theorem 5 implies that we should get u x u = 0.

$$u \times u = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 3 & -1 & 4 \\ 3 & -1 & 4 \end{vmatrix} = (0, 0, 0)$$

So, sure enough we got $\mathbf{0}$.

(c) $u \cdot (u \times v)$ and $v \cdot (u \times v)$

We've already got $u \times v$ computed so we just need to do a couple of dot products, and according to Theorem 6 both $u$ and $v$ are orthogonal to $u \times v$, so we should get zero out of both of these.

$$u \cdot (u \times v) = (3)(-1) + (-1)(5) + (4)(2) = 0$$
$$v \cdot (u \times v) = (2)(-1) + (0)(5) + (1)(2) = 0$$

And we did get zero as expected.

We'll give one theorem on cross products relating the magnitude of the cross product to the magnitudes of the two vectors we're taking the cross product of.

Theorem 7 Suppose that $u$ and $v$ are vectors in 3-space and let $\theta$ be the angle between them, then

$$\left\| u \times v \right\| = \left\| u \right\| \left\| v \right\| \sin\theta$$

Let's take a look at one final example here.

Example 8 Given $u = (1, 1, 0)$ and $v = (0, 2, 0)$ verify the results of Theorem 7.

Solution
Let's get the cross product and the norms taken care of first.

$$u \times v = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & 1 & 0 \\ 0 & 2 & 0 \end{vmatrix} = (0, 0, 2) \qquad \left\| u \times v \right\| = \sqrt{0 + 0 + 4} = 2$$

$$\left\| u \right\| = \sqrt{1 + 1} = \sqrt{2} \qquad \left\| v \right\| = \sqrt{4} = 2$$

Now, in order to verify Theorem 7 we'll need the angle between the two vectors and we can use the definition of the dot product above to find this. We'll first need the dot product.

$$u \cdot v = 2 \qquad \Rightarrow \qquad \cos\theta = \frac{2}{\left(\sqrt{2}\right)(2)} = \frac{\sqrt{2}}{2} \qquad \Rightarrow \qquad \theta = 45^{\circ}$$

All that's left is to check the formula.

$$\left\| u \right\| \left\| v \right\| \sin\theta = \left(\sqrt{2}\right)(2)\sin\left(45^{\circ}\right) = \left(\sqrt{2}\right)(2)\left(\frac{\sqrt{2}}{2}\right) = 2 = \left\| u \times v \right\|$$

So, the theorem is verified.
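All of the properties above are also easy to spot-check numerically. Here is a small sketch (not from the notes) that verifies Theorems 5(a), 6(a)-(c), and 7 for the vectors of Example 7:

import numpy as np

u, v = np.array([3.0, -1.0, 4.0]), np.array([2.0, 0.0, 1.0])
w = np.cross(u, v)

print(np.allclose(w, -np.cross(v, u)))   # Theorem 5(a): u x v = -(v x u)
print(np.isclose(np.dot(u, w), 0.0))     # Theorem 6(a): u is orthogonal to u x v
print(np.isclose(np.dot(v, w), 0.0))     # Theorem 6(b): so is v

# Theorem 6(c), Lagrange's Identity: ||u x v||^2 = ||u||^2 ||v||^2 - (u.v)^2
print(np.isclose(np.dot(w, w),
                 np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2))

# Theorem 7: ||u x v|| = ||u|| ||v|| sin(theta)
theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
print(np.isclose(np.linalg.norm(w),
                 np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta)))

Each print statement produces True.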


Euclidean n-Space

In the first two sections of this chapter we looked at vectors in 2-space and 3-space. You probably noticed that, with the exception of the cross product (which is only defined in 3-space), all of the formulas that we had for vectors in 3-space were natural extensions of the 2-space formulas. In this section we're going to extend things out to a much more general setting. We won't be able to visualize things in a geometric setting as we did in the previous two sections, but things will extend out nicely. In fact, that was why we started in 2-space and 3-space. We wanted to start out in a setting where we could visualize some of what was going on before we generalized things into a setting where visualization was a very difficult thing to do.

So, let's get things started off with the following definition.

Definition 1 Given a positive integer $n$, an ordered n-tuple is a sequence of $n$ real numbers denoted by $(a_1, a_2, \ldots, a_n)$. The complete set of all ordered n-tuples is called n-space and is denoted by $\mathbb{R}^n$.

In the previous sections we were looking at $\mathbb{R}^2$ (what we were calling 2-space) and $\mathbb{R}^3$ (what we were calling 3-space). Also, the more standard terms for 2-tuples and 3-tuples are ordered pair and ordered triplet, and those are the terms we'll be using from this point on.

Also, as we pointed out in the previous sections, an ordered pair, $(a_1, a_2)$, or an ordered triplet, $(a_1, a_2, a_3)$, can be thought of as either a point or a vector in $\mathbb{R}^2$ or $\mathbb{R}^3$ respectively. In general an ordered n-tuple, $(a_1, a_2, \ldots, a_n)$, can also be thought of as a point or a vector in $\mathbb{R}^n$. Again, we can't really visualize a point or a vector in $\mathbb{R}^n$, but we will think of them as points or vectors in $\mathbb{R}^n$ anyway and try not to worry too much about the fact that we can't really visualize them.

Next, we need to get the standard arithmetic definitions out of the way, and all of these are going to be natural extensions of the arithmetic we saw in $\mathbb{R}^2$ and $\mathbb{R}^3$.

Definition 2 Suppose $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$.
(a) We say that $u$ and $v$ are equal if,
$$u_1 = v_1 \qquad u_2 = v_2 \qquad \cdots \qquad u_n = v_n$$
(b) The sum of $u$ and $v$ is defined to be,
$$u + v = (u_1 + v_1,\ u_2 + v_2,\ \ldots,\ u_n + v_n)$$
(c) The negative (or additive inverse) of $u$ is defined to be,
$$-u = (-u_1, -u_2, \ldots, -u_n)$$
(d) The difference of two vectors is defined to be,
$$u - v = u + (-v) = (u_1 - v_1,\ u_2 - v_2,\ \ldots,\ u_n - v_n)$$
(e) If $c$ is any scalar then the scalar multiple of $u$ is defined to be,
$$cu = (cu_1, cu_2, \ldots, cu_n)$$
(f) The zero vector in $\mathbb{R}^n$ is denoted by $\mathbf{0}$ and is defined to be,
$$\mathbf{0} = (0, 0, \ldots, 0)$$

The basic properties of arithmetic are still valid in $\mathbb{R}^n$, so let's also give those so that we can say that we've done that.

Theorem 1 Suppose $u = (u_1, u_2, \ldots, u_n)$, $v = (v_1, v_2, \ldots, v_n)$ and $w = (w_1, w_2, \ldots, w_n)$ are vectors in $\mathbb{R}^n$ and $c$ and $k$ are scalars, then
(a) $u + v = v + u$
(b) $u + (v + w) = (u + v) + w$
(c) $u + \mathbf{0} = \mathbf{0} + u = u$
(d) $u - u = u + (-u) = \mathbf{0}$
(e) $1u = u$
(f) $(ck)u = c(ku) = k(cu)$
(g) $(c + k)u = cu + ku$
(h) $c(u + v) = cu + cv$

The proofs of all of these come directly from the definitions above and so won't be given here.

We now need to extend the dot product we saw in the previous section to $\mathbb{R}^n$, and we'll be giving it a new name as well.

Definition 3 Suppose $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ are two vectors in $\mathbb{R}^n$, then the Euclidean inner product, denoted by $u \cdot v$, is defined to be

$$u \cdot v = u_1v_1 + u_2v_2 + \cdots + u_nv_n$$

So, we can see that it's the same notation and is a natural extension of the dot product that we looked at in the previous section; we're just going to call it something different now. In fact, this is probably the more correct name for it and we should instead say that we've renamed this to the dot product when we were working exclusively in $\mathbb{R}^2$ and $\mathbb{R}^3$.

Note that when we add in addition, scalar multiplication and the Euclidean inner product to $\mathbb{R}^n$ we will often call this Euclidean n-space.

We also have natural extensions of the properties of the dot product that we saw in the previous section.

Theorem 2 Suppose $u = (u_1, u_2, \ldots, u_n)$, $v = (v_1, v_2, \ldots, v_n)$, and $w = (w_1, w_2, \ldots, w_n)$ are vectors in $\mathbb{R}^n$ and let $c$ be a scalar, then
(a) $u \cdot v = v \cdot u$
(b) $(u + v) \cdot w = u \cdot w + v \cdot w$
(c) $c(u \cdot v) = (cu) \cdot v = u \cdot (cv)$
(d) $u \cdot u \geq 0$
(e) $u \cdot u = 0$ if and only if $u = \mathbf{0}$.

The proof of this theorem falls directly from the definition of the Euclidean inner product; the parts are extensions of proofs given in the previous section and so aren't given here.

The final extension to the work of the previous sections that we need to do is to give the definition of the norm for vectors in $\mathbb{R}^n$, and we'll use this to define distance in $\mathbb{R}^n$.

Definition 4 Suppose $u = (u_1, u_2, \ldots, u_n)$ is a vector in $\mathbb{R}^n$, then the Euclidean norm is,

$$\left\| u \right\| = \left( u \cdot u \right)^{\frac{1}{2}} = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}$$

Definition 5 Suppose $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ are two points in $\mathbb{R}^n$, then the Euclidean distance between them is defined to be,

$$d\left(u, v\right) = \left\| u - v \right\| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2}$$

Notice in this definition that we called $u$ and $v$ points and then used them as vectors in the norm. This comes back to the idea that an n-tuple can be thought of as both a point and a vector, and so the two will often be used interchangeably where needed.

Let's take a quick look at a couple of examples.

Example 1 Given $u = (-9, 3, -4, 0, 1)$ and $v = (0, -3, 2, 1, 7)$ compute
(a) $u - 4v$
(b) $v \cdot u$
(c) $u \cdot u$
(d) $\left\| u \right\|$
(e) $d(u, v)$

Solution
There really isn't much to do here other than use the appropriate definition.

(a) $u - 4v = (-9, 3, -4, 0, 1) - 4(0, -3, 2, 1, 7) = (-9, 3, -4, 0, 1) - (0, -12, 8, 4, 28) = (-9, 15, -12, -4, -27)$

(b) $v \cdot u = (0)(-9) + (-3)(3) + (2)(-4) + (1)(0) + (7)(1) = -10$

(c) $u \cdot u = (-9)^2 + (3)^2 + (-4)^2 + (0)^2 + (1)^2 = 107$

(d) $\left\| u \right\| = \sqrt{u \cdot u} = \sqrt{107}$

(e) $d(u, v) = \sqrt{(-9 - 0)^2 + (3 + 3)^2 + (-4 - 2)^2 + (0 - 1)^2 + (1 - 7)^2} = \sqrt{190}$
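All five parts of Example 1 reduce to one-liners in NumPy. A quick check of the computations above (my addition, not part of the notes):

import numpy as np

u = np.array([-9, 3, -4, 0, 1])
v = np.array([0, -3, 2, 1, 7])

print(u - 4 * v)                   # [ -9  15 -12  -4 -27]
print(np.dot(v, u))                # -10
print(np.dot(u, u))                # 107
print(np.linalg.norm(u) ** 2)      # 107.0, i.e. ||u|| = sqrt(107)
print(np.linalg.norm(u - v) ** 2)  # 190.0, i.e. d(u, v) = sqrt(190)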

Just as we saw in the section on vectors, if we have $\left\| u \right\| = 1$ then we will call $u$ a unit vector, and so the vector $u$ from the previous set of examples is not a unit vector.

Now that we've gotten both the inner product and the norm taken care of, we can give the following theorem.

Theorem 3 Suppose $u$ and $v$ are two vectors in $\mathbb{R}^n$ and $\theta$ is the angle between them. Then,

$$u \cdot v = \left\| u \right\| \left\| v \right\| \cos\theta$$

Of course, since we are in $\mathbb{R}^n$ it is hard to visualize just what the angle between the two vectors is, but provided we can find it we can use this theorem. Also note that this was the definition of the dot product that we gave in the previous section, and like that section this theorem is most useful for actually determining the angle between two vectors.

The proof of this theorem is identical to the proof of the corresponding theorem in the previous section and so isn't given here.

The next theorem is very important and has many uses in the study of vectors. In fact, we'll need it in the proof of at least one theorem in these notes. The following theorem is called the Cauchy-Schwarz Inequality.

Theorem 4 Suppose $u$ and $v$ are two vectors in $\mathbb{R}^n$, then

$$\left| u \cdot v \right| \leq \left\| u \right\| \left\| v \right\|$$

Proof: This proof is surprisingly simple. We'll start with the result of the previous theorem and take the absolute value of both sides.

$$\left| u \cdot v \right| = \left\| u \right\| \left\| v \right\| \left| \cos\theta \right|$$

However, we know that $\left| \cos\theta \right| \leq 1$ and so we get our result by using this fact.

$$\left| u \cdot v \right| = \left\| u \right\| \left\| v \right\| \left| \cos\theta \right| \leq \left\| u \right\| \left\| v \right\|$$

Here are some nice properties of the Euclidean norm.

Theorem 5 Suppose $u$ and $v$ are two vectors in $\mathbb{R}^n$ and that $c$ is a scalar, then
(a) $\left\| u \right\| \geq 0$
(b) $\left\| u \right\| = 0$ if and only if $u = \mathbf{0}$.
(c) $\left\| cu \right\| = \left| c \right| \left\| u \right\|$
(d) $\left\| u + v \right\| \leq \left\| u \right\| + \left\| v \right\|$ - Usually called the Triangle Inequality

The proof of the first two parts is a direct consequence of the definition of the Euclidean norm and so won't be given here.

Proof :
(c) We'll just run through the definition of the norm on this one.

$$\left\| cu \right\| = \sqrt{(cu_1)^2 + (cu_2)^2 + \cdots + (cu_n)^2} = \sqrt{c^2\left(u_1^2 + u_2^2 + \cdots + u_n^2\right)} = \left| c \right| \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} = \left| c \right| \left\| u \right\|$$

(d) The proof of this one isn't too bad once you see the steps you need to take. We'll start with the following.

$$\left\| u + v \right\|^2 = (u + v) \cdot (u + v)$$

So, we're starting with the definition of the norm and squaring both sides to get rid of the square root on the right side. Next, we'll use the properties of the Euclidean inner product to simplify this.

$$\left\| u + v \right\|^2 = u \cdot (u + v) + v \cdot (u + v) = u \cdot u + u \cdot v + v \cdot u + v \cdot v = u \cdot u + 2(u \cdot v) + v \cdot v$$

Now, notice that we can convert the first and third terms into norms, so we'll do that. Also, $u \cdot v$ is a number and so we know that if we take the absolute value of this we'll have $u \cdot v \leq \left| u \cdot v \right|$. Using this and converting the first and third terms to norms gives,

$$\left\| u + v \right\|^2 = \left\| u \right\|^2 + 2(u \cdot v) + \left\| v \right\|^2 \leq \left\| u \right\|^2 + 2\left| u \cdot v \right| + \left\| v \right\|^2$$

We can now use the Cauchy-Schwarz inequality on the second term to get,

$$\left\| u + v \right\|^2 \leq \left\| u \right\|^2 + 2\left\| u \right\| \left\| v \right\| + \left\| v \right\|^2$$

We're almost done. Let's notice that the right side can now be rewritten as,

$$\left\| u + v \right\|^2 \leq \left( \left\| u \right\| + \left\| v \right\| \right)^2$$

Finally, take the square root of both sides.

$$\left\| u + v \right\| \leq \left\| u \right\| + \left\| v \right\|$$

Example 2 Given $u = (-2, 3, -1, 1)$ and $v = (7, 1, 4, 2)$ verify the Cauchy-Schwarz inequality and the Triangle Inequality.

Solution
Let's first verify the Cauchy-Schwarz inequality. To do this we need the following quantities.

$$u \cdot v = -14 + 3 - 4 + 2 = -13 \qquad \left\| u \right\| = \sqrt{4 + 9 + 1 + 1} = \sqrt{15} \qquad \left\| v \right\| = \sqrt{49 + 1 + 16 + 4} = \sqrt{70}$$

Now, verify the Cauchy-Schwarz inequality.

$$\left| u \cdot v \right| = \left| -13 \right| = 13 \leq 32.4037 = \sqrt{15}\,\sqrt{70} = \left\| u \right\| \left\| v \right\|$$

Sure enough, the Cauchy-Schwarz inequality holds true.

To verify the Triangle Inequality all we need is,

$$u + v = (5, 4, 3, 3) \qquad \left\| u + v \right\| = \sqrt{25 + 16 + 9 + 9} = \sqrt{59}$$

Now verify the Triangle Inequality.

$$\left\| u + v \right\| = \sqrt{59} = 7.6811 \leq 12.2396 = \sqrt{15} + \sqrt{70} = \left\| u \right\| + \left\| v \right\|$$

So, the Triangle Inequality is also verified for this problem.

Here are some nice properties pertaining to the Euclidean distance.

Theorem 6 Suppose $u$, $v$, and $w$ are vectors in $\mathbb{R}^n$, then,
(a) $d(u, v) \geq 0$
(b) $d(u, v) = 0$ if and only if $u = v$.
(c) $d(u, v) = d(v, u)$
(d) $d(u, v) \leq d(u, w) + d(w, v)$ - Usually called the Triangle Inequality

The proof of the first two parts is a direct consequence of the previous theorem, and the proof of the third part is a direct consequence of the definition of distance, so these won't be proven here.

Proof (d) : Let's start off with the definition of distance.

$$d(u, v) = \left\| u - v \right\|$$

Now, add in and subtract out $w$ as follows,

$$d(u, v) = \left\| u - w + w - v \right\| = \left\| (u - w) + (w - v) \right\|$$

Next use the Triangle Inequality for norms on this.

$$d(u, v) \leq \left\| u - w \right\| + \left\| w - v \right\|$$

Finally, just reuse the definition of distance again.

$$d(u, v) \leq d(u, w) + d(w, v)$$
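For what it's worth, both inequalities from Example 2 can be confirmed in a couple of lines of NumPy. This is just a numerical sanity check (mine, not the notes'):

import numpy as np

u = np.array([-2, 3, -1, 1])
v = np.array([7, 1, 4, 2])

# Cauchy-Schwarz: |u.v| <= ||u|| ||v||
print(abs(np.dot(u, v)), np.linalg.norm(u) * np.linalg.norm(v))       # 13  32.4037...

# Triangle Inequality: ||u + v|| <= ||u|| + ||v||
print(np.linalg.norm(u + v), np.linalg.norm(u) + np.linalg.norm(v))   # 7.6811...  12.2396...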

Here is the Pythagorean Theorem in $\mathbb{R}^n$.

Theorem 7 Suppose $u$ and $v$ are two orthogonal vectors in $\mathbb{R}^n$, then,

$$\left\| u + v \right\|^2 = \left\| u \right\|^2 + \left\| v \right\|^2$$

Proof : The proof of this theorem is fairly simple. From the proof of the triangle inequality for norms we have the following statement.

$$\left\| u + v \right\|^2 = \left\| u \right\|^2 + 2(u \cdot v) + \left\| v \right\|^2$$

However, because $u$ and $v$ are orthogonal we have $u \cdot v = 0$ and so we get,

$$\left\| u + v \right\|^2 = \left\| u \right\|^2 + \left\| v \right\|^2$$

Example 3 Show that $u = (2, 0, 1, 0, 4, -1)$ and $v = (-2, 5, 0, 2, -1, -8)$ are orthogonal and verify that the Pythagorean Theorem holds.

Solution
Showing that these two vectors are orthogonal is easy enough.

$$u \cdot v = (2)(-2) + (0)(5) + (1)(0) + (0)(2) + (4)(-1) + (-1)(-8) = 0$$

So, the Pythagorean Theorem should hold, but let's verify that. Here's the sum,

$$u + v = (0, 5, 1, 2, 3, -9)$$

and here are the squares of the norms.

$$\left\| u + v \right\|^2 = 0 + 25 + 1 + 4 + 9 + 81 = 120 \qquad \left\| u \right\|^2 = 22 \qquad \left\| v \right\|^2 = 98$$

A quick computation then confirms that $\left\| u + v \right\|^2 = \left\| u \right\|^2 + \left\| v \right\|^2$.
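Both the orthogonality check and the Pythagorean identity in Example 3 are easy to confirm numerically. A short sketch (my addition, using the vectors above):

import numpy as np

u = np.array([2, 0, 1, 0, 4, -1])
v = np.array([-2, 5, 0, 2, -1, -8])

print(np.dot(u, v))    # 0, so u and v are orthogonal

lhs = np.linalg.norm(u + v) ** 2                          # 120.0
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2     # 22.0 + 98.0 = 120.0
print(lhs, rhs)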

We've got one more theorem that gives a relationship between the Euclidean inner product and the norm. This may seem like a silly theorem, but we'll actually need it towards the end of the next chapter.

Theorem 8 If $u$ and $v$ are two vectors in $\mathbb{R}^n$ then,

$$u \cdot v = \frac{1}{4}\left\| u + v \right\|^2 - \frac{1}{4}\left\| u - v \right\|^2$$

Proof : The proof here is surprisingly simple. First, start with,

$$\left\| u + v \right\|^2 = \left\| u \right\|^2 + 2(u \cdot v) + \left\| v \right\|^2 \qquad \left\| u - v \right\|^2 = \left\| u \right\|^2 - 2(u \cdot v) + \left\| v \right\|^2$$

The first of these we've seen a couple of times already, and the second is derived in the same manner that the first was, so you should verify that formula.

Now subtract the second from the first to get,

$$4(u \cdot v) = \left\| u + v \right\|^2 - \left\| u - v \right\|^2$$

Finally, divide by 4 and we get the result we were after.

$$u \cdot v = \frac{1}{4}\left\| u + v \right\|^2 - \frac{1}{4}\left\| u - v \right\|^2$$

In the previous section we saw the three standard basis vectors for $\mathbb{R}^3$: $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$. This idea can also be extended out to $\mathbb{R}^n$. In $\mathbb{R}^n$ we will define the standard basis vectors, or standard unit vectors, to be,

$$\mathbf{e}_1 = (1, 0, 0, \ldots, 0) \qquad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \qquad \cdots \qquad \mathbf{e}_n = (0, 0, 0, \ldots, 1)$$

and just as we saw in that section we can write any vector $u = (u_1, u_2, \ldots, u_n)$ in terms of these standard basis vectors as follows,

$$u = u_1(1, 0, 0, \ldots, 0) + u_2(0, 1, 0, \ldots, 0) + \cdots + u_n(0, 0, 0, \ldots, 1) = u_1\mathbf{e}_1 + u_2\mathbf{e}_2 + \cdots + u_n\mathbf{e}_n$$

Note that in $\mathbb{R}^3$ we have $\mathbf{e}_1 = \mathbf{i}$, $\mathbf{e}_2 = \mathbf{j}$ and $\mathbf{e}_3 = \mathbf{k}$.

Now that we've gotten the general vector in Euclidean n-space taken care of, we need to go back and remember some of the work that we did in the first chapter. It is often convenient to write the vector $u = (u_1, u_2, \ldots, u_n)$ as either a row matrix or a column matrix as follows,

$$u = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix} \qquad u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$$

In this notation we can use matrix addition and scalar multiplication for matrices to show that we'll get the same results as if we'd done vector addition and scalar multiplication for vectors on the original vectors.

So, why do we do this? Well, let's use the column matrix notation for the two vectors $u$ and $v$.

$$u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \qquad v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

Now compute the following matrix product.

$$v^Tu = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} = \begin{bmatrix} u_1v_1 + u_2v_2 + \cdots + u_nv_n \end{bmatrix} = \begin{bmatrix} u \cdot v \end{bmatrix} = u \cdot v$$

So, the Euclidean inner product can be thought of as a matrix multiplication using,

$$u \cdot v = v^Tu$$

provided we consider $u$ and $v$ as column vectors.

The natural question is just why is this important? Well, let's consider the following scenario. Suppose that $u$ and $v$ are two vectors in $\mathbb{R}^n$ and that $A$ is an $n \times n$ matrix. Now consider the following inner product and write it as a matrix multiplication.

$$(Au) \cdot v = v^T(Au)$$

Now, rearrange the order of the multiplication and recall one of the properties of transposes.

$$(Au) \cdot v = \left(v^TA\right)u = \left(A^Tv\right)^Tu$$

Don't forget that we switch the order on the matrices when we move the transpose out of the parenthesis. Finally, this last matrix product can be rewritten as an inner product.

$$(Au) \cdot v = u \cdot \left(A^Tv\right)$$

This tells us that if we've got an inner product and the first vector (or column matrix) is multiplied by a matrix, then we can move that matrix to the second vector (or column matrix) if we simply take its transpose.

A similar argument can also show that,

$$u \cdot (Av) = \left(A^Tu\right) \cdot v$$
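Both identities are easy to see in action. Here is a small sketch (my own illustration, with an arbitrarily chosen matrix and vectors) treating $u$ and $v$ as column matrices:

import numpy as np

u = np.array([[1.0], [-2.0], [4.0]])    # 3x1 column matrices
v = np.array([[3.0], [0.0], [5.0]])
A = np.array([[2.0, 1.0, 0.0],
              [-1.0, 3.0, 2.0],
              [0.0, 4.0, 1.0]])

# u.v = v^T u when u and v are columns
print((v.T @ u).item(), np.dot(u.ravel(), v.ravel()))   # 23.0  23.0

# (Au).v = u.(A^T v): move A to the other vector by transposing it
lhs = (v.T @ (A @ u)).item()
rhs = ((A.T @ v).T @ u).item()
print(lhs, rhs)    # the two inner products agree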

Linear Transformations

In this section we're going to take a look at a special kind of function that arises very naturally in the study of Linear Algebra and has many applications in fields outside of mathematics such as physics and engineering. This section is devoted mostly to the basic definitions and facts associated with this special kind of function. We will be looking at a couple of examples, but we'll reserve most of the examples for the next section.

Now, the first thing that we need to do is take a step back and make sure that we're all familiar with some of the basics of functions in general. A function, $f$, is a rule (usually defined by an equation) that takes each element of the set $A$ (called the domain) and associates it with exactly one element of a set $B$ (called the codomain). The notation that we'll be using to denote our function is

$$f : A \to B$$

When we see this notation we know that we're going to be dealing with a function that takes elements from the set $A$ and associates them with elements from the set $B$. Note as well that it is completely possible that not every element of the set $B$ will be associated with an element from $A$. The subset of all elements from $B$ that are associated with elements from $A$ is called the range.

In this section we're going to be looking at functions of the form,

$$f : \mathbb{R}^n \to \mathbb{R}^m$$

In other words, we're going to be looking at functions that take elements/points/vectors from $\mathbb{R}^n$ and associate them with elements/points/vectors from $\mathbb{R}^m$. These kinds of functions are called transformations, and we say that $f$ maps $\mathbb{R}^n$ into $\mathbb{R}^m$. On an element basis we will also say that $f$ maps the element $u$ from $\mathbb{R}^n$ to the element $v$ from $\mathbb{R}^m$.

So, just what do transformations look like? Consider the following scenario. Suppose that we have $m$ functions of the following form,

$$w_1 = f_1(x_1, x_2, \ldots, x_n)$$
$$w_2 = f_2(x_1, x_2, \ldots, x_n)$$
$$\vdots$$
$$w_m = f_m(x_1, x_2, \ldots, x_n)$$

Each of these functions takes a point in $\mathbb{R}^n$, namely $(x_1, x_2, \ldots, x_n)$, and maps it to the number $w_i$. We can now define a transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ as follows,

$$T(x_1, x_2, \ldots, x_n) = (w_1, w_2, \ldots, w_m)$$

In this way we associate with each point $(x_1, x_2, \ldots, x_n)$ from $\mathbb{R}^n$ a point $(w_1, w_2, \ldots, w_m)$ from $\mathbb{R}^m$ and we have a transformation.

Let's take a look at a couple of transformations.

Example 1 Given

$$w_1 = 3x_1 - 4x_2 \qquad w_2 = x_1 + 2x_2 \qquad w_3 = 6x_1 - x_2 \qquad w_4 = 10x_2$$

define $T : \mathbb{R}^2 \to \mathbb{R}^4$ as,

$$T(x_1, x_2) = (w_1, w_2, w_3, w_4) \qquad \text{OR} \qquad T(x_1, x_2) = (3x_1 - 4x_2,\ x_1 + 2x_2,\ 6x_1 - x_2,\ 10x_2)$$

Note that the second form is more convenient since we don't actually have to define any of the $w$'s in that way, and it is how we will define most of our transformations.

We evaluate this just as we evaluate the functions that we're used to working with. Namely, pick a point from $\mathbb{R}^2$ and plug it into the transformation and we'll get a point out of the function that is in $\mathbb{R}^4$. For example,

$$T(5, -2) = (23, 1, 32, -20)$$

Example 2 Define $T : \mathbb{R}^3 \to \mathbb{R}^2$ as $T(x_1, x_2, x_3) = (4x_1 + x_2,\ x_3 - x_1x_2)$. A sample evaluation of this transformation is,

$$T(3, 2, 6) = (14, 0)$$

Now, in this section we're going to be looking at a special kind of transformation called a linear transformation. Here is the definition of a linear transformation.

Definition 1 A function $T : \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if for all $u$ and $v$ in $\mathbb{R}^n$ and all scalars $c$ we have,

$$T(u + v) = T(u) + T(v) \qquad T(cu) = cT(u)$$

We looked at two transformations above and only one of them is linear. Let's take a look at each one and see what we've got.

Example 3 Determine if the transformation from Example 2 is linear or not.

Solution
Okay, if this is going to be linear then it must satisfy both of the conditions from the definition. In other words, both of the following will need to be true.

$$T(u + v) = T(u_1 + v_1,\ u_2 + v_2,\ u_3 + v_3) = T(u_1, u_2, u_3) + T(v_1, v_2, v_3) = T(u) + T(v)$$
$$T(cu) = T(cu_1, cu_2, cu_3) = cT(u_1, u_2, u_3) = cT(u)$$

In this case let's take a look at the second condition.

$$T(cu) = T(cu_1, cu_2, cu_3) = \left(4cu_1 + cu_2,\ cu_3 - c^2u_1u_2\right) \neq c\left(4u_1 + u_2,\ u_3 - u_1u_2\right) = cT(u)$$

The second condition is not satisfied and so this is not a linear transformation. You might want to verify that in this case the first condition is also not satisfied. It's not too bad, but the work does get a little messy.

Example 4 Determine if the transformation in Example 1 is linear or not.

Solution
To do this one we're going to need to rewrite things just a little. The transformation is defined as $T(x_1, x_2) = (w_1, w_2, w_3, w_4)$ where,

$$w_1 = 3x_1 - 4x_2 \qquad w_2 = x_1 + 2x_2 \qquad w_3 = 6x_1 - x_2 \qquad w_4 = 10x_2$$

Now, each of the components is given by a system of linear (hhmm, makes one instantly wonder if the transformation is also linear...) equations, and we saw in the first chapter that we can always write a system of linear equations in matrix form. Let's do that for this system.

$$\begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 1 & 2 \\ 6 & -1 \\ 0 & 10 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad \Rightarrow \qquad \mathbf{w} = A\mathbf{x}$$

Now, notice that if we plug in any column matrix $\mathbf{x}$ and do the matrix multiplication we'll get a new column matrix out, $\mathbf{w}$. Let's pick a column matrix $\mathbf{x}$ totally at random and see what we get.

$$\begin{bmatrix} 3 & -4 \\ 1 & 2 \\ 6 & -1 \\ 0 & 10 \end{bmatrix} \begin{bmatrix} 5 \\ -2 \end{bmatrix} = \begin{bmatrix} 23 \\ 1 \\ 32 \\ -20 \end{bmatrix}$$

Of course, we didn't pick $\mathbf{x}$ completely at random. Notice that the $\mathbf{x}$ we chose was the column matrix representation of the point from $\mathbb{R}^2$ that we used in Example 1 to show a sample evaluation of the transformation. Just as importantly notice that the result, $\mathbf{w}$, is the matrix representation of the point from $\mathbb{R}^4$ that we got out of that evaluation.

In fact, this will always be the case for this transformation.

So, in some way the evaluation $T(\mathbf{x})$ is the same as the matrix multiplication $A\mathbf{x}$ and so we can write the transformation as

$$T(\mathbf{x}) = A\mathbf{x}$$

Notice that we're kind of mixing and matching notation here. On the left $\mathbf{x}$ represents a point in $\mathbb{R}^2$ and on the right it is a $2 \times 1$ matrix. However, this really isn't a problem since they both can be used to represent a point in $\mathbb{R}^2$. We will have to get used to this notation however, as we'll be using it quite regularly.

Okay, just what were we after here? We wanted to determine if this transformation is linear or not. With this new way of writing the transformation this is actually really simple. We'll just make use of some very nice facts that we know about matrix multiplication. Here is the work for this problem.

$$T(u + v) = A(u + v) = Au + Av = T(u) + T(v)$$
$$T(cu) = A(cu) = cAu = cT(u)$$

So, both conditions of the definition are met and so this transformation is a linear transformation.

There are a couple of things to note here. First, we couldn't write the transformation from Example 2 as a matrix multiplication because at least one of the equations (okay, both in this case) for the components in the result was not linear. Second, when all the equations that give the components of the result are linear then the transformation will be linear. If at least one of the equations is not linear then the transformation will not be linear either.

Now, we need to investigate the idea that we used in the previous example in more detail. There are two issues that we want to take a look at.

First, we saw that, at least in some cases, matrix multiplication can be thought of as a linear transformation. As the following theorem shows, this is in fact always the case.

Theorem 1 If $A$ is an $m \times n$ matrix then its induced transformation, $T_A : \mathbb{R}^n \to \mathbb{R}^m$, defined as,

$$T_A(\mathbf{x}) = A\mathbf{x}$$

is a linear transformation.

Proof : The proof here is really simple and in fact we pretty much saw it in the last example.

$$T_A(u + v) = A(u + v) = Au + Av = T_A(u) + T_A(v)$$
$$T_A(cu) = A(cu) = cAu = cT_A(u)$$

So, the induced function, $T_A$, satisfies both the conditions in the definition of a linear transformation and so it is a linear transformation.

So, any time we do matrix multiplication we can also think of the operation as evaluating a linear transformation.
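Here is a quick sketch of the $T(\mathbf{x}) = A\mathbf{x}$ point of view (my addition), reusing the induced matrix from Example 4 and checking the two linearity conditions numerically:

import numpy as np

A = np.array([[3, -4],
              [1, 2],
              [6, -1],
              [0, 10]])

def T(x):
    # The transformation of Examples 1 and 4, written as T(x) = Ax
    return A @ x

print(T(np.array([5, -2])))   # [ 23   1  32 -20], matching the sample evaluation

# Both linearity conditions hold for arbitrarily chosen inputs
u, v, c = np.array([1.5, -0.5]), np.array([2.0, 3.0]), -4.0
print(np.allclose(T(u + v), T(u) + T(v)))   # True
print(np.allclose(T(c * u), c * T(u)))      # True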

The other thing that we saw in Example 4 is that we were able, in that case, to write a linear transformation as a matrix multiplication. Again, it turns out that every linear transformation can be written as a matrix multiplication.

Theorem 2 Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation, then there is an $m \times n$ matrix $A$ such that $T = T_A$ (recall that $T_A$ is the transformation induced by $A$). The matrix $A$ is called the matrix induced by $T$ and is sometimes denoted as $A = [T]$.

Proof : First let,

$$\mathbf{e}_1 = (1, 0, 0, \ldots, 0) \qquad \mathbf{e}_2 = (0, 1, 0, \ldots, 0) \qquad \cdots \qquad \mathbf{e}_n = (0, 0, 0, \ldots, 1)$$

be the standard basis vectors for $\mathbb{R}^n$ and define $A$ to be the $m \times n$ matrix whose $i$th column is $T(\mathbf{e}_i)$. In other words, $A$ is given by,

$$A = \begin{bmatrix} T(\mathbf{e}_1) & T(\mathbf{e}_2) & \cdots & T(\mathbf{e}_n) \end{bmatrix}$$

Next let $\mathbf{x}$ be any vector from $\mathbb{R}^n$. We know that we can write $\mathbf{x}$ in terms of the standard basis vectors as follows,

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + \cdots + x_n\mathbf{e}_n$$

In order to prove this theorem we're going to need to show that for any $\mathbf{x}$ (which we've got a nice general one above) we will have $T(\mathbf{x}) = T_A(\mathbf{x})$. So, let's start off and plug $\mathbf{x}$ into $T$ using the general form as written out above.

$$T(\mathbf{x}) = T\left(x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + \cdots + x_n\mathbf{e}_n\right)$$

Now, we know that $T$ is a linear transformation and so we can break this up at each of the $+$'s as follows,

$$T(\mathbf{x}) = T(x_1\mathbf{e}_1) + T(x_2\mathbf{e}_2) + \cdots + T(x_n\mathbf{e}_n)$$

Next, each of the $x_i$'s are scalars, and again because $T$ is a linear transformation we can write this as,

$$T(\mathbf{x}) = x_1T(\mathbf{e}_1) + x_2T(\mathbf{e}_2) + \cdots + x_nT(\mathbf{e}_n)$$

Next, let's notice that this is nothing more than the following matrix multiplication.

$$T(\mathbf{x}) = \begin{bmatrix} T(\mathbf{e}_1) & T(\mathbf{e}_2) & \cdots & T(\mathbf{e}_n) \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

But the first matrix is nothing more than $A$ and the second is just $\mathbf{x}$, and when we define $A$ as we did above we will get,

$$T(\mathbf{x}) = A\mathbf{x} = T_A(\mathbf{x})$$

and so we've proven what we needed to.

In this proof we used the standard basis vectors to define the matrix $A$. As we will see in a later chapter there are other choices of vectors that we could use here, and these will produce a different induced matrix, $A$, and we do need to remember that. However, when we use the standard basis vectors to define $A$, as we're going to in this chapter, then we don't actually need to evaluate $T$ at each of the basis vectors as we did in the proof. All we need to do is what we did in Example 4: write down the coefficient matrix for the system of equations that we get by writing out each of the components as individual equations.

Okay, we've done a lot of work in this section and we haven't really done any examples, so we should probably do a couple of them. Note that we are saving most of the examples for the next section, so don't expect a lot here. We're just going to do a couple so we can say we've done a couple.

Example 5 The zero transformation is the transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ that maps every vector $\mathbf{x}$ in $\mathbb{R}^n$ to the zero vector in $\mathbb{R}^m$, i.e. $T(\mathbf{x}) = \mathbf{0}$. The matrix induced by this transformation is the $m \times n$ zero matrix, $0$, since,

$$T(\mathbf{x}) = T_0(\mathbf{x}) = 0\mathbf{x} = \mathbf{0}$$

To make it clear we're using the zero transformation we usually denote it by $T_0(\mathbf{x})$.

Example 6 The identity transformation is the transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ (yes, they are both $\mathbb{R}^n$) that maps every $\mathbf{x}$ to itself, i.e. $T(\mathbf{x}) = \mathbf{x}$. The matrix induced by this transformation is the $n \times n$ identity matrix, $I_n$, since,

$$T(\mathbf{x}) = T_{I_n}(\mathbf{x}) = I_n\mathbf{x} = \mathbf{x}$$

We'll usually denote the identity transformation as $T_{I_n}(\mathbf{x})$ to make it clear we're working with it.

So, the two examples above are standard examples and we did need them taken care of. However, they aren't really very illustrative for seeing how to construct the matrix induced by the transformation. To see how this is done, let's take a look at some reflections in $\mathbb{R}^2$. We'll look at reflections in $\mathbb{R}^3$ in the next section.
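The construction in the proof of Theorem 2 (stack the vectors $T(\mathbf{e}_i)$ as columns) is easy to carry out mechanically. A small sketch of my own, reusing the transformation from Example 1:

import numpy as np

def T(x):
    # Example 1's transformation, written out component by component
    x1, x2 = x
    return np.array([3*x1 - 4*x2, x1 + 2*x2, 6*x1 - x2, 10*x2])

# The columns of the induced matrix are T(e1) and T(e2)
e1, e2 = np.array([1, 0]), np.array([0, 1])
A = np.column_stack([T(e1), T(e2)])
print(A)
# [[ 3 -4]
#  [ 1  2]
#  [ 6 -1]
#  [ 0 10]]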

Example 7 Determine the matrix induced by the following reflections.
(a) Reflection about the x-axis.
(b) Reflection about the y-axis.
(c) Reflection about the line $y = x$.

Solution
Note that all of these will be linear transformations of the form $T : \mathbb{R}^2 \to \mathbb{R}^2$.

(a) Reflection about the x-axis.

Let's start off with a sketch of what we're looking for here.

So, from this sketch we can see that the components of the transformation (i.e. the equations that will map $\mathbf{x}$ into $\mathbf{w}$) are,

$$w_1 = x \qquad w_2 = -y$$

Remember that $w_1$ will be the first component of the transformed point and $w_2$ will be the second component of the transformed point. Now, just as we did in Example 4 we can write down the matrix form of this system.

$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

So, it looks like the matrix induced by this reflection is,

$$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

(b) Reflection about the y-axis.

We'll do this one a little quicker. Here's a sketch and the equations for this reflection.

$$w_1 = -x \qquad w_2 = y$$

The matrix induced by this reflection is,

$$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$

(c) Reflection about the line $y = x$.

Here's the sketch and equations for this reflection.

$$w_1 = y \qquad w_2 = x$$

The matrix induced by this reflection is,

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

Hopefully, from these examples you're starting to get a feel for how we arrive at the induced matrix for a linear transformation. We'll be seeing more of these in the next section, but for now we need to move on to some more ideas about linear transformations.
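Applying the three induced matrices from Example 7 is a one-line matter in NumPy. A quick illustration (mine, with an arbitrary test point):

import numpy as np

reflect_x  = np.array([[1, 0], [0, -1]])   # about the x-axis
reflect_y  = np.array([[-1, 0], [0, 1]])   # about the y-axis
reflect_yx = np.array([[0, 1], [1, 0]])    # about the line y = x

p = np.array([2, 5])
print(reflect_x @ p)    # [ 2 -5]
print(reflect_y @ p)    # [-2  5]
print(reflect_yx @ p)   # [5 2]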

Let's suppose that we have two linear transformations induced by the matrices $A$ and $B$, $T_A : \mathbb{R}^k \to \mathbb{R}^n$ and $T_B : \mathbb{R}^n \to \mathbb{R}^m$. If we take any $\mathbf{x}$ out of $\mathbb{R}^k$ then $T_A$ will map $\mathbf{x}$ into $\mathbb{R}^n$. In other words, $T_A(\mathbf{x})$ will be in $\mathbb{R}^n$, and notice that we can then apply $T_B$ to this and its image will be in $\mathbb{R}^m$. In summary, if we take $\mathbf{x}$ out of $\mathbb{R}^k$ and first apply $T_A$ to $\mathbf{x}$ and then apply $T_B$ to the result, we will have a transformation from $\mathbb{R}^k$ to $\mathbb{R}^m$.

This process is called composition of transformations and is denoted as

$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_B\left(T_A(\mathbf{x})\right)$$

Note that the order here is important. The first transformation to be applied is on the right and the second is on the left.

Now, because both of our original transformations were linear we can do the following,

$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_B\left(T_A(\mathbf{x})\right) = T_B\left(A\mathbf{x}\right) = \left(BA\right)\mathbf{x}$$

and so the composition $T_B \circ T_A$ is the same as multiplication by $BA$. This means that the composition will be a linear transformation provided the two original transformations were also linear.

Note as well that we can do composition with as many transformations as we want provided all the spaces correctly match up. For instance, with three transformations we require the following three transformations,

$$T_A : \mathbb{R}^k \to \mathbb{R}^n \qquad T_B : \mathbb{R}^n \to \mathbb{R}^p \qquad T_C : \mathbb{R}^p \to \mathbb{R}^m$$

and in this case the composition would be,

$$\left(T_C \circ T_B \circ T_A\right)(\mathbf{x}) = T_C\left(T_B\left(T_A(\mathbf{x})\right)\right) = \left(CBA\right)\mathbf{x}$$

Let's take a look at a couple of examples.

Example 8 Determine the matrix induced by the composition of reflection about the y-axis followed by reflection about the x-axis.

Solution
First, notice that reflection about the y-axis should change the sign on the $x$ coordinate and following this by a reflection about the x-axis should change the sign on the $y$ coordinate. The two transformations here are,

$$T_A : \text{reflection about } y\text{-axis} \qquad A = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
$$T_B : \text{reflection about } x\text{-axis} \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

The matrix induced by the composition is then,

$$T_B \circ T_A = BA = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$$

Let's take a quick look at what this does to a point. Given $\mathbf{x}$ in $\mathbb{R}^2$ we have,

$$\left(T_B \circ T_A\right)(\mathbf{x}) = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -x \\ -y \end{bmatrix}$$

This is what we expected to get. This is often called reflection about the origin.

Example 9 Determine the matrix induced by the composition of reflection about the y-axis followed by another reflection about the y-axis.

Solution
In this case if we reflect about the y-axis twice we should end up right back where we started. The two transformations in this case are,

$$T_A : \text{reflection about } y\text{-axis} \qquad A = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
$$T_B : \text{reflection about } y\text{-axis} \qquad B = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$

The induced matrix is,

$$T_B \circ T_A = BA = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2$$

So, the composition of these two transformations yields the identity transformation. So,

$$\left(T_B \circ T_A\right)(\mathbf{x}) = T_{I_2}(\mathbf{x}) = \mathbf{x}$$

and the composition will not change the original $\mathbf{x}$, as we guessed.

Examples of Linear Transformations

This section is going to be mostly devoted to giving the induced matrices for a variety of standard linear transformations. We will be working exclusively with linear transformations of the form $T : \mathbb{R}^2 \to \mathbb{R}^2$ and $T : \mathbb{R}^3 \to \mathbb{R}^3$, and for the most part we'll be providing equations and sketches of the transformations in $\mathbb{R}^2$ but we'll just be providing equations for the $\mathbb{R}^3$ cases.

Let's start this section out with two of the transformations we looked at in the previous section just so we can say we've got all the main examples here in one section.

Zero Transformation
In this case every vector $\mathbf{x}$ is mapped to the zero vector and so the transformation is,

$$T(\mathbf{x}) = T_0(\mathbf{x})$$

and the induced matrix is the zero matrix, $0$.

Identity Transformation
The identity transformation will map every vector $\mathbf{x}$ to itself. The transformation is,

$$T(\mathbf{x}) = T_{I_n}(\mathbf{x})$$

and so the induced matrix is the identity matrix.

Reflections
We saw a variety of reflections in $\mathbb{R}^2$ in the previous section, so we'll give those again here along with some reflections in $\mathbb{R}^3$ so we can say that we've got them all in one place.

- Reflection about the x-axis in $\mathbb{R}^2$: $w_1 = x$, $w_2 = -y$; induced matrix $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$
- Reflection about the y-axis in $\mathbb{R}^2$: $w_1 = -x$, $w_2 = y$; induced matrix $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$
- Reflection about the line $y = x$ in $\mathbb{R}^2$: $w_1 = y$, $w_2 = x$; induced matrix $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
- Reflection about the origin in $\mathbb{R}^2$: $w_1 = -x$, $w_2 = -y$; induced matrix $\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$
- Reflection about the xy-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y$, $w_3 = -z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
- Reflection about the yz-plane in $\mathbb{R}^3$: $w_1 = -x$, $w_2 = y$, $w_3 = z$; induced matrix $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
- Reflection about the xz-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = -y$, $w_3 = z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Note that in $\mathbb{R}^3$ when we say we're reflecting about a given plane, say the xy-plane, all we're doing is moving from above the plane to below the plane (or vice versa of course), and this means simply changing the sign of the other variable, $z$ in the case of the xy-plane.

Orthogonal Projections
We first saw orthogonal projections in the section on the dot product. In that section we looked at projections only in $\mathbb{R}^2$ and $\mathbb{R}^3$, but as we'll see eventually they can be done in any setting. Here we are going to look at some special orthogonal projections.

Let's start with the orthogonal projections in $\mathbb{R}^2$. There are two of them that we want to look at. Here is a quick sketch of both of these.

So, we project $\mathbf{x}$ onto the x-axis or y-axis depending upon which we're after.

Of course we also have a variety of projections in $\mathbb{R}^3$ as well. We could project onto one of the three axes or we could project onto one of the three coordinate planes.

Here are the orthogonal projections we're going to look at in this section, their equations and their induced matrices.

- Projection on the x-axis in $\mathbb{R}^2$: $w_1 = x$, $w_2 = 0$; induced matrix $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
- Projection on the y-axis in $\mathbb{R}^2$: $w_1 = 0$, $w_2 = y$; induced matrix $\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$
- Projection on the x-axis in $\mathbb{R}^3$: $w_1 = x$, $w_2 = 0$, $w_3 = 0$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
- Projection on the y-axis in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = y$, $w_3 = 0$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

- Projection on the z-axis in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = 0$, $w_3 = z$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
- Projection on the xy-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y$, $w_3 = 0$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
- Projection on the yz-plane in $\mathbb{R}^3$: $w_1 = 0$, $w_2 = y$, $w_3 = z$; induced matrix $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
- Projection on the xz-plane in $\mathbb{R}^3$: $w_1 = x$, $w_2 = 0$, $w_3 = z$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Contractions & Dilations
These transformations are really just fancy names for scalar multiplication, $\mathbf{w} = c\mathbf{x}$, where $c$ is a nonnegative scalar. The transformation is called a contraction if $0 \leq c \leq 1$ and a dilation if $c \geq 1$. The induced matrix is identical for both a contraction and a dilation, so we'll not give separate equations or induced matrices for both.

- Contraction/Dilation in $\mathbb{R}^2$: $w_1 = cx$, $w_2 = cy$; induced matrix $\begin{bmatrix} c & 0 \\ 0 & c \end{bmatrix}$
- Contraction/Dilation in $\mathbb{R}^3$: $w_1 = cx$, $w_2 = cy$, $w_3 = cz$; induced matrix $\begin{bmatrix} c & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & c \end{bmatrix}$
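Since the projection and contraction/dilation matrices are all diagonal, they are especially easy to build and apply. A brief illustration (my own, with arbitrary test points):

import numpy as np

proj_x  = np.array([[1, 0], [0, 0]])   # projection on the x-axis in R^2
proj_xy = np.diag([1, 1, 0])           # projection on the xy-plane in R^3

print(proj_x @ np.array([3, 7]))            # [3 0]
print(proj_xy @ np.array([3, 7, -2]))       # [ 3  7  0]
print(0.5 * np.eye(2) @ np.array([3, 7]))   # contraction by c = 1/2: [1.5 3.5]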

Rotations
We'll start this discussion in $\mathbb{R}^2$. We're going to start with a vector $\mathbf{x}$ and we want to rotate that vector through an angle $\theta$ in the counter-clockwise manner as shown below.

Unlike the previous transformations where we could just write down the equations, we'll need to do a little derivation work here. First, from our basic knowledge of trigonometry we know that

$$x = r\cos\alpha \qquad y = r\sin\alpha$$

and we also know that

$$w_1 = r\cos(\alpha + \theta) \qquad w_2 = r\sin(\alpha + \theta)$$

Now, through a trig formula we can write the equations for $\mathbf{w}$ as follows,

$$w_1 = \left(r\cos\alpha\right)\cos\theta - \left(r\sin\alpha\right)\sin\theta$$
$$w_2 = \left(r\cos\alpha\right)\sin\theta + \left(r\sin\alpha\right)\cos\theta$$

Notice that the formulas for $x$ and $y$ both show up in these formulas, so substituting in for those gives,

$$w_1 = x\cos\theta - y\sin\theta \qquad w_2 = x\sin\theta + y\cos\theta$$

Finally, since $\theta$ is a fixed angle, $\cos\theta$ and $\sin\theta$ are fixed constants, and so there are our equations and the induced matrix is,

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

In $\mathbb{R}^3$ we also have rotations, but the derivations are a little trickier. The three that we'll be giving here are counter-clockwise rotations about the three positive coordinate axes. Here are all the rotational equations and induced matrices.

- Counter-clockwise rotation through an angle $\theta$ in $\mathbb{R}^2$: $w_1 = x\cos\theta - y\sin\theta$, $w_2 = x\sin\theta + y\cos\theta$; induced matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
- Counter-clockwise rotation through an angle $\theta$ about the positive x-axis in $\mathbb{R}^3$: $w_1 = x$, $w_2 = y\cos\theta - z\sin\theta$, $w_3 = y\sin\theta + z\cos\theta$; induced matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$
- Counter-clockwise rotation through an angle $\theta$ about the positive y-axis in $\mathbb{R}^3$: $w_1 = x\cos\theta + z\sin\theta$, $w_2 = y$, $w_3 = z\cos\theta - x\sin\theta$; induced matrix $\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$
- Counter-clockwise rotation through an angle $\theta$ about the positive z-axis in $\mathbb{R}^3$: $w_1 = x\cos\theta - y\sin\theta$, $w_2 = x\sin\theta + y\cos\theta$, $w_3 = z$; induced matrix $\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Okay, we've given quite a few general formulas here, but we haven't worked any examples with numbers in them, so let's do that.
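The 2-space rotation matrix is worth packaging up as a small function. A sketch of my own (the angle is converted from degrees since the examples below all use degrees):

import numpy as np

def rotation(theta_deg):
    """Induced matrix for a counter-clockwise rotation in R^2."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

print(rotation(90) @ np.array([1, 0]))    # [0, 1] up to round-off: e1 rotates onto e2
print(rotation(30) @ np.array([2, -6]))   # [4.732..., -4.196...], i.e. (sqrt(3)+3, 1-3*sqrt(3)),
                                          # the same numbers as Example 2 (a) below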

Example 1 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) $\mathbf{x} = (2, -4, 1)$ reflected about the xz-plane.
(b) $\mathbf{x} = (0, 7, -9)$ projected on the x-axis.
(c) $\mathbf{x} = (0, 7, -9)$ projected on the yz-plane.

Solution
So, it would be easier to just do all of these directly rather than using the induced matrix, but at least this way we can verify that the induced matrix gives the correct value.

(a) Here's the multiplication for this one.

$$\mathbf{w} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ -4 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 1 \end{bmatrix}$$

So, the point $\mathbf{x} = (2, -4, 1)$ maps to $\mathbf{w} = (2, 4, 1)$ under this transformation.

(b) The multiplication for this problem is,

$$\mathbf{w} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 7 \\ -9 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

The projection here is $\mathbf{w} = (0, 0, 0)$.

(c) The multiplication for the final transformation in this set is,

$$\mathbf{w} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 7 \\ -9 \end{bmatrix} = \begin{bmatrix} 0 \\ 7 \\ -9 \end{bmatrix}$$

The projection here is $\mathbf{w} = (0, 7, -9)$.

Let's take a look at a couple of rotations.

Example 2 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) $\mathbf{x} = (2, -6)$ rotated $30^{\circ}$ in the counter-clockwise direction.
(b) $\mathbf{x} = (0, 5, 1)$ rotated $90^{\circ}$ in the counter-clockwise direction about the y-axis.
(c) $\mathbf{x} = (-3, 4, 1)$ rotated $25^{\circ}$ in the counter-clockwise direction about the z-axis.

Solution
There isn't much to these other than plugging into the appropriate induced matrix and then doing the multiplication.

(a) Here is the work for this rotation.

$$\mathbf{w} = \begin{bmatrix} \cos 30^{\circ} & -\sin 30^{\circ} \\ \sin 30^{\circ} & \cos 30^{\circ} \end{bmatrix} \begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} \sqrt{3} + 3 \\ 1 - 3\sqrt{3} \end{bmatrix}$$

The new point after this rotation is then $\mathbf{w} = \left(\sqrt{3} + 3,\ 1 - 3\sqrt{3}\right)$.

(b) The matrix multiplication for this rotation is,

$$\mathbf{w} = \begin{bmatrix} \cos 90^{\circ} & 0 & \sin 90^{\circ} \\ 0 & 1 & 0 \\ -\sin 90^{\circ} & 0 & \cos 90^{\circ} \end{bmatrix} \begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \\ 0 \end{bmatrix}$$

The point after this rotation becomes $\mathbf{w} = (1, 5, 0)$. Note that we could have predicted this one. The original point was in the yz-plane (because the $x$ component is zero) and a $90^{\circ}$ counter-clockwise rotation about the y-axis would put the new point in the xy-plane with the $z$ component becoming the $x$ component, and that is exactly what we got.

(c) Here's the work for this part, and notice that the angle is not one of the standard trig angles so the answers will be in decimals.

$$\mathbf{w} = \begin{bmatrix} \cos 25^{\circ} & -\sin 25^{\circ} & 0 \\ \sin 25^{\circ} & \cos 25^{\circ} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.9063 & -0.4226 & 0 \\ 0.4226 & 0.9063 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} -4.4094 \\ 2.3574 \\ 1 \end{bmatrix}$$

The new point under this rotation is then $\mathbf{w} = (-4.4094, 2.3574, 1)$.

Finally, let's take a look at some compositions of transformations.

Example 3 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) Dilate $\mathbf{x} = (4, 1, 3)$ by 2 (i.e. $2\mathbf{x}$) and then project on the y-axis.
(b) Project $\mathbf{x} = (4, 1, 3)$ on the y-axis and then dilate by 2.
(c) Project $\mathbf{x} = (4, 2)$ on the x-axis and then rotate by $45^{\circ}$ counter-clockwise.
(d) Rotate $\mathbf{x} = (4, 2)$ $45^{\circ}$ counter-clockwise and then project on the x-axis.

Solution
Notice that the first two are the same transformations just done in the opposite order, and the same is true for the last two. Do you expect to get the same result from each composition regardless of the order the transformations are done in?

Recall as well that in compositions we can get the induced matrix by multiplying the induced matrices from each transformation from right to left in the order they are applied. For instance the induced matrix for the composition $T_B \circ T_A$ is $BA$, where $T_A$ is the first transformation applied to the point.

(a) Dilate $\mathbf{x} = (4, 1, 3)$ by 2 (i.e. $2\mathbf{x}$) and then project on the y-axis.

The induced matrix for this composition is,

$$\underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{Project on y-axis}} \underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}}_{\text{Dilate by 2}} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{Composition}}$$

The matrix multiplication for the new point is then,

$$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix}$$

The new point is then $\mathbf{w} = (0, 2, 0)$.

(b) Project $\mathbf{x} = (4, 1, 3)$ on the y-axis and then dilate by 2.

In this case the induced matrix is,

$$\underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}}_{\text{Dilate by 2}} \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{Project on y-axis}} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}}_{\text{Composition}}$$

So, in this case the induced matrix for the composition is the same as in the previous part. Therefore, the new point is also the same, $\mathbf{w} = (0, 2, 0)$.

(c) Project $\mathbf{x} = (4, 2)$ on the x-axis and then rotate by $45^{\circ}$ counter-clockwise.

Here is the induced matrix for this composition.

$$\underbrace{\begin{bmatrix} \cos 45^{\circ} & -\sin 45^{\circ} \\ \sin 45^{\circ} & \cos 45^{\circ} \end{bmatrix}}_{\text{Rotate by } 45^{\circ}} \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{\text{Project on x-axis}} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ \frac{\sqrt{2}}{2} & 0 \end{bmatrix}$$

The matrix multiplication for the new point after applying this composition is,

$$\mathbf{w} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ \frac{\sqrt{2}}{2} & 0 \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2\sqrt{2} \\ 2\sqrt{2} \end{bmatrix}$$

The new point is then, $\mathbf{w} = \left(2\sqrt{2},\ 2\sqrt{2}\right)$.

(d) Rotate $\mathbf{x} = (4, 2)$ $45^{\circ}$ counter-clockwise and then project on the x-axis.

The induced matrix for the final composition is,

$$\underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{\text{Project on x-axis}} \underbrace{\begin{bmatrix} \cos 45^{\circ} & -\sin 45^{\circ} \\ \sin 45^{\circ} & \cos 45^{\circ} \end{bmatrix}}_{\text{Rotate by } 45^{\circ}} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}$$

Note that this is different from the induced matrix from (c), so we should expect the new point to also be different. The fact that the induced matrix is different shouldn't be too surprising given that matrix multiplication is not a commutative operation.

The matrix multiplication for the new point is,

$$\mathbf{w} = \begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} \sqrt{2} \\ 0 \end{bmatrix}$$

The new point is then $\mathbf{w} = \left(\sqrt{2}, 0\right)$, and as we expected it was not the same as that from part (c).

So, as this example has shown us, transformation composition is not necessarily commutative, and so we shouldn't expect the two orders to give the same result in most cases.
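The order dependence in parts (c) and (d) is easy to see numerically, since composition is just matrix multiplication applied right to left. A quick check of my own:

import numpy as np

t = np.radians(45)
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])   # rotate 45 degrees
P = np.array([[1, 0], [0, 0]])                                    # project on the x-axis
x = np.array([4, 2])

print(R @ P @ x)   # project, then rotate: [2.828..., 2.828...] = (2*sqrt(2), 2*sqrt(2))
print(P @ R @ x)   # rotate, then project: [1.414..., 0]        = (sqrt(2), 0)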

Vector Spaces

Introduction

In the previous chapter we looked at vectors in Euclidean n-space, and while in $\mathbb{R}^2$ and $\mathbb{R}^3$ we thought of vectors as directed line segments. A vector however, is a much more general concept and it doesn't necessarily have to represent a directed line segment in $\mathbb{R}^2$ or $\mathbb{R}^3$. Nor does a vector have to represent the vectors we looked at in $\mathbb{R}^n$. As we'll soon see a vector can be a matrix or a function, and that's only a couple of possibilities for vectors. With all that said a good many of our examples will be examples from $\mathbb{R}^n$ since that is a setting that most people are familiar with and/or can visualize. We will however try to include the occasional example that does not lie in $\mathbb{R}^n$.

The main idea of study in this chapter is that of a vector space. A vector space is nothing more than a collection of vectors (whatever those now are...) that satisfies a set of axioms. Once we get the general definition of a vector and a vector space out of the way we'll look at many of the important ideas that come with vector spaces. Towards the end of the chapter we'll take a quick look at inner product spaces.

Here is a listing of the topics in this chapter.

Vector Spaces - In this section we'll formally define vectors and vector spaces.

Subspaces - Here we will be looking at vector spaces that live inside of other vector spaces.

Span - The concept of the span of a set of vectors will be investigated in this section.

Linear Independence - Here we will take a look at what it means for a set of vectors to be linearly independent or linearly dependent.

Basis and Dimension - We'll be looking at the idea of a set of basis vectors and the dimension of a vector space.

Change of Basis - In this section we will see how to change the set of basis vectors for a vector space.

Fundamental Subspaces - Here we will take a look at some of the fundamental subspaces of a matrix, including the row space, column space and null space.

Inner Product Spaces - We will be looking at a special kind of vector space in this section as well as define the inner product.

Orthonormal Basis - In this section we will develop and use the Gram-Schmidt process for constructing an orthogonal/orthonormal basis for an inner product space.

Least Squares - In this section we'll take a look at an application of some of the ideas that we will be discussing in this chapter.

QR-Decomposition - Here we will take a look at the QR-Decomposition for a matrix and how it can be used in the least squares process.

Orthogonal Matrices - We will take a look at a special kind of matrix, the orthogonal matrix, in this section.

Vector Spaces

As noted in the introduction to this chapter, vectors do not have to represent directed line segments in space. When we first start looking at many of the concepts of a vector space we usually start with the directed line segment idea and its natural extension to vectors in $\mathbb{R}^n$ because it is something that most people can visualize and get their hands on. So, the first thing that we need to do in this chapter is to define just what a vector space is and just what vectors really are.

However, before we actually do that we should point out that because most people can visualize directed line segments, most of our examples in these notes will revolve around vectors in $\mathbb{R}^n$. We will try to always include an example or two with vectors that aren't in $\mathbb{R}^n$ just to make sure that we don't forget that vectors are more general objects, but the reality is that most of the examples will be in $\mathbb{R}^n$.

So, with all that out of the way let's go ahead and get the definition of a vector and a vector space out of the way.

Definition 1 Let $V$ be a set on which addition and scalar multiplication are defined (this means that if $u$ and $v$ are objects in $V$ and $c$ is a scalar then we've defined $u + v$ and $cu$ in some way). If the following axioms are true for all objects $u$, $v$, and $w$ in $V$ and all scalars $c$ and $k$ then $V$ is called a vector space and the objects in $V$ are called vectors.
(a) $u + v$ is in $V$ - This is called closed under addition.
(b) $cu$ is in $V$ - This is called closed under scalar multiplication.
(c) $u + v = v + u$
(d) $u + (v + w) = (u + v) + w$
(e) There is a special object in $V$, denoted $\mathbf{0}$ and called the zero vector, such that for all $u$ in $V$ we have $u + \mathbf{0} = \mathbf{0} + u = u$.
(f) For every $u$ in $V$ there is another object in $V$, denoted $-u$ and called the negative of $u$, such that $u - u = u + (-u) = \mathbf{0}$.
(g) $c(u + v) = cu + cv$
(h) $(c + k)u = cu + ku$
(i) $c(ku) = (ck)u$
(j) $1u = u$

We should make a couple of comments about these axioms at this point. First, do not get too locked into the standard ways of defining addition and scalar multiplication. For the most part we will be doing addition and scalar multiplication in a fairly standard way, but there will be the occasional example where we won't. In order for something to be a vector space it simply must have an addition and scalar multiplication that meets the above axioms, and it doesn't matter how strange the addition or scalar multiplication might be.

Next, the first two axioms may seem a little strange at first glance. It might seem like these two will be trivially true for any definition of addition or scalar multiplication; however, we will see at least one example in this section of a set that is not closed under a particular scalar multiplication.

Finally, with the exception of the first two, these axioms should all seem familiar to you. All of these axioms were in one of the theorems from the discussion on vectors and/or Euclidean n-space in the previous chapter. However, in this case they aren't properties, they are axioms. What that means is that they aren't to be proven. Axioms are simply the rules under which we're going to operate when we work with vector spaces. Given a definition of addition and scalar multiplication we'll simply need to verify that the above axioms are satisfied by our definitions.

We should also make a quick comment about the scalars that we'll be using here. To this point, and in all the examples we'll be looking at in the future, the scalars are real numbers. However, they don't have to be real numbers. They could be complex numbers. When we restrict the scalars to real numbers we generally call the vector space a real vector space, and when we allow the scalars to be complex numbers we generally call the vector space a complex vector space. We will be working exclusively with real vector spaces, and from this point on when we see vector space it is to be understood that we mean a real vector space.

We should now look at some examples of vector spaces and at least a couple of examples of sets that aren't vector spaces. Some of these will be fairly standard vector spaces while others may seem a little strange at first, but are fairly important to other areas of mathematics.

Example 1 If $n$ is any positive integer then the set $V = \mathbb{R}^n$ with the standard addition and scalar multiplication as defined in the Euclidean n-space section is a vector space.

Technically we should show that the axioms are all met here; however, that was done in Theorem 1 from the Euclidean n-space section and so we won't do that for this example.

Note that from this point on, when we refer to the standard vector addition and standard vector scalar multiplication we are referring to those we defined in the Euclidean n-space section.

Example 2 The set $V = \mathbb{R}^2$ with the standard vector addition and scalar multiplication defined as,

$$c(u_1, u_2) = (u_1, cu_2)$$

is NOT a vector space.

Showing that something is not a vector space can be tricky because it's completely possible that only one of the axioms fails. In this case, because we're dealing with the standard addition, all the axioms involving the addition of objects from $V$ (a, c, d, e, and f) will be valid. Also, in this case, of all the axioms involving the scalar multiplication (b, g, h, i, and j), only (h) is not valid. We'll show this in a bit, but the point needs to be made here that only one of the axioms will fail in this case and that is enough for this set, under this definition of addition and multiplication, to not be a vector space.

First we should at least show that the set meets axiom (b), and this is easy enough to show in that we can see that the result of the scalar multiplication is again a point in $\mathbb{R}^2$ and so the set is closed under scalar multiplication. Again, do not get used to this happening. We will see at least one example later in this section of a set that is not closed under scalar multiplication as we'll define it there.

Now, to show that (h) is not valid we'll need to compute both sides of the equality and show that they aren't equal.

$$(c + k)u = (c + k)(u_1, u_2) = \left(u_1,\ (c + k)u_2\right) = \left(u_1,\ cu_2 + ku_2\right)$$

$$cu + ku = c(u_1, u_2) + k(u_1, u_2) = (u_1, cu_2) + (u_1, ku_2) = \left(2u_1,\ cu_2 + ku_2\right)$$

So, we can see that $(c + k)u \neq cu + ku$ because the first components are not the same. This means that axiom (h) is not valid for this definition of scalar multiplication.

We'll not verify that the remaining scalar multiplication axioms are valid for this definition of scalar multiplication. We'll leave those to you. All you need to do is compute both sides of the equal sign and show that you get the same thing on each side.

Example 3 The set $V = \mathbb{R}^3$ with the standard vector addition and scalar multiplication defined as,

$$c(u_1, u_2, u_3) = (0, 0, cu_3)$$

is NOT a vector space.

Again, there is a single axiom that fails in this case. We'll leave it to you to verify that the others hold. In this case it is the last axiom, (j), that fails, as the following work shows.

$$1u = 1(u_1, u_2, u_3) = \left(0, 0, (1)u_3\right) = (0, 0, u_3) \neq (u_1, u_2, u_3) = u$$

Example 4 The set $V = \mathbb{R}^2$ with the standard scalar multiplication and addition defined as,

$$(u_1, u_2) + (v_1, v_2) = (u_1 + 2v_1,\ u_2 + v_2)$$

is NOT a vector space.

To see that this is not a vector space let's take a look at axiom (c).

$$u + v = (u_1, u_2) + (v_1, v_2) = (u_1 + 2v_1,\ u_2 + v_2)$$
$$v + u = (v_1, v_2) + (u_1, u_2) = (v_1 + 2u_1,\ v_2 + u_2)$$

So, because only the first component of the second point listed gets multiplied by 2 we can see that $u + v \neq v + u$ and so this is not a vector space. You should go through the other axioms and determine if they are valid or not for the practice.

So, we've now seen three examples of sets of the form $V = \mathbb{R}^n$ that are NOT vector spaces, so hopefully it is clear that there are sets out there that aren't vector spaces. In each case we had to change the definition of scalar multiplication or addition to make the set fail to be a vector space. However, don't read too much into that. It is possible for a set under the standard scalar multiplication and addition to fail to be a vector space, as we'll see in a bit. Likewise, it's possible for a set of this form to have a non-standard scalar multiplication and/or addition and still be a vector space.
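The failure of axiom (h) in Example 2 is concrete enough to watch happen. A tiny sketch of my own, with arbitrarily chosen numbers:

# Scalar multiplication from Example 2: c(u1, u2) = (u1, c*u2), with standard addition.
def smul(c, u):
    return (u[0], c * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, c, k = (3.0, 5.0), 2.0, 4.0
print(smul(c + k, u))                # (3.0, 30.0)
print(add(smul(c, u), smul(k, u)))   # (6.0, 30.0) -- first components differ, so (h) fails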

In fact, let's take a look at the following example. This is probably going to be the only example that we're going to go through and do in excruciating detail in this section. We're doing this for two reasons. First, you really should see all the detail that needs to go into actually showing that a set along with a definition of addition and scalar multiplication is a vector space. Second, our definitions are NOT going to be standard here and it would be easy to get confused with the details if you had to go through them on your own.

Example 5 Suppose that the set $V$ is the set of positive real numbers (i.e. $x > 0$) with addition and scalar multiplication defined as follows,

$$x + y = xy \qquad cx = x^c$$

This set under this addition and scalar multiplication is a vector space.

First notice that we're taking $V$ to be only a portion of $\mathbb{R}$. If we took it to be all of $\mathbb{R}$ we would not have a vector space. Next, do not get excited about the definitions of addition and scalar multiplication here. Even though they are not addition and scalar multiplication as we think of them, we are still going to call them the addition and scalar multiplication operations for this vector space.

Okay, let's go through each of the axioms and verify that they are valid.

First let's take a look at the closure axioms, (a) and (b). Since $x$ and $y$ are positive numbers their product $xy$ is a positive real number, so $V$ is closed under addition. Since $x$ is positive then for any $c$, $x^c$ is a positive real number, so $V$ is closed under scalar multiplication.

Next we'll verify (c). We'll do this one with some detail, pointing out how we do each step. First assume that $x$ and $y$ are any two elements of $V$ (i.e. they are two positive real numbers).

$$x + y = xy \quad \text{(definition of addition)} \quad = yx \quad \text{(multiplication of real numbers is commutative)} \quad = y + x \quad \text{(definition of addition)}$$

We'll now verify (d). Again, we'll make it clear how we're going about each step with this one. Assume that $x$, $y$, and $z$ are any three elements of $V$.

$$x + (y + z) = x + yz = x(yz) = (xy)z = (x + y) + z$$

Next we need to find the zero vector, $\mathbf{0}$, and we need to be careful here. We use $\mathbf{0}$ to denote the zero vector but it does NOT have to be the number zero. In fact in this case it can't be zero, if for no other reason than the fact that the number zero isn't in the set $V$! We need to find an element that is in $V$ so that under our definition of addition we have,

$$x + \mathbf{0} = \mathbf{0} + x = x$$

It looks like we should define the zero vector in this case as $\mathbf{0} = 1$. In other words, the zero vector for this set will be the number 1!

Let's see how that works, and remember that our addition here is really multiplication and remember to substitute the number 1 in for $\mathbf{0}$. If $x$ is any element of $V$,

$$x + \mathbf{0} = x \cdot 1 = x \qquad \& \qquad \mathbf{0} + x = 1 \cdot x = x$$

Sure enough, that does what we want it to do.

We next need to define the negative, $-x$, for each element $x$ that is in $V$. As with the zero vector, do not confuse $-x$ with "minus $x$"; this is just the notation we use to denote the negative of $x$. In our case we need an element of $V$ (so it can't be minus $x$ since that isn't in $V$) such that

$$x + (-x) = \mathbf{0}$$

and remember that $\mathbf{0} = 1$ in our case!

Given an $x$ in $V$ we know that $x$ is strictly positive, so $\frac{1}{x}$ is defined (since $x$ isn't zero) and is positive (since $x$ is positive), and therefore $\frac{1}{x}$ is in $V$. Also, under our definitions of addition and the zero vector we have,

$$x + (-x) = x \cdot \frac{1}{x} = 1 = \mathbf{0}$$

Therefore, for the set $V$ the negative of $x$ is $-x = \frac{1}{x}$.

So, at this point we've taken care of the closure and addition axioms; we now just need to deal with the axioms relating to scalar multiplication.

We'll start with (g). We'll do this one in some detail so you can see what we're doing at each step. If $x$ and $y$ are any two elements of $V$ and $c$ is any scalar then,

$$c(x + y) = c(xy) = (xy)^c = x^c y^c = x^c + y^c = cx + cy$$

So, it looks like we've verified (g).

Let's now verify (h). If $x$ is any element of $V$ and $c$ and $k$ are any two scalars then,

$$(c + k)x = x^{c + k} = x^c x^k = x^c + x^k = cx + kx$$

So, this axiom is verified.

Now, let's verify (i). If $x$ is any element of $V$ and $c$ and $k$ are any two scalars then,

$$c(kx) = c\left(x^k\right) = \left(x^k\right)^c = x^{ck} = (ck)x$$

We've got the final axiom to go here, and that's a fairly simple one to verify.

$$1x = x^1 = x$$

Just remember that $1x$ is the notation for scalar multiplication and NOT multiplication of $x$ by the number 1.

Okay, that was a lot of work, and we're not going to be showing that much work in the remainder of the examples that are vector spaces. We'll leave it up to you to check most of the axioms now that you've seen one done completely out. For those examples that aren't a vector space we'll show the details on at least one of the axioms that fails. For these examples you should check the other axioms to see if they are valid or fail.
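Because the operations in Example 5 are just multiplication and exponentiation in disguise, several of the axioms can be spot-checked numerically. A small sketch of my own (the tolerances guard against floating-point round-off):

vadd = lambda x, y: x * y     # "addition" on V: x + y := xy
smul = lambda c, x: x ** c    # "scalar multiplication": cx := x^c
zero = 1.0                    # the zero vector is the number 1
neg  = lambda x: 1.0 / x      # the negative of x is 1/x

x, y, c, k = 2.5, 7.0, 3.0, -0.5
print(abs(vadd(x, zero) - x) < 1e-12)                                    # axiom (e)
print(abs(vadd(x, neg(x)) - zero) < 1e-12)                               # axiom (f)
print(abs(smul(c, vadd(x, y)) - vadd(smul(c, x), smul(c, y))) < 1e-12)   # axiom (g)
print(abs(smul(c + k, x) - vadd(smul(c, x), smul(k, x))) < 1e-12)        # axiom (h)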

Example 6 Let the set $V$ be the points on a line through the origin in $\mathbb{R}^2$ with the standard addition and scalar multiplication. Then $V$ is a vector space.

First, let's think about just what $V$ is. The set $V$ is all the points that are on some line through the origin in $\mathbb{R}^2$. So, we know that the line must have the equation,

$$ax + by = 0$$

for some $a$ and some $b$, at least one not zero. Also note that $a$ and $b$ are fixed constants and aren't allowed to change. In other words we are always on the same line. Now, a point $(x, y)$ will be on the line, and hence in $V$, provided it satisfies the equation above.

We'll show that $V$ is closed under addition and scalar multiplication and leave it to you to verify the remaining axioms. Let's first show that $V$ is closed under addition. To do this we'll need the sum of two random points from $V$, say $u = (x_1, y_1)$ and $v = (x_2, y_2)$, and we'll need to show that $u + v = (x_1 + x_2,\ y_1 + y_2)$ is also in $V$. This amounts to showing that this point satisfies the equation of the line, and that's fairly simple to do: just plug the coordinates into the equation and verify we get zero.

$$a(x_1 + x_2) + b(y_1 + y_2) = (ax_1 + by_1) + (ax_2 + by_2) = 0 + 0 = 0$$

So, after some rearranging and using the fact that both $u$ and $v$ were in $V$ (and so satisfied the equation of the line) we see that the sum also satisfies the equation of the line and so is in $V$. We've now shown that $V$ is closed under addition.

To show that $V$ is closed under scalar multiplication we'll need to show that for any $u$ from $V$ and any scalar, $c$, the point $cu = (cx_1, cy_1)$ is also in $V$. This is done pretty much as we did closed under addition.

$$a(cx_1) + b(cy_1) = c(ax_1 + by_1) = c(0) = 0$$

So, $cu$ is on the line and hence in $V$. $V$ is therefore closed under scalar multiplication.

Again, we'll leave it to you to verify the remaining axioms. Note however, that because we're working with the standard addition, the zero vector and negative are the standard zero vector and negative that we're used to dealing with,

$$\mathbf{0} = (0, 0) \qquad -u = -(x, y) = (-x, -y)$$

Note that we can extend this example to a line through the origin in $\mathbb{R}^n$ and still have a vector space. Showing that this set is a vector space can be a little difficult, however, if you don't know the equation of a line in $\mathbb{R}^n$, as many of you probably don't, and so we won't show the work here.

Example 7 Let the set $V$ be the points on a line that does NOT go through the origin in $\mathbb{R}^2$ with the standard addition and scalar multiplication. Then $V$ is not a vector space.

In this case the equation of the line will be,

$$ax + by = c$$

for fixed constants $a$, $b$, and $c$, where at least one of $a$ and $b$ is non-zero and $c$ is not zero. This set is not closed under addition or scalar multiplication. Here is the work showing that it's not closed under addition. Let $u = (x_1, y_1)$ and $v = (x_2, y_2)$ be any two points from $V$ (and so they satisfy the equation above). Then,

$$a(x_1 + x_2) + b(y_1 + y_2) = (ax_1 + by_1) + (ax_2 + by_2) = c + c = 2c \neq c$$

So the sum, $u + v = (x_1 + x_2,\ y_1 + y_2)$, does not satisfy the equation and hence is not in $V$, so $V$ is not closed under addition. We'll leave it to you to verify that this particular $V$ is not closed under scalar multiplication.

Also, note that since we are working on a set of points from $\mathbb{R}^2$ with the standard addition, the zero vector must be $\mathbf{0} = (0, 0)$, but because this doesn't satisfy the equation it is not in $V$ and so axiom (e) is also not satisfied. In order for $V$ to be a vector space it must contain the zero vector $\mathbf{0}$!

You should go through the remaining axioms and see if there are any others that fail.

Before moving on we should note that prior to this example all the sets that have not been vector spaces have had non-standard addition and/or scalar multiplication. In this example we've now seen that some sets under the standard addition and scalar multiplication will not be vector spaces either.

Example 8 Let the set $V$ be the points on a plane through the origin in $\mathbb{R}^3$ with the standard addition and scalar multiplication. Then $V$ is a vector space.

The equation of a plane through the origin in $\mathbb{R}^3$ is,

$$ax + by + cz = 0$$

where $a$, $b$, and $c$ are fixed constants and at least one is not zero.

Given the equation you can (hopefully) see that this will work in pretty much the same manner as Example 6, so we won't show any work here.

Okay, we've seen quite a few examples to this point, but they've all involved sets that were some or all of $\mathbb{R}^n$, so we now need to see a couple of examples of vector spaces whose elements (and hence the vectors of the set) are not points in $\mathbb{R}^n$.

Example 9 Let $n$ and $m$ be fixed numbers and let $M_{n\,m}$ represent the set of all $n \times m$ matrices. Also let addition and scalar multiplication on $M_{n\,m}$ be the standard matrix addition and standard matrix scalar multiplication. Then $M_{n\,m}$ is a vector space.

If we let $c$ be any scalar and let the vectors $u$ and $v$ represent any two $n \times m$ matrices (i.e. they are both objects in $M_{n\,m}$), then we know from our work in the first chapter that the sum, $u + v$, and the scalar multiple, $cu$, are also $n \times m$ matrices and hence are in $M_{n\,m}$. So $M_{n\,m}$ is closed under addition and scalar multiplication.

Next, if we define the zero vector, $\mathbf{0}$, to be the $n \times m$ zero matrix and if the vector $u$ is some $n \times m$ matrix, $A$, we can define the negative, $-u$, to be the matrix $-A$; then the properties of matrix arithmetic will show that the remainder of the axioms are valid. Therefore, $M_{n\,m}$ is a vector space.

Note that this example now gives us a whole host of new vector spaces. For instance, the set of $2 \times 2$ matrices, $M_{2\,2}$, is a vector space, and the set of all $5 \times 9$ matrices, $M_{5\,9}$, is a vector space, etc. Also, the vectors in this vector space are really matrices!

Here's another important example that may appear to be even stranger yet.

Example 10 Let $F[a, b]$ be the set of all real valued functions that are defined on the interval $[a, b]$. Then given any two vectors, $\mathbf{f} = f(x)$ and $\mathbf{g} = g(x)$, from $F[a, b]$ and any scalar $c$, define addition and scalar multiplication as,

$$\left(\mathbf{f} + \mathbf{g}\right)(x) = f(x) + g(x) \qquad \left(c\,\mathbf{f}\right)(x) = c\,f(x)$$

Under these operations $F[a, b]$ is a vector space.

By assumption both f and g are real valued and defined on [a, b]. Then, for both addition and scalar multiplication we're just going to plug x into both f(x) and/or g(x), and both of these are defined, and so the sum or the product with a scalar will also be defined. This space is therefore closed under addition and scalar multiplication. The zero vector, 0, for F[a, b] is the zero function, i.e. the function that is zero for all x, and the negative of the vector f is the vector -f = -f(x).

We should make a couple of quick comments about this vector space before we move on. First, recall that [a, b] represents the interval a ≤ x ≤ b (i.e. we include the endpoints). We could also look at the set F(a, b), which is the set of all real valued functions that are defined on (a, b) (a < x < b, no endpoints), or F(-∞, ∞), the set of all real valued functions defined on (-∞, ∞), and we'll still have a vector space.

Also, depending upon the interval we choose to work with we may have a different set of functions in the set. For instance, the function 1/x would be in F[2, 10] but not in F[-3, 6] because of the division by zero at x = 0.

In this case the vectors are now functions, so again we need to be careful with the term vector. It can mean a lot of different things depending upon what type of vector space we're working with.

Both of the vector spaces from Examples 9 and 10 are fairly important vector spaces and we'll look at them again in the next section, where we'll see some examples of some related vector spaces.

There is one final example that we need to look at in this section.

Example 11 Let V consist of a single object, denoted by 0, and define

0 + 0 = 0        c0 = 0

Then V is a vector space and is called the zero vector space.

The last thing that we need to do in this section before moving on is to get a nice set of facts that fall pretty much directly from the axioms and will be true for all vector spaces.

Theorem 1 Suppose that V is a vector space, u is a vector in V and c is any scalar. Then,
(a) 0u = 0
(b) c0 = 0
(c) (-1)u = -u
(d) If cu = 0 then either c = 0 or u = 0.

The proofs of these are really quite simple, but they only appear that way after you've seen them. Coming up with them on your own can be difficult sometimes. We'll give the proof for two of them and you should try to prove the other two on your own.

Proof :
(a) Now, this can seem tricky, but each of these steps will come straight from a property of real numbers or one of the axioms above. We'll start with 0u, use the fact that we can always write 0 = 0 + 0, and then use axiom (h).

0u = (0 + 0)u = 0u + 0u

This may have seemed like a silly and/or strange step, but it was required. We couldn't just add a -0u onto one side because this would, in essence, be using the fact that 0u = 0 and that's what we're trying to prove! So, while we don't know just what 0u is as a vector, it is in the vector space and so we know from axiom (f) that it has a negative, which we'll denote by -0u. Add the negative to both sides and then use axiom (f) again to say that 0u + (-0u) = 0.

0u + (-0u) = 0u + 0u + (-0u)
0 = 0u + 0

Finally, use axiom (e) on the right side to get,

0 = 0u

and we've proven (a).

(c) In this case if we can show that u + (-1)u = 0 then from axiom (f) we'll know that (-1)u is the negative of u, or in other words that (-1)u = -u. This isn't too hard to show. We'll start with u + (-1)u and use axiom (j) to rewrite the first u as follows,

u + (-1)u = 1u + (-1)u

Next, use axiom (h) on the right side and then a nice property of real numbers.

1u + (-1)u = (1 + (-1))u = 0u

Finally, use part (a) of this theorem on the right side and we get,

u + (-1)u = 0

So, (-1)u is the negative of u and we're done.

Subspaces

Let's go back to the previous section for a second and examine Example 1 and Example 6. In Example 1 we saw that R^n was a vector space with the standard addition and scalar multiplication for any positive integer n. So, in particular, R^2 is a vector space with the standard addition and scalar multiplication. In Example 6 we saw that the set of points on a line through the origin in R^2 with the standard addition and scalar multiplication was also a vector space.

So, just what is so important about these two examples? Well, first notice that they both are using the same addition and scalar multiplication. In and of itself that isn't important, but it will be important for the end result of what we want to discuss here. Next, the set of points in the vector space of Example 6 is also in the set of points in the vector space of Example 1. While it's not important to the discussion here, note that the opposite isn't true: given a line we can find points in R^2 that aren't on the line.

What we've seen here is that, at least for some vector spaces, it is possible to take certain subsets of the original vector space and, as long as we retain the definition of addition and scalar multiplication, we will get a new vector space. Of course, it is possible for some subsets to not be a new vector space. To see an example of this see Example 7 from the previous section. In that example we've got a subset of R^2 with the standard addition and scalar multiplication and yet it's not a vector space. We want to investigate this idea in more detail and we'll start off with the following definition.

Definition 1 Suppose that V is a vector space and W is a subset of V. If, under the addition and scalar multiplication that is defined on V, W is also a vector space then we call W a subspace of V.

Now, technically, if we wanted to show that a subset W of a vector space V was a subspace we'd need to show that all 10 of the axioms from the definition of a vector space are valid; however, in reality that doesn't need to be done.

Many of the axioms (c, d, g, h, i, and j) deal with how addition and scalar multiplication work, but W is inheriting the definition of addition and scalar multiplication from V. Therefore, since elements of W are also elements of V, the six axioms listed above are guaranteed to be valid on W. The only ones that we really need to worry about are the remaining four, all of which require something to be in the subset W. The first two (a and b) are the closure axioms that require that the sum of any two elements from W is back in W and that the scalar multiple of any element from W will be back in W. Note that the sum and scalar multiple will be in V; we just don't know if they will be in W. We also need to verify that the zero vector (axiom e) is in W and that each element of W has a negative that is also in W (axiom f). As the following theorem shows however, the only two axioms that we really need to worry about are the two closure axioms. Once we have those two axioms valid, we will get the zero vector and negative vector for free.

Theorem 1 Suppose that W is a non-empty (i.e. at least one element in it) subset of the vector space V. Then W will be a subspace if the following two conditions are true.
(a) If u and v are in W then u + v is also in W (i.e. W is closed under addition).
(b) If u is in W and c is any scalar then cu is also in W (i.e. W is closed under scalar multiplication).
Here the definition of addition and scalar multiplication on W are the same as on V.

Proof : To prove this theorem all we need to do is show that if we assume the two closure axioms are valid the other 8 axioms will be given to us for free. As we discussed above, the axioms c, d, g, h, i, and j are true simply based on the fact that W is a subset of V and it uses the same addition and scalar multiplication, and so we get these for free. We only need to verify that, assuming the two closure conditions, we get axioms e and f as well.

From the second condition above we see that we are assuming that W is closed under scalar multiplication and so both 0u and (-1)u must be in W, but from Theorem 1 from the previous section we know that,

0u = 0        (-1)u = -u

But this means that the zero vector and the negative of u must be in W and so we're done.

Be careful with this proof. On the surface it may look like we never used the first condition of closure under addition, and we didn't use that to show that axioms e and f were valid. However, in order for W to be a vector space it must be closed under addition, and so without that first condition we can't know whether or not W is in fact a vector space. Therefore, even though we didn't explicitly use it in the proof, it was required in order to guarantee that we have a vector space.

Next we should acknowledge the following fact.

Fact Every vector space, V, has at least two subspaces. Namely, V itself and W = {0} (the zero space).

Because V can be thought of as a subset of itself we can also think of it as a subspace of itself. Also, the zero space, which is the vector space consisting only of the zero vector, W = {0}, is a subset of V and is a vector space in its own right, and so will be a subspace of V.

At this point we should probably take a look at some examples. In all of these examples we assume that the standard addition and scalar multiplication are being used in each case unless otherwise stated.
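Because only the two closure conditions need to be checked, a randomized spot-check can quickly expose a subset that is not a subspace (it can disprove closure, never prove it). Here is a sketch of such a check, assuming Python with NumPy; the function names are ours and the membership test previews Example 1(a) below:

    import numpy as np

    rng = np.random.default_rng(0)

    def member(p):
        # The set W of points (x, y) in R^2 with x >= 0.
        return p[0] >= 0

    def closure_counterexample(is_member, dim, trials=1000):
        # Search randomly for a violation of either closure condition.
        for _ in range(trials):
            u = rng.uniform(-5, 5, dim)
            v = rng.uniform(-5, 5, dim)
            c = rng.uniform(-5, 5)
            if is_member(u) and is_member(v) and not is_member(u + v):
                return ("addition fails", u, v)
            if is_member(u) and not is_member(c * u):
                return ("scalar multiplication fails", c, u)
        return None

    # Reports a scalar multiplication failure: a negative c times a point
    # with x > 0 lands outside W, exactly as argued in Example 1(a) below.
    print(closure_counterexample(member, 2))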

Example 1 Determine if the given set is a subspace of the given vector space.
(a) Let W be the set of all points, (x, y), from R^2 in which x ≥ 0. Is this a subspace of R^2?
(b) Let W be the set of all points from R^3 of the form (0, x2, x3). Is this a subspace of R^3?
(c) Let W be the set of all points from R^3 of the form (1, x2, x3). Is this a subspace of R^3?

Solution
In each of these cases we need to show either that the set is closed under addition and scalar multiplication or that it is not closed for at least one of those.

(a) Let W be the set of all points, (x, y), from R^2 in which x ≥ 0. Is this a subspace of R^2?

This set is closed under addition because,

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)

and since x1, x2 ≥ 0 we also have x1 + x2 ≥ 0, and so the resultant point is back in W. However, this set is not closed under scalar multiplication. Let c be any negative scalar and further assume that x > 0. Then,

c(x, y) = (cx, cy)

Because x > 0 and c < 0 we must have cx < 0, and so the resultant point is not in W because the first component is neither zero nor positive. Therefore, W is not a subspace of R^2.

(b) Let W be the set of all points from R^3 of the form (0, x2, x3). Is this a subspace of R^3?

This one is fairly simple to check: a point will be in W if the first component is zero. So, let x = (0, x2, x3) and y = (0, y2, y3) be any two points in W and let c be any scalar. Then,

x + y = (0, x2, x3) + (0, y2, y3) = (0, x2 + y2, x3 + y3)
cx = (0, cx2, cx3)

So, both x + y and cx are in W, and so W is closed under addition and scalar multiplication. W is therefore a subspace.

(c) Let W be the set of all points from R^3 of the form (1, x2, x3). Is this a subspace of R^3?

This one is here just to keep us from making any assumptions based on the previous part. This set is closed under neither addition nor scalar multiplication. In order for points to be in W in this

case the first component must be a 1. However, if x = (1, x2, x3) and y = (1, y2, y3) are any two points in W and c is any scalar other than 1 we get,

x + y = (1, x2, x3) + (1, y2, y3) = (2, x2 + y2, x3 + y3)
cx = (c, cx2, cx3)

Neither of which is in W, and so W is not a subspace.

Example 2 Determine if the given set is a subspace of the given vector space.
(a) Let W be the set of diagonal matrices of size n x n. Is this a subspace of Mnn?
(b) Let W be the set of matrices of the form [0 a12; a21 a22; a31 a32]. Is this a subspace of M32?
(c) Let W be the set of matrices of the form [2 a12; 0 a22]. Is this a subspace of M22?

Solution
(a) Let W be the set of diagonal matrices of size n x n. Is this a subspace of Mnn?

Let u and v be any two n x n diagonal matrices and c be any scalar. Then,

u + v = [u1 0 ... 0; 0 u2 ... 0; ... ; 0 0 ... un] + [v1 0 ... 0; 0 v2 ... 0; ... ; 0 0 ... vn]
      = [u1+v1 0 ... 0; 0 u2+v2 ... 0; ... ; 0 0 ... un+vn]
cu = [cu1 0 ... 0; 0 cu2 ... 0; ... ; 0 0 ... cun]

Both u + v and cu are also diagonal matrices, and so W is closed under addition and scalar multiplication and so is a subspace of Mnn.

(b) Let W be the set of matrices of the form [0 a12; a21 a22; a31 a32]. Is this a subspace of M32?

Let u and v be any two matrices from W and c be any scalar. Then,

u + v = [0 u12; u21 u22; u31 u32] + [0 v12; v21 v22; v31 v32] = [0 u12+v12; u21+v21 u22+v22; u31+v31 u32+v32]
cu = [0 cu12; cu21 cu22; cu31 cu32]

Both u + v and cu are also in W, and so W is closed under addition and scalar multiplication and hence is a subspace of M32.

(c) Let W be the set of matrices of the form [2 a12; 0 a22]. Is this a subspace of M22?

Let u and v be any two matrices from W. Then,

u + v = [2 u12; 0 u22] + [2 v12; 0 v22] = [4 u12+v12; 0 u22+v22]

So, u + v isn't in W since the entry in the first row and first column isn't a 2. Therefore, W is not closed under addition. You should also verify for yourself that W is not closed under scalar multiplication either. In either case W is not a subspace of M22.

Do not read too much into the result from part (c) of this example. In general the set of upper triangular n x n matrices (without restrictions, unlike part (c) from above) is a subspace of Mnn, and the set of lower triangular n x n matrices is also a subspace of Mnn. You should verify this for the practice.
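The closure failure in part (c) can also be confirmed by direct computation. A quick sketch, assuming Python with NumPy; the particular a12 and a22 entries below are our own choices:

    import numpy as np

    # Two matrices from the set W of part (c): the (1,1) entry is fixed at 2.
    u = np.array([[2.0, 3.0], [0.0, 1.0]])
    v = np.array([[2.0, -5.0], [0.0, 7.0]])

    print((u + v)[0, 0])   # 4.0 : not 2, so u + v has left W
    print((3 * u)[0, 0])   # 6.0 : scalar multiples leave W as well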

Example 3 Determine if the given set is a subspace of the given vector space.
(a) Let C[a, b] be the set of all continuous functions on the interval [a, b]. Is this a subspace of F[a, b], the set of all real valued functions on the interval [a, b]?
(b) Let Pn be the set of all polynomials of degree n or less. Is this a subspace of F[a, b]?
(c) Let W be the set of all polynomials of degree exactly n. Is this a subspace of F[a, b]?
(d) Let W be the set of all functions such that f(6) = 10. Is this a subspace of F[a, b], where we have a ≤ 6 ≤ b?

Solution
(a) Let C[a, b] be the set of all continuous functions on the interval [a, b]. Is this a subspace of F[a, b], the set of all real valued functions on the interval [a, b]?

Okay, if you've not had Calculus you may not know what a continuous function is. A quick and dirty definition of continuity (not mathematically correct, but useful if you haven't had Calculus) is that a function is continuous on [a, b] if there are no holes or breaks in the graph. Put another way: you can sketch the graph of the function from a to b without ever picking up your pencil or pen.

A fact from Calculus (which, if you haven't had Calculus, please just believe) is that the sum of two continuous functions is continuous, and multiplying a continuous function by a constant will give a new continuous function. So, what this fact tells us is that the set of continuous functions is closed under standard function addition and scalar multiplication, and that is what we're working with here. So, C[a, b] is a subspace of F[a, b].

(b) Let Pn be the set of all polynomials of degree n or less. Is this a subspace of F[a, b]?

First recall that a polynomial is said to have degree n if its largest exponent is n. Okay, let u = a_n x^n + ... + a_1 x + a_0 and v = b_n x^n + ... + b_1 x + b_0 and let c be any scalar. Then,

u + v = (a_n + b_n) x^n + ... + (a_1 + b_1) x + (a_0 + b_0)
cu = c a_n x^n + ... + c a_1 x + c a_0

In both cases the degree of the new polynomial is not greater than n. Of course, in the case of scalar multiplication it will remain degree n (provided c isn't zero), but with the sum it is possible that some of the coefficients cancel out to zero and hence reduce the degree of the polynomial. The point is that Pn is closed under addition and scalar multiplication and so will be a subspace of F[a, b].

(c) Let W be the set of all polynomials of degree exactly n. Is this a subspace of F[a, b]?

In this case W is not closed under addition. To see this, let's take a look at the n = 2 case to keep things simple (the same argument will work for other values of n) and consider the following two polynomials,

u = ax^2 + bx + c        v = -ax^2 + dx + e

where a is not zero; we know this is true because each polynomial must have degree 2. The other constants may or may not be zero. Both are polynomials of exactly degree 2 (since a is not zero) and if we add them we get,

u + v = (b + d)x + c + e

So, the sum has degree 1 and so is not in W. Therefore, for n = 2, W is not closed under addition. We looked at n = 2 only to make it somewhat easier to write down the two example polynomials. We could just as easily have done the work for general n and we'd get the same result, and so W is not a subspace.

(d) Let W be the set of all functions such that f(6) = 10. Is this a subspace of F[a, b], where we have a ≤ 6 ≤ b?

First notice that if we don't have a ≤ 6 ≤ b then this problem makes no sense, so we will assume that a ≤ 6 ≤ b. In this case suppose that we have two elements from W, f = f(x) and g = g(x). This means that f(6) = 10 and g(6) = 10. In order for W to be a subspace we'll need to show that the sum and a scalar multiple will also be in W. In other words, if we evaluate the sum or the scalar multiple at 6 we'll need to get a result of 10. However, this won't happen. Let's take a look at the sum. The sum is,

(f + g)(6) = f(6) + g(6) = 10 + 10 = 20 ≠ 10

and so the sum will not be in W. Likewise, if c is any scalar that isn't 1 we'll have,

(cf)(6) = c f(6) = c(10) ≠ 10

and so the scalar multiple is not in W either. Therefore, W is not closed under addition or scalar multiplication and so is not a subspace.

Before we move on let's make a couple of observations about some of the sets we looked at in this example.

First, we should just point out that the set of all continuous functions on the interval [a, b], C[a, b], is a fairly important vector space in its own right to many areas of mathematical study.

Next, we saw that the set of all polynomials of degree less than or equal to n, Pn, was a subspace of F[a, b]. However, if you've had Calculus you'll know that polynomials are continuous, and so Pn can also be thought of as a subspace of C[a, b] as well. In other words, subspaces can themselves have subspaces.

Finally, here is something for you to think about. In the last part we saw that the set of all functions for which f(6) = 10 was not a subspace of F[a, b] with a ≤ 6 ≤ b. Let's take a more general look at this. For some fixed number k let W be the set of all real valued functions for which f(6) = k. Are there any values of k for which W will be a subspace of F[a, b] with a ≤ 6 ≤ b? Go back and think about how we did the work for that part and that should show you

that there is one value of k (and only one) for which W will be a subspace. Can you figure out what that number has to be?

We now need to look at a fairly important subspace of R^m that we'll be seeing in future sections.

Definition 2 Suppose A is an n x m matrix. The null space of A is the set of all x in R^m such that Ax = 0.

Let's see some examples of null spaces that are easy to find.

Example 4 Determine the null space of each of the following matrices.
(a) A = [2 0; -4 10]
(b) B = [1 -7; -3 21]
(c) 0 = [0 0; 0 0]

Solution
(a) A = [2 0; -4 10]

To find the null space of A we'll need to solve the following system of equations.

[2 0; -4 10][x1; x2] = [0; 0]        2x1 = 0,  -4x1 + 10x2 = 0

We've given this in both matrix form and equation form. In equation form it is easy to see that the only solution is x1 = x2 = 0. In terms of vectors from R^2 the solution consists of the single vector {0}, and hence the null space of A is {0}.

(b) B = [1 -7; -3 21]

Here is the system that we need to solve for this part.

[1 -7; -3 21][x1; x2] = [0; 0]        x1 - 7x2 = 0,  -3x1 + 21x2 = 0

Now, we can see that these two equations are in fact the same equation, and so we know there will be infinitely many solutions and that they will have the form,

x1 = 7t        x2 = t        t is any real number

If you need a refresher on solutions to systems take a look at the first section of the first chapter.

So, since the null space of B consists of all the solutions to Bx = 0, the null space of B will consist of all the vectors x = (x1, x2) from R^2 that are in the form,

x = (7t, t) = t(7, 1)        t is any real number

We'll see a better way to write this answer in the next section. In terms of equations, rather than vectors in R^2, let's note that the null space of B will be all of the points that are on the line through the origin given by x1 - 7x2 = 0.

(c) 0 = [0 0; 0 0]

In this case we're going to be looking for solutions to

[0 0; 0 0][x1; x2] = [0; 0]

However, if you think about it, every vector x in R^2 will be a solution to this system since we are multiplying x by the zero matrix. Hence the null space of 0 is all of R^2.

To see some examples of a more complicated null space check out Example 7 from the section on Basis and Example 2 in the Fundamental Subspaces section. Both of these examples have more going on in them, but the first step is to write down the null space of a matrix, so you can check out the first step of the examples and then ignore the remainder of the examples.

Now, let's go back and take a look at all the null spaces that we saw in the previous example. The null space for the first matrix was {0}. For the second matrix the null space was the line through the origin given by x1 - 7x2 = 0. The null space for the zero matrix was all of R^2. Thinking back to the early parts of this section we can see that all of these are in fact subspaces of R^2. In fact, this will always be the case, as the following theorem shows.

Theorem 2 Suppose that A is an n x m matrix. Then the null space of A will be a subspace of R^m.

Proof : We know that the null space of A consists of all the solutions to the system Ax = 0. First, we should point out that the zero vector, 0, in R^m will be a solution to this system, and so we know that the null space is not empty. This is a good thing since a vector space (subspace or not) must contain at least one element.

Now that we know that the null space is not empty, let x and y be two elements from the null space and let c be any scalar. We just need to show that the sum and scalar multiple of these are also in the null space and we'll be done.

Let's start with the sum.

A(x + y) = Ax + Ay = 0 + 0 = 0

The sum, x + y, is a solution to Ax = 0 and so is in the null space. The null space is therefore closed under addition.

Next, let's take a look at the scalar multiple.

A(cx) = cAx = c0 = 0

The scalar multiple is also in the null space, and so the null space is closed under scalar multiplication.

Therefore the null space is a subspace of R^m.
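For reference, a computer algebra system will produce these null spaces directly. Here is a sketch using SymPy (our choice of tool; the matrices are the ones from Example 4):

    from sympy import Matrix

    A = Matrix([[2, 0], [-4, 10]])
    B = Matrix([[1, -7], [-3, 21]])
    Z = Matrix([[0, 0], [0, 0]])

    print(A.nullspace())   # []                   : only the zero vector
    print(B.nullspace())   # [Matrix([[7], [1]])] : the line t*(7, 1)
    print(Z.nullspace())   # two basis vectors    : all of R^2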

Span

In this section we will cover a topic that we'll see off and on over the course of this chapter. Let's start off by going back to part (b) of Example 4 from the previous section. In that example we saw that the null space of the given matrix consisted of all the vectors of the form

x = (7t, t) = t(7, 1)        t is any real number

We would like a more compact way of stating this result, and by the end of this section we'll have that.

Let's first revisit an idea that we saw quite some time ago. In the section on Matrix Arithmetic we looked at linear combinations of matrices and columns of matrices. We can also talk about linear combinations of vectors.

Definition 1 We say the vector w from the vector space V is a linear combination of the vectors v1, v2, ..., vn, all from V, if there are scalars c1, c2, ..., cn so that w can be written

w = c1v1 + c2v2 + ... + cnvn

So, we can see that the null space we were looking at above is in fact all the linear combinations of the vector (7, 1). It may seem strange to talk about linear combinations of a single vector, since that is really just scalar multiplication, but we can think of it as that if we need to.

The null space above was not the first time that we've seen linear combinations of vectors however. When we were looking at Euclidean n-space we introduced these things called the standard basis vectors. The standard basis vectors for R^n were defined as,

e1 = (1, 0, 0, ..., 0)        e2 = (0, 1, 0, ..., 0)        ...        en = (0, 0, 0, ..., 1)

We saw that we could take any vector u = (u1, u2, ..., un) from R^n and write it as,

u = u1e1 + u2e2 + ... + unen

Or, in other words, we could write u as a linear combination of the standard basis vectors, e1, e2, ..., en. We will be revisiting this idea again in a couple of sections, but the point here is simply that we've seen linear combinations of vectors prior to us actually discussing them here.

Let's take a look at an example or two.

Example 1 Determine if the vector is a linear combination of the two given vectors.
(a) Is w = (-12, 20) a linear combination of v1 = (-1, 2) and v2 = (4, -6)?
(b) Is w = (4, 20) a linear combination of v1 = (2, 10) and v2 = (-3, -15)?
(c) Is w = (1, -4) a linear combination of v1 = (2, 10) and v2 = (-3, -15)?

Solution
(a) Is w = (-12, 20) a linear combination of v1 = (-1, 2) and v2 = (4, -6)?

In each of these cases we'll need to set up and solve the following equation,

w = c1v1 + c2v2
(-12, 20) = c1(-1, 2) + c2(4, -6)

Then set coefficients equal to arrive at the following system of equations,

-c1 + 4c2 = -12
2c1 - 6c2 = 20

If the system is consistent (i.e. has at least one solution) then w is a linear combination of the two vectors. If there is no solution then w is not a linear combination of the two vectors.

We'll leave it to you to verify that the solution to this system is c1 = 4 and c2 = -2. Therefore, w is a linear combination of v1 and v2 and we can write w = 4v1 - 2v2.

(b) Is w = (4, 20) a linear combination of v1 = (2, 10) and v2 = (-3, -15)?

For this part we'll need to do the same kind of thing, so here is the system.

2c1 - 3c2 = 4
10c1 - 15c2 = 20

The solution to this system is,

c1 = 2 + (3/2)t        c2 = t        t is any real number

This means w is a linear combination of v1 and v2. However, unlike the previous part there are literally an infinite number of ways in which we can write the linear combination. So, any of the following combinations would work, for instance.

w = 2v1 + (0)v2        w = 8v1 + 4v2        w = -v1 - 2v2

There are of course many more. These are just a few of the possibilities.

(c) Is w = (1, -4) a linear combination of v1 = (2, 10) and v2 = (-3, -15)?

Here is the system we'll need to solve for this part.

2c1 - 3c2 = 1
10c1 - 15c2 = -4

This system does not have a solution and so w is not a linear combination of v1 and v2.
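Each part of this example is just a consistency question for a small linear system, so it is easy to automate. A sketch using SymPy's linsolve (our choice of tool), applied to parts (a) and (c):

    from sympy import Matrix, linsolve, symbols

    c1, c2 = symbols('c1 c2')

    # Part (a): w = (-12, 20), v1 = (-1, 2), v2 = (4, -6); augmented matrix.
    print(linsolve(Matrix([[-1, 4, -12], [2, -6, 20]]), c1, c2))
    # {(4, -2)} : consistent, so w = 4*v1 - 2*v2

    # Part (c): w = (1, -4), v1 = (2, 10), v2 = (-3, -15).
    print(linsolve(Matrix([[2, -3, 1], [10, -15, -4]]), c1, c2))
    # EmptySet : no solution, so w is not a linear combination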

So, this example was kept fairly simple, but if we add in more components and/or more vectors to the set the problem will work in essentially the same manner.

Now that we've seen how linear combinations work and how to tell if a vector is a linear combination of a set of other vectors, we need to move into the real topic of this section. In the opening of this section we recalled a null space that we'd looked at in the previous section. We can now see that the null space from that example is nothing more than all the linear combinations of the vector (7, 1) (and again, it is kind of strange to be talking about linear combinations of a single vector). As pointed out at the time, we're after a more compact notation for denoting this. It is now time to give that notation.

Definition 2 Let S = {v1, v2, ..., vn} be a set of vectors in a vector space V and let W be the set of all linear combinations of the vectors v1, v2, ..., vn. The set W is the span of the vectors v1, v2, ..., vn and is denoted by

W = span(S)        OR        W = span{v1, v2, ..., vn}

We also say that the vectors v1, v2, ..., vn span W.

So, with this notation we can now see that the null space that we examined at the start of this section is nothing more than,

span{(7, 1)}

Before we move on to some examples we should get a nice theorem out of the way.

Theorem 1 Let v1, v2, ..., vn be vectors in a vector space V and let their span be W = span{v1, v2, ..., vn}. Then,
(a) W is a subspace of V.
(b) W is the smallest subspace of V that contains all of the vectors v1, v2, ..., vn.

Proof :
(a) So, we need to show that W is closed under addition and scalar multiplication. Let u and w be any two vectors from W. Now, since W is the set of all linear combinations of v1, v2, ..., vn that means that both u and w must be a linear combination of these vectors. So, there are scalars c1, c2, ..., cn and k1, k2, ..., kn so that,

u = c1v1 + c2v2 + ... + cnvn        and        w = k1v1 + k2v2 + ... + knvn

Now, let's take a look at the sum.

u + w = (c1 + k1)v1 + (c2 + k2)v2 + ... + (cn + kn)vn

So the sum, u + w, is a linear combination of the vectors v1, v2, ..., vn and hence must be in W, and so W is closed under addition.

Now, let k be any scalar and let's take a look at,

ku = (kc1)v1 + (kc2)v2 + ... + (kcn)vn

As we can see, the scalar multiple, ku, is a linear combination of the vectors v1, v2, ..., vn and hence must be in W, and so W is closed under scalar multiplication. Therefore, W must be a vector space.

(b) In these cases when we say that W is the smallest vector space that contains the set of vectors v1, v2, ..., vn we're really saying that if W' is also a vector space that contains v1, v2, ..., vn then it will also contain a complete copy of W as well.

So, let's start this off by noticing that W does in fact contain each of the vi's since,

vi = 0v1 + 0v2 + ... + 1vi + ... + 0vn

Now, let W' be a vector space that contains v1, v2, ..., vn and consider any vector u from W. If we can show that u must also be in W' then we'll have shown that W' contains a copy of W, since it will contain all the vectors in W. Now, u is in W and so must be a linear combination of v1, v2, ..., vn,

u = c1v1 + c2v2 + ... + cnvn

Each of the terms in this sum, civi, is a scalar multiple of a vector that is in W', and since W' is a vector space it must be closed under scalar multiplication, so each civi is in W'. But this means that u is the sum of a bunch of vectors that are in W', which is closed under addition, and so that means that u must in fact be in W'. We've now shown that W' contains every vector from W and so must contain W itself.

Now, let's take a look at some examples of spans.

Example 2 Describe the span of each of the following sets of vectors.
(a) v1 = (1, 0, 0) and v2 = (0, 1, 0).
(b) v1 = (1, 0, 1, 0) and v2 = (0, 1, 0, 1).

Solution
(a) The span of this set of vectors, span{v1, v2}, is the set of all linear combinations, and we can write down a general linear combination for these two vectors.

av1 + bv2 = (a, 0, 0) + (0, b, 0) = (a, b, 0)

So, it looks like span{v1, v2} will be all of the vectors from R^3 that are in the form (a, b, 0) for any choices of a and b.

(b) This one is fairly similar to the first one. A general linear combination will look like,

av1 + bv2 = (a, 0, a, 0) + (0, b, 0, b) = (a, b, a, b)

So, span{v1, v2} will be all the vectors from R^4 of the form (a, b, a, b) for any choices of a and b.

Example 3 Describe the span of each of the following sets of vectors.
(a) v1 = [1 0; 0 0] and v2 = [0 0; 0 1]
(b) v1 = 1, v2 = x, and v3 = x^3

Solution
These work exactly the same as the previous set of examples worked. The only difference is that this time we aren't working in R^n.

(a) Here is a general linear combination of these vectors.

av1 + bv2 = [a 0; 0 0] + [0 0; 0 b] = [a 0; 0 b]

Here it looks like span{v1, v2} will be all the diagonal matrices in M22.

(b) A general linear combination in this case is,

av1 + bv2 + cv3 = a + bx + cx^3

In this case span{v1, v2, v3} will be all the polynomials from P3 that do not have a quadratic term.

Now, let's see if we can determine a set of vectors that will span some of the common vector spaces that we've seen. What we'll need in each of these examples is a set of vectors with which we can write a general vector from the space as a linear combination of the vectors in the set.

Example 4 Determine a set of vectors that will exactly span each of the following vector spaces.
(a) R^n
(b) M22
(c) Pn

Solution
Okay, before we start this let's think about just what we need to show here. We'll need to find a set of vectors so that the span of that set will be exactly the space given. In other words, we need to show that the span of our proposed set of vectors is in fact the same set as the vector space.

So just what do we need to do to mathematically show that two sets are equal? Let's suppose that we want to show that A and B are equal sets. To do this we'll need to show that each a in A will be in B, and in doing so we'll have shown that B will at the least contain all of A. Likewise, we'll need to show that each b in B will be in A, and in doing that we'll have shown that A will contain all of B. However, the only way that A can contain all of B and B can contain all of A is for A and

B to be the same set.

So, for our example we'll need to determine a possible set of spanning vectors and then show that every vector from our vector space is in the span of our set of vectors. Next we'll need to show that each vector in our span will also be in the vector space.

(a) R^n

We've pretty much done this one already. Earlier in the section we showed that any vector from R^n can be written as a linear combination of the standard basis vectors, e1, e2, ..., en, and so at the least the span of the standard basis vectors will contain all of R^n. However, since any linear combination of the standard basis vectors is going to be a vector in R^n, we can see that R^n must also contain the span of the standard basis vectors. Therefore, the span of the standard basis vectors must be R^n.

(b) M22

We can use the result of Example 3(a) above as a guide here. In that example we saw a set of matrices that would span all the diagonal matrices in M22, and so we can do a natural extension to get a set that will span all of M22. It looks like the following set should do it.

v1 = [1 0; 0 0]        v2 = [0 0; 1 0]        v3 = [0 1; 0 0]        v4 = [0 0; 0 1]

Clearly any linear combination of these four matrices will be a 2 x 2 matrix and hence in M22, and so the span of these matrices must be contained in M22.

Likewise, given any matrix from M22,

A = [a c; b d]

we can write it as the following linear combination of these vectors.

A = [a c; b d] = av1 + bv2 + cv3 + dv4

So M22 must be contained in the span of these vectors, and so these vectors will span M22.

(c) Pn

We can use Example 3(b) to help with this one. First recall that Pn is the set of all polynomials of degree n or less. Using Example 3(b) as a guide, it looks like the following set of vectors will work for us.

v0 = 1,  v1 = x,  v2 = x^2,  ...,  vn = x^n

Note that we used subscripts that matched the degree of the term and so started at v0 instead of the usual v1. It should be clear (hopefully) that a linear combination of these is a polynomial of degree n or less and so will be in Pn. Therefore the span of these vectors will be contained in Pn. Likewise, we can write a general polynomial of degree n or less,

p = a0 + a1 x + ... + an x^n

as the following linear combination

p = a0 v0 + a1 v1 + ... + an vn

Therefore Pn is contained in the span of these vectors, and this means that the span of these vectors is exactly Pn.

There is one last idea about spans that we need to discuss, and it's best illustrated with an example.

Example 5 Determine if the following sets of vectors will span R^3.
(a) v1 = (2, 0, 1), v2 = (-1, 3, 4), and v3 = (1, 1, -2).
(b) v1 = (1, 2, -1), v2 = (3, -1, 1), and v3 = (-3, 8, -5).

Solution
(a) v1 = (2, 0, 1), v2 = (-1, 3, 4), and v3 = (1, 1, -2).

Okay, let's think about how we've got to approach this. Clearly the span of these vectors will be in R^3 since they are vectors from R^3. The real question is whether or not R^3 will be contained in the span of these vectors, span{v1, v2, v3}.

In the previous example our sets of vectors made it easy to show this. However, in this case it's not so clear. So to answer the question here we'll do the following. Choose a general vector from R^3, u = (u1, u2, u3), and determine if we can find scalars c1, c2, and c3 so that u is a linear combination of the given vectors. Or,

(u1, u2, u3) = c1(2, 0, 1) + c2(-1, 3, 4) + c3(1, 1, -2)

If we set components equal we arrive at the following system of equations,

2c1 - c2 + c3 = u1
3c2 + c3 = u2
c1 + 4c2 - 2c3 = u3

In matrix form this is,

[2 -1 1; 0 3 1; 1 4 -2][c1; c2; c3] = [u1; u2; u3]

What we need to do is to determine if this system will be consistent (i.e. have at least one solution) for every possible choice of u = (u1, u2, u3). Nicely enough, this is very easy to do if you recall Theorem 9 from the section on Determinant Properties. This theorem tells us that this system will be consistent for every choice of u = (u1, u2, u3) provided the coefficient matrix is invertible, and we can check that by doing a quick determinant computation. So, if we denote the coefficient matrix as A, we'll leave it to you to verify that det(A) = -24.

Therefore the coefficient matrix is invertible and so this system will have a solution for every choice of u = (u1, u2, u3). This in turn tells us that R^3 is contained in span{v1, v2, v3}, and so we've now shown that

span{v1, v2, v3} = R^3

(b) v1 = (1, 2, -1), v2 = (3, -1, 1), and v3 = (-3, 8, -5).

We'll do this one a little quicker. As with the first part, let's choose a general vector u = (u1, u2, u3) from R^3 and form up the system that we need to solve. We'll leave it to you to verify that the matrix form of this system is,

[1 3 -3; 2 -1 8; -1 1 -5][c1; c2; c3] = [u1; u2; u3]

This system will have a solution for every choice of u = (u1, u2, u3) if the coefficient matrix, A, is invertible. However, in this case we have det(A) = 0 (you should verify this), and so the coefficient matrix is not invertible.

This in turn tells us that there is at least one choice of u = (u1, u2, u3) for which this system will not have a solution, and so that u = (u1, u2, u3) cannot be written as a linear combination of these three vectors. Note that there are in fact infinitely many choices of u = (u1, u2, u3) that will not yield solutions!

Now, we know that span{v1, v2, v3} is contained in R^3, but we've just shown that there is at least one vector from R^3 that is not contained in span{v1, v2, v3}, and so the span of these three vectors will not be all of R^3.
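Numerically the span test in this example is a one-line determinant. A sketch, assuming Python with NumPy; the columns of each matrix are the vectors from parts (a) and (b):

    import numpy as np

    A = np.array([[2, -1, 1], [0, 3, 1], [1, 4, -2]])     # part (a)
    B = np.array([[1, 3, -3], [2, -1, 8], [-1, 1, -5]])   # part (b)

    print(np.linalg.det(A))   # -24 (up to rounding) : invertible, spans R^3
    print(np.linalg.det(B))   #   0                  : singular, does not span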

This example has shown us two things. First, it has shown us that we can't just write down any set of three vectors and expect those three vectors to span R^3. This is an idea we're going to be looking at in much greater detail in the next couple of sections.

Secondly, we've now seen at least two different sets of vectors that will span R^3. There are the three vectors from Example 5(a) as well as the standard basis vectors for R^3. This tells us that the set of vectors that will span a vector space is not unique. In other words, we can have more than one set of vectors span the same vector space.

Linear Independence

In the previous section we saw several examples of writing a particular vector as a linear combination of other vectors. However, as we saw in Example 1(b) of that section, there is sometimes more than one linear combination of the same set of vectors that can be used for a given vector. We also saw in the previous section that some sets of vectors, S = {v1, v2, ..., vn}, can span a vector space. Recall that by span we mean that every vector in the space can be written as a linear combination of the vectors in S. In this section we'd like to start looking at when it will be possible to express a given vector from a vector space as exactly one linear combination of the set S.

We'll start this section off with the following definition.

Definition 1 Suppose S = {v1, v2, ..., vn} is a non-empty set of vectors and form the vector equation,

c1v1 + c2v2 + ... + cnvn = 0

This equation has at least one solution, namely, c1 = 0, c2 = 0, ..., cn = 0. This solution is called the trivial solution.

If the trivial solution is the only solution to this equation then the vectors in the set S are called linearly independent and the set is called a linearly independent set. If there is another solution then the vectors in the set S are called linearly dependent and the set is called a linearly dependent set.

Let's take a look at some examples.

Example 1 Determine if each of the following sets of vectors are linearly independent or linearly dependent.
(a) v1 = (3, -1) and v2 = (-2, 2).
(b) v1 = (12, -8) and v2 = (-9, 6).
(c) v1 = (1, 0, 0), v2 = (0, 1, 0), and v3 = (0, 0, 1).
(d) v1 = (2, -2, 4), v2 = (3, -5, 4), and v3 = (0, 1, 1).

Solution
To answer the question here we'll need to set up the equation

c1v1 + c2v2 + ... + cnvn = 0

for each part, combine the left side into a single vector, and then set all the components of the vector equal to zero (since it must be the zero vector, 0). At this point we've got a system of equations that we can solve. If we only get the trivial solution the vectors will be linearly independent, and if we get more than one solution the vectors will be linearly dependent.

(a) v1 = (3, -1) and v2 = (-2, 2).

We'll do this one in detail and then do the remaining parts quicker. We'll first set up the equation and get the left side combined into a single vector.

c1(3, -1) + c2(-2, 2) = 0
(3c1 - 2c2, -c1 + 2c2) = (0, 0)

Now, set each of the components equal to zero to arrive at the following system of equations.

3c1 - 2c2 = 0
-c1 + 2c2 = 0

Solving this system gives the following solution (we'll leave it to you to verify this),

c1 = 0        c2 = 0

The trivial solution is the only solution, and so these two vectors are linearly independent.

(b) v1 = (12, -8) and v2 = (-9, 6).

Here is the vector equation we need to solve.

c1(12, -8) + c2(-9, 6) = 0

The system of equations that we'll need to solve is,

12c1 - 9c2 = 0
-8c1 + 6c2 = 0

and the solution to this system is,

c1 = (3/4)t        c2 = t        t is any real number

We've got more than the trivial solution (note however that the trivial solution IS still a solution, there's just more than that this time), and so these vectors are linearly dependent.

(c) v1 = (1, 0, 0), v2 = (0, 1, 0), and v3 = (0, 0, 1).

The only difference between this one and the previous two is the fact that we now have three vectors out of R^3. Here is the vector equation for this part.

c1(1, 0, 0) + c2(0, 1, 0) + c3(0, 0, 1) = 0

The system of equations to solve for this part is,

c1 = 0        c2 = 0        c3 = 0

So, not much solving to do this time. It is clear that the only solution will be the trivial solution, and so these vectors are linearly independent.

(d) v1 = (2, -2, 4), v2 = (3, -5, 4), and v3 = (0, 1, 1).

Here is the vector equation for this final part.

c1(2, -2, 4) + c2(3, -5, 4) + c3(0, 1, 1) = 0

The system of equations that we'll need to solve here is,

2c1 + 3c2 = 0
-2c1 - 5c2 + c3 = 0
4c1 + 4c2 + c3 = 0

The solution to this system is,

c1 = -(3/4)t        c2 = (1/2)t        c3 = t        t is any real number

We've got more than just the trivial solution, and so these vectors are linearly dependent.

Note that we didn't really need to solve any of the systems above if we didn't want to. All we were interested in was whether or not the system had only the trivial solution or if there were more solutions in addition to the trivial solution. Theorem 9 from the Properties of the Determinant section can help us answer this question without solving the system. This theorem tells us that if the determinant of the coefficient matrix is non-zero then the system will have exactly one solution, namely the trivial solution. Likewise, it can be shown that if the determinant is zero then the system will have infinitely many solutions.

Therefore, once the system is set up, if the coefficient matrix is square all we really need to do is take the determinant of the coefficient matrix: if it is non-zero the set of vectors will be linearly independent, and if the determinant is zero the set of vectors will be linearly dependent. If the coefficient matrix is not square then we can't take the determinant, and so we'll have no choice but to solve the system. This does not mean, however, that the actual solution to the system isn't ever important, as we'll see towards the end of the section.

Before proceeding on we should point out that the vectors from part (c) of this example were actually the standard basis vectors for R^3. In fact, the standard basis vectors for R^n,

e1 = (1, 0, 0, ..., 0),  e2 = (0, 1, 0, ..., 0),  ...,  en = (0, 0, 0, ..., 1)

will be linearly independent.

The vectors in the previous example all had the same number of components as vectors, i.e. two vectors from R^2 or three vectors from R^3. We should work a couple of examples that do not fit this mold to make sure that you understand that we don't need to have the same number of vectors as components.
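Here is the determinant shortcut just described, applied to parts (a) and (d) of Example 1. This is a sketch of our own, assuming Python with NumPy; the vectors are placed in the matrices as columns:

    import numpy as np

    A = np.array([[3, -2], [-1, 2]])                     # part (a)
    D = np.array([[2, 3, 0], [-2, -5, 1], [4, 4, 1]])    # part (d)

    print(np.linalg.det(A))   # 4.0 : non-zero, so linearly independent
    print(np.linalg.det(D))   # 0.0 : zero, so linearly dependent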

Example 2 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) v1 = (1, -1), v2 = (-2, 1) and v3 = (4, -3).
(b) v1 = (2, 1), v2 = (-1, -3) and v3 = (4, 2).
(c) v1 = (1, -2, 1, -3), v2 = (1, -3, 0, 1) and v3 = (3, -8, 1, -1).
(d) v1 = (1, -1, 2, 4), v2 = (-2, 1, 4, 1) and v3 = (2, 1, -1, -2).

Solution
These will work in pretty much the same manner as the previous set of examples worked. Again, we'll do the first part in some detail and then leave it to you to verify the details in the remaining parts. Also, we'll not be showing the details of solving the systems of equations, so you should verify all the solutions for yourself.

(a) v1 = (1, -1), v2 = (-2, 1) and v3 = (4, -3).

Here is the vector equation we need to solve.

c1(1, -1) + c2(-2, 1) + c3(4, -3) = 0
(c1 - 2c2 + 4c3, -c1 + c2 - 3c3) = (0, 0)

The system of equations that we need to solve is,

c1 - 2c2 + 4c3 = 0
-c1 + c2 - 3c3 = 0

and this has the solution,

c1 = -2t        c2 = t        c3 = t        t is any real number

We've got more than the trivial solution, and so these vectors are linearly dependent. Note that we didn't really need to solve this system to know that they were linearly dependent. From Theorem 1 in the solving systems of equations section we know that if there are more unknowns than equations in a homogeneous system then we will have infinitely many solutions.

(b) v1 = (2, 1), v2 = (-1, -3) and v3 = (4, 2).

Here is the vector equation for this part.

c1(2, 1) + c2(-1, -3) + c3(4, 2) = 0

The system of equations we'll need to solve is,

2c1 - c2 + 4c3 = 0
c1 - 3c2 + 2c3 = 0

Now, technically we don't need to solve this system for the same reason we really didn't need to solve the system in the previous part. There are more unknowns than equations so the system

will have infinitely many solutions (so more than the trivial solution), and therefore the vectors will be linearly dependent. However, let's solve anyway, since there is an important idea we need to see in this part. Here is the solution.

c1 = -2t        c2 = 0        c3 = t        t is any real number

In this case one of the scalars was zero. There is nothing wrong with this. We still have solutions other than the trivial solution, and so these vectors are linearly dependent. Note what it does say, however: v1 and v3 are linearly dependent themselves regardless of v2.

(c) v1 = (1, -2, 1, -3), v2 = (1, -3, 0, 1) and v3 = (3, -8, 1, -1).

Here is the vector equation for this part.

c1(1, -2, 1, -3) + c2(1, -3, 0, 1) + c3(3, -8, 1, -1) = 0

The system of equations that we'll need to solve this time is,

c1 + c2 + 3c3 = 0
-2c1 - 3c2 - 8c3 = 0
c1 + c3 = 0
-3c1 + c2 - c3 = 0

The solution to this system is,

c1 = -t        c2 = -2t        c3 = t        t is any real number

We've got more solutions than the trivial solution, and so these three vectors are linearly dependent.

(d) v1 = (1, -1, 2, 4), v2 = (-2, 1, 4, 1) and v3 = (2, 1, -1, -2).

The vector equation for this part is,

c1(1, -1, 2, 4) + c2(-2, 1, 4, 1) + c3(2, 1, -1, -2) = 0

The system of equations is,

c1 - 2c2 + 2c3 = 0
-c1 + c2 + c3 = 0
2c1 + 4c2 - c3 = 0
4c1 + c2 - 2c3 = 0

This system has only the trivial solution, and so these three vectors are linearly independent.

We should make one quick remark about part (b) of this problem. In this case we had a set of three vectors and one of the scalars was zero. This will happen on occasion, and as noted this only means that the vectors with the zero scalars are not really required in order to make the set

linearly dependent. This part has shown that if you have a set of vectors and a subset is linearly dependent then the whole set will be linearly dependent.

Often the only way to determine if a set of vectors is linearly independent or linearly dependent is to set up a system as above and solve it. However, there are a couple of cases where we can get the answer just by looking at the set of vectors.

Theorem 1 A finite set of vectors that contains the zero vector will be linearly dependent.

Proof : This is a fairly simple proof. Let S = {0, v2, v3, ..., vn} be any set of vectors that contains the zero vector as shown. We can then set up the following equation.

1(0) + 0v2 + 0v3 + ... + 0vn = 0

We can see from this that we have a non-trivial solution to this equation, and so the set of vectors is linearly dependent.

Theorem 2 Suppose that S = {v1, v2, ..., vk} is a set of vectors in R^n. If k > n then the set of vectors is linearly dependent.

We're not going to prove this one, but we will outline the basic proof. In fact, we saw how to prove this theorem in parts (a) and (b) from Example 2. If we set up the system of equations corresponding to the equation,

c1v1 + c2v2 + ... + ckvk = 0

we will get a system of equations that has more unknowns than equations (you should verify this), and this means that the system will have infinitely many solutions. The vectors will therefore be linearly dependent.

To this point we've only seen examples of linear independence/dependence with sets of vectors in R^n. We should now take a look at some examples of vectors from some other vector spaces.

Example 3 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) v1 = [1 0; 0 0], v2 = [0 0; 1 0], and v3 = [0 0; 0 1].
(b) v1 = [1 2; 3 0] and v2 = [4 1; -2 0].
(c) v1 = [8 -2; 20 0] and v2 = [-2 1/2; -5 0].

Solution
Okay, the basic process here is pretty much the same as the previous set of examples. It just may not appear that way at first however. We'll need to remember that this time the zero vector, 0, is in fact the zero matrix of the same size as the vectors in the given set.

(a) v1 = [1 0; 0 0], v2 = [0 0; 1 0], and v3 = [0 0; 0 1].

We'll first need to set up the vector equation,

c1v1 + c2v2 + c3v3 = 0
c1[1 0; 0 0] + c2[0 0; 1 0] + c3[0 0; 0 1] = [0 0; 0 0]

Next, combine the vectors (okay, they're matrices, so let's call them that) on the left into a single matrix using basic matrix scalar multiplication and addition.

[c1 0; c2 c3] = [0 0; 0 0]

Now, we need both sides to be equal. This means that the three entries in the matrix on the left that are not already zero need to be set equal to zero. This gives the following system of equations.

c1 = 0        c2 = 0        c3 = 0

Of course, this isn't really much of a system, as it tells us that we must have the trivial solution, and so these matrices (or vectors if you want to be exact) are linearly independent.

(b) v1 = [1 2; 3 0] and v2 = [4 1; -2 0].

So, we can see that for the most part these problems work the same way as the previous problems did. We just need to set up a system of equations and solve. For the remainder of these problems we'll not put in the detail that we did in the first part.

Here is the vector equation we need to solve for this part.

c1[1 2; 3 0] + c2[4 1; -2 0] = [0 0; 0 0]

The system of equations we need to solve here is,

c1 + 4c2 = 0
2c1 + c2 = 0
3c1 - 2c2 = 0

We'll leave it to you to verify that the only solution to this system is the trivial solution, and so these matrices are linearly independent.

(c) v1 = [8 -2; 20 0] and v2 = [-2 1/2; -5 0].

Here is the vector equation for this part.

c1[8 -2; 20 0] + c2[-2 1/2; -5 0] = [0 0; 0 0]

and the system of equations is,

8c1 - 2c2 = 0
-2c1 + (1/2)c2 = 0
20c1 - 5c2 = 0

The solution to this system is,

c1 = t        c2 = 4t        t is any real number

So, we've got solutions other than the trivial solution, and so these vectors are linearly dependent.

Example 4 Determine if the following sets of vectors are linearly independent or linearly dependent.
(a) p1 = 1, p2 = x, and p3 = x^2 in P2.
(b) p1 = x - 2, p2 = x^2 + x, and p3 = x^2 + 1 in P2.
(c) p1 = 2x^2 - x + 7, p2 = x^2 + 4x + 2, and p3 = x^2 - 2x + 4 in P2.

Solution
Again, these will work in essentially the same manner as the previous problems. In this problem set the zero vector, 0, is the zero function. Since we're actually working in P2 for all these parts we can think of this as the following polynomial.

0 = 0 + 0x + 0x^2

In other words, a second degree polynomial with zero coefficients.

(a) p1 = 1, p2 = x, and p3 = x^2 in P2.

Let's first set up the equation that we need to solve.

c1p1 + c2p2 + c3p3 = 0
c1(1) + c2x + c3x^2 = 0 + 0x + 0x^2

Now, we could set up a system of equations here, however we don't need to. In order for these two second degree polynomials to be equal the coefficient of each term must be equal. At this point it should be pretty clear that the polynomial on the left will only equal the zero polynomial if all of its coefficients are zero. So, the only solution to the vector equation will be the trivial solution, and so these polynomials (or vectors if you want to be precise) are linearly independent.

(b) p1 = x - 2, p2 = x^2 + x, and p3 = x^2 + 1 in P2.

The vector equation for this part is,

c1(x - 2) + c2(x^2 + x) + c3(x^2 + 1) = 0 + 0x + 0x^2
(c2 + c3)x^2 + (c1 + c2)x + (-2c1 + c3) = 0 + 0x + 0x^2

Now, as with the previous part the coefficients of each term on the left must be zero in order for this polynomial to be the zero vector. This leads to the following system of equations.

c2 + c3 = 0
c1 + c2 = 0
-2c1 + c3 = 0

The only solution to this system is the trivial solution, and so these polynomials are linearly independent.

(c) p1 = 2x^2 - x + 7, p2 = x^2 + 4x + 2, and p3 = x^2 - 2x + 4 in P2.

In this part the vector equation is,

c1(2x^2 - x + 7) + c2(x^2 + 4x + 2) + c3(x^2 - 2x + 4) = 0 + 0x + 0x^2
(2c1 + c2 + c3)x^2 + (-c1 + 4c2 - 2c3)x + (7c1 + 2c2 + 4c3) = 0 + 0x + 0x^2

The system of equations we need to solve is,

2c1 + c2 + c3 = 0
-c1 + 4c2 - 2c3 = 0
7c1 + 2c2 + 4c3 = 0

The solution to this system is,

c1 = -(2/3)t        c2 = (1/3)t        c3 = t        t is any real number

So, we have more solutions than the trivial solution, and so these polynomials are linearly dependent.
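Since a polynomial in P2 is determined by its three coefficients, Example 4 reduces to the same matrix computations used earlier: put the coefficient vectors in the columns of a matrix and test the determinant. A sketch, assuming SymPy (our choice of tool):

    from sympy import Matrix

    # Columns hold the coefficients (x^2, x, constant) of each polynomial.
    Pb = Matrix([[0, 1, 1], [1, 1, 0], [-2, 0, 1]])   # part (b)
    Pc = Matrix([[2, 1, 1], [-1, 4, -2], [7, 2, 4]])  # part (c)

    print(Pb.det())   # 1 : non-zero, so linearly independent
    print(Pc.det())   # 0 : zero, so linearly dependent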

Now that we've seen quite a few examples of linearly independent and linearly dependent vectors, we've got one final topic that we want to discuss in this section. Let's go back and examine the results of the very first example that we worked in this section, and in particular let's start with the final part.

In that part we looked at the vectors v1 = (2, -2, 4), v2 = (3, -5, 4), and v3 = (0, 1, 1) and determined that they were linearly dependent. We did this by solving the vector equation,

c1(2, -2, 4) + c2(3, -5, 4) + c3(0, 1, 1) = 0

and found that it had the solution,

c1 = -(3/4)t        c2 = (1/2)t        c3 = t        t is any real number

We knew that the vectors were linearly dependent because there were solutions to the equation other than the trivial solution. Let's take a look at one of them. Say,

c1 = -3        c2 = 2        c3 = 4

In fact, let's plug these values into the vector equation above.

-3(2, -2, 4) + 2(3, -5, 4) + 4(0, 1, 1) = 0

Now, if we rearrange this a little we arrive at,

(3, -5, 4) = (3/2)(2, -2, 4) - 2(0, 1, 1)

or, in a little more compact form: v2 = (3/2)v1 - 2v3.

So, we were able to write one of the vectors as a linear combination of the other two. Notice as well that we could just as easily have written v1 as a linear combination of v2 and v3, or v3 as a linear combination of v1 and v2, if we'd wanted to.

Let's see if we can do this with the three vectors from the third part of this example. In that part we were looking at the three vectors v1 = (1, 0, 0), v2 = (0, 1, 0), and v3 = (0, 0, 1), and we determined that these vectors were linearly independent. Let's see if we can write v1 as a linear combination of v2 and v3. If we can, we'll be able to find constants c1 and c2 that will make the following equation true.

(1, 0, 0) = c1(0, 1, 0) + c2(0, 0, 1) = (0, c1, c2)

Now, while we can find values of c1 and c2 that will make the second and third entries zero as we need them to, we're in some pretty serious trouble with the first entry. In the vector on the left we've got a 1 in the first entry and in the vector on the right we've got a 0 in the first entry. So, there is no way we can write the first vector as a linear combination of the other two. You should also verify that we can't do this in any of the other combinations either.

So, what have we seen here with these two examples? With a set of linearly dependent vectors we were able to write at least one of them as a linear combination of the other vectors in the set, and with a set of linearly independent vectors we were not able to do this for any of the vectors. This will always be the case. With a set of linearly independent vectors we will never be able to write one of the vectors as a linear combination of the other vectors in the set. On the other hand, if we have a set of linearly dependent vectors then at least one of them can be written as a linear combination of the remaining vectors.

In the example of linearly dependent vectors we were looking at above we could write any of the vectors as a linear combination of the others. This will not always be the case; to see this take a look at Example 2(b). In this example we determined that the vectors v1 = (2, 1),

v2 = (-1, -3) and v3 = (4, 2) were linearly dependent. We also saw that the solution to the equation,

c1(2, 1) + c2(-1, -3) + c3(4, 2) = 0

was given by

c1 = -2t        c2 = 0        c3 = t        t is any real number

and as we saw above we can always use this to determine how to write at least one of the vectors as a linear combination of the remaining vectors. Simply pick a value of t and then rearrange as you need to. Doing this in our case we see that we can do one of the following.

(4, 2) = 2(2, 1) - (0)(-1, -3)
(2, 1) = (0)(-1, -3) + (1/2)(4, 2)

It's easy in this case to write the first or the third vector as a combination of the other vectors. However, because the coefficient of the second vector is zero, there is no way that we can write the second vector as a linear combination of the first and third vectors. What that means here is that the first and third vectors are linearly dependent by themselves (as we pointed out in that example), but the first and second are linearly independent vectors, as are the second and third, if we just look at them as a pair of vectors (you should verify this).

This can be a useful idea about linearly independent/dependent vectors on occasion.
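The rearranging we just did can be read straight off a null space computation: a basis vector for the null space of the matrix whose columns are the given vectors lists the coefficients of a dependence relation. A sketch with SymPy (our choice of tool), using the vectors from Example 2(b):

    from sympy import Matrix

    # Columns: v1 = (2, 1), v2 = (-1, -3), v3 = (4, 2).
    M = Matrix([[2, -1, 4], [1, -3, 2]])

    print(M.nullspace())
    # [Matrix([[-2], [0], [1]])] : -2*v1 + 0*v2 + 1*v3 = 0, i.e. v3 = 2*v1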

Basis and Dimension

In this section we're going to take a look at an important idea in the study of vector spaces. We will also be drawing heavily on the ideas from the previous two sections, so make sure that you are comfortable with the ideas of span and linear independence.

We'll start this section off with the following definition.

Definition 1 Suppose S = {v1, v2, ..., vn} is a set of vectors from the vector space V. Then S is called a basis (plural is bases) for V if both of the following conditions hold.
(a) span(S) = V, i.e. S spans the vector space V.
(b) S is a linearly independent set of vectors.

Let's take a look at some examples.

Example 1 Determine if each of the sets of vectors will be a basis for R^3.
(a) v1 = (1, -1, 1), v2 = (0, 1, 2) and v3 = (3, 0, -1).
(b) v1 = (1, 0, 0), v2 = (0, 1, 0) and v3 = (0, 0, 1).
(c) v1 = (1, 1, 0) and v2 = (-1, 0, 0).
(d) v1 = (1, -1, 1), v2 = (-1, 2, -2) and v3 = (-1, 4, -4).

Solution
(a) v1 = (1, -1, 1), v2 = (0, 1, 2) and v3 = (3, 0, -1).

Now, let's see what we've got to do here to determine whether or not this set of vectors will be a basis for R^3. First, we'll need to show that these vectors span R^3, and from the section on Span we know that to do this we need to determine if we can find scalars c1, c2, and c3 so that a general vector u = (u1, u2, u3) from R^3 can be expressed as a linear combination of these three vectors, or

c1(1, -1, 1) + c2(0, 1, 2) + c3(3, 0, -1) = (u1, u2, u3)

As we saw in the section on Span, all we need to do is convert this to a system of equations, in matrix form, and then determine if the coefficient matrix has a non-zero determinant or not. If the determinant of the coefficient matrix is non-zero then the set will span the given vector space, and if the determinant of the coefficient matrix is zero then it will not span the given vector space. Recall as well that if the determinant of the coefficient matrix is non-zero then there will be exactly one solution to this system for each u.

The matrix form of the system is,

[1 0 3; -1 1 0; 1 2 -1][c1; c2; c3] = [u1; u2; u3]

Before we get the determinant of the coefficient matrix, let's also take a look at the other condition that must be met in order for this set to be a basis for R^3. In order for these vectors to be a basis for R^3 they must be linearly independent. From the section on Linear Independence we know that to determine this we need to solve the following equation,

c1(1, -1, 1) + c2(0, 1, 2) + c3(3, 0, -1) = 0 = (0, 0, 0)

If this system has only the trivial solution the vectors will be linearly independent, and if it has solutions other than the trivial solution then the vectors will be linearly dependent.

Note however, that this is really just a specific case of the system that we need to solve for the span question. Namely, here we need to solve,

[1 0 3; -1 1 0; 1 2 -1][c1; c2; c3] = [0; 0; 0]

Also, as noted above, if these vectors span R^3 then there will be exactly one solution to the system for each u. In this case we know that the trivial solution will be a solution; our only question is whether or not it is the only solution.

So, all that we need to do here is compute the determinant of the coefficient matrix, and if it is non-zero then the vectors will both span R^3 and be linearly independent, and hence the vectors will be a basis for R^3. On the other hand, if the determinant is zero then the vectors will not span R^3 and will not be linearly independent, and so they won't be a basis for R^3.

So, here is the determinant of the coefficient matrix for this problem.

A = [1 0 3; -1 1 0; 1 2 -1]        det(A) = -10 ≠ 0

So, these vectors will form a basis for R^3.

(b) v1 = (1, 0, 0), v2 = (0, 1, 0) and v3 = (0, 0, 1).

Now, we could use a similar path for this one as we did earlier. However, in this case, we've done all the work for this one in previous sections. In Example 4(a) of the section on Span we determined that the standard basis vectors (interesting name, isn't it? We'll come back to this in a bit) e1, e2 and e3 will span R^3. Notice that while we've changed the notation a little just for this problem, we are working with the standard basis vectors here and so we know that they will span R^3. Likewise, in Example 1(c) from the section on Linear Independence we saw that these vectors are linearly independent.

(d) $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (-1,2,-2)$ and $\mathbf{v}_3 = (-1,4,-4)$.

In this case we've got three vectors with three components each, and so we can use the same method that we did in the first part. The general equation that needs to be solved here is,
$$c_1(1,-1,1) + c_2(-1,2,-2) + c_3(-1,4,-4) = (u_1, u_2, u_3)$$
and the matrix form of this is,
$$\begin{bmatrix} 1 & -1 & -1 \\ -1 & 2 & 4 \\ 1 & -2 & -4 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3\end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$$
We'll leave it to you to verify that $\det(A) = 0$, and so these three vectors do not span $\mathbb{R}^3$ and are not linearly independent. Either of these facts means that they are not a basis for $\mathbb{R}^3$. [Return to Problems]

Before we move on, let's go back and address something we pointed out in Example 1(b). As we pointed out at the time, the three vectors we were looking at there were the standard basis vectors for $\mathbb{R}^3$. We should discuss the name a little more at this point, and we'll do it a little more generally than in $\mathbb{R}^3$.

The vectors
$$\mathbf{e}_1 = (1,0,0,\ldots,0) \quad \mathbf{e}_2 = (0,1,0,\ldots,0) \quad \cdots \quad \mathbf{e}_n = (0,0,0,\ldots,1)$$
will span $\mathbb{R}^n$, as we saw in the section on Span, and it is fairly simple to show that they are linearly independent (you should verify this), so they form a basis for $\mathbb{R}^n$. In some ways this set of vectors is the simplest possible basis (we'll see this in a bit), and so we call them the standard basis vectors for $\mathbb{R}^n$.

We also have a set of standard basis vectors for a couple of the other vector spaces we've been looking at occasionally. Let's take a look at each of them.

Example 2  The set $\mathbf{p}_0 = 1$, $\mathbf{p}_1 = x$, $\mathbf{p}_2 = x^2$, ..., $\mathbf{p}_n = x^n$ is a basis for $P_n$ and is usually called the standard basis for $P_n$.

In Example 4(c) of the section on Span we showed that this set will span $P_n$. In Example 4(a) of the section on Linear Independence we showed that a small case of this set forms a linearly independent set, and a similar argument can be used for the general case here; we'll leave it to you to go through that argument. So, this set of vectors is in fact a basis for $P_n$.

Example 3  The set
$$\mathbf{v}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix} \quad \mathbf{v}_2 = \begin{bmatrix} 0 & 1 \\ 0 & 0\end{bmatrix} \quad \mathbf{v}_3 = \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix} \quad \mathbf{v}_4 = \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$$
is a basis for $M_{22}$ and is usually called the standard basis for $M_{22}$.

In Example 4(b) of the section on Span we showed that this set will span $M_{22}$. We have yet to show that the matrices are linearly independent, however. So, following the procedure from the last section, we need to set up the following equation,
$$c_1\begin{bmatrix}1&0\\0&0\end{bmatrix} + c_2\begin{bmatrix}0&1\\0&0\end{bmatrix} + c_3\begin{bmatrix}0&0\\1&0\end{bmatrix} + c_4\begin{bmatrix}0&0\\0&1\end{bmatrix} = \begin{bmatrix}c_1&c_2\\c_3&c_4\end{bmatrix} = \begin{bmatrix}0&0\\0&0\end{bmatrix}$$
So, the only way the matrix on the left can be the zero matrix is for all of the scalars to be zero. In other words, this equation has only the trivial solution, and so the matrices are linearly independent. Combined with the fact that they span $M_{22}$, this shows that they are in fact a basis for $M_{22}$.

Note that we only looked at the standard basis vectors for $M_{22}$, but you should be able to modify this appropriately to arrive at a set of standard basis vectors for $M_{nm}$ in general.

Next, let's take a look at the following theorem, which gives us one of the main reasons for being interested in a set of basis vectors.

Theorem 1  Suppose that the set $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for the vector space $V$. Then every vector $\mathbf{u}$ from $V$ can be expressed as a linear combination of the vectors from $S$ in exactly one way.

Proof: First, since we know that the vectors in $S$ are a basis for $V$, any vector $\mathbf{u}$ in $V$ can be written as a linear combination of them,
$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$
Now, let's suppose that it is also possible to write it as the following linear combination,
$$\mathbf{u} = k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n$$
If we take the difference of these two linear combinations we get,
$$\mathbf{0} = \mathbf{u} - \mathbf{u} = (c_1 - k_1)\mathbf{v}_1 + (c_2 - k_2)\mathbf{v}_2 + \cdots + (c_n - k_n)\mathbf{v}_n$$
However, because the vectors in $S$ are a basis, they are linearly independent. That means this equation can only have the trivial solution. In other words, we must have,
$$c_1 - k_1 = 0 \quad c_2 - k_2 = 0 \quad \cdots \quad c_n - k_n = 0$$
But this means that $c_1 = k_1$, $c_2 = k_2$, ..., $c_n = k_n$, and so the two linear combinations were in fact the same linear combination.
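Theorem 1 is easy to see in action with a computer algebra system. The following SymPy sketch solves for the coordinates of the vector $\mathbf{u} = (10, 5, 0)$ relative to the basis from Example 1(a); the solution set that comes back contains exactly one triple, just as the theorem promises.

```python
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')
v1, v2, v3 = sp.Matrix([1, -1, 1]), sp.Matrix([0, 1, 2]), sp.Matrix([3, 0, -1])
u = sp.Matrix([10, 5, 0])

# Augmented matrix [v1 v2 v3 | u]; linsolve returns the full solution set.
aug = sp.Matrix.hstack(v1, v2, v3, u)
print(sp.linsolve(aug, c1, c2, c3))   # {(-2, 3, 4)} -- exactly one way
```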

We also have the following fact. It probably doesn't really rise to the level of a theorem, but we'll call it one anyway.

Theorem 2  Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a set of linearly independent vectors. Then $S$ is a basis for the vector space $V = \operatorname{span}(S)$.

The proof here is so simple that we're not really going to give it: by assumption the set is linearly independent, and by definition $V$ is the span of $S$, so the set must be a basis for $V$.

We now need to take a look at the following definition.

Definition 2  Suppose that $V$ is a non-zero vector space and that $S$ is a set of vectors from $V$ that form a basis for $V$. If $S$ contains a finite number of vectors, say $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$, then we call $V$ a finite dimensional vector space, and we say that the dimension of $V$, denoted by $\dim(V)$, is $n$ (i.e. the number of basis elements in $S$). If $V$ is not a finite dimensional vector space (so $S$ does not have a finite number of vectors) then we call it an infinite dimensional vector space. By definition the dimension of the zero vector space (i.e. the vector space consisting solely of the zero vector) is zero.

Here are the dimensions of some of the vector spaces we've been dealing with to this point.

Example 4  Dimensions of some vector spaces.
(a) $\dim(\mathbb{R}^n) = n$ since the standard basis vectors for $\mathbb{R}^n$ are,
$$\mathbf{e}_1 = (1,0,0,\ldots,0) \quad \mathbf{e}_2 = (0,1,0,\ldots,0) \quad \cdots \quad \mathbf{e}_n = (0,0,0,\ldots,1)$$
(b) $\dim(P_n) = n + 1$ since the standard basis vectors for $P_n$ are,
$$\mathbf{p}_0 = 1 \quad \mathbf{p}_1 = x \quad \mathbf{p}_2 = x^2 \quad \cdots \quad \mathbf{p}_n = x^n$$
(c) $\dim(M_{22}) = (2)(2) = 4$ since the standard basis vectors for $M_{22}$ are,
$$\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix} \quad \mathbf{v}_2 = \begin{bmatrix}0&1\\0&0\end{bmatrix} \quad \mathbf{v}_3 = \begin{bmatrix}0&0\\1&0\end{bmatrix} \quad \mathbf{v}_4 = \begin{bmatrix}0&0\\0&1\end{bmatrix}$$
(d) $\dim(M_{nm}) = nm$. This follows from the natural extension of the previous part. The set of standard basis vectors is the set of matrices that are zero in all entries except one entry, which is a 1. There are $nm$ possible positions for that 1, and so there must be $nm$ basis vectors.
(e) The set of real valued functions on an interval, $F[a,b]$, and the set of continuous functions on an interval, $C[a,b]$, are infinite dimensional vector spaces. This is not easy to show at this point, but here is something to think about. If we take all the polynomials (of all degrees) then we can form a set (see part (b) above for the elements of that set) that does not have a finite number of elements and yet is linearly independent. This set lies in both of the two vector spaces above, and once we have the next theorem in hand it will tell us that there can be no finite basis set for these vector spaces.

We now need to take a look at several important theorems about vector spaces. The first couple of theorems will give us some nice ideas about linearly independent/dependent sets and spans. One of the more important uses of these two theorems is in constructing a set of basis vectors, as we'll see eventually.

Theorem 3  Suppose that $V$ is a vector space and that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is any basis for $V$.
(a) If a set has more than $n$ vectors then it is linearly dependent.
(b) If a set has fewer than $n$ vectors then it does not span $V$.

Proof:
(a) Let $R = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_m\}$ and suppose that $m > n$. Since $S$ is a basis of $V$, every vector in $R$ can be written as a linear combination of vectors from $S$ as follows,
$$\begin{aligned} \mathbf{w}_1 &= a_{11}\mathbf{v}_1 + a_{21}\mathbf{v}_2 + \cdots + a_{n1}\mathbf{v}_n \\ \mathbf{w}_2 &= a_{12}\mathbf{v}_1 + a_{22}\mathbf{v}_2 + \cdots + a_{n2}\mathbf{v}_n \\ &\;\;\vdots \\ \mathbf{w}_m &= a_{1m}\mathbf{v}_1 + a_{2m}\mathbf{v}_2 + \cdots + a_{nm}\mathbf{v}_n \end{aligned}$$
Now, we want to show that the vectors in $R$ are linearly dependent, so we need to show that there are more solutions than just the trivial solution to the following equation,
$$k_1\mathbf{w}_1 + k_2\mathbf{w}_2 + \cdots + k_m\mathbf{w}_m = \mathbf{0}$$
If we plug the linear combinations above in for the $\mathbf{w}_i$'s in this equation and collect the coefficients of the $\mathbf{v}_j$'s, we arrive at,
$$(a_{11}k_1 + a_{12}k_2 + \cdots + a_{1m}k_m)\mathbf{v}_1 + (a_{21}k_1 + a_{22}k_2 + \cdots + a_{2m}k_m)\mathbf{v}_2 + \cdots + (a_{n1}k_1 + a_{n2}k_2 + \cdots + a_{nm}k_m)\mathbf{v}_n = \mathbf{0}$$
Now, the $\mathbf{v}_j$'s are linearly independent, and so we know that the coefficient of each $\mathbf{v}_j$ in this equation must be zero. This gives the following system of equations,
$$\begin{aligned} a_{11}k_1 + a_{12}k_2 + \cdots + a_{1m}k_m &= 0 \\ a_{21}k_1 + a_{22}k_2 + \cdots + a_{2m}k_m &= 0 \\ &\;\;\vdots \\ a_{n1}k_1 + a_{n2}k_2 + \cdots + a_{nm}k_m &= 0 \end{aligned}$$
Now, in this system the $a_{ij}$'s are known scalars from the linear combinations above and the $k_i$'s are unknowns. So we can see that there are $n$ equations and $m$ unknowns. However, because $m > n$ there are more unknowns than equations, and by a theorem from the Solving Systems of Equations section a homogeneous system with more unknowns than equations has infinitely many solutions. Therefore the equation
$$k_1\mathbf{w}_1 + k_2\mathbf{w}_2 + \cdots + k_m\mathbf{w}_m = \mathbf{0}$$
has more solutions than the trivial solution, and so the vectors in $R$ must be linearly dependent.

(b) The proof of this part is very similar to the previous part. Let's start with the set $R = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_m\}$, and this time we're going to assume that $m < n$. It's not so easy to show directly that $R$ will not span $V$, but if we assume for a second that $R$ does span $V$, we'll see that we'll run into some problems with our basis set $S$. This is called a proof by contradiction. We'll assume the opposite of what we want to prove and show that this leads to a contradiction of something that we know is true (in this case, that $S$ is a basis for $V$).

So, we'll assume that $R$ spans $V$. This means that all the vectors in $S$ can be written as linear combinations of the vectors in $R$, or,
$$\begin{aligned} \mathbf{v}_1 &= a_{11}\mathbf{w}_1 + a_{21}\mathbf{w}_2 + \cdots + a_{m1}\mathbf{w}_m \\ \mathbf{v}_2 &= a_{12}\mathbf{w}_1 + a_{22}\mathbf{w}_2 + \cdots + a_{m2}\mathbf{w}_m \\ &\;\;\vdots \\ \mathbf{v}_n &= a_{1n}\mathbf{w}_1 + a_{2n}\mathbf{w}_2 + \cdots + a_{mn}\mathbf{w}_m \end{aligned}$$
Let's now look at the equation,

$$k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n = \mathbf{0}$$
Now, because $S$ is a basis, we know that the $\mathbf{v}_i$'s must be linearly independent, and so the only solution to this must be the trivial solution. However, if we substitute the linear combinations of the $\mathbf{v}_i$'s into this, rearrange as we did in part (a), and then set all the coefficients equal to zero, we get the following system of equations,
$$\begin{aligned} a_{11}k_1 + a_{12}k_2 + \cdots + a_{1n}k_n &= 0 \\ a_{21}k_1 + a_{22}k_2 + \cdots + a_{2n}k_n &= 0 \\ &\;\;\vdots \\ a_{m1}k_1 + a_{m2}k_2 + \cdots + a_{mn}k_n &= 0 \end{aligned}$$
Again, there are more unknowns ($n$ of them) than equations ($m$ of them, with $m < n$), and so there are infinitely many solutions. This contradicts the fact that we know the only solution to the equation $k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n = \mathbf{0}$ is the trivial solution. So, our original assumption that $R$ spans $V$ must be wrong. Therefore $R$ will not span $V$.

Theorem 4  Suppose $S$ is a non-empty set of vectors in a vector space $V$.
(a) If $S$ is linearly independent and $\mathbf{u}$ is any vector in $V$ that is not in $\operatorname{span}(S)$, then the set $R = S \cup \{\mathbf{u}\}$ (i.e. the set of $S$ and $\mathbf{u}$) is also a linearly independent set.
(b) If $\mathbf{u}$ is any vector in $S$ that can be written as a linear combination of the other vectors in $S$, and $R = S - \{\mathbf{u}\}$ is the set we get by removing $\mathbf{u}$ from $S$, then,
$$\operatorname{span}(S) = \operatorname{span}(R)$$
In other words, $S$ and $S - \{\mathbf{u}\}$ will span the same space.

Proof:
(a) If $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$, we need to show that the set $R = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n, \mathbf{u}\}$ is linearly independent. So, let's form the equation,
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n + c_{n+1}\mathbf{u} = \mathbf{0}$$
Now, if $c_{n+1}$ were not zero we would be able to write $\mathbf{u}$ as a linear combination of the $\mathbf{v}_i$'s, but this contradicts the fact that $\mathbf{u}$ is not in $\operatorname{span}(S)$. Therefore we must have $c_{n+1} = 0$, and our equation is now,
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$
But the vectors in $S$ are linearly independent, and so the only solution to this is the trivial solution,
$$c_1 = 0 \quad c_2 = 0 \quad \cdots \quad c_n = 0$$
So, we've shown that the only solution to
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n + c_{n+1}\mathbf{u} = \mathbf{0}$$

is
$$c_1 = 0 \quad c_2 = 0 \quad \cdots \quad c_n = 0 \quad c_{n+1} = 0$$
Therefore, the vectors in $R$ are linearly independent.

(b) Let's suppose that our set is $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n, \mathbf{u}\}$, so that $R = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$. First, by assumption $\mathbf{u}$ is a linear combination of the remaining vectors in $S$, or,
$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$
Next, let $\mathbf{w}$ be any vector in $\operatorname{span}(S)$. So, $\mathbf{w}$ can be written as a linear combination of all the vectors in $S$, or,
$$\mathbf{w} = k_1\mathbf{v}_1 + k_2\mathbf{v}_2 + \cdots + k_n\mathbf{v}_n + k_{n+1}\mathbf{u}$$
Now plug in the expression for $\mathbf{u}$ above to get,
$$\mathbf{w} = k_1\mathbf{v}_1 + \cdots + k_n\mathbf{v}_n + k_{n+1}\left(c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n\right) = (k_1 + k_{n+1}c_1)\mathbf{v}_1 + (k_2 + k_{n+1}c_2)\mathbf{v}_2 + \cdots + (k_n + k_{n+1}c_n)\mathbf{v}_n$$
So, $\mathbf{w}$ is a linear combination of vectors only in $R$, and so at the least every vector that is in $\operatorname{span}(S)$ must also be in $\operatorname{span}(R)$.

Finally, if $\mathbf{w}$ is any vector in $\operatorname{span}(R)$ then it can be written as a linear combination of vectors from $R$; but since these are also vectors in $S$, we see that $\mathbf{w}$ can also, by default, be written as a linear combination of vectors from $S$, and so is also in $\operatorname{span}(S)$. We've just shown that every vector in $\operatorname{span}(R)$ must also be in $\operatorname{span}(S)$.

Since we've shown that $\operatorname{span}(S)$ must be contained in $\operatorname{span}(R)$ and that every vector in $\operatorname{span}(R)$ must also be contained in $\operatorname{span}(S)$, this can only be true if $\operatorname{span}(S) = \operatorname{span}(R)$.

We can use the previous two theorems to get some nice ideas about the bases of a vector space.

Theorem 5  Suppose that $V$ is a vector space. Then all the bases for $V$ contain the same number of vectors.

Proof: Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for $V$. Now, let $R$ be any other basis for $V$. By Theorem 3 above, if $R$ contains more than $n$ elements it can't be a linearly independent set and so can't be a basis. So, we know that at the least $R$ can't contain more than $n$ elements. However, Theorem 3 also tells us that if $R$ contains fewer than $n$ elements then it won't span $V$ and hence can't be a basis for $V$. Therefore the only possibility is that $R$ must contain exactly $n$ elements.

Theorem 6  Suppose that $V$ is a vector space with $\dim(V) = n$. Also suppose that $S$ is a set that contains exactly $n$ vectors. Then $S$ will be a basis for $V$ if either $V = \operatorname{span}(S)$ or $S$ is linearly independent.

Proof: First suppose that $V = \operatorname{span}(S)$. If $S$ were linearly dependent then there would be some vector $\mathbf{u}$ in $S$ that could be written as a linear combination of the other vectors in $S$, and so by Theorem 4(b) we could remove $\mathbf{u}$ from $S$ and our new set of $n-1$ vectors would still span $V$. However, Theorem 3(b) tells us that any set with fewer vectors than a basis (i.e. fewer than $n$ in this case) can't span $V$. Therefore $S$ must be linearly independent, and hence $S$ is a basis for $V$.

Now, let's suppose that $S$ is linearly independent. If $S$ did not span $V$ then there would be a vector $\mathbf{u}$ that is not in $\operatorname{span}(S)$. If we add $\mathbf{u}$ to $S$, the resulting set with $n+1$ vectors would be linearly independent by Theorem 4(a). On the other hand, Theorem 3(a) tells us that any set with more vectors than the basis (i.e. more than $n$) can't be linearly independent. Therefore $S$ must span $V$, and hence $S$ is a basis for $V$.

Theorem 7  Suppose that $V$ is a finite dimensional vector space with $\dim(V) = n$ and that $S$ is any finite set of vectors from $V$.
(a) If $S$ spans $V$ but is not a basis for $V$, then it can be reduced to a basis for $V$ by removing certain vectors from $S$.
(b) If $S$ is linearly independent but is not a basis for $V$, then it can be enlarged to a basis for $V$ by adding in certain vectors from $V$.

Proof:
(a) If $S$ spans $V$ but is not a basis for $V$, then it must be a linearly dependent set. So, there is some vector $\mathbf{u}$ in $S$ that can be written as a linear combination of the other vectors in $S$. Let $R$ be the set that results from removing $\mathbf{u}$ from $S$. By Theorem 4(b), $R$ will still span $V$. If $R$ is linearly independent then we have a basis for $V$, and if it is still linearly dependent we can remove another element to form a new set that will still span $V$. We continue in this way until we've reduced $S$ down to a set of linearly independent vectors, and at that point we will have a basis for $V$.

(b) If $S$ is linearly independent but not a basis, then it must not span $V$. Therefore there is a vector $\mathbf{u}$ that is not in $\operatorname{span}(S)$. So, add $\mathbf{u}$ to $S$ to form the new set $R$. By Theorem 4(a) the set $R$ is still linearly independent. If $R$ now spans $V$ we've got a basis for $V$, and if not we add another element to form a new linearly independent set. We continue in this fashion until we reach a set with $n$ vectors, and then by Theorem 6 that set must be a basis for $V$.

Okay, we should probably see some examples of some of these theorems in action.

Example 5  Reduce each of the following sets of vectors to obtain a basis for the given vector space.
(a) $\mathbf{v}_1 = (1,0,0)$, $\mathbf{v}_2 = (0,1,-1)$, $\mathbf{v}_3 = (0,4,-3)$ and $\mathbf{v}_4 = (0,2,0)$ for $\mathbb{R}^3$. [Solution]
(b) $\mathbf{p}_0 = 1$, $\mathbf{p}_1 = 4x$, $\mathbf{p}_2 = x^2 + x + 1$, $\mathbf{p}_3 = 2x + 7$ and $\mathbf{p}_4 = 5x^2$ for $P_2$. [Solution]

Solution
First, notice that provided each of these sets of vectors spans the given vector space, Theorem 7(a) tells us that this can in fact be done.

(a) We will leave it to you to verify that this set of vectors does indeed span $\mathbb{R}^3$, and since we know that $\dim(\mathbb{R}^3) = 3$ we can see that we'll need to remove one vector from the list in order to get down to a basis. However, we can't just remove any of the vectors. For instance, if we removed $\mathbf{v}_1$ the set would no longer span $\mathbb{R}^3$. You should verify this, but you can also quickly see that only $\mathbf{v}_1$ has a non-zero first component, and so it is required for the set to span $\mathbb{R}^3$.

Theorem 4(b) tells us that if we remove a vector that is a linear combination of some of the other vectors, we won't change the span of the set. So, that is what we need to look for. Now, it looks like the last three vectors are probably linearly dependent, so if we set up the equation,
$$c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + c_4\mathbf{v}_4 = \mathbf{0}$$
and solve it, we get,
$$c_2 = 6t \qquad c_3 = -2t \qquad c_4 = t \qquad t \text{ is any real number}$$
so these three vectors are in fact linearly dependent. This means that we can remove any one of them, since we could write any one of them as a linear combination of the other two. So, let's remove $\mathbf{v}_3$, for no other reason than that the entries in this vector are larger than the others. The following set then still spans $\mathbb{R}^3$ and has exactly 3 vectors, and so by Theorem 6 it must be a basis for $\mathbb{R}^3$,
$$\mathbf{v}_1 = (1,0,0) \qquad \mathbf{v}_2 = (0,1,-1) \qquad \mathbf{v}_4 = (0,2,0)$$
For the practice, you should verify that this set does span $\mathbb{R}^3$ and is linearly independent.
[Return to Problems]

(b) We'll go through this one a little faster. First, you should verify that the set of vectors does indeed span $P_2$. Also, because $\dim(P_2) = 3$, we know that we'll need to remove two of the vectors. Again, remember that each vector we remove must be a linear combination of some of the other vectors.

First, it looks like $\mathbf{p}_3$ is a linear combination of $\mathbf{p}_0$ and $\mathbf{p}_1$ (you should verify this), so we can remove $\mathbf{p}_3$ and the set will still span $P_2$. This leaves us with the following set of vectors,
$$\mathbf{p}_0 = 1 \qquad \mathbf{p}_1 = 4x \qquad \mathbf{p}_2 = x^2 + x + 1 \qquad \mathbf{p}_4 = 5x^2$$
Now, it looks like $\mathbf{p}_2$ can easily be written as a linear combination of the remaining vectors (again, please verify this), so we can remove that one as well. We now have the following set,
$$\mathbf{p}_0 = 1 \qquad \mathbf{p}_1 = 4x \qquad \mathbf{p}_4 = 5x^2$$
which has 3 vectors and will span $P_2$, and so it must be a basis for $P_2$ by Theorem 6.
[Return to Problems]
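Finding which vectors can be discarded automates nicely. In this SymPy sketch the vectors from part (a) are stored as the columns of a matrix, and the pivot columns of its reduced row-echelon form pick out one possible basis. Note that this mechanical approach keeps $\mathbf{v}_3$ where we chose to keep $\mathbf{v}_4$ above; either choice is fine, since both give a basis.

```python
import sympy as sp

# Columns are v1, v2, v3, v4 from Example 5(a).
M = sp.Matrix([[1,  0,  0, 0],
               [0,  1,  4, 2],
               [0, -1, -3, 0]])

R, pivots = M.rref()
print(pivots)   # (0, 1, 2): v1, v2, v3 form a basis
# The non-pivot column is redundant: v4 = -6*v2 + 2*v3, which matches the
# dependency c2 = 6t, c3 = -2t, c4 = t found in the example.
```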

Example 6  Expand each of the following sets of vectors into a basis for the given vector space.
(a) $\mathbf{v}_1 = (1,0,0,0)$, $\mathbf{v}_2 = (1,1,0,0)$, $\mathbf{v}_3 = (1,1,1,0)$ in $\mathbb{R}^4$. [Solution]
(b) $\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}0&0\\1&0\end{bmatrix}$ in $M_{22}$. [Solution]

Solution
Theorem 7(b) tells us that this is possible to do provided the sets are linearly independent.

(a) We'll leave it to you to verify that these vectors are linearly independent. Also, $\dim(\mathbb{R}^4) = 4$, and so it looks like we'll just need to add in a single vector to get a basis. Theorem 4(a) tells us that provided the vector we add in is not in the span of the original vectors, we retain the linear independence of the set. This will in turn give us a set of 4 linearly independent vectors in $\mathbb{R}^4$, and so by Theorem 6 it will have to be a basis for $\mathbb{R}^4$.

Now, we need to find a vector that is not in the span of the given vectors. This is easy to do provided you notice that all of the given vectors have a zero in the fourth component. This means that every vector in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ will have a zero in the fourth component. Therefore, all we need to do is take any vector that has a non-zero fourth component and we'll have a vector outside $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$; for example $(0,0,0,1)$ or $(1,1,1,1)$. The last one seems to be in keeping with the pattern of the original three vectors, so we'll use it to get the following set of four vectors,
$$\mathbf{v}_1 = (1,0,0,0) \quad \mathbf{v}_2 = (1,1,0,0) \quad \mathbf{v}_3 = (1,1,1,0) \quad \mathbf{v}_4 = (1,1,1,1)$$
Since this set is still linearly independent and now has 4 vectors, by Theorem 6 this set is a basis for $\mathbb{R}^4$ (you should verify this).
[Return to Problems]
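Checking that a candidate vector lies outside the span of the current set is a one-line rank computation, e.g. in SymPy:

```python
import sympy as sp

v1 = sp.Matrix([1, 0, 0, 0])
v2 = sp.Matrix([1, 1, 0, 0])
v3 = sp.Matrix([1, 1, 1, 0])
v4 = sp.Matrix([1, 1, 1, 1])   # the vector we want to add

# v4 is outside span{v1, v2, v3} exactly when appending it raises the rank.
print(sp.Matrix.hstack(v1, v2, v3).rank())       # 3
print(sp.Matrix.hstack(v1, v2, v3, v4).rank())   # 4 -> enlarged set is still independent
```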

(b) The two vectors here are linearly independent (verify this), and $\dim(M_{22}) = 4$, so we'll need to add in two vectors to get a basis. We will have to do this in two steps, however. The first vector we add cannot be in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2\}$, and the second vector we add cannot be in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, where $\mathbf{v}_3$ is the new vector we added in the first step.

So, first notice that every matrix in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ will have zeroes in the second column, so anything that doesn't have a zero in at least one entry of the second column will work for $\mathbf{v}_3$. We'll choose the following for $\mathbf{v}_3$,
$$\mathbf{v}_3 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$$
Note that this is probably not the best choice, since it has non-zero entries in both entries of the second column. It would have been easier to choose something that had a zero in one of those entries. However, not doing that will allow us to make a point about choosing the second vector. Here is the list of vectors that we've got to this point,
$$\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix} \quad \mathbf{v}_2 = \begin{bmatrix}0&0\\1&0\end{bmatrix} \quad \mathbf{v}_3 = \begin{bmatrix}0&1\\0&1\end{bmatrix}$$
Now, we need to find a fourth vector, and it needs to be outside of $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Note that because of our choice of $\mathbf{v}_3$, every matrix in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ will have identical numbers in both entries of the second column, and so we can choose any new vector that does not have identical entries in the second column and we'll have something outside of $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Again, we'll go with something that is probably not the best choice if we had to work with this basis, but let's not get too locked into always taking the easy choice. There are, on occasion, reasons to choose vectors other than the obvious and easy ones. In this case we'll use,
$$\mathbf{v}_4 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
This gives us the following set of vectors,
$$\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix} \quad \mathbf{v}_2 = \begin{bmatrix}0&0\\1&0\end{bmatrix} \quad \mathbf{v}_3 = \begin{bmatrix}0&1\\0&1\end{bmatrix} \quad \mathbf{v}_4 = \begin{bmatrix}0&1\\0&0\end{bmatrix}$$
and they will be a basis for $M_{22}$, since these are four linearly independent vectors in a vector space of dimension 4.
[Return to Problems]

We'll close out this section with a couple of theorems and an example that relate the dimensions of subspaces of a vector space to the dimension of the vector space itself.

Theorem 8  Suppose that $W$ is a subspace of a finite dimensional vector space $V$. Then $W$ is also finite dimensional.

Proof: Suppose that $\dim(V) = n$. Let's also suppose that $W$ is not finite dimensional and that $S$ is a basis for $W$. Since we've assumed that $W$ is not finite dimensional, we know that $S$ will not have a finite number of vectors in it. However, since $S$ is a basis for $W$, we know that its vectors must be linearly independent, and we also know that they must be vectors in $V$. This, however, means that we've got a set of more than $n$ linearly independent vectors in $V$, and this contradicts the results of Theorem 3(a). Therefore $W$ must be finite dimensional as well.

We can actually go a step further here than this theorem.

Theorem 9  Suppose that $W$ is a subspace of a finite dimensional vector space $V$. Then $\dim(W) \le \dim(V)$, and if $\dim(W) = \dim(V)$ then in fact we have $W = V$.

Proof: By Theorem 8 we know that $W$ must be a finite dimensional vector space, so let's suppose that $S = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_k\}$ is a basis for $W$. Now, $S$ is either a basis for $V$ or it isn't. If $S$ is a basis for $V$, then by Theorem 5 we have $\dim(V) = \dim(W) = k$. On the other hand, if $S$ is not a basis for $V$, then because the vectors of $S$ must be linearly independent (they form a basis for $W$), Theorem 7(b) says it can be expanded into a basis for $V$, and so we then know that $\dim(W) < \dim(V)$. So, we've shown that in every case we must have $\dim(W) \le \dim(V)$.

Now, let's just assume that all we know is that $\dim(W) = \dim(V) = n$. In this case $S$ will be a set of $n$ linearly independent vectors in a vector space of dimension $n$, and so by Theorem 6, $S$ must be a basis for $V$ as well. This means that any vector $\mathbf{u}$ from $V$ can be written as a linear combination of vectors from $S$. However, since $S$ is also a basis for $W$, this means that $\mathbf{u}$ must also be in $W$. So, we've just shown that every vector in $V$ must also be in $W$, and because $W$ is a subspace of $V$ we know that every vector in $W$ is also in $V$. The only way for this to be true is if $W = V$.

We should probably work one quick example illustrating this theorem.

Example 7  Determine a basis and dimension for the null space of
$$A = \begin{bmatrix} 7 & 2 & -2 & -4 & 3 \\ -3 & -3 & 0 & 2 & 1 \\ 4 & -1 & -8 & 0 & 20 \end{bmatrix}$$
Solution
First recall that to find the null space of a matrix we need to solve the following system of equations,
$$\begin{bmatrix} 7 & 2 & -2 & -4 & 3 \\ -3 & -3 & 0 & 2 & 1 \\ 4 & -1 & -8 & 0 & 20 \end{bmatrix}\begin{bmatrix} x_1\\x_2\\x_3\\x_4\\x_5 \end{bmatrix} = \begin{bmatrix} 0\\0\\0 \end{bmatrix}$$
We solved a similar system back in Example 7 of the Solving Systems of Equations section, so we'll leave it to you to verify that the solution is,
$$x_1 = \tfrac{2}{3}t + \tfrac{1}{3}s \qquad x_2 = 0 \qquad x_3 = \tfrac{1}{3}t + \tfrac{8}{3}s \qquad x_4 = t \qquad x_5 = s \qquad s \text{ and } t \text{ are any numbers}$$
Now, recall that the null space of an $n \times m$ matrix will be a subspace of $\mathbb{R}^m$, so the null space of this matrix must be a subspace of $\mathbb{R}^5$, and so its dimension should be 5 or less.

To verify this we'll need the basis for the null space. This is actually easier to find than you might think. The null space will consist of all vectors in $\mathbb{R}^5$ that have the form,
$$\mathbf{x} = (x_1, x_2, x_3, x_4, x_5) = \left(\tfrac{2}{3}t + \tfrac{1}{3}s,\; 0,\; \tfrac{1}{3}t + \tfrac{8}{3}s,\; t,\; s\right)$$
Now, split this up into two vectors: one that contains only terms with a $t$ in them and one that contains only terms with an $s$ in them. Then factor the $t$ and $s$ out of the vectors,
$$\mathbf{x} = \left(\tfrac{2}{3}t,\, 0,\, \tfrac{1}{3}t,\, t,\, 0\right) + \left(\tfrac{1}{3}s,\, 0,\, \tfrac{8}{3}s,\, 0,\, s\right) = t\left(\tfrac{2}{3}, 0, \tfrac{1}{3}, 1, 0\right) + s\left(\tfrac{1}{3}, 0, \tfrac{8}{3}, 0, 1\right)$$
So, we can see that the null space is the set of all vectors that are a linear combination of
$$\mathbf{v}_1 = \left(\tfrac{2}{3}, 0, \tfrac{1}{3}, 1, 0\right) \qquad \mathbf{v}_2 = \left(\tfrac{1}{3}, 0, \tfrac{8}{3}, 0, 1\right)$$
and so the null space of $A$ is spanned by these two vectors. You should also verify that these two vectors are linearly independent, and so they in fact form a basis for the null space of $A$. This also means that the null space of $A$ has a dimension of 2, which is less than 5, just as Theorem 9 suggests it should be.
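If you want a quick check on a computation like this, SymPy can produce a basis for the null space directly:

```python
import sympy as sp

A = sp.Matrix([[ 7,  2, -2, -4,  3],
               [-3, -3,  0,  2,  1],
               [ 4, -1, -8,  0, 20]])

# nullspace() returns a list of basis vectors for the null space of A.
for v in A.nullspace():
    print(v.T)
# [2/3, 0, 1/3, 1, 0] and [1/3, 0, 8/3, 0, 1], matching the basis above
```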


Change of Basis

In Example 1 of the previous section we saw that the vectors $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$ formed a basis for $\mathbb{R}^3$. This means that every vector in $\mathbb{R}^3$, for example the vector $\mathbf{x} = (10, 5, 0)$, can be written as a linear combination of these three vectors. Of course this is not the only basis for $\mathbb{R}^3$. There are many other bases for $\mathbb{R}^3$ out there in the world, not the least of which is the standard basis for $\mathbb{R}^3$,
$$\mathbf{e}_1 = (1,0,0) \qquad \mathbf{e}_2 = (0,1,0) \qquad \mathbf{e}_3 = (0,0,1)$$
The standard basis for any vector space is generally the easiest to work with, but unfortunately there are times when we need to work with other bases. In this section we're going to take a look at a way to move between two different bases for a vector space and see how to write a general vector as a linear combination of the vectors from each basis.

To start this section off we're going to need a way to quickly distinguish between the various linear combinations we get from each basis. The following definition will help with this.

Definition 1  Suppose that $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is a basis for a vector space $V$ and that $\mathbf{u}$ is any vector from $V$. Since $\mathbf{u}$ is a vector in $V$ it can be expressed as a linear combination of the vectors from $S$ as follows,
$$\mathbf{u} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$
The scalars $c_1, c_2, \ldots, c_n$ are called the coordinates of $\mathbf{u}$ relative to the basis $S$. The coordinate vector of $\mathbf{u}$ relative to $S$ is denoted by $(\mathbf{u})_S$ and defined to be the following vector in $\mathbb{R}^n$,
$$(\mathbf{u})_S = (c_1, c_2, \ldots, c_n)$$

Note that by Theorem 1 of the previous section we know that the linear combination of vectors from the basis will be unique for $\mathbf{u}$, and so the coordinate vector $(\mathbf{u})_S$ will also be unique. Also, on occasion it will be convenient to think of the coordinate vector as a matrix. In these cases we will call it the coordinate matrix of $\mathbf{u}$ relative to $S$. The coordinate matrix will be denoted and defined as follows,
$$[\mathbf{u}]_S = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$$
At this point we should probably also give a quick warning about coordinate vectors. In most cases, although not all as we'll see shortly, the coordinate vector/matrix is NOT the vector itself that we're after. It is nothing more than the coefficients of the basis vectors that we need in order to write the given vector as a linear combination of the basis vectors. It is very easy to confuse the coordinate vector/matrix with the vector itself if we aren't paying attention, so be careful.

Let's see some examples of coordinate vectors.

Example 1  Determine the coordinate vector of $\mathbf{x} = (10, 5, 0)$ relative to the following bases.
(a) The standard basis for $\mathbb{R}^3$, $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. [Solution]
(b) The basis $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$. [Solution]

Solution
In each case we'll need to determine how to write $\mathbf{x} = (10, 5, 0)$ as a linear combination of the given basis vectors.

(a) The standard basis for $\mathbb{R}^3$, $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$.
In this case the linear combination is simple to write down,
$$\mathbf{x} = (10, 5, 0) = 10\mathbf{e}_1 + 5\mathbf{e}_2 + 0\mathbf{e}_3$$
and so the coordinate vector for $\mathbf{x}$ relative to the standard basis for $\mathbb{R}^3$ is,
$$(\mathbf{x})_S = (10, 5, 0)$$
So, in the case of the standard basis vectors we've got that $(\mathbf{x})_S = (10, 5, 0) = \mathbf{x}$. This is, of course, what makes the standard basis vectors so nice to work with: the coordinate vector relative to the standard basis is just the vector itself.
[Return to Problems]

(b) The basis $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
Now, in this case we'll have a little work to do. We'll first need to set up the following vector equation,
$$(10, 5, 0) = c_1(1,-1,1) + c_2(0,1,2) + c_3(3,0,-1)$$
and we'll need to determine the scalars $c_1$, $c_2$ and $c_3$. We saw how to solve this kind of vector equation in both the section on Span and the section on Linear Independence. We need to set up the following system of equations,
$$\begin{aligned} c_1 + 3c_3 &= 10 \\ -c_1 + c_2 &= 5 \\ c_1 + 2c_2 - c_3 &= 0 \end{aligned}$$
We'll leave it to you to verify that the solution to this system is,
$$c_1 = -2 \qquad c_2 = 3 \qquad c_3 = 4$$
The coordinate vector for $\mathbf{x}$ relative to $A$ is then,
$$(\mathbf{x})_A = (-2, 3, 4)$$
[Return to Problems]
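Finding a coordinate vector is nothing more than solving a linear system, so it automates nicely. Here is a NumPy sketch using the basis $A$ from part (b) on a made-up vector $\mathbf{x} = (1,1,1)$ (not one from the example, just to show the pattern).

```python
import numpy as np

# Columns of B are the basis vectors of A = {v1, v2, v3}.
B = np.array([[ 1, 0,  3],
              [-1, 1,  0],
              [ 1, 2, -1]], dtype=float)

x = np.array([1.0, 1.0, 1.0])   # hypothetical vector, for illustration only
print(np.linalg.solve(B, x))    # [-0.2  0.8  0.4] = (-1/5, 4/5, 2/5)
```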

As always, we should do an example or two in a vector space other than $\mathbb{R}^n$.

Example 2  Determine the coordinate vector of $\mathbf{p} = 4 - 2x + 3x^2$ relative to the following bases.
(a) The standard basis for $P_2$, $S = \{1, x, x^2\}$. [Solution]
(b) The basis for $P_2$, $A = \{\mathbf{p}_0, \mathbf{p}_1, \mathbf{p}_2\}$, where $\mathbf{p}_0 = 1$, $\mathbf{p}_1 = 4x$ and $\mathbf{p}_2 = 5x^2$. [Solution]

Solution
(a) The standard basis for $P_2$, $S = \{1, x, x^2\}$.
So, we need to write $\mathbf{p}$ as a linear combination of the standard basis vectors in this case. However, it's already written in that way. So, the coordinate vector for $\mathbf{p}$ relative to the standard basis vectors is,
$$(\mathbf{p})_S = (4, -2, 3)$$
The ease with which we can write down this vector is why this set of vectors is called the standard basis for $P_2$.
[Return to Problems]

(b) The basis for $P_2$, $A = \{\mathbf{p}_0, \mathbf{p}_1, \mathbf{p}_2\}$.
Okay, this set is similar to the standard basis, but the vectors are a little different, so we can expect the coordinate vector to change. Note as well that we proved in Example 5(b) of the previous section that this set is a basis. We'll need to find scalars $c_1$, $c_2$ and $c_3$ for the following linear combination,
$$4 - 2x + 3x^2 = c_1\mathbf{p}_0 + c_2\mathbf{p}_1 + c_3\mathbf{p}_2 = c_1(1) + c_2(4x) + c_3(5x^2)$$
This will mean solving the following system of equations,
$$c_1 = 4 \qquad 4c_2 = -2 \qquad 5c_3 = 3$$
This is not a terribly difficult system to solve. Here is the solution,
$$c_1 = 4 \qquad c_2 = -\tfrac{1}{2} \qquad c_3 = \tfrac{3}{5}$$
The coordinate vector for $\mathbf{p}$ relative to this basis is then,
$$(\mathbf{p})_A = \left(4, -\tfrac{1}{2}, \tfrac{3}{5}\right)$$
[Return to Problems]

Example 3  Determine the coordinate vector of $\mathbf{v} = \begin{bmatrix} 1 & 1 \\ 0 & -4 \end{bmatrix}$ relative to the following bases.
(a) The standard basis of $M_{22}$, $S = \left\{\begin{bmatrix}1&0\\0&0\end{bmatrix}, \begin{bmatrix}0&1\\0&0\end{bmatrix}, \begin{bmatrix}0&0\\1&0\end{bmatrix}, \begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}$. [Solution]
(b) The basis for $M_{22}$, $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ where $\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix}0&0\\1&0\end{bmatrix}$, $\mathbf{v}_3 = \begin{bmatrix}0&1\\0&1\end{bmatrix}$ and $\mathbf{v}_4 = \begin{bmatrix}0&1\\0&0\end{bmatrix}$. [Solution]

Solution
(a) As with the previous two examples, the standard basis is called that for a reason. It is very easy to write any matrix as a linear combination of these vectors. Here it is for this case,
$$\begin{bmatrix} 1&1\\0&-4 \end{bmatrix} = (1)\begin{bmatrix}1&0\\0&0\end{bmatrix} + (1)\begin{bmatrix}0&1\\0&0\end{bmatrix} + (0)\begin{bmatrix}0&0\\1&0\end{bmatrix} + (-4)\begin{bmatrix}0&0\\0&1\end{bmatrix}$$
The coordinate vector for $\mathbf{v}$ relative to the standard basis is then,
$$(\mathbf{v})_S = (1, 1, 0, -4)$$
[Return to Problems]

(b) This one will be a little work, as usual, but won't be too bad. We'll need to find scalars $c_1$, $c_2$, $c_3$ and $c_4$ for the following linear combination,
$$\begin{bmatrix} 1&1\\0&-4 \end{bmatrix} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + c_4\mathbf{v}_4 = \begin{bmatrix} c_1 & c_3 + c_4 \\ c_2 & c_3 \end{bmatrix}$$
Adding the matrices on the right into a single matrix and setting components equal gives the following system of equations that will need to be solved,
$$c_1 = 1 \qquad c_3 + c_4 = 1 \qquad c_2 = 0 \qquad c_3 = -4$$
Not a bad system to solve. Here is the solution,
$$c_1 = 1 \qquad c_2 = 0 \qquad c_3 = -4 \qquad c_4 = 5$$

The coordinate vector for $\mathbf{v}$ relative to this basis is then,
$$(\mathbf{v})_A = (1, 0, -4, 5)$$
[Return to Problems]

Before we move on we should point out that the order in which we list our basis elements is important. To see this, let's take a look at the following example.

Example 4  The vectors $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$ form a basis for $\mathbb{R}^3$. Let $A = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ and $B = \{\mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_1\}$ be different orderings of these vectors, and determine the vector in $\mathbb{R}^3$ that has the following coordinate vectors.
(a) $(\mathbf{x})_A = (3, -1, 8)$
(b) $(\mathbf{x})_B = (3, -1, 8)$

Solution
So, these are both the same coordinate vector, but they are relative to different orderings of the basis vectors. Determining the vector in $\mathbb{R}^3$ for each is a simple thing to do. Recall that the coordinate vector is nothing more than the scalars in the linear combination, and so all we need to do is reform the linear combination and then multiply and add everything out to determine the vector. The one thing that we need to be careful of is order, however. The first scalar is the coefficient of the first vector listed in the set, the second scalar in the coordinate vector is the coefficient for the second vector listed, etc.

(a) Here is the work for this part.
$$\mathbf{x} = 3(1,-1,1) + (-1)(0,1,2) + 8(3,0,-1) = (27, -4, -7)$$
(b) And here is the work for this part.
$$\mathbf{x} = 3(0,1,2) + (-1)(3,0,-1) + 8(1,-1,1) = (5, -5, 15)$$

So, we clearly get different vectors simply by rearranging the order of the vectors in our basis.

Now that we've got the coordinate vectors out of the way, we want to find a quick and easy way to convert between the coordinate vectors from one basis to a different basis. This is called a change of basis. Actually, it will be easier to convert the coordinate matrix for a vector, but these are essentially the same thing as the coordinate vectors, so if we can convert one we can convert the other.

We will develop the method for vectors in a 2-dimensional space (not necessarily $\mathbb{R}^2$), and in the process we will see how to do this for any vector space. So, let's start off and assume that $V$ is a vector space and that $\dim(V) = 2$. Let's also suppose that we have two bases for $V$. The old basis,
$$B = \{\mathbf{v}_1, \mathbf{v}_2\}$$
and the new basis,

$$C = \{\mathbf{w}_1, \mathbf{w}_2\}$$
Now, because $B$ is a basis for $V$, we can write each of the basis vectors from $C$ as a linear combination of the vectors from $B$,
$$\mathbf{w}_1 = a\,\mathbf{v}_1 + b\,\mathbf{v}_2 \qquad\qquad \mathbf{w}_2 = c\,\mathbf{v}_1 + d\,\mathbf{v}_2$$
This means that the coordinate matrices of the vectors from $C$ relative to the basis $B$ are,
$$[\mathbf{w}_1]_B = \begin{bmatrix} a \\ b \end{bmatrix} \qquad\qquad [\mathbf{w}_2]_B = \begin{bmatrix} c \\ d \end{bmatrix}$$
Next, let $\mathbf{u}$ be any vector in $V$. In terms of the new basis, $C$, we can write $\mathbf{u}$ as,
$$\mathbf{u} = c_1\mathbf{w}_1 + c_2\mathbf{w}_2$$
and so its coordinate matrix relative to $C$ is,
$$[\mathbf{u}]_C = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$
Now, we know how to write the basis vectors from $C$ as linear combinations of the basis vectors from $B$, so substitute these into the linear combination for $\mathbf{u}$ above. This gives,
$$\mathbf{u} = c_1(a\,\mathbf{v}_1 + b\,\mathbf{v}_2) + c_2(c\,\mathbf{v}_1 + d\,\mathbf{v}_2)$$
Rearranging gives the following equation,
$$\mathbf{u} = (a c_1 + c\, c_2)\mathbf{v}_1 + (b c_1 + d c_2)\mathbf{v}_2$$
We now know the coordinate matrix of $\mathbf{u}$ relative to the old basis $B$. Namely,
$$[\mathbf{u}]_B = \begin{bmatrix} a c_1 + c\, c_2 \\ b c_1 + d c_2 \end{bmatrix}$$
We can now do a little rewrite as follows,
$$[\mathbf{u}]_B = \begin{bmatrix} a c_1 + c\, c_2 \\ b c_1 + d c_2 \end{bmatrix} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$$
So, if we define $P$ to be the matrix,
$$P = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$
where the columns of $P$ are the coordinate matrices for the basis vectors of $C$ relative to $B$, we can convert the coordinate matrix for $\mathbf{u}$ relative to the new basis $C$ into a coordinate matrix for $\mathbf{u}$ relative to the old basis $B$ as follows,
$$[\mathbf{u}]_B = P[\mathbf{u}]_C$$
Note that this may seem a little backwards at this point. We're converting to a new basis $C$, and yet we've found a way to instead find the coordinate matrix for $\mathbf{u}$ relative to the old basis $B$ and

not the other way around. However, as we'll see, we can use this process to go the other way around as well. Also, it could be that we have a coordinate matrix for a vector relative to the new basis and we need to determine what the coordinate matrix relative to the old basis will be, and this will allow us to do that.

Here is the formal definition of how to perform a change of basis between two basis sets.

Definition 2  Suppose that $V$ is an $n$-dimensional vector space, and further suppose that $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ and $C = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_n\}$ are two bases for $V$. The transition matrix from $C$ to $B$ is defined to be,
$$P = \big[\; [\mathbf{w}_1]_B \quad [\mathbf{w}_2]_B \quad \cdots \quad [\mathbf{w}_n]_B \;\big]$$
where the $i$-th column of $P$ is the coordinate matrix of $\mathbf{w}_i$ relative to $B$. The coordinate matrix of a vector $\mathbf{u}$ in $V$, relative to $B$, is then related to the coordinate matrix of $\mathbf{u}$ relative to $C$ by the following equation,
$$[\mathbf{u}]_B = P[\mathbf{u}]_C$$

We should probably take a look at an example or two at this point.

Example 5  Consider the standard basis for $\mathbb{R}^3$, $B = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, and the basis $C = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ where $\mathbf{v}_1 = (1,-1,1)$, $\mathbf{v}_2 = (0,1,2)$ and $\mathbf{v}_3 = (3,0,-1)$.
(a) Find the transition matrix from $C$ to $B$. [Solution]
(b) Find the transition matrix from $B$ to $C$. [Solution]
(c) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (-2, 3, 4)$. [Solution]
(d) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (9, -1, -8)$. [Solution]
(e) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (10, 5, 0)$. [Solution]
(f) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (-6, 7, -2)$. [Solution]

Solution
Note as well that we gave the coordinate vectors in the last four parts of the problem statement to conserve space. When we go to work with them we'll need to convert them to coordinate matrices.

(a) Find the transition matrix from $C$ to $B$.
When the basis we're going to ($B$ in this case) is the standard basis for the vector space, computing the transition matrix is generally pretty simple. Recall that the columns of $P$ are just the coordinate matrices of the vectors in $C$ relative to $B$. However, when $B$ is the standard basis we saw in Example 1 above that the coordinate vector (and hence the coordinate matrix) is simply the vector itself. Therefore, the transition matrix in this case is,
$$P = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}$$
[Return to Problems]
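In code the transition matrix from part (a) is just the basis vectors stacked as columns, precisely because $B$ is the standard basis. A quick sketch:

```python
import numpy as np

v1, v2, v3 = [1, -1, 1], [0, 1, 2], [3, 0, -1]

# Coordinate matrices relative to the standard basis are the vectors
# themselves, so they simply become the columns of P.
P = np.column_stack([v1, v2, v3]).astype(float)
print(P)
```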

(b) Find the transition matrix from $B$ to $C$.
First, do not make the mistake of thinking that the transition matrix here will be the same as the transition matrix from part (a). It won't be. To find this transition matrix we need the coordinate matrices of the standard basis vectors relative to $C$. This means that we need to write each of the standard basis vectors as linear combinations of the basis vectors from $C$. We will leave it to you to verify the following linear combinations,
$$\mathbf{e}_1 = \tfrac{1}{10}\mathbf{v}_1 + \tfrac{1}{10}\mathbf{v}_2 + \tfrac{3}{10}\mathbf{v}_3 \qquad \mathbf{e}_2 = -\tfrac{3}{5}\mathbf{v}_1 + \tfrac{2}{5}\mathbf{v}_2 + \tfrac{1}{5}\mathbf{v}_3 \qquad \mathbf{e}_3 = \tfrac{3}{10}\mathbf{v}_1 + \tfrac{3}{10}\mathbf{v}_2 - \tfrac{1}{10}\mathbf{v}_3$$
The coordinate matrices for each of these are then,
$$[\mathbf{e}_1]_C = \begin{bmatrix} 1/10 \\ 1/10 \\ 3/10 \end{bmatrix} \qquad [\mathbf{e}_2]_C = \begin{bmatrix} -3/5 \\ 2/5 \\ 1/5 \end{bmatrix} \qquad [\mathbf{e}_3]_C = \begin{bmatrix} 3/10 \\ 3/10 \\ -1/10 \end{bmatrix}$$
The transition matrix from $B$ to $C$ is then,
$$P' = \begin{bmatrix} 1/10 & -3/5 & 3/10 \\ 1/10 & 2/5 & 3/10 \\ 3/10 & 1/5 & -1/10 \end{bmatrix}$$
So, a significantly different matrix, as suggested at the start of this problem. Also, notice we used a slightly different notation, $P'$, to make sure that we can keep the two transition matrices separate for this problem.
[Return to Problems]

(c) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (-2, 3, 4)$.
Okay, we've done most of the work for this problem. The remaining steps are just doing some matrix multiplication. Note as well that we already know what the answer to this is from Example 1 above. Here is the matrix multiplication for this part,
$$[\mathbf{u}]_B = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} -2 \\ 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix}$$
Sure enough, we got the coordinate matrix for the vector that we converted to get $(\mathbf{u})_C = (-2, 3, 4)$ in Example 1.
[Return to Problems]

(d) Use the result of part (a) to compute $[\mathbf{u}]_B$ given $(\mathbf{u})_C = (9, -1, -8)$.
The matrix multiplication for this part is,
$$[\mathbf{u}]_B = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{bmatrix}\begin{bmatrix} 9 \\ -1 \\ -8 \end{bmatrix} = \begin{bmatrix} -15 \\ -10 \\ 15 \end{bmatrix}$$
So, what have we learned here? Well, we were given the coordinate vector of a vector relative to $C$. Since the vectors in $C$ are not the standard basis vectors, we don't really have a frame of reference for what this vector might actually look like. However, with this computation we now know the coordinates of the vector relative to the standard basis, and that means we actually know what the vector is. In this case the vector is,
$$\mathbf{u} = (-15, -10, 15)$$
So, as you can see, even though we're considering $C$ to be the new basis here, we really did need to determine the coordinate matrix of the vector relative to the old basis, since that allowed us to quickly determine just what the vector was. Remember that the coordinate matrix/vector is not the vector itself, only the coefficients for the linear combination of the basis vectors.
[Return to Problems]

(e) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (10, 5, 0)$.
Again, here we are really just verifying the result of Example 1 in this part. Here is the matrix multiplication,
$$[\mathbf{u}]_C = \begin{bmatrix} 1/10 & -3/5 & 3/10 \\ 1/10 & 2/5 & 3/10 \\ 3/10 & 1/5 & -1/10 \end{bmatrix}\begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 3 \\ 4 \end{bmatrix}$$
And again, we got the result that we would expect to get.
[Return to Problems]

(f) Use the result of part (b) to compute $[\mathbf{u}]_C$ given $(\mathbf{u})_B = (-6, 7, -2)$.
Here is the matrix multiplication for this part,
$$[\mathbf{u}]_C = \begin{bmatrix} 1/10 & -3/5 & 3/10 \\ 1/10 & 2/5 & 3/10 \\ 3/10 & 1/5 & -1/10 \end{bmatrix}\begin{bmatrix} -6 \\ 7 \\ -2 \end{bmatrix} = \begin{bmatrix} -27/5 \\ 8/5 \\ -1/5 \end{bmatrix}$$
So what does this give us? Well, first we know that $(\mathbf{u})_C = \left(-\tfrac{27}{5}, \tfrac{8}{5}, -\tfrac{1}{5}\right)$. Also, since $B$ is the standard basis, we know that the vector we're starting with is $\mathbf{u} = (-6, 7, -2)$. Recall that when dealing with the standard basis the coordinate matrix/vector just happens to be the vector itself. Again, do not always expect this to happen. The coordinate matrix/vector that we just found tells us how to write the vector as a linear combination of vectors from the basis $C$. Doing this gives,
$$(-6, 7, -2) = -\tfrac{27}{5}\mathbf{v}_1 + \tfrac{8}{5}\mathbf{v}_2 - \tfrac{1}{5}\mathbf{v}_3$$
[Return to Problems]
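It's worth verifying parts (c) and (e) numerically. The sketch below does the round trip, and also anticipates the theorem at the end of this section by computing the $B$-to-$C$ transition matrix as an inverse.

```python
import numpy as np

P = np.array([[ 1, 0,  3],
              [-1, 1,  0],
              [ 1, 2, -1]], dtype=float)   # transition matrix from C to B
P_inv = np.linalg.inv(P)                   # transition matrix from B to C

u_C = np.array([-2.0, 3.0, 4.0])
u_B = P @ u_C
print(u_B)           # [10.  5.  0.]  -- part (c)
print(P_inv @ u_B)   # [-2.  3.  4.]  -- part (e) recovers the original
```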

Example 6  Consider the standard basis for $P_2$, $B = \{1, x, x^2\}$, and the basis $C = \{\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3\}$ where $\mathbf{p}_1 = 1$, $\mathbf{p}_2 = 4x$ and $\mathbf{p}_3 = 5x^2$.
(a) Find the transition matrix from $C$ to $B$. [Solution]
(b) Determine the polynomial that has the coordinate vector $(\mathbf{p})_C = \left(4, -\tfrac{9}{4}, 11\right)$. [Solution]

Solution
(a) Now, since $B$ (the basis we're going to) is the standard basis, writing down the transition matrix will be easy this time,
$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$
Each column of $P$ is the coefficients of one of the vectors from $C$, since those are also the coordinates of that vector relative to the standard basis. The first row holds the constant terms from each basis vector, the second row holds the coefficients of $x$ from each basis vector, and the third row holds the coefficients of $x^2$ from each basis vector.
[Return to Problems]

(b) We know what the coordinates of the polynomial are relative to $C$, but this is not the standard basis, and so it is not really clear just what the polynomial is. One way to get the solution is to just form up the linear combination with the coordinates as the scalars and compute it. However, it would be somewhat illustrative to use the transition matrix to answer this question. So, we need to find $[\mathbf{p}]_B$, and luckily we've got the correct transition matrix to do that for us. All we need to do is the following matrix multiplication,
$$[\mathbf{p}]_B = P[\mathbf{p}]_C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{bmatrix}\begin{bmatrix} 4 \\ -9/4 \\ 11 \end{bmatrix} = \begin{bmatrix} 4 \\ -9 \\ 55 \end{bmatrix}$$
So, the coordinate vector for $\mathbf{p}$ relative to the standard basis vectors is $(\mathbf{p})_B = (4, -9, 55)$. Therefore, the polynomial is,
$$p(x) = 4 - 9x + 55x^2$$
Note that, as mentioned above, we can also do this problem as follows,
$$p(x) = 4\mathbf{p}_1 - \tfrac{9}{4}\mathbf{p}_2 + 11\mathbf{p}_3 = 4(1) - \tfrac{9}{4}(4x) + 11(5x^2) = 4 - 9x + 55x^2$$
The same answer with less work, but it won't always be less work to do it this way. We just wanted to point out the alternate method of working this problem.
[Return to Problems]

Example 7  Consider the standard basis for $M_{22}$,
$$B = \left\{ \begin{bmatrix} 1&0\\0&0\end{bmatrix}, \begin{bmatrix} 0&1\\0&0\end{bmatrix}, \begin{bmatrix} 0&0\\1&0\end{bmatrix}, \begin{bmatrix} 0&0\\0&1\end{bmatrix} \right\}$$
and the basis $C = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ where $\mathbf{v}_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix}$, $\mathbf{v}_2 = \begin{bmatrix}0&0\\1&0\end{bmatrix}$, $\mathbf{v}_3 = \begin{bmatrix}0&1\\0&1\end{bmatrix}$ and $\mathbf{v}_4 = \begin{bmatrix}0&1\\0&0\end{bmatrix}$.
(a) Find the transition matrix from $C$ to $B$. [Solution]
(b) Determine the matrix that has the coordinate vector $(\mathbf{v})_C = (-8, 2, 5, -1)$. [Solution]

Solution
(a) Now, as with the previous couple of problems, $B$ is the standard basis, but this time let's be a little careful. Let's find one of the columns of the transition matrix in detail to make sure we can quickly write down the remaining columns. Let's look at the fourth column. To find this we need to write $\mathbf{v}_4$ as a linear combination of the standard basis vectors. This is fairly simple to do,
$$\begin{bmatrix} 0&1\\0&0\end{bmatrix} = (0)\begin{bmatrix} 1&0\\0&0\end{bmatrix} + (1)\begin{bmatrix} 0&1\\0&0\end{bmatrix} + (0)\begin{bmatrix} 0&0\\1&0\end{bmatrix} + (0)\begin{bmatrix} 0&0\\0&1\end{bmatrix}$$
So, the coordinate matrix for $\mathbf{v}_4$ relative to $B$, and hence the fourth column of $P$, is,
$$[\mathbf{v}_4]_B = \begin{bmatrix} 0\\1\\0\\0\end{bmatrix}$$
So, each column will just be the entries of the corresponding $\mathbf{v}_i$, and with the order of standard basis vectors we're using here, the first two entries are the first row of $\mathbf{v}_i$ and the last two entries are the second row of $\mathbf{v}_i$. Here is the transition matrix for this problem,

$$P = \begin{bmatrix} 1&0&0&0 \\ 0&0&1&1 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix}$$
[Return to Problems]

(b) So, just as with the previous problem, we have the coordinate vector, but it is for a non-standard basis, and so it's not readily apparent what the matrix will be. As with the previous problem we could just write down the linear combination of the vectors from $C$ and compute it directly, but let's go ahead and use the transition matrix,
$$[\mathbf{v}]_B = \begin{bmatrix} 1&0&0&0 \\ 0&0&1&1 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix}\begin{bmatrix} -8\\2\\5\\-1\end{bmatrix} = \begin{bmatrix} -8\\4\\2\\5\end{bmatrix}$$
Now that we've got the coordinates for $\mathbf{v}$ relative to the standard basis we can write down $\mathbf{v}$,
$$\mathbf{v} = \begin{bmatrix} -8 & 4 \\ 2 & 5 \end{bmatrix}$$
[Return to Problems]

To this point we've only worked examples where one of the bases was the standard basis. Let's work one more example, and this time we'll avoid the standard basis. In this example we'll just find the transition matrices.

Example 8  Consider the two bases for $\mathbb{R}^2$, $B = \{(1,-1), (0,6)\}$ and $C = \{(2,1), (-1,4)\}$.
(a) Find the transition matrix from $C$ to $B$. [Solution]
(b) Find the transition matrix from $B$ to $C$. [Solution]

Solution
Note that you should verify for yourself that these two sets of vectors really are bases for $\mathbb{R}^2$, as we claimed them to be.

(a) To do this we'll need to write the vectors from $C$ as linear combinations of the vectors from $B$. Here are those linear combinations,
$$(2,1) = 2(1,-1) + \tfrac{1}{2}(0,6) \qquad\qquad (-1,4) = -(1,-1) + \tfrac{1}{2}(0,6)$$
The two coordinate matrices are then,

$$[(2,1)]_B = \begin{bmatrix} 2 \\ 1/2 \end{bmatrix} \qquad\qquad [(-1,4)]_B = \begin{bmatrix} -1 \\ 1/2 \end{bmatrix}$$
and the transition matrix is then,
$$P = \begin{bmatrix} 2 & -1 \\ 1/2 & 1/2 \end{bmatrix}$$
[Return to Problems]

(b) Okay, we'll need to do pretty much the same thing here, only this time we need to write the vectors from $B$ as linear combinations of the vectors from $C$. Here are the linear combinations,
$$(1,-1) = \tfrac{1}{3}(2,1) - \tfrac{1}{3}(-1,4) \qquad\qquad (0,6) = \tfrac{2}{3}(2,1) + \tfrac{4}{3}(-1,4)$$
The coordinate matrices are,
$$[(1,-1)]_C = \begin{bmatrix} 1/3 \\ -1/3 \end{bmatrix} \qquad\qquad [(0,6)]_C = \begin{bmatrix} 2/3 \\ 4/3 \end{bmatrix}$$
The transition matrix is,
$$P' = \begin{bmatrix} 1/3 & 2/3 \\ -1/3 & 4/3 \end{bmatrix}$$
[Return to Problems]

In Examples 5 and 8 above we computed both transition matrices for each direction. There is another way of computing the second transition matrix from the first, and we will close out this section with the theorem that tells us how to do that.

Theorem 1  Suppose that $V$ is a finite dimensional vector space and that $P$ is the transition matrix from $C$ to $B$. Then,
(a) $P$ is invertible and,
(b) $P^{-1}$ is the transition matrix from $B$ to $C$.

You should go back to Examples 5 and 8 above and verify that the two transition matrices are in fact inverses of each other. Also, note that due to the difficulties sometimes present in finding the inverse of a matrix, it might actually be easier to compute the second transition matrix directly, as we did above.
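Here is a two-line numerical check of this theorem for the matrices from Example 8:

```python
import numpy as np

P = np.array([[2.0, -1.0],         # transition matrix from C to B
              [0.5,  0.5]])
Q = np.array([[ 1/3, 2/3],         # transition matrix from B to C
              [-1/3, 4/3]])

print(P @ Q)   # the 2x2 identity (up to rounding), so Q really is P^{-1}
```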

Fundamental Subspaces

In this section we want to take a look at some important subspaces that are associated with matrices. In fact, they are so important that they are often called the fundamental subspaces of a matrix. We've actually already seen one of the fundamental subspaces, the null space, previously, although we will give its definition here again for the sake of completeness.

Before we give the formal definitions of the fundamental subspaces, we need to quickly review a concept that we first saw back when we were looking at matrix arithmetic. Given an $n \times m$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$
the row vectors (we called them row matrices at the time) are the vectors in $\mathbb{R}^m$ formed out of the rows of $A$. The column vectors (again, we called them column matrices at the time) are the vectors in $\mathbb{R}^n$ that are formed out of the columns of $A$.

Example 1  Write down the row vectors and column vectors for
$$A = \begin{bmatrix} -1 & 5 \\ 0 & -4 \\ 9 & 2 \\ 7 & -3 \end{bmatrix}$$
Solution
The row vectors are,
$$\mathbf{r}_1 = \begin{bmatrix} -1 & 5\end{bmatrix} \quad \mathbf{r}_2 = \begin{bmatrix} 0 & -4\end{bmatrix} \quad \mathbf{r}_3 = \begin{bmatrix} 9 & 2\end{bmatrix} \quad \mathbf{r}_4 = \begin{bmatrix} 7 & -3\end{bmatrix}$$
The column vectors are,
$$\mathbf{c}_1 = \begin{bmatrix} -1 \\ 0 \\ 9 \\ 7 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 5 \\ -4 \\ 2 \\ -3 \end{bmatrix}$$
Note that despite the fact that we're calling them vectors, we are using matrix notation for them. The reason is twofold. First, they really are row/column matrices, so we may as well denote them as such, and second, in this way we can keep the orientation of each to remind us whether or not they are row vectors or column vectors. In other words, row vectors are listed horizontally and column vectors are listed vertically. Because we'll be using the matrix notation for the row and column vectors, we'll be using matrix notation for vectors in general in this section, so we won't be mixing and matching the notations too much.

Here then are the definitions of the three fundamental subspaces that we'll be investigating in this section.

Definition 1  Suppose that $A$ is an $n \times m$ matrix.
(a) The subspace of $\mathbb{R}^m$ that is spanned by the row vectors of $A$ is called the row space of $A$.
(b) The subspace of $\mathbb{R}^n$ that is spanned by the column vectors of $A$ is called the column space of $A$.
(c) The set of all $\mathbf{x}$ in $\mathbb{R}^m$ such that $A\mathbf{x} = \mathbf{0}$ (which is a subspace of $\mathbb{R}^m$ by a theorem from the Subspaces section) is called the null space of $A$.

We are going to be particularly interested in the bases for each of these subspaces, and that in turn means that we're going to be able to discuss the dimension of each of them. At this point we can give the notation for the dimension of the null space, but we'll need to wait a bit before we do so for the row and column spaces. The reason for the delay will be apparent once we reach that point. So, let's go ahead and give the notation for the null space.

Definition 2  The dimension of the null space of $A$ is called the nullity of $A$ and is denoted by $\operatorname{nullity}(A)$.

We should work an example at this point. Because we've already seen how to find the basis for the null space (Example 4(b) in the Subspaces section and Example 7 of the Basis section), we'll do one example now and then devote the remainder of the discussion to finding the basis/dimension for the row and column spaces. Note that we will see an example or two of null spaces later in this section as well.

Example 2  Determine a basis for the null space of the following matrix.
$$A = \begin{bmatrix} 2 & -4 & 1 & -3 & -4 & 1 \\ 1 & 1 & 0 & -3 & 1 & -3 \\ -1 & 1 & -1 & 4 & 1 & 4 \end{bmatrix}$$
Solution
So, to find the null space we need to solve the following system of equations,
$$\begin{aligned} 2x_1 - 4x_2 + x_3 - 3x_4 - 4x_5 + x_6 &= 0 \\ x_1 + x_2 - 3x_4 + x_5 - 3x_6 &= 0 \\ -x_1 + x_2 - x_3 + 4x_4 + x_5 + 4x_6 &= 0 \end{aligned}$$
We'll leave it to you to verify that the solution is given by,
$$x_1 = 2t + r \quad x_2 = t - s + 2r \quad x_3 = 3t + 5r \quad x_4 = t \quad x_5 = s \quad x_6 = r \qquad t, s, r \text{ are any real numbers}$$
In matrix form the solution can be written as,
$$\mathbf{x} = \begin{bmatrix} 2t + r \\ t - s + 2r \\ 3t + 5r \\ t \\ s \\ r \end{bmatrix} = t\begin{bmatrix} 2\\1\\3\\1\\0\\0 \end{bmatrix} + s\begin{bmatrix} 0\\-1\\0\\0\\1\\0 \end{bmatrix} + r\begin{bmatrix} 1\\2\\5\\0\\0\\1 \end{bmatrix}$$
So, the solution can be written as a linear combination of the three linearly independent vectors (verify the linearly independent claim!),
$$\mathbf{x}_1 = \begin{bmatrix} 2\\1\\3\\1\\0\\0 \end{bmatrix} \qquad \mathbf{x}_2 = \begin{bmatrix} 0\\-1\\0\\0\\1\\0 \end{bmatrix} \qquad \mathbf{x}_3 = \begin{bmatrix} 1\\2\\5\\0\\0\\1 \end{bmatrix}$$
and so these three vectors form a basis for the null space, since they span the null space and are linearly independent. Note that this also means that the null space has a dimension of 3, since there are three basis vectors for the null space, and so we can see that
$$\operatorname{nullity}(A) = 3$$
Again, remember that we'll be using matrix notation for vectors in this section.
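The row reduction behind Example 2 can be checked with SymPy; rref() also reports which variables end up as the pivot variables.

```python
import sympy as sp

A = sp.Matrix([[ 2, -4,  1, -3, -4,  1],
               [ 1,  1,  0, -3,  1, -3],
               [-1,  1, -1,  4,  1,  4]])

R, pivots = A.rref()
print(pivots)   # (0, 1, 2): x1, x2, x3 are the pivot variables
print(R)        # read x1, x2, x3 off in terms of the free x4, x5, x6
```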

Okay, now that we've got an example of a basis for the null space taken care of, we need to move on to finding bases (and hence the dimensions) for the row and column spaces of a matrix. However, before we do that we first need a couple of theorems out of the way. The first theorem tells us how to find a basis for a matrix that is in row-echelon form.

Theorem 1  Suppose that the matrix $U$ is in row-echelon form. The row vectors containing leading 1's (so the non-zero row vectors) will form a basis for the row space of $U$. The column vectors that contain the leading 1's from the row vectors will form a basis for the column space of $U$.

Example 3  Find a basis for the row and column space of the following matrix.
$$U = \begin{bmatrix} 1 & 5 & -2 & 0 & 5 \\ 0 & 1 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
Solution
Okay, the basis for the row space is simply all the row vectors that contain a leading 1. So, for this matrix the basis for the row space is,
$$\mathbf{r}_1 = \begin{bmatrix} 1 & 5 & -2 & 0 & 5 \end{bmatrix} \quad \mathbf{r}_2 = \begin{bmatrix} 0 & 1 & 3 & 0 & 0 \end{bmatrix} \quad \mathbf{r}_3 = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

We can also see that the dimension of the row space will be 3. The basis for the column space will be the columns that contain leading 1's, and so for this matrix the basis for the column space will be,
$$\mathbf{c}_1 = \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} 5\\1\\0\\0 \end{bmatrix} \qquad \mathbf{c}_5 = \begin{bmatrix} 5\\0\\1\\0 \end{bmatrix}$$
Note that we subscripted the vectors here with the column that each came out of. We will generally do that for these problems. Also note that the dimension of the column space is 3 as well.

Now, all of this is fine provided we have a matrix in row-echelon form. However, as we know, most matrices will not be in row-echelon form. The following two theorems will tell us how to find the basis for the row and column space of a general matrix.

Theorem 2  Suppose that $A$ is a matrix and $U$ is a matrix in row-echelon form that has been obtained by performing row operations on $A$. Then the row space of $A$ and the row space of $U$ are the same space.

So, how does this theorem help us? Well, if the matrices $A$ and $U$ have the same row space, then if we know a basis for one of them we will have a basis for the other. Notice as well that we assumed the matrix $U$ is in row-echelon form, and we do know how to find a basis for its row space. Therefore, to find a basis for the row space of a matrix $A$ we'll need to reduce it to row-echelon form. Once in row-echelon form we can write down a basis for the row space of $U$, but that is the same as the row space of $A$, and so that set of vectors will also be a basis for the row space of $A$.

So, what about a basis for the column space? That's not quite as straightforward, but it is almost as simple.

Theorem 3  Suppose that $A$ and $B$ are two row equivalent matrices (so we got from one to the other by row operations). Then a set of column vectors from $A$ will be a basis for the column space of $A$ if and only if the corresponding columns from $B$ form a basis for the column space of $B$.

How does this theorem help us to find a basis for the column space of a general matrix? Well, let's start with a matrix $A$ and reduce it to row-echelon form, $U$ (which we'll need for a basis for the row space anyway). Now, because we arrived at $U$ by applying row operations to $A$, we know that $A$ and $U$ are row equivalent. Next, from Theorem 1 we know how to identify the columns from $U$ that will form a basis for the column space of $U$. These columns will probably not be a basis for the column space of $A$; however, what Theorem 3 tells us is that the corresponding columns from $A$ will form a basis for the column space of $A$. For example, suppose the columns 1, 2, 5 and 8 from $U$ form a basis for the column space of $U$. Then columns 1, 2, 5 and 8 from $A$ will form a basis for the column space of $A$.
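This recipe (reduce, find the pivot columns, then take those same columns from the original matrix) is short enough to write down directly. Here it is applied to the matrix from Example 2, anticipating Example 4 below.

```python
import sympy as sp

A = sp.Matrix([[ 2, -4,  1, -3, -4,  1],
               [ 1,  1,  0, -3,  1, -3],
               [-1,  1, -1,  4,  1,  4]])

_, pivots = A.rref()
basis = [A.col(j) for j in pivots]   # columns of A itself, per Theorem 3
for c in basis:
    print(c.T)
```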

Before we work an example, we can now talk about the dimension of the row and column space of a matrix $A$. From our theorems above we know that to find a basis for both the row and column space of a matrix $A$ we first need to reduce it to row-echelon form, and we can get a basis for the row and column space from that.

Let's go back and take a look at Theorem 1 in a little more detail. According to this theorem, the rows with leading 1's form a basis for the row space and the columns containing those same leading 1's form a basis for the column space. Now, there are a fixed number of leading 1's, and each leading 1 is in a separate column; two leading 1's in the same column would mean that one of them was not actually a leading 1. Think about this for a second. If there are $k$ leading 1's in a row-echelon matrix, then there are $k$ row vectors in a basis for the row space, and so the row space has a dimension of $k$. However, since each of the leading 1's is in a separate column, there are also $k$ column vectors that form a basis for the column space, and so the column space also has a dimension of $k$. This will always happen, and this is the reason that we delayed talking about the dimension of the row and column space above. We needed to get a couple of theorems out of the way so we could give the following theorem/definition.

Theorem 4  Suppose that $A$ is a matrix. Then the row space of $A$ and the column space of $A$ will have the same dimension. We call this common dimension the rank of $A$ and denote it by $\operatorname{rank}(A)$.

Note that if $A$ is an $n \times m$ matrix, we know that the row space will be a subspace of $\mathbb{R}^m$ and hence have a dimension of $m$ or less, and that the column space will be a subspace of $\mathbb{R}^n$ and hence have a dimension of $n$ or less. Then, because we know that the dimension of the row and column space must be the same, we have the following upper bound for the rank of a matrix,
$$\operatorname{rank}(A) \le \min(n, m)$$
We should now work an example.

Example 4  Find a basis for the row and column space of the matrix from Example 2 above. Determine the rank of the matrix.

Solution
Before starting this example, let's note that by the upper bound for the rank above we know that the largest the rank can be is 3, since that is the smaller of the number of rows and columns in $A$. So, the first thing that we need to do is get the matrix into row-echelon form. We will leave it to you to verify that the following is one possible row-echelon form for the matrix from Example 2 above. If you need a refresher on how to reduce a matrix to row-echelon form, you can go back to the section on Solving Systems of Equations. Also, recall that there is more than one possible row-echelon form for a given matrix.
$$U = \begin{bmatrix} 1 & 0 & 0 & -2 & 0 & -1 \\ 0 & 1 & 0 & -1 & 1 & -2 \\ 0 & 0 & 1 & -3 & 0 & -5 \end{bmatrix}$$
So, a basis for the row space of the matrix will be every row that contains a leading 1 (all of them in this case). A basis for the row space is then,
$$\mathbf{r}_1 = \begin{bmatrix} 1 & 0 & 0 & -2 & 0 & -1 \end{bmatrix} \quad \mathbf{r}_2 = \begin{bmatrix} 0 & 1 & 0 & -1 & 1 & -2 \end{bmatrix} \quad \mathbf{r}_3 = \begin{bmatrix} 0 & 0 & 1 & -3 & 0 & -5 \end{bmatrix}$$

Next, the first three columns of $U$ will form a basis for the column space of $U$, since they all contain the leading 1's. Therefore the first three columns of $A$ will form a basis for the column space of $A$. This gives the following basis for the column space of $A$,
$$\mathbf{c}_1 = \begin{bmatrix} 2\\1\\-1 \end{bmatrix} \qquad \mathbf{c}_2 = \begin{bmatrix} -4\\1\\1 \end{bmatrix} \qquad \mathbf{c}_3 = \begin{bmatrix} 1\\0\\-1 \end{bmatrix}$$
Now, as Theorem 4 suggested, both the row space and the column space of $A$ have dimension 3, and so we have that
$$\operatorname{rank}(A) = 3$$

Before going on to another example, let's stop for a bit and take a look at the results of Examples 2 and 4. From these two examples we saw that the rank and nullity of the matrix used in those examples were both 3. The fact that they were the same won't always happen, as we'll see shortly, and so isn't all that important. What is important to note is that $3 + 3 = 6$, and there were 6 columns in this matrix. This in fact will always be the case.

Theorem 5  Suppose that $A$ is an $n \times m$ matrix. Then,
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = m$$

Let's take a look at a couple more examples now.

Example 5  Find a basis for the null space, row space and column space of the following matrix. Determine the rank and nullity of the matrix.
$$A = \begin{bmatrix} -2 & 1 & 5 & 6 & 14 \\ 4 & -1 & -9 & 0 & -8 \\ 0 & 1 & 1 & 12 & 20 \\ -3 & 1 & 7 & -1 & 7 \end{bmatrix}$$
Solution
Before we get started, we can notice that the rank can be at most 4, since that is the smaller of the number of rows and number of columns.

We'll find the null space first, since that was the first thing asked for. To do this we'll need to solve the following system of equations,
$$\begin{aligned} -2x_1 + x_2 + 5x_3 + 6x_4 + 14x_5 &= 0 \\ 4x_1 - x_2 - 9x_3 - 8x_5 &= 0 \\ x_2 + x_3 + 12x_4 + 20x_5 &= 0 \\ -3x_1 + x_2 + 7x_3 - x_4 + 7x_5 &= 0 \end{aligned}$$
You should verify that the solution is,

263 x = t x = t 8s x = t x = s x = s 4 5 s ad t are ay real umbers The ull space is the give by, t 0 t 8s 8 x = t = t+ s 0 s 0 s 0 ad so we ca see that a basis for the ull space is, 0 8 x = x = ullity A =. At this poit we kow the rak of A by Theorem 5 above. Accordig to this theorem the rak must be, Therefore we ow kow that ( ) ( A) ( A) rak = # colums ullity = 5 = This will give us a ice check whe we fid a basis for the row space ad the colum space. We ow kow that each should cotai three vectors. Speakig of which, let s get a basis for the row space ad the colum space. We ll eed to reduce A to row-echelo form first. We ll leave it to you to verify that a possible row-echelo form for A is, U = The rows cotaiig leadig s will form a basis for the row space of A ad so this basis is, r = 5 6 r = 0 4 [ ] [ ] r = [ ] Next, the first, secod ad fourth colums of U cotai leadig s ad so will form a basis for the colum space of U ad this tells us that the first, secod ad fourth colums of A will form a basis for the colum space of A. Here is that basis. 007 Paul Dawkis 58

264 5 4 4 c = c = c = 0 Note that the dimesio of each of these is as we oted it should be above. Example 6 Fid a basis for the ull space, row space ad colum space of the followig matrix. Determie the ullity ad rak of this matrix. 6 A = 8 4 Solutio I this case we ca otice that the rak of this matrix ca be at most sice that is the miimum of the umber of rows ad umber of colums. To fid the ull space we ll eed to solve the followig system of equatios, 6x x = 0 x+ x = 0 8x+ 4x = 0 We ll leave it to you to verify that the solutio to this system is, x = 0 x = 0 This is actually the poit to this problem. There is oly a sigle solutio to the system above, amely the zero vector, 0. Therefore the ull space cosists solely of the zero vector ad vector spaces that cosist solely of the zero vector do ot have a basis ad so we ca t give oe. Also, vector spaces cosistig solely of the zero vectors are defied to have a dimesio of zero. Therefore, the ullity of this matrix is zero. This also tells us that the rak of this matrix must be by Theorem 5. Let s ow fid a basis for the row space ad the colum space. You should verify that oe possible row-reduced form for A is, U = A basis for the row space of A is the, r = [ ] r = [ 0 ] ad sice both colums of U form a basis for the colum space of U both colums from A will form a basis for the colum space of A. The basis for the colum space of A is the, 6 c = c = 8 4 Oce agai, both have dimesio of as we kew they should from our use of Theorem 5 above. 007 Paul Dawkis 59

265 I all of the examples that we ve worked to this poit i fidig a basis for the row space ad the colum space we should otice that the basis we foud for the colum space cosisted of colums from the origial matrix while the basis we foud for the row space did ot cosist of rows from the origial matrix. Also ote that we ca t ecessarily use the same idea we used to get a basis for the colum space to get a basis for the row space. For example let s go back ad take a look at Example 5. The first three rows of U formed a basis for the row space, but that does ot mea that the first three rows of A will also form a basis for the row space. I fact, i this case they wo t. I this case the third row is twice the first row added oto the secod row ad so the first three rows are ot liearly idepedet (which you ll recall is required for a set of vectors to be a basis). So, what do we do if we do wat rows from the origial matrix to form our basis? The aswer to this is surprisigly simple. Example 7 Fid a basis for the row space of the matrix i Example 5 that cosists of rows from the origial matrix. Solutio The first thig that we ll do is take the traspose of A. I doig so the rows of A will become the T T colums of A. This meas that the row space of A will become the colum space of A. Recall as well that we fid a basis for the colum space i terms of colums from the origial T T matrix ( A i this case). So, we ll be fidig a basis for the colum space of A i terms of the T T T colums of A, but the colums of A are the rows of A ad the colum space of A is the row space of A. Therefore, whe this is all said ad doe by fidig a basis for the colum space of T A we will also be fidig a basis for the row space of A ad it will be i terms of rows from A ad ot rows from the row-echelo form of the matrix. So, here is the traspose of A. T A = Here is a possible row-echelo form of the traspose (you should verify this) U = The first, secod ad fourth colums of U form a basis for the colum space of U ad so a basis T for the colum space of A is, 007 Paul Dawkis 60

266 4 4 c = c = 4 c = T Agai, however, the colum space of A is othig more tha the row space of A ad so these three colum are rows from A ad will also form a basis for the row space. So, let s chage otatio a little to make it clear that we re dealig with a basis for the row space ad we ll be doe. Here is a basis for the row space of A i terms of rows from A itself. r = 5 6 r = [ ] [ ] r = [ 7 ] Next we wat to give a quick theorem that gives a relatioship betwee the solutio to a system of equatios ad the colum space of the coefficiet matrix. This theorem ca be useful o occasio. Theorem 6 The system of liear equatios A x= b will be cosistet (i.e. have at least oe solutio) if ad oly if b is i the colum space of A. Note that sice the basis for the colum space of a matrix is give i terms of the certai colums of A this meas that a system of equatios will be cosistet if ad oly if b ca be writte as a liear combiatio of at least some of the colums of A. This should be clear from applicatio of the Theorem above. This theorem tells us that b must be i the colum space of A, but that meas that it ca be writte as a liear combiatio of the basis vectors for the colum space of A. We ll close out this sectio with a couple of theorems relatig the ivertibility of a square matrix A to some of the ideas i this sectio. Theorem 7 Let A be a matrix. The followig statemets are equivalet. (a) A is ivertible. 0, i.e. just the zero vector. (b) The ull space of A is { } (c) ullity( A ) = 0. (d) rak ( A) =. (e) The colums vectors of A form a basis for. (f) The row vectors of A form a basis for. The proof of this theorem follows directly from Theorem 9 i the Properties of Determiats sectio ad from the defiitios of ull space, rak ad ullity so we re ot goig to give it here. We will poit our however that if the rak of a matrix is the a basis for the row (colum) space must cotai vectors, but there are oly rows (colums) i A ad so all the rows (colums) of A must be i the basis. Also, the row (colum) space is a subspace of which also has a dimesio of. These ideas are helpful i showig that (d) will imply either (e) or (f). 007 Paul Dawkis 6

267 Fially, speakig of Theorem 9 i the Properties of Determiat sectio, this was also a theorem listig may equivalet statemets o the ivertibility of a matrix. We ca merge that theorem with Theorem 7 above ito the followig theorem. Theorem 8 Let A be a matrix. The followig statemets are equivalet. (a) A is ivertible. (b) The oly solutio to the system A x = 0 is the trivial solutio. (c) A is row equivalet to I. (d) A is expressible as a product of elemetary matrices. (e) A x= bhas exactly oe solutio for every matrix b. (f) A x= bis cosistet for every matrix b. det A 0 (g) ( ) (h) The ull space of A is { 0 }, i.e. just the zero vector. (i) ullity( A ) = 0. (j) rak ( A) =. (k) The colums vectors of A form a basis for. (l) The row vectors of A form a basis for. 007 Paul Dawkis 6

268 Ier Product Spaces If you go back to the Euclidea -space chapter where we first itroduced the cocept of vectors you ll otice that we also itroduced somethig called a dot product. However, i this chapter, where we re dealig with the geeral vector space, we have yet to itroduce aythig eve remotely like the dot product. It is ow time to do that. However, just as this chapter is about vector spaces i geeral, we are goig to itroduce a more geeral idea ad it will tur out that a dot product will fit ito this more geeral idea. Here is the defiitio of this more geeral idea. Defiitio Suppose u, v, ad w are all vectors i a vector space V ad c is ay scalar. A ier product o the vector space V is a fuctio that associates with each pair of vectors i V, say u ad v, a real umber deoted by uv, that satisfies the followig axioms. (a) uv, = vu, (b) u+ v, w = u, w + v, w (c) cuv, = c uv, (d) uu, 0 ad uu, = 0 if ad oly if u = 0 A vector space alog with a ier product is called a ier product space. Note that we are assumig here that the scalars are real umbers i this defiitio. I fact we probably should have bee usig the terms real vector space ad real ier product space i this defiitio to make it clear. If we were to allow the scalars to be complex umbers (i.e. dealig with a complex vector space) the axioms would chage slightly. Also, i the rest of this sectio if we say that V is a ier product space we are implicitly assumig that it is a vector space ad that some ier product has bee defied o it. If we do ot explicitly give the ier product the the exact ier product that we are usig is ot importat. It will oly be importat i these cases that there has bee a ier product defied o the vector space. Example The Euclidea ier product as defied i the Euclidea -space sectio is a ier product. For referece purposes here is the Euclidea ier product. Give two vectors i, u = ( u, u,, u ) ad v = ( v, v,, v ), the Euclidea ier product is defied to be, uv = uv i = uv + uv + + uv, By Theorem from the Euclidea -space sectio we ca see that this does i fact satisfy all the axioms of the defiitio. Therefore, is a ier product space. Here are some more examples of ier products. 007 Paul Dawkis 6

269 Example Suppose that u = ( u u u ) ad = ( v v v ),,, v,,, are two vectors i that w, w,, w are positive real umbers (called weights) the the weighted Euclidea ier product is defied to be, uv = wuv + wu v + + wu v, ad It is fairly simple to show that this is i fact a ier product. All we eed to do is show that it satisfies all the axioms from Defiitio. So, suppose that u, v, ad a are all vectors i ad that c is a scalar. First ote that because we kow that real umbers commute with multiplicatio we have, uv, = wuv + wuv + + wuv = wvu + wvu + + wvu = vu, So, the first axiom is satisfied. To show the secod axiom is satisfied we just eed to ru through the defiitio as follows, u+ v, a = w( u+ v) a+ w( u + v) a + + w( u + v) a = ( wua + wu a + + wu a) + ( wva + wva + + wva) = ua, + va, ad the secod axiom is satisfied. Here s the work for the third axiom. cuv, = wcuv + wcuv + + w cu v = c( wuv + wuv+ + wuv) = c uv, Fially, for the fourth axiom there are two thigs we eed to check. Here s the first, uu, = wu + wu + + wu 0 Note that this is greater tha or equal to zero because the weights w, w,, w are positive umbers. If we had t made that assumptio there would be o way to guaratee that this would be positive. Now suppose that uu, = 0. Because each of the terms above is greater tha or equal to zero the oly way this ca be zero is if each of the terms is zero itself. Agai, however, the weights are positive umbers ad so this meas that ui = 0 ui = 0, i =,,, We therefore must have u = 0 if uu, = 0. Likewise if u = 0 the by pluggig i we ca see that we must also have uu, = 0 ad so the fourth axiom is also satisfied. 007 Paul Dawkis 64

270 a a b b Example Suppose that A = a a ad B = 4 b b 4 ier product o M ca be defied as, AB, = tr( AB T ) where tr ( C ) is the trace of the matrix C. are two matrices i M. A We will leave it to you to verify that this is i fact a ier product. This is ot difficult oce you show (you ca do a direct computatio to show this) that T T tr A B = tr B A = ab+ ab + ab + ab ( ) ( ) 4 4 This formula is very similar to the Euclidea ier product formula ad so showig that this is a ier product will be almost idetical to showig that the Euclidea ier product is a ier product. There are differeces, but for the most part it is pretty much the same. The ext two examples require that you ve had Calculus ad so if you have t had Calculus you ca skip these examples. Both of these however are very importat ier products i some areas of mathematics, although we re ot goig to be lookig at them much here because of the Calculus requiremet. Example 4 Suppose that f = f ( x) ad = g( x) [ ab, ]. I other words, they are i the vector space C[ a, b ]. A ier product o [, ] be defied as, fg, g are two cotiuous fuctios o the iterval b ( ) ( ) = a f x g x dx C a b ca Provided you remember your Calculus, showig this is a ier product is fairly simple. Suppose C a, b ad that c is ay scalar. that f, g, ad h are cotiuous fuctios i [ ] Here is the work showig the first axiom is satisfied. b b f, g = f x g x dx= g x f x dx= g, f a ( ) ( ) ( ) ( ) The secod axiom is just as simple, b f + g, h = f x + g x h x dx a b ( ( ) ( )) ( ) ( ) ( ) ( ) ( ) b a = f x h x dx+ g x h x dx= fh, + g, h a Here s the third axiom. c f, g = b b = f x g x dx = c f, g a a ( ) ( ) ( ) ( ) Fially, the fourth axiom. This is the oly oe that really requires somethig that you may ot remember from a Calculus class. The previous examples all used properties of itegrals that you a 007 Paul Dawkis 65

271 should remember. First, we ll start with the followig, b b ff, = f x f x dx= f x dx a ( ) ( ) ( ) Now, recall that if you itegrate a cotiuous fuctio that is greater tha or equal to zero the the itegral must also be greater tha or equal to zero. Hece, ff, 0 Next, if f = 0 the clearly we ll have, = 0 the we must also have f = 0. a b f a ff. Likewise, if we have ff ( ), = x dx= 0 Example 5 Suppose that f = f ( x) ad g = g( x) are two vectors i C[ a, b ] ad further suppose that w( x ) > 0 is a cotiuous fuctio called a weight. A weighted ier product o C[ a, b ] ca be defied as, b fg, = f ( x ) g a ( x ) w ( x ) dx We ll leave it to you to verify that this is a ier product. It should be fairly simple if you ve had calculus ad you followed the verificatio of the weighted Euclidea ier product. The key ab,. is agai the fact that the weight is a strictly positive fuctio o the iterval [ ] Okay, oce we have a ier product defied o a vector space we ca defie both a orm ad distace for the ier product space as follows. Defiitio Suppose that V is a ier product space. The orm or legth of a vector u i V is defied to be, u = u, u Defiitio Suppose that V is a ier product space ad that u ad v are two vectors i V. d uv, is defied to be, The distace betwee u ad v, deoted by ( ) (, ) d uv = u v We re ot goig to be workig may examples with actual umbers i them i this sectio, but we should work oe or two so at this poit let s pause ad work a example. Note that part (c) i the example below requires Calculus. If you have t had Calculus you should skip that part. 007 Paul Dawkis 66

272 Example 6 For each of the followig compute, vectors ad ier product. =,, 4 (a) u ( ) ad = (,, 0) [Solutio] =,, 4 v i (b) u ( ) ad = (,, 0) v i uv, u ad (, ) d uv for the give pair of with the stadard Euclidea ier product. with the weighed Euclidea ier product usig the weights w =, w = 6 ad w =. [Solutio] 5 (c) u = x ad = x C 0, usig the ier product defied i Example 4. [Solutio] Solutio =,, 4 v i [ ] (a) u ( ) ad = (,, 0) v i with the stadard Euclidea ier product. There really is t much to do here other tha go through the formulas. uv, = = 4 ( )( ) ( )( ) ( )( ) ( ) ( ) ( ) u = u, u = = ( uv) u v ( ) ( ) ( ) ( ) d, = =,,4 = = 6 [Retur to Problems] (b) u = (,, 4) ad = (,, 0) v i the weights w =, w = 6 ad w =. with the weighed Euclidea ier product usig 5 Agai, ot a lot to do other tha formula work. Note however, that eve though we ve got the same vectors as part (a) we should expect to get differet results because we are ow workig with a weighted ier product. uv, = ( )( )( ) + ( )( )( 6) + ( 4)( 0) = 0 5 d 86 u = u, u = ( ) ( ) + ( ) ( 6) + ( 4 ) = = , = =,,4 = = = ( uv) u v ( ) ( ) ( ) ( ) ( ) ( ) So, we did get differet aswers here. Note that i uder this weighted orm u is smaller i some way tha uder the stadard Euclidea orm ad the distace betwee u ad v is larger i some way tha uder the stadard Euclidea orm. [Retur to Problems] 007 Paul Dawkis 67

273 (c) u = x ad v = x i [ 0,] C usig the ier product defied i Example 4. Okay, agai if you have t had Calculus this part wo t make much sese ad you should skip it. If you have had Calculus this should be a fairly simple example. 4 uv, = x( x ) dx= x dx x 0 = = u = u, u = x( x) dx = x dx = x = d( uv, ) = u v = x x = ( x x ) dx = x x + x = [Retur to Problems] Now, we also have all the same properties for the ier product, orm ad distace that we had for the dot product back i the Euclidea -space sectio. We ll list them all here for referece purposes ad so you ca see them with the updated ier product otatio. The proofs for these theorems are practically idetical to their dot product couterparts ad so are t show here. Theorem Suppose u, v, ad w are vectors i a ier product space ad c is ay scalar. The, (a) uv, + w = uv, + uw, (b) u v, w = u, w v, w (c) uv, w = uv, uw, (d) cuv, = c uv, (e) u0, = 0u, = 0 Theorem Cauchy-Schwarz Iequality : Suppose u ad v are two vectors i a ier product space the, uv u v Theorem Suppose u ad v are two vectors i a ier product space ad that c is a scalar the, (a) u 0 (b) u = 0 if ad oly if u=0. (c) cu = c u (d) u+ v u + v - Usually called the Triagle Iequality 007 Paul Dawkis 68

274 Theorem 4 Suppose u, v, ad w are vectors i a ier product space the, d uv, 0 (a) ( ) (b) d (, ) = 0 (c) d( uv, ) = d( vu, ) (d) d(, ) d(, ) + d(, ) uv if ad oly if u=v. uv uw wv - Usually called the Triagle Iequality There was also a importat cocept that we saw back i the Euclidea -space sectio that we ll eed i the ext sectio. Here is the defiitio for this cocept i terms of ier product spaces. Defiitio 4 Suppose that u ad v are two vectors i a ier product space. They are said to be orthogoal if uv, = 0. Note that whether or ot two vectors are orthogoal will deped greatly o the ier product that we re usig. Two vectors may be orthogoal with respect to oe ier product defied o a vector space, but ot orthogoal with respect to a secod ier product defied o the same vector space. Example 7 The two vectors u = (,, 4) ad = (,, 0) v i are ot orthogoal with respect to the stadard Euclidea ier product, but are orthogoal with respect to the weighted w =. 5 Euclidea ier product with weights w =, w = 6 ad We saw the computatios for these back i Example 6. Now that we have the defiitio of orthogoality out of the way we ca give the geeral versio of the Pythagorea Theorem of a ier product space. Theorem 5 Suppose that u ad v are two orthogoal vectors i a ier product space the, u+ v = u + v There is oe fial topic that we wat to briefly touch o i this sectio. I previous sectios we spet quite a bit of time talkig about subspaces of a vector space. There are also subspaces that will oly arise if we are workig with a ier product space. The followig defiitio gives oe such subspace. Defiitio 5 Suppose that W is a subspace of a ier product space V. We say that a vector u from V is orthogoal to W if it is orthogoal to every vector i W. The set of all vectors that are orthogoal to W is called the orthogoal complemet of W ad is deoted by W. We say that W ad W are orthogoal complemets. We re ot goig to be doig much with the orthogoal complemet i these otes, although they will show up o occasio. We just wated to ackowledge that there are subspaces that are oly goig to be foud i ier product spaces. Here are a couple of ice theorems pertaiig to orthogoal complemets. 007 Paul Dawkis 69

275 Theorem 6 Suppose W is a subspace of a ier product space V. The, (a) W is a subspace of V. (b) Oly the zero vector, 0, is commo to both W ad W. (c) ( W ) = W. Or i other words, the orthogoal complemet of W is W. Here is a ice theorem that relates some of the fudametal subspaces that we were discussig i the previous sectio. Theorem 7 If A is a m matrix the, (a) The ull space of A ad the row space of A are orthogoal complemets i respect to the stadard Euclidea ier product. T (b) The ull space of A ad the colum space of A are orthogoal complemets i with respect to the stadard Euclidea ier product. m with 007 Paul Dawkis 70

276 Orthoormal Basis We ow eed to come back ad revisit the topic of basis. We are goig to be lookig at a special kid of basis i this sectio that ca arise i a ier product space, ad yes it does require a ier product space to costruct. However, before we do that we re goig to eed to get some prelimiary topics out of the way first. We ll first eed to get a set of defiitios out of way. Defiitio Suppose that S is a set of vectors i a ier product space. (a) If each pair of distict vectors from S is orthogoal the we call S a orthogoal set. (b) If S is a orthogoal set ad each of the vectors i S also has a orm of the we call S a orthoormal set. Let s take a quick look at a example. Example Give the three vectors v = ( ), v = ( ) ad = ( ), 0, 0,, 0 v,0, 4 i aswer each of the followig. (a) Show that they form a orthogoal set uder the stadard Euclidea ier product for but ot a orthoormal set. [Solutio] (b) Tur them ito a set of vectors that will form a orthoormal set of vectors uder the stadard Euclidea ier product for. [Solutio] Solutio (a) Show that they form a orthogoal set uder the stadard Euclidea ier product for but ot a orthoormal set. All we eed to do here to show that they form a orthogoal set is to compute the ier product of all the possible pairs ad show that they are all zero. v, v = ( )( 0) + ( 0)( ) + ( )( 0) = 0 v, v = ( )( ) + ( 0)( 0) + ( )( 4) = 0 v, v = ( 0)( ) + ( )( 0) + ( 0)( 4) = 0 So, they do form a orthogoal set. To show that they do t form a orthoormal set we just eed to show that at least oe of them does ot have a orm of. For the practice we ll compute all the orms. v v v ( ) ( ) ( ) = = 5 ( ) ( ) ( ) = = ( ) ( ) ( ) = = 0 = 5 So, oe of them has a orm of, but the other two do t ad so they are ot a orthoormal set of vectors. [Retur to Problems] 007 Paul Dawkis 7

277 (b) Tur them ito a set of vectors that will form a orthoormal set of vectors uder the stadard Euclidea ier product for. We ve actually doe most of the work here for this part of the problem already. Back whe we were workig i we saw that we could tur ay vector v ito a vector with orm by dividig by its orm as follows, v v This ew vector will have a orm of. So, we ca tur each of the vectors above ito a set of vectors with orm. u = v = (,0, ) =,0, v u = v = ( 0,,0 ) = ( 0,,0 ) v u = v = (,0, 4 ) =,0, v All that remais is to show that this ew set of vectors is still orthogoal. We ll leave it to you to verify that, u, u = u, u = u, u = 0 ad so we have tured the three vectors ito a set of vectors that form a orthoormal set. [Retur to Problems] We have the followig very ice fact about orthogoal sets. Theorem Suppose S = { v v v },,, is a orthogoal set of o-zero vectors i a ier product space, the S is also a set of liearly idepedet vectors. Proof : Note that we eed the vectors to be o-zero vectors because the zero vector could be i a set of orthogoal vectors ad yet we kow that if a set icludes the zero vector it will be liearly depedet. So, ow that we kow there is a chace that these vectors are liearly idepedet (sice we ve excluded the zero vector) let s form the equatio, cv+ cv + + cv = 0 ad we ll eed to show that the oly scalars that work here are c = 0, c = 0,, c = 0. I fact, we ca do this i a sigle step. All we eed to do is take the ier product of both sides with respect to v i, i=,,,, ad the use the properties of ier products to rearrage thigs a little. c v + c v + + c v, v = 0, v c v, v + c v, v + + c v, v = 0 i i i c v, v + c v, v + + c v, v = 0 i i i i i 007 Paul Dawkis 7

278 Now, because we kow the vectors i S are orthogoal we kow that vi, v j = 0 if i j ad so this reduced dow to, c v, v = 0 i i i Next, sice we kow that the vectors are all o-zero we have vi, v i > 0 ad so the oly way that this ca be zero is if c i = 0. So, we ve show that we must have c = 0, c = 0,, c = 0 ad so these vectors are liearly idepedet. Okay, we are ow ready to move ito the mai topic of this sectio. Sice a set of orthogoal vectors are also liearly idepedet if they just happe to spa the vector space we are workig o they will also form a basis for the vector space. Defiitio Suppose that S = { v v v },,, is a basis for a ier product space. (a) If S is also a orthogoal set the we call S a orthogoal basis. (b) If S is also a orthoormal set the we call S a orthoormal basis. Note that we ve bee usig a orthoormal basis already to this poit. The stadard basis vectors for are a orthoormal basis. The followig fact gives us oe of the very ice properties about orthogoal/orthoormal basis. Theorem Suppose that S = { v v v },,, is a orthogoal basis for a ier product space ad that u is ay vector from the ier product space the,,,, u= uv v + uv v + + uv v v v v If i additio S is i fact a orthoormal basis the, u= u, v v + u, v v + + u, v v Proof : We ll just show that the first formula holds. Oce we have that the secod will follow directly from the fact that all the vectors i a orthoormal set have a orm of. So, give u we eed to fid scalars c, c,, c so that, u= c v + c v + + c v To fid these scalars simply take the ier product of both sides with respect to uv, = c v+ c v + + c v, v i i = c v, v + c v, v + + c v, v i i i v, i =,,,. i 007 Paul Dawkis 7

279 Now, sice we have a orthogoal basis we kow that v, v 0 if i j ad so this reduces to, uv, = c v, v i i i i i j = Also, because v i is a basis vector we kow that it is t the zero vector ad so we also kow that vi, v i > 0. This the gives us,, i c i = uv v, v However, from the defiitio of the orm we see that we ca also write this as,, i c i = uv vi ad so we re doe. i i What this theorem is tellig us is that for ay vector i a ier product space, with a orthogoal/orthoormal basis, it is very easy to write dow the liear combiatio of basis vectors for that vector. I other words, we do t eed to go through all the work to fid the liear combiatios that we were doig i earlier sectios. We would like to be able to costruct a orthogoal/orthoormal basis for a fiite dimesioal vector space give ay basis of that vector space. The followig two theorems will help us to do that. Theorem Suppose that W is a fiite dimesioal subspace of a ier product space V ad further suppose that u is ay vector i V. The u ca be writte as, u= proj u+ proj u W where proj W u is a vector that is i W ad is called the orthogoal projectio of u o W ad proj W u is a vector i W orthogoal to W. W (the orthogoal complemet of W) ad is called the compoet of u Note that this theorem is really a extesio of the idea of projectios that we saw whe we first itroduced the cocept of the dot product. Also ote that proj W u ca be easily computed from proj W u by, proj u= u proj W W u This theorem is ot really the oe that we eed to costruct a orthoormal basis. We will use portios of this theorem, but we eeded it more to ackowledge that we could do projectios ad to get the otatio out of the way. The followig theorem is the oe that will be the mai workhorse of the process. 007 Paul Dawkis 74

280 Theorem 4 Suppose that W is a fiite dimesioal subspace of a ier product space V. Further suppose that W has a orthogoal basis S = { v, v,, v} ad that u is ay vector i V the, proj W,,, u= uv v + uv v + + uv v v v v If i additio S is i fact a orthoormal basis the, proj u= u, v v + u, v v + + u, v v W So, just how does this theorem help us to costruct a orthogoal/orthoormal basis? The followig process, called the Gram-Schmidt process, will costruct a orthogoal/orthoormal basis for a fiite dimesioal ier product space give ay basis. We ll also be able to develop some very ice facts about the basis that we re goig to be costructig as we go through the costructio process. Gram-Schmidt Process Suppose that V is a fiite dimesioal ier product space ad that { v, v,, v} V. The followig process will costruct a orthogoal basis for V, { u, u,, u} orthoormal basis simply divide the Step : Let u = v. Step : Let spa{ } u i s by their orms. u is a basis for. To fid a W = u ad the defie u = proj W v (i.e. u is the portio of v that is orthogoal tou ). Techically, this is all there is to step (oce we show that u 0 ayway) sice u will be orthogoal to u because it is i W. However, this is t terribly useful from a computatioal stadpoit. Usig the result of Theorem ad the formula from Theorem 4 gives us the followig formula for u,, u = v proj W v = v v u u Next, we eed to verify that u 0 because the zero vector caot be a basis vector. To see that u 0 assume for a secod that we do have u = 0. This would give us, v, u v, u v = u = v sice u = v u u But this tells us that v is a multiple of v which we ca t have sice they are both basis vectors ad are hece liearly idepedet. So, u 0. Fially, let s observe a iterestig cosequece of how we foud u. Both u ad u are orthogoal ad so are liearly idepedet by Theorem above ad this meas that they are a 007 Paul Dawkis 75

281 basis for the subspace spa {, } W = u u ad this subspace has dimesio of. However, they are also liear combiatios of v ad v ad so spa v, v which also has dimesio. Therefore, by Theorem 9 from the sectio o Basis we ca see that we must i fact have, spa u, u = spa v, v W is a subspace of { } { } { } So, the two ew vectors, u ad u, will i fact spa the same subspace as the two origial vectors, v ad v, spa. This is a ice cosequece of the Gram-Schmidt process. Step : This step is really a extesio of Step ad so we wo t go ito quite the detail here as we did i Step. First, defie W = spa { u, u } ad the defie u = proj W v ad so u will be the portio of v that is orthogoal to both u ad u. We ca compute u as follows, v, u v, u u = v proj W v = v u u u u Next, both u ad u are liear combiatios of v ad v ad so u ca be thought of as a liear combiatio of v, v, ad v. The because v, v, ad v are liearly idepedet we kow that we must have u 0. You should probably go through the steps of verifyig the claims made here for the practice. With this step we ca also ote that because u is i the orthogoal complemet of W (by costructio) ad because we kow that, W = spa { u, u} = spa { v, v } from the previous step we kow as well that u must be orthogoal to all vectors i W. I particular u must be orthogoal to v ad v. Fially, followig a argumet similar to that i Step we get that, spa u, u, u = spa v, v, v Step 4 : Cotiue i this fashio util we ve foud { } { } There is the Gram-Schmidt process. Goig through the process above, with all the explaatio as we provided, ca be a little dautig ad ca make the process look more complicated tha it i fact is. Let s summarize the process before we go oto a couple of examples. u. 007 Paul Dawkis 76

282 Gram-Schmidt Process Suppose that V is a fiite dimesioal ier product space ad that { v, v,, v} V the a orthogoal basis for V, { u u u },,, is a basis for, ca be foud usig the followig process. u = v, u = v v u u u v, u v, u u = v u u u u,,, u = v v u u v u u v u u u u u To covert the basis to a orthoormal basis simply divide all the ew basis vectors by their orm. Also, due to the costructio process we have spa { u, u,, uk} = spa { v, v,, vk} k =,,, ad k spa,,, k for k =,,. u will be orthogoal to { v v v } Okay, let s go through a couple of examples here. Example Give that v = ( ), v = ( ), ad = ( ) ad,, 0, 0, v, 7, is a basis of assumig that we re workig with the stadard Euclidea ier product costruct a orthogoal basis for. Solutio You should verify that the set of vectors above is i fact a basis for. Now, we ll eed to go through the Gram-Schmidt process a couple of times. The first step is easy. u = v = (,,0 ) The remaiig two steps are goig to ivolve a little more work, but wo t be all that bad. Here is the formula for the secod vector i our orthogoal basis., u = v v u u u ad here is all the quatities that we ll eed. v, u = u = 5 The secod vector is the, u = (, 0, ) (,, 0 ) =,, Paul Dawkis 77

283 The formula for the third (ad fial vector) is, v, u v, u u = v u u u u ad here are the quatities that we eed for this step. v, u = v 6, u = u = 5 u = 5 5 The third vector is the, u =, 7,,, 0,, =,, ( ) ( ) 6 5 So, the orthogoal basis that we ve costructed is, u = (,,0 ) u =,, =,, 5 5 u You should verify that these do i fact form a orthogoal set. Example Give that v = ( ), v = ( ), ad = ( ),, 0, 0, v, 7, is a basis of ad assumig that we re workig with the stadard Euclidea ier product costruct a orthoormal basis for. Solutio First, ote that this is almost the same problem as the previous oe except this time we re lookig for a orthoormal basis istead of a orthogoal basis. There are two ways to approach this. The first is ofte the easiest way ad that is to ackowledge that we ve got a orthogoal basis ad we ca tur that ito a orthoormal basis simply by dividig by the orms of each of the vectors. Let s do it this way ad see what we get. Here are the orms of the vectors from the previous example. u = 5 u = u = 5 Note that i order to elimiate as may square roots as possible we ratioalized the deomiators of the fractios here. Dividig by the orms gives the followig set of vectors. w 5 =,,0 =,, =,, 5 5 w w Okay that s the first way to do it. The secod way is to go through the Gram-Schmidt process ad this time divide by the orm as we fid each ew vector. This will have two effects. First, it will put a fair amout of roots ito the vectors that we ll eed to work with. Secod, because we are turig the ew vectors ito vectors with legth oe the orm i the Gram-Schmidt formula will also be ad so is t eeded. 007 Paul Dawkis 78

284 Let s go through this oce just to show you the differeces. The first ew vector will be, u = v = (,,0 ) =,, v Now, to get the secod vector we first eed to compute, v, u w = v u = v v, u u u however we wo t call it u yet sice we ll eed to divide by it s orm oce we re doe. Also ote that we ve ackowledged that the orm of u is ad so we do t eed that i the formula. Here is the dot product that we eed for this step. v, u = 5 Here is the ew orthogoal vector. w = (, 0, ),, 0 =,, Notice that this is the same as the secod vector we foud i Example. I this case we ll eed to divide by its orm to get the vector that we wat i this case. 5 5 u = w =,, =,, w Fially, for the third orthogoal vector the formula will be, w = v v, u u v, u u ad agai we ve ackowledged that the orms of the first two vectors will be ad so are t eeded i this formula. Here are the dot products that we ll eed. v, u = v, u = 5 0 The orthogoal vector is the, w = (, 7, ),, 0,, =, Agai, this is the third orthogoal vector that we foud i Example. Here is the fial step to get our third orthoormal vector for this problem u = w =,, =,, w So, we got exactly the same vectors as if we did whe we just used the results of Example. Of course that is somethig that we should expect to happe here. So, as we saw i the previous example there are two ways to get a orthoormal basis from ay give basis. Each has its pros ad cos ad you ll eed to decide which method to use. If we 007 Paul Dawkis 79

285 first compute the orthogoal basis ad the divide all of them at the ed by their orms we do t have to work much with square roots, however we do eed to compute orms that we wo t eed otherwise. Agai, it will be up to you to determie what the best method for you to use is. Example 4 Give that v = ( ), v = ( ), v = ( ) ad = ( ),,,,,, 0,, 0, 0 v 4, 0, 0, 0 is a 4 basis of ad assumig that we re workig with the stadard Euclidea ier product 4 costruct a orthoormal basis for. Solutio Now, we re lookig for a orthoormal basis ad so we ve got our two optios o how to proceed here. I this case we ll costruct a orthogoal basis ad the covert that ito a orthoormal basis at the very ed. The first vector is, ( ) u = v =,,, Here s the dot product ad orm we eed for the secod vector. v, u = u = 4 The secod orthogoal vector is the, u = (,,, 0) (,,, ) =,,, For the third vector we ll eed the followig dot products ad orms v, u = v, u = u = 4 u = 4 ad the third orthogoal vector is, u = (,, 0, 0) (,,, ),,,,,, 0 = Fially, for the fourth orthogoal vector we ll eed, v4, u = v4, u = v4, u = 4 u = 4 u = u = 4 ad the fourth vector i out ew orthogoal basis is, 4 u 4 = (,0,0,0 ) (,,, ),,,,,,0,,0,0 = Okay, the orthogoal basis is the, u = (,,, ) u =,,, =,,, 0 4 =,, 0, u u Next, we ll eed their orms so we ca tur this set ito a orthoormal basis. 007 Paul Dawkis 80

286 u = u 6 = u = u 4 = The orthoormal basis is the, w = u =,,, w w w u = u =,,, u = u =,,, u = u =,,0,0 4 4 u4 Now, we saw how to expad a liearly idepedet set of vectors ito a basis for a vector space. We ca do the same thig here with orthogoal sets of vectors ad the Gram-Schmidt process. Example 5 Expad the vectors v = ( ) ad = ( ), 0, v,0, 4 ito a orthogoal basis for ad assume that we re workig with the stadard Euclidea ier product. Solutio First otice that the two vectors are already orthogoal ad liearly idepedet. Sice they are liearly idepedet ad we kow that a basis for will cotai vectors we kow that we ll oly eed to add i oe more vector. Next, sice they are already orthogoal that will simplify some of the work. Now, recall that i order to expad a liearly idepedet set ito a basis for a vector space we eed to add i a vector that is ot i the spa of the origial vectors. Doig so will retai the liear idepedece of the set. Sice both of these vectors have a zero i the secod term we ca add i ay of the followig to the set. ( 0,,0 ) (,,) (,,0 ) ( 0,, ) If we used the first oe we d actually have a orthogoal set without ay work, but that would be borig ad defeat the purpose of the example. To make our life at least somewhat easier with the work let s add i the fourth o to get the set of vectors. v =,0, v =,0, 4 v = 0,, ( ) ( ) ( ) Now, we kow these are liearly idepedet ad sice there are three vectors by Theorem 6 from the sectio o Basis we kow that they form a basis for. However, they do t form a orthogoal basis. To get a orthogoal basis we would eed to perform Gram-Schmidt o the set. However, sice the first two vectors are already orthogoal performig Gram-Schmidt would ot have ay affect (you should verify this). So, let s just reame the first two vectors as, 007 Paul Dawkis 8

287 (,0, ) (,0, 4) u = u = ad the just perform Gram-Schmidt for the third vector. Here are the dot products ad orms that we ll eed. v, u = v, u = 4 u = 5 u = 0 The third vector will the be, 4 u = = 5 0 ( 0,,) (,0, ) (,0, 4) ( 0,,0 ) 007 Paul Dawkis 8

288 Least Squares I this sectio we re goig to take a look at a importat applicatio of orthogoal projectios to icosistet systems of equatios. Recall that a system is called icosistet if there are o solutios to the system. The atural questio should probably arise at this poit of just why we would care about this. Let s take a look at the followig examples that we ca use to motivate the reaso for lookig ito this. Example Fid the equatio of the lie that rus through the four poits (, ), ( ) (, 9) ad (, ). 4,, Solutio So, what we re lookig for are the values of m ad b for which the lie, y = mx+ b will ru through the four poits give above. If we plug these poits ito the lie we arrive at the followig system of equatios. m+ b= 4m+ b= m+ b=9 m+ b= The correspodig matrix form of this system is, 4 m = b 9 Solvig this system (either the matrix form or the equatios) gives us the solutio, m= 4 b= 5 So, the lie y = 4x 5 will ru through the three poits give above. Note that this makes this a cosistet system. Example Fid the equatio of the lie that rus through the four poits(, 70) ( 7,0) ad ( 5, 5)., ( ),, Solutio So, this is essetially the same problem as i the previous example. Here are the system of equatios ad matrix form of the system of equatios that we eed to solve for this problem. m+ b= m+ b= m = 7m+ b= 0 7 b 0 5m+ b=5 5 5 Now, try as we might we wo t fid a solutio to this system ad so this system is icosistet. 007 Paul Dawkis 8

289 The previous two examples were askig for pretty much the same thig ad i the first example we were able to aswer the questio while i the secod we were ot able to aswer the questio. It is the secod example that we wat to look at a little closer. Here is a graph of the poits give i this example. We ca see that these poits do almost fall o a lie. Without the referece lie that we put ito the sketch it would ot be clear that these poits did ot fall oto a lie ad so askig the questio that we did was ot totally ureasoable. Let s further suppose that the four poits that we have i this example came from some experimet ad we kow for some physical reaso that the data should all lie o a straight lie. However, due to iaccuracies i the measurig equipmet caused some (or all) of the umbers to be a little off. I light of this the questio i Example is agai ot ureasoable ad i fact we may still eed to aswer it i some way. That is the poit of this sectio. Give this set of data ca we fid the equatio of a lie that will as closely as possible (whatever this meas ) approximate each of the data poits. Or more geerally, give a icosistet system of equatios, A x= b, ca we fid a vector, let s call it x, so that Ax will be as close to b as possible (agai, what ever this meas ). To aswer this questio let s step back a bit ad take a look at the geeral situatio. So, we will suppose that we have a icosistet system of equatios i m ukows, A x= b, so the coefficiet matrix, A, will have size m. Let s rewrite the system a little ad make the followig defiitio. ε = b Ax We will call ε the error vector ad we ll call ε = bax the error sice it will measure the m distace betwee Ax ad b for ay vector x i (there are m ukows ad so x will be i m ). Note that we re goig to be usig the stadard Euclidea ier product to compute the orm i these cases. The least squares problem is the the followig problem. Least Square Problem Give a icosistet system of equatios, A x= b, we wat to fid a m vector, x, from so that the error ε = b Ax is the smallest possible error. The vector x is called the least squares solutio. 007 Paul Dawkis 84

290 Solvig this problem is actually easier tha it might look at first. The first thig that we ll wat to do is look at a more geeral situatio. The followig theorem will be useful i solvig the least squares problem. Theorem Suppose that W is a fiite dimesioal subspace of a ier product space V ad that u is ay vector i V. The best approximatio to u from W is the proj W u. By best approximatio we mea that for every w (that is ot proj W u ) i W we will have, u proj W u < uw Proof : For ay vector w i W we ca write. u w = u proj u + proj uw ( W ) ( W ) Notice that proj W u wis a differece of vectors i W ad hece must also be i W. Likewise, u proj W u is i fact proj W u, the compoet of u orthogoal to W, ad so is orthogoal to ay vector i W. Therefore proj W u w ad u proj W u are orthogoal vectors. So, by the Pythagorea Theorem we have, u w = u proj u + proj u w = u proj u + proj uw ( ) ( ) W W W W Or, upo droppig the middle term, u w = u proj u + proj uw W W Fially, if we have term we get, w proj W u the we kow that u w > uproj W u projw u w > 0 ad so if we drop this This is equivalet to, ad so we re doe. u w > uproj W u m So, just what does this theorem do for us? Well for ay vector x i we kow that Ax will be a liear combiatio of the colum vectors from A. Now, let W be the subspace of (yes, sice each colum of A has etries) that is spaed by the colum vectors of A. The Ax will ot oly be i W (sice it s a liear combiatio of the colum vectors) but as we let x rage over m all possible vectors i Ax will rage over all of W. m Now, the least squares problem is askig us to fid the vector x i, we re callig it x, so that ε is smaller tha (i.e. smaller orm) tha all other possible values of ε, i.e. ε < ε. If we plug i for the defiitio of the errors we arrive at. b Ax < b Ax 007 Paul Dawkis 85

291 With the least squares problem we are lookig for the closest that we ca get Ax to b. However, this is exactly the type of situatio that Theorem is tellig us how to solve. The Ax rage over all possible vectors i W ad we wat the oe that is closed to some vector b i. Theorem tells us that the oe that we re after is, A x = proj W b Of course we are actually after x ad ot Ax but this does give us oe way to fid x. We could first compute proj W b ad the solve A x = proj W b for x ad we d have the solutio that we re after. There is however a better way of doig this. Before we give that theorem however, we ll eed a quick fact. Theorem Suppose that A is a m matrix with liearly idepedet colums. The, is a ivertible matrix. T AA T Proof : From Theorem 8 i the Fudametal Subspaces sectio we kow that if AA x= 0 has T T oly the trivial solutio the AA will be a ivertible matrix. So, let s suppose that AA x= 0. T This tells us that Ax is i the ull space of A, but we also kow that Ax is i the colum space of A. Theorem 7 from the sectio o Ier Product Spaces tells us that these two spaces are orthogoal complemets ad Theorem 6 from the same sectio tells us that the oly vector i commo to both must be the zero vector ad so we kow that A x= 0. If c, c,, c m are the colums of A the we kow that Ax ca be writte as, Ax= xc + x c + + x c The usig A x= m m 0 we also kow that, Ax= xc + x c + + x c = 0 m m However, sice the colums of A are liearly idepedet this equatios ca oly have the trivial solutio, x= 0. Therefore T AA x= 0 has oly the trivial solutio ad so T AA is a ivertible matrix. The followig theorem will ow give us a better method for fidig the least squares solutio to a system of equatios. Theorem Give the system of equatios A x= b, a least squares solutio to the system deoted by x, will also be a solutio to the associated ormal system, T T A Ax= A b Further if A has liearly idepedet colums the there is a uique least squares solutio give by, T T x = AA Ab ( ) 007 Paul Dawkis 86

292 Proof : Let s suppose that x is a least squares solutio ad so, A x = proj W b Now, let s cosider, b Ax = bproj W b However as poited out i the proof of Theorem we kow that b proj W b is i the orthogoal complemet of W. Next, W is the colum space of A ad by Theorem 7 from the sectio o Ier Product Spaces we kow that the orthogoal complemet of the colum space of A is i fact the T T ull space of A ad so, b proj W b must be i the ull space of A. So, we must the have, T T A ( b projw b) = A ( b Ax) = 0 Or, with a little rewritig we arrive at, T T AAx = Ab ad so we see that x must also be a solutio to the ormal system of equatios. For the secod part we do t have much to do. If the colums of A are liearly idepedet the T AA is ivertible by Theorem above. However, by Theorem 8 i the Fudametal Subspaces T T sectio this meas that AAx= Ab has a uique solutio. To fid the uique solutio we just eed to multiply both sides by the iverse of T AA. So, to fid a least squares solutio to A x= b all we eed to do is solve the ormal system of equatios, T T AAx= Ab ad we will have a least squares solutio. Now we should work a couple of examples. We ll start with Example from above. Example Use least squares to fid the equatio of the lie that will best approximate the, 70 7,0 5, 5. poits ( ), (, ), ( ) ad ( ) Solutio The system of equatios that we eed to solve from Example is, 70 m = 7 b Paul Dawkis 87

293 So, we have, T A= A = 7 b = The ormal system that we eed to solve is the, m b = m 4 = 4 4 b 66 This is a fairly simple system to solve ad upo doig so we get, 47 m= =. b= = So, the lie that best approximates all the poits above is give by, y =.x+ 9.4 The sketch of the lie ad poits after Example above shows this lie i relatio to the poits. Example 4 Fid the least squares solutio to the followig system of equatios. 4 5 x x = 4 5 x Solutio Okay there really is t much to do here other tha ru through the formula. Here are the various matrices that we ll eed here. 4 5 T A= A = 5 4 b = 5 4 The ormal system if the, 5 7 T T AA= 8 6 A 0 b = Paul Dawkis 88

294 5 7 x 8 6 x = x This system is a little messier to solve tha the previous example, but upo solvig we get, x = x = x = I vector form the least squares solutio is the, x = We eed to address oe more issues before we move o to the ext sectio. Whe we opeed this discussio up we said that we were after a solutio, deoted x, so that Ax will be as close to b as possible i some way. We the defied, ε = b Ax ε = b Ax ad stated that what we meat by as close to b as possible was that we wated to fid the x for which, ε < ε for all x x. Okay, this is all fie i terms of mathematically defiig what we mea by as close as possible, but i practical terms just what are we askig for here? Let s go back to Example. For this example the geeral formula for ε is, ( m+ b) ε m ( m+ b ) ε ε = b Ax = = = 0 7 b 0 ( 7m+ b) ε ( 5m+ b) ε 4 So, the compoets of the error vector, ε, each measure just how close each possible choice of m ad b will get us to the exact aswer (which is give by the compoets of b). We ca also thik about this i terms of the equatio of the lie. We ve bee give a set of poits ( xi, y i) ad we wat to determie a m ad a b so that whe we plug x i, the x coordiate or our poit, ito mx + b the error, ε i = yi ( mxi + b) is as small as possible (i some way that we re tryig to figure out here) for all the poits that we ve bee give. The if we plug i the poits that we ve bee give we ll see that this formula is othig more tha the compoets of the error vector Now, i the case of our example we were lookig for, 007 Paul Dawkis 89

295 so that, m x = b ε = bax is as small as possible, or i other words is smaller tha all other possible choices of x. We ca ow aswer just what we mea by as small as possible. First, let s compute the followig, ε = ε + ε + ε + ε 4 The least squares solutio, x, will be the value of x for which, ε = ε + ε + ε + ε4 < ε + ε + ε + ε4 = ε ad hece the ame least squares. The solutio we re after is the value that will give the least value of the sum of the squares of the errors. Example 5 Compute the error for the solutio from Example. Solutio First, the lie that we foud usig least squares is, y =.x+ 9.4 We ca compute the errors for each of the poits by pluggig i the give x value ito this lie ad the takig the differece of the result form the equatio ad the kow y value. Here are the error computatios for each of the four poits i Example. ε = = 4. 4 ( ( ) ) ( () ) ( ( ) ) ( ( ) ) ε = =.7 ε = =4. ε = =.9 O a side ote, we could have just as easily computed these by doig the followig matrix work ε = = The square of the error ad the error is the, ε = = 64. ε = 64. = 8.05 ( ) ( ) ( ) ( ) Now, accordig to our discussio above this meas that if we choose ay other value of m ad b ad compute the error we will arrive at a value that is larger that Paul Dawkis 90

296 QR Decompositio I this sectio we re goig to look at a way to decompose or factor a m matrix as follows. Theorem Suppose that A is a m matrix with liearly idepedet colums the A ca be factored as, A= QR where Q is a m matrix with orthoormal colums ad R is a ivertible m m upper triagular matrix. Proof : The proof here will cosist of actually costructig Q ad R ad showig that they i fact do multiply to give A. Okay, let s start with A ad suppose that it s colums are give by c, c,, c m. Also suppose that we perform the Gram-Schmidt process o these vectors ad arrive at a set of orthoormal vectors u, u,, u m. Next, defie Q (yes, the Q i the theorem statemet) to be the m matrix whose colums are u, u,, u m ad so Q will be a matrix with orthoormal colums. We ca the write A ad Q as, A= c c c Q= u u u [ ] [ ] m c s are i { u u u } Next, because each of the i spa,, m we kow from Theorem of the previous sectio that we ca write each c i as a liear combiatio of u, u,, u m i the followig maer. c = c, u u+ c, u u + + c, um um c = c, u u+ c, u u + + c, um um c = c, u u + c, u u + + c, u u m m m m m m Next, defie R (ad yes, this will evetually be the R from the theorem statemet) to be the m m matrix defied as, c, u c, u cm, u,, m, R c u c u c u = c, um c, um cm, um m Now, let s examie the product, QR. QR = [ u u u ] m c, u c, u c, u m c, u c, u c, u m c, u c, u c, u m m m m 007 Paul Dawkis 9

297 From the sectio o Matrix Arithmetic we kow that the j th colum of this product is simply Q times the j th colum of R. However, if you work through a couple of these you ll see that whe we multiply Q times the j th colum of R we arrive at the formula for c j that we ve got above. I other words, c, u c, u cm, u,, m, QR [ m ] c u c u c u = u u u c, um c, um cm, um = [ c c cm ] = A So, we ca factor A as a product of Q ad R ad Q has the correct form. Now all that we eed to do is to show that R is a ivertible upper triagular matrix ad we ll be doe. First, from the Gram-Schmidt process we kow that u k is orthogoal to c, c,, ck. This meas that all the ier products below the mai diagoal must be zero sice they are all of the form ci, u j with i< j. Now, we kow from Theorem from the Special Matrices sectio that a triagular matrix will be ivertible if the mai diagoal etries, ci, u i, are o-zero. This is fairly easy to show. Here is the geeral formula for u i from the Gram-Schmidt process. u = c c, u u c, u u c, u u i i i i i i i Recall that we re assumig that we foud the orthoormal u i s ad so each of these will have a orm of ad so the orms are ot eeded i the formula. Now, solvig this for c i gives, c = u + c, u u + c, u u + + c, u u i i i i i i i Let s look at the diagoal etries of R. We ll plug i the formula for c i ito the ier product ad do some rewritig usig the properties of the ier product. c, u = u + c, u u + c, u u + + c, u u, u However the i i i i i i i i i = u, u + c, u u, u + c, u u, u + + c, u u, u i i i i i i i i i i u i are orthoormal basis vectors ad so we kow that u, u = 0 j =,, i u, u 0 j i i i Usig these we see that the diagoal etries are othig more tha, c, u = u, u 0 i i i i So, the diagoal etries of R are o-zero ad hece R must be ivertible. 007 Paul Dawkis 9

298 So, ow that we ve gotte the proof out of the way let s work a example. Example Fid the QR-decompositio for the matrix, A = Solutio The colums from A are, c = 0 7 c = c = 0 We performed Gram-Schmidt o these vectors i Example of the previous sectio. So, the orthoormal vectors that we ll use for Q are, u = 5 u = u 0 = ad the matrix Q is, Q = The matrix R is, c, u c, u c, u R = 0 c, u c, u = , c u So, the QR-Decompositio for this matrix is, = We ll leave it to you to verify that this multiplicatio does i fact give A. There is a ice applicatio of the QR-Decompositio to the Least Squares Process that we examied i the previous sectio. To see this however, we will first eed to prove a quick theorem. 007 Paul Dawkis 9

299 Theorem If Q is a m matrix with mthe the colums of Q are a orthoormal set of T vectors i with the stadard Euclidea ier product if ad oly if QQ= I m. Note that the oly way Q ca have orthoormal colums i is to require that m. Because the colums are vectors i ad we kow from Theorem i the Orthoormal Basis sectio that a set of orthogoal vectors will also be liearly idepedet. However, from Theorem i the Liear Idepedece sectio we kow that if m> the colum vectors will be liearly depedet. Also, because we wat to make it clear that we re usig the stadard Euclidea ier product we will go back to the dot product otatio, ui v, istead of the usual ier product otatio, uv,. Proof : Now recall that to prove a if ad oly if theorem we eed to assume each part ad show that this implies the other part. However, there is some work that we ca do that we ll eed i both parts so let s do that first. Let q, q,, q m be the colums of Q. So, [ ] Q = q q q m For the traspose we take the colums of Q ad tur them ito the rows of T T T rows of Q are q, q,, T row vectors, q i ) ad, T q T T Q q = T qm T Q. Therefore, the T q m (the trasposes are eeded to tur the colum vectors, T Now, let s take a look at the product QQ. Etries i the product will be rows of colums of Q ad so the product will be, T T T qq qq qq m T T T T m QQ qq qq qq = T T T qq m qq m qq m m q i, ito T Q times Recallig that T i = vuwe see that we ca also write the product as, uv T QQ qiq qiq qiq q iq q iq q iq q iq q iq q iq m = m m m m m 007 Paul Dawkis 94

300 Now, let s actually do the proof. ( ) Assume that the colums of Q are orthogoal ad show that this meas that we must T haveqq= I m. Sice we are assumig that the colums of Q are orthoormal we kow that, q iq = 0 i j q i q = i, j =,, m i j i i Therefore the product is, So we re doe with this part. 0 0 T 0 0 QQ = = I m 0 0 ( ) Here assume that are orthogoal. T QQ= I m ad we ll eed to show that this meas that the colums of Q So, we re assumig that, qiq qiq qiqm 0 0 T m 0 0 QQ q iq q iq q iq = = = I qmiq qmiq qmiqm 0 0 However, simply by settig etries i these two matrices equal we see that, qiiq j = 0 i j qii qi = i, j =,, m ad this is exactly what it meas for the colums to be orthogoal so we re doe. m The followig theorem ca be used, o occasio, to sigificatly reduce the amout of work required for the least squares problem. Theorem Suppose that A has liearly idepedet colums. The the ormal system associated with A x= b ca be writte as, T Rx= Q b Proof : There really is t much to do here other tha plug formulas i. We ll start with the ormal system for A x= b. T T AAx= Ab Now, A has liearly idepedet colums we kow that it has a QR-Decompositio for A so let s plug the decompositio ito the ormal system ad usig properties of trasposes we ll rewrite thigs a little. 007 Paul Dawkis 95

The following theorem can be used, on occasion, to significantly reduce the amount of work required for the least squares problem.

Theorem 2 Suppose that A has linearly independent columns. Then the normal system associated with Ax = b can be written as,

$$Rx = Q^Tb$$

Proof: There really isn't much to do here other than plug formulas in. We'll start with the normal system for Ax = b,

$$A^TAx = A^Tb$$

Now, since A has linearly independent columns we know that it has a QR-Decomposition, A = QR, so let's plug the decomposition into the normal system and, using properties of transposes, rewrite things a little.

$$\left(QR\right)^TQRx = \left(QR\right)^Tb$$
$$R^TQ^TQRx = R^TQ^Tb$$

Now, since the columns of Q are orthonormal we know that Q^T Q = I_m by Theorem 1 above. Also, we know that R is an invertible matrix and so we know that R^T is also an invertible matrix. So, we'll also multiply both sides by (R^T)^{-1}. Upon doing all this we arrive at,

$$Rx = Q^Tb$$

So, just how is this supposed to help us with the Least Squares Problem? Well, since R is upper triangular this will be a very easy system to solve. It can however take some work to get down to this system. Let's rework the last example from the previous section, only this time we'll use the QR-Decomposition method.

Example 2 Find the least squares solution to the system of equations Ax = b from the last example of the previous section.

Solution
First, we'll leave it to you to verify that the columns of A, which we'll call c_1, c_2 and c_3, are linearly independent.

Now, we'll need to perform Gram-Schmidt on these to get them into a set of orthonormal vectors. The first step is,

$$u_1 = c_1$$

Here are the inner product and norm that we'll need for the second step: ⟨c_2, u_1⟩ and ‖u_1‖². The second vector is then,

$$u_2 = c_2 - \frac{\langle c_2, u_1\rangle}{\left\|u_1\right\|^2}\,u_1$$

The final step will require the inner products ⟨c_3, u_1⟩ and ⟨c_3, u_2⟩ along with the norms ‖u_1‖ and ‖u_2‖. The third, and final, orthogonal vector is then,

$$u_3 = c_3 - \frac{\langle c_3, u_1\rangle}{\left\|u_1\right\|^2}\,u_1 - \frac{\langle c_3, u_2\rangle}{\left\|u_2\right\|^2}\,u_2$$

Okay, these are the orthogonal vectors. If we divide each of them by their norms we will get the orthonormal vectors that we need for the decomposition,

$$w_1 = \frac{u_1}{\left\|u_1\right\|} \qquad\qquad w_2 = \frac{u_2}{\left\|u_2\right\|} \qquad\qquad w_3 = \frac{u_3}{\left\|u_3\right\|}$$

We can now write down Q for the decomposition,

$$Q = \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix}$$

Finally, R is given by,

$$R = \begin{bmatrix} \langle c_1, w_1\rangle & \langle c_2, w_1\rangle & \langle c_3, w_1\rangle \\ 0 & \langle c_2, w_2\rangle & \langle c_3, w_2\rangle \\ 0 & 0 & \langle c_3, w_3\rangle \end{bmatrix}$$

where the inner products are now taken with the orthonormal vectors w_i.

Okay, we can now proceed with the Least Squares process. First we'll need Q^T. The normal system can then be written as,

$$Rx = Q^Tb$$

Because R is upper triangular this corresponds to a system of equations in which the last equation involves only x_3, the one above it involves only x_2 and x_3, and so on. Solving from the bottom up gives x_3, then x_2 and finally x_1, and these are the same values that we received in the previous section.

At this point you are probably asking yourself just why this method is better than the method we used in the previous section. After all, it was a lot of work and some of the numbers were downright awful. The answer is that, by hand, this may not be the best way of doing these problems; however, if you are going to program the least squares method into a computer, all of the steps here are very easy to program and so this method is a very nice method for programming the Least Squares process.
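Since the notes point out that this method shines when programmed, here is one way it might look in Python. This is a sketch under our own choices of names and data, not code from the notes; we let np.linalg.solve handle the small triangular system for simplicity (scipy.linalg.solve_triangular would exploit the triangular structure explicitly).

```python
import numpy as np

def least_squares_qr(A, b):
    """Solve min ||Ax - b|| via the system R x = Q^T b from Theorem 2.

    A must have linearly independent columns.  R is upper triangular,
    so the final solve is just back substitution.
    """
    Q, R = np.linalg.qr(A)            # reduced QR: Q is n x m, R is m x m
    return np.linalg.solve(R, Q.T @ b)

# A made-up overdetermined system: 4 equations, 3 unknowns.
A = np.array([[ 2.0, -1.0,  1.0],
              [ 1.0,  5.0,  2.0],
              [-3.0,  1.0, -4.0],
              [ 1.0, -1.0,  1.0]])
b = np.array([-4.0, 2.0, 5.0, -1.0])

x = least_squares_qr(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```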

Orthogonal Matrices

In this section we're going to be talking about a special kind of matrix called an orthogonal matrix. This is also going to be a fairly short section (at least in relation to many of the other sections in this chapter anyway) to close out the chapter. We'll start with the following definition.

Definition 1 Let Q be a square matrix and suppose that

$$Q^{-1} = Q^T$$

then we call Q an orthogonal matrix.

Notice that because we need to have an inverse for Q in order for it to be orthogonal we are implicitly assuming that Q is an invertible square matrix here.

Before we see any examples of orthogonal matrices (and we have already seen at least one orthogonal matrix) let's get a couple of theorems out of the way.

Theorem 1 Suppose that Q is a square matrix. Then Q is orthogonal if and only if

$$QQ^T = Q^TQ = I$$

Proof: This is a really simple proof that falls directly from the definition of what it means for a matrix to be orthogonal.

(⇒) In this direction we'll assume that Q is orthogonal and so we know that Q^{-1} = Q^T, but this promptly tells us that,

$$QQ^T = Q^TQ = I$$

(⇐) In this direction we'll assume that QQ^T = Q^TQ = I. Since this is exactly what is needed to show that we have an inverse, we can see that Q^{-1} = Q^T and so Q is orthogonal.

The next theorem gives us an easier check for a matrix being orthogonal.

Theorem 2 Suppose that Q is an n×n matrix. Then the following are all equivalent.
(a) Q is orthogonal.
(b) The columns of Q are an orthonormal set of vectors in R^n under the standard Euclidean inner product.
(c) The rows of Q are an orthonormal set of vectors in R^n under the standard Euclidean inner product.

Proof: We've actually done most of this proof already. Normally in this kind of theorem we'd prove a loop of equivalences such as (a) ⇒ (b) ⇒ (c) ⇒ (a). However, in this case if we prove (a) ⇔ (b) and (a) ⇔ (c) we get the above loop of equivalences by default, and it will be much easier to prove the two equivalences, as we'll see.

The equivalence (a) ⇔ (b) is directly given by Theorem 1 from the previous section, since that theorem is in fact a more general version of this equivalence.

The proof of the equivalence (a) ⇔ (c) is nearly identical to the proof of Theorem 1 from the previous section and so we'll leave it to you to fill in the details.

Since it is much easier to verify that the columns/rows of a matrix are orthonormal than it is to check Q^{-1} = Q^T in general, this theorem will be useful for identifying orthogonal matrices.

As noted above, in order for a matrix to be an orthogonal matrix it must be square. So a matrix that is not square, but does have orthonormal columns, will not be orthogonal. Also, note that we did mean to say that the columns are orthonormal. This may seem odd given that we call the matrix orthogonal, when "orthonormal" would probably be a better name for the matrix, but traditionally this kind of matrix has been called orthogonal and so we'll keep up with tradition.

In the previous section we were finding QR-Decompositions and, if you recall, the matrix Q had columns that were a set of orthonormal vectors. So, if Q is a square matrix then it will also be an orthogonal matrix, while if it isn't square then it won't be an orthogonal matrix.

At this point we should probably do an example or two.

Example 1 Consider the two QR-Decompositions, A = QR, that we performed in the previous section.

In the first case the matrix Q is 3×3. By construction this matrix has orthonormal columns and, since it is a square matrix, it is an orthogonal matrix.

In the second case the matrix Q comes from the least squares example and has more rows than columns, so it is not square.

Again, by construction this matrix has orthonormal columns. However, since it is not a square matrix, it is NOT an orthogonal matrix.

Example 2 Find value(s) for a, b, and c for which the following 3×3 matrix will be orthogonal: the first two columns, q_1 and q_2, are fixed (with the first entry of q_1 equal to zero) and the third column is

$$q_3 = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$

Solution
So, the columns of Q are q_1, q_2 and q_3. We will leave it to you to verify that ‖q_1‖ = 1, ‖q_2‖ = 1 and q_1·q_2 = 0, and so all we need to do is find a, b, and c for which we will have ‖q_3‖ = 1, q_1·q_3 = 0 and q_2·q_3 = 0.

Let's start with the two dot products and see what we get. Since the first entry of q_1 is zero, the equation q_1·q_3 = 0 involves only b and c, and it lets us write b in terms of c. Plugging this into the second equation, q_2·q_3 = 0, then gives c in terms of a, and using what we now know about c we can also write b in terms of a.

Now, using the above work we know that in order for the third column to be orthogonal to the first two (since we haven't even touched orthonormal yet) it must be of the form

$$q_3 = a\,v$$

for the fixed vector v determined by those two equations. Finally, we need to make sure that the third column has a norm of 1. In other words we need to

require that ‖q_3‖ = 1, or we can require that ‖q_3‖² = 1, since we know that the norm must be a positive quantity here. So, let's compute ‖q_3‖², set it equal to one and see what we get,

$$1 = \left\|q_3\right\|^2 = a^2\left\|v\right\|^2 \qquad\Rightarrow\qquad a = \pm\frac{1}{\left\|v\right\|}$$

This gives us two possible values of a that we can use, and this in turn means that we could use either of the two resulting vectors (one the negative of the other) for q_3.

A natural question is why do we care about orthogonal matrices? The following theorem gives some very nice properties of orthogonal matrices.

Theorem 3 If Q is an n×n matrix then the following are all equivalent.
(a) Q is orthogonal.
(b) ‖Qx‖ = ‖x‖ for all x in R^n. This is often called preserving norms.
(c) Qx·Qy = x·y for all x and all y in R^n. This is often called preserving dot products.

Proof: We'll prove this set of statements in the order (a) ⇒ (b) ⇒ (c) ⇒ (a).

(a) ⇒ (b): We'll start off by assuming that Q is orthogonal, and let's write down the norm,

$$\left\|Qx\right\| = \left(Qx\cdot Qx\right)^{\frac{1}{2}}$$

However, we know that we can write the dot product as,

$$\left\|Qx\right\| = \left(x\cdot Q^TQx\right)^{\frac{1}{2}}$$

Now we can use the fact that Q is orthogonal to write Q^TQ = I. Using this gives,

$$\left\|Qx\right\| = \left(x\cdot x\right)^{\frac{1}{2}} = \left\|x\right\|$$

which is what we were after.

(b) ⇒ (c): We'll assume that ‖Qx‖ = ‖x‖ for all x in R^n. Let x and y be any two vectors in R^n. Then using Theorem 8 from the section on Euclidean n-space we have,

$$Qx\cdot Qy = \frac{1}{4}\left\|Qx+Qy\right\|^2 - \frac{1}{4}\left\|Qx-Qy\right\|^2 = \frac{1}{4}\left\|Q\left(x+y\right)\right\|^2 - \frac{1}{4}\left\|Q\left(x-y\right)\right\|^2$$

Next, both x + y and x − y are in R^n and so, by assumption and a use of Theorem 8 again, we have,

$$Qx\cdot Qy = \frac{1}{4}\left\|x+y\right\|^2 - \frac{1}{4}\left\|x-y\right\|^2 = x\cdot y$$

which is what we were after in this case.

(c) ⇒ (a): In this case we'll assume that Qx·Qy = x·y for all x and all y in R^n. As we did in the first part of this proof we'll rewrite the dot product on the left,

$$x\cdot Q^TQy = x\cdot y$$

Now, rearrange things a little and we can arrive at the following,

$$\begin{aligned} x\cdot Q^TQy - x\cdot y &= 0 \\ x\cdot\left(Q^TQy - y\right) &= 0 \\ x\cdot\left(Q^TQ - I\right)y &= 0 \end{aligned}$$

Now, this must hold for all x in R^n, so let x = (Q^TQ − I)y. This then gives,

$$\left(Q^TQ - I\right)y \cdot \left(Q^TQ - I\right)y = 0$$

Theorem 2(e) from the Euclidean n-space section tells us that we must then have,

$$\left(Q^TQ - I\right)y = 0$$

and this must be true for all y in R^n. That can only happen if the coefficient matrix of this system is the zero matrix, or,

$$Q^TQ - I = 0 \qquad\Rightarrow\qquad Q^TQ = I$$

Finally, by Theorem 1 above, Q must be orthogonal.

The second and third statements in this theorem are very useful, since they tell us that we can add in or take out an orthogonal matrix from a norm or a dot product at will and we'll preserve the result.

As a final theorem for this section, here are a couple of other nice properties of orthogonal matrices.

Theorem 4 Suppose that A and B are two orthogonal n×n matrices. Then,
(a) A^{-1} is an orthogonal matrix.
(b) AB is an orthogonal matrix.
(c) Either det(A) = 1 or det(A) = −1.

Proof: The proofs of all three parts follow pretty much from basic properties of orthogonal matrices.

(a) Since A is orthogonal its column vectors form an orthonormal set of vectors. Now, by the definition of orthogonal matrices, we have A^{-1} = A^T. But this means that the rows of A^{-1} are nothing more than the columns of A and so are an orthonormal set of vectors, and so by Theorem 2 above A^{-1} is an orthogonal matrix.

(b) In this case let's start with the following norm,

$$\left\|ABx\right\| = \left\|A\left(Bx\right)\right\|$$

where x is any vector from R^n. But A is orthogonal and so by Theorem 3 above must preserve norms. In other words we must have,

$$\left\|ABx\right\| = \left\|A\left(Bx\right)\right\| = \left\|Bx\right\|$$

Now we can use the fact that B is also orthogonal and so will preserve norms as well. This gives,

$$\left\|ABx\right\| = \left\|Bx\right\| = \left\|x\right\|$$

Therefore, the product AB also preserves norms and hence by Theorem 3 must be orthogonal.

(c) In this case we'll start with the fact that since A is orthogonal we know that AA^T = I. Let's take the determinant of both sides,

$$\det\left(AA^T\right) = \det\left(I\right) = 1$$

Next use two facts from the Properties of Determinants section, that the determinant of a product is the product of the determinants and that det(A^T) = det(A), to rewrite this as,

$$\begin{aligned}\det\left(A\right)\det\left(A^T\right) &= 1 \\ \det\left(A\right)\det\left(A\right) &= 1 \\ \left(\det\left(A\right)\right)^2 &= 1\end{aligned}$$

So, we get the result: det(A) = ±1.
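These properties are all easy to watch in action numerically. The snippet below is our own illustration (a 2×2 rotation matrix is a standard example of an orthogonal matrix, not one worked in the notes): it checks the defining property, norm preservation, and det = ±1.

```python
import numpy as np

t = 0.7  # any angle: rotation matrices are orthogonal
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

x = np.array([3.0, -2.0])

print(np.allclose(Q.T @ Q, np.eye(2)))                       # Theorem 1: Q^T Q = I
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # Theorem 3(b): norms preserved
print(np.allclose(np.linalg.inv(Q), Q.T))                    # Definition 1: Q^{-1} = Q^T
print(np.isclose(abs(np.linalg.det(Q)), 1.0))                # Theorem 4(c): det = +-1
```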

Eigenvalues and Eigenvectors

Introduction

This is going to be a very short chapter. The main topic of this chapter will be the Eigenvalues and Eigenvectors section. In this section we will be looking at the special situation where, given a square matrix A and a vector x, the product Ax will be the same as the scalar multiplication λx for some scalar, λ. This idea has important applications in many areas of math and science and so we put it into a chapter of its own.

We'll also have a quick review of determinants, since those will be required in order to do the work in the Eigenvalues and Eigenvectors section. We'll also take a look at an application that uses eigenvalues.

Here is a listing of the topics in this chapter.

Review of Determinants - In this section we'll do a quick review of determinants.

Eigenvalues and Eigenvectors - Here we will take a look at the main section in this chapter. We'll be looking at the concept of eigenvalues and eigenvectors.

Diagonalization - We'll be looking at diagonalizable matrices in this section.
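To make the defining equation Ax = λx concrete before diving into the chapter, here is a quick numerical illustration with a made-up matrix of our own: for an eigenpair returned by NumPy, multiplying by A really is the same as scaling by λ.

```python
import numpy as np

A = np.array([[6.0, -1.0],
              [2.0,  3.0]])

vals, vecs = np.linalg.eig(A)       # eigenvalues and (column) eigenvectors

lam = vals[0]
x = vecs[:, 0]
print(np.allclose(A @ x, lam * x))  # True: A x = lambda x
```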

Review of Determinants

In this section we are going to do a quick review of determinants and we'll be concentrating almost exclusively on how to compute them. For a more in-depth look at determinants you should check out the second chapter, which is devoted to determinants and their properties. Also, we'll acknowledge that the examples in this section are all examples that were worked in the second chapter.

We'll start off with a quick working definition of a determinant. See The Determinant Function from the second chapter for the exact definition of a determinant. What we're going to give here will be sufficient for what we're going to be doing in this chapter.

So, given a square matrix, A, the determinant of A, denoted by det(A), is a function that associates with A a number. That's it. That's what a determinant does. It takes a matrix and associates a number with that matrix.

There is also some alternate notation that we should acknowledge because we'll be using it quite a bit. The alternate notation is det(A) = |A|.

We now need to discuss how to compute determinants. There are many ways of computing determinants, but most of the general methods can lead to some fairly long computations. We will see one general method towards the end of this section, but there are some nice quick formulas that can help with some special cases, so we'll start with those. We'll be working mostly with matrices in this chapter that fit into these special cases. We will start with the formulas for 2×2 and 3×3 matrices.

Definition 1 If

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$

then the determinant of A is,

$$\det\left(A\right) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$$

Definition 2 If

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

then the determinant of A is,

$$\det\left(A\right) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32}$$
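The two formulas translate directly into code. The helper functions below are our own sketch of Definitions 1 and 2 in Python; for anything larger you would use a general routine such as np.linalg.det instead.

```python
def det2(a):
    """Determinant of a 2x2 matrix (list of rows), per Definition 1."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def det3(a):
    """Determinant of a 3x3 matrix (list of rows), per Definition 2."""
    return (a[0][0] * a[1][1] * a[2][2]
            + a[0][1] * a[1][2] * a[2][0]
            + a[0][2] * a[1][0] * a[2][1]
            - a[0][2] * a[1][1] * a[2][0]
            - a[0][1] * a[1][0] * a[2][2]
            - a[0][0] * a[1][2] * a[2][1])

print(det2([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
print(det3([[1, 0, 2], [3, 1, 0], [0, 5, 1]]))  # 31
```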

Okay, we said that these were nice and quick formulas, and the formula for the 2×2 matrix is fairly nice and quick, but the formula for the 3×3 matrix is neither nice nor quick. Luckily there are some nice little tricks that can help us to write down both formulas.

We'll start with the determinant of a 2×2 matrix and sketch in two diagonals: one running from left to right through a_{11} and a_{22}, and one running from right to left through a_{12} and a_{21}. Note that if you multiply along the left-to-right diagonal you will get the first product in the formula for 2×2 matrices, and if you multiply along the right-to-left diagonal you will get the second product in the formula. Also, notice that the right-to-left diagonal gave the product that was subtracted off, while the left-to-right diagonal gave the product that was added.

We can do something similar for 3×3 matrices, but there is a difference. First, we need to tack a copy of the leftmost two columns onto the right side of the determinant. We then have three diagonals that run from left to right and three diagonals that run from right to left. As with the 2×2 case, if we multiply along the left-to-right diagonals we get the products that are added in the formula, and if we multiply along the right-to-left diagonals we get the products that are subtracted in the formula.

Here are a couple of quick examples.

Example 1 Compute the determinant of each of the following matrices: a 2×2 matrix A, and two 3×3 matrices B and C (the same matrices worked in the second chapter).

Solution
(a) We don't really need to sketch in the diagonals for 2×2 matrices. The determinant is simply the product of the diagonal running left to right minus the product of the diagonal running from right to left. The only thing we need to worry about is paying attention to minus signs. It is easy to make a mistake with minus signs in these computations if you aren't paying attention. Multiplying along the two diagonals of A and subtracting gives a non-zero determinant here.

(b) Okay, with this one we'll copy the two columns over and sketch in the diagonals to make sure we've got the idea of these down. Now, just remember to add products along the left-to-right diagonals and subtract products along the right-to-left diagonals. Doing the six multiplications and combining them gives the determinant of B, which is also non-zero.

(c) We'll do this one with a little less detail. We'll copy the columns but not bother to actually sketch in the diagonals this time. Adding and subtracting the six diagonal products in this case gives,

$$\det\left(C\right) = 0$$

As we can see from this example, the determinant of a matrix can be positive, negative or zero. Likewise, as we will see towards the end of this review, we are going to be especially interested in when the determinant of a matrix is zero. Because of this we have the following definition.

Definition 3 Suppose A is a square matrix.
(a) If det(A) = 0 we call A a singular matrix.
(b) If det(A) ≠ 0 we call A a non-singular matrix.

So, in Example 1 above, both A and B are non-singular while C is singular.
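In floating-point arithmetic a computed determinant of a singular matrix rarely comes out exactly zero, so in code the test in Definition 3 is usually done with a tolerance. A small sketch of our own, using a matrix whose rows are visibly dependent:

```python
import numpy as np

C = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # row 2 = 2 * row 1, so det(C) = 0
              [0.0, 1.0, 5.0]])

d = np.linalg.det(C)
print(np.isclose(d, 0.0))   # True: C is singular
```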

Before we proceed we should point out that while there are formulas for the determinants of larger matrices (see the Determinant Function section for details on how to write them down), there are not any easy tricks with diagonals to write them down as there were for 2×2 and 3×3 matrices.

With the statement above made, we should note that there is a simple formula for general matrices of a certain kind. The following theorem gives this formula.

Theorem 1 Suppose that A is an n×n triangular matrix with diagonal entries a_{11}, a_{22}, ..., a_{nn}. Then the determinant of A is,

$$\det\left(A\right) = a_{11}a_{22}\cdots a_{nn}$$

This theorem will be valid regardless of whether the triangular matrix is an upper triangular matrix or a lower triangular matrix. Also, because a diagonal matrix can also be considered to be a triangular matrix, Theorem 1 is also valid for diagonal matrices.

Here are a couple of quick examples of this.

Example 2 Compute the determinant of each of the following matrices: a 3×3 triangular matrix A, a 2×2 triangular matrix B, and a 4×4 triangular matrix C that has a zero on its main diagonal.

Solution Here are these determinants. In each case the determinant is just the product of the main diagonal entries, so for A it is the product of its three diagonal entries, for B the product of its two, and for C,

$$\det\left(C\right) = 0$$

since one of C's diagonal entries is zero.

There are several methods for finding determinants in general. One of them is the Method of Cofactors. What follows is a very brief overview of this method. For a more detailed discussion of this method see the Method of Cofactors in the Determinants chapter. We'll start with a couple of definitions first.

Definition 4 If A is a square matrix, then the minor of a_{ij}, denoted by M_{ij}, is the determinant of the submatrix that results from removing the i-th row and j-th column of A.

Definition 5 If A is a square matrix, then the cofactor of a_{ij}, denoted by C_{ij}, is the number (−1)^{i+j} M_{ij}.
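Definitions 4 and 5 are mechanical enough to code directly. Below is a sketch with helper names of our own choosing: it deletes row i and column j to form the minor and applies the (−1)^{i+j} sign for the cofactor, leaning on np.linalg.det for the submatrix determinant. Note the indices are 0-based in the code, unlike the 1-based subscripts in the notes.

```python
import numpy as np

def minor(A, i, j):
    """M_ij: determinant of A with row i and column j removed (0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 5.0],
              [1.0, 2.0, 0.0]])
print(cofactor(A, 0, 1))   # -(2*0 - 5*1) = 5
```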

Here is a quick example showing some minor and cofactor computations.

Example 3 For a 4×4 matrix A compute the cofactors C_{12}, C_{24}, and C_{32}.

Solution
In order to compute the cofactors we'll first need the minor associated with each cofactor. Remember that in order to compute the minor we will remove the i-th row and j-th column of A.

So, to compute M_{12} (which we'll need for C_{12}) we'll need to compute the determinant of the 3×3 matrix we get by removing the 1st row and 2nd column of A. We've marked out the row and column that we eliminated, and we'll leave it to you to verify the determinant computation. Once we have the minor, the cofactor is just a matter of attaching the correct sign,

$$C_{12} = \left(-1\right)^{1+2}M_{12} = -M_{12}$$

Let's now move on to the second cofactor. The minor M_{24} comes from deleting the 2nd row and 4th column of A, and since the exponent 2 + 4 = 6 is even, the sign is positive this time,

$$C_{24} = \left(-1\right)^{2+4}M_{24} = M_{24}$$

Here is the work for the final cofactor. The minor M_{32} comes from deleting the 3rd row and 2nd column, and 3 + 2 = 5 is odd,

$$C_{32} = \left(-1\right)^{3+2}M_{32} = -M_{32}$$

Notice that the cofactor for a given entry is really just the minor for the same entry with a "+" or a "−" in front of it. The following table shows whether or not there should be a "+" or a "−" in front of the minor for a given cofactor,

$$\begin{bmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
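The sign pattern above is exactly what drives cofactor expansion: expanding along the first row gives det(A) = a_{11}C_{11} + a_{12}C_{12} + ... + a_{1n}C_{1n}. The recursive sketch below is our own illustration of that expansion (fine for small matrices, but exponentially slow for large ones, which is why row reduction is preferred in practice).

```python
def det(A):
    """Determinant by cofactor expansion along the first row.

    A is a list of rows.  det(A) = sum_j a_1j * (-1)^(1+j) * M_1j,
    where M_1j is the determinant of A with row 1 and column j removed.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(sub)         # (-1)^(1+j) with 0-based j
    return total

print(det([[1, 0, 2], [3, 1, 0], [0, 5, 1]]))   # 31, matching the 3x3 formula
```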
