1.1 SOLUTIONS. Replace R2 by R2 + (2)R1 and obtain: 2. Scale R2 by 1/3: Replace R1 by R1 + ( 5)R2:

Size: px
Start display at page:

Download "1.1 SOLUTIONS. Replace R2 by R2 + (2)R1 and obtain: 2. Scale R2 by 1/3: Replace R1 by R1 + ( 5)R2:"

Transcription

1 SOLUIONS Notes: he key exercises are 7 (or or ), 9, and 5 For brevity, the symbols R, R,, stand for row (or equation ), row (or equation ), and so on Additional notes are at the end of the section x + 5x 7 x 7x Replace R by R + ()R and obtain: x + 5x 7 3x Scale R by /3: x + 5x 7 x Replace R by R + ( 5)R: he solution is (x, x ) ( 8, 3), or simply ( 8, 3) x x x + 4x 4 5x + 7x Scale R by / and obtain: x+ x 5x+ 7x Replace R by R + ( 5)R: x+ x 3x Scale R by /3: x+ x x 7 Replace R by R + ( )R: x x 7 he solution is (x, x ) (, 7), or simply (, 7)

2 CHAPER Linear Equations in Linear Algebra 3 he point of intersection satisfies the system of two linear equations: x + 5x 7 x x 5 7 Replace R by R + ( )R and obtain: Scale R by /7: Replace R by R + ( 5)R: he point of intersection is (x, x ) (4/7, 9/7) x + 5x 7 7x 9 x + 5x 7 x x x 9/7 4/7 9/7 4 he point of intersection satisfies the system of two linear equations: x 5x 3x 7x Replace R by R + ( 3)R and obtain: Scale R by /8: Replace R by R + (5)R: he point of intersection is (x, x ) (9/4, /4) x 5x 8x x 5x x x x /4 9/4 / /7 4/7 9/ /4 9/4 /4 5 he system is already in triangular form he fourth equation is x 4 5, and the other equations do not contain the variable x 4 he next two steps should be to use the variable x 3 in the third equation to eliminate that variable from the first two equations In matrix notation, that means to replace R by its sum with 3 times R3, and then replace R by its sum with 5 times R3 6 One more step will put the system in triangular form Replace R4 by its sum with 3 times R3, which produces After that, the next step is to scale the fourth row by / Ordinarily, the next step would be to interchange R3 and R4, to put a in the third row and third column But in this case, the third row of the augmented matrix corresponds to the equation x + x + x 3, or simply, A system containing this condition has no solution Further row operations are unnecessary once an equation such as is evident he solution set is empty

3 Solutions 3 8 he standard row operations are: ~ 7 ~ ~ he solution set contains one solution: (,, ) 9 he system has already been reduced to triangular form Begin by scaling the fourth row by / and then replacing R3 by R3 + (3)R4: ~ ~ Next, replace R by R + (3)R3 Finally, replace R by R + R: ~ ~ 5 5 he solution set contains one solution: (4, 8, 5, ) he system has already been reduced to triangular form Use the in the fourth row to change the 4 and 3 above it to zeros hat is, replace R by R + (4)R4 and replace R by R + ( 3)R4 For the final step, replace R by R + ()R ~ ~ he solution set contains one solution: ( 3, 5, 6, 3) First, swap R and R hen replace R3 by R3 + ( 3)R Finally, replace R3 by R3 + ()R ~ 4 5 ~ 4 5 ~ he system is inconsistent, because the last row would require that if there were a solution he solution set is empty Replace R by R + ( 3)R and replace R3 by R3 + (4)R Finally, replace R3 by R3 + (3)R ~ 5 4 ~ he system is inconsistent, because the last row would require that 3 if there were a solution he solution set is empty

4 4 CHAPER Linear Equations in Linear Algebra ~ 5 9 ~ 5 ~ ~ 5 ~ 3 he solution is (5, 3, ) ~ 5 7 ~ ~ ~ ~ ~ he solution is (,, ) 5 First, replace R4 by R4 + ( 3)R, then replace R3 by R3 + ()R, and finally replace R4 by R4 + (3)R ~ ~ ~ he resulting triangular system indicates that a solution exists In fact, using the argument from Example, one can see that the solution is unique 6 First replace R4 by R4 + ()R and replace R4 by R4 + ( 3/)R (One could also scale R before adding to R4, but the arithmetic is rather easy keeping R unchanged) Finally, replace R4 by R4 + R3 3 3 ~ ~ ~ he system is now in triangular form and has a solution he next section discusses how to continue with this type of system

5 Solutions 5 7 Row reduce the augmented matrix corresponding to the given system of three equations: ~ 7 5 ~ he system is consistent, and using the argument from Example, there is only one solution So the three lines have only one point in common 8 Row reduce the augmented matrix corresponding to the given system of three equations: ~ ~ he third equation, 5, shows that the system is inconsistent, so the three planes have no point in common 9 h 4 h 4 ~ h 4 Write c for 6 3h If c, that is, if h, then the system has no solution, because cannot equal 4 Otherwise, when h, the system has a solution h 3 h 3 ~ h Write c for 4 + h hen the second equation cx has a solution for every value of c So the system is consistent for all h 3 3 ~ 4 h 8 h+ Write c for h + hen the second equation cx has a solution for every value of c So the system is consistent for all h 3 h 3 h ~ h if h 5/3 he system is consistent if and only if 5 + 3h, that is, if and only 3 a rue See the remarks following the box titled Elementary Row Operations b False A 5 6 matrix has five rows c False he description given applied to a single solution he solution set consists of all possible solutions Only in special cases does the solution set consist of exactly one solution Mark a statement rue only if the statement is always true d rue See the box before Example 4 a rue See the box preceding the subsection titled Existence and Uniqueness Questions b False he definition of row equivalent requires that there exist a sequence of row operations that transforms one matrix into the other c False By definition, an inconsistent system has no solution d rue his definition of equivalent systems is in the second paragraph after equation ()

6 6 CHAPER Linear Equations in Linear Algebra g 4 7 g 4 7 g 3 5 h ~ 3 5 h ~ 3 5 h 5 9 k 3 5 k + g k + g + h Let b denote the number k + g + h hen the third equation represented by the augmented matrix above is b his equation is possible if and only if b is zero So the original system has a solution if and only if k + g + h 6 A basic principle of this section is that row operations do not affect the solution set of a linear system Begin with a simple augmented matrix for which the solution is obviously (,, ), and then perform any elementary row operations to produce other augmented matrices Here are three examples he fact that they are all row equivalent proves that they all have the solution set (,, ) ~ 3 ~ Study the augmented matrix for the given system, replacing R by R + ( c)r: 3 f 3 f ~ c d g d 3 c g cf his shows that shows d 3c must be nonzero, since f and g are arbitrary Otherwise, for some choices of f and g the second row would correspond to an equation of the form b, where b is nonzero hus d 3c 8 Row reduce the augmented matrix for the given system Scale the first row by /a, which is possible since a is nonzero hen replace R by R + ( c)r a b f b/ a f / a b/ a f / a ~ ~ c d g c d g d c ( b / a ) g c ( f / a ) he quantity d c(b/a) must be nonzero, in order for the system to be consistent when the quantity g c( f /a) is nonzero (which can certainly happen) he condition that d c(b/a) can also be written as ad bc, or ad bc 9 Swap R and R; swap R and R 3 Multiply R by /; multiply R by 3 Replace R3 by R3 + ( 4)R; replace R3 by R3 + (4)R 3 Replace R3 by R3 + (3)R; replace R3 by R3 + ( 3)R 33 he first equation was given he others are: ( )/4, or ( )/4, or ( )/4, or

7 Solutions 7 Rearranging, Begin by interchanging R and R4, then create zeros in the first column: ~ ~ Scale R by and R by /4, create zeros in the second column, and replace R4 by R4 + R3: ~ ~ ~ Scale R4 by /, use R4 to create zeros in column 4, and then scale R3 by /4: ~ ~ ~ he last step is to replace R by R + ( )R3: 75 ~ he solution is (, 75, 3, 5) 3 5 Notes: he Study Guide includes a Mathematical Note about statements, If, then his early in the course, students typically use single row operations to reduce a matrix As a result, even the small grid for Exercise 34 leads to about 5 multiplications or additions (not counting operations with zero) his exercise should give students an appreciation for matrix programs such as MALAB Exercise 4 in Section returns to this problem and states the solution in case students have not already solved the system of equations Exercise 3 in Section 5 uses this same type of problem in connection with an LU factorization For instructors who wish to use technology in the course, the Study Guide provides boxed MALAB notes at the ends of many sections Parallel notes for Maple, Mathematica, and the I-83+/86/89 and HP-48G calculators appear in separate appendices at the end of the Study Guide he MALAB box for Section describes how to access the data that is available for all numerical exercises in the text his feature has the ability to save students time if they regularly have their matrix program at hand when studying linear algebra he MALAB box also explains the basic commands replace, swap, and scale hese commands are included in the text data sets, available from the text web site, wwwlaylinalgebracom

8 8 CHAPER Linear Equations in Linear Algebra SOLUIONS Notes: he key exercises are and 3 8 (Students should work at least four or five from Exercises 7 4, in preparation for Section 5) Reduced echelon form: a and b Echelon form: d Not echelon: c Reduced echelon form: a Echelon form: b and d Not echelon: c ~ ~ ~ 3 ~ 3 Pivot cols and ~ 4 8 ~ 3 ~ ~ 3 ~ ~ Pivot cols ,, and * *,, * * 6,, ~ ~ ~ x + 3x 5 Corresponding system of equations: x3 3 he basic variables (corresponding to the pivot positions) are x and x 3 he remaining variable x is free Solve for the basic variables in terms of the free variable he general solution is x 53x x is free x3 3 Note: Exercise 7 is paired with Exercise

9 Solutions ~ ~ ~ x 9 Corresponding system of equations: x 4 he basic variables (corresponding to the pivot positions) are x and x he remaining variable x 3 is free Solve for the basic variables in terms of the free variable In this particular problem, the basic variables do not depend on the value of the free variable x 9 General solution: x 4 x3 is free Note: A common error in Exercise 8 is to assume that x 3 is zero o avoid this, identify the basic variables first Any remaining variables are free (his type of computation will arise in Chapter 5) ~ ~ Corresponding system: x 3 x 5x 4 6x 5 3 x 4+ 5x3 Basic variables: x, x ; free variable: x 3 General solution: x 5+ 6x3 x3 is free ~ ~ Corresponding system: x x 4 x 3 7 x 4+ x Basic variables: x, x 3 ; free variable: x General solution: x is free x /3 /3 9 6 ~ ~ x x + x3 3 3 Corresponding system:

10 CHAPER Linear Equations in Linear Algebra Basic variable: x ; free variables x, x 3 General solution: x x x 3 4 x x3 3 3 is free is free ~ 3 ~ Corresponding system: x 7x + 6x 5 4 x Basic variables: x and x 3 ; free variables: x, x 4 General solution: x x 5+ 7x 6x x is free x3 3+ x4 x4 is free ~ ~ Corresponding system: x 5 x 5 3x 5 4x + 9x Basic variables: x, x, x 4 ; free variables: x 3, x 5 General solution: x x 5+ 3x5 x + 4x5 x3 is free x4 4 9x 5 x5 is free Note: he Study Guide discusses the common mistake x ~

11 Solutions Corresponding system: x 3 + 7x 9 x 6x 3x Basic variables: x, x, x 5 ; free variables: x 3, x 4 General solution: x x 97x3 x + 6x + 3x x3 is free x 4 is free x a he system is consistent, with a unique solution b he system is inconsistent (he rightmost column of the augmented matrix is a pivot column) 6 a he system is consistent, with a unique solution b he system is consistent here are many solutions because x is a free variable 7 3 h 3 h ~ h he system has a solution only if 7 h, that is, if h 7/ ~ 5 h 7 h+ 5 3 If h +5 is zero, that is, if h 5, then the system has no solution, because cannot equal 3 Otherwise, when h 5, the system has a solution h h ~ 4 8 k 84h k 8 a When h and k 8, the augmented column is a pivot column, and the system is inconsistent b When h, the system is consistent and has a unique solution here are no free variables c When h and k 8, the system is consistent and has many solutions 3 3 ~ 3 h k h9 k 6 a When h 9 and k 6, the system is inconsistent, because the augmented column is a pivot column b When h 9, the system is consistent and has a unique solution here are no free variables c When h 9 and k 6, the system is consistent and has many solutions a False See heorem b False See the second paragraph of the section c rue Basic variables are defined after equation (4) d rue his statement is at the beginning of Parametric Descriptions of Solution Sets e False he row shown corresponds to the equation 5x 4, which does not by itself lead to a contradiction So the system might be consistent or it might be inconsistent

12 CHAPER Linear Equations in Linear Algebra a False See the statement preceding heorem Only the reduced echelon form is unique b False See the beginning of the subsection Pivot Positions he pivot positions in a matrix are determined completely by the positions of the leading entries in the nonzero rows of any echelon form obtained from the matrix c rue See the paragraph after Example 3 d False he existence of at least one solution is not related to the presence or absence of free variables If the system is inconsistent, the solution set is empty See the solution of Practice Problem e rue See the paragraph just before Example 4 3 Yes he system is consistent because with three pivots, there must be a pivot in the third (bottom) row of the coefficient matrix he reduced echelon form cannot contain a row of the form [ ] 4 he system is inconsistent because the pivot in column 5 means that there is a row of the form [ ] Since the matrix is the augmented matrix for a system, heorem shows that the system has no solution 5 If the coefficient matrix has a pivot position in every row, then there is a pivot position in the bottom row, and there is no room for a pivot in the augmented column So, the system is consistent, by heorem 6 Since there are three pivots (one in each row), the augmented matrix must reduce to the form a x a b and so x b c x3 c No matter what the values of a, b, and c, the solution exists and is unique 7 If a linear system is consistent, then the solution is unique if and only if every column in the coefficient matrix is a pivot column; otherwise there are infinitely many solutions his statement is true because the free variables correspond to nonpivot columns of the coefficient matrix he columns are all pivot columns if and only if there are no free variables And there are no free variables if and only if the solution is unique, by heorem 8 Every column in the augmented matrix except the rightmost column is a pivot column, and the rightmost column is not a pivot column 9 An underdetermined system always has more variables than equations here cannot be more basic variables than there are equations, so there must be at least one free variable Such a variable may be assigned infinitely many different values If the system is consistent, each different value of a free variable will produce a different solution 3 Example: x + x + x 3 x + x + x Yes, a system of linear equations with more equations than unknowns can be consistent Example (in which x x ): x x + x x 3x + x 5

13 Solutions 3 3 According to the numerical note in Section, when n 3 the reduction to echelon form takes about (3) 3 /3 8, flops, while further reduction to reduced echelon form needs at most (3) 9 flops Of the total flops, the backward phase is about 9/89 48 or about 5% When n 3, the estimates are (3) 3 /3 8,, phase for the reduction to echelon form and (3) 9, flops for the backward phase he fraction associated with the backward phase is about (9 4 ) /(8 6 ) 5, or about 5% 33 For a quadratic polynomial p(t) a + a t + a t to exactly fit the data (, ), (, 5), and (3, 6), the coefficients a, a, a must satisfy the systems of equations given in the text Row reduce the augmented matrix: 4 5 ~ 3 3 ~ 3 3 ~ ~ 6 ~ 6 he polynomial is p(t) 7 + 6t t 34 [M] he system of equations to be solved is: a4 a5 a + a + a + a + a + a a + a + a + a + a + a 9 a + a 4 + a 4 + a 4 + a 4 + a 4 48 a + a 6 + a 6 + a 6 + a 6 + a a + a 8 + a 8 + a 8 + a 8 + a a a a a + 9 he unknowns are a, a,, a 5 Use technology to compute the reduced echelon of the augmented matrix: ~ ~ ~

14 4 CHAPER Linear Equations in Linear Algebra ~ ~ ~ ~ ~ hus p(t) 75t 948t + 665t 3 7t 4 + 6t 5, and p(75) 646 hundred lb Notes: In Exercise 34, if the coefficients are retained to higher accuracy than shown here, then p(75) 648 If a polynomial of lower degree is used, the resulting system of equations is overdetermined he augmented matrix for such a system is the same as the one used to find p, except that at least column 6 is missing When the augmented matrix is row reduced, the sixth row of the augmented matrix will be entirely zero except for a nonzero entry in the augmented column, indicating that no solution exists Exercise 34 requires 5 row operations It should give students an appreciation for higher-level commands such as gauss and bgauss, discussed in Section 4 of the Study Guide he command ref (reduced echelon form) is available, but I recommend postponing that command until Chapter he Study Guide includes a Mathematical Note about the phrase, If and only if, used in heorem 3 SOLUIONS Notes: he key exercises are 4, 7, 5, and 6 A discussion of Exercise 5 will help students understand the notation [a a a 3 ], {a, a, a 3 }, and Span{a, a, a 3 } u 3 + ( 3) 4 + v + + ( ) Using the definitions carefully, u 3 ( )( 3) v ( ) + + ( )( ) + 4, or, more quickly, u v + 4 he intermediate step is often not written u + + v + + ( ) Using the definitions carefully,

15 3 Solutions 5 u 3 3 ( )() 3 + ( 4) v ( ) + + ( )( ) + 4, or, more quickly, u 3 34 v + 4 he intermediate step is often not written 3 x u v u v u + v u v v x v 4 u v x u v v v u u + v x v , x x 6x 3x x + 4x 7 5x 5 6x 3x x 4x 7 +, 5x 5 Usually the intermediate steps are not displayed 6x 3x x 4x 7 + 5x , x 8x x3 3x + 5x + 6x 3, x + 8x + x 6 x x x3 3 3x + 5x 6x 3 Usually the intermediate steps are not displayed x+ 8x + x3 3x+ 5x 6x 3 7 See the figure below Since the grid can be extended in every direction, the figure suggests that every vector in R can be written as a linear combination of u and v o write a vector a as a linear combination of u and v, imagine walking from the origin to a along the grid "streets" and keep track of how many "blocks" you travel in the u-direction and how many in the v-direction a o reach a from the origin, you might travel unit in the u-direction and units in the v-direction (that is, units in the negative v-direction) Hence a u v

16 6 CHAPER Linear Equations in Linear Algebra b o reach b from the origin, travel units in the u-direction and units in the v-direction So b u v Or, use the fact that b is unit in the u-direction from a, so that b a + u (u v) + u u v c he vector c is 5 units from b in the v-direction, so c b 5v (u v) 5v u 35v d he map suggests that you can reach d if you travel 3 units in the u-direction and 4 units in the v-direction If you prefer to stay on the paths displayed on the map, you might travel from the origin to 3v, then move 3 units in the u-direction, and finally move unit in the v-direction So d 3v + 3u v 3u 4v Another solution is d b v + u (u v) v + u 3u 4v c d v a b v u u v x w v y z Figure for Exercises 7 and 8 8 See the figure above Since the grid can be extended in every direction, the figure suggests that every vector in R can be written as a linear combination of u and v w o reach w from the origin, travel units in the u-direction (that is, unit in the negative u-direction) and travel units in the v-direction hus, w ( )u + v, or w v u x o reach x from the origin, travel units in the v-direction and units in the u-direction hus, x u + v Or, use the fact that x is units in the u-direction from w, so that x w u ( u + v) u u + v y he vector y is 5 units from x in the v-direction, so y x + 5v ( u + v) + 5v u + 35v z he map suggests that you can reach z if you travel 4 units in the v-direction and 3 units in the u-direction So z 4v 3u 3u + 4v If you prefer to stay on the paths displayed on the map, you might travel from the origin to u, then 4 units in the v-direction, and finally move unit in the u-direction So z u + 4v u 3u + 4v 9 x + 5x3 4x + 6x x3, x + 3x 8x 3 x 5x x + 5x 3 4x 6x x 3 + x 3x 8x x 6x x 3 + +, x x x3 x 3x 8x Usually, the intermediate calculations are not displayed

17 3 Solutions 7 Note: he Study Guide says, Check with your instructor whether you need to show work on a problem such as Exercise 9 4x + x + 3x 9 3 x 7x x, 3 8x + 6x 5x 5 3 4x x 3x 9 4x+ x + 3x3 9 x 7x x 3 8x+ 6x 5x x 7x x 3 + +, x x x3 8x 6x 5x3 5 Usually, the intermediate calculations are not displayed he question Is b a linear combination of a, a, and a 3? is equivalent to the question Does the vector equation x a + x a + x 3 a 3 b have a solution? he equation 5 x + x + x a a a b 3 has the same solution set as the linear system whose augmented matrix is 5 M Row reduce M until the pivot positions are visible: 5 5 M ~ 4 3 ~ he linear system corresponding to M has a solution, so the vector equation (*) has a solution, and therefore b is a linear combination of a, a, and a 3 he equation 5 x + x 5 + x a a a b 3 has the same solution set as the linear system whose augmented matrix is (*) (*)

18 8 CHAPER Linear Equations in Linear Algebra 5 M Row reduce M until the pivot positions are visible: 5 5 M ~ 5 4 ~ he linear system corresponding to M has no solution, so the vector equation (*) has no solution, and therefore b is not a linear combination of a, a, and a 3 3 Denote the columns of A by a, a, a 3 o determine if b is a linear combination of these columns, use the boxed fact on page 34 Row reduced the augmented matrix until you reach echelon form: ~ he system for this augmented matrix is inconsistent, so b is not a linear combination of the columns of A [a a a 3 b] ~ he linear system corresponding to this 5 9 matrix has a solution, so b is a linear combination of the columns of A 5 Noninteger weights are acceptable, of course, but some simple choices are v + v, and 7 5 v + v, v + v 3 6 v + v 4, v v Some likely choices are v + v, and v + v v + v 3, v + v, v v 5 3 5

19 3 Solutions [a a b] 4 3 ~ 5 5 ~ 3 ~ 3 he vector b is 7 h 3 h+ 8 3 h+ 8 h+ 7 in Span{a, a } when h + 7 is zero, that is, when h 7 3 h 3 h 3 h 8 [v v y] 5 ~ 5 ~ 5 he vector y is in h 7+ h Span{v, v } when 7 + h is zero, that is, when h 7/ 9 By inspection, v (3/)v Any linear combination of v and v is actually just a multiple of v For instance, av + bv av + b(3/)v (a + 3b/)v So Span{v, v } is the set of points on the line through v and Note: Exercises 9 and prepare the way for ideas in Sections 4 and 7 Span{v, v } is a plane in R 3 through the origin, because the neither vector in this problem is a multiple of the other Every vector in the set has as its second entry and so lies in the xz-plane in ordinary 3-space So Span{v, v } is the xz-plane h Let y k hen [u v y] h h ~ k k + h/ his augmented matrix corresponds to a consistent system for all h and k So y is in Span{u, v} for all h and k Construct any 3 4 matrix in echelon form that corresponds to an inconsistent system Perform sufficient row operations on the matrix to eliminate all zero entries in the first three columns 3 a False he alternative notation for a (column) vector is ( 4, 3), using parentheses and commas 5 b False Plot the points to verify this Or, see the statement preceding Example 3 If were on the line through 5 and the origin, then 5 would have to be a multiple of 5, which is not the case c rue See the line displayed just before Example 4 d rue See the box that discusses the matrix in (5) e False he statement is often true, but Span{u, v} is not a plane when v is a multiple of u, or when u is the zero vector 4 a rue See the beginning of the subsection Vectors in R n b rue Use Fig 7 to draw the parallelogram determined by u v and v c False See the first paragraph of the subsection Linear Combinations d rue See the statement that refers to Fig e rue See the paragraph following the definition of Span{v,, v p }

20 CHAPER Linear Equations in Linear Algebra 5 a here are only three vectors in the set {a, a, a 3 }, and b is not one of them b here are infinitely many vectors in W Span{a, a, a 3 } o determine if b is in W, use the method of Exercise ~ 3 ~ a a a b 3 he system for this augmented matrix is consistent, so b is in W c a a + a + a 3 See the discussion in the text following the definition of Span{v,, v p } 6 a [a a a 3 b] ~ ~ ~ Yes, b is a linear combination of the columns of A, that is, b is in W b he third column of A is in W because a 3 a + a + a 3 7 a 5v is the output of 5 days operation of mine # 5 b he total output is x v + x v, so x and x should satisfy xv+ xv c [M] Reduce the augmented matrix ~ Operate mine # for 5 days and mine # for 4 days (his is the exact solution) 8 a he amount of heat produced when the steam plant burns x tons of anthracite and x tons of bituminous coal is 76x + 3x million Btu b he total output produced by x tons of anthracite and x tons of bituminous coal is given by the 76 3 vector x 3 + x c [M] he appropriate values for x and x satisfy x 3 + x 64 3,6 5 36,63 o solve, row reduce the augmented matrix: ~ he steam plant burned 39 tons of anthracite coal and 8 tons of bituminous coal

21 3 Solutions 9 he total mass is So v (v +5v + v 3 + v 4 )/ hat is, v Let m be the total mass of the system By definition, m m ( ) k v m v + + mkvk v+ + vk m m m he second expression displays v as a linear combination of v,, v k, which shows that v is in Span{v,, v k } 8 /3 3 a he center of mass is b he total mass of the new system is 9 grams he three masses added, w, w, and w 3, satisfy the equation ( ) ( ) 8 w w ( w3 ) which can be rearranged to 8 8 ( w+ ) ( w ) ( w3 ) and 8 8 w w w he condition w + w + w 3 6 and the vector equation above combine to produce a system of three equations whose augmented matrix is shown below, along with a sequence of row operations: ~ 8 8 ~ ~ 8 4 ~ 8 4 ~ 5 Answer: Add 35 g at (, ), add 5 g at (8, ), and add g at (, 4) Extra problem: Ignore the mass of the plate, and distribute 6 gm at the three vertices to make the center of mass at (, ) Answer: Place 3 g at (, ), g at (8, ), and g at (, 4) 3 See the parallelograms drawn on Fig 5 from the text Here c, c, c 3, and c 4 are suitable scalars he darker parallelogram shows that b is a linear combination of v and v, that is c v + c v + v 3 b

22 CHAPER Linear Equations in Linear Algebra he larger parallelogram shows that b is a linear combination of v and v 3, that is, c 4 v + v + c 3 v 3 b So the equation x v + x v + x 3 v 3 b has at least two solutions, not just one solution (In fact, the equation has infinitely many solutions) v 3 c 3 v 3 c v b v v c v c 4 v 33 a For j,, n, the jth entry of (u + v) + w is (u j + v j ) + w j By associativity of addition in R, this entry equals u j + (v j + w j ), which is the jth entry of u + (v + w) By definition of equality of vectors, (u + v) + w u + (v + w) b For any scalar c, the jth entry of c(u + v) is c(u j + v j ), and the jth entry of cu + cv is cu j + cv j (by definition of scalar multiplication and vector addition) hese entries are equal, by a distributive law in R So c(u + v) cu + cv 34 a For j,, n, u j + ( )u j ( )u j + u j, by properties of R By vector equality, u + ( )u ( )u + u b For scalars c and d, the jth entries of c(du) and (cd )u are c(du j ) and (cd )u j, respectively hese entries in R are equal, so the vectors c(du) and (cd)u are equal Note: When an exercise in this section involves a vector equation, the corresponding technology data (in the data files on the web) is usually presented as a set of (column) vectors o use MALAB or other technology, a student must first construct an augmented matrix from these vectors he MALAB note in the Study Guide describes how to do this he appendices in the Study Guide give corresponding information about Maple, Mathematica, and the I and HP calculators 4 SOLUIONS Notes: Key exercises are, 7, 8, 3 and 3 Exercises 9, 3, 33, and 34 are harder Exercise 34 anticipates the Invertible Matrix heorem but is not used in the proof of that theorem he matrix-vector product Ax product is not defined because the number of columns () in the matrix 6 does not match the number of entries (3) in the vector 7

23 4 Solutions 3 he matrix-vector product Ax product is not defined because the number of columns () in the 3 matrix 6 5 does not match the number of entries () in the vector Ax , and ( 3) 3 Ax 4 3 ( 4) ( 3) ( 3) ( 3) Ax , and ( 4) 7 Ax On the left side of the matrix equation, use the entries in the vector x as the weights in a linear combination of the columns of the matrix A: On the left side of the matrix equation, use the entries in the vector x as the weights in a linear combination of the columns of the matrix A: he left side of the equation is a linear combination of three vectors Write the matrix A whose columns are those three vectors, and create a variable vector x with three entries: A and x x x 7 5 x3 4 7 x hus the equation Ax b is x x3

24 4 CHAPER Linear Equations in Linear Algebra For your information: he unique solution of this equation is (5, 7, 3) Finding the solution by hand would be time-consuming Note: he skill of writing a vector equation as a matrix equation will be important for both theory and application throughout the text See also Exercises 7 and 8 8 he left side of the equation is a linear combination of four vectors Write the matrix A whose columns are those four vectors, and create a variable vector with four entries: z A z , and z hen the equation Az b z 3 z4 z z 4 is 5 4 z 3 3 z 4 For your information: One solution is (7, 3, 3, ) he general solution is z z 3 5z 4, z 5 5z 3 5z 4, with z 3 and z 4 free 9 he system has the same solution set as the vector equation x x x and this equation has the same solution set as the matrix equation x x 4 x3 he system has the same solution set as the vector equation 8 4 x 5 x and this equation has the same solution set as the matrix equation 8 4 x 5 4 x 3 o solve Ax b, row reduce the augmented matrix [a a a 3 b] for the corresponding linear system: ~ 5 ~ 5 ~ 3 ~

25 4 Solutions 5 he solution is x x 3 As a vector, the solution is x x3 x x 3 x 3 o solve Ax b, row reduce the augmented matrix [a a a 3 b] for the corresponding linear system: 3 ~ 5 5 ~ 5 5 ~ /5 ~ 5 4 ~ 4/5 ~ 4/5 x 3/5 x 3/5 he solution is x 4/5 As a vector, the solution is x x 4/5 x3 x 3 3 he vector u is in the plane spanned by the columns of A if and only if u is a linear combination of the columns of A his happens if and only if the equation Ax u has a solution (See the box preceding Example 3 in Section 4) o study this equation, reduce the augmented matrix [A u] ~ 6 4 ~ 8 ~ he equation Ax u has a solution, so u is in the plane spanned by the columns of A For your information: he unique solution of Ax u is (5/, 3/) 4 Reduce the augmented matrix [A u] to echelon form: ~ 3 ~ 3 ~ he equation Ax u has no solution, so u is not in the subset spanned by the columns of A b 5 he augmented matrix for Ax b is 6 3 b, which is row equivalent to b b 3b + his shows that the equation Ax b is not consistent when 3b + b is nonzero he set of b for which the equation is consistent is a line through the origin the set of all points (b, b ) satisfying b 3b 3 4 b 6 Row reduce the augmented matrix [A b]: A 3 6, b b 5 8 b3 3 4 b 3 4 b 3 6 b ~ 7 6 b 3b b3 4 b3 5b

26 6 CHAPER Linear Equations in Linear Algebra 3 4 b 3 4 b ~ b b b b + + b3 5b+ ( b + 3 b) b+ b + b3 he equation Ax b is consistent if and only if b + b + b 3 he set of such b is a plane through the origin in R 3 7 Row reduction shows that only three rows of A contain a pivot position: A ~ ~ ~ Because not every row of A contains a pivot position, heorem 4 in Section 4 shows that the equation Ax b does not have a solution for each b in R 4 8 Row reduction shows that only three rows of B contain a pivot position: B ~ ~ ~ Because not every row of B contains a pivot position, heorem 4 in Section 4 shows that the equation Bx y does not have a solution for each y in R 4 9 he work in Exercise 7 shows that statement (d) in heorem 4 is false So all four statements in heorem 4 are false hus, not all vectors in R 4 can be written as a linear combination of the columns of A Also, the columns of A do not span R 4 he work in Exercise 8 shows that statement (d) in heorem 4 is false So all four statements in heorem 4 are false hus, not all vectors in R 4 can be written as a linear combination of the columns of B he columns of B certainly do not span R 3, because each column of B is in R 4, not R 3 (his question was asked to alert students to a fairly common misconception among students who are just learning about spanning) Row reduce the matrix [v v v 3 ] to determine whether it has a pivot in each row ~ ~ ~ he matrix [v v v 3 ] does not have a pivot in each row, so the columns of the matrix do not span R 4, by heorem 4 hat is, {v, v, v 3 } does not span R 4 Note: Some students may realize that row operations are not needed, and thereby discover the principle covered in Exercises 3 and 3

27 4 Solutions 7 Row reduce the matrix [v v v 3 ] to determine whether it has a pivot in each row ~ he matrix [v v v 3 ] has a pivot in each row, so the columns of the matrix span R 4, by heorem 4 hat is, {v, v, v 3 } spans R 4 3 a False See the paragraph following equation (3) he text calls Ax b a matrix equation b rue See the box before Example 3 c False See the warning following heorem 4 d rue See Example 4 e rue See parts (c) and (a) in heorem 4 f rue In heorem 4, statement (a) is false if and only if statement (d) is also false 4 a rue his statement is in heorem 3 However, the statement is true without any "proof" because, by definition, Ax is simply a notation for x a + + x n a n, where a,, a n are the columns of A b rue See Example c rue, by heorem 3 d rue See the box before Example Saying that b is not in the set spanned by the columns of A is the same a saying that b is not a linear combination of the columns of A e False See the warning that follows heorem 4 f rue In heorem 4, statement (c) is false if and only if statement (a) is also false 5 By definition, the matrix-vector product on the left is a linear combination of the columns of the matrix, in this case using weights 3,, and So c 3, c, and c 3 6 he equation in x and x involves the vectors u, v, and w, and it may be viewed as x [ u v] x w By definition of a matrix-vector product, x u + x v w he stated fact that 3u 5v w can be rewritten as 3u 5v w So, a solution is x 3, x 5 7 Place the vectors q, q, and q 3 into the columns of a matrix, say, Q and place the weights x, x, and x 3 into a vector, say, x hen the vector equation becomes x Qx v, where Q [q q q 3 ] and x x x3 Note: If your answer is the equation Ax b, you need to specify what A and b are 8 he matrix equation can be written as c v + c v + c 3 v 3 + c 4 v 4 + c 5 v 5 v 6, where c 3, c, c 3 4, c 4, c 5, and v ,, 3, 4, 5, v v v v v

28 8 CHAPER Linear Equations in Linear Algebra 9 Start with any 3 3 matrix B in echelon form that has three pivot positions Perform a row operation (a row interchange or a row replacement) that creates a matrix A that is not in echelon form hen A has the desired property he justification is given by row reducing A to B, in order to display the pivot positions Since A has a pivot position in every row, the columns of A span R 3, by heorem 4 3 Start with any nonzero 3 3 matrix B in echelon form that has fewer than three pivot positions Perform a row operation that creates a matrix A that is not in echelon form hen A has the desired property Since A does not have a pivot position in every row, the columns of A do not span R 3, by heorem 4 3 A 3 matrix has three rows and two columns With only two columns, A can have at most two pivot columns, and so A has at most two pivot positions, which is not enough to fill all three rows By heorem 4, the equation Ax b cannot be consistent for all b in R 3 Generally, if A is an m n matrix with m > n, then A can have at most n pivot positions, which is not enough to fill all m rows hus, the equation Ax b cannot be consistent for all b in R 3 3 A set of three vectors in cannot span R 4 Reason: the matrix A whose columns are these three vectors has four rows o have a pivot in each row, A would have to have at least four columns (one for each pivot), which is not the case Since A does not have a pivot in every row, its columns do not span R 4, by heorem 4 In general, a set of n vectors in R m cannot span R m when n is less than m 33 If the equation Ax b has a unique solution, then the associated system of equations does not have any free variables If every variable is a basic variable, then each column of A is a pivot column So the ª º reduced echelon form of A must be ¼ Note: Exercises 33 and 34 are difficult in the context of this section because the focus in Section 4 is on existence of solutions, not uniqueness However, these exercises serve to review ideas from Section, and they anticipate ideas that will come later 34 If the equation Ax b has a unique solution, then the associated system of equations does not have any free variables If every variable is a basic variable, then each column of A is a pivot column So the ª º reduced echelon form of A must be Now it is clear that A has a pivot position in each row ¼ By heorem 4, the columns of A span R 3 35 Given Ax y and Ax y, you are asked to show that the equation Ax w has a solution, where w y + y Observe that w Ax + Ax and use heorem 5(a) with x and x in place of u and v, respectively hat is, w Ax + Ax A(x + x ) So the vector x x + x is a solution of w Ax 36 Suppose that y and z satisfy Ay z hen 4z 4Ay By heorem 5(b), 4Ay A(4y) So 4z A(4y), which shows that 4y is a solution of Ax 4z hus, the equation Ax 4z is consistent ª 7 5 8º ª 7 5 8º ª 7 5 8º /7 3/7 3/7 /7 3/7 3/7 37 [M] ~ ~ / 7 6/ 7 / 7 5/ 89/ ¼ 3 3 ¼ ¼

29 4 Solutions or, approximately, to three significant figures he original matrix does not have a pivot in every row, so its columns do not span R 4, by heorem 4 38 [M] /5 /5 9/5 /5 /5 9/5 ~ ~ /5 9/5 8/ / 5 44/5 6/5 * * MALAB shows starred entries for numbers that are essentially zero (to many decimal places) So, with pivots only in the first three rows, the original matrix has columns that do not span R 4, by heorem 4 39 [M] 4 [M] /4 /4 /4 3/4 ~ / 3/ 3/ 3/ / 3 9/ 3 3/ /4 /4 /4 3/4 5/4 /4 /4 3/4 ~ ~ 8/ 5 4/5 /5 8/5 4/5 /5 he original matrix has a pivot in every row, so its columns span R 4, by heorem /8 /4 /8 9/8 ~ /8 5/ 4 5/8 9/ /8 5/4 43/8 95/ /8 /4 /8 9/8 3/8 /4 /8 9/8 ~ ~ 6 6 he original matrix has a pivot in every row, so its columns span R 4, by heorem 4 4 [M] Examine the calculations in Exercise 39 Notice that the fourth column of the original matrix, say A, is not a pivot column Let A o be the matrix formed by deleting column 4 of A, let B be the echelon form obtained from A, and let B o be the matrix obtained by deleting column 4 of B he sequence of row operations that reduces A to B also reduces A o to B o Since B o is in echelon form, it shows that A o has a pivot position in each row herefore, the columns of A o span R 4 It is possible to delete column 3 of A instead of column 4 In this case, the fourth column of A becomes a pivot column of A o, as you can see by looking at what happens when column 3 of B is deleted For later work, it is desirable to delete a nonpivot column

30 3 CHAPER Linear Equations in Linear Algebra Note: Exercises 4 and 4 help to prepare for later work on the column space of a matrix (See Section 9 or 46) he Study Guide points out that these exercises depend on the following idea, not explicitly mentioned in the text: when a row operation is performed on a matrix A, the calculations for each new entry depend only on the other entries in the same column If a column of A is removed, forming a new matrix, the absence of this column has no affect on any row-operation calculations for entries in the other columns of A (he absence of a column might affect the particular choice of row operations performed for some purpose, but that is not being considered here) 4 [M] Examine the calculations in Exercise 4 he third column of the original matrix, say A, is not a pivot column Let A o be the matrix formed by deleting column 3 of A, let B be the echelon form obtained from A, and let B o be the matrix obtained by deleting column 3 of B he sequence of row operations that reduces A to B also reduces A o to B o Since B o is in echelon form, it shows that A o has a pivot position in each row herefore, the columns of A o span R 4 It is possible to delete column of A instead of column 3 (See the remark for Exercise 4) However, only one column can be deleted If two or more columns were deleted from A, the resulting matrix would have fewer than four columns, so it would have fewer than four pivot positions In such a case, not every row could contain a pivot position, and the columns of the matrix would not span R 4, by heorem 4 Notes: At the end of Section 4, the Study Guide gives students a method for learning and mastering linear algebra concepts Specific directions are given for constructing a review sheet that connects the basic definition of span with related ideas: equivalent descriptions, theorems, geometric interpretations, special cases, algorithms, and typical computations I require my students to prepare such a sheet that reflects their choices of material connected with span, and I make comments on their sheets to help them refine their review Later, the students use these sheets when studying for exams he MALAB box for Section 4 introduces two useful commands gauss and bgauss that allow a student to speed up row reduction while still visualizing all the steps involved he command B gauss(a,) causes MALAB to find the left-most nonzero entry in row of matrix A, and use that entry as a pivot to create zeros in the entries below, using row replacement operations he result is a matrix that a student might write next to A as the first stage of row reduction, since there is no need to write a new matrix after each separate row replacement I use the gauss command frequently in lectures to obtain an echelon form that provides data for solving various problems For instance, if a matrix has 5 rows, and if row swaps are not needed, the following commands produce an echelon form of A: B gauss(a,), B gauss(b,), B gauss(b,3), B gauss(b,4) If an interchange is required, I can insert a command such as B swap(b,,5) he command bgauss uses the left-most nonzero entry in a row to produce zeros above that entry his command, together with scale, can change an echelon form into reduced echelon form he use of gauss and bgauss creates an environment in which students use their computer program the same way they work a problem by hand on an exam Unless you are able to conduct your exams in a computer laboratory, it may be unwise to give students too early the power to 
obtain reduced echelon forms with one command they may have difficulty performing row reduction by hand during an exam Instructors whose students use a graphic calculator in class each day do not face this problem In such a case, you may wish to introduce rref earlier in the course than Chapter 4 (or Section 8), which is where I finally allow students to use that command 5 SOLUIONS Notes: he geometry helps students understand Span{u, v}, in preparation for later discussions of subspaces he parametric vector form of a solution set will be used throughout the text Figure 6 will appear again in Sections 9 and 48

31 5 Solutions 3 For solving homogeneous systems, the text recommends working with the augmented matrix, although no calculations take place in the augmented column See the Study Guide comments on Exercise 7 that illustrate two common student errors All students need the practice of Exercises 4 (Assign all odd, all even, or a mixture If you do not assign Exercise 7, be sure to assign both 8 and ) Otherwise, a few students may be unable later to find a basis for a null space or an eigenspace Exercises 9 34 are important Exercises 33 and 34 help students later understand how solutions of Ax encode linear dependence relations among the columns of A Exercises are more challenging Exercise 37 will help students avoid the standard mistake of forgetting that heorem 6 applies only to a consistent equation Ax b Reduce the augmented matrix to echelon form and circle the pivot positions If a column of the coefficient matrix is not a pivot column, the corresponding variable is free and the system of equations has a nontrivial solution Otherwise, the system has only the trivial solution ~ 9 ~ he variable x 3 is free, so the system has a nontrivial solution ~ 5 ~ here is no free variable; the system has only the trivial solution ~ he variable x 3 is free; the system has nontrivial solutions An alert student will realize that row operations are unnecessary With only two equations, there can be at most two basic variables One variable must be free Refer to Exercise 3 in Section ~ ~ x 3 is a free variable; the system has nontrivial solutions As in Exercise 3, row operations are unnecessary ~ 3 6 ~ 3 6 ~ x 5x 3 x + x he variable x 3 is free, x 5x 3, and x x 3 3 x 5x3 5 In parametric vector form, the general solution is x x x 3 x 3 x3 x3

32 3 CHAPER Linear Equations in Linear Algebra ~ 3 ~ 3 ~ x + 4x 3 x 3x he variable x 3 is free, x 4x 3, and x 3x 3 3 In parametric vector form, the general solution is x 4x3 4 x x 3x x x3 x ~ x + 9x3 8x4 x 4x3 + 5x4 he basic variables are x and x, with x 3 and x 4 free Next, x 9x 3 + 8x 4, and x 4x 3 5x 4 he general solution is x 9x3 + 8x4 9x3 8x4 9 8 x 4x3 5x 4 4x 3 5x x + x 3 + x 4 x3 x3 x3 x4 x4 x ~ 6 6 x 5x3 7x4 x + x3 6x4 he basic variables are x and x, with x 3 and x 4 free Next, x 5x 3 + 7x 4 and x x 3 + 6x 4 he general solution in parametric vector form is x 5x3 + 7x4 5x3 7x4 5 7 x x3 + 6x 4 x 3 6x 4 6 x + x 3 + x 4 x3 x3 x3 x4 x4 x x 3x + x3 ~ ~ he solution is x 3x x 3, with x and x 3 free In parametric vector form, 3x x3 3x x3 3 x x x + x + x 3 x3 x x 3x 4x4 ~ 6 8 he only basic variable is x, so x, x 3, and x 4 are free (Note that x 3 is not zero) Also, x 3x + 4x 4 he general solution is

33 5 Solutions 33 x 3x + 4x 3x 4x 3 4 x + + x + x 3 + x x x x x3 x3 x3 x4 x4 x ~ ~ x 4x + 5x6 x3 x6 he basic variables are x, x 3, and x 5 he remaining variables are free x5 4x6 In particular, x 4 is free (and not zero as some may assume) he solution is x 4x 5x 6, x 3 x 6, x 5 4x 6, with x, x 4, and x 6 free In parametric vector form, x 4x 5x6 4x 5x6 4 5 x x x x3 x6 x6 x + + x + x4 + x6 x4 x4 x4 x 5 4x 6 4x 6 4 x6 x6 x6 u v w Note: he Study Guide discusses two mistakes that students often make on this type of problem ~ ~ x + 5x + 8x4 + x5 x3 7x4 + 4x5 x6 he basic variables are x, x 3, and x 6 ; the free variables are x, x 4, and x 5 he general solution is x 5x 8x 4 x 5, x 3 7x 4 4x 5, and x 6 In parametric vector form, the solution is

34 34 CHAPER Linear Equations in Linear Algebra x 5x 8x x 5x 8x x x 5 x 5 x 5 x x x x x3 7x4 4x5 7x4 4x5 x + + x + x4 + x5 x4 x4 x4 3 o write the general solution in parametric vector form, pull out the constant terms that do not involve the free variable: x 5+ 4x3 5 4x3 5 4 x x 7x 3 7x 3 x p+ x3q x3 x3 x3 5 Geometrically, the solution set is the line through in the direction of p q o write the general solution in parametric vector form, pull out the constant terms that do not involve the free variable: x 3x4 3x4 3 x 8 + x 4 8 x 4 8 x + + x 4 p+ x4q x3 5x4 5x4 5 x4 x4 x4 he solution set is the line through p in the direction of q 5 Row reduce the augmented matrix for the system: ~ ~ x 5x3 ~ ~ x + x3 hus x + 5x 3, x x 3, and x 3 is free In parametric vector form, x + 5x3 5x3 5 x x x 3 x 3 x x3 x3 x3 p q

35 5 Solutions 35 he solution set is the line through system in Exercise 5, parallel to the line that is the solution set of the homogeneous 6 Row reduce the augmented matrix for the system: ~ 3 3 ~ 3 3 ~ x + 4x3 5 x 3x3 3 hus x 5 4x 3, x 3 + 3x 3, and x 3 is free In parametric vector form, x 54x3 5 4x3 5 4 x x 3 3x 3 3 3x 3 3 x x3 x3 x3 he solution set is the line through system in Exercise 6 5 3, parallel to the line that is the solution set of the homogeneous 7 Solve x + 9x 4x 3 for the basic variable: x 9x + 4x 3, with x and x 3 free In vector form, the solution is x 9x + 4x3 9x 4x3 9 4 x x x x x x x3 x3 x3 he solution of x + 9x 4x 3 is x 9x + 4x 3, with x and x 3 free In vector form, x 9x + 4x3 9x 4x3 9 4 x x x x x x x u + x 3 v x3 x3 x3 he solution set of the homogeneous equation is the plane through the origin in R 3 spanned by u and v he solution set of the nonhomogeneous equation is parallel to this plane and passes through the point p 8 Solve x 3x + 5x 3 4 for the basic variable: x 4 + 3x 5x 3, with x and x 3 free In vector form, the solution is x 4+ 3x 5x3 4 3x 5x x x x x x x x3 x3 x3

36 36 CHAPER Linear Equations in Linear Algebra he solution of x 3x + 5x 3 is x 3x 5x 3, with x and x 3 free In vector form, x 3x 5x3 3x 5x3 3 5 x x x x x x x u + x 3 v x3 x3 x3 he solution set of the homogeneous equation is the plane through the origin in R 3 spanned by u and v he solution set of the nonhomogeneous equation is parallel to this plane and passes through the 4 point p 9 he line through a parallel to b can be written as x a + t b, where t represents a parameter: x 5 x 5t x + t x 3, or x 3t he line through a parallel to b can be written as x a + tb, where t represents a parameter: x 3 7 x 37t x + t x 4 8, or x 4+ 8t 3 he line through p and q is parallel to q p So, given p and 5 q, form q 3 5 p ( 5) 6, and write the line as x p + t(q p) 5 + t he line through p and q is parallel to q p So, given p and 3 4 q, form q ( 6) 6 p 43 7, and write the line as x p + t(q p) t 3 7 Note: Exercises and prepare for Exercise 7 in Section 8 3 a rue See the first paragraph of the subsection titled Homogeneous Linear Systems b False he equation Ax gives an implicit description of its solution set See the subsection entitled Parametric Vector Form c False he equation Ax always has the trivial solution he box before Example uses the word nontrivial instead of trivial d False he line goes through p parallel to v See the paragraph that precedes Fig 5 e False he solution set could be empty! he statement (from heorem 6) is true only when there exists a vector p such that Ap b 4 a False A nontrivial solution of Ax is any nonzero x that satisfies the equation See the sentence before Example b rue See Example and the paragraph following it

37 5 Solutions 37 c rue If the zero vector is a solution, then b Ax A d rue See the paragraph following Example 3 e False he statement is true only when the solution set of Ax is nonempty heorem 6 applies only to a consistent system 5 Suppose p satisfies Ax b hen Ap b heorem 6 says that the solution set of Ax b equals the set S {w : w p + v h for some v h such that Av h } here are two things to prove: (a) every vector in S satisfies Ax b, (b) every vector that satisfies Ax b is in S a Let w have the form w p + v h, where Av h hen Aw A(p + v h ) Ap + Av h By heorem 5(a) in section 4 b + b So every vector of the form p + v h satisfies Ax b b Now let w be any solution of Ax b, and set v h w p hen Av h A(w p) Aw Ap b b So v h satisfies Ax hus every solution of Ax b has the form w p + v h 6 (Geometric argument using heorem 6) Since Ax b is consistent, its solution set is obtained by translating the solution set of Ax, by heorem 6 So the solution set of Ax b is a single vector if and only if the solution set of Ax is a single vector, and that happens if and only if Ax has only the trivial solution (Proof using free variables) If Ax b has a solution, then the solution is unique if and only if there are no free variables in the corresponding system of equations, that is, if and only if every column of A is a pivot column his happens if and only if the equation Ax has only the trivial solution 7 When A is the 3 3 zero matrix, every x in R 3 satisfies Ax So the solution set is all vectors in R 3 8 No If the solution set of Ax b contained the origin, then would satisfy A b, which is not true since b is not the zero vector 9 a When A is a 3 3 matrix with three pivot positions, the equation Ax has no free variables and hence has no nontrivial solution b With three pivot positions, A has a pivot position in each of its three rows By heorem 4 in Section 4, the equation Ax b has a solution for every possible b he term "possible" in the exercise means that the only vectors considered in this case are those in R 3, because A has three rows 3 a When A is a 3 3 matrix with two pivot positions, the equation Ax has two basic variables and one free variable So Ax has a nontrivial solution b With only two pivot positions, A cannot have a pivot in every row, so by heorem 4 in Section 4, the equation Ax b cannot have a solution for every possible b (in R 3 ) 3 a When A is a 3 matrix with two pivot positions, each column is a pivot column So the equation Ax has no free variables and hence no nontrivial solution b With two pivot positions and three rows, A cannot have a pivot in every row So the equation Ax b cannot have a solution for every possible b (in R 3 ), by heorem 4 in Section 4 3 a When A is a 4 matrix with two pivot positions, the equation Ax has two basic variables and two free variables So Ax has a nontrivial solution b With two pivot positions and only two rows, A has a pivot position in every row By heorem 4 in Section 4, the equation Ax b has a solution for every possible b (in R )

38 38 CHAPER Linear Equations in Linear Algebra 6 33 Look at x 7 x + and notice that the second column is 3 times the first So suitable values for x and x would be 3 and respectively (Another pair would be 6 and, etc) hus x satisfies Ax 34 Inspect how the columns a and a of A are related he second column is 3/ times the first Put 3 another way, 3a + a hus satisfies Ax Note: Exercises 33 and 34 set the stage for the concept of linear dependence 35 Look for A [a a a 3 ] such that a + a + a 3 hat is, construct A so that each row sum (the sum of the entries in a row) is zero 36 Look for A [a a a 3 ] such that a a + a 3 hat is, construct A so that the sum of the first and third columns is twice the second column 37 Since the solution set of Ax contains the point (4,), the vector x (4,) satisfies Ax Write this equation as a vector equation, using a and a for the columns of A: 4 a + a hen a 4a So choose any nonzero vector for the first column of A and multiply that column by 4 4 to get the second column of A For example, set A 4 Finally, the only way the solution set of Ax b could not be parallel to the line through (,4) and the origin is for the solution set of Ax b to be empty his does not contradict heorem 6, because that theorem applies only to the case when the equation Ax b has a nonempty solution set For b, take any vector that is not a multiple of the columns of A Note: In the Study Guide, a Checkpoint for Section 5 will help students with Exercise No If Ax y has no solution, then A cannot have a pivot in each row Since A is 3 3, it has at most two pivot positions So the equation Ax z for any z has at most two basic variables and at least one free variable hus, the solution set for Ax z is either empty or has infinitely many elements 39 If u satisfies Ax, then Au For any scalar c, heorem 5(b) in Section 4 shows that A(cu) cau c 4 Suppose Au and Av hen, since A(u + v) Au + Av by heorem 5(a) in Section 4, A(u + v) Au + Av + Now, let c and d be scalars Using both parts of heorem 5, A(cu + dv) A(cu) + A(dv) cau + dav c + d Note: he MALAB box in the Study Guide introduces the zeros command, in order to augment a matrix with a column of zeros

39 6 Solutions 39 6 SOLUIONS Fill in the exchange table one column at a time he entries in a column describe where a sector's output goes he decimal fractions in each column sum to Distribution of Output From: Goods Services Purchased by: output input 7 Goods 8 3 Services Denote the total annual output (in dollars) of the sectors by p G and p S From the first row, the total input to the Goods sector is p G + 7 p S he Goods sector must pay for that So the equilibrium prices must satisfy income expenses pg pg + 7pS From the second row, the input (that is, the expense) of the Services sector is 8 p G + 3 p S he equilibrium equation for the Services sector is income expenses ps 8 pg + 3pS Move all variables to the left side and combine like terms: 8 pg 7 ps 8 pg + 7 ps Row reduce the augmented matrix: ~ ~ 8 7 he general solution is p G 875 p S, with p S free One equilibrium solution is p S and p G 875 If one uses fractions instead of decimals in the calculations, the general solution would be written p G (7/8) p S, and a natural choice of prices might be p S 8 and p G 7 Only the ratio of the prices is important: p G 875 p S he economic equilibrium is unaffected by a proportional change in prices ake some other value for p S, say million dollars he other equilibrium prices are then p C 88 million, p E 7 million Any constant nonnegative multiple of these prices is a set of equilibrium prices, because the solution set of the system of equations consists of all multiples of one vector Changing the unit of measurement to, say, European euros has the same effect as multiplying all equilibrium prices by a constant he ratios of the prices remain the same, no matter what currency is used 3 a Fill in the exchange table one column at a time he entries in a column describe where a sector s output goes he decimal fractions in each column sum to

40 4 CHAPER Linear Equations in Linear Algebra Distribution of Output From: Purchased Chemicals Fuels Machinery by: output input 8 4 Chemicals 3 4 Fuels 5 Machinery b Denote the total annual output (in dollars) of the sectors by p C, p F, and p M From the first row of the table, the total input to the Chemical & Metals sector is p C + 8 p F + 4 p M So the equillibrium prices must satisfy income expenses pc pc + 8 pf + 4pM From the second and third rows of the table, the income/expense requirements for the Fuels & Power sector and the Machinery sector are, respectively, pf 3 pc + pf + 4 pm pm 5 pc + pf + pm Move all variables to the left side and combine like terms: 8 pc 8 pf 4 pm 3 pc + 9 pf 4 pm 5 p p + 8p C F M c [M] You can obtain the reduced echelon form with a matrix program Actually, hand calculations are not too messy o simplify the calculations, first scale each row of the augmented matrix by, then continue as usual ~ ~ he number of decimal ~ 97 ~ 97 places displayed is somewhatarbitrary he general solution is p C 47 p M, p F 97 p M, with p M free If p M is assigned the value, then p C 47 and p F 97 Note that only the ratios of the prices are determined his makes sense, for if the were converted from, say, dollars to yen or Euros, the inputs and outputs of each sector would still balance he economic equilibrium is not affected by a proportional change in prices

41 6 Solutions 4 4 a Fill in the exchange table one column at a time he entries in each column must sum to Distribution of Output From : Agric Energy Manuf ransp Purchased by : output input Agric 5 Energy Manuf ransp b Denote the total annual output of the sectors by p A, p E, p M, and p, respectively From the first row of the table, the total input to Agriculture is 65p A + 3p E + 3p M + p So the equilibrium prices must satisfy income expenses pa 65 pa + 3 pe + 3 pm + p From the second, third, and fourth rows of the table, the equilibrium equations are pe pa + pe + 5 pm + p pm 5 pa + 35 pe + 5 pm + 3 p p 5 p + 4 p + 4 p E M Move all variables to the left side and combine like terms: 35 pa 3 pe 3 pm p pa + 9 pe 5 pm p 5 pa 35 pe + 85 pm 3 p 5 p 4 p + 6 p E M Use gauss, bgauss, and scale operations to reduce the augmented matrix to reduced echelon form ~ ~ Scale the first row and solve for the basic variables in terms of the free variable p, and obtain p A 3p, p E 53p, and p M 7p he data probably justifies at most two significant figures, so take p and round off the other prices to p A, p E 53, and p M 5 he following vectors list the numbers of atoms of boron (B), sulfur (S), hydrogen (H), and oxygen (O): boron 3 sulfur BS: 3, HO:, HBO: 3 3, HS: 3 hydrogen 3 oxygen he coefficients in the equation x B S 3 + x H x 3 H 3 BO 3 + x 4 H S satisfy

42 4 CHAPER Linear Equations in Linear Algebra 3 x + x x 3 + x Move the right terms to the left side (changing the sign of each entry in the third and fourth vectors) and row reduce the augmented matrix of the homogeneous system: 3 3/ 3 3 ~ ~ ~ 3 3 3/ 3/ /3 /3 3 ~ ~ ~ /3 /3 /3 3 he general solution is x (/3) x 4, x x 4, x 3 (/3) x 4, with x 4 free ake x 4 3 hen x, x 6, and x 3 he balanced equation is B S 3 + 6H H 3 BO 3 + 3H S 6 he following vectors list the numbers of atoms of sodium (Na), phosphorus (P), oxygen (O), barium (Ba), and nitrogen(n): 3 sodium phosphorus Na3PO 4: 4, Ba(NO 3) : 6, Ba 3(PO 4) : 8, NaNO 3: 3 oxygen 3 barium nitrogen he coefficients in the equation x Na 3 PO 4 + x Ba(NO 3 ) x 3 Ba 3 (PO 4 ) + x 4 NaNO 3 satisfy 3 x4+ x6 x38+ x43 3 Move the right terms to the left side (changing the sign of each entry in the third and fourth vectors) and row reduce the augmented matrix of the homogeneous system: ~ ~ 6 3 ~

43 6 Solutions 43 /3 3 3 / ~ 8 3 ~ /6 ~ /6 6 6 he general solution is x (/3)x 4, x (/)x 4, x 3 (/6)x 4, with x 4 free ake x 4 6 hen x, x 3, and x 3 he balanced equation is Na 3 PO 4 + 3Ba(NO 3 ) Ba 3 (PO 4 ) + 6NaNO 3 7 he following vectors list the numbers of atoms of sodium (Na), hydrogen (H), carbon (C), and oxygen (O): 3 sodium 8 5 hydrogen NaHCO 3:, H3C6H5O 7:, Na3C6H5O 7:, HO :, CO : 6 6 carbon oxygen he order of the various atoms is not important he list here was selected by writing the elements in the order in which they first appear in the chemical equation, reading left to right: x NaHCO 3 + x H 3 C 6 H 5 O 7 x 3 Na 3 C 6 H 5 O 7 + x 4 H O + x 5 CO he coefficients x,, x 5 satisfy the vector equation x + x x 3 + x 4 + x Move all the terms to the left side (changing the sign of each entry in the third, fourth, and fifth vectors) and reduce the augmented matrix: /3 ~ ~ 6 6 / he general solution is x x 5, x (/3)x 5, x 3 (/3)x 5, x 4 x 5, and x 5 is free ake x 5 3 hen x x 4 3, and x x 3 he balanced equation is 3NaHCO 3 + H 3 C 6 H 5 O 7 Na 3 C 6 H 5 O 7 + 3H O + 3CO 8 he following vectors list the numbers of atoms of potassium (K), manganese (Mn), oxygen (O), sulfur (S), and hydrogen (H): potassium manganese KMnO 4: 4, MnSO 4: 4, HO:, MnO :, KSO 4: 4, HSO 4: 4 oxygen sulfur hydrogen he coefficients in the chemical equation

44 44 CHAPER Linear Equations in Linear Algebra x KMnO 4 + x MnSO 4 + x 3 H O x 4 MnO + x 5 K SO 4 + x 6 H SO 4 satisfy the vector equation x4+ x 4+ x3 x4+ x54+ x6 4 Move the terms to the left side (changing the sign of each entry in the last three vectors) and reduce the augmented matrix: ~ 5 5 he general solution is x x 6, x (5)x 6, x 3 x 6, x 4 (5)x 6, x 5 5x 6, and x 6 is free ake x 6 hen x x 3, and x 3, x 4 5, and x 5 he balanced equation is KMnO 4 + 3MnSO 4 + H O 5MnO + K SO 4 + H SO 4 9 [M] Set up vectors that list the atoms per molecule Using the order lead (Pb), nitrogen (N), chromium (Cr), manganese (Mn), and oxygen (O), the vector equation to be solved is 3 lead 6 nitrogen x+ x x3+ x4+ x5+ x6 chromium manganese oxygen he general solution is x (/6)x 6, x (/45)x 6, x 3 (/8)x 6, x 4 (/45)x 6, x 5 (44/45)x 6, and x 6 is free ake x 6 9 hen x 5, x 44, x 3 5, x 4, and x 5 88 he balanced equation is 5PbN CrMn O 8 5Pb 3 O 4 + Cr O MnO + 9NO [M] Set up vectors that list the atoms per molecule Using the order manganese (Mn), sulfur (S), arsenic (As), chromium (Cr), oxygen (O), and hydrogen (H), the vector equation to be solved is manganese 3 sulfur arsenic x + x + x3 x4 + x5 + x6 + x7 chromium oxygen 3 hydrogen In rational format, the general solution is x (6/37)x 7, x (3/37)x 7, x 3 (374/37)x 7, x 4 (6/37)x 7, x 5 (6/37)x 7, x 6 (3/37)x 7, and x 7 is free ake x 7 37 to make the other variables whole numbers he balanced equation is 6MnS + 3As Cr O H SO 4 6HMnO 4 + 6AsH 3 + 3CrS 3 O + 37H O

45 6 Solutions 45 Note that some students may use decimal calculation and simply "round off" the fractions that relate x,, x 6 to x 7 he equations they construct may balance most of the elements but miss an atom or two Here is a solution submitted by two of my students: 5MnS + 4As Cr O H SO 4 5HMnO 4 + 8AsH 3 + 4CrS 3 O + H O Everything balances except the hydrogen he right side is short 8 hydrogen atoms Perhaps the students thought that the 4H (hydrogen gas) escaped! Write the equations for each node: Node Flow in Flow out A B C x+ x3 x 8 x3 + x4 x+ x otal flow: 8 x + Rearrange the equations: x + x3 x x3 x4 x + x x Reduce the augmented matrix: 6 ~ ~ For this type of problem, the best description of the general solution uses the style of Section rather than parametric vector form: x x3 x 6 + x3 Since x cannot be negative, the largest value of x 3 is x3 is free x4 6 8 x A C x 3 x B x 4 Write the equations for each intersection: Intersection Flow in Flow out A x x + x + 4 B x + x C x + x x + D x + x 6 otal flow: 4 A B x x x 3 x 4 x 5 D 6 C

46 46 CHAPER Linear Equations in Linear Algebra Rearrange the equations: x x x x x x + x x 3 5 x + x Reduce the augmented matrix: 4 ~ 6 6 he general solution (written in the style of Section ) is x + x3 x5 x x3 + x5 x3 is free b When x 4, x 5 must be 6, and x4 6 x 5 x5 is free x 4 + x3 x 6 x3 x3 is free x4 x5 6 c he minimum value of x is 4 cars/minute, because x 3 cannot be negative 3 Write the equations for each intersection: Intersection Flow in Flow out A x + 3 x + 8 B x + x x + x C x + x D x + 4 x E x + 6 x + 3 otal flow: 3 3 Rearrange the equations: x x x 5 x x + x x x 3 x x x 5 6 x A x x 5 C x B x 6 6 E x 3 x 4 D Reduce the augmented matrix: ~ ~

47 6 Solutions 47 4 ~ ~ 5 6 x x3 4 x x3 + x3 is free a he general solution is x4 x6 + 5 x5 x6 + 6 x6 is free b o find minimum flows, note that since x cannot be negative, x 3 > 4 his implies that x > 5 Also, since x 6 cannot be negative, x 4 > 5 and x 5 > 6 he minimum flows are x 5, x 3 4, x 4 5, x 5 6 (when x and x 6 ) 4 Write the equations for each intersection Intersection Flow in Flow out A x x + B x + 5 x3 C x3 x4 + D x4 + 5 x5 E x5 x6 + 8 F x + x 6 Rearrange the equations: x x x x3 5 x3 x4 x4 x5 5 x5 x6 8 x + x 6 Reduce the augmented matrix: 5 5 ~ ~ x 3 B x A 5 C D x 5 x 4 E x 6 x F 8

48 48 CHAPER Linear Equations in Linear Algebra 5 ~ ~ he general solution is 7 8 Since x 4 cannot be negative, the minimum value of x 6 is 7 x + x6 x x6 x3 5 + x6 x 7 + x x5 8 + x6 x6 is free 4 6 Note: he MALAB box in the Study Guide discusses rational calculations, needed for balancing the chemical equations in Exercises 9 and As usual, the appendices cover this material for Maple, Mathematica, and the I and HP graphic calculators 7 SOLUIONS Note: Key exercises are 9 and 3 3 Exercise 3 states a result that could be a theorem in the text here is a danger, however, that students will memorize the result without understanding the proof, and then later mix up the words row and column Exercises 37 and 38 anticipate the discussion in Section 9 of one-to-one transformations Exercise 44 is fairly difficult for my students Use an augmented matrix to study the solution set of x u + x v + x 3 w (*), where u, v, and w are the three given vectors Since 4 ~ 4, there are no free variables So the homogeneous equation (*) has only the trivial solution he vectors are linearly independent Use an augmented matrix to study the solution set of x u + x v + x 3 w (*), where u, v, and w are the 3 8 three given vectors Since 5 4 ~ 5 4, there are no free variables So the 8 3 homogeneous equation (*) has only the trivial solution he vectors are linearly independent 3 Use the method of Example 3 (or the box following the example) By comparing entries of the vectors, one sees that the second vector is 3 times the first vector hus, the two vectors are linearly dependent 4 From the first entries in the vectors, it seems that the second vector of the pair, 4 8 may be times the first vector But there is a sign problem with the second entries So neither of the vectors is a multiple of the other he vectors are linearly independent 5 Use the method of Example Row reduce the augmented matrix for Ax : ~ ~ ~ ~

49 7 Solutions 49 here are no free variables he equation Ax has only the trivial solution and so the columns of A are linearly independent 6 Use the method of Example Row reduce the augmented matrix for Ax : ~ ~ ~ ~ here are no free variables he equation Ax has only the trivial solution and so the columns of A are linearly independent 7 Study the equation Ax Some people may start with the method of Example : ~ ~ But this is a waste of time here are only 3 rows, so there are at most three pivot positions Hence, at least one of the four variables must be free So the equation Ax has a nontrivial solution and the columns of A are linearly dependent 8 Same situation as with Exercise 7 he (unnecessary) row operations are ~ 8 4 ~ Again, because there are at most three pivot positions yet there are four variables, the equation Ax has a nontrivial solution and the columns of A are linearly dependent 9 a he vector v 3 is in Span{v, v } if and only if the equation x v + x v v 3 has a solution o find out, row reduce [v v v 3 ], considered as an augmented matrix: ~ 8 6 h h At this point, the equation 8 shows that the original vector equation has no solution So v 3 is in Span{v, v } for no value of h b For {v, v, v 3 } to be linearly independent, the equation x v + x v + x 3 v 3 must have only the trivial solution Row reduce the augmented matrix [v v v 3 ] ~ 8 ~ 8 6 h h For every value of h, x is a free variable, and so the homogeneous equation has a nontrivial solution hus {v, v, v 3 } is a linearly dependent set for all h

50 5 CHAPER Linear Equations in Linear Algebra a he vector v 3 is in Span{v, v } if and only if the equation x v + x v v 3 has a solution o find out, row reduce [v v v 3 ], considered as an augmented matrix: 5 9 ~ 3 6 h h+ 6 At this point, the equation shows that the original vector equation has no solution So v 3 is in Span{v, v } for no value of h b For {v, v, v 3 } to be linearly independent, the equation x v + x v + x 3 v 3 must have only the trivial solution Row reduce the augmented matrix [v v v 3 ] 5 9 ~ ~ 3 6 h h+ 6 For every value of h, x is a free variable, and so the homogeneous equation has a nontrivial solution hus {v, v, v 3 } is a linearly dependent set for all h o study the linear dependence of three vectors, say v, v, v 3, row reduce the augmented matrix [v v v 3 ]: ~ 4 ~ h 5 h+ 4 h6 he equation x v + x v + x 3 v 3 has a nontrivial solution if and only if h 6 (which corresponds to x 3 being a free variable) hus, the vectors are linearly dependent if and only if h 6 o study the linear dependence of three vectors, say v, v, v 3, row reduce the augmented matrix [v v v 3 ]: h ~ 5 h he equation x v + x v + x 3 v 3 has a free variable and hence a nontrivial solution no matter what the value of h So the vectors are linearly dependent for all values of h 3 o study the linear dependence of three vectors, say v, v, v 3, row reduce the augmented matrix [v v v 3 ]: h ~ h he equation x v + x v + x 3 v 3 has a free variable and hence a nontrivial solution no matter what the value of h So the vectors are linearly dependent for all values of h

51 7 Solutions 5 4 o study the linear dependence of three vectors, say v, v, v 3, row reduce the augmented matrix [v v v 3 ]: ~ ~ 3 8 h 7 h+ 3 h+ he equation x v + x v + x 3 v 3 has a nontrivial solution if and only if h + (which corresponds to x 3 being a free variable) hus, the vectors are linearly dependent if and only if h 5 he set is linearly dependent, by heorem 8, because there are four vectors in the set but only two entries in each vector 6 he set is linearly dependent because the second vector is 3/ times the first vector 7 he set is linearly dependent, by heorem 9, because the list of vectors contains a zero vector 8 he set is linearly dependent, by heorem 8, because there are four vectors in the set but only two entries in each vector 9 he set is linearly independent because neither vector is a multiple of the other vector [wo of the entries in the first vector are 4 times the corresponding entry in the second vector But this multiple does not work for the third entries] he set is linearly dependent, by heorem 9, because the list of vectors contains a zero vector a False A homogeneous system always has the trivial solution See the box before Example b False See the warning after heorem 7 c rue See Fig 3, after heorem 8 d rue See the remark following Example 4 a rue See Fig b False For instance, the set consisting of heorem 8 c rue See the remark following Example 4 d False See Example 3(a) and is linearly dependent See the warning after 3 * * * 4 *,, 5 * and

52 5 CHAPER Linear Equations in Linear Algebra * * * 6 he columns must linearly independent, by heorem 7, because the first column is not zero, the second column is not a multiple of the first, and the third column is not a linear combination of the preceding two columns (because a 3 is not in Span{a, a }) 7 All five columns of the 7 5 matrix A must be pivot columns Otherwise, the equation Ax would have a free variable, in which case the columns of A would be linearly dependent 8 If the columns of a 5 7 matrix A span R 5, then A has a pivot in each row, by heorem 4 Since each pivot position is in a different column, A has five pivot columns 9 A: any 3 matrix with two nonzero columns such that neither column is a multiple of the other In this case the columns are linearly independent and so the equation Ax has only the trivial solution B: any 3 matrix with one column a multiple of the other 3 a n b he columns of A are linearly independent if and only if the equation Ax has only the trivial solution his happens if and only if Ax has no free variables, which in turn happens if and only if every variable is a basic variable, that is, if and only if every column of A is a pivot column 3 hink of A [a a a 3 ] he text points out that a 3 a + a Rewrite this as a + a a 3 As a matrix equation, Ax for x (,, ) 3 hink of A [a a a 3 ] he text points out that a + a a 3 Rewrite this as a + a a 3 As a matrix equation, Ax for x (,, ) 33 rue, by heorem 7 (he Study Guide adds another justification) 34 rue, by heorem 9 35 False he vector v could be the zero vector 36 False Counterexample: ake v, v, and v 4 all to be multiples of one vector ake v 3 to be not a multiple of that vector For example, 4 4 v, v, v3, v rue A linear dependence relation among v, v, v 3 may be extended to a linear dependence relation among v, v, v 3, v 4 by placing a zero weight on v 4 38 rue If the equation x v + x v + x 3 v 3 had a nontrivial solution (with at least one of x, x, x 3 nonzero), then so would the equation x v + x v + x 3 v 3 + v 4 But that cannot happen because {v, v, v 3, v 4 } is linearly independent So {v, v, v 3 } must be linearly independent his problem can also be solved using Exercise 37, if you know that the statement there is true

53 7 Solutions If for all b the equation Ax b has at most one solution, then take b, and conclude that the equation Ax has at most one solution hen the trivial solution is the only solution, and so the columns of A are linearly independent 4 An m n matrix with n pivot columns has a pivot in each column So the equation Ax b has no free variables If there is a solution, it must be unique 4 [M] /8 5 5/8 9/4 A ~ /4 5/4 5/ 5 7 7/8 7 35/8 35/ /8 5 5/8 9/4 5/8 5 5/8 9/4 ~ ~ /5 /5 77/ he pivot columns of A are,, and 5 Use them to form B Other likely choices use columns 3 or 4 of A instead of : , Actually, any set of three columns of A that includes column 5 will work for B, but the concepts needed to prove that are not available now (Column 5 is not in the two-dimensional subspace spanned by the first four columns) 4 [M] /6 / /4 59/ 65/ ~ ~ 89/ 89/ he pivot columns of A are,, 4, and 6 Use them to form B Other likely choices might use column 3 of A instead of, and/or use column 5 instead of 4

54 54 CHAPER Linear Equations in Linear Algebra 43 [M] Make v any one of the columns of A that is not in B and row reduce the augmented matrix [B v] he calculations will show that the equation Bx v is consistent, which means that v is a linear combination of the columns of B hus, each column of A that is not a column of B is in the set spanned by the columns of B 44 [M] Calculations made as for Exercise 43 will show that each column of A that is not a column of B is in the set spanned by the columns of B Reason: he original matrix A has only four pivot columns If one or more columns of A are removed, the resulting matrix will have at most four pivot columns (Use exactly the same row operations on the new matrix that were used to reduce A to echelon form) If v is a column of A that is not in B, then row reduction of the augmented matrix [B v] will display at most four pivot columns Since B itself was constructed to have four pivot columns, adjoining v cannot produce a fifth pivot column hus the first four columns of [B v] are the pivot columns his implies that the equation Bx v has a solution Note: At the end of Section 7, the Study Guide has another note to students about Mastering Linear Algebra Concepts he note describes how to organize a review sheet that will help students form a mental image of linear independence he note also lists typical misuses of terminology, in which an adjective is applied to an inappropriate noun (his is a major problem for my students) I require my students to prepare a review sheet as described in the Study Guide, and I try to make helpful comments on their sheets I am convinced, through personal observation and student surveys, that the students who prepare many of these review sheets consistently perform better than other students Hopefully, these students will remember important concepts for some time beyond the final exam 8 SOLUIONS Notes: he key exercises are 7, 5 and 3 Exercise is worth assigning even if you normally assign only odd exercises Exercise 5 (and 7) can be used to make a few comments about computer graphics, even if you do not plan to cover Section 6 For Exercise 3, the Study Guide encourages students not to look at the proof before trying hard to construct it hen the Guide explains how to create the proof Exercises 9 and provide a natural segue into Section 9 I arrange to discuss the homework on these exercises when I am ready to begin Section 9 he definition of the standard matrix in Section 9 follows naturally from the homework, and so I ve covered the first page of Section 9 before students realize we are working on new material he text does not provide much practice determining whether a transformation is linear, because the time needed to develop this skill would have to be taken away from some other topic If you want your students to be able to do this, you may need to supplement Exercises 9, 3, 3 and 33 If you skip the concepts of one-to-one and onto in Section 9, you can use the result of Exercise 3 to show that the coordinate mapping from a vector space onto R n (in Section 44) preserves linear independence and dependence of sets of vectors (See Example 6 in Section 44) (u) Au 3 6, (v) a a b b a 5a (u) Au 5, (v) 5 b 5b c 5c

55 8 Solutions 55 3 [ A b ] 6 7 ~ 5 ~ ~ 5 ~ x, unique solution 4 [ A b ] ~ 4 7 ~ ~ 3 ~ 3 3 x, unique solution 5 [ A b ] ~ ~ Note that a solution is not o avoid this common error, write the equations: x 3 x + 3x 3 and solve for the basic variables: + x 3 General solution 3 x 3 and x 6 [ A b ] x 3 3 x3 x3 x 33x3 x x3 x3 is free x 33x3 3 3 x x x + x For a particular solution, one might choose ~ ~ ~ x + 3x 7 + x 3 3 General solution: x 73x3 x 3 x3 x3 is free x 73x3 7 3 x x 3 x 3 + x, one choice: 3 3 x3 x3 7 3

56 56 CHAPER Linear Equations in Linear Algebra 7 a 5; the domain of is R 5, because a 6 5 matrix has 5 columns and for Ax to be defined, x must be in R 5 b 6; the codomain of is R 6, because Ax is a linear combination of the columns of A, and each column of A is in R 6 8 A must have 5 rows and 4 columns For the domain of to be R 4, A must have four columns so that Ax is defined for x in R 4 For the codomain of to be R 5, the columns of A must have five entries (in which case A must have five rows), because Ax is a linear combination of the columns of A Solve Ax 4 3 ~ 4 3 ~ x 9x3 7x4 9 7 x 9x3 + 7x4 ~ 4 3 x 4x3 3x4 x 4x3 + 3x4, x3 is free x4 is free x 9x3 7x4 9 7 x 4x3 3x x x 3 + x 4 x3 x3 x4 x Solve Ax ~ ~ ~ ~ ~ 3 8 x 3x3 3x3 3 x + 3x3 x x 3 x 3 x + x3 x x 3 x3 is free x4 x3 x4 Is the system represented by [A b] consistent? Yes, as the following calculation shows ~ 4 3 ~ he system is consistent, so b is in the range of the transformation x Ax

57 8 Solutions 57 Is the system represented by [A b] consistent? ~ ~ ~ ~ he system is inconsistent, so b is not in the range of the transformation x Ax 3 4 v x v x u (v) u (u) x x (u) (v) A reflection through the origin A contraction by the factor 5 he transformation in Exercise 3 may also be described as a rotation of π radians about the origin or a rotation of π radians about the origin 5 6 x x (u) v (v) v (u) u u x x (v) A projection onto the x -axis A reflection through the line x x 6 7 (3u) 3(u) 3 3, (v) (v) 3 6, and 6 4 (3u + v) 3(u) (v)

58 58 CHAPER Linear Equations in Linear Algebra 8 Draw a line through w parallel to v, and draw a line through w parallel to u See the left part of the figure below From this, estimate that w u + v Since is linear, (w) (u) + (v) Locate (u) and (v) as in the right part of the figure and form the associated parallelogram to locate (w) x x (v) w u (v) (w) v v x (u) x 9 All we know are the images of e and e and the fact that is linear he key idea is to write 5 x e e hen, from the linearity of, write 3 (x) (5e 3e ) 5(e ) 3(e ) 5y 3y x x o find the image of x, observe that x x x x x x + + e e hen x x (x) (x e + x e ) x (e ) + x (e ) x x x 6x + Use the basic definition of Ax to construct A Write x 7 7 ( x) xv+ xv [ v v], A x 5 3 x 5 3 a rue Functions from R n to R m are defined before Fig A linear transformation is a function with certain properties b False he domain is R 5 See the paragraph before Example c False he range is the set of all linear combinations of the columns of A See the paragraph before Example d False See the paragraph after the definition of a linear transformation e rue See the paragraph following the box that contains equation (4) a rue See the paragraph following the definition of a linear transformation b False If A is an m n matrix, the codomain is R m See the paragraph before Example c False he question is an existence question See the remark about Example (d), following the solution of Example d rue See the discussion following the definition of a linear transformation e rue See the paragraph following equation (5)

59 8 Solutions 59 3 x u + v x u v u cu x x (u) (v) (u) (cu) (u + v) 4 Given any x in R n, there are constants c,, c p such that x c v + c p v p, because v,, v p span R n hen, from property (5) of a linear transformation, (x) c (v ) + + c p (v p ) c + + c p 5 Any point x on the line through p in the direction of v satisfies the parametric equation x p + tv for some value of t By linearity, the image (x) satisfies the parametric equation (x) (p + tv) (p) + t(v) (*) If (v), then (x) (p) for all values of t, and the image of the original line is just a single point Otherwise, (*) is the parametric equation of a line through (p) in the direction of (v) 6 Any point x on the plane P satisfies the parametric equation x su + tv for some values of s and t By linearity, the image (x) satisfies the parametric equation (x) s(u) + t(v) (s, t in R) (*) he set of images is just Span{(u), (v)} If (u) and (v) are linearly independent, Span{(u), (v)} is a plane through (u), (v), and If (u) and (v) are linearly dependent and not both zero, then Span{(u), (v)} is a line through If (u) (v), then Span{(u), (v)} is {} 7 a From Fig 7 in the exercises for Section 5, the line through (p) and (q) is in the direction of q p, and so the equation of the line is x p + t(q p) p + tq tp ( t)p + tq b Consider x ( t)p + tq for t such that < t < hen, by linearity of, (x) (( t)p + tq) ( t)(p) + t(q) < t < (*) If (p) and (q) are distinct, then (*) is the equation for the line segment between (p) and (q), as shown in part (a) Otherwise, the set of images is just the single point (p), because ( t)(p) + t(q) ( t)(p) + t(p) (p) 8 Consider a point x in the parallelogram determined by u and v, say x au + bv for < a <, < b < By linearity of, the image of x is (x) (au + bv) a(u) + b(v), for < a <, < b < (*) his image point lies in the parallelogram determined by (u) and (v) Special degenerate cases arise when (u) and (v) are linearly dependent If one of the images is not zero, then the parallelogram is actually the line segment from to (u) + (v) If both (u) and (v) are zero, then the parallelogram is just {} Another possibility is that even u and v are linearly dependent, in which case the original parallelogram is degenerate (either a line segment or the zero vector) In this case, the set of images must be degenerate, too 9 a When b, f (x) mx In this case, for all x,y in R and all scalars c and d, f (cx + dy) m(cx + dy) mcx + mdy c(mx) + d(my) c f (x) + d f (y) his shows that f is linear

60 6 CHAPER Linear Equations in Linear Algebra b When f (x) mx + b, with b nonzero, f() m() b b his shows that f is not linear, because every linear transformation maps the zero vector in its domain into the zero vector in the codomain (In this case, both zero vectors are just the number ) Another argument, for instance, would be to calculate f (x) m(x) + b and f (x) mx + b If b is nonzero, then f (x) is not equal to f (x) and so f is not a linear transformation c In calculus, f is called a linear function because the graph of f is a line 3 Let (x) Ax + b for x in R n If b is not zero, () A + b b Actually, fails both properties of a linear transformation For instance, (x) A(x) + b Ax + b, which is not the same as (x) (Ax + b) Ax + b Also, (x + y) A(x + y) + b Ax + Ay + b which is not the same as (x) + (y) Ax + b + Ay + b 3 (he Study Guide has a more detailed discussion of the proof) Suppose that {v, v, v 3 } is linearly dependent hen there exist scalars c, c, c 3, not all zero, such that c v + c v + c 3 v 3 hen (c v + c v + c 3 v 3 ) () Since is linear, c (v ) + c (v ) + c 3 (v 3 ) Since not all the weights are zero, {(v ), (v ), (v 3 )} is a linearly dependent set 3 ake any vector (x, x ) with x, and use a negative scalar For instance, (, ) (, 3), but ( (, )) (, ) (, 3) ( ) (, ) 33 One possibility is to show that does not map the zero vector into the zero vector, something that every linear transformation does do (, ) (, 4, ) 34 Suppose that {u, v} is a linearly independent set in R n and yet (u) and (v) are linearly dependent hen there exist weights c, c, not both zero, such that c (u) + c (v) Because is linear, (c u + c v) hat is, the vector x c u + c v satisfies (x) Furthermore, x cannot be the zero vector, since that would mean that a nontrivial linear combination of u and v is zero, which is impossible because u and v are linearly independent hus, the equation (x) has a nontrivial solution 35 ake u and v in R 3 and let c and d be scalars hen cu + dv (cu + dv, cu + dv, cu 3 + dv 3 ) he transformation is linear because (cu + dv) (cu + dv, cu + dv, (cu 3 + dv 3 )) (cu + dv, cu + dv, cu 3 dv 3 ) (cu, cu, cu 3 ) + (dv, dv, dv 3 ) c(u, u, u 3 ) + d(v, v, v 3 ) c(u) + d(v) 36 ake u and v in R 3 and let c and d be scalars hen cu + dv (cu + dv, cu + dv, cu 3 + dv 3 ) he transformation is linear because (cu + dv) (cu + dv,, cu 3 + dv 3 ) (cu,, cu 3 ) + (dv,, dv 3 ) c(u,, u 3 ) + d(v,, v 3 ) c(u) + d(v)

61 8 Solutions 6 37 [M] / / ~, x (7 / ) x4 x (9/ ) x4 x3 x4 is free 7/ 9/ x x 4 38 [M] / /4 ~, / x (3/ 4) x4 x (5/ 4) x x3 (7 / 4) x4 x4 is free 4 3/4 5/4 x x 4 7/ / / 7 39 [M] ~, yes, b is in the range of the transformation, because the augmented matrix shows a consistent system In fact, x 4 + (7/) x4 4 x 7 + (9/) x 4 7 the general solution is ; when x 4 a solution is x x3 x4 is free /4 5/ /4 /4 4 [M] ~, yes, b is in the range of the /4 3/ transformation, because the augmented matrix shows a consistent system In fact, x 5/4 (3/4) x4 x / 4 (5/ 4) x 4 4 the general solution is ; when x 4 a solution is x x3 3/ 4 + (7 / 4) x4 5 x4 is free Notes: At the end of Section 8, the Study Guide provides a list of equations, figures, examples, and connections with concepts that will strengthen a student s understanding of linear transformations I encourage my students to continue the construction of review sheets similar to those for span and linear independence, but I refrain from collecting these sheets At some point the students have to assume the responsibility for mastering this material If your students are using MALAB or another matrix program, you might insert the definition of matrix multiplication after this section, and then assign a project that uses random matrices to explore properties of matrix multiplication See Exercises in Section Meanwhile, in class you can continue with your plans for finishing Chapter When you get to Section, you won t have much to do he Study Guide s MALAB note for Section contains the matrix notation students will need for a project on matrix multiplication he appendices in the Study Guide have the corresponding material for Mathematica, Maple, and the -83+/86/89 and HP-48G graphic calculators

62 6 CHAPER Linear Equations in Linear Algebra 9 SOLUIONS Notes: his section is optional if you plan to treat linear transformations only lightly, but many instructors will want to cover at least heorem and a few geometric examples Exercises 5 and 6 illustrate a fast way to solve Exercises 7 without explicitly computing the images of the standard basis he purpose of introducing one-to-one and onto is to prepare for the term isomorphism (in Section 44) and to acquaint math majors with these terms Mastery of these concepts would require a substantial digression, and some instructors prefer to omit these topics (and Exercises 5 4) In this case, you can use the result of Exercise 3 in Section 8 to show that the coordinate mapping from a vector space onto R n (in Section 44) preserves linear independence and dependence of sets of vectors (See Example 6 in Section 44) he notions of one-to-one and onto appear in the Invertible Matrix heorem (Section 3), but can be omitted there if desired Exercises 5 8 and 3 36 offer fairly easy writing practice Exercises 3, 3, and 35 provide important links to earlier material A [(e ) (e )] A [(e ) (e ) (e 3 )] (e ) e, (e ) e A [ e e ] 4 (e ) /, (e ) /, A / / / / / / 5 (e ) e e, (e ) e, A 6 (e ) e, (e ) e + 3e 3, A 3 7 Follow what happens to e and e Since e is on the unit circle in the plane, it rotates through 3 π /4 radians into a point on the unit circle that lies in the third quadrant and on the line x x (that is, y x in more familiar notation) he point (, ) is on the ine x x, but its distance from the origin is So the rotational image of e is ( /, / ) hen this image reflects in the horizontal axis to ( /,/ ) Similarly, e rotates into a point on the unit circle that lies in the second quadrant and on the line x x, namely,

63 9 Solutions 63 ( /, / ) hen this image reflects in the horizontal axis to ( /,/ ) When the two calculations described above are written in vertical vector notation, the transformation s standard matrix [(e ) (e )] is easily seen: / / / / / / e, e, A / / / / / / 8 e e e and e e e, so A [ e e ] 9 he horizontal shear maps e into e, and then the reflection in the line x x maps e into e (See able ) he horizontal shear maps e into e into e e o find the image of e e when it is reflected in the line x x, use the fact that such a reflection is a linear transformation So, the image of e e is the same linear combination of the images of e and e, namely, e ( e ) e + e o summarize, e e e and e e e e+ e, so A o find the image of e e when it is reflected through the vertical axis use the fact that such a reflection is a linear transformation So, the image of e e is the same linear combination of the images of e and e, namely, e + e, e e e and e e e so A he transformation described maps e e e and maps e e e A rotation through π radians also maps e into e and maps e into e Since a linear transformation is completely determined by what it does to the columns of the identity matrix, the rotation transformation has the same effect as on every vector in R he transformation in Exercise 8 maps e e e and maps e e e A rotation about the origin through π / radians also maps e into e and maps e into e Since a linear transformation is completely determined by what it does to the columns of the identity matrix, the rotation transformation has the same effect as on every vector in R 3 Since (, ) e + e, the image of (, ) under is (e ) + (e ), by linearity of On the figure in the exercise, locate (e ) and use it with (e ) to form the parallelogram shown below x (e ) (e ) (, ) (e ) x

64 64 CHAPER Linear Equations in Linear Algebra 4 Since (x) Ax [a a ]x x a + x a a + 3a, when x (, 3), the image of x is located by forming the parallelogram shown below x (, 3) a a x a 5 By inspection, 6 By inspection, 3 x 3xx3 4 x 4x x3 x x + x3 x x x x x x + x 7 o express (x) as Ax, write (x) and x as column vectors, and then fill in the entries in A by inspection, as done in Exercises 5 and 6 Note that since (x) and x have four entries, A must be a 4 4 matrix x x x x x x (x) + A x + x3 x3 x3 x3 + x4 x 4 x 4 8 As in Exercise 7, write (x) and x as column vectors Since x has entries, A has columns Since (x) has 4 entries, A has 4 rows x 3x 3 x 4x x 4 x A x x x 9 Since (x) has entries, A has rows Since x has 3 entries, A has 3 columns x x x 5x + 4x3 5 4 A x x x 6x 3 6 x 3 x 3 Since (x) has entry, A has row Since x has 4 entries, A has 4 columns x x x x [x+ 3x3 4 x4] [ A ] [ 3 4] x3 x3 x 4 x 4

65 9 Solutions 65 x+ x x x (x) A 4x 5x x 4 5 x o solve (x) ~ ~, x 3 8, row reduce the augmented matrix: xx (x) x x x 3x A 3 + x x o solve (x) 4, row reduce the augmented 3xx 3 9 matrix: ~ 3 ~ 3 ~ 3 5, x a rue See heorem b rue See Example 3 c False See the paragraph before able d False See the definition of onto Any function from R n to R m maps each vector onto another vector e False See Example 5 4 a False See the paragraph preceding Example b rue See heorem c rue See able d False See the definition of one-to-one Any function from R n to R m maps a vector onto a single (unique) vector e rue See the solution of Example 5 5 hree row interchanges on the standard matrix A of the transformation in Exercise 7 produce his matrix shows that A has only three pivot positions, so the equation Ax has a nontrivial solution By heorem, the transformation is not one-to-one Also, since A does not have a pivot in each row, the columns of A do not span R 4 By heorem, does not map R 4 onto R 4 6 he standard matrix A of the transformation in Exercise is 3 Its columns are linearly dependent because A has more columns than rows So is not one-to-one, by heorem Also, A is row 4 5 equivalent to 9 9, which shows that the rows of A span R By heorem, maps R 3 onto R he standard matrix A of the transformation in Exercise 9 is 6 he columns of A are linearly dependent because A has more columns than rows So is not one-to-one, by heorem Also, A has a pivot in each row, so the rows of A span R By heorem, maps R 3 onto R

66 66 CHAPER Linear Equations in Linear Algebra 8 he standard matrix A of the transformation in Exercise 4 has linearly independent columns, because the figure in that exercise shows that a and a are not multiples So is one-to-one, by heorem Also, A must have a pivot in each column because the equation Ax has no free variables hus, the * echelon form of A is Since A has a pivot in each row, the columns of A span R So maps R onto R An alternate argument for the second part is to observe directly from the figure in Exercise 4 that a and a span R his is more or less evident, based on experience with grids such as those in Figure 8 and Exercise 7 of Section 3 9 By heorem, the columns of the standard matrix A must be linearly independent and hence the * * * equation Ax has no free variables So each column of A must be a pivot column: A ~ Note that cannot be onto because of the shape of A 3 By heorem, the columns of the standard matrix A must span R 3 By heorem 4, the matrix must have a pivot in each row here are four possibilities for the echelon form: * * * * * * * * * * * * *, * *, *, * * Note that cannot be one-to-one because of the shape of A 3 is one-to-one if and only if A has n pivot columns By heorem (b), is one-to-one if and only if the columns of A are linearly independent And from the statement in Exercise 3 in Section 7, the columns of A are linearly independent if and only if A has n pivot columns 3 he transformation maps R n onto R m if and only if the columns of A span R m, by heorem his happens if and only if A has a pivot position in each row, by heorem 4 in Section 4 Since A has m rows, this happens if and only if A has m pivot columns hus, maps R n onto R m if and only A has m pivot columns n m 33 Define : R R by (x) Bx for some m n matrix B, and let A be the standard matrix for By definition, A [(e ) (e n )], where e j is the jth column of I n However, by matrix-vector multiplication, (e j ) Be j b j, the jth column of B So A [b b n ] B 34 he transformation maps R n onto R m if and only if for each y in R m there exists an x in R n such that y (x) n m n m 35 If : R R maps R onto R, then its standard matrix A has a pivot in each row, by heorem and by heorem 4 in Section 4 So A must have at least as many columns as rows hat is, m < n When is one-to-one, A must have a pivot in each column, by heorem, so m > n 36 ake u and v in R p and let c and d be scalars hen (S(cu + dv)) (c S(u) + d S(v)) because S is linear c (S(u)) + d (S(v)) because is linear his calculation shows that the mapping x (S(x)) is linear See equation (4) in Section 8

67 Solutions / / [M] ~ ~ ~ here is no pivot in the / fourth column of the standard matrix A, so the equation Ax has a nontrivial solution By heorem, the transformation is not one-to-one (For a shorter argument, use the result of Exercise 3) [M] ~ ~ No here is no pivot in the third column of the standard matrix A, so the equation Ax has a nontrivial solution By heorem, the transformation is not one-to-one (For a shorter argument, use the result of Exercise 3) [M] ~ ~ here is not a pivot in every row, so the columns of the standard matrix do not span R 5 By heorem, the transformation does not map R 5 onto R [M] ~ ~ here is not a pivot in every row, so the columns of the standard matrix do not span R 5 By heorem, the transformation does not map R 5 onto R 5 SOLUIONS a If x is the number of servings of Cheerios and x is the number of servings of % Natural Cereal, then x and x should satisfy nutrients nutrients quantities xper serving + xper serving of of nutrients of Cheerios % Natural required hat is, x + x

68 68 CHAPER Linear Equations in Linear Algebra b he equivalent matrix equation is matrix for this equation x 9 o solve this, row reduce the augmented x ~ ~ ~ ~ ~ he desired nutrients are provided by 5 servings of Cheerios together with serving of % Natural Cereal Set up nutrient vectors for one serving of Kellogg s Cracklin Oat Bran (COB) and Kellogg's Crispix (Crp): Nutrients: COB Crp calories protein 3 carbohydrate 5 fat COB Crp, u a Let B [ ] hen Bu lists the amounts of calories, protein, carbohydrate, and fat in a mixture of three servings of Cracklin' Oat Bran and two servings of Crispix b Let u and u be the number of servings of Cracklin Oat Bran and Crispix, respectively Can these 5 numbers satisfy the equation B u u? o find out, row reduce the augmented matrix ~ ~ ~

69 Solutions 69 he last row identifies an inconsistent system, because 5 is impossible So, technically, there is no mixture of the two cereals that will supply exactly the desired list of nutrients However, one could tentatively ignore the final equation and see what the other equations prescribe hey reduce to u 5 and u 75 What does the corresponding mixture provide? COB + 75 Crp he error of 5% for fat might be acceptable for practical purposes Actually, the data in COB and Crp are certainly not precise and may have some errors even greater than 5% 3 Here are the data, assembled from able and Exercise 3: Mg of Nutrients/Unit Nutrients soy soy Required Nutrient milk flour whey prot (milligrams) protein carboh fat calcium a Let x, x, x 3, x 4 represent the number of units of nonfat milk, soy flour, whey, and isolated soy protein, respectively hese amounts must satisfy the following matrix equation x x x x 4 8 b [M] ~ ~ he solution is x 64, x 54, x 3 9, x 4 his solution is not feasible, because the mixture cannot include negative amounts of whey and isolated soy protein Although the coefficients of these two ingredients are fairly small, they cannot be ignored he mixture of 64 units of nonfat milk and 54 units of soy flour provide 56 g of protein, 56 g of carbohydrate, 38 g of fat, and 9 g of calcium Some of these nutrients are nowhere close to the desired amounts 4 Let x, x, and x 3 be the number of units of foods,, and 3, respectively, needed for a meal he values of x, x, and x 3 should satisfy nutrients nutrients nutrients x (in mg) (in mg) (in mg) milligrams per unit x per unit x per unit of nutrients of Food of Food of Food 3 required

70 7 CHAPER Linear Equations in Linear Algebra From the given data, x 5 + x 4 + x o solve, row reduce the corresponding augmented matrix: ~ 6 9 ~ 3/ / /33 5/ ~ 3/ /3 ~ 5/33 ~ 5/33 4/33 4/33 4/33 5/ 455 units of Food x 5/33 5 units of Food 4/33 units of Food 3 5 Loop : he resistance vector is 5 r otal of four RI voltage drops for current I Voltage drop for I is negative; I flows in opposite direction Current I3 does not flow in loop Current I does not flow in loop 4 Loop : he resistance vector is Voltage drop for I is negative; I flows in opposite direction r otal of four RI voltage drops for current I 3 Voltage drop for I is negative; I flows in opposite direction 3 3 Current does not flow in loop Also, r 3 I 4 3, r 4 7 4, and R [r r r 3 r 4 ] Notice that each off-diagonal entry of R is negative (or zero) his happens because the loop current directions are all chosen in the same direction on the figure (For each loop j, this choice forces the currents in other loops adjacent to loop j to flow in the direction opposite to current I j ) 4 3 Next, set v he voltages in loops and 4 are negative because the battery orientation in each loop is opposite to the direction chosen for positive current flow hus, the equation Ri v becomes

71 Solutions 7 5 I 4 3 I 3 [M]: he solution is i 3 7 4I3 4 5 I4 I 756 I I3 93 I Loop : he resistance vector is 4 otal of four RI voltage drops for current I r Voltage drop for is negative; flows in opposite direction I I Current I does not flow in loop 3 Current I does not flow in loop Loop : he resistance vector is r 4 Voltage drop for I is negative; I flows in opposite direction 6 otal of four RI voltage drops for current I Voltage drop for I is negative; I flows in opposite direction 3 3 Current does not flow in loop I 4 Also, r 3, r 4, and R [r r r 3 r 4 ] Set v 3 3 3I3 3 I4 4 3 hen Ri v becomes 4 I 4 I 6 I 3 I 844 [M]: he solution is i I3 46 I Loop : he resistance vector is otal of three RI voltage drops for current I 7 Voltage drop for is negative; r flows in opposite direction I I Current I does not flow in loop 3 4 Voltage drop for I is negative; I flows in opposite direction Loop : he resistance vector is r 4 7 Voltage drop for I is negative; I flows in opposite direction 5 otal of three RI voltage drops for current I 6 Voltage drop for I is negative; I flows in opposite direction 3 3 Current I does not flow in loop 4 4

72 7 CHAPER Linear Equations in Linear Algebra Also, r 3 6, r , and R [r r r 3 r 4 ] Notice that each off-diagonal entry of R is negative (or zero) his happens because the loop current directions are all chosen in the same direction on the figure (For each loop j, this choice forces the currents in other loops adjacent to loop j to flow in the direction opposite to current I j ) 4 3 Next, set v Note the negative voltage in loop 4 he current direction chosen in loop 4 is opposed by the orientation of the voltage source in that loop hus Ri v becomes 7 4 I 4 I I 3 I 55 [M]: he solution is i 6 4 5I3 I I 4 I Loop : he resistance vector is 5 otal of four RI voltage drops for current I 5 Voltage drop for I is negative; I flows in opposite direction r Current I3 does not flow in loop 5 Voltage drop for I is negative; I flows in opposite direction 4 4 Voltage drop for I is negative; I flows in opposite direction Loop : he resistance vector is r Voltage drop for I is negative; I flows in opposite direction 5 otal of four RI voltage drops for current I 5 Voltage drop for I is negative; I flows in opposite direction 3 3 Current I does not flow in loop 4 Voltage drop for I is negative; I flows in opposite direction Also, r 3 5, r 4 5, r 5 3, and R Set v Note the negative voltages for loops where the chosen current direction is opposed by the orientation of the voltage source in that loop hus Ri v becomes:

73 Solutions I I 3 [M] he solution is I I4 3 4 I 5 I 337 I I3 7 I4 67 I he population movement problems in this section assume that the total population is constant, with no migration or immigration he statement that about 5% of the city s population moves to the suburbs means also that the rest of the city s population (95%) remain in the city his determines the entries in the first column of the migration matrix (which concerns movement from the city) From: City Suburbs o: 95 City 5 Suburbs Likewise, if 4% of the suburban population moves to the city, then the other 96% remain in the suburbs 95 4 his determines the second column of the migration matrix:, M 5 96 he difference equation is 6, x k+ Mx k for k,,, Also, x 4, , 586, he population in (when k ) is x Mx , 44, , 573,6 he population in (when k ) is x Mx , 46,74 he data in the first sentence implies that the migration matrix has the form: From: City Suburbs o: 3 City 7 Suburbs he remaining entries are determined by the fact that the numbers in each column must sum to (For instance, if 7% of the city people move to the suburbs, then the rest, or 93%, remain in the city) So the 93 3 migration matrix is M 7 97 he initial population is x 8, 5, , 759, he population in (when k ) is x Mx , 54, , 7, he population in (when k ) is x Mx , 577,9 he problem concerns two groups of people those living in California and those living outside California (and in the United States) It is reasonable, but not essential, to consider the people living inside

74 74 CHAPER Linear Equations in Linear Algebra California first hat is, the first entry in a column or row of a vector will concern the people living in California With this choice, the migration matrix has the form: From: Calif Outside o: Calif Outside a For the first column of the migration matrix M, compute Calif persons { } who moved 59,5 746 otal Calif pop 9,76, { } he other entry in the first column is he exercise requests that 5 decimal places be used So this number should be rounded to 9885 Whatever number of decimal places is used, it is important that the two entries sum to So, for the first fraction, use 75 outside persons { } For the second column of M, compute who moved 564, 58 he other entry { otal outside pop } 8,994, is hus, the migration matrix is From: Calif Outside o: Calif Outside b [M] he initial vector is x (976, 8994), with data in millions of persons Since x describes the population in 99, and x describes the population in 99, the vector x describes the projected population for the year, assuming that the migration rates remain constant and there are no deaths, births, or migration Here are some of the vectors in the calculation, with only the first 4 or 5 figures displayed Numbers are in millions of persons: ,,,,,, x Set M 9 5 and 48 x hen x , and x he entries in x give the approximate distribution of cars on Wednesday, two days after Monday 3 [M] he order of entries in a column of a migration matrix must match the order of the columns For instance, if the first column concerns the population in the city, then the first entry in each column must be the fraction of the population that moves to (or remains in) the city In this case, the data in the 95 3 exercise leads to M 5 97 and x 6, 4,

75 Solutions 75 a Some of the population vectors are x 53,93 47, ,47 47,456 5,, 5, 476,77 57, 63 56,583 58,544 x x x he data here shows that the city population is declining and the suburban population is increasing, but the changes in population each year seem to grow smaller b When x 35, 65,, the situation is different Now x 358,53 364,4 367,843 37,83 5,, 5, 64, ,86 63,57 69,77 x x x he city population is increasing slowly and the suburban population is decreasing No other conclusions are expected (his example will be analyzed in greater detail later in the text) 4 Here are Figs (a) and (b) for Exercise 3, followed by the figure for Exercise 34 in Section : (a) 3 3 (b) Section For Fig (a), the equations are: o solve the system, rearrange the equations and row reduce the augmented matrix Interchanging rows and 4 speeds up the calculations he first five steps are shown in detail ~ ~ ~ ~ 4 4 ~ ~ ~

76 76 CHAPER Linear Equations in Linear Algebra For Fig (b), the equations are Rearrange the equations and row reduce the augmented matrix: ~ ~ a Here are the solution temperatures for the three problems studied: Fig (a) in Exercise 4 of Section : (,,, ) Fig (b) in Exercise 4 of Section : (, 75,, 5) Figure for Exercises 34 in Section (, 75, 3, 5) When the solutions are arranged this way, it is evident that the third solution is the sum of the first two solutions What might not be so evident is that list of boundary temperatures of the third problem is the sum of the lists of boundary temperatures of the first two problems (he temperatures are listed clockwise, starting at the left of ) Fig (a): (,,,,,,, ) Fig (b): (,,, 4, 4,,, ) Fig from Section : (,,, 4, 4, 3, 3, ) b When the boundary temperatures in Fig (a) are multiplied by 3, the new interior temperatures are also multiplied by 3 c he correspondence from the list of eight boundary temperatures to the list of four interior temperatures is a linear transformation A verification of this statement is not expected However, it can be shown that the solutions of the steady-state temperature problem here satisfy a superposition principle he system of equations that approximate the interior temperatures can be written in the form Ax b, where A is determined by the arrangement of the four interior points on the plate and b is a vector in R 4 determined by the boundary temperatures Note: he MALAB box in the Study Guide for Section discusses scientific notation and shows how to generate a matrix whose columns list the vectors x, x, x,, determined by an equation x k+ Mx k for k,, Chapter SUPPLEMENARY EXERCISES a False (he word reduced is missing) Counterexample: A, B, C 3 4 he matrix A is row equivalent to matrices B and C, both in echelon form

77 Chapter Supplementary Exercises 77 b False Counterexample: Let A be any n n matrix with fewer than n pivot columns hen the equation Ax has infinitely many solutions (heorem in Section says that a system has either zero, one, or infinitely many solutions, but it does not say that a system with infinitely many solutions exists Some counterexample is needed) c rue If a linear system has more than one solution, it is a consistent system and has a free variable By the Existence and Uniqueness heorem in Section, the system has infinitely many solutions d False Counterexample: he following system has no free variables and no solution: x + x x 5 x + x e rue See the box after the definition of elementary row operations, in Section If [A b] is transformed into [C d] by elementary row operations, then the two augmented matrices are row equivalent f rue heorem 6 in Section 5 essentially says that when Ax b is consistent, the solution sets of the nonhomogeneous equation and the homogeneous equation are translates of each other In this case, the two equations have the same number of solutions g False For the columns of A to span R m, the equation Ax b must be consistent for all b in R m, not for just one vector b in R m h False Any matrix can be transformed by elementary row operations into reduced echelon form, but not every matrix equation Ax b is consistent i rue If A is row equivalent to B, then A can be transformed by elementary row operations first into B and then further transformed into the reduced echelon form U of B Since the reduced echelon form of A is unique, it must be U j False Every equation Ax has the trivial solution whether or not some variables are free k rue, by heorem 4 in Section 4 If the equation Ax b is consistent for every b in R m, then A must have a position in every one of its m rows If A has m pivot positions, then A has m pivot columns, each containing one pivot position l False he word unique should be deleted Let A be any matrix with m pivot columns but more than m columns altogether hen the equation Ax b is consistent and has m basic variables and at least one free variable hus the equation does not does not have a unique solution m rue If A has n pivot positions, it has a pivot in each of its n columns and in each of its n rows he reduced echelon form has a in each pivot position, so the reduced echelon form is the n n identity matrix n rue Both matrices A and B can be row reduced to the 3 3 identity matrix, as discussed in the previous question Since the row operations that transform B into I 3 are reversible, A can be transformed first into I 3 and then into B o rue he reason is essentially the same as that given for question f p rue If the columns of A span R m, then the reduced echelon form of A is a matrix U with a pivot in each row, by heorem 4 in Section 4 Since B is row equivalent to A, B can be transformed by row operations first into A and then further transformed into U Since U has a pivot in each row, so does B By heorem 4, the columns of B span R m q False See Example 5 in Section 6 r rue Any set of three vectors in R would have to be linearly dependent, by heorem 8 in Section 6

s. False. If a set {v1, v2, v3, v4} were to span R^5, then the matrix A = [v1 v2 v3 v4] would have a pivot position in each of its five rows, which is impossible since A has only four columns.

t. True. The vector −u is a linear combination of u and v, namely, −u = (−1)u + 0v.

u. False. If u and v are multiples, then Span{u, v} is a line, and w need not be on that line.

v. False. Let u and v be any linearly independent pair of vectors and let w = 2v. Then w = 0u + 2v, so w is a linear combination of u and v. However, u cannot be a linear combination of v and w because if it were, u would be a multiple of v. That is not possible since {u, v} is linearly independent.

w. False. The statement would be true if the condition "v1 is not zero" were present. See Theorem 7 in Section 1.7. However, if v1 = 0, then {v1, v2, v3} is linearly dependent, no matter what else might be true about v2 and v3.

x. True. "Function" is another word used for "transformation" (as mentioned in the definition of transformation in Section 1.8), and a linear transformation is a special type of transformation.

y. True. For the transformation x ↦ Ax to map R^5 onto R^6, the matrix A would have to have a pivot in every row and hence have six pivot columns. This is impossible because A has only five columns.

z. False. For the transformation x ↦ Ax to be one-to-one, A must have a pivot in each column. Since A has n columns and at most m pivots (one per row), m might be less than n.

2. If a ≠ 0, then x = b/a; the solution is unique. If a = 0 and b ≠ 0, the solution set is empty, because 0x = 0 ≠ b. If a = 0 and b = 0, the equation 0x = 0 has infinitely many solutions.

3. a. Any consistent linear system whose echelon form is one of the three patterns displayed in the text (with * denoting an arbitrary entry).
b. Any consistent linear system whose coefficient matrix has reduced echelon form I3.
c. Any inconsistent linear system of three equations in three variables.

4. Since there are three pivots (one in each row), the augmented matrix must reduce to the displayed form, with a pivot in each row of A. A solution of Ax = b exists for all b because there is a pivot in each row of A. Each solution is unique because there are no free variables.

5. a. [1 3 | k; 4 h | 8] ~ [1 3 | k; 0 h − 12 | 8 − 4k]. If h = 12 and k ≠ 2, the second row of the augmented matrix indicates an inconsistent system of the form 0x2 = b, with b nonzero. If h = 12 and k = 2, there is only one nonzero equation, and the system has infinitely many solutions. Finally, if h ≠ 12, the coefficient matrix has two pivots and the system has a unique solution.

b. Row reduction of the displayed augmented matrix produces a second row [0, k + 3h | *] whose augmented entry is nonzero. If k + 3h = 0, the system is inconsistent. Otherwise, the coefficient matrix has two pivots and the system has a unique solution.

6. a. Set v1, v2, v3, and b to the displayed vectors. "Determine if b is a linear combination of v1, v2, v3." Or, "Determine if b is in Span{v1, v2, v3}." To do this, row reduce the augmented matrix [v1 v2 v3 b] (the reduction is displayed). The system is consistent, so b is in Span{v1, v2, v3}.
b. Set A = [v1 v2 v3] and b as displayed. "Determine if b is a linear combination of the columns of A."
c. Define T(x) = Ax. "Determine if b is in the range of T."

7. a. Set v1, v2, v3 to the displayed vectors and b = (b1, b2, b3). "Determine if v1, v2, v3 span R^3." To do this, row reduce [v1 v2 v3] (the reduction is displayed). The matrix does not have a pivot in each row, so its columns do not span R^3, by Theorem 4 in Section 1.4.
b. Set A = [v1 v2 v3]. "Determine if the columns of A span R^3."
c. Define T(x) = Ax. "Determine if T maps R^3 onto R^3."

8. a., b. The possible echelon patterns are those displayed in the text, with * denoting an arbitrary entry.

9. The first line is the line spanned by a1; the second line is spanned by a2. So the problem is to write b as the sum of a multiple of a1 and a multiple of a2. That is, find x1 and x2 such that x1 a1 + x2 a2 = b. Reduce the augmented matrix [a1 a2 b] for this equation (the reduction is displayed); the weights x1 and x2 are the displayed fractions, and b is expressed as the corresponding sum.

10. The line through a1 and the origin and the line through a2 and the origin determine a "grid" on the x1x2-plane, as shown below. Every point in R^2 can be described uniquely in terms of this grid. Thus, b can
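The tests in Exercises 6 and 7 are a single rref call in MATLAB. The vectors below are invented for illustration; substitute the vectors displayed in the text.

    v1 = [1; -2; 0];  v2 = [3; 1; 5];  v3 = [2; 3; 5];  b = [4; -1; 5];  % hypothetical data
    R = rref([v1 v2 v3 b])
    % b is in Span{v1, v2, v3} exactly when no row of R has the form [0 0 0 1],
    % i.e., when the last column of the augmented matrix is not a pivot column.
    % For Exercise 7, row reduce [v1 v2 v3] alone and count the pivot rows.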

be reached from the origin by traveling a certain number of units in the a1-direction and a certain number of units in the a2-direction.

(Figure: the grid determined by a1 and a2 on the x1x2-plane, with b plotted on it.)

11. A solution set is a line when the system has one free variable. If the coefficient matrix is 2×3, then two of the columns should be pivot columns. For instance, take the displayed matrix with pivots in the first two columns; put anything in column 3. The resulting matrix will be in echelon form. Make one row replacement operation on the second row to create a matrix not in echelon form, as displayed.

12. A solution set is a plane when there are two free variables. If the coefficient matrix is 2×3, then only one column can be a pivot column. The echelon form will have all zeros in the second row. Use a row replacement to create a matrix not in echelon form. For instance, take A as displayed.

13. The reduced echelon form of A looks like E = [1 0 *; 0 1 *; 0 0 0]. Since E is row equivalent to A, the equation Ex = 0 has the same solutions as Ax = 0. Thus the starred entries are determined by the given solution, and by inspection E is the displayed matrix.

14. Row reduce the augmented matrix for the equation x1 v1 + x2 v2 = 0 (*), as displayed. The reduction shows that equation (*) has a nontrivial solution only when (1 − a)(1 + a) = 0. So the vectors are linearly independent for all a except a = 1 and a = −1.

15. a. If the three vectors are linearly independent, then a, c, and f must all be nonzero. (The converse is true, too.) Let A be the matrix whose columns are the three linearly independent vectors. Then

A must have three pivot columns. (See Exercise 30 in Section 1.7, or realize that the equation Ax = 0 has only the trivial solution and so there can be no free variables in the system of equations.) Since A is 3×3, the pivot positions are exactly where a, c, and f are located.

b. The numbers a, …, f can have any values. Here's why. Denote the columns by v1, v2, and v3. Observe that v1 is not the zero vector. Next, v2 is not a multiple of v1 because the third entry of v2 is nonzero. Finally, v3 is not a linear combination of v1 and v2 because the fourth entry of v3 is nonzero. By Theorem 7 in Section 1.7, {v1, v2, v3} is linearly independent.

16. Denote the columns from right to left by v1, …, v4. The first vector v1 is nonzero, v2 is not a multiple of v1 (because the third entry of v2 is nonzero), and v3 is not a linear combination of v1 and v2 (because the second entry of v3 is nonzero). Finally, by looking at first entries in the vectors, v4 cannot be a linear combination of v1, v2, and v3. By Theorem 7 in Section 1.7, the columns are linearly independent.

17. Here are two arguments. The first is a direct proof. The second is called a proof by contradiction.
i. Since {v1, v2, v3} is a linearly independent set, v1 ≠ 0. Also, Theorem 7 shows that v2 cannot be a multiple of v1, and v3 cannot be a linear combination of v1 and v2. By hypothesis, v4 is not a linear combination of v1, v2, and v3. Thus, by Theorem 7, {v1, v2, v3, v4} cannot be a linearly dependent set and so must be linearly independent.
ii. Suppose that {v1, v2, v3, v4} is linearly dependent. Then by Theorem 7, one of the vectors in the set is a linear combination of the preceding vectors. This vector cannot be v4 because v4 is not in Span{v1, v2, v3}. Also, none of the vectors in {v1, v2, v3} is a linear combination of the preceding vectors, by Theorem 7. So the linear dependence of {v1, v2, v3, v4} is impossible. Thus {v1, v2, v3, v4} is linearly independent.

18. Suppose that c1 and c2 are constants such that c1 v1 + c2 (v1 + v2) = 0 (*). Then (c1 + c2)v1 + c2 v2 = 0. Since v1 and v2 are linearly independent, both c1 + c2 = 0 and c2 = 0. It follows that both c1 and c2 in (*) must be zero, which shows that {v1, v1 + v2} is linearly independent.

19. Let M be the line through the origin that is parallel to the line through v1, v2, and v3. Then v2 − v1 and v3 − v1 are both on M. So one of these two vectors is a multiple of the other, say v2 − v1 = k(v3 − v1). This equation produces a linear dependence relation (k − 1)v1 + v2 − kv3 = 0.
A second solution: A parametric equation of the line is x = v1 + t(v2 − v1). Since v3 is on the line, there is some t0 such that v3 = v1 + t0(v2 − v1) = (1 − t0)v1 + t0 v2. So v3 is a linear combination of v1 and v2, and {v1, v2, v3} is linearly dependent.

20. If T(u) = v, then since T is linear, T(−u) = T((−1)u) = (−1)T(u) = −v.

21. Either compute T(e1), T(e2), and T(e3) to make the columns of A, or write the vectors vertically in the definition of T and fill in the entries of A by inspection, reading the coefficients directly from the formula for T.

22. By Theorem 12 in Section 1.9, the columns of A span R^3. By Theorem 4 in Section 1.4, A has a pivot in each of its three rows. Since A has three columns, each column must be a pivot column. So the equation

Ax = 0 has no free variables, and the columns of A are linearly independent. By Theorem 12 in Section 1.9, the transformation x ↦ Ax is one-to-one.

23. The displayed matrix equation implies that 4a − 3b = 5 and 3a + 4b = 0. Solve this system (the row reduction is displayed). Thus a = 4/5 and b = −3/5.

24. The matrix equation displayed gives the information 2a − 4b = 2√5 and 4a + 2b = 0. Solve for a and b (the reduction is displayed). So a = 1/√5 and b = −2/√5.

25. a. The displayed vector lists the numbers of three-, two-, and one-bedroom apartments provided when x1 floors of plan A are constructed.

b., c. [M] Solve the displayed system. Row reduction gives the general solution x1 = 2 + (1/2)x3, x2 = 15 − (13/8)x3, with x3 free. However, the only feasible solutions must have whole numbers of floors for each plan. Thus, x3 must be a multiple of 8, to avoid fractions. One solution, for x3 = 0, is to use 2 floors of plan A and 15 floors of plan B. Another solution, for x3 = 8, is to use 6 floors of plan A, 2 floors of plan B, and 8 floors of plan C. These are the only feasible solutions. A larger positive multiple of 8 for x3 makes x2 negative. A negative value for x3, of course, is not feasible either.
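The integer search in Exercise 25(b, c) is easy to automate. This sketch assumes the general solution derived above; it prints exactly the two feasible floor plans.

    for x3 = 0:8:32                     % multiples of 8 keep x1 and x2 integral
        x1 = 2 + x3/2;                  % from the general solution
        x2 = 15 - 13*x3/8;
        if x2 >= 0,  disp([x1 x2 x3]),  end   % prints [2 15 0] and [6 2 8]
    end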

2.1 SOLUTIONS

Notes: The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 17–22 to reinforce the definition of AB. Exercises 23 and 24 are used in the proof of the Invertible Matrix Theorem, in Section 2.3. Exercises 23–25 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 23–25 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2. Exercises 27 and 28 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in Exercises 31–34 of Section 4.6 and in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 29–33 provide good training for mathematics majors.

1. −2A is the displayed matrix. Next, use B − 2A = B + (−2A): B − 2A is the displayed matrix. The product AC is not defined because the number of columns of A does not match the number of rows of C. CD is computed as displayed. For mental computation, the row-column rule is probably easier to use than the definition.

2. A + 2B is the displayed matrix. The expression 3C − E is not defined because 3C has 2 columns and E has only 1 column. CB is computed entrywise as displayed. The product EB is not defined because the number of columns of E does not match the number of rows of B.

3. 3I2 − A is the displayed matrix, and (3I2)A = 3(I2A) = 3A, computed as displayed.

4. A − 5I3 is the displayed matrix, and (5I3)A = 5(I3A) = 5A, computed as displayed.

5. a. Ab1 and Ab2 are the displayed vectors, so AB = [Ab1 Ab2] is the displayed matrix.
b. The row-column rule gives the same matrix, entry by entry (the arithmetic is displayed).

6. a. Ab1 and Ab2 are the displayed vectors, so AB = [Ab1 Ab2] is the displayed matrix.
b. The row-column rule gives the same matrix.
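A quick way to internalize the definition used in Exercises 5 and 6 is to compute AB both ways in MATLAB. The matrices here are made up; the point is only that the column-by-column definition and the built-in product agree.

    A = [4 -2; -3 0; 3 5];  B = [1 3; 2 -1];    % hypothetical matrices
    AB_cols = [A*B(:,1)  A*B(:,2)]              % definition: AB = [Ab1 Ab2]
    AB_rule = A*B                               % row-column rule (built in)
    isequal(AB_cols, AB_rule)                   % 1 (true): the two agree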

7. Since A has 3 columns, B must have 3 rows in order for AB to be defined. Since AB has 7 columns, so does B. Thus, B is 3×7.

8. The number of rows of B matches the number of rows of BC, so B has 3 rows.

9. AB and BA are computed as displayed. Then AB = BA if and only if −10 + 5k = 15 and −9 = 6 − 3k, which happens if and only if k = 5.

10. AB and AC are computed as displayed; they are equal even though B ≠ C.

11. AD and DA are computed as displayed. Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each column of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row of A by the corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of I3. For instance, if B = 4I3, then AB and BA are both the same as 4A.

12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is the displayed vector, or any multiple of it. Example: B is the displayed matrix.

13. Use the definition of AB written in reverse order: [Ab1 ⋯ Abp] = A[b1 ⋯ bp]. Thus [Qr1 ⋯ Qrp] = QR, when R = [r1 ⋯ rp].

14. By definition, UQ = U[q1 ⋯ q4] = [Uq1 ⋯ Uq4]. From Example 6 of Section 1.8, the vector Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and C specified in the vector q1. That is, the first column of UQ lists the total costs for materials, labor, and overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3, and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters, respectively.

15. a. False. See the definition of AB.
b. False. The roles of A and B should be reversed in the second half of the statement. See the box after Example 3.
c. True. See Theorem 2(b), read right to left.
d. True. See Theorem 3(b), read right to left.
e. False. The phrase "in the same order" should be "in the reverse order." See the box after Theorem 3.

16. a. False. AB must be a 3×3 matrix, but the formula for AB implies that it is 3×1. The plus signs should be just spaces (between columns). This is a common mistake.
b. True. See the box after Example 6.
c. False. The left-to-right order of B and C cannot be changed, in general.
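The column and row scaling facts in Exercise 11 can be seen in one screenful (the data here are invented):

    A = magic(3);  D = diag([2 3 5]);   % hypothetical A; D is diagonal
    A*D     % each column j of A is scaled by D(j,j)
    D*A     % each row i of A is scaled by D(i,i)
    B = 4*eye(3);                       % a multiple of I commutes with A
    isequal(A*B, B*A)                   % 1 (true), as in the solution above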

d. False. See Theorem 3(d).
e. True. This general statement follows from Theorem 3(b).

17. Since AB = [Ab1 Ab2], the first column of B satisfies the equation Ax = (the first column of AB). Row reduction of [A Ab1] (displayed) yields b1. Similarly, row reduction of [A Ab2] yields b2, and B = [b1 b2].

Note: An alternative solution of Exercise 17 is to row reduce [A Ab1 Ab2] with one sequence of row operations. This observation can prepare the way for the inversion algorithm in Section 2.2.

18. The first two columns of AB are Ab1 and Ab2. They are equal since b1 and b2 are equal.

19. (A solution is in the text.) Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By hypothesis, b3 = b1 + b2. So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector multiplication. Thus, the third column of AB is the sum of the first two columns of AB.

20. The second column of AB is also all zeros, because Ab2 = A0 = 0.

21. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0. However, bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear dependence relation among the columns of A, and so the columns of A are linearly dependent.

Note: The text answer for Exercise 21 is, "The columns of A are linearly dependent. Why?" The Study Guide supplies the argument above, in case a student needs help.

22. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0. From this, A(Bx) = A0 = 0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must be linearly dependent.

23. If x satisfies Ax = 0, then CAx = C0 = 0, and so In x = 0 and x = 0. This shows that the equation Ax = 0 has no free variables. So every variable is a basic variable and every column of A is a pivot column. (A variation of this argument could be made using linear independence and Exercise 30 in Section 1.7.) Since each pivot is in a different row, A must have at least as many rows as columns.

24. Take any b in R^m. By hypothesis, ADb = Im b = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in R^m. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.

25. By Exercise 23, the equation CA = In implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 24, the equation AD = Im implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n. To prove the second statement, observe that DAC = (DA)C = In C = C, and also DAC = D(AC) = DIm = D. Thus C = D. A shorter calculation is C = In C = (DA)C = D(AC) = DIm = D.

26. Write I3 = [e1 e2 e3] and D = [d1 d2 d3]. By definition of AD, the equation AD = I3 is equivalent to the three equations Ad1 = e1, Ad2 = e2, and Ad3 = e3. Each of these equations has at least one solution because the columns of A span R^3. (See Theorem 4 in Section 1.4.) Select one solution of each equation and use them for the columns of D. Then AD = I3.

27. The product uᵀv is a 1×1 matrix, which usually is identified with a real number and is written without the matrix brackets.
a. uᵀv = [−2 3 −4][a; b; c] = −2a + 3b − 4c, and vᵀu = [a b c][−2; 3; −4] = −2a + 3b − 4c.
b. uvᵀ = [−2; 3; −4][a b c] = [−2a −2b −2c; 3a 3b 3c; −4a −4b −4c], and vuᵀ = [a; b; c][−2 3 −4] = [−2a 3a −4a; −2b 3b −4b; −2c 3c −4c].

28. Since the inner product uᵀv is a real number, it equals its transpose. That is, uᵀv = (uᵀv)ᵀ = vᵀ(uᵀ)ᵀ = vᵀu, by Theorem 3(d) regarding the transpose of a product of matrices and by Theorem 3(a). The outer product uvᵀ is an n×n matrix. By Theorem 3, (uvᵀ)ᵀ = (vᵀ)ᵀuᵀ = vuᵀ.

29. The (i, j)-entry of A(B + C) equals the (i, j)-entry of AB + AC, because
Σ_{k=1}^n a_ik(b_kj + c_kj) = Σ_{k=1}^n a_ik b_kj + Σ_{k=1}^n a_ik c_kj.
The (i, j)-entry of (B + C)A equals the (i, j)-entry of BA + CA, because
Σ_{k=1}^n (b_ik + c_ik) a_kj = Σ_{k=1}^n b_ik a_kj + Σ_{k=1}^n c_ik a_kj.

30. The (i, j)-entries of r(AB), (rA)B, and A(rB) are all equal, because
r Σ_{k=1}^n a_ik b_kj = Σ_{k=1}^n (r a_ik) b_kj = Σ_{k=1}^n a_ik (r b_kj).

31. Use the definition of the product Im A and the fact that Im x = x for x in R^m:
Im A = Im [a1 ⋯ an] = [Im a1 ⋯ Im an] = [a1 ⋯ an] = A.

32. Let ej and aj denote the jth columns of In and A, respectively. By definition, the jth column of AIn is Aej, which is simply aj because ej has 1 in the jth position and zeros elsewhere. Thus corresponding columns of AIn and A are equal. Hence AIn = A.

33. The (i, j)-entry of (AB)ᵀ is the (j, i)-entry of AB, which is a_j1 b_1i + ⋯ + a_jn b_ni. The entries in row i of Bᵀ are b_1i, …, b_ni, because they come from column i of B. Likewise, the entries in column j of Aᵀ are a_j1, …, a_jn, because they come from row j of A. Thus the (i, j)-entry in BᵀAᵀ is a_j1 b_1i + ⋯ + a_jn b_ni, as above.

34. Use Theorem 3(d), treating x as an n×1 matrix: (ABx)ᵀ = xᵀ(AB)ᵀ = xᵀBᵀAᵀ.

35. [M] The answer here depends on the choice of matrix program. For MATLAB, use the help command to read about zeros, ones, eye, and diag. For other programs see the appendices in the Study Guide. (The TI calculators have fewer single commands that produce special matrices.)
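Exercises 27 and 28 check quickly in MATLAB. The vector u is the one used above; v is given invented numeric entries in place of (a, b, c):

    u = [-2; 3; -4];  v = [1; 5; 7];    % v is a hypothetical stand-in for (a,b,c)
    u'*v                    % inner product: -2*1 + 3*5 - 4*7 = -15
    u*v'                    % outer product: a 3x3 rank-one matrix
    isequal((u*v')', v*u')  % 1 (true): (u v^T)^T = v u^T, as in Exercise 28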

36. [M] The answer depends on the choice of matrix program. In MATLAB, the command rand(6,4) creates a 6×4 matrix with random entries uniformly distributed between 0 and 1. The command round(19*(rand(6,4)−.5)) creates a random 6×4 matrix with integer entries between −9 and 9. The same result is produced by the command randomint in the Laydata Toolbox on the text website. For other matrix programs see the appendices in the Study Guide.

37. [M] (A + I)(A − I) − (A² − I) = 0 for all 4×4 matrices. However, (A + B)(A − B) − (A² − B²) is the zero matrix only in the special cases when AB = BA. In general, (A + B)(A − B) = A(A − B) + B(A − B) = AA − AB + BA − BB.

38. [M] The equality (AB)ᵀ = AᵀBᵀ is very likely to be false for 4×4 matrices selected at random.

39. [M] The matrix S "shifts" the entries in a vector (a, b, c, d, e) to yield (b, c, d, e, 0). The entries in S² result from applying S to the columns of S, and similarly for S³, and so on. This explains the patterns of entries in the powers of S, displayed in the text. S⁵ is the 5×5 zero matrix. S⁶ is also the 5×5 zero matrix.

40. [M] A⁵ and A¹⁰ are as displayed. The entries in A²⁰ all agree to 9 or 10 decimal places, and the entries in A³⁰ agree to at least 14 decimal places, with the corresponding entries of the limiting matrix [1/3 1/3 1/3; 1/3 1/3 1/3; 1/3 1/3 1/3], which the powers of A appear to approach. Further exploration of this behavior appears in Sections 4.9 and 5.2.

Note: The MATLAB box in the Study Guide introduces basic matrix notation and operations, including the commands that create special matrices needed in Exercises 35, 36 and elsewhere. The Study Guide appendices treat the corresponding information for the other matrix programs.

2.2 SOLUTIONS

Notes: The text includes the matrix inversion algorithm at the end of the section because this topic is popular. Students like it because it is a simple mechanical procedure. However, I no longer cover it in my classes because technology is readily available to invert a matrix whenever needed, and class time is better spent on more useful topics such as partitioned matrices. The final subsection is independent of the inversion algorithm and is needed for Exercises 35 and 36.

Key Exercises: 8, 24, 35. (Actually, Exercise 8 is only helpful for some exercises in this section. Section 2.3 has a stronger result.) Exercises 23 and 24 are used in the proof of the Invertible Matrix Theorem (IMT) in Section 2.3, along with Exercises 23 and 24 in Section 2.1. I recommend letting students work on two or more of these four exercises before proceeding to Section 2.3. In this way students participate in the
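The patterns in Exercises 39 and 40 are easy to reproduce. The shift matrix below is one standard way to build S; the matrix A is an invented example whose powers also approach the all-1/3 matrix, since the data of Exercise 40 are not reproduced here.

    S = diag(ones(4,1), 1)       % 5x5 shift: S*[a;b;c;d;e] = [b;c;d;e;0]
    S^2, S^4                     % the ones migrate toward the corner; S^5 = zeros(5)
    A = [0 .5 .5; .5 0 .5; .5 .5 0];   % hypothetical: eigenvalues 1, -1/2, -1/2
    A^20                         % entries agree with 1/3 to about 6 decimal places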

proof of the IMT rather than simply watch an instructor carry out the proof. Also, this activity will help students understand why the theorem is true.

1.–4. Compute each inverse with the 2×2 formula A⁻¹ = (1/(ad − bc))[d −b; −c a]; the four inverses are as displayed in the text.

5. The system is equivalent to Ax = b, where A and b are the displayed matrix and vector, and the solution is x = A⁻¹b (computed as displayed). Thus x1 = 7, and x2 is the displayed value.

6. The system is equivalent to Ax = b, where A and b are displayed, and the solution is x = A⁻¹b. To compute this by hand, the arithmetic is simplified by keeping the fraction 1/det(A) in front of the matrix for A⁻¹. (The Study Guide comments on this in its discussion of Exercise 7.) Using the inverse found in Exercise 3, x = A⁻¹b gives the displayed values of x1 and x2.

7. a. x = A⁻¹b1, computed as displayed. Similar calculations give A⁻¹b2, A⁻¹b3, and A⁻¹b4, as displayed.
b. [A b1 b2 b3 b4] is row reduced in one sequence of operations, as displayed. The solutions are the four displayed vectors, the same as in part (a).

Note: The Study Guide also discusses the number of arithmetic calculations for this Exercise 7, stating that when A is large, the method used in (b) is much faster than using A⁻¹.

8. Left-multiply each side of the equation AD = I by A⁻¹ to obtain A⁻¹AD = A⁻¹I, ID = A⁻¹, and D = A⁻¹. Parentheses are routinely suppressed because of the associative property of matrix multiplication.

9. a. True, by definition of invertible.
b. False. See Theorem 6(b).
c. False. The displayed matrix A satisfies ab ≠ cd, but Theorem 4 shows that it is not invertible, because ad − bc = 0.
d. True. This follows from Theorem 5, which also says that the solution of Ax = b is unique, for each b.
e. True, by the box just before Example 6.

10. a. False. The product matrix is invertible, but the product of inverses should be in the reverse order. See Theorem 6(b).
b. True, by Theorem 6(a).
c. True, by Theorem 4.
d. True, by Theorem 7.
e. False. The last part of Theorem 7 is misstated here.

11. (The proof can be modeled after the proof of Theorem 5.) The n×p matrix B is given (but is arbitrary). Since A is invertible, the matrix A⁻¹B satisfies AX = B, because A(A⁻¹B) = AA⁻¹B = IB = B. To show this solution is unique, let X be any solution of AX = B. Then, left-multiplication of each side by A⁻¹ shows that X must be A⁻¹B: A⁻¹(AX) = A⁻¹B, IX = A⁻¹B, and X = A⁻¹B.

12. If you assign this exercise, consider giving the following Hint: Use elementary matrices and imitate the proof of Theorem 7. The solution in the Instructor's Edition follows this hint. Here is another solution, based on the idea at the end of Section 2.1. Write B = [b1 ⋯ bp] and X = [u1 ⋯ up]. By definition of matrix multiplication, AX = [Au1 ⋯ Aup]. Thus, the equation AX = B is equivalent to the p systems Au1 = b1, …, Aup = bp. Since A is the coefficient matrix in each system, these systems may be solved simultaneously, placing the augmented columns of these systems next to A to form [A b1 ⋯ bp] = [A B]. Since A is invertible, the solutions u1, …, up are uniquely determined, and [A b1 ⋯ bp] must row reduce to [I u1 ⋯ up] = [I X]. By Exercise 11, X is the unique solution A⁻¹B of AX = B.

13. Left-multiply each side of the equation AB = AC by A⁻¹ to obtain A⁻¹AB = A⁻¹AC, IB = IC, and B = C. This conclusion does not always follow when A is singular. Exercise 10 of Section 2.1 provides a counterexample.

14. Right-multiply each side of the equation (B − C)D = 0 by D⁻¹ to obtain (B − C)DD⁻¹ = 0D⁻¹, (B − C)I = 0, B − C = 0, and B = C.

15. The box following Theorem 6 suggests what the inverse of ABC should be, namely, C⁻¹B⁻¹A⁻¹. To verify that this is correct, compute:
(ABC)C⁻¹B⁻¹A⁻¹ = AB(CC⁻¹)B⁻¹A⁻¹ = ABB⁻¹A⁻¹ = AA⁻¹ = I
and
C⁻¹B⁻¹A⁻¹(ABC) = C⁻¹B⁻¹(A⁻¹A)BC = C⁻¹B⁻¹BC = C⁻¹C = I.
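The order reversal verified algebraically in Exercise 15 can also be checked numerically. Random matrices are invertible with probability 1, so this sketch will almost never fail:

    A = rand(3);  B = rand(3);  C = rand(3);   % almost surely invertible
    norm(inv(A*B*C) - inv(C)*inv(B)*inv(A))    % essentially 0: correct order
    norm(inv(A*B*C) - inv(A)*inv(B)*inv(C))    % generally NOT 0: wrong order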

16. Let C = AB. Then CB⁻¹ = ABB⁻¹, so CB⁻¹ = AI = A. This shows that A is the product of invertible matrices and hence is invertible, by Theorem 6.

Note: The Study Guide warns against using the formula (AB)⁻¹ = B⁻¹A⁻¹ here, because this formula can be used only when both A and B are already known to be invertible.

17. Right-multiply each side of AB = BC by B⁻¹: ABB⁻¹ = BCB⁻¹, AI = BCB⁻¹, A = BCB⁻¹.

18. Left-multiply each side of A = PBP⁻¹ by P⁻¹: P⁻¹A = P⁻¹PBP⁻¹, P⁻¹A = IBP⁻¹, P⁻¹A = BP⁻¹. Then right-multiply each side of the result by P: P⁻¹AP = BP⁻¹P, P⁻¹AP = BI, P⁻¹AP = B.

19. Unlike Exercise 17, this exercise asks two things: "Does a solution exist, and what is it?" First, find what the solution must be, if it exists. That is, suppose X satisfies the equation C⁻¹(A + X)B⁻¹ = I. Left-multiply each side by C, and then right-multiply each side by B: CC⁻¹(A + X)B⁻¹ = CI, I(A + X)B⁻¹ = C, (A + X)B⁻¹B = CB, (A + X)I = CB. Expand the left side and then subtract A from both sides: A + X = CB, X = CB − A. If a solution exists, it must be CB − A. To show that CB − A really is a solution, substitute it for X: C⁻¹[A + (CB − A)]B⁻¹ = C⁻¹[CB]B⁻¹ = C⁻¹CBB⁻¹ = II = I.

Note: The Study Guide suggests that students ask their instructor about how many details to include in their proofs. After some practice with algebra, an expression such as CC⁻¹(A + X)B⁻¹ could be simplified directly to (A + X)B⁻¹ without first replacing CC⁻¹ by I. However, you may wish this detail to be included in the homework for this section.

20. a. Left-multiply both sides of (A − AX)⁻¹ = X⁻¹B by X to see that B is invertible because it is the product of invertible matrices.
b. Invert both sides of the original equation and use Theorem 6 about the inverse of a product (which applies because X⁻¹ and B are invertible): A − AX = (X⁻¹B)⁻¹ = B⁻¹(X⁻¹)⁻¹ = B⁻¹X. Then A = AX + B⁻¹X = (A + B⁻¹)X. The product (A + B⁻¹)X is invertible because A is invertible. Since X is known to be invertible, so is the other factor, A + B⁻¹, by Exercise 16 or by an argument similar to part (a). Finally, (A + B⁻¹)⁻¹A = (A + B⁻¹)⁻¹(A + B⁻¹)X = X.

Note: This exercise is difficult. The algebra is not trivial, and at this point in the course, most students will not recognize the need to verify that a matrix is invertible.

21. Suppose A is invertible. By Theorem 5, the equation Ax = 0 has only one solution, namely, the zero solution. This means that the columns of A are linearly independent, by a remark in Section 1.7.

22. Suppose A is invertible. By Theorem 5, the equation Ax = b has a solution (in fact, a unique solution) for each b. By Theorem 4 in Section 1.4, the columns of A span R^n.

23. Suppose A is n×n and the equation Ax = 0 has only the trivial solution. Then there are no free variables in this equation, and so A has n pivot columns. Since A is square and the n pivot positions must be in different rows, the pivots in an echelon form of A must be on the main diagonal. Hence A is row equivalent to the n×n identity matrix.

24. If the equation Ax = b has a solution for each b in R^n, then A has a pivot position in each row, by Theorem 4 in Section 1.4. Since A is square, the pivots must be on the diagonal of A. It follows that A is row equivalent to In. By Theorem 7, A is invertible.

25. Suppose A = [a b; c d] and ad − bc = 0. If a = b = 0, then examine [0 0; c d][x1; x2] = [0; 0]. This has the solution x1 = [d; −c]. This solution is nonzero, except when a = b = c = d = 0. In that case, however, A is the zero matrix, and Ax = 0 for every vector x. Finally, if a and b are not both zero, set x2 = [−b; a]. Then
A x2 = [a b; c d][−b; a] = [−ab + ba; −cb + da] = [0; 0], because −cb + da = 0.
Thus, x2 is a nontrivial solution of Ax = 0. So, in all cases, the equation Ax = 0 has more than one solution. This is impossible when A is invertible (by Theorem 5), so A is not invertible.

26. [d −b; −c a][a b; c d] = [da − bc, 0; 0, −cb + ad]. Divide both sides by ad − bc to get CA = I.
[a b; c d][d −b; −c a] = [ad − bc, 0; 0, −cb + da]. Divide both sides by ad − bc. The right side is I. The left side is AC, because
(1/(ad − bc)) [a b; c d][d −b; −c a] = [a b; c d] (1/(ad − bc)) [d −b; −c a] = AC.

27. a. Interchange A and B in equation (1) after Example 6 in Section 2.1: rowi(BA) = rowi(B)·A. Then replace B by the identity matrix: rowi(A) = rowi(IA) = rowi(I)·A.
b. Using part (a), when rows 1 and 2 of A are interchanged, write the result as
[row2(A); row1(A); row3(A)] = [row2(I)·A; row1(I)·A; row3(I)·A] = [row2(I); row1(I); row3(I)]·A = EA. (*)
Here, E is obtained by interchanging rows 1 and 2 of I. The second equality in (*) is a consequence of the fact that rowi(EA) = rowi(E)·A.
c. Using part (a), when row 3 of A is multiplied by 5, write the result as
[row1(A); row2(A); 5·row3(A)] = [row1(I)·A; row2(I)·A; 5·row3(I)·A] = [row1(I); row2(I); 5·row3(I)]·A = EA.
Here, E is obtained by multiplying row 3 of I by 5.

28. When row 3 of A is replaced by row3(A) − 4·row1(A), write the result as
[row1(A); row2(A); row3(A) − 4·row1(A)] = [row1(I)·A; row2(I)·A; row3(I)·A − 4·row1(I)·A]

= [row1(I)·A; row2(I)·A; (row3(I) − 4·row1(I))·A] = [row1(I); row2(I); row3(I) − 4·row1(I)]·A = EA.
Here, E is obtained by replacing row3(I) by row3(I) − 4·row1(I).

29. Row reduce [A I] (the sequence of operations is displayed), ending with [I A⁻¹]; read A⁻¹ from the right half.

30. Row reduce [A I] as displayed; A⁻¹ is the displayed matrix (with entries in fifths).

31. Row reduce [A I] as displayed; A⁻¹ is the displayed matrix (with entries in halves).

32. Row reduce [A I] as displayed. The matrix A is not invertible.

33. Let B be the displayed matrix, and for j = 1, …, n, let aj, bj, and ej denote the jth columns of A, B, and I, respectively. Note that for j = 1, …, n − 1, aj − aj+1 = ej (because aj and aj+1 have the same entries except for the jth row), bj = ej − ej+1, and an = bn = en.
To show that AB = I, it suffices to show that Abj = ej for each j. For j = 1, …, n − 1,
Abj = A(ej − ej+1) = Aej − Aej+1 = aj − aj+1 = ej
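The hand reductions in Exercises 29–32 are exactly what rref does to [A I]. A sketch with an invented 2×2 matrix (the text's matrices are larger, but the idea is identical):

    A = [1 2; 3 7];                 % hypothetical invertible matrix
    R = rref([A eye(2)]);           % reduce [A I] toward [I inv(A)]
    Ainv = R(:, 3:4)                % right half: [7 -2; -3 1]
    norm(A*Ainv - eye(2))           % essentially 0
    % For a singular A, the left half of R fails to reach the identity.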

and Abn = Aen = an = en. Next, observe that aj = ej + ⋯ + en for each j. Thus,
Baj = B(ej + ⋯ + en) = bj + ⋯ + bn = (ej − ej+1) + (ej+1 − ej+2) + ⋯ + (en−1 − en) + en = ej.
This proves that BA = I. Combined with the first part, this proves that B = A⁻¹.

Note: Students who do this problem and then do the corresponding exercise in Section 2.4 will appreciate the Invertible Matrix Theorem, partitioned matrix notation, and the power of a proof by induction.

34. Let A and B be the displayed matrices (B has entries 1, 1/2, 1/3, …, 1/n down its diagonal), and for j = 1, …, n, let aj, bj, and ej denote the jth columns of A, B, and I, respectively. Note that for j = 1, …, n − 1, aj = j(ej + ⋯ + en), bj = (1/j)ej − (1/(j+1))ej+1, and bn = (1/n)en.
To show that AB = I, it suffices to show that Abj = ej for each j. For j = 1, …, n − 1,
Abj = A((1/j)ej − (1/(j+1))ej+1) = (1/j)aj − (1/(j+1))aj+1 = (ej + ⋯ + en) − (ej+1 + ⋯ + en) = ej.
Also, Abn = A((1/n)en) = (1/n)an = en. Finally, for j = 1, …, n, the sum bj + ⋯ + bn is a "telescoping sum" whose value is (1/j)ej. Thus,
Baj = j(Bej + ⋯ + Ben) = j(bj + ⋯ + bn) = j((1/j)ej) = ej,
which proves that BA = I. Combined with the first part, this proves that B = A⁻¹.

Note: If you assign Exercise 34, you may wish to supply a hint using the notation from Exercise 33: Express each column of A in terms of the columns e1, …, en of the identity matrix. Do the same for B.

35. Row reduce [A e3] (the reduction is displayed). Answer: The third column of A⁻¹ is the solution of Ax = e3 just computed.

36. [M] Write B = [A F], where F consists of the last two columns of I3, and row reduce B as displayed. The last two columns of A⁻¹ are the displayed columns of the reduced matrix.

37. There are many possibilities for C, but the displayed C is the only one whose entries are 1, −1, and 0. With only three possibilities for each entry, the construction of C can be done by trial and error. This is probably faster than setting up a system of 4 equations in 6 unknowns. The fact that A cannot be invertible follows from Exercise 25 in Section 2.1, because A is not square.

38. Write AD = A[d1 d2] = [Ad1 Ad2]. The structure of A shows that D is the displayed matrix. [There are 25 possibilities for D if entries of D are allowed to be 1, −1, and 0.] There is no 4×2 matrix C such that CA = I4. If this were true, then CAx would equal x for all x in R^4. This cannot happen because the columns of A are linearly dependent, and so Ax = 0 for some nonzero vector x. For such an x, CAx = C(0) = 0. An alternate justification would be to cite Exercise 23 or 25 in Section 2.1.

39. y = Df, computed as displayed. The deflections at points 1, 2, and 3 are the three displayed values (in inches).

40. [M] The stiffness matrix is D⁻¹. Use an "inverse" command to produce D⁻¹, as displayed. To find the forces (in pounds) required to produce a deflection of .04 cm at point 3, most students will use technology to solve Df = (0, 0, .04) and obtain the displayed force vector. Here is another method, based on the idea suggested in Exercise 42. The first column of D⁻¹ lists the forces required to produce a deflection of 1 in. at point 1 (with zero deflection at the other points). Since the transformation y ↦ D⁻¹y is linear, the forces required to produce a deflection of .04 cm at point 3 are given by .04 times the third column of D⁻¹, which yields the same displayed force vector (in pounds).

41. To determine the forces that produce the displayed deflections at the four points on the beam, use technology to solve Df = y, where y is the displayed deflection vector. The forces at the four points are the displayed values, in newtons.
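The linearity argument in Exercises 40–42 is worth seeing once in code. The flexibility matrix D below is an invented symmetric example, not the one in the text; the point is that the forces for a deflection of .04 at point 3 are .04 times column 3 of inv(D).

    D = [.005 .002 .001; .002 .004 .002; .001 .002 .005];  % hypothetical flexibility matrix
    f = D\[0; 0; .04]              % forces producing a .04 deflection at point 3
    Dinv = inv(D);                 % the stiffness matrix
    norm(f - .04*Dinv(:,3))        % essentially 0: same answer by linearity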

42. [M] To determine the forces that produce a deflection of .04 cm at the second point on the beam, use technology to solve Df = y, where y = (0, .04, 0, 0). The forces at the four points are the displayed values, in newtons (to three significant digits). These forces are .04 times the entries in the second column of D⁻¹. Reason: The transformation y ↦ D⁻¹y is linear, so the forces required to produce a deflection of .04 cm at the second point are .04 times the forces required to produce a deflection of 1 cm at the second point. These forces are listed in the second column of D⁻¹.
Another possible discussion: The solution of Dx = (0, 1, 0, 0) is the second column of D⁻¹. Multiply both sides of this equation by .04 to obtain D(.04x) = (0, .04, 0, 0). So .04x is the solution of Df = (0, .04, 0, 0). (The argument uses linearity, but students may not mention this.)

Note: The Study Guide suggests using gauss, swap, bgauss, and scale to reduce [A I], because I prefer to postpone the use of ref (or rref) until later. If you wish to introduce ref now, see the Study Guide's technology notes for Sections 2.8 or 4.3. (Recall that Sections 2.8 and 2.9 are only covered when an instructor plans to skip Chapter 4 and get quickly to eigenvalues.)

2.3 SOLUTIONS

Notes: This section ties together most of the concepts studied thus far. With strong encouragement from an instructor, most students can use this opportunity to review and reflect upon what they have learned, and form a solid foundation for future work. Students who fail to do this now usually struggle throughout the rest of the course. Section 2.3 can be used in at least three different ways.
(1) Stop after Example 1 and assign exercises only from among the Practice Problems and Exercises 1 to 28. I do this when teaching "Course 3" described in the text's Notes to the Instructor. If you did not cover Theorem 12 in Section 1.9, omit statements (f) and (i) from the Invertible Matrix Theorem.
(2) Include the subsection "Invertible Linear Transformations" in Section 2.3, if you covered Section 1.9. I do this when teaching "Course 1" because our mathematics and computer science majors take this class. Exercises 29–40 support this material.
(3) Skip the linear transformation material here, but discuss the condition number and the Numerical Notes. Assign exercises from among 1–28 and 41–45, and perhaps add a computer project on the condition number. (See the projects on our web site.) I do this when teaching "Course 2" for our engineers.
The abbreviation IMT (here and in the Study Guide) denotes the Invertible Matrix Theorem (Theorem 8).

1. The columns of the matrix [5 7; −3 −6] are not multiples, so they are linearly independent. By (e) in the IMT, the matrix is invertible. Also, the matrix is invertible by Theorem 4 in Section 2.2 because the determinant is nonzero.

2. The fact that the columns of [−4 6; 6 −9] are multiples is not so obvious. The fastest check in this case may be the determinant, which is easily seen to be zero. By Theorem 4 in Section 2.2, the matrix is not invertible.

3. Row reduction to echelon form is trivial because there is really no need for arithmetic calculations (the reduction is displayed). The 3×3 matrix has 3 pivot positions and hence is invertible, by (c) of the IMT. [Another explanation could be given using the transposed matrix. But see the note below that follows the solution of Exercise 14.]
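Any of the pivot-counting arguments in Exercises 1–10 can be delegated to MATLAB. Using the 2×2 matrix of Exercise 1 above:

    A = [5 7; -3 -6];              % the 2x2 matrix from Exercise 1
    det(A)                         % -9, nonzero: invertible by Theorem 4 in Section 2.2
    rank(A)                        % 2: a pivot in every row and column
    isequal(rref(A), eye(2))       % 1 (true): row equivalent to I, so invertible (IMT)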

4. The displayed matrix obviously has linearly dependent columns (because one column is zero), and so the matrix is not invertible (or singular) by (e) in the IMT.

5. Row reduce the matrix as displayed. The matrix is not invertible because it is not row equivalent to the identity matrix.

6. Row reduce the matrix as displayed. The matrix is not invertible because it is not row equivalent to the identity matrix.

7. Row reduce the matrix as displayed. The 4×4 matrix has four pivot positions and so is invertible by (c) of the IMT.

8. The 4×4 matrix is invertible because it has four pivot positions, by (c) of the IMT.

9. [M] Row reduce the matrix (the displayed sequence of operations). The 4×4 matrix is invertible because it has four pivot positions, by (c) of the IMT.

10. [M] Row reduce the matrix as displayed. The 5×5 matrix is invertible because it has five pivot positions, by (c) of the IMT.

11. a. True, by the IMT. If statement (d) of the IMT is true, then so is statement (b).
b. True. If statement (h) of the IMT is true, then so is statement (e).
c. False. Statement (g) of the IMT is true only for invertible matrices.
d. True, by the IMT. If the equation Ax = 0 has a nontrivial solution, then statement (d) of the IMT is false. In this case, all the lettered statements in the IMT are false, including statement (c), which means that A must have fewer than n pivot positions.
e. True, by the IMT. If Aᵀ is not invertible, then statement (l) of the IMT is false, and hence statement (a) must also be false.

12. a. True. If statement (k) of the IMT is true, then so is statement (j).
b. True. If statement (e) of the IMT is true, then so is statement (h).
c. True. See the remark immediately following the proof of the IMT.
d. False. The first part of the statement is not part (i) of the IMT. In fact, if A is any n×n matrix, the linear transformation x ↦ Ax maps R^n into R^n, yet not every such matrix has n pivot positions.
e. True, by the IMT. If there is a b in R^n such that the equation Ax = b is inconsistent, then statement (g) of the IMT is false, and hence statement (f) is also false. That is, the transformation x ↦ Ax cannot be one-to-one.

Note: The solutions below for Exercises 13–30 refer mostly to the IMT. In many cases, however, part or all of an acceptable solution could also be based on various results that were used to establish the IMT.

13. If a square upper triangular n×n matrix has nonzero diagonal entries, then because it is already in echelon form, the matrix is row equivalent to In and hence is invertible, by the IMT. Conversely, if the matrix is invertible, it has n pivots on the diagonal and hence the diagonal entries are nonzero.

14. If A is lower triangular with nonzero entries on the diagonal, then these n diagonal entries can be used as pivots to produce zeros below the diagonal. Thus A has n pivots and so is invertible, by the IMT. If one of the diagonal entries in A is zero, A will have fewer than n pivots and hence be singular.

Notes: For Exercise 14, another correct analysis of the case when A has nonzero diagonal entries is to apply the IMT (or Exercise 13) to Aᵀ. Then use Theorem 6 in Section 2.2 to conclude that since Aᵀ is invertible so is its transpose, A. You might mention this idea in class, but I recommend that you not spend much time discussing Aᵀ and problems related to it, in order to keep from making this section too lengthy. (The transpose is treated infrequently in the text until Chapter 6.) If you do plan to ask a test question that involves Aᵀ and the IMT, then you should give the students some extra homework that develops skill using Aᵀ. For instance, in Exercise 14 replace "columns" by "rows."

Also, you could ask students to explain why an n×n matrix with linearly independent columns must also have linearly independent rows.

15. If A has two identical columns then its columns are linearly dependent. Part (e) of the IMT shows that A cannot be invertible.

16. Part (h) of the IMT shows that a 5×5 matrix cannot be invertible when its columns do not span R^5.

17. If A is invertible, so is A⁻¹, by Theorem 6 in Section 2.2. By (e) of the IMT applied to A⁻¹, the columns of A⁻¹ are linearly independent.

18. By (g) of the IMT, C is invertible. Hence, each equation Cx = v has a unique solution, by Theorem 5 in Section 2.2. This fact was pointed out in the paragraph following the proof of the IMT.

19. By (e) of the IMT, D is invertible. Thus the equation Dx = b has a solution for each b in R^7, by (g) of the IMT. Even better, the equation Dx = b has a unique solution for each b in R^7, by Theorem 5 in Section 2.2. (See the paragraph following the proof of the IMT.)

20. By the box following the IMT, E and F are invertible and are inverses. So FE = I = EF, and so E and F commute.

21. The matrix G cannot be invertible, by Theorem 5 in Section 2.2 or by the box following the IMT. So (h) of the IMT is false and the columns of G do not span R^n.

22. Statement (g) of the IMT is false for H, so statement (d) is false, too. That is, the equation Hx = 0 has a nontrivial solution.

23. Statement (b) of the IMT is false for K, so statements (e) and (h) are also false. That is, the columns of K are linearly dependent and the columns do not span R^n.

24. No conclusion about the columns of L may be drawn, because no information about L has been given. The equation Lx = 0 always has the trivial solution.

25. Suppose that A is square and AB = I. Then A is invertible, by (k) of the IMT. Left-multiplying each side of the equation AB = I by A⁻¹, one has A⁻¹AB = A⁻¹I, IB = A⁻¹, and B = A⁻¹. By Theorem 6 in Section 2.2, the matrix B (which is A⁻¹) is invertible, and its inverse is (A⁻¹)⁻¹, which is A.

26. If the columns of A are linearly independent, then since A is square, A is invertible, by the IMT. So A², which is the product of invertible matrices, is invertible. By the IMT, the columns of A² span R^n.

27. Let W be the inverse of AB. Then ABW = I and A(BW) = I. Since A is square, A is invertible, by (k) of the IMT.

Note: The Study Guide for Exercise 27 emphasizes here that the equation A(BW) = I, by itself, does not show that A is invertible. Students are referred to Exercise 38 in Section 2.2 for a counterexample. Although there is an overall assumption that matrices in this section are square, I insist that my students mention this fact when using the IMT. Even so, at the end of the course, I still sometimes find a student who thinks that an equation AB = I implies that A is invertible.

28. Let W be the inverse of AB. Then WAB = I and (WA)B = I. By (j) of the IMT applied to B in place of A, the matrix B is invertible.

29. Since the transformation x ↦ Ax is not one-to-one, statement (f) of the IMT is false. Then (i) is also false and the transformation x ↦ Ax does not map R^n onto R^n. Also, A is not invertible, which implies that the transformation x ↦ Ax is not invertible, by Theorem 9.

30. Since the transformation x ↦ Ax is one-to-one, statement (f) of the IMT is true. Then (i) is also true and the transformation x ↦ Ax maps R^n onto R^n. Also, A is invertible, which implies that the transformation x ↦ Ax is invertible, by Theorem 9.

31. Since the equation Ax = b has a solution for each b, the matrix A has a pivot in each row (Theorem 4 in Section 1.4). Since A is square, A has a pivot in each column, and so there are no free variables in the equation Ax = b, which shows that the solution is unique.

Note: The preceding argument shows that the (square) shape of A plays a crucial role. A less revealing proof is to use the "pivot in each row" and the IMT to conclude that A is invertible. Then Theorem 5 in Section 2.2 shows that the solution of Ax = b is unique.

32. If Ax = 0 has only the trivial solution, then A must have a pivot in each of its n columns. Since A is square (and this is the key point), there must be a pivot in each row of A. By Theorem 4 in Section 1.4, the equation Ax = b has a solution for each b in R^n.
Another argument: Statement (d) of the IMT is true, so A is invertible. By Theorem 5 in Section 2.2, the equation Ax = b has a (unique) solution for each b in R^n.

33. (Solution in Study Guide) The standard matrix of T is A = [−5 9; 4 −7], which is invertible because det A ≠ 0. By Theorem 9, the transformation T is invertible and the standard matrix of T⁻¹ is A⁻¹. From the formula for a 2×2 inverse, A⁻¹ = [7 9; 4 5]. So
T⁻¹(x1, x2) = (7x1 + 9x2, 4x1 + 5x2).

34. The standard matrix of T is A = [6 −8; −5 7], which is invertible because det A ≠ 0. By Theorem 9, T is invertible, and T⁻¹(x) = Bx, where B = A⁻¹ = (1/2)[7 8; 5 6]. Thus
T⁻¹(x1, x2) = ((7/2)x1 + 4x2, (5/2)x1 + 3x2).

35. (Solution in Study Guide) To show that T is one-to-one, suppose that T(u) = T(v) for some vectors u and v in R^n. Then S(T(u)) = S(T(v)), where S is the inverse of T. By Equation (1), u = S(T(u)) and S(T(v)) = v, so u = v. Thus T is one-to-one. To show that T is onto, suppose y represents an arbitrary vector in R^n and define x = S(y). Then, using Equation (2), T(x) = T(S(y)) = y, which shows that T maps R^n onto R^n.
Second proof: By Theorem 9, the standard matrix A of T is invertible. By the IMT, the columns of A are linearly independent and span R^n. By Theorem 12 in Section 1.9, T is one-to-one and maps R^n onto R^n.

36. If T maps R^n onto R^n, then the columns of its standard matrix A span R^n, by Theorem 12 in Section 1.9. By the IMT, A is invertible. Hence, by Theorem 9 in Section 2.3, T is invertible, and A⁻¹ is the standard matrix of T⁻¹. Since A⁻¹ is also invertible, by the IMT, its columns are linearly independent and span R^n. Applying Theorem 12 in Section 1.9 to the transformation T⁻¹, we conclude that T⁻¹ is a one-to-one mapping of R^n onto R^n.

37. Let A and B be the standard matrices of T and U, respectively. Then AB is the standard matrix of the mapping x ↦ T(U(x)), because of the way matrix multiplication is defined (in Section 2.1). By hypothesis, this mapping is the identity mapping, so AB = I. Since A and B are square, they are invertible, by the IMT, and B = A⁻¹. Thus, BA = I. This means that the mapping x ↦ U(T(x)) is the identity mapping, i.e., U(T(x)) = x for all x in R^n.

38. Let A be the standard matrix of T. By hypothesis, T is not a one-to-one mapping. So, by Theorem 12 in Section 1.9, the standard matrix A of T has linearly dependent columns. Since A is square, the columns of A do not span R^n. By Theorem 12, again, T cannot map R^n onto R^n.

39. Given any v in R^n, we may write v = T(x) for some x, because T is an onto mapping. Then, the assumed properties of S and U show that S(v) = S(T(x)) = x and U(v) = U(T(x)) = x. So S(v) and U(v) are equal for each v. That is, S and U are the same function from R^n into R^n.

40. Given u, v in R^n, let x = S(u) and y = S(v). Then T(x) = T(S(u)) = u and T(y) = T(S(v)) = v, by equation (2). Hence
S(u + v) = S(T(x) + T(y)) = S(T(x + y)) [because T is linear] = x + y [by equation (1)] = S(u) + S(v).
So S preserves sums. For any scalar r,
S(ru) = S(rT(x)) = S(T(rx)) [because T is linear] = rx [by equation (1)] = rS(u).
So S preserves scalar multiples. Thus S is a linear transformation.

41. [M] a. The exact solution of (3) is x1 = 3.94 and x2 = .49. The exact solution of (4) is x1 = 2.90 and x2 = 2.00.
b. When the solution of (4) is used as an approximation for the solution in (3), the error in using the value of 2.90 for x1 is about 26%, and the error in using 2.0 for x2 is about 308%.
c. The condition number of the coefficient matrix is 3363. The percentage change in the solution from (3) to (4) is roughly three orders of magnitude greater than the percentage change in the right side of the equation. This is the same order of magnitude as the condition number. The condition number gives a rough measure of how sensitive the solution of Ax = b can be to changes in b. Further information about the condition number is given at the end of Chapter 6 and in Chapter 7.

Note: See the Study Guide's MATLAB box, or a technology appendix, for information on condition number. Only the TI-83+ and TI-89 lack a command for this.

42. [M] MATLAB gives cond(A) = 3683, which is approximately 10^4. If you make several trials with MATLAB, which records 16 digits accurately, you should find that x and x1 agree to at least 12 or 13 significant digits. So about 4 significant digits are lost. Here is the result of one experiment. The vectors were all computed to the maximum 16 decimal places but are here displayed with only four decimal places: x = rand(4,1) and b = Ax are the displayed vectors. The MATLAB solution is x1 = A\b.

However, x − x1 is the displayed vector of very small entries. The computed solution x1 is accurate to about 12 decimal places.

43. [M] MATLAB gives cond(A) ≈ 68,600. Since this has magnitude between 10^4 and 10^5, the estimated accuracy of a solution of Ax = b should be about four or five decimal places less than the 16 decimal places that MATLAB usually computes accurately. That is, one should expect the solution to be accurate to only about 11 or 12 decimal places. Here is the result of one experiment. The vectors were all computed to the maximum 16 decimal places but are here displayed with only four decimal places: x = rand(5,1) and b = Ax are the displayed vectors. The MATLAB solution is x1 = A\b. However, x − x1 is the displayed vector of very small entries. The computed solution x1 is accurate to about 11 decimal places.

44. [M] Solve Ax = (0, 0, 0, 0, 1). MATLAB shows that cond(A) ≈ 4.8×10^5. Since MATLAB computes numbers accurately to 16 decimal places, the entries in the computed x should be accurate to at least 11 digits. The exact solution is (630, −12600, 56700, −88200, 44100).

45. [M] Some versions of MATLAB issue a warning when asked to invert a Hilbert matrix of order 12 or larger using floating-point arithmetic. The product AA⁻¹ should have several off-diagonal entries that are far from being zero. If not, try a larger matrix.

Note: All matrix programs supported by the Study Guide have data for Exercise 45, but only MATLAB and Maple have a single command to create a Hilbert matrix. The HP-48G data for Exercise 45 contain a program that can be edited to create other Hilbert matrices.

Notes: The Study Guide for Section 2.3 organizes the statements of the Invertible Matrix Theorem in a table that imbeds these ideas in a broader discussion of rectangular matrices. The statements are arranged in three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns. You may or may not wish to comment on them.
I believe that students cannot fully understand the concepts in the IMT if they do not know the correct wording of each statement. (Of course, this knowledge is not sufficient for understanding.) The Study Guide's Section 2.3 has an example of the type of question I often put on an exam at this point in the course. The section concludes with a discussion of reviewing and reflecting, as important steps to a mastery of linear algebra.
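All of the experiments in Exercises 42–45 follow one template, shown here with the order-12 Hilbert matrix (hilb is the built-in MATLAB command mentioned in the note above):

    A  = hilb(12);          % 12x12 Hilbert matrix, severely ill-conditioned
    cond(A)                 % huge; log10(cond(A)) estimates the digits lost
    x  = rand(12,1);
    b  = A*x;
    x1 = A\b;               % computed solution of Ax = b
    norm(x - x1)            % noticeably nonzero: many digits were lost
    % Exercise 45: the off-diagonal entries of A*inv(A) are far from zero.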

2.4 SOLUTIONS

Notes: Partitioned matrices arise in theoretical discussions in essentially every field that makes use of matrices. The Study Guide mentions some examples (with references). Every student should be exposed to some of the ideas in this section. If time is short, you might omit Example 4 and Theorem 10, and replace Example 5 by a problem similar to one in Exercises 1–10. (A sample replacement is given at the end of these solutions.) Then select homework from Exercises 1–13, 15, and 21–24. The exercises just mentioned provide a good environment for practicing matrix manipulation. Also, students will be reminded that an equation of the form AB = I does not by itself make A or B invertible. (The matrices must be square and the IMT is required.)

1. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left:
[I 0; E I][A B; C D] = [IA + 0C, IB + 0D; EA + IC, EB + ID] = [A, B; EA + C, EB + D].

2. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left:
[E 0; 0 F][A B; C D] = [EA + 0C, EB + 0D; 0A + FC, 0B + FD] = [EA, EB; FC, FD].

3. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left:
[0 I; I 0][W X; Y Z] = [0W + IY, 0X + IZ; IW + 0Y, IX + 0Z] = [Y, Z; W, X].

4. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left:
[I 0; −X I][A B; C D] = [IA + 0C, IB + 0D; −XA + IC, −XB + ID] = [A, B; −XA + C, −XB + D].

5. Compute the left side of the equation:
[A B; C 0][I 0; X Y] = [AI + BX, A0 + BY; CI + 0X, C0 + 0Y] = [A + BX, BY; C, 0].
Set this equal to the right side of the equation, so that A + BX = 0, BY = I, and Z = C. Since the (2,1) blocks are equal, Z = C. Since the (1,2) blocks are equal, BY = I. To proceed further, assume that B and Y are square. Then the equation BY = I implies that B is invertible, by the IMT, and Y = B⁻¹. (See the boxed remark that follows the IMT.) Finally, from the equality of the (1,1) blocks, BX = −A, B⁻¹BX = B⁻¹(−A), and X = −B⁻¹A. The order of the factors for X is crucial.

Note: For simplicity, statements (j) and (k) in the Invertible Matrix Theorem involve square matrices C and D. Actually, if A is n×n and if C is any matrix such that AC is the n×n identity matrix, then C must be n×n, too. (For AC to be defined, C must have n rows, and the equation AC = I implies that C has n columns.) Similarly, DA = I implies that D is n×n. Rather than discuss this in class, I expect that in Exercises 5–8, when

students see an equation such as BY = I, they will decide that both B and Y should be square in order to use the IMT.

6. Compute the left side of the equation:
[X 0; Y Z][A 0; B C] = [XA + 0B, X0 + 0C; YA + ZB, Y0 + ZC] = [XA, 0; YA + ZB, ZC].
Set this equal to the right side of the equation, so that XA = I, YA + ZB = 0, and ZC = I. To use the equality of the (1,1) blocks, assume that A and X are square. By the IMT, the equation XA = I implies that A is invertible and X = A⁻¹. (See the boxed remark that follows the IMT.) Similarly, if C and Z are assumed to be square, then the equation ZC = I implies that C is invertible, by the IMT, and Z = C⁻¹. Finally, use the (2,1) blocks and right-multiplication by A⁻¹: YA = −ZB = −C⁻¹B, YAA⁻¹ = (−C⁻¹B)A⁻¹, and Y = −C⁻¹BA⁻¹. The order of the factors for Y is crucial.

7. Compute the left side of the equation:
[X 0; Y I][A Z; B I] = [XA + 0B, XZ + 0I; YA + IB, YZ + II].
Set this equal to the right side of the equation, so that XA = I, XZ = 0, YA + B = 0, and YZ + I = I. To use the equality of the (1,1) blocks, assume that A and X are square. By the IMT, the equation XA = I implies that A is invertible and X = A⁻¹. (See the boxed remark that follows the IMT.) Also, X is invertible. Since XZ = 0, X⁻¹XZ = X⁻¹0 = 0, so Z must be 0. Finally, from the equality of the (2,1) blocks, YA = −B. Right-multiplication by A⁻¹ shows that YAA⁻¹ = −BA⁻¹ and Y = −BA⁻¹. The order of the factors for Y is crucial.

8. Compute the left side of the equation:
[A B; 0 I][X Y Z; 0 0 I] = [AX + B0, AY + B0, AZ + BI; 0X + I0, 0Y + I0, 0Z + II].
Set this equal to the right side of the equation, so that AX = I, AY = 0, and AZ + B = 0. To use the equality of the (1,1) blocks, assume that A and X are square. By the IMT, the equation AX = I implies that A is invertible and X = A⁻¹. (See the boxed remark that follows the IMT.) Since AY = 0, from the equality of the (1,2) blocks, left-multiplication by A⁻¹ gives A⁻¹AY = A⁻¹0 = 0, so Y = 0. Finally, from the (1,3) blocks, AZ = −B. Left-multiplication by A⁻¹ gives A⁻¹AZ = A⁻¹(−B), and Z = −A⁻¹B. The order of the factors for Z is crucial.

Note: The Study Guide tells students, "Problems such as 5–10 make good exam questions. Remember to mention the IMT when appropriate, and remember that matrix multiplication is generally not commutative." When a problem statement includes a condition that a matrix is square, I expect my students to mention this fact when they apply the IMT.

9. Compute the left side of the equation:
[I 0 0; X I 0; Y 0 I][A11 A12; A21 A22; A31 A32] = [A11, A12; XA11 + A21, XA12 + A22; YA11 + A31, YA12 + A32].
Set this equal to the right side of the equation:
[A11, A12; XA11 + A21, XA12 + A22; YA11 + A31, YA12 + A32] = [B11, B12; 0, B22; 0, B32].
Since the (2,1) blocks are equal, XA11 + A21 = 0 and XA11 = −A21. Since A11 is invertible, right multiplication by A11⁻¹ gives X = −A21A11⁻¹. Likewise, since the (3,1) blocks are equal, YA11 + A31 = 0 and YA11 = −A31. Since A11 is invertible, right multiplication by A11⁻¹ gives Y = −A31A11⁻¹. Finally, from the (2,2) entries, B22 = XA12 + A22. Since X = −A21A11⁻¹, B22 = −A21A11⁻¹A12 + A22.

10. Since the two matrices are inverses,
[I 0 0; C I 0; A B I][I 0 0; Z I 0; X Y I] = I.
Compute the left side of the equation:
[II + 0Z + 0X, I0 + 0I + 0Y, I0 + 00 + 0I; CI + IZ + 0X, C0 + II + 0Y, C0 + I0 + 0I; AI + BZ + IX, A0 + BI + IY, A0 + B0 + II]
= [I, 0, 0; C + Z, I, 0; A + BZ + X, B + Y, I].
Set this equal to the right side of the equation, so that C + Z = 0, A + BZ + X = 0, and B + Y = 0.
Since the (2,1) blocks are equal, C + Z = 0 and Z = −C. Likewise, since the (3,2) blocks are equal, B + Y = 0 and Y = −B. Finally, from the (3,1) entries, A + BZ + X = 0 and X = −A − BZ. Since Z = −C, X = −A − B(−C) = −A + BC.

11. a. True. See the subsection "Addition and Scalar Multiplication."
b. False. See the paragraph before Example 3.

12. a. True. See the paragraph before Example 4.
b. False. See the paragraph before Example 3.

13. You are asked to establish an "if and only if" statement. First, suppose that A is invertible, and let A⁻¹ = [D E; F G]. Then
[B 0; 0 C][D E; F G] = [BD, BE; CF, CG] = [I 0; 0 I].
Since B is square, the equation BD = I implies that B is invertible, by the IMT. Similarly, CG = I implies that C is invertible. Also, the equation BE = 0 implies that E = B⁻¹0 = 0. Similarly F = 0. Thus
A⁻¹ = [D E; F G] = [B⁻¹ 0; 0 C⁻¹]. (*)
This proves that A is invertible only if B and C are invertible. For the "if" part of the statement, suppose that B and C are invertible. Then (*) provides a likely candidate for A⁻¹ which can be used to show that A is invertible. Compute:
[B 0; 0 C][B⁻¹ 0; 0 C⁻¹] = [BB⁻¹ 0; 0 CC⁻¹] = [I 0; 0 I].
Since A is square, this calculation and the IMT imply that A is invertible. (Don't forget this final sentence. Without it, the argument is incomplete.) Instead of that sentence, you could add the equation:
[B⁻¹ 0; 0 C⁻¹][B 0; 0 C] = [B⁻¹B 0; 0 C⁻¹C] = [I 0; 0 I].

14. You are asked to establish an "if and only if" statement. First suppose that A is invertible. Example 5 shows that A11 and A22 are invertible. This proves that A is invertible only if A11 and A22 are invertible. For the "if" part of this statement, suppose that A11 and A22 are invertible. Then the formula in Example 5 provides a likely candidate for A⁻¹ which can be used to show that A is invertible. Compute:
[A11 A12; 0 A22][A11⁻¹, −A11⁻¹A12A22⁻¹; 0, A22⁻¹] = [A11A11⁻¹ + A12·0, A11(−A11⁻¹A12A22⁻¹) + A12A22⁻¹; 0A11⁻¹ + A22·0, 0(−A11⁻¹A12A22⁻¹) + A22A22⁻¹]
= [I, −A12A22⁻¹ + A12A22⁻¹; 0, I] = [I 0; 0 I].
Since A is square, this calculation and the IMT imply that A is invertible.

15. Compute the right side of the equation:
[I 0; X I][A11 0; 0 S][I Y; 0 I] = [A11 0; XA11 S][I Y; 0 I] = [A11, A11Y; XA11, XA11Y + S].
Set this equal to the left side of the equation:
[A11, A11Y; XA11, XA11Y + S] = [A11, A12; A21, A22], so that A11Y = A12 and XA11 = A21.
Since the (1,2) blocks are equal, A11Y = A12. Since A11 is invertible, left multiplication by A11⁻¹ gives Y = A11⁻¹A12. Likewise, since the (2,1) blocks are equal, XA11 = A21. Since A11 is invertible, right

One can check that the matrix S as given in the exercise satisfies the equation XA11Y + S = A22 with the calculated values of X and Y given above.

26. Suppose that A and A11 are invertible. First note that

  [I 0; X I][I 0; -X I] = [I 0; 0 I]  and  [I Y; 0 I][I -Y; 0 I] = [I 0; 0 I]

Since the matrices [I 0; X I] and [I Y; 0 I] are square, they are both invertible by the IMT. Equation (7) may be left-multiplied by [I 0; X I]^{-1} and right-multiplied by [I Y; 0 I]^{-1} to find

  [A11 0; 0 S] = [I 0; X I]^{-1} A [I Y; 0 I]^{-1}

Thus by Theorem 6, the matrix [A11 0; 0 S] is invertible as the product of invertible matrices. Finally, Exercise 23 above may be used to show that S is invertible.

27. The column-row expansions of G_k and G_{k+1} are:

  G_k = X_k X_k^T = col_1(X_k) row_1(X_k^T) + … + col_k(X_k) row_k(X_k^T)

and

  G_{k+1} = X_{k+1} X_{k+1}^T
          = col_1(X_{k+1}) row_1(X_{k+1}^T) + … + col_k(X_{k+1}) row_k(X_{k+1}^T) + col_{k+1}(X_{k+1}) row_{k+1}(X_{k+1}^T)
          = col_1(X_k) row_1(X_k^T) + … + col_k(X_k) row_k(X_k^T) + col_{k+1}(X_{k+1}) row_{k+1}(X_{k+1}^T)
          = G_k + col_{k+1}(X_{k+1}) row_{k+1}(X_{k+1}^T)

since the first k columns of X_{k+1} are identical to the first k columns of X_k. Thus to update G_k to produce G_{k+1}, the matrix col_{k+1}(X_{k+1}) row_{k+1}(X_{k+1}^T) should be added to G_k.

28. Since W = [X  x0],

  W^T W = [X^T; x0^T][X  x0] = [X^T X, X^T x0; x0^T X, x0^T x0]

By applying the formula for S from Exercise 25, S may be computed:

  S = x0^T x0 - x0^T X (X^T X)^{-1} X^T x0 = x0^T (I_m - X(X^T X)^{-1} X^T) x0 = x0^T M x0
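The factorization in Exercise 25 is easy to test numerically. The MATLAB sketch below is hypothetical (random blocks, arbitrarily chosen sizes); it builds the Schur complement S = A22 - A21 A11^{-1} A12 and confirms that the three-factor product reproduces A:

  A11 = randn(3); A12 = randn(3,2); A21 = randn(2,3); A22 = randn(2);
  X = A21/A11;                          % X = A21*inv(A11)
  Y = A11\A12;                          % Y = inv(A11)*A12
  S = A22 - A21*(A11\A12);              % the Schur complement of A11
  M = [eye(3) zeros(3,2); X eye(2)] * blkdiag(A11, S) * [eye(3) Y; zeros(2,3) eye(2)];
  norm(M - [A11 A12; A21 A22])          % near zero: the factorization reproduces A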

29. The matrix equation (8) in the text is equivalent to (A - sI_n)x + Bu = 0 and Cx + I_m u = y. Rewrite the first equation as (A - sI_n)x = -Bu. When A - sI_n is invertible,

  x = (A - sI_n)^{-1}(-Bu) = -(A - sI_n)^{-1}Bu

Substitute this formula for x into the second equation above:

  C(-(A - sI_n)^{-1}Bu) + u = y,  so that  I_m u - C(A - sI_n)^{-1}Bu = y

Thus y = (I_m - C(A - sI_n)^{-1}B)u. If W(s) = I_m - C(A - sI_n)^{-1}B, then y = W(s)u. The matrix W(s) is the Schur complement of the matrix A - sI_n in the system matrix in equation (8).

30. The matrix in question is

  [A - BC - sI_n, B; -C, I_m]

By applying the formula for S from Exercise 25, S may be computed:

  S = I_m - (-C)(A - BC - sI_n)^{-1}B = I_m + C(A - BC - sI_n)^{-1}B

31. The key computation uses the partition M = [A, 0; I - A, -A]:

  M^2 = [A, 0; I - A, -A][A, 0; I - A, -A] = [A^2, 0; (I - A)A + (-A)(I - A), (-A)(-A)] = [A^2, 0; 0, A^2]

since (I - A)A + (-A)(I - A) = A - A^2 - A + A^2 = 0. Thus M^2 = I whenever A^2 = I.

32. Let C be any nonzero 3×3 matrix, and define A = [I3, 0; C, -I3]. Then

  A^2 = [I3, 0; C, -I3][I3, 0; C, -I3] = [I3·I3 + 0·C, I3·0 + 0·(-I3); C·I3 + (-I3)·C, C·0 + (-I3)(-I3)] = [I3, 0; 0, I3]

33. The product of two lower triangular matrices is lower triangular. Suppose that for n = k, the product of two k×k lower triangular matrices is lower triangular, and consider any (k+1)×(k+1) matrices A1 and B1. Partition these matrices as

  A1 = [a, 0^T; v, A],  B1 = [b, 0^T; w, B]

where A and B are k×k matrices, v and w are in R^k, and a and b are scalars. Since A1 and B1 are lower triangular, so are A and B. Then

  A1B1 = [a, 0^T; v, A][b, 0^T; w, B] = [ab + 0^T w, a·0^T + 0^T B; vb + Aw, v·0^T + AB] = [ab, 0^T; bv + Aw, AB]

Since A and B are k×k, AB is lower triangular. The form of A1B1 shows that it, too, is lower triangular. Thus the statement about lower triangular matrices is true for n = k + 1 if it is true for n = k. By the principle of induction, the statement is true for all n > 1.
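The closure property proved in Exercise 33 can be illustrated (though of course not proved) numerically; this MATLAB sketch uses arbitrary random matrices:

  A = tril(randn(5)); B = tril(randn(5));   % two random lower triangular matrices
  norm(triu(A*B, 1))                        % strictly upper triangular part of AB; prints zero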

Note: Exercise 33 is good for mathematics and computer science students. The solution of Exercise 33 in the Study Guide shows students how to use the principle of induction. The Study Guide also has an appendix on "The Principle of Induction," at the end of Section 2.4. The text presents more applications of induction in Chapter 3 and in the Supplementary Exercises for Chapter 3.

34. Let A_n and B_n be the matrices described in the exercise. By direct computation, A_2B_2 = I_2. Assume that for n = k, the matrix A_kB_k is I_k, and write

  A_{k+1} = [1, 0^T; v, A_k]  and  B_{k+1} = [1, 0^T; w, B_k]

where v and w are in R^k. Then

  A_{k+1}B_{k+1} = [1, 0^T; v, A_k][1, 0^T; w, B_k] = [1 + 0^T w, 0^T B_k; v + A_k w, A_kB_k] = [1, 0^T; v + A_k w, I_k]

The (2, 1)-entry is 0 because v equals the first column of A_k, and A_kw is -1 times the first column of A_k. By the principle of induction, A_nB_n = I_n for all n > 1. Since A_n and B_n are square, the IMT shows that these matrices are invertible, and B_n = A_n^{-1}.

Note: An induction proof can also be given using partitions with the form shown below. The details are slightly more complicated.

  A_{k+1} = [A_k, 0; v^T, 1]  and  B_{k+1} = [B_k, 0; w^T, 1]

  A_{k+1}B_{k+1} = [A_k, 0; v^T, 1][B_k, 0; w^T, 1] = [A_kB_k, 0; v^TB_k + w^T, 1] = [I_k, 0; v^TB_k + w^T, 1]

The (2, 1)-entry is 0^T because v^T times a column of B_k equals the sum of the entries in the column, and all of such sums are zero except the last, which is 1. So v^TB_k is the negative of w^T. By the principle of induction, A_nB_n = I_n for all n > 1. Since A_n and B_n are square, the IMT shows that these matrices are invertible, and B_n = A_n^{-1}.

35. First, visualize a partition of A as a block diagonal matrix, as below, and then visualize the (2, 2)-block itself as a block-diagonal matrix. That is,

  A = [A11, 0; 0, A22],  where A22 = [B, 0; 0, …]

Observe that B is invertible and B^{-1} = …. By Exercise 23, the block diagonal matrix A22 is invertible, and

  A22^{-1} = …

Next, observe that A11 is also invertible, with inverse …. By Exercise 23, A itself is invertible, and its inverse is block diagonal:

  A^{-1} = [A11^{-1}, 0; 0, A22^{-1}] = …

36. [M] This exercise and the next, which involve large matrices, are more appropriate for MATLAB, Maple, and Mathematica than for the graphing calculators.

a. Display the submatrix of A obtained from rows 15 to 20 and columns 5 to 10.

  MATLAB: A(15:20, 5:10)
  Maple: submatrix(A, 15..20, 5..10)
  Mathematica: Take[ A, {15,20}, {5,10} ]

b. Insert a 5×10 matrix B into rows 10 to 14 and columns 20 to 29 of matrix A:

  MATLAB: A(10:14, 20:29) = B;  The semicolon suppresses output display.
  Maple: copyinto(B, A, 10, 20):  The colon suppresses output display.
  Mathematica: For[ i = 10, i <= 14, i++, For[ j = 20, j <= 29, j++, A[[ i,j ]] = B[[ i-9, j-19 ]] ] ];  The semicolon suppresses output.

c. To create B = [A 0; 0 A^T] with MATLAB, build B out of four blocks:

  B = [A zeros(20,20); zeros(30,30) A'];

Another method: first enter B = A; and then enlarge B with the command

  B(21:50, 31:50) = A';

This places A^T in the (2, 2) block of the larger B and fills in the (1, 2) and (2, 1) blocks with zeros.

For Maple:

  B := matrix(50,50,0):
  copyinto(A, B, 1, 1):
  copyinto(transpose(A), B, 21, 31):

For Mathematica:

  B = BlockMatrix[ {{A, ZeroMatrix[20,20]}, {ZeroMatrix[30,30], Transpose[A]}} ]
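The block-building idea in part (c) is easy to try out. The sketch below is hypothetical (a random matrix standing in for the exercise's A, whose dimensions are reconstructed here as 20×30):

  A = randn(20, 30);                          % hypothetical rectangular matrix
  B = [A zeros(20,20); zeros(30,30) A'];      % the block matrix [A 0; 0 A^T]
  size(B)                                     % 50-by-50, as the block sizes dictate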

37. a. [M] Construct A from four blocks, say C11, C12, C21, and C22, for example with C11 a 30×30 matrix and C22 a 20×20 matrix.

  MATLAB: C11 = A(1:30, 1:30) + B(1:30, 1:30)
          C12 = A(1:30, 31:50) + B(1:30, 31:50)
          C21 = A(31:50, 1:30) + B(31:50, 1:30)
          C22 = A(31:50, 31:50) + B(31:50, 31:50)
          C = [C11 C12; C21 C22]

The commands in Maple and Mathematica are analogous, but with different syntax. The first commands are:

  Maple: C11 := submatrix(A, 1..30, 1..30) + submatrix(B, 1..30, 1..30)
  Mathematica: c11 = Take[ A, {1,30}, {1,30} ] + Take[ B, {1,30}, {1,30} ]

b. The algebra needed comes from block matrix multiplication:

  AB = [A11, A12; A21, A22][B11, B12; B21, B22] = [A11B11 + A12B21, A11B12 + A12B22; A21B11 + A22B21, A21B12 + A22B22]

Partition both A and B, for example with 30×30 (1, 1) blocks and 20×20 (2, 2) blocks. The four necessary submatrix computations use syntax analogous to that shown for (a).

c. The algebra needed comes from the block matrix equation

  [A11, 0; A21, A22][x1; x2] = [b1; b2]

where x1 and b1 are in R^30 and x2 and b2 are in R^20. Then A11x1 = b1, which can be solved to produce x1. Once x1 is found, rewrite the equation A21x1 + A22x2 = b2 as A22x2 = c, where c = b2 - A21x1, and solve A22x2 = c for x2.

Notes: The following may be used in place of Example 5:

Example 5: Use equation (*) to find formulas for X, Y, and Z in terms of A, B, and C. Mention any assumptions you make in order to produce the formulas.

  [X, 0; Y, Z][I, 0; A, B] = [I, 0; C, I]    (*)

Solution: This matrix equation provides four equations that can be used to find X, Y, and Z:

  X + 0 = I,  0 = 0,  YI + ZA = C,  Y·0 + ZB = I    (Note the order of the factors.)

The first equation says that X = I. To solve the fourth equation, ZB = I, assume that B and Z are square. In this case, the equation ZB = I implies that B and Z are invertible, by the IMT. (Actually, it suffices to assume either that B is square or that Z is square.) Then, right-multiply each side of ZB = I to get ZBB^{-1} = IB^{-1} and Z = B^{-1}. Finally, the third equation is YI + ZA = C. So, Y + B^{-1}A = C, and Y = C - B^{-1}A.

The following counterexample shows that Z need not be square for the equation (*) above to be true.

Note that Z is not determined by A, B, and C, when B is not square. For instance, another Z that works in this counterexample is Z = ….

2.5 SOLUTIONS

Notes: Modern algorithms in numerical linear algebra are often described using matrix factorizations. For practical work, this section is more important than Sections 4.7 and 5.4, even though matrix factorizations are explained nicely in terms of change of bases. Computational exercises in this section emphasize the use of the LU factorization to solve linear systems. The LU factorization is performed using the algorithm explained in the paragraphs before Example 2, and carried out in Example 2. The text discusses how to build L when no interchanges are needed to reduce the given matrix to U. An appendix in the Study Guide discusses how to build L in permuted unit lower triangular form when row interchanges are needed. Other factorizations are introduced in the exercises.

1. L = …, U = …, b = …. First, solve Ly = b:

  [L b] = … ~ …

The only arithmetic is in column 4, so y = …. Next, solve Ux = y, using back-substitution (with matrix notation):

  [U y] = … ~ …

So x = (3, 4, 6). To confirm this result, row reduce the matrix [A b]:

  [A b] = …

From this point the row reduction follows that of [U y] above, yielding the same result.
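The two-step solve used in Exercise 1 is easy to carry out in MATLAB. The factors below are hypothetical (the exercise's own L, U, and b did not survive transcription); backslash applied to a triangular matrix performs exactly the forward or back substitution described above:

  L = [1 0 0; -1 1 0; 2 0 1];       % hypothetical unit lower triangular factor
  U = [3 -7 -2; 0 -2 -1; 0 0 -1];   % hypothetical upper triangular factor
  b = [-7; 5; 2];                   % hypothetical right-hand side
  y = L\b;                          % forward substitution solves Ly = b
  x = U\y;                          % back substitution solves Ux = y
  norm(L*U*x - b)                   % near zero: x solves Ax = b for A = LU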

113 5 Solutions L, U, 4 b First, solve Ly b: 6 so [ L b ] 4, 6 y Next solve Ux y, using back-substitution (with matrix notation): [ U y ] /4, so x (/4,,) o confirm this result, row reduce the matrix [A b]: [ A b ] From this point the row reduction follows that of [U y] above, yielding the same result 3 L 3, U 3 4, b First, solve Ly b: 4 4 [ L b ] 3 3 3, 4 4 so y 3 3 Next solve Ux y, using back-substitution (with matrix notation): [ U 5 5 y ] , 3 so x (, 3, 3)

114 4 CHAPER Matrix Algebra L /, U, 5 b First, solve Ly b: 3/ [ L b ] / 5 5 5, 3/ so y 5 8 Next solve Ux y, using back-substitution (with matrix notation): 4 4 [ U y ] , so x ( 5,, 3) L, U, b First solve Ly b: [ L b ] , so y 3 Next solve Ux y, using back-substitution (with matrix notation): [ U y ] 4 3 3

115 5 Solutions , 3 3 so x (,,, 3) L, U, b First, solve Ly b: [ L b ] , 3 so y Next solve Ux y, using back-substitution (with matrix notation): [ U y ] , so x (3,,, )

116 6 CHAPER Matrix Algebra 5 7 Place the first pivot column of 3 4 3/ times row to row, yielding U 5 5 A ~ U 3 4 7/ into L, after dividing the column by (the pivot), then add 3 [7/] 7/, L 3/ 3/ 8 Row reduce A to echelon form using only row replacement operations hen follow the algorithm in Example to find L A U [ ] 6, L /3 / A 3 3 ~ 3 U [ 8] 3 3 8, L 3 /3 3 /3

117 5 Solutions A 8 9 ~ ~ U [9] 5 9, L A U [5] 3 5 5, L /3 /3 Row reduce A to echelon form using only row replacement operations hen follow the algorithm in Example to find L Use the last column of I 3 to make L unit lower triangular A U /, L / 3 3

118 8 CHAPER Matrix Algebra U No more pivots! 4 Use the last two columns of I4 to make L unit lower triangular, L A U Use the last two columns of I4 to make L unit lower triangular 5 3 3, L

119 5 Solutions A U [5] 3 5 3, L 3 / / A 3 5 ~ 4 ~ U Use the last three columns of I to make L unit lower triangular 7 5 3/, L 3/ L, U o find L, use the method of Section ; that is, row reduce [L I ]: [ L I] [ I L ],

120 CHAPER Matrix Algebra so L Likewise to find U, row reduce [ UI ]: / [ U I] 4 3/ /4 3/8 /4 / / [ I U ], / /4 3/8 /4 so U / / hus / /4 3/8 /4 /8 3/8 /4 A U L / / 3/ / / / / 8 L 3, U 3 4 ofind L, rowreduce[ L I]: 4 [ L I] 3 ~ ~ 3 I L, so L 3 Likewise tofind U,row reduce [ U I] : [ U I] 3 4 ~ 3 4 /3 /3 ~ /3 4/3 ~ /3 4/3 / /6 /3 ~ /3 4/3 [ I U ],

so U^{-1} = …. Thus

  A^{-1} = U^{-1}L^{-1} = …

19. Let A be a lower-triangular n×n matrix with nonzero entries on the diagonal, and consider the augmented matrix [A I].

a. The (1, 1)-entry can be scaled to 1 and the entries below it can be changed to 0 by adding multiples of row 1 to the rows below. This affects only the first column of A and the first column of I. So the (2, 2)-entry in the new matrix is still nonzero and now is the only nonzero entry of row 2 in the first n columns (because A was lower triangular). The (2, 2)-entry can be scaled to 1, and the entries below it can be changed to 0 by adding multiples of row 2 to the rows below. This affects only columns 2 and n + 2 of the augmented matrix. Now the (3, 3)-entry in A is the only nonzero entry of the third row in the first n columns, so it can be scaled to 1 and then used as a pivot to zero out entries below it. Continuing in this way, A is eventually reduced to I, by scaling each row with a pivot and then using only row operations that add multiples of the pivot row to rows below.

b. The row operations just described only add rows to rows below, so the I on the right in [A I] changes into a lower triangular matrix. By Theorem 7 in Section 2.2, that matrix is A^{-1}.

20. Let A = LU be an LU factorization for A. Since L is unit lower triangular, it is invertible by Exercise 19. Thus, by the Invertible Matrix Theorem, L may be row reduced to I. But L is unit lower triangular, so it can be row reduced to I by adding suitable multiples of a row to the rows below it, beginning with the top row. Note that all of the described row operations done to L are row-replacement operations. If elementary matrices E1, E2, …, Ep implement these row-replacement operations, then

  Ep⋯E2E1A = (Ep⋯E2E1)LU = IU = U

This shows that A may be row reduced to U using only row-replacement operations.

21. (Solution in Study Guide) Suppose A = BC, with B invertible. Then there exist elementary matrices E1, …, Ep corresponding to row operations that reduce B to I, in the sense that Ep⋯E1B = I. Applying the same sequence of row operations to A amounts to left-multiplying A by the product Ep⋯E1. By associativity of matrix multiplication,

  Ep⋯E1A = Ep⋯E1BC = IC = C

so the same sequence of row operations reduces A to C.

22. First find an LU factorization for A. Row reduce A to echelon form using only row replacement operations:

  A = … ~ …

~ … = U

then follow the algorithm in Example 2 to find L. Use the last two columns of I5 to make L unit lower triangular.

  L = …

Now notice that the bottom two rows of U contain only zeros. If one uses the row-column method to find LU, the entries in the final two columns of L will not be used, since these entries will be multiplied by zeros from the bottom two rows of U. So let B be the first three columns of L and let C be the top three rows of U. That is,

  B = …,  C = …

Then B and C have the desired sizes and BC = LU = A. We can generalize this process to the case where A is m×n, A = LU, and U has only three nonzero rows: let B be the first three columns of L and let C be the top three rows of U.

23. a. Express each row of D as the transpose of a column vector. Then use the multiplication rule for partitioned matrices to write

  A = CD = [c1 c2 c3 c4][d1^T; d2^T; d3^T; d4^T] = c1d1^T + c2d2^T + c3d3^T + c4d4^T

which is the sum of four outer products.

b. Since A has …×… entries, C has … entries and D has …×… entries; to store C and D together requires only … entries, which is 5% of the amount of entries needed to store A directly.
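A short MATLAB sketch (with hypothetical sizes) confirms the column-row expansion of Exercise 23(a) and illustrates the storage savings of 23(b):

  C = randn(40, 4); D = randn(4, 100);   % hypothetical rank-4 factorization A = CD
  A = C*D;
  S = zeros(40, 100);
  for k = 1:4
      S = S + C(:,k)*D(k,:);             % add the k-th outer product
  end
  norm(A - S)                            % zero: CD equals the sum of the outer products
  (numel(C) + numel(D))/numel(A)         % storage ratio: 560/4000 = 14 percent here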

24. Since Q is square and Q^TQ = I, Q is invertible by the Invertible Matrix Theorem and Q^{-1} = Q^T. Thus A is the product of invertible matrices and hence is invertible. Thus by Theorem 5, the equation Ax = b has a unique solution for all b. From Ax = b, we have QRx = b, Q^TQRx = Q^Tb, Rx = Q^Tb, and finally x = R^{-1}Q^Tb. A good algorithm for finding x is to compute Q^Tb and then row reduce the matrix [R  Q^Tb]. See the corresponding exercise in Section 2.2 for details on why this process works. The reduction is fast in this case because R is a triangular matrix.

25. A = UDV^T. Since U and V^T are square, the equations U^TU = I and V^TV = I imply that U and V^T are invertible, by the IMT, and hence U^{-1} = U^T and (V^T)^{-1} = V. Since the diagonal entries σ1, …, σn in D are nonzero, D is invertible, with the inverse of D being the diagonal matrix with σ1^{-1}, …, σn^{-1} on the diagonal. Thus A is a product of invertible matrices. By Theorem 6, A is invertible and A^{-1} = (UDV^T)^{-1} = (V^T)^{-1}D^{-1}U^{-1} = VD^{-1}U^T.

26. If A = PDP^{-1}, where P is an invertible 3×3 matrix and D is the diagonal matrix

  D = [1, 0, 0; 0, 1/2, 0; 0, 0, 1/3]

then

  A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PDIDP^{-1} = PD^2P^{-1}

and since D^2 = [1, 0, 0; 0, 1/4, 0; 0, 0, 1/9], A^2 = P[1, 0, 0; 0, 1/4, 0; 0, 0, 1/9]P^{-1}. Likewise, A^3 = PD^3P^{-1}, so

  A^3 = P[1, 0, 0; 0, 1/8, 0; 0, 0, 1/27]P^{-1}

In general, A^k = PD^kP^{-1}, so

  A^k = P[1, 0, 0; 0, 1/2^k, 0; 0, 0, 1/3^k]P^{-1}

27. First consider using a series circuit with resistance R1 followed by a shunt circuit with resistance R2 for the network. The transfer matrix for this network is

  [1, 0; -1/R2, 1][1, -R1; 0, 1] = [1, -R1; -1/R2, (R1 + R2)/R2]

For an input of 12 volts and 6 amps to produce an output of 9 volts and 4 amps, the transfer matrix must satisfy

  [1, -R1; -1/R2, (R1 + R2)/R2][12; 6] = [12 - 6R1; (-12 + 6R1 + 6R2)/R2] = [9; 4]

Equate the top entries and obtain R1 = 1/2 ohm. Substitute this value in the bottom entry and solve to obtain R2 = 9/2 ohms. The ladder network is

  a. [Figure: a series circuit of 1/2 ohm followed by a shunt circuit of 9/2 ohms, with voltages v1, v2, v3 and currents i1, i2, i3.]

Next consider using a shunt circuit with resistance R1 followed by a series circuit with resistance R2 for the network. The transfer matrix for this network is

  [1, -R2; 0, 1][1, 0; -1/R1, 1] = [(R1 + R2)/R1, -R2; -1/R1, 1]

For an input of 12 volts and 6 amps to produce an output of 9 volts and 4 amps, the transfer matrix must satisfy

  [(R1 + R2)/R1, -R2; -1/R1, 1][12; 6] = [12(R1 + R2)/R1 - 6R2; -12/R1 + 6] = [9; 4]

Equate the bottom entries and obtain R1 = 6 ohms. Substitute this value in the top entry and solve to obtain R2 = 3/4 ohm. The ladder network is

  b. [Figure: a shunt circuit of 6 ohms followed by a series circuit of 3/4 ohm, with voltages v1, v2, v3 and currents i1, i2, i3.]

28. The three shunt circuits have transfer matrices

  [1, 0; -1/R1, 1],  [1, 0; -1/R2, 1],  and  [1, 0; -1/R3, 1]

respectively. To find the transfer matrix for the series of circuits, multiply these matrices:

  [1, 0; -1/R3, 1][1, 0; -1/R2, 1][1, 0; -1/R1, 1] = [1, 0; -(1/R1 + 1/R2 + 1/R3), 1]

Thus the resulting network is itself a shunt circuit with resistance 1/(1/R1 + 1/R2 + 1/R3).

29. a. The first circuit is a shunt circuit with resistance R1 ohms, so its transfer matrix is [1, 0; -1/R1, 1]. The second circuit is a series circuit with resistance R2 ohms, so its transfer matrix is [1, -R2; 0, 1].

The third circuit is a shunt circuit with resistance R3 ohms, so its transfer matrix is [1, 0; -1/R3, 1]. The transfer matrix of the network is the product of these matrices, in right-to-left order:

  [1, 0; -1/R3, 1][1, -R2; 0, 1][1, 0; -1/R1, 1] = [(R1 + R2)/R1, -R2; -(R1 + R2 + R3)/(R1R3), (R2 + R3)/R3]

b. To find a ladder network with a structure like that in part (a) and with the given transfer matrix A, we must find resistances R1, R2, and R3 such that

  A = [4/3, -12; -1/4, 3] = [(R1 + R2)/R1, -R2; -(R1 + R2 + R3)/(R1R3), (R2 + R3)/R3]

From the (1, 2) entries, R2 = 12 ohms. The (1, 1) entries now give (R1 + 12)/R1 = 4/3, which may be solved to obtain R1 = 36 ohms. Likewise the (2, 2) entries give (R3 + 12)/R3 = 3, which also may be solved to obtain R3 = 6 ohms. Thus the matrix A may be factored as

  A = [1, 0; -1/R3, 1][1, -R2; 0, 1][1, 0; -1/R1, 1] = [1, 0; -1/6, 1][1, -12; 0, 1][1, 0; -1/36, 1]

The ladder network is

  [Figure: a shunt circuit of 36 ohms, a series circuit of 12 ohms, and a shunt circuit of 6 ohms, with voltages v1 through v4 and currents i1 through i4.]

30. Answers may vary. The network below interchanges the series and shunt circuits.

  [Figure: a series circuit of resistance R1, a shunt circuit of resistance R2, and a series circuit of resistance R3, with voltages v1 through v4 and currents i1 through i4.]

The transfer matrix of this network is the product of the individual transfer matrices, in right-to-left order:

  [1, -R3; 0, 1][1, 0; -1/R2, 1][1, -R1; 0, 1] = [(R2 + R3)/R2, -R3 - R1(R2 + R3)/R2; -1/R2, (R1 + R2)/R2]

By setting the matrix A from the previous exercise equal to this matrix, one may find that

  [(R2 + R3)/R2, -R3 - R1(R2 + R3)/R2; -1/R2, (R1 + R2)/R2] = [4/3, -12; -1/4, 3]

Set the (2, 1) entries equal and obtain R2 = 4 ohms. Substituting this value for R2 and equating the (2, 2) entries gives R1 = 8 ohms. Likewise, equating the (1, 1) entries gives R3 = 4/3 ohms.

The ladder network is

  [Figure: a series circuit of 8 ohms, a shunt circuit of 4 ohms, and a series circuit of 4/3 ohms, with voltages v1 through v4 and currents i1 through i4.]

Note: The Study Guide's MATLAB box for Section 2.5 suggests that for most LU factorizations in this section, students can use the gauss command repeatedly to produce U, and use paper and mental arithmetic to write down the columns of L as the row reduction to U proceeds. This is because for Exercises 7-16 the pivots are integers and other entries are simple fractions. However, for Exercises 31 and 32 this is not reasonable, and students are expected to solve an elementary programming problem. (The Study Guide provides no hints.)

31. [M] Store the matrix A in a temporary matrix B and create L initially as the 8×8 identity matrix. The following sequence of MATLAB commands fills in the entries of L below the diagonal, one column at a time, until the first seven columns are filled. (The eighth column is the final column of the identity matrix.)

  L(2:8, 1) = B(2:8, 1)/B(1, 1)
  B = gauss(B, 1)
  L(3:8, 2) = B(3:8, 2)/B(2, 2)
  B = gauss(B, 2)
  …
  L(8:8, 7) = B(8:8, 7)/B(7, 7)
  U = gauss(B, 7)

Of course, some students may realize that a loop will speed up the process. The for..end syntax is illustrated in the MATLAB box for Section 5.6. Here is a MATLAB program that includes the initial setup of B and L:

  B = A                                  % work on a copy, preserving A
  L = eye(8)                             % L starts as the identity
  for j = 1:7
    L(j+1:8, j) = B(j+1:8, j)/B(j, j)    % column j of L: entries below the pivot, divided by the pivot
    B = gauss(B, j)                      % zero out column j below the pivot
  end
  U = B

a. To four decimal places, the results of the LU decomposition are L =

127 5 Solutions U b he result of solving Ly b and then Ux y is c x (39569, 65885, 439, 7397, 569, 8768, 945, 43) A [M] A 3 he commands shown for Exercise 3, but modified for matrices, produce 3 3 L U

128 8 CHAPER Matrix Algebra b Let s k+ be the solution of Ls k+ t k for k,,, hen t k+ is the solution of Ut k+ s k+ for k,,, he results are s 775, t 4444, s 48889, t 8596, s3 6, t3 69, s4 969, t SOLUIONS Notes: his section is independent of Section he material here makes a good backdrop for the series expansion of (I C) because this formula is actually used in some practical economic work Exercise 8 gives an interpretation to entries of an inverse matrix that could be stated without the economic context he answer to this exercise will depend upon the order in which the student chooses to list the sectors he important fact to remember is that each column is the unit consumption vector for the appropriate sector If we order the sectors manufacturing, agriculture, and services, then the consumption matrix is 6 6 C 3 3 he intermediate demands created by the production vector x are given by Cx hus in this case the intermediate demand is Cx 3 3 Solve the equation x Cx + d for d: x 6 6 x 9 x 6 x 6x3 d x Cx x 3 x 3 x 8x 8 + x3 3 x3 3 x x + 9x3 his system of equations has the augmented matrix ~ so x (3333, 35, 5)

129 6 Solutions 9 3 Solving as in Exercise : x 6 6 x 9 x 6 x 6x3 8 d x C x x 3 x 3 x 8x + x3 3 x3 3 x x + 9x3 his system of equations has the augmented matrix ~ so x (4, 5, 5) 4 Solving as in Exercise : x 6 6 x 9 x 6 x 6x 8 d x Cx x x x + x x3 3 x3 3 x x + 9x3 his system of equations has the augmented matrix ~ so x (7333, 5, 3) Note: Exercises 4 may be used by students to discover the linearity of the Leontief model x ( I C) d / 3/ 8 5 x ( I C) d 5 8 5/ 45/ 45 7 a From Exercise 5, 6 ( I C) so 6 6 x ( I C ) d C which is the first column of ( I ) b x ( I C ) d 3

c. From Exercise 5, the production x corresponding to the demand d = d1 + d2 follows by linearity. Note that d = d1 + d2. Thus

  x = (I - C)^{-1}d = (I - C)^{-1}(d1 + d2) = (I - C)^{-1}d1 + (I - C)^{-1}d2 = x1 + x2

8. a. Given (I - C)x1 = d1 and (I - C)x2 = d2,

  (I - C)(x1 + x2) = (I - C)x1 + (I - C)x2 = d1 + d2

Thus x1 + x2 is the production level corresponding to a demand of d1 + d2.

b. Since x = (I - C)^{-1}d and d is the first column of I, x will be the first column of (I - C)^{-1}.

9. In this case

  I - C = …

Row reduce [I - C  d] to find

  … ~ …

So x = (88, 3, 3).

10. From Exercise 8, the (i, j) entry in (I - C)^{-1} corresponds to the effect on production of sector i when the final demand for the output of sector j increases by one unit. Since these entries are all positive, an increase in the final demand for any sector will cause the production of all sectors to increase. Thus an increase in the demand for any sector will lead to an increase in the demand for all sectors.

11. (Solution in Study Guide) Following the hint in the text, compute p^Tx in two ways. First, take the transpose of both sides of the price equation, p = C^Tp + v, to obtain

  p^T = (C^Tp + v)^T = (C^Tp)^T + v^T = p^TC + v^T

and right-multiply by x to get

  p^Tx = (p^TC + v^T)x = p^TCx + v^Tx

Another way to compute p^Tx starts with the production equation x = Cx + d. Left-multiply by p^T to get

  p^Tx = p^T(Cx + d) = p^TCx + p^Td

The two expressions for p^Tx show that

  p^TCx + v^Tx = p^TCx + p^Td

so v^Tx = p^Td. The Study Guide also provides a slightly different solution.

12. Since

  D_{m+1} = I + C + C^2 + … + C^{m+1} = I + C(I + C + … + C^m) = I + CD_m

D_{m+1} may be found iteratively by D_{m+1} = I + CD_m.
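The iteration D_{m+1} = I + CD_m (equivalently, x^{(m+1)} = Cx^{(m)} + d for production vectors) is easy to run in MATLAB. This sketch uses a hypothetical 2×2 consumption matrix with column sums less than one, so the iterates converge to (I - C)^{-1}d:

  C = [0.2 0.3; 0.1 0.4];      % hypothetical consumption matrix, column sums < 1
  d = [50; 30];                % hypothetical final demand
  x = d;                       % x(0) = d
  for m = 1:50
      x = C*x + d;             % x(m+1) = C*x(m) + d; partial sums of (I + C + C^2 + ...)d
  end
  norm(x - (eye(2) - C)\d)     % near zero: the iterates approach the exact production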

131 6 Solutions 3 3 [M] he matrix I C is so the augmented matrix [ I C d ] may be row reduced to find ~ so x (99576, 9773, 53, 357, 49488, 39554, 3835) Since the entries in d seem to be accurate to the nearest thousand, a more realistic answer would be x (, 98, 5, 3, 49, 33, 4) 4 [M] he augmented matrix [ I C d ] in this case may be row reduced to find

132 3 CHAPER Matrix Algebra ~ so x (3434, 3687, 6947, 769, 66596, , 843) o the nearest thousand, x (34, 3, 69, 77, 67, 444, 8) 5 [M] Here are the iterations rounded to the nearest tenth: () x (74, 56, 5, 5, 75, 96, 5) () x (89344, 77735, 678, 73347, 3356, 6558, 9378) () x (9468, 87745, , 55, 38598, , 48) (3) x (979 9, 9573, , 5457, 4349, 339, 5988) (4) x (9896, 9533, 47345, 35, 4647, 354, 3855) (5) x (9897, 96353, 4966, 737, , 34796, 34938) (6) x (9966, , 5396, 9967, , 37538, 36559) (7) x (99393, 97378, 56564, 3386, 498, 3849, 374) (8) x (9948, 9757, 5987, 3948, 4935, , 37859) (9) x (99555, , 57 9, 344, , 3993, 3894) () x (995494, 97647, 547, 3399, 49477, , 387) () x (99569, , 5868, 3484, 49453, , 388) () x (995684, , 575, 353, 49469, 395, 3836) so x () is the first vector whose entries are accurate to the nearest thousand he calculation of x () takes about 6 flops, while the row reduction above takes about 55 flops If C is larger than, then fewer flops are required to compute x () by iteration than by row reduction he advantage of the iterative method increases with the size of C he matrix C also becomes more sparse for larger models, so fewer iterations are needed for good accuracy 7 SOLUIONS Notes: he content of this section seems to have universal appeal with students It also provides practice with composition of linear transformations he case study for Chapter concerns computer graphics see this case study (available as a project on the website) for more examples of computer graphics in action he Study Guide encourages the student to examine the book by Foley referenced in the text his section could form the beginning of an independent study on computer graphics with an interested student
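As a concrete preview of the exercises that follow, here is a minimal MATLAB sketch (with a hypothetical triangle) of the homogeneous-coordinate technique: the data matrix gets a row of ones appended, and a single 3×3 matrix then applies a linear transformation and a translation at once:

  D = [5 2 4; 0 3 1];          % hypothetical triangle vertices, one per column
  Dh = [D; ones(1, 3)];        % homogeneous coordinates: append a row of ones
  A = [1 0.25; 0 1];           % a shear, of the kind used in the early exercises
  p = [2; -1];                 % a translation vector
  M = [A p; 0 0 1];            % one matrix for "shear, then translate"
  MD = M*Dh;
  MD(1:2, :)                   % transformed vertices in ordinary coordinates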

133 7 Solutions 33 Refer to Example 5 he representation in homogenous coordinates can be written as a partitioned matrix A of the form, where A is the matrix of the linear transformation Since in this case 5 A, the representation of the transformation with respect to homogenous coordinates is 5 Note: he Study Guide shows the student why the action of action of A on x A on the vector x corresponds to the he matrix of the transformation is A, so the transformed data matrix is AD 3 3 Both the original triangle and the transformed triangle are shown in the following sketch x 5 5 x 3 Following Examples 4 6, / / 3 / / / / / / / / 3/ / / 3 / / 3 / 3/ / 3/ / / 3/ / 3/

7. A 60° rotation about the origin is given in homogeneous coordinates by the matrix

  [1/2, -√3/2, 0; √3/2, 1/2, 0; 0, 0, 1]

To rotate about the point (6, 8), first translate by (-6, -8), then rotate about the origin, then translate back by (6, 8) (see the Practice Problem in this section). A 60° rotation about (6, 8) is thus given in homogeneous coordinates by the matrix

  [1, 0, 6; 0, 1, 8; 0, 0, 1][1/2, -√3/2, 0; √3/2, 1/2, 0; 0, 0, 1][1, 0, -6; 0, 1, -8; 0, 0, 1] = [1/2, -√3/2, 3 + 4√3; √3/2, 1/2, 4 - 3√3; 0, 0, 1]

8. A 45° rotation about the origin is given in homogeneous coordinates by the matrix

  [√2/2, -√2/2, 0; √2/2, √2/2, 0; 0, 0, 1]

To rotate about the point (3, 7), first translate by (-3, -7), then rotate about the origin, then translate back by (3, 7) (see the Practice Problem in this section). A 45° rotation about (3, 7) is thus given in homogeneous coordinates by the matrix

  [1, 0, 3; 0, 1, 7; 0, 0, 1][√2/2, -√2/2, 0; √2/2, √2/2, 0; 0, 0, 1][1, 0, -3; 0, 1, -7; 0, 0, 1] = [√2/2, -√2/2, 3 + 2√2; √2/2, √2/2, 7 - 5√2; 0, 0, 1]

9. To produce each entry in BD two multiplications are necessary. Since BD is a 2×200 matrix, it will take 2·2·200 = 800 multiplications to compute BD. By the same reasoning it will take 800 multiplications to compute A(BD). Thus to compute A(BD) from the beginning will take 1600 multiplications.

To compute the matrix AB it will take 8 multiplications, and to compute (AB)D it will take 800 multiplications. Thus to compute (AB)D from the beginning will take 808 multiplications.

For computer graphics calculations that require applying multiple transformations to data matrices, it is thus more efficient to compute the product of the transformation matrices before applying the result to the data matrix.

10. Let the transformation matrices in homogeneous coordinates for the dilation, rotation, and translation be called respectively D, R, and T. Then, for some value of s, φ, h, and k,

  D = [s, 0, 0; 0, s, 0; 0, 0, 1],  R = [cos φ, -sin φ, 0; sin φ, cos φ, 0; 0, 0, 1],  T = [1, 0, h; 0, 1, k; 0, 0, 1]

Compute the products of these matrices:

  DR = [s cos φ, -s sin φ, 0; s sin φ, s cos φ, 0; 0, 0, 1],  RD = [s cos φ, -s sin φ, 0; s sin φ, s cos φ, 0; 0, 0, 1]

135 7 Solutions 35 s sh s h D s sk, D s k cosϕ sinϕ hcosϕ ksinϕ cosϕ sinϕ h R sinϕ cosϕ hsinϕ kcos ϕ, R sinϕ cosϕ k + Since DR RD, D D and R R, D and R commute, D and do not commute and R and do not commute o simplify A A completely, the following trigonometric identities will be needed: sinϕ cosϕ tan ϕ cos ϕ cos ϕ sin ϕ sinϕ sin ϕ cos ϕ cosϕ cosϕ cosϕ cosϕ sec ϕ tan ϕ sin ϕ sin ϕ cos ϕ Using these identities, secϕ tanϕ AA sinϕ cosϕ secϕ tanϕsinϕ tanϕcosϕ sinϕ cosϕ cosϕ sinϕ sinϕ cosϕ which is the transformation matrix in homogeneous coordinates for a rotation in o simplify this product completely, the following trigonometric identity will be needed: cosϕ sinϕ tan ϕ / sinϕ + cosϕ his identity has two important consequences: cosϕ (tan ϕ/ )(sin ϕ) sinϕ cosϕ sinϕ sinϕ (cos ϕ)( tan ϕ/ ) tan ϕ/ (cosϕ + ) tan ϕ/ (cosϕ + ) sinϕ + cosϕ he product may be computed and simplified using these results: tan ϕ/ tan ϕ/ sinϕ (tan ϕ/ )(sin ϕ) tan ϕ/ tan ϕ/ sinϕ

136 36 CHAPER Matrix Algebra cosϕ tan ϕ/ tan ϕ/ sinϕ cos ϕ (cos ϕ)( tan ϕ/ ) tan ϕ/ sin ϕ (sin ϕ)(tan ϕ/ ) + cosϕ sinϕ sinϕ cosϕ which is the transformation matrix in homogeneous coordinates for a rotation in 3 Consider first applying the linear transformation on whose matrix is A, then applying a translation by the vector p to the result he matrix representation in homogeneous coordinates of the linear A transformation is, while the matrix representation in homogeneous coordinates of the I p translation is Applying these transformations in order leads to a transformation whose matrix representation in homogeneous coordinates is I p A A p which is the desired matrix 4 he matrix for the transformation in Exercise 7 was found to be / 3 / / / A p his matrix is of the form, where / 3 / A, + p 3/ / By Exercise 3, this matrix may be written as I p A that is, the composition of a linear transformation on and a translation he matrix A is the matrix of a rotation about the origin in hus the transformation in Exercise 7 is the composition of a rotation about the origin and a translation by p 4 3 3
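A numerical check of Exercise 7's composite matrix takes three lines in MATLAB; it also illustrates the [A p; 0 1] form discussed in Exercises 13-14:

  phi = pi/3;                                              % 60 degrees
  R = [cos(phi) -sin(phi) 0; sin(phi) cos(phi) 0; 0 0 1];  % rotation about the origin
  T = [1 0 6; 0 1 8; 0 0 1];                               % translation by (6, 8)
  M = T*R/T;                                               % T * R * inv(T): rotation about (6, 8)
  M*[6; 8; 1]                                              % the center (6, 8) maps to itself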

137 7 Solutions 37 5 Since ( X, Y, Z, H ) (,,, ), the corresponding point in X Y Z 4 8 ( xyz,, ),,,, (, 6,3) H H H has coordinates 6 he homogeneous coordinates (,, 3, 4) represent the point (/4, /4,3/4) (/4, /,3/4) while the homogeneous coordinates (,, 3, 4) represent the point (/ 4, / 4, 3/ 4) (/ 4, /, 3/ 4) so the two sets of homogeneous coordinates represent the same point in 3 7 Follow Example 7a by first constructing that 3 3 matrix for this rotation he vector e is not changed by this rotation he vector e is rotated 6 toward the positive z-axis, ending up at the point (, cos 6, sin 6 ) (, /, 3 / ) he vector e 3 is rotated 6 toward the negative y-axis, stopping at the point (, cos 5, sin 5 ) (, 3 /, / ) he matrix A for this rotation is thus A / 3/ 3/ / so in homogeneous coordinates the transformation is represented by the matrix A / 3/ 3/ / 8 First construct the 3 3 matrix for the rotation he vector e is rotated 3 toward the negative y-axis, ending up at the point (cos( 3), sin ( 3), ) ( 3/, /, ) he vector e is rotated 6 toward the positive x-axis, ending up at the point (cos 6, sin 6, ) (/, 3 /, ) he vector e 3 is not changed by the rotation he matrix A for the rotation is thus 3/ / A / 3 / so in homogeneous coordinates the rotation is represented by the matrix 3/ / A / 3/ Following Example 7b, in homogeneous coordinates the translation by the vector (5,, ) is represented by the matrix 5

138 38 CHAPER Matrix Algebra hus the complete transformation is represented in homogeneous coordinates by the matrix 5 3/ / 3/ / 5 / 3 / / 3 / 9 Referring to the material preceding Example 8 in the text, we find that the matrix P that performs a perspective projection with center of projection (,, ) is he homogeneous coordinates of the vertices of the triangle may be written as (4,, 4, ), (6, 4,, ), and (,, 6, ), so the data matrix for S is and the data matrix for the transformed triangle is Finally, the columns of this matrix may be converted from homogeneous coordinates by dividing by the final coordinate: (4,,, 6) (4/6, /6, /6) (7,, ) (6, 4,, 8) (6/8, /8, /8) (75, 5, ) (,,,4) (/4,/4,/4) (5,5,) So the coordinates of the vertices of the transformed triangle are (7,, ), (75, 5, ), and (5, 5, ) As in the previous exercise, the matrix P that performs the perspective projection is he homogeneous coordinates of the vertices of the triangle may be written as (9, 3, 5, ), (, 8,, ), and (8, 7,, ), so the data matrix for S is

139 8 Solutions 39 and the data matrix for the transformed triangle is Finally, the columns of this matrix may be converted from homogeneous coordinates by dividing by the final coordinate: (9,3,,5) (9/5,3/5, /5) (6,, ) (, 8,, 8) ( /8, 8/8, /8) (5,, ) (8, 7,, 9) (8/9, 7 /9, /9) (, 3, ) So the coordinates of the vertices of the transformed triangle are (6,, ), (5,, ), and (, 3, ) [M] Solve the given equation for the vector (R, G, B), giving R X X G Y Y B Z Z [M] Solve the given equation for the vector (R, G, B), giving R Y Y G I I B 58 3 Q Q 8 SOLUIONS Notes: Cover this section only if you plan to skip most or all of Chapter 4 his section and the next cover everything you need from Sections 4 46 to discuss the topics in Section 49 and Chapters 5 7 (except for the general inner product spaces in Sections 67 and 68) Students may use Section 4 for review, particularly the able near the end of the section (he final subsection on linear transformations should be omitted) Example 6 and the associated exercises are critical for work with eigenspaces in Chapters 5 and 7 Exercises 3 36 review the Invertible Matrix heorem New statements will be added to this theorem in Section 9 Key Exercises: 5 and 3 6 he set is closed under sums but not under multiplication by a negative scalar A counterexample to the subspace condition is shown at the right u ( )u Note: Most students prefer to give a geometric counterexample, but some may choose an algebraic calculation he four exercises here should help students develop an understanding of subspaces, but they may be insufficient if you want students to be able to analyze an unfamiliar set on an exam Developing that skill seems more appropriate for classes covering Sections 4 46

140 4 CHAPER Matrix Algebra he set is closed under scalar multiples but not sums For example, the sum of the vectors u and v shown here is not in H u v u + v 3 No he set is not closed under sums or scalar multiples he subset consisting of the points on the line x x is a subspace, so any counterexample must use at least one point not on this line Here are two counterexamples to the subspace conditions: v u u + v 3u 4 No he set is closed under sums, but not under multiplication by a negative scalar u ( )u 5 he vector w is in the subspace generated by v and v if and only if the vector equation x v + x v w is consistent he row operations below show that w is not in the subspace generated by v and v [ v v w ]~ 3 5 ~ ~ he vector u is in the subspace generated by {v, v, v 3 } if and only if the vector equation x v + x v + x 3 v 3 u is consistent he row operations below show that u is not in the subspace generated by {v, v, v 3 } [ v v v3 u ]~ ~ ~ Note: For a quiz, you could use w (, 3,, 8), which is in Span{v, v, v 3 } 7 a here are three vectors: v, v, and v 3 in the set {v, v, v 3 } b here are infinitely many vectors in Span{v, v, v 3 } Col A c Deciding whether p is in Col A requires calculation: [ A p ]~ ~ 4 4 ~ he equation Ax p has a solution, so p is in Col A

8. [A p] = … ~ … ~ …. Yes, the augmented matrix [A p] corresponds to a consistent system, so p is in Col A.

9. To determine whether p is in Nul A, simply compute Ap. Using A and p as in Exercise 7, Ap = …. Since Ap ≠ 0, p is not in Nul A.

10. To determine whether u is in Nul A, simply compute Au. Using A as in Exercise 7 and the given vector u, Au = 0. Yes, u is in Nul A.

11. p = 4 and q = 3. Nul A is a subspace of R^4 because solutions of Ax = 0 must have 4 entries, to match the columns of A. Col A is a subspace of R^3 because each column vector has 3 entries.

12. p = 3 and q = 4. Nul A is a subspace of R^3 because solutions of Ax = 0 must have 3 entries, to match the columns of A. Col A is a subspace of R^4 because each column vector has 4 entries.

13. To produce a vector in Col A, select any column of A. For Nul A, solve the equation Ax = 0. (Include an augmented column of zeros, to avoid errors.)

  … ~ … ~ … ~ …

The general solution is x1 = …x3 + …x4, and x2 = …x3 + 4x4, with x3 and x4 free. The general solution in parametric vector form is not needed. All that is required here is one nonzero vector, so choose any values for x3 and x4 (not both zero). For instance, set x3 = 1 and x4 = 1 to obtain a nonzero vector in Nul A.

Note: Section 2.8 of the Study Guide introduces the ref command (or rref, depending on the technology), which produces the reduced echelon form of a matrix. This will greatly speed up homework for students who have a matrix program available.

14. To produce a vector in Col A, select any column of A. For Nul A, solve the equation Ax = 0:

  … ~ … ~ …

The general solution is x1 = (1/3)x3 and x2 = (-5/3)x3, with x3 free. The general solution in parametric vector form is not needed. All that is required here is one nonzero vector, so choose any nonzero value of x3. For instance, set x3 = 3 to obtain the vector (1, -5, 3) in Nul A.
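With MATLAB's rref (or null), the Nul A computations in Exercises 13-14 reduce to one command each. The matrix below is hypothetical:

  A = [2 4 -2 1; -2 -5 7 3; 3 7 -8 6];   % hypothetical 3x4 matrix
  rref([A zeros(3,1)])                   % reduced echelon form of the augmented matrix
  N = null(A, 'r')                       % "rational" basis for Nul A
  norm(A*N)                              % near zero: each column of N solves Ax = 0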

15. Yes. Let A be the matrix whose columns are the vectors given. Then A is invertible because its determinant is nonzero, and so its columns form a basis for R^2, by the Invertible Matrix Theorem (or by Example 5). (Other reasons for the invertibility of A could be given.)

16. No. One vector is a multiple of the other, so they are linearly dependent and hence cannot be a basis for any subspace.

17. No. Place the three vectors into a 3×3 matrix A and determine whether A is invertible:

  A = … ~ … ~ … ~ …

The matrix A has only two pivot positions, so A is not invertible and its columns do not form a basis for R^3 (as pointed out in Example 5).

18. Yes. Place the three vectors into a 3×3 matrix A and determine whether A is invertible:

  A = … ~ … ~ …

The matrix A has three pivot positions, so A is invertible by the IMT and its columns form a basis for R^3 (as pointed out in Example 5).

19. No. The vectors cannot be a basis for R^3 because they only span a plane in R^3. Or, point out that the two columns of the matrix cannot possibly span R^3 because the matrix cannot have a pivot in every row. So the columns are not a basis for R^3.

Note: The Study Guide warns students not to say that the two vectors here are a basis for R^2.

20. No. The vectors are linearly dependent because there are more vectors in the set than entries in each vector (Theorem 8 in Section 1.7). So the vectors cannot be a basis for any subspace.

21. a. False. See the definition at the beginning of the section. The critical phrases "for each" are missing.
    b. True. See the paragraph before Example 4.
    c. False. See Theorem 12. The null space is a subspace of R^n, not R^m.
    d. True. See Example 5.
    e. True. See the first part of the solution of Example 8.

22. a. False. See the definition at the beginning of the section. The condition about the zero vector is only one of the conditions for a subspace.
    b. True. See Example 3.
    c. True. See Theorem 12.
    d. False. See the paragraph after Example 4.
    e. False. See the Warning that follows Theorem 13.
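The pivot test used in Exercises 15-18 is quick to automate in MATLAB; here is a hedged sketch with hypothetical vectors:

  v1 = [1; 0; -2]; v2 = [3; 1; -1]; v3 = [2; -1; 5];   % hypothetical vectors
  A = [v1 v2 v3];
  rank(A)      % equals 3 exactly when A is invertible, i.e., when {v1, v2, v3} is a basis for R^3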

143 8 Solutions (Solution in Study Guide) A 6 5 ~ 5 6 he echelon form identifies columns and as the pivot columns A basis for Col A uses columns and of A: 6, 5 his is not 3 4 the only choice, but it is the standard choice A wrong choice is to select columns and of the echelon form hese columns have zero in the third entry and could not possibly generate the columns displayed in A 4 For Nul A, obtain the reduced (and augmented) echelon form for Ax : 4 7 x 4x3 + 7x4 5 6 his corresponds to: x + 5x3 6x4 Solve for the basic variables and write the solution of Ax in parametric vector form: x 4x3 7x x 5x3 + 6x x 3 x Basis for Nul A:, x3 x3 x4 x4 Notes: () A basis is a set of vectors For simplicity, the answers here and in the text list the vectors without enclosing the list inside set brackets his style is also easier for students I am careful, however, to distinguish between a matrix and the set or list whose elements are the columns of the matrix () Recall from Chapter that students are encouraged to use the augmented matrix when solving Ax, to avoid the common error of misinterpreting the reduced echelon form of A as itself the augmented matrix for a nonhomogeneous system (3) Because the concept of a basis is just being introduced, I insist that my students write the parametric vector form of the solution of Ax hey see how the basis vectors span the solution space and are obviously linearly independent A shortcut, which some instructors might introduce later in the course, is only to solve for the basic variables and to produce each basis vector one at a time Namely, set all free variables equal to zero except for one free variable, and set that variable equal to a suitable nonzero number A ~ 4 5 Basis for Col A:, For Nul A, obtain the reduced (and augmented) echelon form for Ax : 3 5 x 3x + 5x4 5 his corresponds to: x3 + 5x4 Solve for the basic variables and write the solution of Ax in parametric vector form: x 3x 5x4 3 5 x x Basis for Nul A: x + x 4 x3 5x4 5 x4 x4 3 5, 5

144 44 CHAPER Matrix Algebra A ~ Basis for Col A: For Nul A, obtain the reduced (and augmented) echelon form for Ax : [ A ]~ 4 x x + 7x 3 5 x + 5 x3 5x5 x + 4x ,, x x3 7x5 7 x 5 x3 5x hesolution of Ax in parametric vector form : x3 x3 x3 + x5 x 4x x 5 x 5 Basis for Nul A: {u, v} u v Note: he solution above illustrates how students could write a solution on an exam, when time is precious, namely, describe the basis by giving names to appropriate vectors found in the calculations A ~ Basis for Col A: For Nul A, [ A ] ~ he solution of Ax in parametric vector form: 4 5 x 5 x 5 x + 3x + 5x 5 x + x + 5x 3 5 x + x ,, x 3x3 5x5 3 5 x x3 5x 5 5 x3 x3 x3 + x5 x x Basis for Nul A: {u, v} u v 7 Construct a nonzero 3 3 matrix A and construct b to be almost any convenient linear combination of the columns of A
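A quick MATLAB check that a chosen b really lies in Col A, in the spirit of Exercises 27-29 (hypothetical entries):

  A = [1 2 0; 2 4 1; 3 6 -1];      % hypothetical 3x3 matrix (columns dependent: col 2 = 2*col 1)
  b = A*[1; 1; 1];                 % any linear combination of the columns lies in Col A
  rank([A b]) == rank(A)           % true exactly when Ax = b is consistent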

28. The easiest construction is to write a 3×3 matrix in echelon form that has only two pivots, and let b be any vector in R^3 whose third entry is nonzero.

29. (Solution in Study Guide) A simple construction is to write any nonzero 3×3 matrix whose columns are obviously linearly dependent, and then make b a vector of weights from a linear dependence relation among the columns. For instance, if the first two columns of A are equal, then b could be (1, -1, 0).

30. Since Col A is the set of all linear combinations of a1, …, ap, the set {a1, …, ap} spans Col A. Because {a1, …, ap} is also linearly independent, it is a basis for Col A. (There is no need to discuss pivot columns and Theorem 13, though a proof could be given using this information.)

31. If Col F ≠ R^5, then the columns of F do not span R^5. Since F is square, the IMT shows that F is not invertible and the equation Fx = 0 has a nontrivial solution. That is, Nul F contains a nonzero vector. Another way to describe this is to write Nul F ≠ {0}.

32. If Nul R contains nonzero vectors, then the equation Rx = 0 has nontrivial solutions. Since R is square, the IMT shows that R is not invertible and the columns of R do not span R^6. So Col R is a subspace of R^6, but Col R ≠ R^6.

33. If Col Q = R^4, then the columns of Q span R^4. Since Q is square, the IMT shows that Q is invertible and the equation Qx = b has a solution for each b in R^4. Also, each solution is unique, by Theorem 5 in Section 2.2.

34. If Nul P = {0}, then the equation Px = 0 has only the trivial solution. Since P is square, the IMT shows that P is invertible and the equation Px = b has a solution for each b in R^5. Also, each solution is unique, by Theorem 5 in Section 2.2.

35. If the columns of B are linearly independent, then the equation Bx = 0 has only the trivial (zero) solution. That is, Nul B = {0}.

36. If the columns of A form a basis, they are linearly independent. This means that A cannot have more columns than rows. Since the columns also span R^m, A must have a pivot in each row, which means that A cannot have more rows than columns. As a result, A must be a square matrix.

37. [M] Use the command that produces the reduced echelon form in one step (ref or rref, depending on the program). See Section 2.8 in the Study Guide for details. By Theorem 13, the pivot columns of A form a basis for Col A.

  A = … ~ …  Basis for Col A: …

For Nul A, obtain the solution of Ax = 0 in parametric vector form:

  Solution: x1 = …x3 + …x4 + …x5, x2 = …x3 + …x4 + …x5; x3, x4, and x5 are free

146 46 CHAPER Matrix Algebra x 5x + 45x 35x x 5x3 5x4 5x x3 x3 x3 x4 x5 x4 x4 x 5 x 5 x + + x 3 u + x 4 v + x 5 w By the argument in Example 6, a basis for Nul A is {u, v, w} [M] A ~ he pivot columns of A form a basis for Col A:,, x + 6x4 + x5 For Nul A, solve Ax : x 54x4 39x5 x 47x 94x x 6x4 x5 x 54x4 + 39x5 Solution: x3 47x4 + 94x5 x4 and x5 are free x 6x4 x5 6 x 54x4 39x x x3 47x4 + 94x5 x x5 94 x 4 u + x 5 v x4 x4 x 5 x 5 By the method of Example 6, a basis for Nul A is {u, v} Note: he Study Guide for Section 8 gives directions for students to construct a review sheet for the concept of a subspace and the two main types of subspaces, Col A and Nul A, and a review sheet for the concept of a basis I encourage you to consider making this an assignment for your class 9 SOLUIONS Notes: his section contains the ideas from Sections that are needed for later work in Chapters 5 7 If you have time, you can enrich the geometric content of coordinate systems by discussing crystal lattices (Example 3 and Exercises 35 and 36 in Section 44) Some students might profit from reading Examples 3 from Section 44 and Examples, 4, and 5 from Section 46 Section 45 is probably not a good reference for students who have not considered general vector spaces Coordinate vectors are important mainly to give an intuitive and geometric feeling for the isomorphism between a k-dimensional subspace and R k If you plan to omit Sections 54, 56, 57 and 7, you can safely omit Exercises 8 here

147 9 Solutions 47 Exercises 6 may be assigned after students have read as far as Example Exercises 9 and use the Rank heorem, but they can also be assigned before the Rank heorem is discussed he Rank heorem in this section omits the nontrivial fact about Row A which is included in the Rank heorem of Section 46, but that is used only in Section 74 he row space itself can be introduced in Section 6, for use in Chapter 6 and Section 74 Exercises 9 6 include important review of techniques taught in Section 8 (and in Sections and 5) hey make good test questions because they require little arithmetic My students need the practice here Nearly every time I teach the course and start Chapter 5, I find that at least one or two students cannot find a basis for a two-dimensional eigenspace! 3 If [x] B, then x is formed from b and b using weights 3 and : 7 x 3b + b 3 + x 3b b b b b x x If [x] B 3, then x is formed from b and b using weights and 3: x ( )b + 3b 3 ( ) 3 + b x b b 3b x x b 3 o find c and c that satisfy x c b + c b, row reduce the augmented matrix: [ b b x ] ~ ~ Or, one can write a matrix equation as suggested by Exercise 7 and solve using the matrix inverse In either case, c 7 [x] B c As in Exercise 3, [ b b x ] ~ ~ , and c 5 [x] B c /4 [ b b x ] 5 7 ~ 8 ~ 5/4 [x] B c /4 c 5/4

148 48 CHAPER Matrix Algebra / [ b b x ] 5 ~ ~ / c 5/ [x] B c / 7 Fig suggests that w b b and x 5b + 5b, in which case, [w] B and [x] B 5 5 o confirm [x] B, compute b + 5 b x x y b b b x w z b Figure Figure Note: Figures and display what Section 44 calls B-graph paper 8 Fig suggests that x b b, y 5b + b, and z b 5b If so, then [x] B, [y] B 5, and [z] B o confirm [y] B and [z] B, compute b + b + y and b 5 b 5 5 z he information A ~ is enough to see that columns, 3, and 4 of A form a basis for Col A:,, Columns, and 4, of the echelon form certain cannot span Col A since those vectors all have zero in their fourth entries For Nul A, use the reduced echelon form, augmented with a zero column to insure that the equation Ax is kept in mind:

149 9 Solutions 49 3 x 3x x 3x 3 x3 x x, x x So x4 x3 x is the free variable x 4 3 is a basis for Nul A From this information, dim Col A 3 (because A has three pivot columns) and dim Nul A (because the equation Ax has only one free variable) he information A ~ shows that columns,, and 4 of A form a basis for Col A:,, For Nul A, 4 3 x + 3x3 3 7 x [ A ] ~ 3x3 7x5 x4 x5 x3 and x5 are free variables x 3x3 3 3 x 3x3 + 7x x x3 x3 x3 + x5 Basis for Nul A:, x4 x5 x 5 x 5 From this, dim Col A 3 and dim Nul A he information A ~ and 4 of A form a basis for Col A:,, For Nul A, [ A ] ~ x 9x + 5x 3 5 x x + x 3x 3 5 and x 3 5 x + x 4 5 are free variables shows that columns,,

150 5 CHAPER Matrix Algebra x 9x3 5x5 9 5 x x3 3x x x3 x3 x3 + x5 Basis for Nul A: x x 4 5 x 5 x 5 From this, dim Col A 3 and dim Nul A 9 5 3, he information A ~ and 5 of A form a basis for Col A:,, For Nul A [ A ] 5 ~ x + x 5x x 4 and x 4 x x 3 4 x 5 are free variables x x + 5x4 5 x x x x3 x4 x + x4 Basis for Nul A: x4 x4 x 5 From this, dim Col A 3 and dim Nul A 5, shows that columns, 3, 3 he four vectors span the column space H of a matrix that can be reduced to echelon form: ~ ~ ~ Columns, 3, and 4 of the original matrix form a basis for H, so dim H 3 Note: Either Exercise 3 or 4 should be assigned because there are always one or two students who confuse Col A with Nul A Or, they wrongly connect set of linear combinations with parametric vector form (of the general solution of Ax ) 4 he five vectors span the column space H of a matrix that can be reduced to echelon form: ~ ~ Columns and of the original matrix form a basis for H, so dim H

15. Col A = R^3, because A has a pivot in each row and so the columns of A span R^3. Nul A cannot equal R^2, because Nul A is a subspace of R^5. It is true, however, that Nul A is two-dimensional. Reason: the equation Ax = 0 has two free variables, because A has five columns and only three of them are pivot columns.

16. Col A cannot be R^3 because the columns of A have four entries. (In fact, Col A is a 3-dimensional subspace of R^4, because the 3 pivot columns of A form a basis for Col A.) Since A has 7 columns and 3 pivot columns, the equation Ax = 0 has 4 free variables. So, dim Nul A = 4.

17. a. True. This is the definition of a B-coordinate vector.
    b. False. Dimension is defined only for a subspace. A line must be through the origin in R^n to be a subspace of R^n.
    c. True. The sentence before Example 1 concludes that the number of pivot columns of A is the rank of A, which is the dimension of Col A by definition.
    d. True. This is equivalent to the Rank Theorem because rank A is the dimension of Col A.
    e. True, by the Basis Theorem. In this case, the spanning set is automatically a linearly independent set.

18. a. True. This fact is justified in the second paragraph of this section.
    b. True. See the second paragraph after Fig. 1.
    c. False. The dimension of Nul A is the number of free variables in the equation Ax = 0. See Example 2.
    d. True, by the definition of rank.
    e. True, by the Basis Theorem. In this case, the linearly independent set is automatically a spanning set.

19. The fact that the solution space of Ax = 0 has a basis of three vectors means that dim Nul A = 3. Since a 5×7 matrix A has 7 columns, the Rank Theorem shows that rank A = 7 - dim Nul A = 4.

Note: One can solve Exercises 19-22 without explicit reference to the Rank Theorem. For instance, in Exercise 19, if the null space of a matrix A is three-dimensional, then the equation Ax = 0 has three free variables, and three of the columns of A are nonpivot columns. Since a 5×7 matrix has seven columns, A must have four pivot columns (which form a basis of Col A). So rank A = dim Col A = 4.

20. A 4×5 matrix A has 5 columns. By the Rank Theorem, rank A = 5 - dim Nul A. Since the null space is three-dimensional, rank A = 2.

21. A 7×6 matrix has 6 columns. By the Rank Theorem, dim Nul A = 6 - rank A. Since the rank is four, dim Nul A = 2. That is, the dimension of the solution space of Ax = 0 is two.

22. The wording of this problem was poor in the first printing, because the phrase "it spans a four-dimensional subspace" was never defined. Here is a revision that I will put in later printings of the third edition: "Show that a set {v1, …, v5} in R^n is linearly dependent if dim Span{v1, …, v5} = 4."

Solution: Suppose that the subspace H = Span{v1, …, v5} is four-dimensional. If {v1, …, v5} were linearly independent, it would be a basis for H. This is impossible, by the statement just before the definition of dimension in Section 2.9, which essentially says that every basis of a p-dimensional subspace consists of p vectors. Thus, {v1, …, v5} must be linearly dependent.

23. A 3×4 matrix A with a two-dimensional column space has two pivot columns. The remaining two columns will correspond to free variables in the equation Ax = 0. So the desired construction is possible.

152 5 CHAPER Matrix Algebra * * * here are six possible locations for the two pivot columns, one of which is * * A simple construction is to take two vectors in R 3 that are obviously not linearly dependent, and put two copies of these two vectors in any order he resulting matrix will obviously have a two-dimensional column space here is no need to worry about whether Nul A has the correct dimension, since this is guaranteed by the Rank heorem: dim Nul A 4 rank A 4 A rank matrix has a one-dimensional column space Every column is a multiple of some fixed vector o construct a 4 3 matrix, choose any nonzero vector in R 4, and use it for one column Choose any multiples of the vector for the other two columns 5 he p columns of A span Col A by definition If dim Col A p, then the spanning set of p columns is automatically a basis for Col A, by the Basis heorem In particular, the columns are linearly independent 6 If columns a, a 3, a 5, and a 6 of A are linearly independent and if dim Col A 4, then {a, a 3, a 5, a 6 } is a linearly independent set in a 4-dimensional column space By the Basis heorem, this set of four vectors is a basis for the column space 7 a Start with B [b b p ] and A [a a q ], where q > p For j,, q, the vector a j is in W Since the columns of B span W, the vector a j is in the column space of B hat is, a j Bc j for some vector c j of weights Note that c j is in R p because B has p columns b Let C [c c q ] hen C is a p q matrix because each of the q columns is in R p By hypothesis, q is larger than p, so C has more columns than rows By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in R q such that Cu c From part (a) and the definition of matrix multiplication A [a a q ] [Bc Bc q ] BC From part (b), Au (BC)u B(Cu) B Since u is nonzero, the columns of A are linearly dependent 8 If A contained more vectors than B, then A would be linearly dependent, by Exercise 7, because B spans W Repeat the argument with B and A interchanged to conclude that B cannot contain more vectors than A 9 [M] Apply the matrix command ref or rref to the matrix [v v x]: ~ he equation c v + c v x is consistent, so x is in the subspace H he decimal approximations suggest c 5/3 and c 8/3, and it can be checked that these values are precise hus, the B-coordinate of x is ( 5/3, 8/3) 3 [M] Apply the matrix command ref or rref to the matrix [v v v 3 x]: ~

The first three columns of [v1 v2 v3 x] are pivot columns, so v1, v2, and v3 are linearly independent. Thus v1, v2, and v3 form a basis B for the subspace H which they span. View [v1 v2 v3 x] as an augmented matrix for c1v1 + c2v2 + c3v3 = x. The reduced echelon form shows that x is in H and

  [x]_B = …

Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix Theorem that have been given so far. The format is the same as that used in Section 2.3, with three columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and are related to uniqueness concepts. Four statements are included that are not in the text's official list of statements, to give more symmetry to the three columns. The Study Guide section also contains directions for making a review sheet for dimension and rank.

Chapter 2 SUPPLEMENTARY EXERCISES

1. a. True. If A and B are m×n matrices, then B^T has as many rows as A has columns, so AB^T is defined. Also, A^TB is defined because A^T has m columns and B has m rows.
b. False. B must have two columns. A has as many columns as B has rows.
c. True. The ith row of A has the form (0, …, d_i, …, 0). So the ith row of AB is (0, …, d_i, …, 0)B, which is d_i times the ith row of B.
d. False. Take the zero matrix for B. Or, construct a matrix B such that the equation Bx = 0 has nontrivial solutions, and construct C and D so that C ≠ D and the columns of C - D satisfy the equation Bx = 0. Then B(C - D) = 0 and BC = BD.
e. False. Counterexample: A = … and C = ….
f. False. (A + B)(A - B) = A^2 - AB + BA - B^2. This equals A^2 - B^2 if and only if A commutes with B.
g. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange matrices have n nonzero entries.
h. True. The transpose of an elementary matrix is an elementary matrix of the same type.
i. True. An n×n elementary matrix is obtained by a row operation on I_n.
j. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not every square matrix is invertible.
k. True. If A is 3×3 with three pivot positions, then A is row equivalent to I_3.
l. False. A must be square in order to conclude from the equation AB = I that A is invertible.
m. False. AB is invertible, but (AB)^{-1} = B^{-1}A^{-1}, and this product is not always equal to A^{-1}B^{-1}.
n. True. Given AB = BA, left-multiply by A^{-1} to get B = A^{-1}BA, and then right-multiply by A^{-1} to obtain BA^{-1} = A^{-1}B.
o. False. The correct equation is (rA)^{-1} = r^{-1}A^{-1}, because (rA)(r^{-1}A^{-1}) = (r·r^{-1})(AA^{-1}) = 1·I = I.
p. True. If the equation Ax = 0 has a unique solution, then there are no free variables in this equation, which means that A must have three pivot positions (since A is 3×3). By the Invertible Matrix Theorem, A is invertible.

2. C = (C^(-1))^(-1) = …

3. A = …, A^2 = … Next, (I − A)(I + A + A^2) = I + A + A^2 − A(I + A + A^2) = I + A + A^2 − A − A^2 − A^3 = I − A^3. Since A^3 = 0, (I − A)(I + A + A^2) = I.

4. From Exercise 3, the inverse of I − A is probably I + A + A^2 + ⋯ + A^(n−1). To verify this, compute
(I − A)(I + A + ⋯ + A^(n−1)) = I + A + ⋯ + A^(n−1) − A(I + A + ⋯ + A^(n−1)) = I − A·A^(n−1) = I − A^n.
If A^n = 0, then the matrix B = I + A + A^2 + ⋯ + A^(n−1) satisfies (I − A)B = I. Since I − A and B are square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I − A.

5. A^2 = 2A − I. Multiply by A: A^3 = 2A^2 − A. Substitute A^2 = 2A − I: A^3 = 2(2A − I) − A = 3A − 2I. Multiply by A again: A^4 = A(3A − 2I) = 3A^2 − 2A. Substitute the identity A^2 = 2A − I again: finally, A^4 = 3(2A − I) − 2A = 4A − 3I.

6. Let A = … and B = … By direct computation, A^2 = I, B^2 = I, and AB ≠ BA.

7. (Partial answer in Study Guide) Since A^(-1)B is the solution of AX = B, row reduction of [A B] to [I X] will produce X = A^(-1)B. See Exercise … in Section 2.2.
[A B] ~ … ~ … Thus, A^(-1)B = …

8. By definition of matrix multiplication, the matrix A satisfies A·… = …

Right-multiply both sides by the inverse of … The left side becomes A. Thus, A = …

9. Given AB and B, notice that (AB)B^(-1) = A(BB^(-1)) = A. Since det B = 7 − 6 = 1, B^(-1) = … and A = (AB)B^(-1) = …
Note: Variants of this question make simple exam questions.

10. Since A is invertible, so is A^T, by the Invertible Matrix Theorem. Then A^T A is the product of invertible matrices and so is invertible. Thus, the formula (A^T A)^(-1)A^T makes sense. By Theorem 6 in Section 2.2,
(A^T A)^(-1)·A^T = A^(-1)(A^T)^(-1)·A^T = A^(-1)·I = A^(-1).
An alternative calculation: (A^T A)^(-1)A^T·A = (A^T A)^(-1)(A^T A) = I. Since A is invertible, this equation shows that its inverse is (A^T A)^(-1)A^T.

11. a. For i = 1, …, n, p(x_i) = c0 + c1·x_i + ⋯ + c_(n−1)·x_i^(n−1) = row_i(V)·c. By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c was chosen to satisfy Vc = y,
row_i(V)·c = row_i(Vc) = row_i(y) = y_i.
Thus p(x_i) = y_i. To summarize, the entries in Vc are the values of the polynomial p(x) at x_1, …, x_n.
b. Suppose x_1, …, x_n are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the coefficients of a polynomial whose value is zero at the distinct points x_1, …, x_n. However, a nonzero polynomial of degree n − 1 cannot have n zeros, so the polynomial must be identically zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly independent.
c. (Solution in Study Guide) When x_1, …, x_n are distinct, the columns of V are linearly independent, by (b). By the Invertible Matrix Theorem, V is invertible and its columns span R^n. So, for every y = (y_1, …, y_n) in R^n, there is a vector c such that Vc = y. Let p be the polynomial whose coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x_1, y_1), …, (x_n, y_n).

12. If A = LU, then col1(A) = L·col1(U). Since col1(U) has a zero in every entry except possibly the first, L·col1(U) is a linear combination of the columns of L in which all weights except possibly the first are zero. So col1(A) is a multiple of col1(L). Similarly, col2(A) = L·col2(U), which is a linear combination of the columns of L using the first two entries in col2(U) as weights, because the other entries in col2(U) are zero. Thus col2(A) is a linear combination of the first two columns of L.

13. a. P^2 = (uu^T)(uu^T) = u(u^T u)u^T = u(1)u^T = P, because u satisfies u^T u = 1.
b. P^T = (uu^T)^T = (u^T)^T u^T = uu^T = P.
c. Q^2 = (I − 2P)(I − 2P) = I − I(2P) − 2P·I + 2P(2P) = I − 4P + 4P^2 = I, because of part (a).
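Note: The three identities in Exercise 13 are easy to spot-check in MATLAB. This is a sketch, not part of the text's solution; the particular unit vector u chosen here is an arbitrary assumption.

    u = [1; -2; 2]/3;          % any unit vector in R^3 (u'*u = 1)
    P = u*u';                  % P = u u^T
    Q = eye(3) - 2*P;
    norm(P*P - P)              % 0, confirming P^2 = P (part a)
    norm(P' - P)               % 0, confirming P^T = P (part b)
    norm(Q*Q - eye(3))         % 0, confirming Q^2 = I (part c)

All three displayed norms should be zero up to roundoff.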

14. Given u = …, define P and Q as in Exercise 13 by P = uu^T = … and Q = I − 2P = … If x = …, then Px = … and Qx = …

15. Left-multiplication by an elementary matrix produces an elementary row operation:
B ~ E1·B ~ E2·E1·B ~ E3·E2·E1·B = C,
so B is row equivalent to C. Since row operations are reversible, C is row equivalent to B. (Alternatively, show C being changed into B by row operations using the inverses of the E_i.)

16. Since A is not invertible, there is a nonzero vector v in R^n such that Av = 0. Place n copies of v into an n×n matrix B. Then AB = A[v ⋯ v] = [Av ⋯ Av] = 0.

17. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are linearly dependent and there is a nonzero x such that Bx = 0. Thus (AB)x = A(Bx) = A·0 = 0. This shows that the matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise … in Section 2.1.)
Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4 matrix and construct A = [C; O] and B = [C^(-1) O]. Then BA = I_4, which is invertible.

18. By hypothesis, A is 5×3, C is 3×5, and CA = I_3. Suppose x satisfies Ax = b. Then CAx = Cb. Since CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.

19. [M] Let A = … Then A^2 = … Instead of computing A^3 next, speed up the calculations by computing A^4 = A^2·A^2 and A^8 = A^4·A^4. To four decimal places, as k increases, A^k approaches a limiting matrix, or, in rational format, A^k approaches the matrix whose rows are (2/7, 2/7, 2/7), (3/7, 3/7, 3/7), and (2/7, 2/7, 2/7).
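Note: The repeated-squaring strategy of Exercise 19 is easy to script. A minimal MATLAB sketch, assuming the matrix A of the exercise has already been entered:

    A2 = A*A;                  % A^2
    A4 = A2*A2;  A8 = A4*A4;   % A^4 and A^8 by repeated squaring
    A16 = A8*A8;
    format rat                 % display entries as rational approximations
    A16                        % compare with the limiting matrix above
    format short

Four squarings give A^16 with far fewer multiplications than computing the powers one at a time.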

20. If B = …, then B^2 = …, B^4 = …, B^8 = … To four decimal places, as k increases, B^k approaches a limiting matrix, or, in rational format, B^k approaches the matrix whose rows are (18/89, 18/89, 18/89), (33/89, 33/89, 33/89), and (38/89, 38/89, 38/89).

21. [M] The 4×4 matrix A_4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB command is A4 = ones(4) − eye(4). For the inverse, use inv(A4). The inverse of A_4 has −2/3 in each diagonal entry and 1/3 in each off-diagonal entry. Similarly, A_5 is the 5×5 matrix of ones minus I_5, and its inverse has −3/4 on the diagonal and 1/4 elsewhere; A_6 is the 6×6 matrix of ones minus I_6, and its inverse has −4/5 on the diagonal and 1/5 elsewhere.

The construction of A_6 and the appearance of its inverse suggest that the inverse is related to I_6. In fact, A_6^(-1) + I_6 is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The conjecture is:

A_n = J − I_n and A_n^(-1) = (1/(n − 1))J − I_n.

Proof: (Not required) Observe that J^2 = nJ and A_n·J = (J − I)J = J^2 − J = (n − 1)J. Now compute
A_n·((n − 1)^(-1)J − I) = (n − 1)^(-1)·A_n·J − A_n = J − (J − I) = I.
Since A_n is square, A_n is invertible and its inverse is (n − 1)^(-1)J − I.
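Note: The conjecture in Exercise 21 can be tested numerically for any n before proving it. A minimal MATLAB sketch:

    n = 6;                                % any n greater than 1
    J = ones(n);
    A = J - eye(n);                       % A_n = J - I_n
    norm(inv(A) - (J/(n-1) - eye(n)))     % 0 up to roundoff, matching the conjecture

Changing n and rerunning gives quick evidence for the general formula.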


3.1 SOLUTIONS

Notes: Some exercises in this section provide practice in computing determinants, while others allow the student to discover the properties of determinants which will be studied in the next section. Determinants are developed through the cofactor expansion, which is given in Theorem 1. Exercises 33–36 in this section provide the first step in the inductive proof of Theorem 3 in the next section.

1. Expanding along the first row: … Expanding along the second column: … Both expansions give the same value.

2. Expanding along the first row: … Expanding along the second column: …

3. Expanding along the first row: …

Expanding along the second column: …

4. Expanding along the first row: … Expanding along the second column: …

5. Expanding along the first row: …

6. Expanding along the first row: …

7. Expanding along the first row: …

8. Expanding along the first row: …

9. First expand along the third row, then expand along the first row of the remaining matrix: …

10. First expand along the second row, then expand along either the third row or the second column of the remaining matrix:
… = (−3)(5(…) + 4(…)) = …, or … = (−3)((…)(−7) − 6(−6)) = …

11. There are many ways to do this determinant efficiently. One strategy is to always expand along the first column of each matrix: …

12. There are many ways to do this determinant efficiently. One strategy is to always expand along the first row of each matrix: … = (…)(−9) = 36.

13. First expand along either the second row or the second column. Using the second row, … Now expand along the second column to find: …

Now expand along either the first column or third row. The first column is used below:
… = (−6)(4(…) − 5(…)) = …

14. First expand along either the fourth row or the fifth column. Using the fifth column, … Now expand along the third row to find: … Now expand along either the first column or second row. The first column is used below:
… = (3)(3(…) + 2(8)) = …

15. Summing the six signed products of diagonal entries: … = …

16. Summing the six signed products of diagonal entries: … = …

17. Summing the six signed products of diagonal entries: … = …

18. Summing the six signed products of diagonal entries: … = …

19. det [a b; c d] = ad − bc and det [c d; a b] = cb − da = −(ad − bc). The row operation swaps rows 1 and 2 of the matrix, and the sign of the determinant is reversed.

20. det [a b; c d] = ad − bc and det [a b; kc kd] = a(kd) − (kc)b = kad − kbc = k(ad − bc). The row operation scales row 2 by k, and the determinant is multiplied by k.

21. det [3 4; 5 6] = 18 − 20 = −2, and det [3 4; 5 + 3k, 6 + 4k] = 3(6 + 4k) − 4(5 + 3k) = −2. The row operation replaces row 2 with k times row 1 plus row 2, and the determinant is unchanged.

22. det [a b; c d] = ad − bc and det [a + kc, b + kd; c d] = (a + kc)d − c(b + kd) = ad + kcd − bc − kcd = ad − bc. The row operation replaces row 1 with k times row 2 plus row 1, and the determinant is unchanged.

23. … = …(4) − …(…) + …(−7) = 5, and after the scaling, k(4) − k(…) + k(−7) = 5k. The row operation scales row 1 by k, and the determinant is multiplied by k.

24. … = a(…) − b(6) + c(3) = …a − 6b + 3c, and after the swap, … = −(…a − 6b + 3c). The row operation swaps two rows of the matrix, and the sign of the determinant is reversed.

25. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries: … = k.

26. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries: … = k.

27. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries: … = k.

28. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries: … = k.

29. A cofactor expansion along row 1 gives …

30. A cofactor expansion along row 2 gives …

31. A 3×3 elementary row replacement matrix looks like one of the six matrices obtained from I_3 by placing k in one of the six off-diagonal positions. In each of these cases, the matrix is triangular and its determinant is the product of its diagonal entries, which is 1. Thus the determinant of a 3×3 elementary row replacement matrix is 1.

32. A 3×3 elementary scaling matrix with k on the diagonal looks like one of the three matrices obtained from I_3 by replacing one of the diagonal 1's with k. In each of these cases, the matrix is triangular and its determinant is the product of its diagonal entries, which is k. Thus the determinant of a 3×3 elementary scaling matrix with k on the diagonal is k.

33. E = [0 1; 1 0], A = [a b; c d], EA = [c d; a b].
det E = −1, det A = ad − bc, det EA = cb − da = −(ad − bc) = (det E)(det A).

34. E = [1 0; 0 k], A = [a b; c d], EA = [a b; kc kd].
det E = k, det A = ad − bc, det EA = a(kd) − (kc)b = k(ad − bc) = (det E)(det A).

35. E = [1 k; 0 1], A = [a b; c d], EA = [a + kc, b + kd; c, d].
det E = 1, det A = ad − bc, det EA = (a + kc)d − c(b + kd) = ad + kcd − bc − kcd = ad − bc = (det E)(det A).
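Note: The identity det EA = (det E)(det A) verified symbolically in Exercises 33–35 can also be spot-checked numerically. A sketch, with an arbitrarily chosen A and elementary matrices E (these particular matrices are assumptions, not data from the text):

    A = [1 2; 3 4];
    E = [1 0; 5 1];                   % row replacement: R2 <- R2 + 5*R1, det E = 1
    det(E*A) - det(E)*det(A)          % 0
    E = [0 1; 1 0];                   % interchange, det E = -1
    det(E*A) - det(E)*det(A)          % 0

Both displayed differences should be zero.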

36. E = [1 0; k 1], A = [a b; c d], EA = [a, b; ka + c, kb + d].
det E = 1, det A = ad − bc, det EA = a(kb + d) − (ka + c)b = kab + ad − kab − bc = ad − bc = (det E)(det A).

37. A = …, 5A = …; det A = …, and det 5A ≠ 5·det A.

38. A = [a b; c d], kA = [ka kb; kc kd].
det A = ad − bc, det kA = (ka)(kd) − (kb)(kc) = k^2(ad − bc) = k^2·det A.

39. a. True. See the paragraph preceding the definition of the determinant.
b. False. See the definition of cofactor, which precedes Theorem 1.

40. a. False. See Theorem 1.
b. False. See Theorem 3.

41. The area of the parallelogram determined by u, v, u + v, and 0 is 6, since the base of the parallelogram has length 3 and the height of the parallelogram is 2. By the same reasoning, the area of the parallelogram determined by u, x, u + x, and 0 is also 6.
(Figure: the two parallelograms, one determined by u and v, the other by u and x.)
Also note that det [u v] = 6 and det [u x] = 6. The determinant of the matrix whose columns are those vectors which define the sides of the parallelogram adjacent to 0 is equal to the area of the parallelogram.

42. The area of the parallelogram determined by u = (a, b), v = (c, 0), u + v, and 0 is cb, since the base of the parallelogram has length c and the height of the parallelogram is b.
(Figure: the parallelogram determined by u = (a, b) and v = (c, 0).)

Also note that det [u v] = det [a c; b 0] = −cb, and det [v u] = det [c a; 0 b] = cb. The determinant of the matrix whose columns are those vectors which define the sides of the parallelogram adjacent to 0 either is equal to the area of the parallelogram or is equal to the negative of the area of the parallelogram.

43. [M] Answers will vary. The conclusion should be that det (A + B) ≠ det A + det B.

44. [M] Answers will vary. The conclusion should be that det (AB) = (det A)(det B).

45. [M] Answers will vary. For 4×4 matrices, the conclusions should be that det A^T = det A, det (−A) = det A, det (2A) = 16·det A, and det (10A) = 10^4·det A. For 5×5 matrices, the conclusions should be that det A^T = det A, det (−A) = −det A, det (2A) = 32·det A, and det (10A) = 10^5·det A. For 6×6 matrices, the conclusions should be that det A^T = det A, det (−A) = det A, det (2A) = 64·det A, and det (10A) = 10^6·det A.

46. [M] Answers will vary. The conclusion should be that det A^(-1) = 1/det A.

3.2 SOLUTIONS

Notes: This section presents the main properties of the determinant, including the effects of row operations on the determinant of a matrix. These properties are first studied by examples in Exercises 1–20. The properties are treated in a more theoretical manner in later exercises. An efficient method for computing the determinant using row reduction and selective cofactor expansion is presented in this section and used in Exercises 11–14. Theorems 4 and 6 are used extensively in Chapter 5. The linearity property of the determinant studied in the text is optional, but is used in more advanced courses.

1. Rows 1 and 2 are interchanged, so the determinant changes sign (Theorem 3b).

2. A constant may be factored out of a row (Theorem 3c).

3. The row replacement operation does not change the determinant (Theorem 3a).

4. The row replacement operation does not change the determinant (Theorem 3a).

5. By row reduction to echelon form, the determinant is …

6. By row reduction to echelon form, the determinant is …

7. By row reduction to echelon form, the determinant is …

8. By row reduction to echelon form, the determinant is …

9. By row reduction to echelon form, the determinant is …

10. By row reduction to echelon form, the determinant is …

11. First use a row replacement to create zeros in the second column, and then expand down the second column: … Now use a row replacement to create zeros in the first column, and then expand down the first column: … = (−5)(3)(−8) = …

12. First use a row replacement to create zeros in the fourth column, and then expand down the fourth column: … Now use a row replacement to create zeros in the first column, and then expand down the first column: … = 3(…)(−38) = …

13. First use a row replacement to create zeros in the fourth column, and then expand down the fourth column: … Now use a row replacement to create zeros in the first column, and then expand down the first column: … = (…)(−6)(…) = …

14. First use a row replacement to create zeros in the third column, and then expand down the third column: … Now expand along the second row: … = (…)(9)(…) = …

15. det [a b c; d e f; 5g 5h 5i] = 5·det [a b c; d e f; g h i] = 5(7) = 35.

16. det [a b c; 3d 3e 3f; g h i] = 3·det [a b c; d e f; g h i] = 3(7) = 21.

17. The matrix is obtained from [a b c; d e f; g h i] by two row interchanges, which leave the determinant unchanged: the determinant is 7.

18. det [a b c; g h i; d e f] = −det [a b c; d e f; g h i] = −(7) = −7, since one row interchange reverses the sign of the determinant.

19. det [d + a, e + b, f + c; d, e, f; g, h, i] = det [a b c; d e f; g h i] = 7, since the row replacement R1 ← R1 − R2 does not change the determinant.

20. det [a + d, b + e, c + f; d, e, f; g, h, i] = det [a b c; d e f; g h i] = 7, since the row replacement R1 ← R1 − R2 does not change the determinant.

21. Since the determinant is nonzero, the matrix is invertible.

22. Since the determinant is 0, the matrix is not invertible.

23. Since the determinant is 0, the matrix is not invertible.

24. Since the determinant is nonzero, the matrix is invertible.

25. Since the determinant is nonzero, the columns of the matrix form a linearly independent set.

26. Since the determinant is 0, the columns of the matrix form a linearly dependent set.

27. a. True. See Theorem 3.
b. True. See the paragraph following Example 2.
c. True. See the paragraph following Theorem 4.
d. False. See the warning following Example 5.

28. a. True. See Theorem 3.
b. False. See the paragraphs following Example 2.
c. False. See Example 3.
d. False. See Theorem 5.

29. By Theorem 6, det B^k = (det B)^k = …

30. Suppose the two rows of a square matrix A are equal. By swapping these two rows, the matrix A is not changed, so its determinant should not change. But since swapping rows changes the sign of the determinant, det A = −det A. This is only possible if det A = 0. The same may be proven true for columns by applying the above result to A^T and using Theorem 5.

31. By Theorem 6, (det A)(det A^(-1)) = det I = 1, so det A^(-1) = 1/det A.

32. By factoring an r out of each of the n rows, det (rA) = r^n·det A.

33. By Theorem 6, det AB = (det A)(det B) = (det B)(det A) = det BA.

34. By Theorem 6 and Exercise 31,
det (PAP^(-1)) = (det P)(det A)(det P^(-1)) = (det P)(det P^(-1))(det A) = (det P)(1/det P)(det A) = det A.

35. By Theorem 6 and Theorem 5, det U^T U = (det U^T)(det U) = (det U)^2. Since U^T U = I, det U^T U = det I = 1, so (det U)^2 = 1. Thus det U = ±1.

36. By Theorem 6, det A^4 = (det A)^4. Since det A^4 = 0, then (det A)^4 = 0. Thus det A = 0, and A is not invertible by Theorem 4.

37. One may compute using Theorem 2 that det A = 3 and det B = 8, while det AB = (det A)(det B) = 24.

38. One may compute that det A = … and det B = …, while det AB = (det A)(det B) = …

39. a. By Theorem 6, det AB = (det A)(det B) = …
b. By Exercise 32, det 5A = 5^3·det A = …
c. By Theorem 5, det B^T = det B = …
d. By Exercise 31, det A^(-1) = 1/det A = 1/4.
e. By Theorem 6, det A^3 = (det A)^3 = …

40. a. By Theorem 6, det AB = (det A)(det B) = …
b. By Theorem 6, det B^5 = (det B)^5 = …
c. By Exercise 32, det 2A = 2^n·det A = …
d. By Theorems 5 and 6, det A^T A = (det A^T)(det A) = (det A)(det A) = (det A)^2 = …
e. By Theorem 6 and Exercise 31, det B^(-1)AB = (det B^(-1))(det A)(det B) = (1/det B)(det A)(det B) = det A = …

41. det A = (a + e)d − c(b + f) = ad + ed − bc − cf = (ad − bc) + (ed − cf) = det B + det C.

42. det (A + B) = (1 + a)(1 + d) − cb = 1 + a + d + ad − cb = det A + a + d + det B, so det (A + B) = det A + det B if and only if a + d = 0.
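Note: The determinant properties in Exercises 31–40 are easy to confirm numerically. A minimal MATLAB sketch with a random 4×4 matrix (almost surely invertible); the choice of test matrix is an assumption:

    A = randn(4);
    det(A') - det(A)                  % Theorem 5: det A^T = det A
    det(inv(A)) - 1/det(A)            % Exercise 31
    det(2*A) - 2^4*det(A)             % Exercise 32 with r = 2, n = 4

All three displayed differences should be zero up to roundoff.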

43. Compute det A by using a cofactor expansion down the third column:
det A = (u1 + v1)·det A13 − (u2 + v2)·det A23 + (u3 + v3)·det A33
= u1·det A13 − u2·det A23 + u3·det A33 + v1·det A13 − v2·det A23 + v3·det A33
= det B + det C.

44. By Theorem 5, det AE = det (AE)^T. Since (AE)^T = E^T A^T, det AE = det (E^T A^T). Now E^T is itself an elementary matrix, so by the proof of Theorem 3, det (E^T A^T) = (det E^T)(det A^T). Thus it is true that det AE = (det E^T)(det A^T), and by applying Theorem 5, det AE = (det E)(det A).

45. [M] Answers will vary, but will show that det A^T A always equals 0 while det AA^T should seldom be zero. To see why A^T A should not be invertible (and thus det A^T A = 0), let A be a matrix with more columns than rows. Then the columns of A must be linearly dependent, so the equation Ax = 0 must have a non-trivial solution x. Thus (A^T A)x = A^T(Ax) = A^T·0 = 0, and the equation (A^T A)x = 0 has a non-trivial solution. Since A^T A is a square matrix, the Invertible Matrix Theorem now says that A^T A is not invertible. Notice that the same argument will not work in general for AA^T, since A^T has more rows than columns, so its columns are not automatically linearly dependent.

46. [M] One may compute for this matrix that det A = … and cond A ≈ 3683. Note that this is the condition number, which is used in Section 2.3. Since det A ≠ 0, it is invertible, and A^(-1) = … The determinant is very sensitive to scaling, as det 10A = 10^4·det A and det 0.1A = (0.1)^4·det A. The condition number is not changed at all by scaling: cond (10A) = cond (0.1A) = cond A ≈ 3683. When A = I_4, det A = 1 and cond A = 1. As before, the determinant is sensitive to scaling: det 10A = 10^4 and det 0.1A = (0.1)^4. Yet the condition number is not changed by scaling: cond (10A) = cond (0.1A) = cond A = 1.

3.3 SOLUTIONS

Notes: This section features several independent topics from which to choose. The geometric interpretation of the determinant (Theorem 10) provides the key to changes of variables in multiple integrals. Students of economics and engineering are likely to need Cramer's Rule in later courses. Exercises 1–10 concern Cramer's Rule, Exercises 11–18 deal with the adjugate, and Exercises 19–32 cover the geometric interpretation of the determinant. In particular, Exercise 25 examines students' understanding of linear independence and requires a careful explanation, which is discussed in the Study Guide. The Study Guide also contains a heuristic proof of Theorem 9 for 2×2 matrices.

1. The system is equivalent to Ax = b, where A = [5 7; 2 4] and b = (3, 1). We compute
A1(b) = [3 7; 1 4], A2(b) = [5 3; 2 1], det A = 6, det A1(b) = 5, det A2(b) = −1,
x1 = det A1(b)/det A = 5/6, x2 = det A2(b)/det A = −1/6.

2. The system is equivalent to Ax = b, where A = [4 1; 5 2] and b = (6, 7). We compute
A1(b) = [6 1; 7 2], A2(b) = [4 6; 5 7], det A = 3, det A1(b) = 5, det A2(b) = −2,
x1 = det A1(b)/det A = 5/3, x2 = det A2(b)/det A = −2/3.

3. The system is equivalent to Ax = b, where A = [3 −2; −5 6] and b = (7, −5). We compute
A1(b) = [7 −2; −5 6], A2(b) = [3 7; −5 −5], det A = 8, det A1(b) = 32, det A2(b) = 20,
x1 = det A1(b)/det A = 32/8 = 4, x2 = det A2(b)/det A = 20/8 = 5/2.

4. The system is equivalent to Ax = b, where A = [−5 3; 3 −1] and b = (9, −5). We compute
A1(b) = …, A2(b) = …, det A = −4, det A1(b) = 6, det A2(b) = −2,
x1 = det A1(b)/det A = −3/2, x2 = det A2(b)/det A = 1/2.

5. The system is equivalent to Ax = b, where A = … and b = … We compute
A1(b) = …, A2(b) = …, A3(b) = …, det A = 4, det A1(b) = …, det A2(b) = …, det A3(b) = …,
x1 = det A1(b)/det A = …, x2 = det A2(b)/det A = …, x3 = det A3(b)/det A = …
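Note: The computations in Exercises 1–5 all follow one template, so a small MATLAB function is a convenient check. This is a sketch (the function name cramer is our own, not from the text), valid only when det(A) is nonzero:

    function x = cramer(A, b)
    % Solve Ax = b by Cramer's rule: x_i = det A_i(b) / det A.
    n = size(A, 1);
    d = det(A);                % assumed nonzero
    x = zeros(n, 1);
    for i = 1:n
        Ai = A;
        Ai(:, i) = b;          % form A_i(b) by replacing column i of A with b
        x(i) = det(Ai)/d;
    end
    end

For example, with the data reconstructed above for Exercise 1, cramer([5 7; 2 4], [3; 1]) returns (5/6, −1/6).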

6. The system is equivalent to Ax = b, where A = … and b = … We compute
A1(b) = …, A2(b) = …, A3(b) = …, det A = 4, det A1(b) = 6, det A2(b) = …, det A3(b) = …,
x1 = det A1(b)/det A = …, x2 = det A2(b)/det A = …, x3 = det A3(b)/det A = …

7. The system is equivalent to Ax = b, where A = [6s 4; 9 2s] and b = (5, −2). We compute
A1(b) = [5 4; −2 2s], A2(b) = [6s 5; 9 −2], det A1(b) = 10s + 8, det A2(b) = −12s − 45.
Since det A = 12s^2 − 36 = 12(s^2 − 3) ≠ 0 for s ≠ ±√3, the system will have a unique solution when s ≠ ±√3. For such a system, the solution will be
x1 = det A1(b)/det A = (10s + 8)/(12(s^2 − 3)) = (5s + 4)/(6(s^2 − 3)),
x2 = det A2(b)/det A = (−12s − 45)/(12(s^2 − 3)) = −(4s + 15)/(4(s^2 − 3)).

8. The system is equivalent to Ax = b, where A = [3s −5; 9 5s] and b = (3, 2). We compute
A1(b) = [3 −5; 2 5s], A2(b) = [3s 3; 9 2], det A1(b) = 15s + 10, det A2(b) = 6s − 27.
Since det A = 15s^2 + 45 = 15(s^2 + 3) ≠ 0 for all values of s, the system will have a unique solution for all values of s. For such a system, the solution will be
x1 = det A1(b)/det A = (15s + 10)/(15(s^2 + 3)) = (3s + 2)/(3(s^2 + 3)),
x2 = det A2(b)/det A = (6s − 27)/(15(s^2 + 3)) = (2s − 9)/(5(s^2 + 3)).

9. The system is equivalent to Ax = b, where A = [s −2s; 3 6s] and b = (−1, 4). We compute
A1(b) = [−1 −2s; 4 6s], A2(b) = [s −1; 3 4], det A1(b) = 2s, det A2(b) = 4s + 3.
Since det A = 6s^2 + 6s = 6s(s + 1) ≠ 0 for s ≠ 0, −1, the system will have a unique solution when s ≠ 0, −1. For such a system, the solution will be
x1 = det A1(b)/det A = 2s/(6s(s + 1)) = 1/(3(s + 1)),
x2 = det A2(b)/det A = (4s + 3)/(6s(s + 1)).

10. The system is equivalent to Ax = b, where A = [… ; 3s 6s] and b = (…, 2). We compute
A1(b) = …, A2(b) = …, det A1(b) = 6s, det A2(b) = …

Since det A = 12s^2 − 3s = 3s(4s − 1) ≠ 0 for s ≠ 0, 1/4, the system will have a unique solution when s ≠ 0, 1/4. For such a system, the solution will be
x1 = det A1(b)/det A = 6s/(3s(4s − 1)) = 2/(4s − 1),
x2 = det A2(b)/det A = …/(3s(4s − 1)) = …/(3(4s − 1)).

11. Since det A = 3 and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

12. Since det A = 5 and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

13. Since det A = 6 and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

14. Since det A = … and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

15. Since det A = 6 and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

16. Since det A = 9 and the cofactors of the given matrix are C11 = …, C12 = …, C13 = …, C21 = …, C22 = …, C23 = …, C31 = …, C32 = …, C33 = …,
adj A = … and A^(-1) = (1/det A)·adj A = …

17. Let A = [a b; c d]. Then the cofactors of A are C11 = d, C12 = −c, C21 = −b, and C22 = a. Thus
adj A = [d −b; −c a].
Since det A = ad − bc, Theorem 8 gives that
A^(-1) = (1/det A)·adj A = (1/(ad − bc))·[d −b; −c a].
This result is identical to that of Theorem 4 in Section 2.2.
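Note: For the adjugate computations in Exercises 11–16, Theorem 8 gives a quick numerical cross-check, since for an invertible matrix it can be rearranged as adj A = (det A)·A^(-1). A sketch with an assumed invertible example matrix (not one from the text):

    A = [1 0 2; 3 1 0; 0 1 1];        % any invertible matrix (an assumption)
    adjA = det(A)*inv(A)              % Theorem 8 rearranged: adj A = (det A) A^(-1)
    A*adjA                            % equals (det A) times the identity

Comparing adjA entry by entry against the hand-computed cofactors catches sign errors quickly.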

18. Each cofactor of A is an integer since it is a sum of products of entries in A. Hence all entries in adj A will be integers. Since det A = 1, the inverse formula in Theorem 8 shows that all the entries in A^(-1) will be integers.

19. The parallelogram is determined by the columns of A = …, so the area of the parallelogram is |det A| = 8.

20. The parallelogram is determined by the columns of A = …, so the area of the parallelogram is |det A| = …

21. First translate one vertex to the origin. For example, subtract (…, …) from each vertex to get a new parallelogram with vertices (0, 0), (…, 5), (…, 4), and (3, …). This parallelogram has the same area as the original, and is determined by the columns of A = …, so the area of the parallelogram is |det A| = …

22. First translate one vertex to the origin. For example, subtract (…, …) from each vertex to get a new parallelogram with vertices (0, 0), (6, …), (−3, 3), and (3, 4). This parallelogram has the same area as the original, and is determined by the columns of A = …, so the area of the parallelogram is |det A| = …

23. The parallelepiped is determined by the columns of A = …, so the volume of the parallelepiped is |det A| = …

24. The parallelepiped is determined by the columns of A = …, so the volume of the parallelepiped is |det A| = …

25. The Invertible Matrix Theorem says that a 3×3 matrix A is not invertible if and only if its columns are linearly dependent. This will happen if and only if one of the columns is a linear combination of the others; that is, if one of the vectors is in the plane spanned by the other two vectors. This is equivalent to the condition that the parallelepiped determined by the three vectors has zero volume, which is in turn equivalent to the condition that det A = 0.

26. By definition, p + S is the set of all vectors of the form p + v, where v is in S. Applying T to a typical vector in p + S, we have T(p + v) = T(p) + T(v). This vector is in the set denoted by T(p) + T(S). This proves that T maps the set p + S into the set T(p) + T(S). Conversely, any vector in T(p) + T(S) has the form T(p) + T(v) for some v in S. This vector may be written as T(p + v). This shows that every vector in T(p) + T(S) is the image under T of some point p + v in p + S.
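Note: The area and volume computations in Exercises 19–24 reduce to a single absolute determinant. A minimal MATLAB sketch (the edge vectors here are placeholders, not the data of the exercises):

    v1 = [5; 2];  v2 = [6; 4];
    area = abs(det([v1 v2]))          % area of the parallelogram determined by v1, v2
    u1 = [1; 0; 2]; u2 = [0; 2; 1]; u3 = [3; 1; 0];
    vol = abs(det([u1 u2 u3]))        % volume of the parallelepiped determined by u1, u2, u3

The absolute value matters: the determinant itself may be negative, as Exercise 42 of Section 3.1 showed.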

27. Since the parallelogram S is determined by the columns of …, the area of S is |det …| = 6. The matrix A has det A = … By Theorem 10, the area of T(S) is |det A|·{area of S} = … Alternatively, one may compute the vectors that determine the image, namely, the columns of A[b1 b2] = … The determinant of this matrix is …, so the area of the image is …

28. Since the parallelogram S is determined by the columns of …, the area of S is |det …| = … The matrix A has det A = … By Theorem 10, the area of T(S) is |det A|·{area of S} = … Alternatively, one may compute the vectors that determine the image, namely, the columns of A[b1 b2] = … The determinant of this matrix is …, so the area of the image is …

29. The area of the triangle will be one half of the area of the parallelogram determined by v1 and v2. By Theorem 9, the area of the triangle will be (1/2)|det A|, where A = [v1 v2].

30. Translate R to a new triangle of equal area by subtracting (x3, y3) from each vertex. The new triangle has vertices (0, 0), (x1 − x3, y1 − y3), and (x2 − x3, y2 − y3). By Exercise 29, the area of the triangle will be
(1/2)|det [x1 − x3, x2 − x3; y1 − y3, y2 − y3]|.
Now consider using row operations and a cofactor expansion to compute the determinant in the formula:
det [x1 y1 1; x2 y2 1; x3 y3 1] = det [x1 − x3, y1 − y3, 0; x2 − x3, y2 − y3, 0; x3, y3, 1] = det [x1 − x3, y1 − y3; x2 − x3, y2 − y3].
By Theorem 5,
det [x1 − x3, y1 − y3; x2 − x3, y2 − y3] = det [x1 − x3, x2 − x3; y1 − y3, y2 − y3].
So the above observation allows us to state that the area of the triangle will be
(1/2)|det [x1 y1 1; x2 y2 1; x3 y3 1]| = (1/2)|det [x1 − x3, x2 − x3; y1 − y3, y2 − y3]|.

31. a. To show that T(S) is bounded by the ellipsoid with equation x1^2/a^2 + x2^2/b^2 + x3^2/c^2 = 1, let u = (u1, u2, u3) and let x = (x1, x2, x3) = Au. Then u1 = x1/a, u2 = x2/b, and u3 = x3/c, and u lies inside S (or u1^2 + u2^2 + u3^2 ≤ 1) if and only if x lies inside T(S) (or x1^2/a^2 + x2^2/b^2 + x3^2/c^2 ≤ 1).
b. By the generalization of Theorem 10,
{volume of ellipsoid} = {volume of T(S)} = |det A|·{volume of S} = abc·(4π/3) = 4πabc/3.

32. a. A linear transformation T that maps S onto S′ will map e1 to v1, e2 to v2, and e3 to v3; that is, T(e1) = v1, T(e2) = v2, and T(e3) = v3. The standard matrix for this transformation will be
A = [T(e1) T(e2) T(e3)] = [v1 v2 v3].
b. The area of the base of S is (1/2)(1)(1) = 1/2, so the volume of S is (1/3)(1/2)(1) = 1/6. By part a, T(S) = S′, so the generalization of Theorem 10 gives that the volume of S′ is |det A|·{volume of S} = (1/6)|det A|.

33. [M] Answers will vary. In MATLAB, entries in B − inv(A) are approximately 10^(−15) or smaller.

34. [M] Answers will vary, as will the commands which produce the second entry of x. For example, the MATLAB command is
x2 = det([A(:,1) b A(:,3:4)])/det(A)
while the Mathematica command is
x2 = Det[{Transpose[A][[1]], b, Transpose[A][[3]], Transpose[A][[4]]}]/Det[A]

35. [M] MATLAB Student Version 4.0 uses 57,771 flops for inv A and 14,269,045 flops for the inverse formula. The inv(A) command requires only about 0.4% of the operations for the inverse formula.

Chapter 3 SUPPLEMENTARY EXERCISES

1. a. True. The columns of A are linearly dependent.
b. True. See Exercise 30 in Section 3.2.
c. False. See Theorem 3(c); in this case det 5A = 5^3·det A.
d. False. Consider A = …, B = …, and A + B = …
e. False. By Theorem 6, det A^3 = (det A)^3.
f. False. See Theorem 3(b).
g. True. See Theorem 3(c).
h. True. See Theorem 3(a).
i. False. See Theorem 5.
j. False. See Theorem 3(c); this statement is false for n×n invertible matrices with n an even integer.
k. True. See Theorems 6 and 5; det A^T A = (det A)^2.

l. False. The coefficient matrix must be invertible.
m. False. The area of the triangle is half the absolute determinant.
n. True. See Theorem 6; det A^3 = (det A)^3.
o. False. See Exercise 31 in Section 3.2.
p. True. See Theorem 6.

2. Using row replacements and a cofactor expansion,
det [1, a, b + c; 1, b, a + c; 1, c, a + b] = det [1, a, b + c; 0, b − a, a − b; 0, c − a, a − c] = (b − a)(a − c) − (a − b)(c − a) = 0.

3. By row operations and cofactor expansion, the determinant is …

4. By row operations, det [a + x, b + x, c + x; a + y, b + y, c + y; …] factors with (x − y) as shown: …

5. Expanding and collecting the triangular factors: … = …

6. Expand along the first row to obtain
det [1 x y; 1 x1 y1; 1 x2 y2] = (x1·y2 − y1·x2) − x(y2 − y1) + y(x2 − x1) = 0.
This is an equation of the form ax + by + c = 0, and since the points (x1, y1) and (x2, y2) are distinct, at least one of a and b is not zero. Thus the equation is the equation of a line. The points (x1, y1) and (x2, y2) are on the line, because when the coordinates of one of the points are substituted for x and y, two rows of the matrix are equal and so the determinant is zero.

8. Expand along the first row to obtain
(m·x1 − y1) − x(m) + y(1) = 0.
This equation may be rewritten as m·x1 − y1 − mx + y = 0, or y − y1 = m(x − x1).

9. det [1 a a^2; 1 b b^2; 1 c c^2] = det [1, a, a^2; 0, b − a, b^2 − a^2; 0, c − a, c^2 − a^2]
= det [1, a, a^2; 0, b − a, (b − a)(b + a); 0, c − a, (c − a)(c + a)]
= (b − a)(c − a)·det [1, b + a; 1, c + a] = (b − a)(c − a)(c − b).

10. Expanding along the first row will show that f(t) = det V = c0 + c1·t + c2·t^2 + c3·t^3. By Exercise 9,
c3 = (x2 − x1)(x3 − x1)(x3 − x2) ≠ 0,
since x1, x2, and x3 are distinct. Thus f(t) is a cubic polynomial. The points (x1, 0), (x2, 0), and (x3, 0) are on the graph of f, since when any of x1, x2, or x3 are substituted for t, the matrix has two equal rows and thus its determinant (which is f(t)) is zero. Thus f(x_i) = 0 for i = 1, 2, 3.

11. To tell if a quadrilateral determined by four points is a parallelogram, first translate one of the vertices to the origin. If we label the vertices of this new quadrilateral as 0, v1, v2, and v3, then they will be the vertices of a parallelogram if one of v1, v2, or v3 is the sum of the other two. In this example, subtract (…, 4) from each vertex to get a new parallelogram with vertices 0 = (0, 0), v1 = (…, …), v2 = (…, 5), and v3 = (4, 4). Since v2 = v3 + v1, the quadrilateral is a parallelogram as stated. The translated parallelogram has the same area as the original, and is determined by the columns of A = [v1 v3] = …, so the area of the parallelogram is |det A| = …

12. A 2×2 matrix A is invertible if and only if the parallelogram determined by the columns of A has nonzero area.

13. By Theorem 8, (adj A)·(1/det A)·A = A^(-1)A = I. By the Invertible Matrix Theorem, adj A is invertible and (adj A)^(-1) = (1/det A)·A.

14. a. Consider the matrix A_k = [A O; O I_k], where 1 ≤ k ≤ n and O is an appropriately sized zero matrix. We will show that det A_k = det A for all 1 ≤ k ≤ n by mathematical induction.

First let k = 1. Expand along the last row to obtain
det A_1 = det [A O; O 1] = (−1)^((n+1)+(n+1))·1·det A = det A.
Now let k be greater than 1 and at most n, and assume that det A_(k−1) = det A. Expand along the last row of A_k to obtain
det A_k = det [A O; O I_k] = (−1)^((n+k)+(n+k))·1·det A_(k−1) = det A_(k−1) = det A.
Thus we have proven the result, and the determinant of the matrix in question is det A.
b. Consider the matrix A_k = [I_k O; C_k D], where 1 ≤ k ≤ n, C_k is an n×k matrix and O is an appropriately sized zero matrix. We will show that det A_k = det D for all 1 ≤ k ≤ n by mathematical induction.
First let k = 1. Expand along the first row to obtain
det A_1 = det [1 O; C_1 D] = (−1)^(1+1)·1·det D = det D.
Now let k be greater than 1 and at most n, and assume that det A_(k−1) = det D. Expand along the first row of A_k to obtain
det A_k = det [I_k O; C_k D] = (−1)^(1+1)·1·det A_(k−1) = det A_(k−1) = det D.
Thus we have proven the result, and the determinant of the matrix in question is det D.
c. By combining parts a and b, we have shown that
det [A O; C D] = det ([A O; O I])·det ([I O; C D]) = (det A)(det D).
From this result and Theorem 5, we have
det [A B; O D] = det ([A B; O D])^T = det [A^T O; B^T D^T] = (det A^T)(det D^T) = (det A)(det D).

15. a. Compute the right side of the equation:
[I O; X I]·[A B; O Y] = [A B; XA, XB + Y].
Set this equal to the left side of the equation:
[A B; C D] = [A B; XA, XB + Y], so that XA = C and XB + Y = D.
Since XA = C and A is invertible, X = CA^(-1). Since XB + Y = D, Y = D − XB = D − CA^(-1)B. Thus by Exercise 14(c),
det [A B; C D] = det [I O; X I]·det [A B; O Y] = det [A B; O, D − CA^(-1)B] = (det A)(det (D − CA^(-1)B)).
b. From part a,
det [A B; C D] = (det A)(det (D − CA^(-1)B)) = det [A(D − CA^(-1)B)] = det [AD − ACA^(-1)B] = det [AD − CAA^(-1)B] = det [AD − CB],
where the next-to-last equality uses the hypothesis that A and C commute.

16. a. Doing the given operations does not change the determinant of A, since the given operations are all row replacement operations. The resulting matrix has a − b in each diagonal entry above the last row, −(a − b) just to the right of each of those diagonal entries, zeros elsewhere above the last row, and a last row of (b, b, …, b, a).
b. Since column replacement operations are equivalent to row operations on A^T and det A^T = det A, the given operations do not change the determinant of the matrix. The resulting matrix is triangular with a − b in each diagonal entry above the last row and a last row of (b, 2b, 3b, …, a + (n − 1)b).
c. Since the preceding matrix is a triangular matrix with the same determinant as A,
det A = (a − b)^(n−1)·(a + (n − 1)b).

17. First consider the case n = 2. In this case
det B = a(a − b), det C = ab − b^2,
so
det A = det B + det C = a(a − b) + ab − b^2 = a^2 − b^2 = (a − b)(a + b) = (a − b)^(2−1)·(a + (2 − 1)b),
and the formula holds for n = 2. Now assume that the formula holds for all (k − 1)×(k − 1) matrices, and let A, B, and C be k×k matrices. By a cofactor expansion along the first column,
det B = (a − b)·(a − b)^(k−2)·(a + (k − 2)b) = (a − b)^(k−1)·(a + (k − 2)b),
since the matrix in the above formula is a (k − 1)×(k − 1) matrix. We can perform a series of row operations on C to zero out below the first pivot, and produce the following matrix whose determinant is det C: a first row of b's, with a − b in each diagonal entry below the first row and zeros elsewhere. Since this is a triangular matrix, we have found that det C = b(a − b)^(k−1). Thus
det A = det B + det C = (a − b)^(k−1)·(a + (k − 2)b) + b(a − b)^(k−1) = (a − b)^(k−1)·(a + (k − 1)b),
which is what was to be shown. Thus the formula has been proven by mathematical induction.

18. [M] Since the first matrix has a = 3, b = 8, and n = 4, its determinant is
(3 − 8)^(4−1)·(3 + (4 − 1)8) = (−5)^3·(3 + 24) = (−125)(27) = −3375.
Since the second matrix has a = 8, b = 3, and n = 5, its determinant is
(8 − 3)^(5−1)·(8 + (5 − 1)3) = (5)^4·(8 + 12) = (625)(20) = 12,500.
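Note: The formula det A = (a − b)^(n−1)·(a + (n − 1)b) proven in Exercises 16 and 17 can be checked numerically before it is applied in Exercise 18. A minimal MATLAB sketch using the first data set of Exercise 18:

    a = 3; b = 8; n = 4;
    A = (a - b)*eye(n) + b*ones(n);   % a on the diagonal, b everywhere else
    det(A)                            % numeric determinant
    (a - b)^(n - 1)*(a + (n - 1)*b)   % the formula; the two values should agree

Swapping in a = 8, b = 3, n = 5 reproduces the second value as well.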

19. [M] We find that the 3×3, 4×4, and 5×5 determinants of this pattern all equal 1. Our conjecture then is that the n×n determinant is 1. To show this, consider using row replacement operations to zero out below the first pivot. The resulting matrix is … Now use row replacement operations to zero out below the second pivot, and so on. The final matrix which results from this process is upper triangular with 1's on the diagonal, which has determinant 1.

20. [M] We find that the 3×3, 4×4, and 5×5 determinants of this pattern are …, 8, and … Our conjecture then is that the n×n determinant is …

To show this, consider using row replacement operations to zero out below the first pivot. The resulting matrix is … Now use row replacement operations to zero out below the second pivot. The matrix which results from this process is … This matrix has the same determinant as the original matrix, and is recognizable as a block matrix of the form
[A B; O D],
where A = … and D = … As in Exercise 14(c), the determinant of the matrix [A B; O D] is (det A)(det D) = det D. Since D is an (n − 2)×(n − 2) matrix, det D = … by Exercise 19. Thus the determinant of the matrix [A B; O D] is det D = …

4.1 SOLUTIONS

Notes: This section is designed to avoid the standard exercises in which a student is asked to check ten axioms on an array of sets. Theorem 1 provides the main homework tool in this section for showing that a set is a subspace. Students should be taught how to check the closure axioms. The exercises in this section (and the next few sections) emphasize R^n, to give students time to absorb the abstract concepts. Other vector spaces do appear later in the chapter: the space of signals is used in Section 4.8, and the spaces P_n of polynomials are used in many sections of Chapters 4 and 6.

1. a. If u and v are in V, then their entries are nonnegative. Since a sum of nonnegative numbers is nonnegative, the vector u + v has nonnegative entries. Thus u + v is in V.
b. Example: If u = … and c = −1, then u is in V but cu is not in V.

2. a. If u = (x, y) is in W, then the vector cu = (cx, cy) is in W because (cx)(cy) = c^2(xy) ≥ 0 since xy ≥ 0.
b. Example: If u = … and v = …, then u and v are in W but u + v is not in W.

3. Example: If u = … and c = 4, then u is in H but cu is not in H. Since H is not closed under scalar multiplication, H is not a subspace of R^2.

4. Note that u and v are on the line L, but u + v is not.
(Figure: the line L with u, v, and u + v.)

5. Yes. Since the set is Span{t^2}, the set is a subspace by Theorem 1.

6. No. The zero vector is not in the set.

7. No. The set is not closed under multiplication by scalars which are not integers.

8. Yes. The zero vector is in the set H. If p and q are in H, then (p + q)(0) = p(0) + q(0) = 0 + 0 = 0, so p + q is in H. For any scalar c, (cp)(0) = c·p(0) = c·0 = 0, so cp is in H. Thus H is a subspace by Theorem 1.

9. The set H = Span{v}, where v = … Thus H is a subspace of R^3 by Theorem 1.

10. The set H = Span{v}, where v = … Thus H is a subspace of R^3 by Theorem 1.

11. The set W = Span{u, v}, where u = … and v = … Thus W is a subspace of R^3 by Theorem 1.

12. The set W = Span{u, v}, where u = … and v = … Thus W is a subspace of R^4 by Theorem 1.

13. a. The vector w is not in the set {v1, v2, v3}. There are 3 vectors in the set {v1, v2, v3}.
b. The set Span{v1, v2, v3} contains infinitely many vectors.
c. The vector w is in the subspace spanned by {v1, v2, v3} if and only if the equation x1·v1 + x2·v2 + x3·v3 = w has a solution. Row reducing the augmented matrix for this system of linear equations gives …, so the equation has a solution and w is in the subspace spanned by {v1, v2, v3}.

14. The augmented matrix is found as in Exercise 13c. Since its reduced form is …, the equation x1·v1 + x2·v2 + x3·v3 = w has no solution, and w is not in the subspace spanned by {v1, v2, v3}.

15. Since the zero vector is not in W, W is not a vector space.

16. Since the zero vector is not in W, W is not a vector space.

17. Since a vector w in W may be written as w = a·… + b·… + c·…, the set S = {…, …, …} is a set that spans W.

18. Since a vector w in W may be written as w = a·… + b·… + c·…, the set S = {…, …, …} is a set that spans W.

19. Let H be the set of all functions described by y(t) = c1·cos ωt + c2·sin ωt. Then H is a subset of the vector space V of all real-valued functions, and may be written as H = Span{cos ωt, sin ωt}. By Theorem 1, H is a subspace of V and is hence a vector space.

20. a. The following facts about continuous functions must be shown.
1. The constant function f(t) = 0 is continuous.
2. The sum of two continuous functions is continuous.
3. A constant multiple of a continuous function is continuous.
b. Let H = {f in C[a, b]: f(a) = f(b)}.
1. Let g(t) = 0 for all t in [a, b]. Then g(a) = g(b) = 0, so g is in H.
2. Let g and h be in H. Then g(a) = g(b) and h(a) = h(b), and (g + h)(a) = g(a) + h(a) = g(b) + h(b) = (g + h)(b), so g + h is in H.
3. Let g be in H. Then g(a) = g(b), and (cg)(a) = c·g(a) = c·g(b) = (cg)(b), so cg is in H.
Thus H is a subspace of C[a, b].

21. The set H is a subspace of M(2×2). The 2×2 zero matrix is in H, the sum of two upper triangular matrices is upper triangular, and a scalar multiple of an upper triangular matrix is upper triangular.

22. The set H is a subspace of M(2×4). The 2×4 zero matrix 0 is in H because F·0 = 0. If A and B are matrices in H, then F(A + B) = FA + FB = 0 + 0 = 0, so A + B is in H. If A is in H and c is a scalar, then F(cA) = c(FA) = c·0 = 0, so cA is in H.

23. a. False. The zero vector in V is the function f whose values f(t) are zero for all t in R.
b. False. An arrow in three-dimensional space is an example of a vector, but not every arrow is a vector.
c. False. See Exercises 1, 2, and 3 for examples of subsets which contain the zero vector but are not subspaces.
d. True. See the paragraph before Example 6.
e. False. Digital signals are used. See Example 3.

24. a. True. See the definition of a vector space.
b. True. See statement (3) in the box before Example 1.
c. True. See the paragraph before Example 6.
d. False. See Example 8.
e. False. The second and third parts of the conditions are stated incorrectly. For example, part (ii) does not state that u and v represent all possible elements of H.

25. 2, 4

26. a. 3 b. 5 c. 4

27. a. 8 b. 3 c. 5 d. 4

28. a. 4 b. 7 c. 3 d. 5 e. 4

29. Consider u + (−1)u. By Axiom 10, u + (−1)u = 1u + (−1)u. By Axiom 8, 1u + (−1)u = (1 + (−1))u = 0u = 0. By Exercise 27, 0u = 0. Thus u + (−1)u = 0, and by Exercise 26, (−1)u = −u.

30. By Axiom 10, u = 1u. Since c is nonzero, c^(-1)c = 1, and u = (c^(-1)c)u. By Axiom 9, (c^(-1)c)u = c^(-1)(cu) = c^(-1)·0 since cu = 0. Thus u = c^(-1)·0 = 0 by Property (2), proven in Exercise 28.

31. Any subspace H that contains u and v must also contain all scalar multiples of u and v, and hence must also contain all sums of scalar multiples of u and v. Thus H must contain all linear combinations of u and v, or Span{u, v}.

Note: Exercises 32–34 provide good practice for mathematics majors because these arguments involve simple symbol manipulation typical of mathematical proofs. Most students outside mathematics might profit more from other types of exercises.

32. Both H and K contain the zero vector of V because they are subspaces of V. Thus the zero vector of V is in H ∩ K. Let u and v be in H ∩ K. Then u and v are in H. Since H is a subspace, u + v is in H. Likewise u and v are in K. Since K is a subspace, u + v is in K. Thus u + v is in H ∩ K. Let u be in H ∩ K. Then u is in H. Since H is a subspace, cu is in H. Likewise u is in K. Since K is a subspace, cu is in K. Thus cu is in H ∩ K for any scalar c, and H ∩ K is a subspace of V.

The union of two subspaces is not in general a subspace. For an example in R^2, let H be the x-axis and let K be the y-axis. Then both H and K are subspaces of R^2, but H ∪ K is not closed under vector addition. The subset H ∪ K is thus not a subspace of R^2.

33. a. Given subspaces H and K of a vector space V, the zero vector of V belongs to H + K, because 0 is in both H and K (since they are subspaces) and 0 = 0 + 0. Next, take two vectors in H + K, say w1 = u1 + v1 and w2 = u2 + v2, where u1 and u2 are in H, and v1 and v2 are in K. Then
w1 + w2 = u1 + v1 + u2 + v2 = (u1 + u2) + (v1 + v2)
because vector addition in V is commutative and associative. Now u1 + u2 is in H and v1 + v2 is in K because H and K are subspaces. This shows that w1 + w2 is in H + K. Thus H + K is closed under addition of vectors. Finally, for any scalar c,
c·w1 = c(u1 + v1) = c·u1 + c·v1.
The vector c·u1 belongs to H and c·v1 belongs to K, because H and K are subspaces. Thus, c·w1 belongs to H + K, so H + K is closed under multiplication by scalars. These arguments show that H + K satisfies all three conditions necessary to be a subspace of V.
b. Certainly H is a subset of H + K because every vector u in H may be written as u = u + 0, where the zero vector 0 is in K (and also in H, of course). Since H contains the zero vector of H + K, and H is closed under vector addition and multiplication by scalars (because H is a subspace of V), H is a subspace of H + K. The same argument applies when H is replaced by K, so K is also a subspace of H + K.

34. A proof that H + K = Span{u1, …, up, v1, …, vq} has two parts. First, one must show that H + K is a subset of Span{u1, …, up, v1, …, vq}. Second, one must show that Span{u1, …, up, v1, …, vq} is a subset of H + K.
(1) A typical vector in H has the form c1·u1 + ⋯ + cp·up and a typical vector in K has the form d1·v1 + ⋯ + dq·vq. The sum of these two vectors is a linear combination of u1, …, up, v1, …, vq and so belongs to Span{u1, …, up, v1, …, vq}. Thus H + K is a subset of Span{u1, …, up, v1, …, vq}.
(2) Each of the vectors u1, …, up, v1, …, vq belongs to H + K, by Exercise 33(b), and so any linear combination of these vectors belongs to H + K, since H + K is a subspace, by Exercise 33(a). Thus, Span{u1, …, up, v1, …, vq} is a subset of H + K.

35. [M] Since row reduction of the augmented matrix [v1 v2 v3 w] produces no pivot in the last column, the system is consistent and w is in the subspace spanned by {v1, v2, v3}.

36. [M] Since row reduction of [A y] produces no pivot in the last column, the system is consistent and y is in the subspace spanned by the columns of A.

37. [M] The graph of f(t) is given below. A conjecture is that f(t) = cos 4t.
(Graph of f(t).)
The graph of g(t) is given below. A conjecture is that g(t) = cos 6t.
(Graph of g(t).)

38. [M] The graph of f(t) is given below. A conjecture is that f(t) = sin 3t.
(Graph of f(t).)
The graph of g(t) is given below. A conjecture is that g(t) = cos 4t.
(Graph of g(t).)
The graph of h(t) is given below. A conjecture is that h(t) = sin 5t.
(Graph of h(t).)

4.2 SOLUTIONS

Notes: This section provides a review of Chapter 1 using the new terminology. Linear transformations are introduced quickly since students are already comfortable with the idea from R^n. The key exercises are 17–26, which are straightforward but help to solidify the notions of null spaces and column spaces. Exercises 30–36 deal with the kernel and range of a linear transformation and are progressively more advanced theoretically. The idea in Exercises 7–14 is for the student to use Theorems 1, 2, or 3 to determine whether a given set is a subspace.

1. One calculates that Aw = … = 0, so w is in Nul A.

2. One calculates that Aw = … = 0, so w is in Nul A.

3. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,
the general solution is x1 = 7x3 − 6x4, x2 = −4x3 + 2x4, with x3 and x4 free. So
x = x3·(7, −4, 1, 0) + x4·(−6, 2, 0, 1),
and a spanning set for Nul A is {(7, −4, 1, 0), (−6, 2, 0, 1)}.

4. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,
the general solution is x1 = −6x2, x3 = 0, with x2 and x4 free. So
x = x2·(−6, 1, 0, 0) + x4·(0, 0, 0, 1),

and a spanning set for Nul A is {(−6, 1, 0, 0), (0, 0, 0, 1)}.

5. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,
the general solution is x1 = …, x3 = 9x4, x5 = 0, with x2 and x4 free. So
x = x2·(…) + x4·(…),
and a spanning set for Nul A is {…, …}.

6. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,
the general solution is x1 = 6x3 + 8x4 − x5, x2 = …x3 − …x4, with x3, x4, and x5 free. So
x = x3·(…) + x4·(…) + x5·(…),
and a spanning set for Nul A is {…, …, …}.
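Note: MATLAB can produce a rational basis for Nul A directly, which is a useful check on the hand computations in Exercises 3–6. A sketch, assuming the matrix A of an exercise has already been entered:

    N = null(A, 'r')      % columns form a "rational" basis for Nul A
    A*N                   % the zero matrix, confirming each column is in Nul A

The columns of N typically agree with the spanning vectors found above by solving for the basic variables in terms of the free variables.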

7. The set W is a subset of R^3. If W were a vector space (under the standard operations in R^3), then it would be a subspace of R^3. But W is not a subspace of R^3 since the zero vector is not in W. Thus W is not a vector space.

8. The set W is a subset of R^3. If W were a vector space (under the standard operations in R^3), then it would be a subspace of R^3. But W is not a subspace of R^3 since the zero vector is not in W. Thus W is not a vector space.

9. The set W is the set of all solutions to the homogeneous system of equations a − …b − 4c = 0, …a − c − 3d = 0. Thus W = Nul A, where A = … Thus W is a subspace of R^4 by Theorem 2, and is a vector space.

10. The set W is the set of all solutions to the homogeneous system of equations a + 3b − c = 0, …a + b + c − d = 0. Thus W = Nul A, where A = … Thus W is a subspace of R^4 by Theorem 2, and is a vector space.

11. The set W is a subset of R^4. If W were a vector space (under the standard operations in R^4), then it would be a subspace of R^4. But W is not a subspace of R^4 since the zero vector is not in W. Thus W is not a vector space.

12. The set W is a subset of R^4. If W were a vector space (under the standard operations in R^4), then it would be a subspace of R^4. But W is not a subspace of R^4 since the zero vector is not in W. Thus W is not a vector space.

13. An element w of W may be written as
w = c·… + d·… = A·(c, d),
where c and d are any real numbers and A = … So W = Col A. Thus W is a subspace by Theorem 3, and is a vector space.

14. An element w of W may be written as
w = a·… + b·… = A·(a, b),
where a and b are any real numbers and A = … So W = Col A. Thus W is a subspace by Theorem 3, and is a vector space.

15. An element in this set may be written as
r·… + s·… + t·… = A·(r, s, t),
where r, s and t are any real numbers. So the set is Col A, where A = …

16. An element in this set may be written as
b·… + c·… + d·… = A·(b, c, d),
where b, c and d are any real numbers. So the set is Col A, where A = …

17. The matrix A is a 4×2 matrix. Thus (a) Nul A is a subspace of R^2, and (b) Col A is a subspace of R^4.

18. The matrix A is a 4×3 matrix. Thus (a) Nul A is a subspace of R^3, and (b) Col A is a subspace of R^4.

19. The matrix A is a 2×5 matrix. Thus (a) Nul A is a subspace of R^5, and (b) Col A is a subspace of R^2.

20. The matrix A is a 1×5 matrix. Thus (a) Nul A is a subspace of R^5, and (b) Col A is a subspace of R^1 = R.

21. Either column of A is a nonzero vector in Col A. To find a nonzero vector in Nul A, find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,

the general solution is x1 = 3x2, with x2 free. Letting x2 be a nonzero value (say x2 = 1) gives the nonzero vector x = (3, 1), which is in Nul A.

22. Any column of A is a nonzero vector in Col A. To find a nonzero vector in Nul A, find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ …,
the general solution is x1 = 7x3 − 6x4, x2 = −4x3 + 2x4, with x3 and x4 free. Letting x3 and x4 be nonzero values (say x3 = x4 = 1) gives the nonzero vector x = (1, −2, 1, 1), which is in Nul A.

23. Consider the system with augmented matrix [A w]. Since
[A w] ~ …,
the system is consistent and w is in Col A. Also, since Aw = … = 0, w is in Nul A.

24. Consider the system with augmented matrix [A w]. Since
[A w] ~ …,
the system is consistent and w is in Col A. Also, since Aw = … = 0, w is in Nul A.

25. a. True. See the definition before Example 1.
b. False. See Theorem 2.
c. True. See the remark just before Example 4.
d. False. The equation Ax = b must be consistent for every b. See #7 in the table on page …
e. True. See Figure 2.
f. True. See the remark after Theorem 3.

26. a. True. See Theorem 2.
b. True. See Theorem 3.
c. False. See the box after Theorem 3.
d. True. See the paragraph after the definition of a linear transformation.
e. True. See Figure 2.
f. True. See the paragraph before Example 8.

27. Let A be the coefficient matrix of the given homogeneous system of equations. Since Ax = 0 for x = …, x is in Nul A. Since Nul A is a subspace of R^3, it is closed under scalar multiplication. Thus any scalar multiple of x is also in Nul A, and the scaled values x1 = …, x2 = …, x3 = … also give a solution to the system of equations.

28. Let A be the coefficient matrix of the given systems of equations. Since the first system has a solution, the constant vector b is in Col A. Since Col A is a subspace of R^3, it is closed under scalar multiplication. Thus 5b is also in Col A, and the second system of equations must thus have a solution.

29. a. Since A·0 = 0, the zero vector is in Col A.
b. Since Ax + Aw = A(x + w), Ax + Aw is in Col A.
c. Since c(Ax) = A(cx), cAx is in Col A.

30. Since T(0_V) = 0_W, the zero vector 0_W of W is in the range of T. Let T(x) and T(w) be typical elements in the range of T. Then since T(x) + T(w) = T(x + w), T(x) + T(w) is in the range of T, and the range of T is closed under vector addition. Let c be any scalar. Then since c·T(x) = T(cx), c·T(x) is in the range of T, and the range of T is closed under scalar multiplication. Hence the range of T is a subspace of W.

31. a. Let p and q be arbitrary polynomials in P_2, and let c be any scalar. Then
T(p + q) = ((p + q)(0), (p + q)(1)) = (p(0) + q(0), p(1) + q(1)) = (p(0), p(1)) + (q(0), q(1)) = T(p) + T(q),
and
T(cp) = ((cp)(0), (cp)(1)) = (c·p(0), c·p(1)) = c·(p(0), p(1)) = c·T(p),
so T is a linear transformation.

b. Any quadratic polynomial q for which q(0) = 0 and q(1) = 0 will be in the kernel of T. The polynomial q must then be a multiple of p(t) = t(t − 1). Given any vector (x1, x2) in R^2, the polynomial p = x1 + (x2 − x1)t has p(0) = x1 and p(1) = x2. Thus the range of T is all of R^2.

32. Any quadratic polynomial q for which q(0) = 0 will be in the kernel of T. The polynomial q must then be q = at + bt^2. Thus the polynomials p1(t) = t and p2(t) = t^2 span the kernel of T. If a vector is in the range of T, it must be of the form (a, a). If a vector is of this form, it is the image of the polynomial p(t) = a in P_2. Thus the range of T is {(a, a): a real}.

33. a. For any A and B in M(2×2) and for any scalar c,
T(A + B) = (A + B) + (A + B)^T = A + B + A^T + B^T = (A + A^T) + (B + B^T) = T(A) + T(B)
and
T(cA) = (cA) + (cA)^T = cA + cA^T = c(A + A^T) = c·T(A),
so T is a linear transformation.
b. Let B be an element of M(2×2) with B^T = B, and let A = (1/2)B. Then
T(A) = A + A^T = (1/2)B + ((1/2)B)^T = (1/2)B + (1/2)B^T = (1/2)B + (1/2)B = B.
c. Part b showed that the range of T contains the set of all B in M(2×2) with B^T = B. It must also be shown that any B in the range of T has this property. Let B be in the range of T. Then B = T(A) for some A in M(2×2). Then B = A + A^T, and
B^T = (A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T = B,
so B has the property that B^T = B.
d. Let A = [a b; c d] be in the kernel of T. Then T(A) = A + A^T = 0, so
[a b; c d] + [a c; b d] = [2a, b + c; b + c, 2d] = [0 0; 0 0].
Solving, it is found that a = d = 0 and c = −b. Thus the kernel of T is
{[0 b; −b 0]: b real}.

34. Let f and g be any elements in C[0, 1] and let c be any scalar. Then T(f) is the antiderivative F of f with F(0) = 0 and T(g) is the antiderivative G of g with G(0) = 0. By the rules for antidifferentiation, F + G will be an antiderivative of f + g, and (F + G)(0) = F(0) + G(0) = 0 + 0 = 0. Thus T(f + g) = T(f) + T(g). Likewise cF will be an antiderivative of cf, and (cF)(0) = c·F(0) = c·0 = 0. Thus T(cf) = c·T(f), and T is a linear transformation. To find the kernel of T, we must find all functions f in C[0, 1] with antiderivative equal to the zero function. The only function with this property is the zero function 0, so the kernel of T is {0}.

35. Since U is a subspace of V, 0_V is in U. Since T is linear, T(0_V) = 0_W. So 0_W is in T(U). Let T(x) and T(y) be typical elements in T(U). Then x and y are in U, and since U is a subspace of V, x + y is also in U. Since T is linear, T(x) + T(y) = T(x + y). So T(x) + T(y) is in T(U), and T(U) is closed under vector addition. Let c be any scalar. Then since x is in U and U is a subspace of V, cx is in U. Since T is linear, c·T(x) = T(cx) and c·T(x) is in T(U). Thus T(U) is closed under scalar multiplication, and T(U) is a subspace of W.

36. Since Z is a subspace of W, 0_W is in Z. Since T is linear, T(0_V) = 0_W. So 0_V is in U. Let x and y be typical elements in U. Then T(x) and T(y) are in Z, and since Z is a subspace of W, T(x) + T(y) is also in Z. Since T is linear, T(x) + T(y) = T(x + y). So T(x + y) is in Z, and x + y is in U. Thus U is closed under vector addition. Let c be any scalar. Then since x is in U, T(x) is in Z. Since Z is a subspace of W, c·T(x) is also in Z. Since T is linear, c·T(x) = T(cx), so T(cx) is in Z. Thus cx is in U, and U is closed under scalar multiplication. Hence U is a subspace of V.

37. [M] Consider the system with augmented matrix [A w]. Since
[A w] ~ …,
the system is consistent and w is in Col A. Also, since Aw = … ≠ 0, w is not in Nul A.

38. [M] Consider the system with augmented matrix [A w]. Since
[A w] ~ …,
the system is consistent and w is in Col A. Also, since Aw = … = 0, w is in Nul A.

39. [M] a. To show that a3 and a5 are in the column space of B, we can row reduce the matrices [B a3] and [B a5]: … Since both these systems are consistent, a3 and a5 are in the column space of B. Notice that the same conclusions can be drawn by observing the reduced row echelon form for A: …
b. We find the general solution of Ax = 0 in terms of the free variables by using the reduced row echelon form of A given above:
x1 = …x3 + …x5, x2 = …x3 + …x5, x4 = −4x5, with x3 and x5 free. So
x = x3·(…) + x5·(…),
and a spanning set for Nul A is {…, …}.
c. The reduced row echelon form of A shows that the columns of A are linearly dependent and do not span R^4. Thus by Theorem 12 in Section 1.9, T is neither one-to-one nor onto.

40. [M] Since the line lies both in H = Span{v1, v2} and in K = Span{v3, v4}, w can be written both as c1·v1 + c2·v2 and c3·v3 + c4·v4. To find w we must find the cj's which solve
c1·v1 + c2·v2 − c3·v3 − c4·v4 = 0.
Row reduction of [v1 v2 v3 v4 0] yields …,

so the vector of cj's must be a multiple of (…, …, …, …). One simple choice of this multiple gives
w = … = (…, 48, …).
Another choice for w is (…, …, …).

4.3 SOLUTIONS

Notes: The definition for basis is given initially for subspaces because this emphasizes that the basis elements must be in the subspace. Students often overlook this point when the definition is given for a vector space (see Exercise 25). The subsections on bases for Nul A and Col A are essential for Sections 4.5 and 4.6. The subsection on "Two Views of a Basis" is also fundamental to understanding the interplay between linearly independent sets, spanning sets, and bases. Key exercises in this section are Exercises 21–25, which help to deepen students' understanding of these different subsets of a vector space.

1. Consider the matrix whose columns are the given set of vectors. This 3×3 matrix is in echelon form, and has 3 pivot positions. Thus by the Invertible Matrix Theorem, its columns are linearly independent and span R^3. So the given set of vectors is a basis for R^3.

2. Since the zero vector is a member of the given set of vectors, the set cannot be linearly independent and thus cannot be a basis for R^3. Now consider the matrix whose columns are the given set of vectors. This 3×3 matrix has only 2 pivot positions. Thus by the Invertible Matrix Theorem, its columns do not span R^3.

3. Consider the matrix whose columns are the given set of vectors. The reduced echelon form of this matrix is …, so the matrix has only two pivot positions. Thus its columns do not form a basis for R^3; the set of vectors is neither linearly independent nor does it span R^3.

4. Consider the matrix whose columns are the given set of vectors. The reduced echelon form of this matrix is …, so the matrix has three pivot positions. Thus its columns form a basis for R^3.

5. Since the zero vector is a member of the given set of vectors, the set cannot be linearly independent and thus cannot be a basis for R^3. Now consider the matrix whose columns are the given set of vectors. The reduced echelon form of this matrix is …, so the matrix has a pivot in each row. Thus the given set of vectors spans R^3.
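Note: The basis tests in Exercises 1–8 can be automated: the given vectors form a basis for R^3 exactly when the matrix having them as columns has three pivot positions. A MATLAB sketch, with the vectors assumed already entered as v1, v2, v3:

    V = [v1 v2 v3];
    rank(V)               % equals 3 exactly when the columns form a basis for R^3
    rref(V)               % a pivot in every row and every column confirms it

When rank(V) is less than 3, rref(V) also shows which property fails: a row without a pivot means the set does not span, a column without a pivot means the set is linearly dependent.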

6. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot in each row, its columns cannot span R^3; thus the given set of vectors is not a basis for R^3. The reduced echelon form of the matrix is …, so the matrix has a pivot in each column. Thus the given set of vectors is linearly independent.

7. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot in each row, its columns cannot span R^3; thus the given set of vectors is not a basis for R^3. The reduced echelon form of the matrix is …, so the matrix has a pivot in each column. Thus the given set of vectors is linearly independent.

8. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot in each column, the set cannot be linearly independent and thus cannot be a basis for R^3. The reduced echelon form of this matrix is …, so the matrix has a pivot in each row. Thus the given set of vectors spans R^3.

9. We find the general solution of Ax = 0 in terms of the free variables by using the reduced echelon form of A:
x1 = …x3 + …x4, x2 = 5x3 − 4x4, with x3 and x4 free. So
x = x3·(…) + x4·(…),
and a basis for Nul A is {…, …}.

10. We find the general solution of Ax = 0 in terms of the free variables by using the reduced echelon form of A: x1 = 5x3 - 7x5, x2 = 4x3 - 6x5, x4 = 3x5, with x3 and x5 free. So
x = x3(5, 4, 1, 0, 0) + x5(-7, -6, 0, 3, 1),
and a basis for Nul A is {(5, 4, 1, 0, 0), (-7, -6, 0, 3, 1)}.
11. We want to find a basis for the set of vectors in the plane x + 2y + z = 0. Let A = [1 2 1]. Then we wish to find a basis for Nul A. We find the general solution of Ax = 0 in terms of the free variables: x = -2y - z, with y and z free. So
(x, y, z) = y(-2, 1, 0) + z(-1, 0, 1),
and a basis for Nul A is {(-2, 1, 0), (-1, 0, 1)}.
12. We want to find a basis for the set of vectors in the line 5x - y = 0. Let A = [5 -1]. Then we wish to find a basis for Nul A. We find the general solution of Ax = 0 in terms of the free variables: y = 5x, with x free. So
(x, y) = x(1, 5),
and a basis for Nul A is {(1, 5)}.

13. Since B is a row echelon form of A, we see that the first and second columns of A are its pivot columns. Thus a basis for Col A is {a1, a2}, the first two columns of the original matrix A. To find a basis for Nul A, we find the general solution of Ax = 0 in terms of the free variables: x1 = 6x3 - 5x4, x2 = (-5/2)x3 - (3/2)x4, with x3 and x4 free. So
x = x3(6, -5/2, 1, 0) + x4(-5, -3/2, 0, 1),
and a basis for Nul A is {(6, -5/2, 1, 0), (-5, -3/2, 0, 1)}.
14. Since B is a row echelon form of A, we see that the first, third, and fifth columns of A are its pivot columns. Thus a basis for Col A is {a1, a3, a5}. To find a basis for Nul A, we find the general solution of Ax = 0 in terms of the free variables, mentally completing the row reduction of B: x1 = -2x2 - 4x4, x3 = (7/5)x4, x5 = 0, with x2 and x4 free. So
x = x2(-2, 1, 0, 0, 0) + x4(-4, 0, 7/5, 1, 0),
and a basis for Nul A is {(-2, 1, 0, 0, 0), (-4, 0, 7/5, 1, 0)}.
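The two-step recipe of Exercises 13-14 (pivot columns of the original A for Col A, free-variable solution for Nul A) can be scripted directly; the entries of A below are hypothetical:

A = [4 5 9 -2; 6 5 1 12; 3 4 8 -3];   % hypothetical matrix
[R, pivots] = rref(A);
colBasis = A(:, pivots)    % basis for Col A: pivot columns of the ORIGINAL A
nulBasis = null(A, 'r')    % basis for Nul A from the free variables

Note that the pivot columns of R itself generally do not lie in Col A; the row-reduced matrix only locates which columns of A to keep.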

15. This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. The reduced echelon form of A shows that the first, second, and fourth columns of A are its pivot columns. Thus a basis for the space spanned by the given vectors is {v1, v2, v4}.
16. This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. The reduced echelon form of A shows that the first, second, and third columns of A are its pivot columns. Thus a basis for the space spanned by the given vectors is {v1, v2, v3}.
17. [M] This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. The reduced echelon form of A shows that the first, second, and third columns of A are its pivot columns. Thus a basis for the space spanned by the given vectors is {v1, v2, v3}.

18. [M] This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. The reduced echelon form of A shows that the first, second, and fourth columns of A are its pivot columns. Thus a basis for the space spanned by the given vectors is {v1, v2, v4}.
19. Since 4v1 + 5v2 = 3v3, we see that each of the vectors is a linear combination of the others. Thus the sets {v1, v2}, {v1, v3}, and {v2, v3} all span H. Since we may confirm that none of the three vectors is a multiple of any of the others, the sets {v1, v2}, {v1, v3}, and {v2, v3} are linearly independent, and thus each forms a basis for H.
20. Since v1 = 3v2 + 5v3, we see that each of the vectors is a linear combination of the others. Thus the sets {v1, v2}, {v1, v3}, and {v2, v3} all span H. Since we may confirm that none of the three vectors is a multiple of any of the others, the sets {v1, v2}, {v1, v3}, and {v2, v3} are linearly independent, and thus each forms a basis for H.
21. a. False. The zero vector by itself is linearly dependent. See the paragraph preceding Theorem 4.
b. False. The set {b1, ..., bp} must also be linearly independent. See the definition of a basis.
c. True. See Example 3.
d. False. See the subsection Two Views of a Basis.
e. False. See the box before Example 9.
22. a. False. The subspace spanned by the set must also coincide with H. See the definition of a basis.
b. True. Apply the Spanning Set Theorem to V instead of H. The space V is nonzero because the spanning set uses nonzero vectors.
c. True. See the subsection Two Views of a Basis.
d. False. See the two paragraphs before Example 8.
e. False. See the warning after Theorem 6.
23. Let A = [v1 v2 v3 v4]. Then A is square and its columns span R^4, since Span{v1, v2, v3, v4} = R^4. So its columns are linearly independent by the Invertible Matrix Theorem, and {v1, v2, v3, v4} is a basis for R^4.
24. Let A = [v1 ... vn]. Then A is square and its columns are linearly independent, so its columns span R^n by the Invertible Matrix Theorem. Thus {v1, ..., vn} is a basis for R^n.

25. In order for the set to be a basis for H, {v1, v2, v3} must be a spanning set for H; that is, H = Span{v1, v2, v3}. The exercise shows that H is a subset of Span{v1, v2, v3}, but there are vectors in Span{v1, v2, v3} which are not in H (v1 and v3, for example). So H is not equal to Span{v1, v2, v3}, and {v1, v2, v3} is not a basis for H.
26. Since sin t cos t = (1/2) sin 2t, the set {sin t, sin 2t} spans the subspace. By inspection we note that this set is linearly independent, so {sin t, sin 2t} is a basis for the subspace.
27. The set {cos wt, sin wt} spans the subspace. By inspection we note that this set is linearly independent, so {cos wt, sin wt} is a basis for the subspace.
28. The set {e^(-bt), te^(-bt)} spans the subspace. By inspection we note that this set is linearly independent, so {e^(-bt), te^(-bt)} is a basis for the subspace.
29. Let A be the n x k matrix [v1 ... vk]. Since A has fewer columns than rows, there cannot be a pivot position in each row of A. By Theorem 4 in Section 1.4, the columns of A do not span R^n and thus are not a basis for R^n.
30. Let A be the n x k matrix [v1 ... vk]. Since A has fewer rows than columns, there cannot be a pivot position in each column of A. By Theorem 8 in Section 1.6, the columns of A are not linearly independent and thus are not a basis for R^n.
31. Suppose that {v1, ..., vp} is linearly dependent. Then there exist scalars c1, ..., cp, not all zero, with c1v1 + ... + cpvp = 0. Since T is linear, T(c1v1 + ... + cpvp) = c1T(v1) + ... + cpT(vp), and T(c1v1 + ... + cpvp) = T(0) = 0. Thus c1T(v1) + ... + cpT(vp) = 0, and since not all of the ci are zero, {T(v1), ..., T(vp)} is linearly dependent.
32. Suppose that {T(v1), ..., T(vp)} is linearly dependent. Then there exist scalars c1, ..., cp, not all zero, with c1T(v1) + ... + cpT(vp) = 0. Since T is linear, T(c1v1 + ... + cpvp) = c1T(v1) + ... + cpT(vp) = 0 = T(0). Since T is one-to-one, T(c1v1 + ... + cpvp) = T(0) implies that c1v1 + ... + cpvp = 0. Since not all of the ci are zero, {v1, ..., vp} is linearly dependent.

33. Neither polynomial is a multiple of the other polynomial. So {p1, p2} is a linearly independent set in P3. Note: {p1, p2} is also a linearly independent set in P2, since p1 and p2 both happen to be in P2.
34. By inspection, p3 = p1 + 2p2, or p1 + 2p2 - p3 = 0. By the Spanning Set Theorem, Span{p1, p2, p3} = Span{p1, p2}. Since neither p1 nor p2 is a multiple of the other, they are linearly independent, and hence {p1, p2} is a basis for Span{p1, p2, p3}.
35. Let {v1, v3} be any linearly independent set in a vector space V, and let v2 and v4 each be linear combinations of v1 and v3. For instance, let v2 = 5v1 and v4 = v1 + v3. Then {v1, v3} is a basis for Span{v1, v2, v3, v4}.
36. [M] Row reduce the following matrices to identify their pivot columns: [u1 u2 u3] has pivots in its first two columns, so {u1, u2} is a basis for H; [v1 v2 v3] has pivots in its first two columns, so {v1, v2} is a basis for K; and [u1 u2 u3 v1 v2 v3] has pivots in its first, second, and fourth columns, so {u1, u2, v1} is a basis for H + K.
37. [M] For example, writing c1*t + c2*sin t + c3*cos 2t + c4*sin t cos t = 0 with t = 0, .1, .2, .3 gives the following coefficient matrix A for the homogeneous system Ac = 0 (to four decimal places):
A = [0, sin 0, cos 0, sin 0 cos 0; .1, sin .1, cos .2, sin .1 cos .1; .2, sin .2, cos .4, sin .2 cos .2; .3, sin .3, cos .6, sin .3 cos .3].
This matrix is invertible, so the system Ac = 0 has only the trivial solution, and {t, sin t, cos 2t, sin t cos t} is a linearly independent set of functions.
38. [M] For example, writing c1 + c2 cos t + c3 cos^2 t + c4 cos^3 t + c5 cos^4 t + c6 cos^5 t + c7 cos^6 t = 0 with t = 0, .1, .2, .3, .4, .5, .6 gives the 7x7 coefficient matrix A for the homogeneous system Ac = 0 whose rows are (1, cos t, cos^2 t, cos^3 t, cos^4 t, cos^5 t, cos^6 t) evaluated at those seven values of t. This matrix is invertible, so the system Ac = 0 has only the trivial solution, and {1, cos t, cos^2 t, cos^3 t, cos^4 t, cos^5 t, cos^6 t} is a linearly independent set of functions.
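The sampling idea of Exercises 37-38 is short to code. A MATLAB sketch of Exercise 37's computation (same four functions, same sample points t = 0, .1, .2, .3):

t = (0:0.1:0.3)';                            % four sample points
A = [t, sin(t), cos(2*t), sin(t).*cos(t)];   % one column per function
rank(A)                                      % returns 4, so A is invertible

Since A is invertible, Ac = 0 has only the trivial solution, so no nontrivial dependence relation can hold among the functions; this is the same argument made above.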

4.4 SOLUTIONS
Notes: Section 4.7 depends heavily on this section, as does Section 5.4. It is possible to cover the R^n parts of the two later sections, however, if the first half of Section 4.4 (and perhaps Example 7) is covered. The linearity of the coordinate mapping is used in Section 5.4 to find the matrix of a transformation relative to two bases. The change-of-coordinates matrix appears in Section 5.4, Theorem 8 and Exercise 27. The concept of an isomorphism is needed in the proof of Theorem 17 in Section 4.8. Exercise 25 is used in Section 4.7 to show that the change-of-coordinates matrix is invertible.
1.-3. In each of these exercises, x is recovered from its coordinate vector by forming the corresponding linear combination of the basis vectors, x = c1b1 + c2b2 (or x = c1b1 + c2b2 + c3b3); in Exercise 2, for example, x = 8b1 + (-5)b2.

4. As in Exercises 1-3, x is the linear combination of the basis vectors whose weights are the entries of the given coordinate vector.
5.-8. In each case the matrix [b1 b2 x] (or [b1 b2 b3 x]) row reduces to [I [x]_B], and the B-coordinate vector [x]_B is read from the final column.
9. The change-of-coordinates matrix from B to the standard basis in R^2 is P_B = [b1 b2].
10. The change-of-coordinates matrix from B to the standard basis in R^3 is P_B = [b1 b2 b3].
11. Since P_B^(-1) converts x into its B-coordinate vector, we find that [x]_B = P_B^(-1) x.
12. Since P_B^(-1) converts x into its B-coordinate vector, we find that [x]_B = P_B^(-1) x.
13. We must find c1, c2, and c3 such that c1 p1(t) + c2 p2(t) + c3 p3(t) = p(t). Equating the coefficients of the two polynomials produces a system of three linear equations in c1, c2, and c3.

We row reduce the augmented matrix for the system of equations to find [p]_B. One may also solve this problem using the coordinate vectors of the given polynomials relative to the standard basis {1, t, t^2}; the same system of linear equations results.
14. We must find c1, c2, and c3 such that c1 p1(t) + c2 p2(t) + c3 p3(t) = p(t). Equating the coefficients of the two polynomials produces a system of linear equations, and row reducing its augmented matrix yields [p]_B. One may also solve this problem using the coordinate vectors of the given polynomials relative to the standard basis {1, t, t^2}; the same system of linear equations results.
15. a. True. See the definition of the B-coordinate vector.
b. False. See Equation (4).
c. False. P3 is isomorphic to R^4. See Example 5.
16. a. True. See Example 2.
b. False. By definition, the coordinate mapping goes in the opposite direction.
c. True. If the plane passes through the origin, as in Example 7, the plane is isomorphic to R^2.
17. We must solve the vector equation x1 v1 + x2 v2 + x3 v3 = b. We row reduce the augmented matrix for the system of equations and find that x1 and x2 can be expressed in terms of x3, where x3 can be any real number. Letting x3 = 0 and x3 = 1 produces two different ways to express b as a linear combination of the other vectors: with x3 = 0 one expression uses only v1 and v2, and with x3 = 1 a second expression also involves v3. There are infinitely many correct answers to this problem.
18. For each k, bk = 0*b1 + ... + 1*bk + ... + 0*bn, so [bk]_B = (0, ..., 1, ..., 0) = ek.
19. The set S spans V because every x in V has a representation as a (unique) linear combination of elements in S. To show linear independence, suppose that S = {v1, ..., vn} and that c1v1 + ... + cnvn = 0 for some scalars c1, ..., cn. The case when c1 = ... = cn = 0 is one possibility.

By hypothesis, that trivial combination is the unique (and thus the only) possible representation of the zero vector as a linear combination of the elements in S. So S is linearly independent and is thus a basis for V.
20. For w in V there exist scalars k1, k2, k3, and k4 such that
w = k1v1 + k2v2 + k3v3 + k4v4   (1)
because {v1, v2, v3, v4} spans V. Because the set is linearly dependent, there exist scalars c1, c2, c3, and c4, not all zero, such that
0 = c1v1 + c2v2 + c3v3 + c4v4   (2)
Adding (1) and (2) gives
w = w + 0 = (k1 + c1)v1 + (k2 + c2)v2 + (k3 + c3)v3 + (k4 + c4)v4   (3)
At least one of the weights in (3) differs from the corresponding weight in (1) because at least one of the ci is nonzero. So w is expressed in more than one way as a linear combination of v1, v2, v3, and v4.
21. The matrix of the transformation will be P_B^(-1).
22. The matrix of the transformation will be [b1 b2]^(-1) = P_B^(-1).
23. Suppose that
[u]_B = [w]_B = (c1, ..., cn).
By definition of coordinate vectors, u = w = c1b1 + ... + cnbn. Since u and w were arbitrary elements of V, the coordinate mapping is one-to-one.
24. Given y = (y1, ..., yn) in R^n, let u = y1b1 + ... + ynbn. Then, by definition, [u]_B = y. Since y was arbitrary, the coordinate mapping is onto R^n.
25. Since the coordinate mapping is one-to-one, the following equations have the same solutions c1, ..., cp:
c1u1 + ... + cpup = 0   (the zero vector in V)   (4)
[c1u1 + ... + cpup]_B = [0]_B   (the zero vector in R^n)   (5)
Since the coordinate mapping is linear, (5) is equivalent to
c1[u1]_B + ... + cp[up]_B = 0   (6)
Thus (4) has only the trivial solution if and only if (6) has only the trivial solution. It follows that {u1, ..., up} is linearly independent if and only if {[u1]_B, ..., [up]_B} is linearly independent. This result also follows directly from Exercises 31 and 32 in Section 4.3.

26. By definition, w is a linear combination of u1, ..., up if and only if there exist scalars c1, ..., cp such that
w = c1u1 + ... + cpup   (7)
Since the coordinate mapping is linear,
[w]_B = c1[u1]_B + ... + cp[up]_B   (8)
Conversely, (8) implies (7) because the coordinate mapping is one-to-one. Thus w is a linear combination of u1, ..., up if and only if [w]_B is a linear combination of [u1]_B, ..., [up]_B.
Note: Students need to be urged to write, not just to compute, in Exercises 27-34. The language in the Study Guide solution of Exercise 31 provides a model for the students. In Exercise 32, students may have difficulty distinguishing between the two isomorphic vector spaces, sometimes giving a vector in R^3 as an answer for part (b).
27. The coordinate mapping produces the coordinate vectors of the three polynomials relative to the standard basis of P3. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing. Since the matrix has a pivot in each column, its columns (and thus the given polynomials) are linearly independent.
28. The coordinate mapping produces the coordinate vectors of the three polynomials relative to the standard basis of P3. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing. Since the matrix does not have a pivot in each column, its columns (and thus the given polynomials) are linearly dependent.
29. The coordinate mapping produces the coordinate vectors of the three polynomials relative to the standard basis of P3. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing. Since the matrix does not have a pivot in each column, its columns (and thus the given polynomials) are linearly dependent.
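The coordinate-vector technique of Exercises 27-30 translates directly to code: map each polynomial to its coefficient vector relative to {1, t, t^2, t^3} and count pivots. The three cubics encoded below are hypothetical:

P = [1  0  2;    % constant coefficients (hypothetical polynomials)
     0  1 -1;    % t coefficients
     2  3  0;    % t^2 coefficients
     0  0  1];   % t^3 coefficients
[R, pivots] = rref(P);
numel(pivots)    % equals 3 exactly when the polynomials are independent

Because the coordinate mapping is an isomorphism from P3 onto R^4, the pivot count for the coordinate vectors answers the independence question for the polynomials themselves.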

30. The coordinate mapping produces the coordinate vectors of the three polynomials relative to the standard basis of P3. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing. Since the matrix does not have a pivot in each column, its columns (and thus the given polynomials) are linearly dependent.
31. In each part, place the coordinate vectors of the polynomials into the columns of a matrix and reduce the matrix to echelon form.
a. Since there is not a pivot in each row, the original four column vectors do not span R^4. By the isomorphism between R^4 and P3, the given set of polynomials does not span P3.
b. Since there is a pivot in each row, the original four column vectors span R^4. By the isomorphism between R^4 and P3, the given set of polynomials spans P3.
32. a. Place the coordinate vectors of the polynomials into the columns of a matrix and reduce the matrix to echelon form. The resulting matrix is invertible, since it is row equivalent to I3. The original three column vectors form a basis for R^3 by the Invertible Matrix Theorem. By the isomorphism between R^3 and P2, the corresponding polynomials form a basis for P2.
b. Since [q]_B = (-3, 1, 2), q = -3p1 + p2 + 2p3. One might do the algebra in P2, or choose to compute the matrix-vector product directly; this combination of the columns of the matrix corresponds to the same combination of p1, p2, and p3.
33. The coordinate mapping produces the coordinate vectors of the four polynomials. To determine whether the set of polynomials is a basis for P3, we investigate whether the coordinate vectors form a basis for R^4, writing the vectors as the columns of a matrix and row reducing.

We find that the matrix is not row equivalent to I4. Thus the coordinate vectors do not form a basis for R^4. By the isomorphism between R^4 and P3, the given set of polynomials does not form a basis for P3.
34. The coordinate mapping produces the coordinate vectors of the four polynomials. To determine whether the set of polynomials is a basis for P3, we investigate whether the coordinate vectors form a basis for R^4. Writing the vectors as the columns of a matrix and row reducing, we find that the matrix is not row equivalent to I4. Thus the coordinate vectors do not form a basis for R^4. By the isomorphism between R^4 and P3, the given set of polynomials does not form a basis for P3.
35. To show that x is in H = Span{v1, v2}, we must show that the vector equation x1v1 + x2v2 = x has a solution. The augmented matrix [v1 v2 x] may be row reduced to show that the system is consistent. Since this system has a solution, x is in H. The solution also gives the B-coordinate vector for x: since x = x1v1 + x2v2 = (-5/3)v1 + (8/3)v2, [x]_B = (-5/3, 8/3).
36. To show that x is in H = Span{v1, v2, v3}, we must show that the vector equation x1v1 + x2v2 + x3v3 = x has a solution. The augmented matrix [v1 v2 v3 x] may be row reduced to show that the system is consistent. The first three columns show that B is a basis for H. Moreover, since the system has a solution, x is in H. The solution gives the B-coordinate vector for x: since x = x1v1 + x2v2 + x3v3 = 3v1 + 5v2 + v3, [x]_B = (3, 5, 1).
37. We are given [x]_B, where B = {b1, b2, b3} is the given basis of R^3. To find the coordinates of x relative to the standard basis in R^3, we must find x. We compute x = P_B [x]_B, where P_B = [b1 b2 b3].
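Exercises 35-38 all reduce to a single matrix-vector product once the basis matrix is assembled. A minimal sketch, with hypothetical basis vectors and the B-coordinates found in Exercise 35:

b1 = [-1; 2; 0]; b2 = [3; 5; 1];   % hypothetical basis for a plane H in R^3
PB = [b1 b2];                       % change-of-coordinates matrix
x  = PB * [-5/3; 8/3]               % recovers x from [x]_B
% Conversely, [x]_B solves PB*c = x; in MATLAB, c = PB \ x.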

38. We are given [x]_B, where B = {b1, b2, b3} is the given basis of R^3. To find the coordinates of x relative to the standard basis in R^3, we compute x = P_B [x]_B, where P_B = [b1 b2 b3].

4.5 SOLUTIONS
Notes: Theorem 9 is true because a vector space isomorphic to R^n has the same algebraic properties as R^n; a proof of this result may not be needed to convince the class. The proof of Theorem 9 relies upon the fact that the coordinate mapping is a linear transformation (which is Theorem 8 in Section 4.4). If you have skipped this result, you can prove Theorem 9 as is done in Introduction to Linear Algebra by Serge Lang (Springer-Verlag, New York, 1986). There are two separate groups of true-false questions in this section; the second batch is more theoretical in nature. Example 4 is useful to get students to visualize subspaces of different dimensions, and to see the relationships between subspaces of different dimensions. Exercises 31 and 32 investigate the relationship between the dimensions of the domain and the range of a linear transformation; Exercise 32 is mentioned in the proof of Theorem 17 in Section 4.8.
1. This subspace is H = Span{v1, v2}, where v1 and v2 are the two vectors appearing in the spanning description. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
2. This subspace is H = Span{v1, v2}. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
3. This subspace is H = Span{v1, v2, v3}. Theorem 4 in Section 4.3 can be used to show that this set is linearly independent: v1 is not zero, v2 is not a multiple of v1, and (since its first entry is not zero) v3 is not a linear combination of v1 and v2. Thus {v1, v2, v3} is linearly independent and is thus a basis for H. Alternatively, one can show that this set is linearly independent by row reducing the matrix [v1 v2 v3]. Hence the dimension of the subspace is 3.
4. This subspace is H = Span{v1, v2}. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.

5. This subspace is H = Span{v1, v2, v3}. Since v3 = 2v1, the set {v1, v2, v3} is linearly dependent. By the Spanning Set Theorem, v3 may be removed from the set with no change in the span of the set, so H = Span{v1, v2}. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
6. This subspace is H = Span{v1, v2, v3}. Since v3 = (1/3)v1, the set {v1, v2, v3} is linearly dependent. By the Spanning Set Theorem, v3 may be removed from the set with no change in the span of the set, so H = Span{v1, v2}. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
7. This subspace is H = Nul A. Since [A 0] row reduces to a matrix with a pivot in every column of A, the homogeneous system has only the trivial solution. Thus H = Nul A = {0}, and the dimension of H is 0.
8. From the equation a - 3b + c = 0, it is seen that (a, b, c, d) = b(3, 1, 0, 0) + c(-1, 0, 1, 0) + d(0, 0, 0, 1). Thus the subspace is H = Span{v1, v2, v3}, where v1 = (3, 1, 0, 0), v2 = (-1, 0, 1, 0), and v3 = (0, 0, 0, 1). It is easily checked that this set of vectors is linearly independent, either by appealing to Theorem 4 in Section 4.3 or by row reducing [v1 v2 v3]. Hence the dimension of the subspace is 3.
9. This subspace is H = {(a, b, a): a, b in R} = Span{v1, v2}, where v1 = (1, 0, 1) and v2 = (0, 1, 0). Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
10. The matrix A with these vectors as its columns row reduces to a matrix with two pivot columns, so the dimension of Col A (which is the dimension of H) is 2.
11. The matrix A with these vectors as its columns row reduces to a matrix with two pivot columns, so the dimension of Col A (which is the dimension of the subspace spanned by the vectors) is 2.
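For subspaces described as spans, as in Exercises 1-11, the dimension is just the rank of the matrix whose columns are the spanning vectors. A hypothetical example:

v1 = [1; 2; 0]; v2 = [3; 0; 1]; v3 = [2; 4; 0];   % v3 = 2*v1 (hypothetical vectors)
rank([v1 v2 v3])    % returns 2: the span is two-dimensional

The rank counts pivot columns, so redundant spanning vectors such as v3 are discarded automatically, just as the Spanning Set Theorem discards them by hand.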

12. The matrix A with these vectors as its columns row reduces to a matrix with three pivot columns, so the dimension of Col A (which is the dimension of the subspace spanned by the vectors) is 3.
13. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2.
14. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three columns without pivots, so the equation Ax = 0 has three free variables. Thus the dimension of Nul A is 3.
15. The matrix A is in echelon form. There are two pivot columns, so the dimension of Col A is 2. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2.
16. The matrix A row reduces to a matrix with two pivot columns, so the dimension of Col A is 2. There are no columns without pivots, so the equation Ax = 0 has only the trivial solution. Thus Nul A = {0}, and the dimension of Nul A is 0.
17. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are no columns without pivots, so the equation Ax = 0 has only the trivial solution. Thus Nul A = {0}, and the dimension of Nul A is 0.
18. The matrix A is in echelon form. There are two pivot columns, so the dimension of Col A is 2. There is one column without a pivot, so the equation Ax = 0 has one free variable. Thus the dimension of Nul A is 1.
19. a. True. See the box before Example 5.
b. False. The plane must pass through the origin; see Example 4.
c. False. The dimension of Pn is n + 1; see Example 1.
d. False. The set S must also have n elements; see Theorem 12.
e. True. See Theorem 9.
20. a. False. The set is not even a subset of R^3.
b. False. The number of free variables is equal to the dimension of Nul A; see the box before Example 5.
c. False. A basis could still have only finitely many elements, which would make the vector space finite-dimensional.
d. False. The set S must also have n elements; see Theorem 12.
e. True. See Example 4.
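The bookkeeping of Exercises 13-18 is the Rank Theorem in action: pivot columns count dim Col A, pivot-free columns count dim Nul A, and the two numbers add up to the number of columns. A numerical spot-check with a hypothetical echelon matrix:

A = [1 2 3 4; 0 1 5 6; 0 0 0 1];   % hypothetical echelon form
r = rank(A);                        % dim Col A = 3
dimNul = size(A, 2) - r             % dim Nul A = 4 - 3 = 1
size(null(A), 2)                    % agrees with dimNul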

21. The matrix whose columns are the coordinate vectors of the Hermite polynomials 1, 2t, -2 + 4t^2, and -12t + 8t^3 relative to the standard basis {1, t, t^2, t^3} of P3 is
A = [1 0 -2 0; 0 2 0 -12; 0 0 4 0; 0 0 0 8].
This matrix has 4 pivots, so its columns are linearly independent. Since their coordinate vectors form a linearly independent set, the Hermite polynomials themselves are linearly independent in P3. Since there are four Hermite polynomials and dim P3 = 4, the Basis Theorem states that the Hermite polynomials form a basis for P3.
22. The matrix whose columns are the coordinate vectors of the Laguerre polynomials 1, 1 - t, 2 - 4t + t^2, and 6 - 18t + 9t^2 - t^3 relative to the standard basis {1, t, t^2, t^3} of P3 is
A = [1 1 2 6; 0 -1 -4 -18; 0 0 1 9; 0 0 0 -1].
This matrix has 4 pivots, so its columns are linearly independent. Since their coordinate vectors form a linearly independent set, the Laguerre polynomials themselves are linearly independent in P3. Since there are four Laguerre polynomials and dim P3 = 4, the Basis Theorem states that the Laguerre polynomials form a basis for P3.
23. The coordinates of p(t) = 7 - 12t - 8t^2 + 12t^3 with respect to B satisfy
c1(1) + c2(2t) + c3(-2 + 4t^2) + c4(-12t + 8t^3) = 7 - 12t - 8t^2 + 12t^3.
Equating coefficients of like powers of t produces the system of equations
c1 - 2c3 = 7, 2c2 - 12c4 = -12, 4c3 = -8, 8c4 = 12.
Solving this system gives c1 = 3, c2 = 3, c3 = -2, c4 = 3/2, and [p]_B = (3, 3, -2, 3/2).
24. The coordinates of p(t) = 7 - 8t + 3t^2 with respect to B satisfy
c1(1) + c2(1 - t) + c3(2 - 4t + t^2) = 7 - 8t + 3t^2.
Equating coefficients of like powers of t produces the system of equations
c1 + c2 + 2c3 = 7, -c2 - 4c3 = -8, c3 = 3.
Solving this system gives c1 = 5, c2 = -4, c3 = 3, and [p]_B = (5, -4, 3).
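The linear system of Exercise 23 is exactly Ac = p, where the columns of A are the coordinate vectors of the Hermite polynomials and p holds the coordinates of p(t). This sketch assumes the entries reconstructed above:

% Coordinate vectors of 1, 2t, -2+4t^2, -12t+8t^3 relative to {1,t,t^2,t^3}.
A = [1 0 -2   0;
     0 2  0 -12;
     0 0  4   0;
     0 0  0   8];
p = [7; -12; -8; 12];   % coordinates of p(t) = 7 - 12t - 8t^2 + 12t^3
c = A \ p               % returns (3, 3, -2, 1.5), i.e. [p]_B

Because A is triangular with nonzero diagonal entries, it is invertible; that is the same observation used in Exercise 21 to show that the Hermite polynomials form a basis.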

25. Note first that n >= 1, since S cannot have fewer than one vector. Since n >= 1, V is not {0}. Suppose that S spans V and that S contains fewer than n vectors. By the Spanning Set Theorem, some subset S' of S is a basis for V. Since S contains fewer than n vectors, and S' is a subset of S, S' also contains fewer than n vectors. Thus there is a basis S' for V with fewer than n vectors, but this is impossible by Theorem 10, since dim V = n. Thus S cannot span V.
26. If dim V = dim H = 0, then V = {0} and H = {0}, so H = V. Suppose that dim V = dim H > 0. Then H contains a basis S consisting of n vectors. But applying the Basis Theorem to V, S is also a basis for V. Thus H = V = Span S.
27. Suppose that dim P = k < infinity. Now Pn is a subspace of P for all n, and dim P(k-1) = k, so dim P(k-1) = dim P. This would imply that P(k-1) = P, which is clearly untrue: for example, p(t) = t^k is in P but not in P(k-1). Thus the dimension of P cannot be finite.
28. The space C(R) contains P as a subspace. If C(R) were finite-dimensional, then P would also be finite-dimensional by Theorem 11. But P is infinite-dimensional by Exercise 27, so C(R) must also be infinite-dimensional.
29. a. True. Apply the Spanning Set Theorem to the set {v1, ..., vp} and produce a basis for V. This basis will not have more than p elements in it, so dim V <= p.
b. True. By Theorem 11, {v1, ..., vp} can be expanded to a basis for V. This basis will have at least p elements in it, so dim V >= p.
c. True. Take any basis (which will contain p vectors) for V and adjoin the zero vector to it.
30. a. False. For a counterexample, let v be a nonzero vector in R^3 and consider the set {v, 2v}. This is a linearly dependent set in R^3, but dim R^3 = 3 > 2.
b. True. If dim V <= p, there is a basis for V with p or fewer vectors. This basis would be a spanning set for V with p or fewer vectors, which contradicts the assumption.
c. False. For a counterexample, let v be a nonzero vector in R^3 and consider the set {v, 2v, 3v}. This is a linearly dependent set in R^3 with 3 vectors, and dim R^3 = 3.
31. Since H is a nonzero subspace of a finite-dimensional vector space V, H is finite-dimensional and has a basis. Let {u1, ..., up} be a basis for H. We show that the set {T(u1), ..., T(up)} spans T(H). Let y be in T(H). Then there is a vector x in H with T(x) = y. Since x is in H and {u1, ..., up} is a basis for H, x may be written as x = c1u1 + ... + cpup for some scalars c1, ..., cp. Since the transformation T is linear,
y = T(x) = T(c1u1 + ... + cpup) = c1T(u1) + ... + cpT(up).
Thus y is a linear combination of T(u1), ..., T(up), and {T(u1), ..., T(up)} spans T(H). By the Spanning Set Theorem, this set contains a basis for T(H). This basis then has not more than p vectors, and dim T(H) <= p = dim H.
32. Since H is a nonzero subspace of a finite-dimensional vector space V, H is finite-dimensional and has a basis. Let {u1, ..., up} be a basis for H. In Exercise 31 above it was shown that {T(u1), ..., T(up)} spans T(H). In Exercise 32 in Section 4.3, it was shown that {T(u1), ..., T(up)} is linearly independent. Thus {T(u1), ..., T(up)} is a basis for T(H), and dim T(H) = p = dim H.

33. [M] a. To find a basis for R^5 which contains the given vectors, we row reduce the matrix [v1 v2 v3 e1 e2 e3 e4 e5]. The first, second, third, fifth, and sixth columns are pivot columns, so these columns of the original matrix, {v1, v2, v3, e2, e3}, form a basis for R^5.
b. The original vectors are the first k columns of A. Since the set of original vectors is assumed to be linearly independent, these columns of A will be pivot columns, and the original set of vectors will be included in the basis. Since the columns of A include all the columns of the identity matrix, Col A = R^n.
34. [M] a. The B-coordinate vectors of the vectors in C are the columns of the matrix P. The matrix P is invertible because it is triangular with nonzero entries along its main diagonal. Thus its columns are linearly independent. Since the coordinate mapping is an isomorphism, this shows that the vectors in C are linearly independent.
b. We know that dim H = 7 because B is a basis for H. Now C is a linearly independent set, and the vectors in C lie in H by the trigonometric identities. Thus by the Basis Theorem, C is a basis for H.

4.6 SOLUTIONS
Notes: This section puts together most of the ideas from Chapter 4. The Rank Theorem is the main result in this section. Many students have difficulty with the difference in finding bases for the row space and the column space of a matrix. The first process uses the nonzero rows of an echelon form of the matrix. The second process uses the pivot columns of the original matrix, which are usually found through row reduction. Students may also have problems with the varied effects of row operations on the linear dependence relations among the rows and columns of a matrix. Problems of the type found in Exercises 9-16 make excellent test questions. Figure 1 and Example 4 prepare the way for Theorem 3 in Section 6.1; Exercises 27-29 anticipate Example 6 in Section 7.4.
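The device in Exercise 33, adjoining the identity columns and keeping the pivot columns, works for any linearly independent starting set; the two vectors below are hypothetical:

v = [1 2; 0 1; 3 0; 1 1; 2 2];   % two hypothetical independent vectors in R^5
A = [v eye(5)];                   % adjoin e1,...,e5
[R, pivots] = rref(A);
basis = A(:, pivots)              % five columns: the v's plus selected e's

As noted in part b, the original vectors always end up among the pivot columns because they are independent and come first, and Col A = R^5 because A contains the identity columns.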

1. The matrix B is in echelon form. There are two pivot columns, so the dimension of Col A is 2. There are two pivot rows, so the dimension of Row A is 2. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2. A basis for Col A is given by the pivot columns of A, {a1, a2}. A basis for Row A is given by the pivot rows of B:
{(1, 0, -1, 5), (0, -2, 5, -6)}.
To find a basis for Nul A, row reduce to reduced echelon form:
A ~ [1 0 -1 5; 0 1 -5/2 3].
The solution of Ax = 0 in terms of free variables is x1 = x3 - 5x4, x2 = (5/2)x3 - 3x4, with x3 and x4 free. Thus a basis for Nul A is
{(1, 5/2, 1, 0), (-5, -3, 0, 1)}.
2. The matrix B is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three pivot rows, so the dimension of Row A is 3. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2. A basis for Col A is given by the pivot columns of A, {a1, a3, a5}. A basis for Row A is given by the pivot rows of B:
{(1, -3, 0, 5, -7), (0, 0, 2, -3, 8), (0, 0, 0, 0, 5)}.
To find a basis for Nul A, row reduce to reduced echelon form. The solution of Ax = 0 in terms of free variables is x1 = 3x2 - 5x4, x3 = (3/2)x4, x5 = 0, with x2 and x4 free. Thus a basis for Nul A is
{(3, 1, 0, 0, 0), (-5, 0, 3/2, 1, 0)}.
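All three bases asked for in Exercises 1-2 come out of one reduction. The matrix below is hypothetical, chosen so that its echelon form matches the B of Exercise 1:

A = [1  0 -1  5;
     2 -2  3  4;
     1 -2  4 -1];                  % hypothetical matrix
[R, pivots] = rref(A);
colBasis = A(:, pivots)            % pivot columns of A itself
rowBasis = R(1:numel(pivots), :)   % nonzero rows of the reduced form
nulBasis = null(A, 'r')            % free-variable solutions of Ax = 0

Row operations preserve the row space and the null space but not the column space, which is why colBasis must be drawn from A while rowBasis may be drawn from R.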

3. The matrix B is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three pivot rows, so the dimension of Row A is 3. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2. A basis for Col A is given by the pivot columns of A, {a1, a3, a4}. A basis for Row A is given by the pivot rows of B:
{(2, -3, 6, 2, 5), (0, 0, 3, -1, 1), (0, 0, 0, 1, 3)}.
To find a basis for Nul A, row reduce to reduced echelon form. The solution of Ax = 0 in terms of free variables is x1 = (3/2)x2 + (9/2)x5, x3 = -(4/3)x5, x4 = -3x5, with x2 and x5 free. Thus a basis for Nul A is
{(3/2, 1, 0, 0, 0), (9/2, 0, -4/3, -3, 1)}.
4. The matrix B is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three pivot rows, so the dimension of Row A is 3. There are three columns without pivots, so the equation Ax = 0 has three free variables. Thus the dimension of Nul A is 3. A basis for Col A is given by the pivot columns of A, {a1, a2, a4}. A basis for Row A is given by the pivot rows of B:
{(1, 1, -3, 7, 9, -9), (0, 1, -1, 3, 4, -3), (0, 0, 0, 1, -1, -2)}.
To find a basis for Nul A, row reduce A to reduced echelon form.

The solution of Ax = 0 in terms of free variables is x1 = 2x3 - 9x5 - 2x6, x2 = x3 - 7x5 - 3x6, x4 = x5 + 2x6, with x3, x5, and x6 free. Thus a basis for Nul A is
{(2, 1, 1, 0, 0, 0), (-9, -7, 0, 1, 1, 0), (-2, -3, 0, 2, 0, 1)}.
5. By the Rank Theorem, dim Nul A = 8 - rank A = 8 - 3 = 5. Since dim Row A = rank A, dim Row A = 3. Since rank A^T = dim Col A^T = dim Row A, rank A^T = 3.
6. By the Rank Theorem, dim Nul A = 3 - rank A = 3 - 3 = 0. Since dim Row A = rank A, dim Row A = 3. Since rank A^T = dim Col A^T = dim Row A, rank A^T = 3.
7. Yes, Col A = R^4. Since A has four pivot columns, dim Col A = 4. Thus Col A is a four-dimensional subspace of R^4, and Col A = R^4. No, Nul A is not equal to R^3. It is true that dim Nul A = 3, but Nul A is a subspace of R^7.
8. Since A has four pivot columns, rank A = 4, and dim Nul A = 6 - rank A = 6 - 4 = 2. No, Col A is not equal to R^4. It is true that dim Col A = rank A = 4, but Col A is a subspace of R^7.
9. Since dim Nul A = 4, rank A = 6 - dim Nul A = 6 - 4 = 2. So dim Col A = rank A = 2.
10. Since dim Nul A = 5, rank A = 6 - dim Nul A = 6 - 5 = 1. So dim Col A = rank A = 1.
11. Since dim Nul A = 2, rank A = 5 - dim Nul A = 5 - 2 = 3. So dim Row A = dim Col A = rank A = 3.
12. Since dim Nul A = 4, rank A = 6 - dim Nul A = 6 - 4 = 2. So dim Row A = dim Col A = rank A = 2.
13. The rank of a matrix A equals the number of pivot positions which the matrix has. If A is either a 7x5 matrix or a 5x7 matrix, the largest number of pivot positions that A could have is 5. Thus the largest possible value for rank A is 5.
14. The dimension of the row space of a matrix A is equal to rank A, which equals the number of pivot positions which the matrix has. If A is either a 4x3 matrix or a 3x4 matrix, the largest number of pivot positions that A could have is 3. Thus the largest possible value for dim Row A is 3.
15. Since the rank of A equals the number of pivot positions which the matrix has, and A could have at most 6 pivot positions, rank A <= 6. Thus dim Nul A = 8 - rank A >= 2.
16. Since the rank of A equals the number of pivot positions which the matrix has, and A could have at most 4 pivot positions, rank A <= 4. Thus dim Nul A = 4 - rank A >= 0.
17. a. True. The rows of A are identified with the columns of A^T. See the paragraph before Example 1.
b. False. See the warning after Example 2.
c. True. See the Rank Theorem.
d. False. See the Rank Theorem.
e. True. See the Numerical Note before the Practice Problem.
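The fact used in Exercises 5-6 and 17a, that rank A^T = rank A, is easy to spot-check numerically; the matrix here is random, hypothetical data:

A = randn(5, 8);        % hypothetical 5x8 matrix
[rank(A), rank(A')]     % the two ranks agree
% By the Rank Theorem, dim Nul A = 8 - rank A and dim Nul A' = 5 - rank A.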

18. a. False. Review the warning after Theorem 6 in Section 4.3.
b. False. See the warning after Example 2.
c. True. See the remark in the proof of the Rank Theorem.
d. True. This fact was noted in the paragraph before Example 4. It also follows from the fact that the rows of A^T are the columns of (A^T)^T = A.
e. True. See Theorem 13.
19. Yes. Consider the system as Ax = 0, where A is a 5x6 matrix. The problem states that dim Nul A = 1. By the Rank Theorem, rank A = 6 - dim Nul A = 5. Thus dim Col A = rank A = 5, and since Col A is a subspace of R^5, Col A = R^5. So every vector b in R^5 is also in Col A, and Ax = b has a solution for all b.
20. No. Consider the system as Ax = b, where A is a 6x8 matrix. The problem states that dim Nul A = 2. By the Rank Theorem, rank A = 8 - dim Nul A = 6. Thus dim Col A = rank A = 6, and since Col A is a subspace of R^6, Col A = R^6. So every vector b in R^6 is also in Col A, and Ax = b has a solution for all b. Thus it is impossible to change the entries in b to make Ax = b into an inconsistent system.
21. No. Consider the system as Ax = b, where A is a 9x10 matrix. Since the system has a solution for all b in R^9, A must have a pivot in each row, and so rank A = 9. By the Rank Theorem, dim Nul A = 10 - 9 = 1. Thus it is impossible to find two linearly independent vectors in Nul A.
22. No. Consider the system as Ax = 0, where A is a 10x12 matrix. Since A has at most 10 pivot positions, rank A <= 10. By the Rank Theorem, dim Nul A = 12 - rank A >= 2. Thus it is impossible to find a single vector in Nul A which spans Nul A.
23. Yes, six equations are sufficient. Consider the system as Ax = 0, where A is a 12x8 matrix. The problem states that dim Nul A = 2. By the Rank Theorem, rank A = 8 - dim Nul A = 6. Thus dim Col A = rank A = 6. So the system Ax = 0 is equivalent to the system Bx = 0, where B is an echelon form of A with 6 nonzero rows. So the six equations in this system are sufficient to describe the solution set of Ax = 0.
24. Yes, No. Consider the system as Ax = b, where A is a 7x6 matrix. Since A has at most 6 pivot positions, rank A <= 6. By the Rank Theorem, dim Nul A = 6 - rank A >= 0. If dim Nul A = 0, then the system Ax = b will have no free variables. The solution of Ax = b, if it exists, would thus have to be unique. Since rank A <= 6, Col A will be a proper subspace of R^7. Thus there exists a b in R^7 for which the system Ax = b is inconsistent, and the system Ax = b cannot have a unique solution for all b.
25. No. Consider the system as Ax = b, where A is a 10x12 matrix. The problem states that dim Nul A = 3. By the Rank Theorem, dim Col A = rank A = 12 - dim Nul A = 9. Thus Col A will be a proper subspace of R^10. Thus there exists a b in R^10 for which the system Ax = b is inconsistent, and the system Ax = b cannot have a solution for all b.
26. Consider the system Ax = 0, where A is an m x n matrix with m > n. Since the rank of A is the number of pivot positions that A has, and A is assumed to have full rank, rank A = n. By the Rank Theorem, dim Nul A = n - rank A = 0. So Nul A = {0}, and the system Ax = 0 has only the trivial solution. This happens if and only if the columns of A are linearly independent.
27. Since A is an m x n matrix, Row A is a subspace of R^n, Col A is a subspace of R^m, and Nul A is a subspace of R^n. Likewise, since A^T is an n x m matrix, Row A^T is a subspace of R^m, Col A^T is a subspace of R^n, and Nul A^T is a subspace of R^m.

Since Row A = Col A^T and Col A = Row A^T, there are four distinct subspaces in the list: Row A, Col A, Nul A, and Nul A^T.
28. a. Since A is an m x n matrix and dim Row A = rank A,
dim Row A + dim Nul A = rank A + dim Nul A = n.
b. Since A^T is an n x m matrix and dim Col A = dim Row A^T = dim Col A^T = rank A^T,
dim Col A + dim Nul A^T = rank A^T + dim Nul A^T = m.
29. Let A be an m x n matrix. The system Ax = b will have a solution for all b in R^m if and only if A has a pivot position in each row, which happens if and only if dim Col A = m. By Exercise 28b, dim Col A = m if and only if dim Nul A^T = m - m = 0, or Nul A^T = {0}. Finally, Nul A^T = {0} if and only if the equation A^T x = 0 has only the trivial solution.
30. The equation Ax = b is consistent if and only if rank [A b] = rank A, because the two ranks will be equal if and only if b is not a pivot column of [A b]. The result then follows from Theorem 2 in Section 1.2.
31. Compute that uv^T = (2, -3, 5)(a b c) = [2a 2b 2c; -3a -3b -3c; 5a 5b 5c]. Each column of uv^T is a multiple of u, so dim Col uv^T = 1, unless a = b = c = 0, in which case uv^T is the 3x3 zero matrix and dim Col uv^T = 0. In any case, rank uv^T = dim Col uv^T <= 1.
32. Note that the second row of the matrix is twice the first row. Thus if v = (1, -3, 4), which is the first row of the matrix, then
uv^T = [1; 2][1 -3 4] = [1 -3 4; 2 -6 8].
33. Let A = [u1 u2 u3], and assume that rank A = 1. Suppose that u1 is not zero. Then {u1} is a basis for Col A, since Col A is assumed to be one-dimensional. Thus there are scalars x and y with u2 = x u1 and u3 = y u1, and A = u1 v^T, where v^T = [1 x y]. If u1 = 0 but u2 is not zero, then similarly {u2} is a basis for Col A, so there is a scalar x with u3 = x u2, and A = u2 v^T, where v^T = [0 1 x]. If u1 = u2 = 0 but u3 is not zero, then A = u3 v^T, where v^T = [0 0 1].
34. Let A be an m x n matrix of rank r > 0, and let U be an echelon form of A. Since A can be reduced to U by row operations, there exist invertible elementary matrices E1, ..., Ep with (Ep ... E1)A = U.

Thus A = (Ep ... E1)^(-1) U, since the product of invertible matrices is invertible. Let E = (Ep ... E1)^(-1); then A = EU. Let the columns of E be denoted by c1, ..., cm. Since the rank of A is r, U has r nonzero rows, which can be denoted d1^T, ..., dr^T. By the column-row expansion of A (Theorem 10 in Section 2.4):
A = EU = c1 d1^T + ... + cr dr^T,
which is the sum of r rank 1 matrices.
35. [M] a. Begin by reducing A to reduced echelon form. A basis for Col A is given by the pivot columns of A, so the matrix C contains these four columns of A. A basis for Row A is given by the pivot rows of the reduced echelon form of A, so the matrix R contains these four rows. To find a basis for Nul A, note from the reduced echelon form that the solution of Ax = 0 in terms of free variables is x1 = (3/2)x3 - 5x5 + 3x7, x2 = (1/2)x3 - (1/2)x5 - x7, x4 = (1/2)x5 - 7x7, x6 = -2x7, with x3, x5, and x7 free. The three vectors obtained by setting each free variable equal to 1 in turn form the columns of the 7x3 matrix N.

b. The reduced echelon form of A^T shows that the equation A^T x = 0 has the single free variable x5; solving for the basic variables and setting x5 = 1 yields the single column of the 5x1 matrix M. The matrix S = [R^T N] is 7x7 because the columns of R^T and N are in R^7 and dim Row A + dim Nul A = 7. The matrix T = [C M] is 5x5 because the columns of C and M are in R^5 and dim Col A + dim Nul A^T = 5. Both S and T are invertible because their columns are linearly independent. This fact will be proven in general in Theorem 3 of Section 6.1.
36. [M] Answers will vary, but in most cases C will be 6x4 and will be constructed from the first 4 columns of A. In most cases R will be 4x7, N will be 7x3, and M will be 6x2.
37. [M] The C and R from Exercise 35 work here, and A = CR.
38. [M] If A is nonzero, then A = CR. Note that CR = [Cr1 Cr2 ... Crn], where r1, ..., rn are the columns of R. The columns of R are either pivot columns of R or are not pivot columns of R. Consider first the pivot columns of R. The i-th pivot column of R is ei, the i-th column of the identity matrix, so Cei is the i-th pivot column of A. Since A and R have pivot columns in the same locations, when C multiplies a pivot column of R, the result is the corresponding pivot column of A in its proper location. Now suppose rj is a nonpivot column of R. Then rj contains the weights needed to construct the j-th column of A from the pivot columns of A, as is discussed in Example 9 of Section 4.3 and in the paragraph preceding that example. Thus rj contains the weights needed to construct the j-th column of A from the columns of C, and Crj = aj.

4.7 SOLUTIONS
Notes: This section depends heavily on the coordinate systems introduced in Section 4.4. The row reduction algorithm that produces P_{C<-B} can also be deduced from Exercise 12 in Section 2.2, by row reducing [P_C P_B] to [I P_C^(-1)P_B]. The change-of-coordinates matrix here is interpreted in Section 5.4 as the matrix of the identity transformation relative to two bases.
1. a. Since b1 = 6c1 - 2c2 and b2 = 9c1 - 4c2, [b1]_C = (6, -2) and [b2]_C = (9, -4), and P_{C<-B} = [6 9; -2 -4].
b. Since x = -3b1 + 2b2, [x]_B = (-3, 2) and [x]_C = P_{C<-B}[x]_B = (0, -2).
2. a. Since b1 = -c1 + 4c2 and b2 = 5c1 - 3c2, [b1]_C = (-1, 4) and [b2]_C = (5, -3), and P_{C<-B} = [-1 5; 4 -3].
b. Since x = 5b1 + 3b2, [x]_B = (5, 3) and [x]_C = P_{C<-B}[x]_B = (10, 11).
3. Equation (ii) is satisfied by P for all x in V.
4. Equation (i) is satisfied by P for all x in V.
5. a. Since a1 = 4b1 - b2, a2 = -b1 + b2 + b3, and a3 = b2 - 2b3, [a1]_B = (4, -1, 0), [a2]_B = (-1, 1, 1), and [a3]_B = (0, 1, -2), and P_{B<-A} = [4 -1 0; -1 1 1; 0 1 -2].
b. Since x = 3a1 + 4a2 + a3, [x]_A = (3, 4, 1) and [x]_B = P_{B<-A}[x]_A = (8, 2, 2).

6. a. The D-coordinate vectors of f1, f2, and f3 are read directly from the given equations and form the columns of P_{D<-F}.
b. Writing x in terms of f1, f2, and f3 gives [x]_F, and [x]_D = P_{D<-F}[x]_F.
7.-10. In each exercise, to find P_{C<-B} row reduce the matrix [c1 c2 b1 b2] to the form [I P_{C<-B}]; the matrix P_{B<-C} is then (P_{C<-B})^(-1).
11. a. False. See Theorem 15.
b. True. See the first paragraph in the subsection Change of Basis in R^n.
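The row-reduction recipe of Exercises 7-10 is a single rref call. With hypothetical bases B and C of R^2:

c1 = [1; 4];  c2 = [5; -3];   % hypothetical basis C
b1 = [-1; 2]; b2 = [3; 1];    % hypothetical basis B
M = rref([c1 c2 b1 b2]);      % reduces to [ I  P_{C<-B} ]
P = M(:, 3:4)                 % change-of-coordinates matrix from B to C
Pinv = inv(P)                 % P_{B<-C}, the inverse change of coordinates

The same call works in R^n with n basis vectors on each side; the right half of the reduced matrix is always P_{C<-B}.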

12. a. True. The columns of P_{C<-B} are coordinate vectors of the linearly independent set B. See the second paragraph after Theorem 15.
b. False. The row reduction is discussed after Example 2. The matrix P obtained there satisfies [x]_C = P[x]_B.
13. Let B = {b1, b2, b3} = {1 - 2t + t^2, 3 - 5t + 4t^2, 2t + 3t^2} and let C = {c1, c2, c3} = {1, t, t^2}. The C-coordinate vectors of b1, b2, and b3 are
[b1]_C = (1, -2, 1), [b2]_C = (3, -5, 4), [b3]_C = (0, 2, 3).
So P_{C<-B} = [1 3 0; -2 -5 2; 1 4 3]. Let x = -1 + 2t. Then the coordinate vector [x]_B satisfies P_{C<-B}[x]_B = [x]_C = (-1, 2, 0). This system may be solved by row reducing its augmented matrix, which gives [x]_B = (5, -2, 1).
14. Let B = {b1, b2, b3} = {1 - 3t^2, 2 + t - 5t^2, 1 + 2t} and let C = {c1, c2, c3} = {1, t, t^2}. The C-coordinate vectors of b1, b2, and b3 are
[b1]_C = (1, 0, -3), [b2]_C = (2, 1, -5), [b3]_C = (1, 2, 0).
So P_{C<-B} = [1 2 1; 0 1 2; -3 -5 0]. Let x = t^2. Then the coordinate vector [x]_B satisfies P_{C<-B}[x]_B = [x]_C = (0, 0, 1).

This system may be solved by row reducing its augmented matrix, which gives [x]_B = (3, -2, 1); indeed,
t^2 = 3(1 - 3t^2) - 2(2 + t - 5t^2) + (1 + 2t).
15. (a) B is a basis for V. (b) the coordinate mapping is a linear transformation. (c) of the product of a matrix and a vector. (d) the coordinate vector of v relative to B.
16. (a) [b1]_C = Q[b1]_B = Qe1. (b) [bk]_C. (c) [bk]_C = Q[bk]_B = Qek.
17. [M] a. Since we found P in Exercise 34 of Section 4.5, we can calculate P^(-1).
b. Since P is the change-of-coordinates matrix from C to B, P^(-1) will be the change-of-coordinates matrix from B to C. By Theorem 15, the columns of P^(-1) will be the C-coordinate vectors of the basis vectors in B. Thus
cos^2 t = (1/2)(1 + cos 2t)
cos^3 t = (1/4)(3 cos t + cos 3t)
cos^4 t = (1/8)(3 + 4 cos 2t + cos 4t)
cos^5 t = (1/16)(10 cos t + 5 cos 3t + cos 5t)
cos^6 t = (1/32)(10 + 15 cos 2t + 6 cos 4t + cos 6t)

18. [M] The C-coordinate vector of the integrand is (0, 0, 0, 5, -6, 5, -12). Using the result of Exercise 17, the B-coordinate vector of the integrand will be
P^(-1)(0, 0, 0, 5, -6, 5, -12) = (-6, 55/8, -69/8, 45/16, -3, 5/16, -3/8).
Thus the integral may be rewritten as the integral of
-6 + (55/8)cos t - (69/8)cos 2t + (45/16)cos 3t - 3 cos 4t + (5/16)cos 5t - (3/8)cos 6t
with respect to t, which equals
-6t + (55/8)sin t - (69/16)sin 2t + (15/16)sin 3t - (3/4)sin 4t + (1/16)sin 5t - (1/16)sin 6t + C.
19. [M] a. If C is the basis {v1, v2, v3}, then the columns of P are [u1]_C, [u2]_C, and [u3]_C. So [u1 u2 u3] = [v1 v2 v3]P, and multiplying the given matrices produces [u1 u2 u3].
b. Analogously to part a, [v1 v2 v3] = [w1 w2 w3]P, so [w1 w2 w3] = [v1 v2 v3]P^(-1), which may be computed numerically.
20. a. P_{D<-B} = P_{D<-C} P_{C<-B}. Let x be any vector in the two-dimensional vector space. Since P_{C<-B} is the change-of-coordinates matrix from B to C and P_{D<-C} is the change-of-coordinates matrix from C to D,
[x]_C = P_{C<-B}[x]_B and [x]_D = P_{D<-C}[x]_C = P_{D<-C} P_{C<-B}[x]_B.
But since P_{D<-B} is the change-of-coordinates matrix from B to D, [x]_D = P_{D<-B}[x]_B. Thus
P_{D<-B}[x]_B = P_{D<-C} P_{C<-B}[x]_B
for any vector [x]_B in R^2, and P_{D<-B} = P_{D<-C} P_{C<-B}.

b. [M] For example, choose specific bases B, C, and D for R^2, compute the change-of-coordinates matrices P_{C<-B}, P_{D<-C}, and P_{D<-B} by row reduction, and confirm by multiplication that P_{D<-B} = P_{D<-C} P_{C<-B}.

4.8 SOLUTIONS
Notes: This is an important section for engineering students and worth extra class time. To spend only one lecture on this section, you could cover through Example 5, but assign the somewhat lengthy Example 3 for reading. Finding a spanning set for the solution space of a difference equation uses the Basis Theorem (Section 4.5) and Theorem 17 in this section, and demonstrates the power of the theory of Chapter 4 in helping to solve applied problems. This section anticipates Section 5.7 on differential equations. The reduction of an nth-order difference equation to a linear system of first-order difference equations was introduced in Section 1.10 and is revisited in Sections 4.9 and 5.6. Example 3 is the background for Exercise 6 in Section 6.5.
1. Let yk = 2^k. Then
yk+2 + 2yk+1 - 8yk = 2^(k+2) + 2(2^(k+1)) - 8(2^k) = 2^k(4 + 4 - 8) = 2^k(0) = 0 for all k.
Since the difference equation holds for all k, 2^k is a solution. Let yk = (-4)^k. Then
yk+2 + 2yk+1 - 8yk = (-4)^(k+2) + 2(-4)^(k+1) - 8(-4)^k = (-4)^k(16 - 8 - 8) = (-4)^k(0) = 0 for all k.
Since the difference equation holds for all k, (-4)^k is a solution.
2. Let yk = 3^k. Then
yk+2 - 9yk = 3^(k+2) - 9(3^k) = 3^k(9 - 9) = 3^k(0) = 0 for all k.
Since the difference equation holds for all k, 3^k is a solution.

Let yk = (-3)^k. Then
yk+2 - 9yk = (-3)^(k+2) - 9(-3)^k = (-3)^k(9 - 9) = (-3)^k(0) = 0 for all k.
Since the difference equation holds for all k, (-3)^k is a solution.
3. The signals 2^k and (-4)^k are linearly independent because neither is a multiple of the other; that is, there is no scalar c such that 2^k = c(-4)^k for all k. By Theorem 17, the solution set H of the difference equation yk+2 + 2yk+1 - 8yk = 0 is two-dimensional. By the Basis Theorem, the two linearly independent signals 2^k and (-4)^k form a basis for H.
4. The signals 3^k and (-3)^k are linearly independent because neither is a multiple of the other; that is, there is no scalar c such that 3^k = c(-3)^k for all k. By Theorem 17, the solution set H of the difference equation yk+2 - 9yk = 0 is two-dimensional. By the Basis Theorem, the two linearly independent signals 3^k and (-3)^k form a basis for H.
5. Let yk = (-3)^k. Then
yk+2 + 6yk+1 + 9yk = (-3)^(k+2) + 6(-3)^(k+1) + 9(-3)^k = (-3)^k(9 - 18 + 9) = (-3)^k(0) = 0 for all k.
Since the difference equation holds for all k, (-3)^k is in the solution set H. Let yk = k(-3)^k. Then
yk+2 + 6yk+1 + 9yk = (k+2)(-3)^(k+2) + 6(k+1)(-3)^(k+1) + 9k(-3)^k = (-3)^k(9(k+2) - 18(k+1) + 9k) = (-3)^k(9k + 18 - 18k - 18 + 9k) = (-3)^k(0) = 0 for all k.
Since the difference equation holds for all k, k(-3)^k is in the solution set H. The signals (-3)^k and k(-3)^k are linearly independent because neither is a multiple of the other; there is no scalar c such that (-3)^k = ck(-3)^k for all k, and there is no scalar c such that c(-3)^k = k(-3)^k for all k. By Theorem 17, dim H = 2, so the two linearly independent signals (-3)^k and k(-3)^k form a basis for H by the Basis Theorem.
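The substitution pattern of Exercises 1-6 can also be checked numerically by evaluating the left side of the difference equation along the signal. For Exercise 5's equation y_{k+2} + 6y_{k+1} + 9y_k = 0 and the signal k(-3)^k:

k = 0:10;
y = k .* (-3).^k;                               % the signal k(-3)^k
lhs = y(3:end) + 6*y(2:end-1) + 9*y(1:end-2);   % y_{k+2} + 6y_{k+1} + 9y_k
max(abs(lhs))                                   % returns 0

MATLAB vectors are 1-based, so y(3:end) plays the role of y_{k+2}.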

6. Let yk = 5^k cos(kπ/2). Then
yk+2 + 25yk = 5^(k+2) cos((k+2)π/2) + 25(5^k)cos(kπ/2) = 25(5^k)(cos(kπ/2 + π) + cos(kπ/2)) = 25(5^k)(0) = 0 for all k,
since cos(t + π) = -cos t for all t. Since the difference equation holds for all k, 5^k cos(kπ/2) is in the solution set H.
Let yk = 5^k sin(kπ/2). Then
yk+2 + 25yk = 5^(k+2) sin((k+2)π/2) + 25(5^k)sin(kπ/2) = 25(5^k)(sin(kπ/2 + π) + sin(kπ/2)) = 25(5^k)(0) = 0 for all k,
since sin(t + π) = -sin t for all t. Since the difference equation holds for all k, 5^k sin(kπ/2) is in the solution set H.
The signals 5^k cos(kπ/2) and 5^k sin(kπ/2) are linearly independent because neither is a multiple of the other. By Theorem 17, dim H = 2, so the two linearly independent signals 5^k cos(kπ/2) and 5^k sin(kπ/2) form a basis for H by the Basis Theorem.
7. Compute and row reduce the Casorati matrix for the signals 1^k, 2^k, and (-2)^k, setting k = 0 for convenience. This Casorati matrix is row equivalent to the identity matrix, and thus is invertible by the IMT. Hence the set of signals {1^k, 2^k, (-2)^k} is linearly independent in the signal space. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals 1^k, 2^k, (-2)^k form a basis for H by the Basis Theorem.
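The Casorati test in code, using the signals of Exercise 7 (the reconstructed set {1^k, 2^k, (-2)^k} is an assumption) evaluated at k = 0, 1, 2:

k = (0:2)';
C = [1.^k, 2.^k, (-2).^k]   % Casorati matrix at k = 0
rank(C)                      % returns 3, so C is invertible

An invertible Casorati matrix shows the signals are linearly independent, and Theorem 17 plus the Basis Theorem then upgrade them to a basis for the solution space.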

8. Compute and row reduce the Casorati matrix for the signals 2^k, 4^k, and (-5)^k, setting k = 0 for convenience. This Casorati matrix is row equivalent to the identity matrix, and thus is invertible by the IMT. Hence the set of signals {2^k, 4^k, (-5)^k} is linearly independent in the signal space. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals 2^k, 4^k, (-5)^k form a basis for H by the Basis Theorem.
9. Compute and row reduce the Casorati matrix for the signals 1^k, 3^k cos(kπ/2), and 3^k sin(kπ/2), setting k = 0 for convenience. This Casorati matrix is row equivalent to the identity matrix, and thus is invertible by the IMT. Hence the set of signals {1^k, 3^k cos(kπ/2), 3^k sin(kπ/2)} is linearly independent in the signal space. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals form a basis for H by the Basis Theorem.
10. Compute and row reduce the Casorati matrix for the signals (-1)^k, k(-1)^k, and 5^k, setting k = 0 for convenience. This Casorati matrix is row equivalent to the identity matrix, and thus is invertible by the IMT. Hence the set of signals {(-1)^k, k(-1)^k, 5^k} is linearly independent in the signal space. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals (-1)^k, k(-1)^k, and 5^k form a basis for H by the Basis Theorem.
11. The solution set H of this third-order difference equation has dim H = 3 by Theorem 17. The two signals (-1)^k and 3^k cannot possibly span a three-dimensional space, and so cannot be a basis for H.
12. The solution set H of this fourth-order difference equation has dim H = 4 by Theorem 17. The two given signals cannot possibly span a four-dimensional space, and so cannot be a basis for H.
13. The auxiliary equation for this difference equation is r^2 - r + 2/9 = 0. By the quadratic formula (or factoring), r = 2/3 or r = 1/3, so two solutions of the difference equation are (2/3)^k and (1/3)^k. The signals (2/3)^k and (1/3)^k are linearly independent because neither is a multiple of the other.

By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (2/3)^k and (1/3)^k form a basis for the solution space by the Basis Theorem.
14. The auxiliary equation for this difference equation is r^2 - 7r + 12 = 0. By the quadratic formula (or factoring), r = 3 or r = 4, so two solutions of the difference equation are 3^k and 4^k. The signals 3^k and 4^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals 3^k and 4^k form a basis for the solution space by the Basis Theorem.
15. The auxiliary equation for this difference equation is r^2 - 25 = 0. By the quadratic formula (or factoring), r = 5 or r = -5, so two solutions of the difference equation are 5^k and (-5)^k. The signals 5^k and (-5)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals 5^k and (-5)^k form a basis for the solution space by the Basis Theorem.
16. The auxiliary equation for this difference equation is 16r^2 + 8r - 3 = 0. By the quadratic formula (or factoring), r = 1/4 or r = -3/4, so two solutions of the difference equation are (1/4)^k and (-3/4)^k. The signals (1/4)^k and (-3/4)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (1/4)^k and (-3/4)^k form a basis for the solution space by the Basis Theorem.
17. Letting a = .9 and b = 4/9 gives the difference equation Yk+2 - 1.3Yk+1 + .4Yk = 1. First we find a particular solution Yk = T of this equation, where T is a constant. The solution of the equation T - 1.3T + .4T = 1 is T = 10, so Yk = 10 is a particular solution to Yk+2 - 1.3Yk+1 + .4Yk = 1. Next we solve the homogeneous difference equation Yk+2 - 1.3Yk+1 + .4Yk = 0. The auxiliary equation for this difference equation is r^2 - 1.3r + .4 = 0. By the quadratic formula (or factoring), r = .8 or r = .5, so two solutions of the homogeneous difference equation are (.8)^k and (.5)^k. The signals (.8)^k and (.5)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (.8)^k and (.5)^k form a basis for the solution space of the homogeneous difference equation by the Basis Theorem. Translating the solution space of the homogeneous difference equation by the particular solution of the nonhomogeneous difference equation gives the general solution of Yk+2 - 1.3Yk+1 + .4Yk = 1:
Yk = c1(.8)^k + c2(.5)^k + 10.
As k increases, the first two terms in the solution approach 0, so Yk approaches 10.
18. Letting a = .9 and b = .5 gives the difference equation Yk+2 - 1.35Yk+1 + .45Yk = 1. First we find a particular solution Yk = T of this equation, where T is a constant. The solution of the equation T - 1.35T + .45T = 1 is T = 10, so Yk = 10 is a particular solution. Next we solve the homogeneous difference equation Yk+2 - 1.35Yk+1 + .45Yk = 0. The auxiliary equation is r^2 - 1.35r + .45 = 0. By the quadratic formula (or factoring), r = .6 or r = .75, so two solutions of the homogeneous difference equation are (.6)^k and (.75)^k. The signals (.6)^k and (.75)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (.6)^k and (.75)^k form a basis for the solution space of the homogeneous difference equation by the Basis Theorem.

Translating the solution space of the homogeneous difference equation by the particular solution of the nonhomogeneous difference equation gives the general solution of Yk+2 - 1.35Yk+1 + .45Yk = 1:
Yk = c1(.6)^k + c2(.75)^k + 10.
19. The auxiliary equation for this difference equation is r^2 + 4r + 1 = 0. By the quadratic formula, r = -2 + sqrt(3) or r = -2 - sqrt(3), so two solutions of the difference equation are (-2 + sqrt(3))^k and (-2 - sqrt(3))^k. The signals (-2 + sqrt(3))^k and (-2 - sqrt(3))^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (-2 + sqrt(3))^k and (-2 - sqrt(3))^k form a basis for the solution space by the Basis Theorem. Thus a general solution to this difference equation is yk = c1(-2 + sqrt(3))^k + c2(-2 - sqrt(3))^k.
20. Let a = -2 + sqrt(3) and b = -2 - sqrt(3). Using the solution from the previous exercise, we find that y1 = c1 a + c2 b = 5000 and yN = c1 a^N + c2 b^N = 0. This is a system of linear equations with variables c1 and c2 whose augmented matrix may be row reduced (alternatively, Cramer's Rule may be applied) to obtain
c1 = 5000 b^N / (a b^N - a^N b), c2 = -5000 a^N / (a b^N - a^N b).
Thus
yk = c1 a^k + c2 b^k = 5000(a^k b^N - a^N b^k) / (a b^N - a^N b).
21. The smoothed signal zk has the following values:
z1 = 7, z2 = 5, z3 = 4, z4 = 3, z5 = 4, z6 = 5, z7 = 6, z8 = 6, z9 = 7, z10 = 8, z11 = 9, z12 = 8.
(A plot of the original data together with the smoothed data shows the effect of the averaging.)
22. a. The smoothed signal zk has the following values:
z0 = .35y2 + .5y1 + .35y0 = .35(0) + .5(.7) + .35(3) = 1.4,
z1 = .35y3 + .5y2 + .35y1 = .35(-.7) + .5(0) + .35(.7) = 0,
z2 = .35y4 + .5y3 + .35y2 = .35(-3) + .5(-.7) + .35(0) = -1.4,
z3 = .35y5 + .5y4 + .35y3 = .35(-.7) + .5(-3) + .35(-.7) = -2,

23. a. y_{k+1} = 1.01y_k - 450, y_0 = 10,000, for k = 0, 1, 2, ...
b. [M] MATLAB code to create the table:
  pay = 450, y = 10000, m = 0, table = [0; y]
  while y > 450
    y = 1.01*y - pay;
    m = m + 1;
    table = [table [m; y]];
  end
  m, y
Mathematica code to create the table:
  pay = 450; y = 10000; m = 0; balancetable = {{0, y}};
  While[y > 450, {y = 1.01*y - pay; m = m + 1; AppendTo[balancetable, {m, y}]}];
  m
  y
c. [M] At month 26, the last payment is $114.88. The total paid by the borrower is $11,364.88.

24. a. y_{k+1} = 1.005y_k + 200, y_0 = 1,000, for k = 0, 1, 2, ...
b. [M] MATLAB code to create the table:
  pay = 200, y = 1000, table = [0; y]
  for m = 1:60
    y = 1.005*y + pay;
    table = [table [m; y]];
  end
  interest = y - 60*pay - 1000
Mathematica code to create the table:
  pay = 200; y = 1000; amounttable = {{0, y}};
  Do[{y = 1.005*y + pay; AppendTo[amounttable, {m, y}]}, {m, 1, 60}];
  interest = y - 60*pay - 1000
c. [M] The total is $6,213.55 at k = 24, $12,090.06 at k = 48, and $15,302.86 at k = 60. When k = 60, the interest earned is $2,302.86.
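The balances in Exercise 24(c) can be cross-checked against a closed form. A sketch, assuming the recurrence reconstructed above (the fixed point of y = 1.005y + 200 is -40,000; exact cents depend on whether interim balances are rounded monthly):

  k = [24 48 60];
  y = 41000*1.005.^k - 40000    % approximately 6213.5, 12090.1, 15302.9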

25. To show that y_k = k^2 is a solution of y_{k+2} + 3y_{k+1} - 4y_k = 10k + 7, substitute y_k = k^2, y_{k+1} = (k+1)^2, and y_{k+2} = (k+2)^2:
  (k+2)^2 + 3(k+1)^2 - 4k^2 = (k^2 + 4k + 4) + 3(k^2 + 2k + 1) - 4k^2 = 10k + 7 for all k.
The auxiliary equation for the homogeneous equation y_{k+2} + 3y_{k+1} - 4y_k = 0 is r^2 + 3r - 4 = 0. By the quadratic formula (or factoring), r = -4 or r = 1, so two solutions are (-4)^k and 1^k. These signals are linearly independent because neither is a multiple of the other, so by Theorem 17 and the Basis Theorem they form a basis for the solution space of the homogeneous equation. The general solution of the homogeneous equation is thus c1(-4)^k + c2*1^k = c1(-4)^k + c2. Adding the particular solution k^2, the general solution of y_{k+2} + 3y_{k+1} - 4y_k = 10k + 7 is y_k = k^2 + c1(-4)^k + c2.

26. To show that y_k = 1 + k is a solution of y_{k+2} - 8y_{k+1} + 15y_k = 8k + 2, substitute y_k = 1 + k, y_{k+1} = 1 + (k+1) = 2 + k, and y_{k+2} = 1 + (k+2) = 3 + k:
  (3+k) - 8(2+k) + 15(1+k) = 3 + k - 16 - 8k + 15 + 15k = 8k + 2 for all k.
The auxiliary equation for the homogeneous equation y_{k+2} - 8y_{k+1} + 15y_k = 0 is r^2 - 8r + 15 = 0, with roots r = 5 and r = 3. The signals 5^k and 3^k are linearly independent because neither is a multiple of the other, so by Theorem 17 and the Basis Theorem they form a basis for the solution space of the homogeneous equation, whose general solution is c1*5^k + c2*3^k. Adding the particular solution 1 + k, the general solution of y_{k+2} - 8y_{k+1} + 15y_k = 8k + 2 is y_k = 1 + k + c1*5^k + c2*3^k.

27. To show that y_k = 2 - 2k is a solution of y_{k+2} - (9/2)y_{k+1} + 2y_k = 3k + 2, substitute y_k = 2 - 2k, y_{k+1} = 2 - 2(k+1) = -2k, and y_{k+2} = 2 - 2(k+2) = -2 - 2k:
  (-2 - 2k) - (9/2)(-2k) + 2(2 - 2k) = -2 - 2k + 9k + 4 - 4k = 3k + 2 for all k.
The auxiliary equation for the homogeneous equation y_{k+2} - (9/2)y_{k+1} + 2y_k = 0 is r^2 - (9/2)r + 2 = 0, with roots r = 4 and r = 1/2. The signals 4^k and (1/2)^k are linearly independent because neither is a multiple of the other, so by Theorem 17 and the Basis Theorem they form a basis for the solution space of the homogeneous equation, whose general solution is c1*4^k + c2*(1/2)^k. Adding the particular solution 2 - 2k, the general solution of y_{k+2} - (9/2)y_{k+1} + 2y_k = 3k + 2 is y_k = 2 - 2k + c1*4^k + c2*(1/2)^k.
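Each verification in Exercises 25-27 is a polynomial identity, so it can be confirmed symbolically. A sketch assuming MATLAB's Symbolic Math Toolbox, shown for Exercise 25:

  syms k
  y = @(j) j^2;                                       % candidate solution y_k = k^2
  simplify(y(k+2) + 3*y(k+1) - 4*y(k) - (10*k + 7))   % returns 0, confirming the identity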

28. To show that y_k = 2k - 4 is a solution of y_{k+2} + (3/2)y_{k+1} - y_k = 1 + 3k, substitute y_k = 2k - 4, y_{k+1} = 2(k+1) - 4 = 2k - 2, and y_{k+2} = 2(k+2) - 4 = 2k:
  2k + (3/2)(2k - 2) - (2k - 4) = 2k + 3k - 3 - 2k + 4 = 1 + 3k for all k.
The auxiliary equation for the homogeneous equation y_{k+2} + (3/2)y_{k+1} - y_k = 0 is r^2 + (3/2)r - 1 = 0, with roots r = -2 and r = 1/2. The signals (-2)^k and (1/2)^k are linearly independent because neither is a multiple of the other, so by Theorem 17 and the Basis Theorem they form a basis for the solution space of the homogeneous equation. Adding the particular solution 2k - 4, the general solution is y_k = 2k - 4 + c1(-2)^k + c2(1/2)^k.

29. Let x_k = (y_k, y_{k+1}, y_{k+2}). Then
  x_{k+1} = (y_{k+1}, y_{k+2}, y_{k+3}) = A x_k,
where the first two rows of A are [0 1 0] and [0 0 1], and the bottom row of A lists the coefficients with which the given equation expresses y_{k+3} in terms of y_k, y_{k+1}, and y_{k+2}.

30. Likewise, let x_k = (y_k, y_{k+1}, y_{k+2}). Then x_{k+1} = A x_k, where again the top of A shifts the window and the bottom row of A carries the coefficients (here the fractions 1/6 and 3/4 from the given relation) that express y_{k+3} in terms of y_k, y_{k+1}, y_{k+2}.

31. The difference equation is of order 2. Since the equation y_{k+3} - 5y_{k+2} + 6y_{k+1} = 0 holds for all k, it holds if k is replaced by k - 1. Performing this replacement transforms the equation into y_{k+2} - 5y_{k+1} + 6y_k = 0, which is also true for all k. The transformed equation has order 2.

32. The order of the difference equation depends on the values of a1, a2, and a3. If a3 is nonzero, then the order is 3. If a3 = 0 and a2 is nonzero, then the order is 2. If a3 = a2 = 0 and a1 is nonzero, then the order is 1. If a3 = a2 = a1 = 0, then the order is 0, and the equation has only the zero signal for a solution.
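The point of the first-order form in Exercises 29-30 is that iteration becomes a single matrix-vector product. A sketch with a hypothetical companion matrix (the actual bottom row comes from the equation in the exercise):

  A = [0 1 0; 0 0 1; -6 5 2];   % hypothetical: encodes y(k+3) = 2y(k+2) + 5y(k+1) - 6y(k)
  x = [1; 0; 0];                % x_k holds (y_k, y_{k+1}, y_{k+2})
  for k = 1:5
      x = A*x;                  % advance the window one step
  end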

33. The Casorati matrix C(k) is

  C(k) = [y_k      z_k     ]  =  [k^2      2k|k|        ]
         [y_{k+1}  z_{k+1} ]     [(k+1)^2  2(k+1)|k+1|  ]

In particular,
  C(0) = [0 0; 1 2], C(-1) = [1 -2; 0 0], and C(1) = [1 2; 4 8],
none of which are invertible. In fact, C(k) is not invertible for all k, since
  det C(k) = k^2 * 2(k+1)|k+1| - (k+1)^2 * 2k|k| = 2k(k+1)( k|k+1| - (k+1)|k| ).
If k = 0 or k = -1, det C(k) = 0. If k > 0, then k+1 > 0 and k|k+1| - (k+1)|k| = k(k+1) - (k+1)k = 0, so det C(k) = 0. If k < 0, then k+1 <= 0 and k|k+1| - (k+1)|k| = -k(k+1) + (k+1)k = 0, so det C(k) = 0. Thus det C(k) = 0 for all k, and C(k) is not invertible for all k. Since C(k) is not invertible for all k, it provides no information about whether the signals {y_k} and {z_k} are linearly dependent or linearly independent. In fact, neither signal is a multiple of the other, so the signals {y_k} and {z_k} are linearly independent.

34. No, the signals could be linearly dependent, since the vector space V of functions considered on the entire real line is not the vector space S of signals. For example, consider the functions f(t) = sin(pi t), g(t) = sin(2 pi t), and h(t) = sin(3 pi t). The functions f, g, and h are linearly independent in V since they have different periods and thus no function could be a linear combination of the other two. However, sampling the functions at any integer n gives f(n) = g(n) = h(n) = 0, so the signals are linearly dependent in S.

35. Let {y_k} and {z_k} be in S, and let r be any scalar. The kth term of {y_k} + {z_k} is y_k + z_k, while the kth term of r{y_k} is r*y_k. Thus
  T({y_k} + {z_k}) = T{y_k + z_k} = (y_{k+2} + z_{k+2}) + a(y_{k+1} + z_{k+1}) + b(y_k + z_k)
                   = (y_{k+2} + a*y_{k+1} + b*y_k) + (z_{k+2} + a*z_{k+1} + b*z_k) = T{y_k} + T{z_k}, and
  T(r{y_k}) = T{r*y_k} = r*y_{k+2} + a(r*y_{k+1}) + b(r*y_k) = r(y_{k+2} + a*y_{k+1} + b*y_k) = r*T{y_k},
so T has the two properties that define a linear transformation.

36. Let z be in V, and suppose that x_p in V satisfies T(x_p) = z. Let u be in the kernel of T; then T(u) = 0. Since T is a linear transformation, T(u + x_p) = T(u) + T(x_p) = 0 + z = z, so the vector x = u + x_p satisfies the nonhomogeneous equation T(x) = z.

37. We compute that
  (TD)(y_0, y_1, y_2, ...) = T(D(y_0, y_1, y_2, ...)) = T(0, y_0, y_1, y_2, ...) = (y_0, y_1, y_2, ...),
while
  (DT)(y_0, y_1, y_2, ...) = D(T(y_0, y_1, y_2, ...)) = D(y_1, y_2, y_3, ...) = (0, y_1, y_2, y_3, ...).
Thus TD = I (the identity transformation on S), while DT is not I.
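Exercise 33's warning is easy to see numerically: the Casorati determinants of these two signals vanish at every k even though the signals are linearly independent. A sketch:

  y = @(k) k.^2;  z = @(k) 2*k.*abs(k);
  d = zeros(1,7);
  for k = -3:3
      C = [y(k) z(k); y(k+1) z(k+1)];   % Casorati matrix at k
      d(k+4) = det(C);
  end
  d                                     % all zeros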

4.9 SOLUTIONS

Notes: This section builds on the population movement example in Section 1.10. The migration matrix is examined again in Section 5.2, where an eigenvector decomposition shows explicitly why the sequence of state vectors x_k tends to a steady-state vector. The discussion in Section 5.2 does not depend on prior knowledge of this section.

1. a. Let N stand for News and M for Music. Then the listeners' behavior is given by the table

  From:  N    M     To:
         .7   .6    N
         .3   .4    M

so the stochastic matrix is P = [.7 .6; .3 .4].
b. Since 100% of the listeners are listening to news at 8:15, the initial state vector is x_0 = [1; 0].
c. There are two breaks between 8:15 and 9:25, so we calculate x_2:
  x_1 = P x_0 = [.7 .6; .3 .4][1; 0] = [.7; .3]
  x_2 = P x_1 = [.7 .6; .3 .4][.7; .3] = [.67; .33]
Thus 67% of the listeners are listening to news (and 33% to music) at 9:25.

2. a. Let the foods be labelled 1, 2, and 3. Then the animal's behavior is given by the table

  From:  1     2     3     To:
         .5    .25   .25   1
         .25   .5    .25   2
         .25   .25   .5    3

so the stochastic matrix is P = [.5 .25 .25; .25 .5 .25; .25 .25 .5].
b. There are two trials after the initial trial, so we calculate x_2. The initial state vector is x_0 = [1; 0; 0].
  x_1 = P x_0 = [.5; .25; .25]
  x_2 = P x_1 = [.375; .3125; .3125]
Thus the probability that the animal will choose food #2 is .3125.
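A sketch of the two-step computation in Exercise 1 (matrix and initial state as reconstructed above):

  P = [.7 .6; .3 .4];   % column-stochastic: each column sums to 1
  x = [1; 0];           % 100% news at 8:15
  x = P*x;              % state after the first break
  x = P*x               % after the second break: (.67, .33)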

3. a. Let H stand for Healthy and I stand for Ill. Then the students' conditions are given by the table

  From:  H     I     To:
         .95   .45   H
         .05   .55   I

so the stochastic matrix is P = [.95 .45; .05 .55].
b. Since 20% of the students are ill on Monday, the initial state vector is x_0 = [.8; .2]. For Tuesday's percentages, we calculate x_1; for Wednesday's percentages, we calculate x_2:
  x_1 = P x_0 = [.85; .15], x_2 = P x_1 = [.875; .125]
Thus 15% of the students are ill on Tuesday, and 12.5% are ill on Wednesday.
c. Since the student is well today, the initial state vector is x_0 = [1; 0]. We calculate x_2:
  x_1 = P x_0 = [.95; .05], x_2 = P x_1 = [.925; .075]
Thus the probability that the student is well two days from now is .925.

4. a. Let G stand for good weather, I for indifferent weather, and B for bad weather. Then the change in the weather is given by the table

  From:  G    I    B    To:
         .6   .4   .4   G
         .3   .3   .5   I
         .1   .3   .1   B

so the stochastic matrix is P = [.6 .4 .4; .3 .3 .5; .1 .3 .1].
b. Since the weather today is good, the initial state vector is x_0 = [1; 0; 0]. We calculate x_1:
  x_1 = P x_0 = [.6; .3; .1]
Thus the chance of bad weather tomorrow is 10%.
c. The initial state vector is x_0 = [0; .4; .6]. We calculate x_2:
  x_1 = P x_0 = [.40; .42; .18], x_2 = P x_1 = [.48; .336; .184]
Thus the chance of good weather on Wednesday is 48%.

5. We solve Px = x by rewriting the equation as (P - I)x = 0, where P - I = [-.9 .6; .9 -.6]. Row reducing the augmented matrix for the homogeneous system (P - I)x = 0 gives
  [-.9  .6  0]   ~   [1  -2/3  0]
  [ .9 -.6  0]       [0   0    0]
Thus x1 = (2/3)x2 with x2 free, and one solution is (2, 3). Since the entries in (2, 3) sum to 5, multiply by 1/5 to obtain the steady-state vector q = [2/5; 3/5] = [.4; .6].

6. We solve Px = x by rewriting the equation as (P - I)x = 0, where P - I = [-.2 .5; .2 -.5]. Row reducing the augmented matrix gives
  [-.2  .5  0]   ~   [1  -5/2  0]
  [ .2 -.5  0]       [0   0    0]
Thus x1 = (5/2)x2 with x2 free, and one solution is (5, 2). Since the entries in (5, 2) sum to 7, multiply by 1/7 to obtain the steady-state vector q = [5/7; 2/7], approximately [.714; .286].

7. We solve Px = x by rewriting the equation as (P - I)x = 0, where P - I = [-.3 .1 .1; .2 -.2 .2; .1 .1 -.3]. Row reducing the augmented matrix for the homogeneous system gives
  [1  0  -1  0]
  [0  1  -2  0]
  [0  0   0  0]
Thus x1 = x3 and x2 = 2x3 with x3 free, and one solution is (1, 2, 1). Since the entries in (1, 2, 1) sum to 4, multiply by 1/4 to obtain the steady-state vector q = [1/4; 1/2; 1/4] = [.25; .5; .25].
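Steady-state vectors such as the one in Exercise 5 can be read off from the null space of P - I. A sketch (P reconstructed from the P - I shown above):

  P = [.1 .6; .9 .4];
  v = null(P - eye(2), 'r');   % rational basis for Nul(P - I), a multiple of (2, 3)
  q = v / sum(v)               % scaled so the entries sum to 1: (.4, .6)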

8. We solve Px = x by rewriting the equation as (P - I)x = 0. Row reducing the augmented matrix for the homogeneous system gives
  [1  0  -1    0]
  [0  1  -1/2  0]
  [0  0   0    0]
Thus x1 = x3 and x2 = (1/2)x3 with x3 free, and one solution is (2, 1, 2). Since the entries in (2, 1, 2) sum to 5, multiply by 1/5 to obtain the steady-state vector q = [2/5; 1/5; 2/5] = [.4; .2; .4].

9. Since P^2 = [.84 .8; .16 .2] has all positive entries, P is a regular stochastic matrix.

10. Since P^k will have a zero as its (2,1) entry for all k, no power of P has all positive entries, so P is not a regular stochastic matrix.

11. From Exercise 1, P = [.7 .6; .3 .4], so P - I = [-.3 .6; .3 -.6]. Solving (P - I)x = 0 by row reducing the augmented matrix gives
  [1  -2  0]
  [0   0  0]
Thus x1 = 2x2 with x2 free, and one solution is (2, 1). Since the entries in (2, 1) sum to 3, multiply by 1/3 to obtain the steady-state vector q = [2/3; 1/3], approximately [.667; .333].

12. From Exercise 2, P = [.5 .25 .25; .25 .5 .25; .25 .25 .5]. Solving (P - I)x = 0 by row reducing the augmented matrix gives x1 = x3 and x2 = x3 with x3 free, and one solution is (1, 1, 1). Since the entries sum to 3, multiply by 1/3 to obtain the steady-state vector q = [1/3; 1/3; 1/3], approximately [.333; .333; .333]. Thus in the long run each food will be preferred equally.

13. a. From Exercise 3, P = [.95 .45; .05 .55], so P - I = [-.05 .45; .05 -.45]. Solving (P - I)x = 0 by row reducing the augmented matrix gives
  [1  -9  0]
  [0   0  0]
Thus x1 = 9x2 with x2 free, and one solution is (9, 1). Since the entries in (9, 1) sum to 10, multiply by 1/10 to obtain the steady-state vector q = [9/10; 1/10] = [.9; .1].
b. After many days, a specific student is ill with probability .1, and it does not matter whether that student is ill today or not.

14. From Exercise 4, P = [.6 .4 .4; .3 .3 .5; .1 .3 .1]. Solving (P - I)x = 0 by row reducing the augmented matrix gives x1 = 3x3 and x2 = 2x3 with x3 free, and one solution is (3, 2, 1). Since the entries in (3, 2, 1) sum to 6, multiply by 1/6 to obtain the steady-state vector q = [1/2; 1/3; 1/6], approximately [.5; .333; .167]. Thus in the long run the chance that a day has good weather is 50%.

15. [M] Let P be the given migration matrix, so that P - I = [-.0179 .0029; .0179 -.0029]. Solving (P - I)x = 0 by row reducing the augmented matrix gives x1 = (.0029/.0179)x2, that is, x1 = .162x2 (to three decimal places), and one solution is (.162, 1). Since the entries sum to 1.162, multiply by 1/1.162 to obtain the steady-state vector q, approximately [.139; .861]. Thus about 13.9% of the total U.S. population would eventually live in California.

16. [M] Let P be the given stochastic matrix. Solving (P - I)x = 0 by row reducing the augmented matrix expresses x1 and x2 in terms of the free variable x3 (x2 = .999x3 to three decimal places); scaling the resulting solution so that its entries sum to 1 gives the steady-state vector q. Multiplying the entry of q for the downtown location by the total number of cars in the fleet gives the number of cars that, on a typical day, will be rented or available from the downtown location.

17. a. The entries in each column of P sum to 1. Each column in the matrix P - I has the same entries as the corresponding column of P except that one of the entries is decreased by 1. Thus the entries in each column of P - I sum to 0, and adding all of the other rows of P - I to its bottom row produces a row of zeros.
b. By part a, the bottom row of P - I is the negative of the sum of the other rows, so the rows of P - I are linearly dependent.
c. By part b and the Spanning Set Theorem, the bottom row of P - I can be removed and the remaining (n - 1) rows will still span the row space of P - I. Thus the dimension of the row space of P - I is less than n. Alternatively, let A be the matrix obtained from P - I by adding to the bottom row all the other rows. These row operations did not change the row space, so the row space of P - I is spanned by the nonzero rows of A. By part a, the bottom row of A is a zero row, so the row space of P - I is spanned by the first (n - 1) rows of A.
d. By part c, the rank of P - I is less than n, so the Rank Theorem may be used to show that dim Nul(P - I) = n - rank(P - I) > 0. Alternatively, the Invertible Matrix Theorem may be used, since P - I is a square matrix.

18. If alpha = beta = 0, then P = [1 0; 0 1]. Notice that Px = x for any vector x in R^2, and that [1; 0] and [0; 1] are two linearly independent steady-state vectors in this case.
If alpha is nonzero or beta is nonzero, we solve (P - I)x = 0, where P - I = [-alpha beta; alpha -beta]. Row reducing the augmented matrix gives
  [-alpha   beta  0]   ~   [alpha  -beta  0]
  [ alpha  -beta  0]       [0       0     0]
So alpha*x1 = beta*x2, and one possible solution is to let x1 = beta, x2 = alpha. Since the entries in (beta, alpha) sum to alpha + beta, multiply by 1/(alpha + beta) to obtain the steady-state vector
  q = (1/(alpha + beta)) [beta; alpha].

19. a. The product Sx equals the sum of the entries in x. Thus x is a probability vector if and only if its entries are nonnegative and Sx = 1.
b. Let P = [p1 p2 ... pn], where p1, ..., pn are probability vectors. By part a,
  SP = [Sp1 Sp2 ... Spn] = [1 1 ... 1] = S.
c. By part b, S(Px) = (SP)x = Sx = 1. The entries in Px are nonnegative since P and x have only nonnegative entries. By part a, the condition S(Px) = 1 shows that Px is a probability vector.

20. Let P = [p1 p2 ... pn], so P^2 = PP = [Pp1 Pp2 ... Ppn]. By Exercise 19c, the columns of P^2 are probability vectors, so P^2 is a stochastic matrix. Alternatively, SP = S by Exercise 19b, since P is a stochastic matrix. Right multiplication by P gives SP^2 = SP, so SP = S implies that SP^2 = S. Since the entries in P^2 are nonnegative, P^2 is a stochastic matrix.
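The convergence examined in Exercise 21 (next) is visible after a handful of matrix powers. A sketch using the regular stochastic matrix from Exercise 1:

  P = [.7 .6; .3 .4];
  P^20       % both columns are essentially (2/3, 1/3), the steady-state vector q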

21. [M] a. To four decimal places, the computed powers of P show the columns of P^k converging to a common vector as k increases. The steady-state vector q for P is precisely the vector to which the columns of P^k are converging.
b. To four decimal places, the columns of the computed powers Q^k likewise converge to a common vector, and that vector is the steady-state vector q for Q. Conjecture: the columns of P^k, where P is a regular stochastic matrix, converge to the steady-state vector for P as k increases.
c. Let P be an n-by-n regular stochastic matrix, q the steady-state vector of P, and e_j the jth column of the n-by-n identity matrix. Consider the Markov chain {x_k} where x_{k+1} = P x_k and x_0 = e_j. By Theorem 18, x_k = P^k x_0 converges to q as k increases. But P^k x_0 = P^k e_j, which is the jth column of P^k. Thus the jth column of P^k converges to q as k increases; that is, P^k converges to [q q ... q].

22. [M] Answers will vary. MATLAB Student Version 4.0 code for Method (1):
  A = randstoc(32); flops(0);
  tic, x = nulbasis(A - eye(32)); q = x/sum(x); toc, flops
MATLAB Student Version 4.0 code for Method (2):
  A = randstoc(32); flops(0);
  tic, B = A^100; q = B(:,1); toc, flops

Chapter 4 SUPPLEMENTARY EXERCISES

1. a. True. This set is Span{v1, ..., vp}, and every subspace is itself a vector space.
b. True. Any linear combination of v1, ..., v_{p-1} is also a linear combination of v1, ..., v_{p-1}, vp, using the zero weight on vp.
c. False. Counterexample: take vp = 2v1. Then {v1, ..., vp} is linearly dependent.
d. False. Counterexample: let {e1, e2, e3} be the standard basis for R^3. Then {e1, e2} is a linearly independent set but is not a basis for R^3.
e. True. See the Spanning Set Theorem (Section 4.3).
f. True. By the Basis Theorem, S is a basis for V because S spans V and has exactly p elements. So S must be linearly independent.
g. False. The plane must pass through the origin to be a subspace.
h. False. Counterexample: row operations can change the column space, so a matrix such as [1 1; 2 2], whose echelon form is [1 1; 0 0], serves as a counterexample.
i. True. This statement appears before Theorem 13 in Section 4.6.
j. False. Row operations on A do not change the solutions of Ax = 0.
k. False. Counterexample: A = [1 3; 2 6]; A has two nonzero rows, but the rank of A is 1.
l. False. If U has k nonzero rows, then rank A = k and dim Nul A = n - k by the Rank Theorem.
m. True. Row equivalent matrices have the same number of pivot columns.
n. False. The nonzero rows of A span Row A, but they may not be linearly independent.
o. True. The nonzero rows of the reduced echelon form E form a basis for the row space of each matrix that is row equivalent to E.
p. True. If H is the zero subspace, let A be the 3-by-3 zero matrix. If dim H = 1, let {v} be a basis for H and set A = [v v v]. If dim H = 2, let {u, v} be a basis for H and set A = [u v v], for example. If dim H = 3, then H = R^3, so A can be any 3-by-3 invertible matrix. Or, let {u, v, w} be a basis for H and set A = [u v w].
q. False. The transformation x -> Ax is one-to-one if and only if rank A = n (the number of columns in A); any A with rank less than n is a counterexample.
r. True. If x -> Ax is onto R^m, then Col A = R^m and rank A = m. See Theorem 12(a) in Section 1.9.
s. True. See the second paragraph after Theorem 15 in Section 4.7.
t. False. The jth column of the change-of-coordinates matrix P_{C<-B} is [b_j]_C.

2. The set is Span S, where S consists of the three given vectors. Note that S is a linearly dependent set, but each pair of vectors in S forms a linearly independent set. Thus any two of the three given vectors will be a basis for Span S.

3. The vector b will be in W = Span{u1, u2} if and only if there exist constants c1 and c2 with c1*u1 + c2*u2 = b. Row reducing the augmented matrix [u1 u2 b] produces a zero row whose augmented entry is b1 + b2 + b3, so W = Span{u1, u2} is the set of all (b1, b2, b3) satisfying b1 + b2 + b3 = 0.

4. The vector g is not a scalar multiple of the vector f, and f is not a scalar multiple of g, so the set {f, g} is linearly independent. Even though the number g(t) is a scalar multiple of f(t) for each t, the scalar depends on t.

5. The vector p1 is not zero, and p2 is not a multiple of p1. However, p3 is a linear combination of p1 and p2, so p3 is discarded. The vector p4 cannot be a linear combination of p1 and p2, since p4 involves t^2 but p1 and p2 do not. The vector p5 is (3/2)p1 - (1/2)p2 + p4 (which may not be so easy to see at first). Thus p5 is a linear combination of p1, p2, and p4, so p5 is discarded. The resulting basis is {p1, p2, p4}.

6. Find two polynomials from the set {p1, ..., p4} that are not multiples of one another. This is easy, because one compares only two polynomials at a time. Since these two polynomials form a linearly independent set in a two-dimensional space, they form a basis for H by the Basis Theorem.

7. You would have to know that the solution set of the homogeneous system is spanned by two solutions. In this case, the null space of the 18-by-20 coefficient matrix A is at most two-dimensional. By the Rank Theorem, dim Col A = 20 - dim Nul A >= 20 - 2 = 18. Since Col A is a subspace of R^18, Col A = R^18. Thus Ax = b has a solution for every b in R^18.

8. If n = 0, then H and V are both the zero subspace, and H = V. If n > 0, then a basis for H consists of n linearly independent vectors u1, ..., un. These vectors are also linearly independent as elements of V. But since dim V = n, any set of n linearly independent vectors in V must be a basis for V by the Basis Theorem. So u1, ..., un span V, and H = Span{u1, ..., un} = V.
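Membership questions like the one in Exercise 3 reduce to a single row reduction. A sketch with hypothetical vectors (the actual u1, u2, b come from the exercise):

  u1 = [1; 4; -2];  u2 = [0; 1; 1];  b = [1; 5; -1];   % hypothetical data
  rref([u1 u2 b])   % a pivot in the last column would signal that b is NOT in Span{u1, u2}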

9. Let T: R^n -> R^m be a linear transformation, and let A be the m-by-n standard matrix of T.
a. If T is one-to-one, then the columns of A are linearly independent by Theorem 12 in Section 1.9, so dim Nul A = 0. By the Rank Theorem, dim Col A = n - 0 = n, which is the number of columns of A. As noted in Section 4.2, the range of T is Col A, so the dimension of the range of T is n.
b. If T maps R^n onto R^m, then the columns of A span R^m by Theorem 12 in Section 1.9, so dim Col A = m. By the Rank Theorem, dim Nul A = n - m. As noted in Section 4.2, the kernel of T is Nul A, so the dimension of the kernel of T is n - m. Note that n - m must be nonnegative in this case: since A must have a pivot in each row, n >= m.

10. Let S = {v1, ..., vp}. If S were linearly independent and not a basis for V, then S would not span V. In this case, there would be a vector v_{p+1} in V that is not in Span{v1, ..., vp}. Let S' = {v1, ..., vp, v_{p+1}}. Then S' is linearly independent, since none of the vectors in S' is a linear combination of vectors that precede it. Since S' has more elements than S, this would contradict the maximality of S. Hence S must be a basis for V.

11. If S is a finite spanning set for V, then a subset S' of S is a basis for V. Since S' is a basis for V, S' must span V. Since S is a minimal spanning set, S' cannot be a proper subset of S. Thus S' = S, and S is a basis for V.

12. a. Let y be in Col AB. Then y = ABx for some x. But ABx = A(Bx), so y = A(Bx), and y is in Col A. Thus Col AB is a subspace of Col A, so rank AB = dim Col AB <= dim Col A = rank A by Theorem 11 in Section 4.5.
b. By the Rank Theorem and part a: rank AB = rank (AB)^T = rank B^T A^T <= rank B^T = rank B.

13. By Exercise 12, rank PA <= rank A, and rank A = rank(P^{-1}PA) <= rank PA, so rank PA = rank A.

14. Note that (AQ)^T = Q^T A^T. Since Q^T is invertible, we can use Exercise 13 to conclude that rank(AQ)^T = rank Q^T A^T = rank A^T. Since the ranks of a matrix and its transpose are equal (by the Rank Theorem), rank AQ = rank A.

15. The equation AB = O shows that each column of B is in Nul A. Since Nul A is a subspace of R^n, all linear combinations of the columns of B are in Nul A. That is, Col B is a subspace of Nul A. By Theorem 11 in Section 4.5, rank B = dim Col B <= dim Nul A. By this inequality and the Rank Theorem applied to A, n = rank A + dim Nul A >= rank A + rank B.

16. Suppose that rank A = r1 and rank B = r2. Then there are rank factorizations A = C1 R1 and B = C2 R2, where C1 is m-by-r1 with rank r1, C2 is m-by-r2 with rank r2, R1 is r1-by-n with rank r1, and R2 is r2-by-n with rank r2. Create an m-by-(r1+r2) matrix C = [C1 C2] and an (r1+r2)-by-n matrix R by stacking R1 over R2. Then
  A + B = C1 R1 + C2 R2 = [C1 C2][R1; R2] = CR.
Since the matrix CR is a product, its rank cannot exceed the rank of either of its factors, by Exercise 12. Since C has r1 + r2 columns, the rank of C cannot exceed r1 + r2. Likewise R has r1 + r2 rows, so the rank of R cannot exceed r1 + r2. Thus the rank of A + B cannot exceed r1 + r2 = rank A + rank B, or rank(A + B) <= rank A + rank B.
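The rank inequalities of Exercises 12 and 16 are easy to illustrate numerically (the random sizes here are arbitrary choices):

  A = randn(5,7);  B = randn(7,4);  C = randn(5,7);
  rank(A*B) <= min(rank(A), rank(B))   % returns logical 1 (true)
  rank(A + C) <= rank(A) + rank(C)     % returns logical 1 (true)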

17. Let A be an m-by-n matrix with rank r.
(a) Let A1 consist of the r pivot columns of A. The columns of A1 are linearly independent, so A1 is an m-by-r matrix with rank r.
(b) By the Rank Theorem applied to A1, the dimension of Row A1 is r, so A1 has r linearly independent rows. Let A2 consist of the r linearly independent rows of A1. Then A2 is an r-by-r matrix with linearly independent rows. By the Invertible Matrix Theorem, A2 is invertible.

18. Let A be a 4-by-4 matrix and B be a 4-by-2 matrix, and let u0, ..., u3 be a sequence of input vectors in R^2.
a. Use the equation x_{k+1} = A x_k + B u_k for k = 0, ..., 3, with x0 = 0:
  x1 = A x0 + B u0 = B u0
  x2 = A x1 + B u1 = AB u0 + B u1
  x3 = A x2 + B u2 = A^2 B u0 + AB u1 + B u2
  x4 = A x3 + B u3 = A^3 B u0 + A^2 B u1 + AB u2 + B u3
     = [B AB A^2B A^3B][u3; u2; u1; u0] = M u
Note that M has 4 rows because B does, and that M has 8 columns because B and each of the matrices A^k B have 2 columns. The vector u in the final equation is in R^8, because each u_k is in R^2.
b. If (A, B) is controllable, then the controllability matrix has rank 4, with a pivot in each row, and the columns of M span R^4. Therefore, for any vector v in R^4, there is a vector u in R^8 such that v = Mu. However, from part a we know that x4 = Mu when u is partitioned into a control sequence u0, ..., u3. This particular control sequence makes x4 = v.

19. To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A^2B]. Row reducing this matrix produces a pivot in each row, so its rank is 3, and the pair (A, B) is controllable.

20. To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A^2B]. Here no row reduction is needed: inspection shows that the columns of [B AB A^2B] are linearly dependent, so the rank of the matrix must be less than 3, and the pair (A, B) is not controllable.
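The controllability test in Exercises 19-22 is one rank computation. A sketch with a hypothetical pair (A, B):

  A = [0 1 0; 0 0 1; -.5 .7 1.1];  B = [1; 0; 0];   % hypothetical data
  M = [B, A*B, A^2*B];    % controllability matrix for n = 3
  rank(M)                 % rank 3 would mean (A, B) is controllable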

21. [M] To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A^2B A^3B]. Row reducing this matrix shows that its rank is 3, which is less than 4, so the pair (A, B) is not controllable.

22. [M] To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A^2B A^3B]. Row reducing this matrix shows that its rank is 4, so the pair (A, B) is controllable.


5.1 SOLUTIONS

Notes: Exercises 1-6 reinforce the definitions of eigenvalues and eigenvectors. The subsection on eigenvectors and difference equations, along with Exercises 33 and 34, refers to the chapter introductory example and anticipates discussions of dynamical systems in Sections 5.2 and 5.6.

1. The number 2 is an eigenvalue of A if and only if the equation Ax = 2x has a nontrivial solution. This equation is equivalent to (A - 2I)x = 0. Compute
  A - 2I = [3 2; 3 8] - [2 0; 0 2] = [1 2; 3 6]
The columns of A - 2I are obviously linearly dependent, so (A - 2I)x = 0 has a nontrivial solution, and so 2 is an eigenvalue of A.

2. The number -2 is an eigenvalue of A if and only if the equation Ax = -2x has a nontrivial solution. This equation is equivalent to (A + 2I)x = 0. Compute A + 2I: its columns are obviously linearly dependent, so (A + 2I)x = 0 has a nontrivial solution, and so -2 is an eigenvalue of A.

3. Is Ax a multiple of x? Computing Ax and comparing entries shows that Ax is not a scalar multiple of x, so x is not an eigenvector of A.

4. Is Ax a multiple of x? Compute Ax. The second entries of x and Ax show that if Ax is a multiple of x, then that multiple must be 3 + sqrt(2). Check 3 + sqrt(2) times the first entry of x: the product (3 + sqrt(2))(1 + sqrt(2)) matches the first entry of Ax, so x is an eigenvector of A, and the corresponding eigenvalue is 3 + sqrt(2).
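A numerical version of the eigenvector test in Exercises 1-6, using the matrix reconstructed in Exercise 1 and a vector chosen for illustration:

  A = [3 2; 3 8];  x = [1; 3];
  (A*x) ./ x     % both ratios equal 9, so x is an eigenvector of A for lambda = 9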

5. Is Ax a multiple of x? Computing Ax shows that Ax is a scalar multiple of x, so x is an eigenvector of A for the indicated eigenvalue.

6. Is Ax a multiple of x? Computing Ax shows that Ax is a scalar multiple of x, so x is an eigenvector of A for the indicated eigenvalue.

7. To determine if 4 is an eigenvalue of A, decide if the matrix A - 4I is invertible. Compute A - 4I. Invertibility can be checked in several ways, but since an eigenvector is needed in the event that one exists, the best strategy is to row reduce the augmented matrix for (A - 4I)x = 0. The equation (A - 4I)x = 0 has a nontrivial solution, so 4 is an eigenvalue. Any nonzero solution of (A - 4I)x = 0 is a corresponding eigenvector. The entries in a solution satisfy x1 + x3 = 0 and x2 - x3 = 0, with x3 free. The general solution is not requested, so to save time, simply take any nonzero value for x3 to produce an eigenvector. If x3 = 1, then x = (-1, 1, 1).
Note: The answer in the text is the negative of this vector, written in that form to make the students wonder whether the more common answer given above is also correct. This may initiate a class discussion of what answers are "correct."

8. To determine if 3 is an eigenvalue of A, decide if the matrix A - 3I is invertible. Row reducing the augmented matrix [(A - 3I) 0] shows that (A - 3I)x = 0 has a nontrivial solution, so 3 is an eigenvalue. Any nonzero solution is a corresponding eigenvector. The entries in a solution satisfy x1 = 3x3 and x2 = x3, with x3 free. The general solution is not requested, so to save time, simply take any nonzero value for x3 to produce an eigenvector. If x3 = 1, then x = (3, 1, 1).
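Exercises 7-8 amount to testing whether A - lambda*I is singular. A sketch (the matrix here is a hypothetical stand-in):

  A = magic(3);  lambda = 4;                % hypothetical data
  det(A - lambda*eye(3))    % zero exactly when lambda is an eigenvalue of A
  null(A - lambda*eye(3))   % any columns returned are corresponding eigenvectors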

9. For lambda = 1: A - I = [4 0; 2 0]. The augmented matrix for (A - I)x = 0 is [4 0 0; 2 0 0]. Thus x1 = 0 and x2 is free. The general solution of (A - I)x = 0 is x2*e2, where e2 = [0; 1], and so e2 is a basis for the eigenspace corresponding to the eigenvalue 1.
For lambda = 5: A - 5I = [0 0; 2 -4]. The equation (A - 5I)x = 0 leads to 2x1 = 4x2, so that x1 = 2x2 and x2 is free. The general solution is x2[2; 1], so [2; 1] is a basis for the eigenspace.

10. For lambda = 4: A - 4I = [6 -9; 4 -6]. The augmented matrix for (A - 4I)x = 0 row reduces to [1 -3/2 0; 0 0 0]. Thus x1 = (3/2)x2 and x2 is free. The general solution is x2[3/2; 1]. A basis for the eigenspace corresponding to 4 is [3/2; 1]; another choice is [3; 2].

11. For lambda = 10: A - 10I = [-6 -2; -3 -1]. The augmented matrix for (A - 10I)x = 0 row reduces to [1 1/3 0; 0 0 0]. Thus x1 = (-1/3)x2 and x2 is free. The general solution is x2[-1/3; 1]. A basis for the eigenspace corresponding to 10 is [-1/3; 1]; another choice is [1; -3].

12. For lambda = 1: the augmented matrix for (A - I)x = 0 row reduces to [1 2/3 0; 0 0 0]. Thus x1 = (-2/3)x2 and x2 is free. A basis for the eigenspace corresponding to 1 is [-2/3; 1]; another choice is [-2; 3].
For lambda = 5: A - 5I = [2 4; -3 -6]. The augmented matrix [2 4 0; -3 -6 0] row reduces to [1 2 0; 0 0 0]. Thus x1 = -2x2 and x2 is free. The general solution is x2[-2; 1]. A basis for the eigenspace is [-2; 1].

13. For lambda = 1: the equations for (A - I)x = 0 are easy to solve, and row operations hardly seem necessary. They make clear that x2 is zero, and hence x3 is also zero. There are three variables, so x1 is free. The general solution of (A - I)x = 0 is x1*e1, where e1 = (1, 0, 0), and so e1 provides a basis for the eigenspace.
For lambda = 2: row reduce the augmented matrix [(A - 2I) 0]. The reduction gives x1 = (1/2)x3 and x2 = x3, with x3 free. The general solution is x3[1/2; 1; 1]. A nice basis vector for the eigenspace is (1, 2, 2).
For lambda = 3: row reduce the augmented matrix [(A - 3I) 0]. The reduction gives x1 = -x3 and x2 = x3, with x3 free. A basis vector for the eigenspace is (-1, 1, 1).

14. For lambda = -2: A - (-2)I = A + 2I. The augmented matrix for [A - (-2)I]x = 0, that is, (A + 2I)x = 0, row reduces to give x1 = (1/3)x3 and x2 = (2/3)x3, with x3 free. The general solution of (A + 2I)x = 0 is x3[1/3; 2/3; 1]. A basis for the eigenspace corresponding to -2 is [1/3; 2/3; 1]; another is [1; 2; 3].

15. For lambda = 3: row reducing [(A - 3I) 0] leaves the single equation x1 + 2x2 + 3x3 = 0, with x2 and x3 free. The general solution of (A - 3I)x = 0 is
  x = x2[-2; 1; 0] + x3[-3; 0; 1]
Basis for the eigenspace: {(-2, 1, 0), (-3, 0, 1)}.
Note: For simplicity, the text answer omits the set brackets. I permit my students to list a basis without the set brackets. Some instructors may prefer to include brackets.

16. For lambda = 4: row reducing [(A - 4I) 0] gives x1 = 3x3 and x2 = -3x3, with x3 and x4 free variables (the fourth column of A - 4I is zero). The general solution of (A - 4I)x = 0 is
  x = x3[3; -3; 1; 0] + x4[0; 0; 0; 1]
Basis for the eigenspace: {(3, -3, 1, 0), (0, 0, 0, 1)}.
Note: I urge my students always to include the extra column of zeros when solving a homogeneous system. Exercise 16 provides a situation in which failing to add the column is likely to create problems for a student, because the matrix A - 4I itself has a column of zeros.

17. The matrix is triangular, so its eigenvalues are the entries on its main diagonal, by Theorem 1.

18. The matrix is triangular, so its eigenvalues are 4, 0, and 3, on the main diagonal, by Theorem 1.

19. The given matrix is not invertible because its columns are linearly dependent. So the number 0 is an eigenvalue of the matrix. See the discussion following Example 5.

20. The matrix A is not invertible because its columns are linearly dependent. So the number 0 is an eigenvalue of A. Eigenvectors for the eigenvalue 0 are solutions of Ax = 0 and therefore have entries that produce a linear dependence relation among the columns of A. Any nonzero vector (in R^3) whose entries sum to 0 will work. Find any two such vectors that are not multiples; for instance, (1, 0, -1) and (0, 1, -1).

21. a. False. The equation Ax = lambda*x must have a nontrivial solution.
b. True. See the paragraph after Example 5.
c. True. See the discussion of equation (3).
d. True. See Example 2 and the paragraph preceding it. Also, see the Numerical Note.
e. False. See the warning after Example 3.

22. a. False. The vector x in Ax = lambda*x must be nonzero.
b. False. See Example 4 for a two-dimensional eigenspace, which contains two linearly independent eigenvectors corresponding to the same eigenvalue. The statement given is not at all the same as Theorem 2; in fact, it is the converse of Theorem 2 (for the case r = 2).
c. True. See the paragraph after Example 1.
d. False. Theorem 1 concerns a triangular matrix. See Examples 3 and 4 for counterexamples.
e. True. See the paragraph following Example 3. The eigenspace of A corresponding to lambda is the null space of the matrix A - lambda*I.

23. If a 2-by-2 matrix A were to have three distinct eigenvalues, then by Theorem 2 there would correspond three linearly independent eigenvectors (one for each eigenvalue). This is impossible because the vectors all belong to a two-dimensional vector space, in which any set of three vectors is linearly dependent. See Theorem 8 in Section 1.7. In general, if an n-by-n matrix has p distinct eigenvalues, then by Theorem 2 there would be a linearly independent set of p eigenvectors (one for each eigenvalue). Since these vectors belong to an n-dimensional vector space, p cannot exceed n.

24. A simple example of a 2-by-2 matrix with only one distinct eigenvalue is a triangular matrix with the same number on the diagonal. By experimentation, one finds that if such a matrix is actually a diagonal matrix then the eigenspace is two-dimensional, and otherwise the eigenspace is only one-dimensional. Examples: [4 0; 0 4] and [4 5; 0 4].

25. If lambda is an eigenvalue of A, then there is a nonzero vector x such that Ax = lambda*x. Since A is invertible, A^{-1}Ax = A^{-1}(lambda*x), and so x = lambda(A^{-1}x). Since x is nonzero (and since A is invertible), lambda cannot be zero. Then lambda^{-1}x = A^{-1}x, which shows that lambda^{-1} is an eigenvalue of A^{-1}.
Note: The Study Guide points out here that the relation between the eigenvalues of A and A^{-1} is important in the so-called inverse power method for estimating an eigenvalue of a matrix. See Section 5.8.

26. Suppose that A^2 is the zero matrix. If Ax = lambda*x for some x not zero, then
  A^2 x = A(Ax) = A(lambda*x) = lambda*Ax = lambda^2 x.
Since x is nonzero and A^2 x = 0, lambda must be zero. Thus each eigenvalue of A is zero.

27. Use the Hint in the text to write, for any lambda, (A - lambda*I)^T = A^T - lambda*I. Since (A - lambda*I)^T is invertible if and only if A - lambda*I is invertible (by Theorem 6(c) in Section 2.2), it follows that A^T - lambda*I is not invertible if and only if A - lambda*I is not invertible. That is, lambda is an eigenvalue of A^T if and only if lambda is an eigenvalue of A.
Note: If you discuss Exercise 27, you might ask students on a test to show that A and A^T have the same characteristic polynomial (discussed in Section 5.2). Since det A = det A^T for any square matrix A,
  det(A^T - lambda*I) = det((A - lambda*I)^T) = det(A - lambda*I).

28. If A is lower triangular, then A^T is upper triangular and has the same diagonal entries as A. Hence, by the part of Theorem 1 already proved in the text, these diagonal entries are eigenvalues of A^T. By Exercise 27, they are also eigenvalues of A.

29. Let v be the vector in R^n whose entries are all ones. Then Av = sv.

30. Suppose the column sums of an n-by-n matrix A all equal the same number s. By Exercise 29 applied to A^T in place of A, the number s is an eigenvalue of A^T. By Exercise 27, s is an eigenvalue of A.

31. Suppose T reflects points across (or through) a line that passes through the origin. That line consists of all multiples of some nonzero vector v. The points on this line do not move under the action of T. So T(v) = v. If A is the standard matrix of T, then Av = v. Thus v is an eigenvector of A corresponding to the eigenvalue 1. The eigenspace is Span{v}. Another eigenspace is generated by any nonzero vector u that is perpendicular to the given line. (Perpendicularity in R^2 should be a familiar concept even though orthogonality in R^n has not been discussed yet.) Each vector x on the line through u is transformed into the vector -x. The eigenvalue is -1.

33. (The solution is given in the text.)
a. Replace k by k + 1 in the definition of x_k, and obtain x_{k+1} = c1*lambda^{k+1}u + c2*mu^{k+1}v.
b. A x_k = A(c1*lambda^k u + c2*mu^k v)
        = c1*lambda^k Au + c2*mu^k Av          by linearity
        = c1*lambda^k lambda u + c2*mu^k mu v   since u and v are eigenvectors
        = x_{k+1}.

34. You could try to write x_0 as a linear combination of eigenvectors v1, ..., vp. If lambda_1, ..., lambda_p are corresponding eigenvalues, and if x_0 = c1 v1 + ... + cp vp, then you could define
  x_k = c1 lambda_1^k v1 + ... + cp lambda_p^k vp.
In this case, for k = 0, 1, 2, ...,
  A x_k = A(c1 lambda_1^k v1 + ... + cp lambda_p^k vp)
        = c1 lambda_1^k A v1 + ... + cp lambda_p^k A vp      (linearity)
        = c1 lambda_1^{k+1} v1 + ... + cp lambda_p^{k+1} vp  (the v_i are eigenvectors)
        = x_{k+1}.

35. Using the figure in the exercise, plot T(u) as 2u, because u is an eigenvector for the eigenvalue 2 of the standard matrix A. Likewise, plot T(v) as 3v, because v is an eigenvector for the eigenvalue 3. Since T is linear, the image of w is T(w) = T(u + v) = T(u) + T(v).

36. As in Exercise 35, T(u) = -u and T(v) = 3v because u and v are eigenvectors for the eigenvalues -1 and 3, respectively, of the standard matrix A. Since T is linear, the image of w is T(w) = T(u + v) = T(u) + T(v).

Note: The matrix programs supported by this text all have an eigenvalue command. In some cases, such as MATLAB, the command can be structured so it provides eigenvectors as well as a list of the eigenvalues. At this point in the course, students should not use the extra power that produces eigenvectors. Students need to be reminded frequently that eigenvectors of A are null vectors of a translate of A. That is why the instructions for Exercises 37-40 tell students to use the method of Example 4. It is my experience that nearly all students need manual practice finding eigenvectors by the method of Example 4, at least in this section if not also in Sections 5.2 and 5.3. However, [M] exercises do create a burden if eigenvectors must be found manually. For this reason, the data files for the text include a special command, nulbasis, for each matrix program (MATLAB, Maple, etc.). The output of nulbasis(A) is a matrix whose columns provide a basis for the null space of A, and these columns are identical to the ones a student would find by row reducing the augmented matrix [A 0]. With nulbasis, student answers will be the same (up to multiples) as those in the text. I encourage my students to use technology to speed up all numerical homework here, not just the [M] exercises.
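In current MATLAB, the text's nulbasis command can be imitated with the 'rational' option of the built-in null. A sketch (lam stands for an eigenvalue already reported by eig(A)):

  lam = 3;                              % hypothetical eigenvalue from eig(A)
  null(A - lam*eye(size(A,1)), 'r')     % columns form a basis for the eigenspace, like nulbasis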

37. [M] Let A be the given matrix. Use the MATLAB commands eig and nulbasis (or equivalent commands). The command ev = eig(A) computes the three eigenvalues of A and stores them in a vector ev; in this exercise, ev = (3, 13, 13). The eigenspace for the eigenvalue 3 is the null space of A - 3I. Use nulbasis to produce a basis for each null space. If the format is set for rational display, nulbasis(A - ev(1)*eye(3)) returns a single column with entries in ninths; for simplicity, scale the entries by 9 to obtain an integer basis vector for the eigenspace for lambda = 3.
For the next eigenvalue, 13, compute nulbasis(A - ev(2)*eye(3)); the two columns returned form a basis for the eigenspace for lambda = 13. There is no need to use ev(3), because it is the same as ev(2).

38. [M] ev = eig(A) lists four eigenvalues, two distinct values each repeated twice. For each distinct eigenvalue, nulbasis(A - ev(k)*eye(4)) returns two columns, so each eigenspace is two-dimensional; clearing fractions in the nulbasis output gives convenient integer basis vectors.

39. [M] For lambda = 5, nulbasis produces a basis of three vectors for the eigenspace; for the other eigenvalue, it produces a basis of two vectors.

40. [M] ev = eig(A) lists five eigenvalues. The first two are irrational: they are the roots of a quadratic factor of the characteristic polynomial. Bases for their eigenspaces come from nulbasis(A - ev(1)*eye(5)) and nulbasis(A - ev(2)*eye(5)). For the remaining (integer) eigenvalues, the eigenbases are found with nulbasis in the same way.

Note: Since so many eigenvalues in text problems are small integers, it is easy for students to form a habit of entering a value for lambda in nulbasis(A - lambda*I) based on a visual examination of the eigenvalues produced by eig(A) when only a few decimal places for lambda are displayed. Exercise 40 may help your students discover the dangers of this approach.

5.2 SOLUTIONS

Notes: Exercises 9-14 can be omitted, unless you want your students to have some facility with determinants of 3-by-3 matrices. In later sections, the text will provide eigenvalues when they are needed for matrices larger than 2-by-2. If you discussed partitioned matrices in Section 2.4, you might wish to bring in the related Supplementary Exercises in Chapter 5. Exercises 25 and 27 support the subsection on dynamical systems; the calculations in these exercises and Example 5 prepare for the discussion in Section 5.6 about eigenvector decompositions.

1. A = [2 7; 7 2], A - lambda*I = [2-lambda 7; 7 2-lambda]. The characteristic polynomial is
  det(A - lambda*I) = (2 - lambda)^2 - 7^2 = 4 - 4*lambda + lambda^2 - 49 = lambda^2 - 4*lambda - 45.
In factored form, the characteristic equation is (lambda - 9)(lambda + 5) = 0, so the eigenvalues of A are 9 and -5.

2. A = [5 3; 3 5], A - lambda*I = [5-lambda 3; 3 5-lambda]. The characteristic polynomial is
  det(A - lambda*I) = (5 - lambda)(5 - lambda) - 3*3 = lambda^2 - 10*lambda + 16.
Since lambda^2 - 10*lambda + 16 = (lambda - 8)(lambda - 2), the eigenvalues of A are 8 and 2.

3. The characteristic polynomial is
  det(A - lambda*I) = (3 - lambda)(-1 - lambda) + 2 = lambda^2 - 2*lambda - 1.
Use the quadratic formula to solve the characteristic equation and find the eigenvalues:
  lambda = (2 +/- sqrt(4 + 4))/2 = 1 +/- sqrt(2).

4. The characteristic polynomial of A is
  det(A - lambda*I) = (5 - lambda)(3 - lambda) - (-3)(-4) = lambda^2 - 8*lambda + 3.
Use the quadratic formula to solve the characteristic equation:
  lambda = (8 +/- sqrt(64 - 4(3)))/2 = (8 +/- sqrt(52))/2 = 4 +/- sqrt(13).

5. A = [2 1; -1 4], A - lambda*I = [2-lambda 1; -1 4-lambda]. The characteristic polynomial of A is
  det(A - lambda*I) = (2 - lambda)(4 - lambda) - (1)(-1) = lambda^2 - 6*lambda + 9 = (lambda - 3)^2.
Thus A has only one eigenvalue, 3, with multiplicity 2.

6. A = [3 -4; 4 8], A - lambda*I = [3-lambda -4; 4 8-lambda]. The characteristic polynomial is
  det(A - lambda*I) = (3 - lambda)(8 - lambda) - (-4)(4) = lambda^2 - 11*lambda + 40.
Use the quadratic formula to solve det(A - lambda*I) = 0:
  lambda = (11 +/- sqrt(121 - 4(40)))/2 = (11 +/- sqrt(-39))/2.
These values are complex numbers, not real numbers, so A has no real eigenvalues. There is no nonzero vector x in R^2 such that Ax = lambda*x, because a real vector Ax cannot equal a complex multiple of x.
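The characteristic polynomials in these exercises can be checked with poly and roots. A sketch, shown for Exercise 1's matrix as reconstructed:

  A = [2 7; 7 2];
  c = poly(A)     % [1 -4 -45], i.e. lambda^2 - 4*lambda - 45
  roots(c)        % 9 and -5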

7. The characteristic polynomial of A is
  det(A - lambda*I) = (5 - lambda)(4 - lambda) - (3)(-4) = lambda^2 - 9*lambda + 32.
Use the quadratic formula to solve det(A - lambda*I) = 0:
  lambda = (9 +/- sqrt(81 - 4(32)))/2 = (9 +/- sqrt(-47))/2.
These values are complex numbers, not real numbers, so A has no real eigenvalues. There is no nonzero vector x in R^2 such that Ax = lambda*x, because a real vector Ax cannot equal a complex multiple of x.

8. A = [7 -2; 2 3], A - lambda*I = [7-lambda -2; 2 3-lambda]. The characteristic polynomial is
  det(A - lambda*I) = (7 - lambda)(3 - lambda) - (-2)(2) = lambda^2 - 10*lambda + 25.
Since lambda^2 - 10*lambda + 25 = (lambda - 5)^2, the only eigenvalue is 5, with multiplicity 2.

9. From the special formula for 3-by-3 determinants, the characteristic polynomial expands and collects to
  det(A - lambda*I) = -lambda^3 + 4*lambda^2 - 9*lambda - 6.
(This polynomial has one irrational zero and two imaginary zeros.) Another way to evaluate the determinant is to interchange rows 1 and 2 (which reverses the sign of the determinant), make one row replacement, and then expand by cofactors down the first column. After simplifying, the same cubic -lambda^3 + 4*lambda^2 - 9*lambda - 6 appears.

10. From the special formula for 3-by-3 determinants, det(A - lambda*I) expands into a sum of triple products; collecting terms gives the cubic characteristic polynomial. The arithmetic parallels Exercise 9, and the special arrangement of zeros in A makes the expansion straightforward.

11. The special arrangement of zeros in A makes a cofactor expansion along the first row highly effective:
  det(A - lambda*I) = (4 - lambda) det[3-lambda ... ; ... 2-lambda]
                    = (4 - lambda)(lambda^2 - 5*lambda + 6)
                    = (4 - lambda)(lambda - 2)(lambda - 3) = -lambda^3 + 9*lambda^2 - 26*lambda + 24.
If only the eigenvalues were required, there would be no need here to write the characteristic polynomial in expanded form.

12. Make a cofactor expansion along the third row:
  det(A - lambda*I) = (2 - lambda)(-1 - lambda)(4 - lambda) = -lambda^3 + 5*lambda^2 - 2*lambda - 8.

13. Make a cofactor expansion down the third column:
  det(A - lambda*I) = (3 - lambda)[(6 - lambda)(9 - lambda) - (-2)(-2)]
                    = (3 - lambda)(lambda^2 - 15*lambda + 50)
                    = -lambda^3 + 18*lambda^2 - 95*lambda + 150, or (3 - lambda)(lambda - 5)(lambda - 10).

14. Make a cofactor expansion along the second row:
  det(A - lambda*I) = (2 - lambda)[(5 - lambda)(-2 - lambda) - 3*6]
                    = (2 - lambda)(lambda^2 - 3*lambda - 28) = (2 - lambda)(lambda - 7)(lambda + 4).

15. Use the fact that the determinant of a triangular matrix is the product of the diagonal entries:
  det(A - lambda*I) = (4 - lambda)(3 - lambda)^2(1 - lambda).
The eigenvalues are 4, 3, 3, and 1.

16. The determinant of a triangular matrix is the product of its diagonal entries:
  det(A - lambda*I) = (5 - lambda)(1 - lambda)^2(-4 - lambda).
The eigenvalues are 5, 1, 1, and -4.

17. The determinant of a triangular matrix is the product of its diagonal entries:
  det(A - lambda*I) = (3 - lambda)^2(1 - lambda)^2(2 - lambda).
The eigenvalues are 3, 3, 1, 1, and 2.

18. Row reduce the augmented matrix for the equation (A - 5I)x = 0. For a two-dimensional eigenspace, the system needs two free variables. The reduction shows that this happens if and only if h = 6.

19. Since the equation det(A - lambda*I) = (lambda_1 - lambda)(lambda_2 - lambda)...(lambda_n - lambda) holds for all lambda, set lambda = 0 and conclude that det A = lambda_1 lambda_2 ... lambda_n.

20. det(A^T - lambda*I) = det(A - lambda*I)^T     (transpose of A - lambda*I)
                        = det(A - lambda*I)        (Theorem 3(c))

21. a. False. See Example 1.
b. False. See Theorem 3.
c. True. See Theorem 3.
d. False. See the solution of Example 4.

22. a. False. See the paragraph before Theorem 3.
b. False. See Theorem 3.
c. True. See the paragraph before Example 4.
d. False. See the warning after Theorem 4.

23. If A = QR, with Q invertible, and if A1 = RQ, then write A1 = Q^{-1}QRQ = Q^{-1}AQ, which shows that A1 is similar to A.

24. First, observe that if P is invertible, then Theorem 3(b) shows that
  1 = det I = det(PP^{-1}) = (det P)(det P^{-1}).
Use Theorem 3(b) again when A = PBP^{-1}:
  det A = det(PBP^{-1}) = (det P)(det B)(det P^{-1}) = (det B)(det P)(det P^{-1}) = det B.

25. Example 5 of Section 4.9 showed that Av1 = v1, which means that v1 is an eigenvector of A corresponding to the eigenvalue 1.
a. Since A is a 2-by-2 matrix, the eigenvalues are easy to find, and factoring the characteristic polynomial is easy when one of the two factors is known:
  det[.6-lambda .3; .4 .7-lambda] = (.6 - lambda)(.7 - lambda) - (.3)(.4) = lambda^2 - 1.3*lambda + .3 = (lambda - 1)(lambda - .3).
The eigenvalues are 1 and .3. For the eigenvalue .3, solve (A - .3I)x = 0; here x1 = -x2, with x2 free. The general solution is not needed. Set x2 = 1 to find an eigenvector v2 = [-1; 1]. A suitable basis for R^2 is {v1, v2}, with v1 = [3/7; 4/7].
b. Write x0 = v1 + c*v2: [.5; .5] = [3/7; 4/7] + c[-1; 1]. By inspection, c is 1/14? No: comparing first entries, c = 3/7 - .5 = -1/14, so c = -1/14 up to the sign convention for v2. (The value of c depends on how v2 is scaled.)
c. For k = 1, 2, ..., define x_k = A^k x0. Then x1 = A(v1 + c*v2) = Av1 + c*Av2 = v1 + c(.3)v2, because v1 and v2 are eigenvectors. Again x2 = A x1 = A(v1 + c(.3)v2) = Av1 + c(.3)Av2 = v1 + c(.3)(.3)v2. Continuing, the general pattern is x_k = v1 + c(.3)^k v2. As k increases, the second term tends to 0 and so x_k tends to v1.

26. If a is nonzero, then A = [a b; c d] ~ U = [a b; 0 d - c*a^{-1}*b], and det A = a(d - c*a^{-1}*b) = ad - bc. If a = 0, then A = [0 b; c d] ~ U = [c d; 0 b] (with one interchange), so det A = (-1)^1(cb) = -bc = ad - bc.

27. a. Av1 = v1, Av2 = .5v2, Av3 = .2v3.
b. The set {v1, v2, v3} is linearly independent because the eigenvectors correspond to different eigenvalues (Theorem 2). Since there are three vectors in the set, the set is a basis for R^3. So there exist unique constants such that x0 = c1v1 + c2v2 + c3v3, and w^T x0 = c1 w^T v1 + c2 w^T v2 + c3 w^T v3. Since x0 and v1 are probability vectors and since the entries in v2 and v3 each sum to 0, the above equation shows that c1 = 1.
c. By (b), x0 = v1 + c2v2 + c3v3. Using (a),
  x_k = A^k x0 = c1 A^k v1 + c2 A^k v2 + c3 A^k v3 = v1 + c2(.5)^k v2 + c3(.2)^k v3 -> v1 as k increases.

28. [M] Answers will vary, but should show that the eigenvectors of A are not the same as the eigenvectors of A^T, unless, of course, A^T = A.

29. [M] Answers will vary. The product of the eigenvalues of A should equal det A.

30. [M] The characteristic polynomials and the eigenvalues for the various values of a are given in the following table. For the smaller values of a, all three eigenvalues are real; as a increases through the listed values, two of the eigenvalues collide and become a complex conjugate pair, while the third remains real.

  a       Characteristic polynomial       Eigenvalues
  (five listed values of a)  (cubic in t for each a)  (three real roots for the first values of a; a real root together with a complex conjugate pair for the last values)

[Graphs of the five characteristic polynomials accompany the solution.]

Notes: An appendix in Section 5.3 of the Study Guide gives an example of factoring a cubic polynomial with integer coefficients, in case you want your students to find integer eigenvalues of simple 3-by-3 or perhaps 4-by-4 matrices. The MATLAB box for Section 5.3 introduces the command poly(A), which lists the coefficients of the characteristic polynomial of the matrix A, and it gives MATLAB code that will produce a graph of the characteristic polynomial. (This is needed for Exercise 30.) The Maple and Mathematica appendices have corresponding information. The appendices for the TI and HP calculators contain only the commands that list the coefficients of the characteristic polynomial.
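Graphs like those in Exercise 30 come from evaluating the characteristic polynomial on a grid. A sketch for a generic matrix A (the plotting window is an arbitrary choice):

  c = poly(A);                       % coefficients of the characteristic polynomial
  t = linspace(-2, 4, 400);
  plot(t, polyval(c, t)), grid on    % real eigenvalues are the zero crossings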

5.3 SOLUTIONS

1. P = [5 7; 2 3], D = [2 0; 0 1], A = PDP^{-1}, and A^4 = PD^4P^{-1}. We compute
  P^{-1} = [3 -7; -2 5],  D^4 = [16 0; 0 1],
  A^4 = [5 7; 2 3][16 0; 0 1][3 -7; -2 5] = [80 7; 32 3][3 -7; -2 5] = [226 -525; 90 -209].

2. P = [2 -3; -3 5], D = [1 0; 0 1/2], A = PDP^{-1}, and A^4 = PD^4P^{-1}. We compute
  P^{-1} = [5 3; 3 2],  D^4 = [1 0; 0 1/16],
  A^4 = [2 -3; -3 5][1 0; 0 1/16][5 3; 3 2] = [151/16 45/8; -225/16 -67/8].

3. A^k = PD^kP^{-1}. Carrying out the product with D^k = diag(a^k, b^k) gives
  A^k = [a^k 0; 3(a^k - b^k) b^k].

4. A^k = PD^kP^{-1}. Carrying out the product with the given P and D expresses each entry of A^k in terms of the kth powers of the diagonal entries of D, exactly as in Exercise 3.

5. By the Diagonalization Theorem, eigenvectors form the columns of the left factor, and they correspond respectively to the eigenvalues on the diagonal of the middle factor. Thus lambda = 5 corresponds to the first column of the left factor, and lambda = 1 to the second.

6. As in Exercise 5, inspection of the factorization gives the eigenvalues 4 and 5, with the columns of the left factor as corresponding eigenvectors.

7. Since A is triangular, its eigenvalues are obviously 1 and -1.
For lambda = 1: A - I = [0 0; 6 -2]. The equation (A - I)x = 0 amounts to 6x1 - 2x2 = 0, so x1 = (1/3)x2 with x2 free. A nice basis vector for the eigenspace is v1 = [1; 3].
For lambda = -1: A + I = [2 0; 6 0]. The equation (A + I)x = 0 amounts to x1 = 0, with x2 free. A basis vector for the eigenspace is v2 = [0; 1].
From v1 and v2 construct P = [v1 v2] = [1 0; 3 1]. Then set D = [1 0; 0 -1], where the eigenvalues in D correspond to v1 and v2 respectively.
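Exercise 1's power computation is quickly verified in MATLAB (P and D as reconstructed above):

  P = [5 7; 2 3];  D = [2 0; 0 1];
  A4 = P*D^4/P     % P*D^4*inv(P) = [226 -525; 90 -209]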

8. Since A is triangular, its only eigenvalue is obviously 5.
For lambda = 5: A - 5I = [0 1; 0 0]. The equation (A - 5I)x = 0 amounts to x2 = 0, with x1 free; the general solution is x1[1; 0]. Since we cannot generate an eigenvector basis for R^2, A is not diagonalizable.

9. To find the eigenvalues of A, compute its characteristic polynomial:
  det(A - lambda*I) = (3 - lambda)(5 - lambda) - (-1)(-1) = lambda^2 - 8*lambda + 16 = (lambda - 4)^2.
Thus the only eigenvalue of A is 4.
For lambda = 4: the equation (A - 4I)x = 0 amounts to x1 + x2 = 0, so x1 = -x2 with x2 free; the general solution is x2[-1; 1]. Since we cannot generate an eigenvector basis for R^2, A is not diagonalizable.

10. To find the eigenvalues of A, compute its characteristic polynomial:
  det(A - lambda*I) = (2 - lambda)(1 - lambda) - (3)(4) = lambda^2 - 3*lambda - 10 = (lambda - 5)(lambda + 2).
Thus the eigenvalues of A are 5 and -2.
For lambda = 5: A - 5I = [-3 3; 4 -4]. The equation (A - 5I)x = 0 amounts to x1 = x2, with x2 free; a basis vector for the eigenspace is v1 = [1; 1].
For lambda = -2: A + 2I = [4 3; 4 3]. The equation (A + 2I)x = 0 amounts to 4x1 + 3x2 = 0, so x1 = (-3/4)x2 with x2 free; a nice basis vector for the eigenspace is v2 = [-3; 4].
From v1 and v2 construct P = [v1 v2] = [1 -3; 1 4]. Then set D = [5 0; 0 -2], where the eigenvalues in D correspond to v1 and v2 respectively.

11. The eigenvalues of A are given. For lambda = 3, row reducing [(A - 3I) 0] expresses x1 and x2 as fractional multiples of the free variable x3; clearing fractions gives a nice basis vector v1. For each of the two remaining eigenvalues, row reducing the corresponding augmented matrix likewise produces a one-dimensional eigenspace with basis vector v2 and v3, respectively. From v1, v2, and v3 construct P = [v1 v2 v3]; then set D to be the diagonal matrix whose entries are the eigenvalues listed in the order corresponding to v1, v2, v3.

12. The eigenvalues of A are given to be 2 and 8.
For lambda = 8: row reducing [(A - 8I) 0] gives a one-free-variable solution, and a basis vector v1 for the eigenspace.
For lambda = 2: row reducing [(A - 2I) 0] leaves one equation in three variables, so the general solution is x2 v2 + x3 v3, and a basis for the (two-dimensional) eigenspace is {v2, v3}.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(8, 2, 2), where the eigenvalues in D correspond to v1, v2, and v3 respectively.

13. The eigenvalues of A are given to be 5 and 1.
For lambda = 5: row reducing [(A - 5I) 0] gives a one-free-variable solution, and a basis vector v1 for the eigenspace.
For lambda = 1: row reducing [(A - I) 0] leaves one equation in three variables, so the general solution is x2 v2 + x3 v3, and a basis for the two-dimensional eigenspace is {v2, v3}.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(5, 1, 1), where the eigenvalues in D correspond to v1, v2, and v3 respectively.

14. The eigenvalues of A are given to be 5 and 4.
For lambda = 5: row reducing [(A - 5I) 0] leaves one equation in three variables, so the general solution is x2 v1 + x3 v2, and a basis for the two-dimensional eigenspace is {v1, v2}.
For lambda = 4: row reducing [(A - 4I) 0] gives a one-free-variable solution; clearing the fraction gives a nice basis vector v3.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(5, 5, 4), where the eigenvalues in D correspond to v1, v2, and v3 respectively.

15. The eigenvalues of A are given to be 3 and 1.
For lambda = 3: row reducing [(A - 3I) 0] leaves one equation in three variables, so a basis for the two-dimensional eigenspace is {v1, v2}.
For lambda = 1: row reducing [(A - I) 0] gives a one-free-variable solution and a basis vector v3.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(3, 3, 1), where the eigenvalues in D correspond to v1, v2, and v3 respectively.

16. The eigenvalues of A are given to be 2 and 1.
For lambda = 2: row reducing [(A - 2I) 0] leaves one equation in three variables, so a basis for the two-dimensional eigenspace is {v1, v2}.
For lambda = 1: row reducing [(A - I) 0] gives a one-free-variable solution and a basis vector v3.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(2, 2, 1), where the eigenvalues in D correspond to v1, v2, and v3 respectively.
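For the diagonalization drills in Exercises 7-16, MATLAB's eig does everything at once. A sketch:

  [V, D] = eig(A)   % columns of V are eigenvectors, and A*V = V*D;
                    % A is diagonalizable exactly when V comes out invertible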

17. Since A is triangular, its eigenvalues are obviously 4 and 5.
For lambda = 4: row reducing [(A - 4I) 0] gives a one-free-variable solution, so the eigenspace is one-dimensional with basis vector v1. Since the reduction for lambda = 5 likewise allows only one free variable, that eigenspace is also one-dimensional, and we can find at most 2 linearly independent eigenvectors for the 3-by-3 matrix A. So A is not diagonalizable.

18. An eigenvalue of A is given to be 5; an eigenvector v1 is also given. To find the eigenvalue corresponding to v1, compute Av1; the result is a scalar multiple of v1, and that scalar, -2, is the eigenvalue in question.
For lambda = 5: row reducing [(A - 5I) 0] leaves one equation in three variables, so the general solution is x2 v2 + x3 v3, and a nice basis for the two-dimensional eigenspace is {v2, v3}.
From v1, v2, and v3 construct P = [v1 v2 v3]. Then set D = diag(-2, 5, 5), where the eigenvalues in D correspond to v1, v2, and v3 respectively. Note that this answer differs from the text: there, P = [v2 v3 v1] and the entries in D are rearranged to match the new order of the eigenvectors. According to the Diagonalization Theorem, both answers are correct.

19. Since A is triangular, its eigenvalues are obviously 2, 3, and 5.
For lambda = 2: row reducing [(A - 2I) 0] leaves two free variables, x3 and x4, so the general solution is x3 v1 + x4 v2, and a nice basis for the two-dimensional eigenspace is {v1, v2}.
For lambda = 3: row reducing [(A - 3I) 0] gives a one-free-variable solution; clearing fractions gives a nice basis vector v3.
For lambda = 5: row reducing [(A - 5I) 0] gives a one-free-variable solution and a basis vector v4.
From v1, v2, v3, and v4 construct P = [v1 v2 v3 v4]. Then set D = diag(2, 2, 3, 5), where the eigenvalues in D correspond to v1, v2, v3, and v4 respectively. Note that this answer differs from the text: there, P = [v4 v3 v1 v2] and the entries in D are rearranged to match the new order of the eigenvectors. According to the Diagonalization Theorem, both answers are correct.

20. Since A is triangular, its eigenvalues are obviously 4 and 2.
For lambda = 4: row reducing [(A - 4I) 0] leaves two free variables, so the general solution is x1 v1 + x2 v2, and a basis for the two-dimensional eigenspace is {v1, v2}.
For lambda = 2: row reducing [(A - 2I) 0] likewise leaves two free variables, x3 and x4, so the general solution is x3 v3 + x4 v4, and a basis for the eigenspace is {v3, v4}.
From v1, v2, v3, and v4 construct P = [v1 v2 v3 v4]. Then set D = diag(4, 4, 2, 2), where the eigenvalues in D correspond to v1, v2, v3, and v4 respectively.

21. a. False. The symbol D does not automatically denote a diagonal matrix.
b. True. See the remark after the statement of the Diagonalization Theorem.
c. False. The 3-by-3 matrix in Example 4 has 3 eigenvalues, counting multiplicities, but it is not diagonalizable.
d. False. Invertibility depends on 0 not being an eigenvalue. (See the Invertible Matrix Theorem.) A diagonalizable matrix may or may not have 0 as an eigenvalue. See Examples 3 and 5 for both possibilities.

22. a. False. The n eigenvectors must be linearly independent. See the Diagonalization Theorem.
b. False. The matrix in Example 3 is diagonalizable, but it has only 2 distinct eigenvalues. (The statement given is the converse of Theorem 6.)
c. True. This follows from AP = PD and formulas (1) and (2) in the proof of the Diagonalization Theorem.
d. False. See Example 4. The matrix there is invertible because 0 is not an eigenvalue, but the matrix is not diagonalizable.

23. A is diagonalizable because you know that five linearly independent eigenvectors exist: three in the three-dimensional eigenspace and two in the two-dimensional eigenspace. Theorem 7 guarantees that the set of all five eigenvectors is linearly independent.

24. No, by Theorem 7(b). Here is an explanation that does not appeal to Theorem 7: let v1 and v2 be eigenvectors that span the two one-dimensional eigenspaces. If v is any other eigenvector, then it belongs to one of the eigenspaces and hence is a multiple of either v1 or v2. So there cannot exist three linearly independent eigenvectors. By the Diagonalization Theorem, A cannot be diagonalizable.

25. Let {v1} be a basis for the one-dimensional eigenspace, let v2 and v3 form a basis for the two-dimensional eigenspace, and let v4 be any eigenvector in the remaining eigenspace. By Theorem 7, {v1, v2, v3, v4} is linearly independent. Since A is 4-by-4, the Diagonalization Theorem shows that A is diagonalizable.

26. Yes, if the third eigenspace is only one-dimensional. In this case, the sum of the dimensions of the eigenspaces will be six, whereas the matrix is 7-by-7. See Theorem 7(b). An argument similar to that for Exercise 24 can also be given.

27. If A is diagonalizable, then A = PDP^{-1} for some invertible P and diagonal D. Since A is invertible, 0 is not an eigenvalue of A. So the diagonal entries in D (which are eigenvalues of A) are not zero, and D is invertible. By the theorem on the inverse of a product,
  A^{-1} = (PDP^{-1})^{-1} = (P^{-1})^{-1} D^{-1} P^{-1} = P D^{-1} P^{-1}.
Since D^{-1} is obviously diagonal, A^{-1} is diagonalizable.

280 8 CHAPER 5 Eigenvalues and Eigenvectors 8 If A has n linearly independent eigenvectors, then by the Diagonalization heorem, invertible P and diagonal D Using properties of transposes, A ( PDP ) ( P ) D P ( P) DP QDQ A PDP for some where Q ( P ) hus linearly independent eigenvectors of A A is diagonalizable By the Diagonalization heorem, the columns of Q are n 9 he diagonal entries in D are reversed from those in D So interchange the (eigenvector) columns of P to make them correspond properly to the eigenvalues in D In this case, 3 P and D 5 Although the first column of P must be an eigenvector corresponding to the eigenvalue 3, there is 3 nothing to prevent us from selecting some multiple of, say, 6 and letting 3 P 6 We now have three different factorizations or diagonalizations of A: A PDP PD P P DP 3 A nonzero multiple of an eigenvector is another eigenvector o produce P, simply multiply one or both columns of P by a nonzero scalar unequal to 3 For a matrix A to be invertible, its eigenvalues must be nonzero A first attempt at a construction 3 might be something such as, 4 whose eigenvalues are and 4 Unfortunately, a matrix with 3 two distinct eigenvalues is diagonalizable (heorem 6) So, adjust the construction to, which a b works In fact, any matrix of the form a has the desired properties when a and b are nonzero he eigenspace for the eigenvalue a is one-dimensional, as a simple calculation shows, and there is no other eigenvalue to produce a second eigenvector 3 Any matrix with two distinct eigenvalues is diagonalizable, by heorem 6 If one of those a b eigenvalues is zero, then the matrix will not be invertible Any matrix of the form has the desired properties when a and b are nonzero he number a must be nonzero to make the matrix diagonalizable; b must be nonzero to make the matrix not diagonal Other solutions are a b a and b

281 53 Solutions A, ev eig(a)(5,,-,-) 5 nulbasis(a-ev()*eye(4)) 5 A basis for the eigenspace of λ 5 is 5 nulbasis(a-ev()*eye(4)) 3 5 A basis for the eigenspace of λ is nulbasis(a-ev(3)*eye(4)), 6 3 A basis for the eigenspace of λ is, hus we construct P and D A, ev eig(a)(-4,4,,-4)

282 8 CHAPER 5 Eigenvalues and Eigenvectors nulbasis(a-ev()*eye(4)), A basis for the eigenspace of λ 4 is, nulbasis(a-ev()*eye(4)) A basis for the eigenspace of λ 4 is 36 5 nulbasis(a-ev(3)*eye(4)) A basis for the eigenspace of λ is hus we construct P 4 and D A 8 3 4, ev eig(a)(5,,3,5,) nulbasis(a-ev()*eye(5)),

283 53 Solutions A basis for the eigenspace of λ 5 is 3, nulbasis(a-ev()*eye(5)) 4, A basis for the eigenspace of λ is, nulbasis(a-ev(3)*eye(5)) 5 A basis for the eigenspace of λ 3 is hus we construct P and D A 6 4, ev eig(a)(3,5,7,5,3)

284 84 CHAPER 5 Eigenvalues and Eigenvectors 5 5 nulbasis(a-ev()*eye(5)) 5, A basis for the eigenspace of λ 3 is, 5 nulbasis(a-ev()*eye(5)), A basis for the eigenspace of λ 5is, 3333 nulbasis(a-ev(3)*eye(5)) A basis for the eigenspace of λ 7 is hus we construct P and D Notes: For your use, here is another matrix with five distinct real eigenvalues o four decimal places, they are 654, 98785, 3838, 3 733, and 6 345

285 54 Solutions he MALAB box in the Study Guide encourages students to use eig (A) and nulbasis to practice the diagonalization procedure in this section It also remarks that in later work, a student may automate the process, using the command [ P D ] eig (A) You may wish to permit students to use the full power of eig in some problems in Sections 55 and SOLUIONS 3 Since ( b) 3d 5 d, [ ( b)] D 5 Likewise ( b) d+ 6d implies that [ ( b )] D 6 and ( b3) 4d implies that [ ( b3)] D 4 hus the matrix for relative to B and 3 D is [ ( )] [ ( )] [ ( 3)] b D b D b D Since ( d) b 3 b, [ ( d)] B 3 Likewise ( d) 4b+ 5b implies that [ ( d)] B 5 4 hus the matrix for relative to D and B is [ ( )] [ ( )] d B d B a ( e) b b + b3, ( e) bb b3, ( e3) b b + b3 [ ( e )] [ ( )] [ ( )], e, e b B B 3 B c he matrix for relative to E and B is [ [ ( e)] B [ ( e)] B [ ( e3)] ] B 4 Let E { e, e} be the standard basis for 4 Since [ ( b)] E ( b),[ ( )] ( ), b E b 5 and [ ( b3)] E ( b3), 3 the matrix for relative to B and E is [[ ( b)] E [ ( b)] E [ ( b3)] E] 4 5 3

286 86 CHAPER 5 Eigenvalues and Eigenvectors 5 a 3 ( p ) ( t+ 5)( t+ t ) 3t+ 4t + t b Let p and q be polynomials in, and let c be any scalar hen ( p() t + q()) t ( t+ 5)[ p() t + q()] t ( t+ 5) p() t + ( t+ 5) q() t ( p( t)) + ( q( t)) c ( p( t)) ( t+ 5)[ c p( t)] c ( t+ 5) p( t) c [ p( t)] and is a linear transformation 5 3 c Let B {, tt, } and C {, t, t, t } Since ( b) () ( t+ 5)() t+ 5, [ ( b)] C Likewise 5 since ( b) ( t) ( t+ 5)( t) t + 5 t, [ ( b)] C, and since 3 ( b3) ( t ) ( t+ 5)( t ) t + 5 t, [ ( b3)] C hus the matrix for relative to B and C is [ [ ( b)] [ ( )] [ ( 3)] ] C b C b C 5 6 a 3 4 ( p ) ( t+ t ) + t ( t+ t ) t+ 3t t + t b Let p and q be polynomials in, and let c be any scalar hen ( p( t) + q( t)) [ p( t) + q( t)] + t [ p( t) + q( t)] [ p( t) + t p( t)] + [ q( t) + t q( t)] ( p( t)) + ( q( t)) c ( p( t)) [ c p( t)] + t[ c p( t)] c [ p( t) + t p( t)] c [ p( t)] and is a linear transformation

287 54 Solutions c Let B {, tt, } and C {, t, t, t, t } Since ( b) () + t () t +,[ ( b)] C 3 Likewise since ( b) ( t) t+ ( t )( t) t + t, [ ( b)] C, and 4 since ( b3) ( t ) t + ( t )( t ) t + t,[ ( b3)] C hus the matrix for relative to B and C is [ [ ( b)] C [ ( b)] C [ ( b3)] C] 3 7 Since ( ) () 3+ 5,[ ( )] 5 b t b B Likewise since ( b) ( t) t+ 4 t,[ ( b)] B, 4 and since ( b3) ( t ) t,[ ( b3)] B hus the matrix representation of relative to the basis 3 B is [ ( )] [ ( )] [ ( 3)] b 5 B b B b B Perhaps a faster way is to realize that the 4 information given provides the general form of ( p ) as shown in the figure below: (5 ) + (4 + ) a at a t a a a t a a t coordinate coordinate mapping mapping a 3a multiplication a 5aa by[ ] B 4 a a + a

288 88 CHAPER 5 Eigenvalues and Eigenvectors he matrix that implements the multiplication along the bottom of the figure is easily filled in by inspection: a 3a??? 3 a 5a a implies that [ ] B 5??? a 4a a??? Since [3b 4 b] 4 B, [ (3b 4 b)] B [ ] B[3b 4 b ] B and b b b b + b3 9 a (3 4 ) ( ) ( p ) 5 3() () 8 b Let p and q be polynomials in, and let c be any scalar hen ( p+ q)( ) p( ) + q( ) p( ) q( ) ( p+ q) ( )() () () () () p+ q p + q p + q ( p) + ( q) ( p+ q)() p() + q() p() q() ( c p)( ) c ( p( )) p( ) c ( p) ( c )() c( ()) c () p p p c ( p) ( c p)() c ( p()) p() and is a linear transformation c Let B {, tt, } and E { e, e, e3} be the standard basis for 3 Since [ ( )] ( ) (), [ ( )] ( ) ( ) b E b b E b t, and [ ( b3)] ( 3) ( ) E b t, the matrix for relative to B and E is [ ( )] [ ( )] [ ( 3)] b E b E b E a Let p and q be polynomials in 3, and let c be any scalar hen ( p+ q)( 3) p( 3) + q( 3) p( 3) q( 3) ( + )( ) ( ) + ( ) ( + ) p q p q ( ) ( ) p q p q + ( p) + ( q) ( p + q)() p() + q() p() q() ( p+ q)(3) p(3) + q(3) p(3) q(3) ( c p)( 3) c ( p( 3)) p( 3) ( c )( ) c ( ( )) ( ) c ( ) p p c p p c ( p) ( c p)() c ( p()) p() ( c p)(3) c ( p(3)) p(3) and is a linear transformation

289 54 Solutions 89 3 b Let B {, tt,, t } and E { e, e, e3, e4} be the standard basis for 3 Since 3 9 [ ( b)] ( b) (), [ ( b)] ( b) ( t), [ ( b3)] ( b3) ( t ), [ ( b4)] ( 4) ( ) E b t, the matrix for relative to B and E is [ ( )] [ ( )] [ ( 3)] [ ( 4)] b E b E b E b E E E E and Following Example 4, if P b b, then the B-matrix is P AP 5 3 Following Example 4, if P b b, then the B-matrix is 4 3 P AP Start by diagonalizing A he characteristic polynomial is of A are and 3 λ 4λ + 3 (λ )(λ 3), so the eigenvalues For λ : A I 3 3 he equation ( A I) x amounts to x+ x, so x x with x free A basis vector for the eigenspace is thus v 3 For λ 3: A 3 I 3 he equation ( A 3 I) x amounts to 3x+ x, so x (/ 3) x with x free A nice basis vector for the eigenspace is thus v 3 From v and v we may construct P v v 3 which diagonalizes A By heorem 8, the basis B { v, v } has the property that the B-matrix of the transformation x Ax is a diagonal matrix

290 9 CHAPER 5 Eigenvalues and Eigenvectors 4 Start by diagonalizing A he characteristic polynomial is eigenvalues of A are 8 and λ 6λ 6 (λ 8)(λ + ), so the 3 3 For λ 8: A 8 I 7 7 he equation ( 8 ) A I x amounts to x+ x, so x x with x free A basis vector for the eigenspace is thus v 7 3 For λ : A+ I 7 3 he equation ( ) A I x amounts to 7x 3x, so x (3/ 7) x 3 with x free A nice basis vector for the eigenspace is thus v 7 3 From v and v we may construct P v v 7 which diagonalizes A By heorem 8, the basis B { v, v } has the property that the B-matrix of the transformation x Ax is a diagonal matrix 5 Start by diagonalizing A he characteristic polynomial is eigenvalues of A are 5 and λ 7λ + (λ 5)(λ ), so the For λ 5: A 5 I he equation ( A 5 I) x amounts to x+ x, so x x with x free A basis vector for the eigenspace is thus v For λ : A I he equation ( ) A I x amounts to x x, so x x with x free A basis vector for the eigenspace is thus v From v and v we may construct P v v which diagonalizes A By heorem 8, the basis B { v, v } has the property that the B-matrix of the transformation x Ax is a diagonal matrix 6 Start by diagonalizing A he characteristic polynomial is 5 and λ 5λ λ(λ 5), so the eigenvalues of A are 3 6 For λ 5: A 5 I he equation ( A 5 I) x amounts to x+ x, so x x with x free A basis vector for the eigenspace is thus v 6 For λ : A I 3 he equation ( A I) x amounts to x 3x, so x 3x with 3 x free A basis vector for the eigenspace is thus v

291 54 Solutions 9 3 From v and v we may construct P v v which diagonalizes A By heorem 8, the basis B { v, v } has the property that the B-matrix of the transformation x Ax is a diagonal matrix 7 a We compute that Ab 3 b so b is an eigenvector of A corresponding to the eigenvalue he characteristic polynomial of A is λ 4λ + 4 (λ ), so is the only eigenvalue for A Now A I, which implies that the eigenspace corresponding to the eigenvalue is one-dimensional hus the matrix A is not diagonalizable b Following Example 4, if P b b, then the B-matrix for is P AP If there is a basis B such that [ ] B is diagonal, then A is similar to a diagonal matrix, by the second paragraph following Example 3 In this case, A would have three linearly independent eigenvectors However, this is not necessarily the case, because A has only two distinct eigenvalues 9 If A is similar to B, then there exists an invertible matrix P such that P AP B hus B is invertible because it is the product of invertible matrices By a theorem about inverses of products, B P A ( P ) P A P, which shows that A is similar to B, If A PBP then similar to B A ( PBP )( PBP ) PB( P P) BP PB I BP PB P So A is By hypothesis, there exist invertible P and Q such that P BP A and Q CQ A hen P BP Q CQ Left-multiply by Q and right-multiply by to obtain QP BPQ QQ CQQ So C QP BPQ ( PQ ) B( PQ ), which shows that B is similar to C If A is diagonalizable, then A PDP for some P Also, if B is similar to A, then for some Q hen B Q( PDP ) Q ( QP) D( P Q ) ( QP) D( QP) So B is diagonalizable 3 If A x λx, x, then P Ax λ P x If B P AP, then Q B QAQ BP ( x) P APP ( x) P Ax λp x (*) by the first calculation Note that P x, because x and P is invertible Hence (*) shows that P x is an eigenvector of B corresponding to λ (Of course, λ is an eigenvalue of both A and B because the matrices are similar, by heorem 4 in Section 5), 4 If A PBP then rank A rank P( BP ) rank BP, by Supplementary Exercise 3 in Chapter 4 Also, rank BP rank B, by Supplementary Exercise 4 in Chapter 4, since P is invertible hus rank A rank B

292 9 CHAPER 5 Eigenvalues and Eigenvectors, 5 If A PBP then tr( A) tr(( PB) P ) tr( P ( PB)) By the trace property tr( P PB) tr( IB) tr( B) If B is diagonal, then the diagonal entries of B must be the eigenvalues of A, by the Diagonalization heorem (heorem 5 in Section 53) So tr A tr B {sum of the eigenvalues of A } 6 If A PDP for some P, then the general trace property from Exercise 5 shows that tr A tr [( PD) P ] tr [ P PD] tr D (Or, one can use the result of Exercise 5 that since A is similar to D, tr A tr D ) Since the eigenvalues of A are on the main diagonal of D, tr D is the sum of the eigenvalues of A 7 For each j, I ( b j ) b j Since the standard coordinate vector of any vector in n is just the vector itself, [ I( b j )] ε b j hus the matrix for I relative to B and the standard basis E is simply b b b n his matrix is precisely the change-of-coordinates matrix P B defined in Section 44 8 For each j, I ( b j ) b j, and [ I ( b j )] C [ b j] C By formula (4), the matrix for I relative to the bases B and C is M [ b ] [ b ] [ b ] C C n C In heorem 5 of Section 47, this matrix was denoted by matrix from B to C P C B and was called the change-of-coordinates 9 If B { b,, b n }, then the B-coordinate vector of b j is e j, the standard basis vector for instance, b b+ b + + b n hus [ I ( bj)] B [ bj] B ej, and [ I] [ I( b )] [ I( b )] [ e e ] I B B n B n n For 3 [M] If P is the matrix whose columns come from B, then the B-matrix of the transformation D P AP From the data in the text, A P 3, b b b, D x Ax is

293 54 Solutions 93 3 [M] If P is the matrix whose columns come from B, then the B-matrix of the transformation is D P AP From the data in the text, 3 [M] A 4 6 3, P b b b, / D / A, ev eig(a) (, 4, 4, 5) 5 nulbasis(a-ev()*eye(4)) 5 3 A basis for the eigenspace of λ is b nulbasis(a-ev()*eye(4)), A basis for the eigenspace of λ 4 is { b, b 3}, nulbasis(a-ev(4)*eye(4)) 3 A basis for the eigenspace of λ 5 is b he basis B { b, b, b3, b 4} is a basis for 4 with the property that [ ] B is diagonal x Ax

294 94 CHAPER 5 Eigenvalues and Eigenvectors Note: he Study Guide comments on Exercise 5 and tells students that the trace of any square matrix A equals the sum of the eigenvalues of A, counted according to multiplicities his provides a quick check on the accuracy of an eigenvalue calculation You could also refer students to the property of the determinant described in Exercise 9 of Section 5 55 SOLUIONS λ A A λi 3, 3 λ det( A λ I) (λ)(3 λ) ( ) λ 4λ + 5 Use the quadratic formula to find the eigenvalues: λ 4± 6 ±i Example gives a shortcut for finding one eigenvector, and Example 5 shows how to write the other eigenvector with no effort i For λ + i: A ( + i) I he equation ( A λ I) x gives i ( ix ) x x+ ( i) x As in Example, the two equations are equivalent each determines the same relation between x and x So use the second equation to obtain x ( ix ), with x free he general solution is + i +i x, and the vector v provides a basis for the eigenspace i For λ i: Let v v he remark prior to Example 5 shows that v is automatically an eigenvector for + i In fact, calculations similar to those above would show that { v } is a basis for the eigenspace (In general, for a real matrix A, it can be shown that the set of complex conjugates of the vectors in a basis of the eigenspace for λ is a basis of the eigenspace for λ ) 5 5 A he characteristic polynomial is λ 6± ± i λ 6λ +, so the eigenvalues of A are i 5 For λ 3 + i: A (3 + i) I i he equation ( (3 ) ) A + i I x amounts to + i x+ ( i) x, so x ( + ix ) with x free A basis vector for the eigenspace is thus v i For λ 3 i: A basis vector for the eigenspace is v v

295 55 Solutions A 3 he characteristic polynomial is λ 4± 36 ± 3 i λ 4λ + 3, so the eigenvalues of A are 3i 5 For λ + 3i: A ( + 3 i) I 3 he equation ( A ( + 3 i) I) x amounts to i 3i x+ ( 3 i) x, so x x with x free A nice basis vector for the eigenspace is thus 3 i v + 3 i For λ 3i: A basis vector for the eigenspace is v v 5 A 3 he characteristic polynomial is λ 8± 4 4 ±i For λ 4 + i: x+ ( i) x, so x ( ix ) λ 8λ + 7, so the eigenvalues of A are i A (4 + i) I he equation ( A (4 + i) I) x amounts to i + i + with x free A basis vector for the eigenspace is thus v i For λ 4 i: A basis vector for the eigenspace is v v A 8 4 he characteristic polynomial is λ 4± 6 ± i λ 4λ + 8, so the eigenvalues of A are i For λ + i: A ( + i) I 8 he equation ( A ( + i) I) x amounts to i ( ix ) + x, so x ( + ix ) with x free A basis vector for the eigenspace is thus v + i For λ i: A basis vector for the eigenspace is v v i 4 3 A 3 4 he characteristic polynomial is λ 8± 36 4± 3 i λ 8λ + 5, so the eigenvalues of A are

296 96 CHAPER 5 Eigenvalues and Eigenvectors 3i 3 For λ 4 + 3i: A (4 + 3 i) I 3 3 he equation ( A (4 + 3 i) I) x amounts to x+ ix, so i i x ix with x free A basis vector for the eigenspace is thus v i For λ 4 3i: A basis vector for the eigenspace is v v 7 3 A From Example 6, the eigenvalues are 3 ± i he scale factor for the transformation 3 x Ax is r λ ( 3) + For the angle of rotation, plot the point ( ab, ) ( 3, ) in the xy-plane and use trigonometry: ϕ arctan ( ba / ) arctan (/ 3) π/ 6 radians Note: Your students will want to know whether you permit them on an exam to omit calculations for a matrix of the form a b b a and simply write the eigenvalues a± bi A similar question may arise about the corresponding eigenvectors, i and, which are announced in the Practice Problem Students may have i trouble keeping track of the correspondence between eigenvalues and eigenvectors A From Example 6, the eigenvalues are 3± 3 i he scale factor for the transformation 3 3 x Ax is r λ ( 3) From trigonometry, the angle of rotation ϕ is arctan ( ba / ) arctan ( / 3 3) π/ 3 radians 3 / / A From Example 6, the eigenvalues are 3 / ± () / i he scale factor for the / 3 / transformation x Ax is r λ ( 3/ ) + (/ ) From trigonometry, the angle of rotation ϕ is arctan ( ba / ) arctan (( / ) / ( 3/ )) π/ 5 6 radians

297 55 Solutions A 5 5 From Example 6, the eigenvalues are 5 ± 5 i he scale factor for the transformation x Ax is r λ ( 5) From trigonometry, the angle of rotation ϕ is arctan( ba / ) arctan(5 /( 5)) 3π/ 4 radians A From Example 6, the eigenvalues are ± i he scale factor for the transformation x Ax is r λ ( ) + ( ) / From trigonometry, the angle of rotation ϕ is arctan ( ba / ) arctan ( / ) π/ 4 radians 3 A 3 From Example 6, the eigenvalues are ± 3 i he scale factor for the transformation x Ax is r λ + (3) 3 From trigonometry, the angle of rotation ϕ is arctan ( ba / ) arctan ( ) π/ radians 3 From Exercise, λ ±i, and the eigenvector and Im v, take i v corresponds to λ i Since Re P hen compute v 3 C P AP 3 Actually, heorem 9 gives the formula for C Note that the eigenvector v corresponds to a bi instead of a+ bi If, for instance, you use the eigenvector for + i, your C will be Notes: he Study Guide points out that the matrix C is described in heorem 9 and the first column of C is the real part of the eigenvector corresponding to a bi, not a+ bi, as one might expect Since students may forget this, they are encouraged to compute C from the formula C P AP, as in the solution above he Study Guide also comments that because there are two possibilities for C in the factorization of a matrix as in Exercise 3, the measure of rotation of the angle associated with the transformation x Ax is determined only up to a change of sign he orientation of the angle is determined by the change of variable x P u See Figure 4 in the text A From Exercise, the eigenvalues of A are λ 3 ±i, and the eigenvector i v corresponds to λ 3 i By heorem 9, P [Re v Im v ] and C P AP 3

298 98 CHAPER 5 Eigenvalues and Eigenvectors A 3 From Exercise 3, the eigenvalues of A are λ ± 3 i, and the eigenvector + 3i 3 v corresponds to λ 3 i By heorem 9, P [Re v Im v] and C P AP A 3 From Exercise 4, the eigenvalues of A are λ 4 ±i, and the eigenvector i v corresponds to λ 4 i By heorem 9, P [ Re v Im v ] and 5 4 C P AP A 4 he characteristic polynomial is λ + λ +, so the eigenvalues of A are λ 6± 8 i o find an eigenvector corresponding to 6 8 i, we compute 6 + 8i 8 A( 6 8 i) I i he equation ( A( 6 8 i) I) x amounts to 4 x+ ( 6+ 8 i) x, so x (( i) / 5) x i with x free A nice eigenvector corresponding to 6 8i is thus v 5 By heorem 9, P [ Re v Im v ] 5 and C P AP A 4 6 he characteristic polynomial is λ 6λ +, so the eigenvalues of A are λ 8± 6 i o find an eigenvector corresponding to 8 6 i, we compute + 6i A (8 6) i I 4 + 6i he equation ( A ( 8 6 i) I) x amounts to 4 x+ ( + 6 i) x, so x (( 3 i) / ) x with x free 3 i A nice eigenvector corresponding to 8 6i is thus v By heorem 9, P [ Re v Im v ] and C P AP

299 55 Solutions A 56 4 he characteristic polynomial is λ 9λ +, so the eigenvalues of A are λ 96 ± 8 i o find an eigenvector corresponding to 96 8 i, we compute i 7 A (96 8) i I i he equation ( A ( 96 8 i) I) x amounts to 56 x+ ( i) x, so x (( i) / ) x with i x free A nice eigenvector corresponding to 96 8i is thus v By heorem 9, P [ Re v Im v ] and C P AP A 9 he characteristic polynomial is λ 56λ +, so the eigenvalues of A are λ 8 ± 96 i o find an eigenvector corresponding to 8 96 i, we compute 9+ 96i 4 A (8 96) i I i he equation ( A ( 8 96 i) I) x amounts to 9 x+ (9 + 96) i x, so x (( i) / ) x with i x free A nice eigenvector corresponding to 8 96i is thus v By heorem 9, P [ Re v Im v ] and C P AP he first equation in () is ( 3+ 6 ix ) 6x We solve this for x to find that x (( 3+ 6 i) / 6) x (( + i) / ) x Letting x, we find that y + i is an eigenvector for + i 4i + i the matrix A Since y + i 5 5 v the vector y is a complex multiple of the 5 vector v used in Example Since A( µ x) µ ( Ax) µ (λ x) λ( µ x), µ x is an eigenvector of A 3 (a) properties of conjugates and the fact that x x (b) Ax A x and A is real (c) x Ax is a scalar and hence may be viewed as a matrix (d) properties of transposes (e) A A and the definition of q 4 x A x x ( λ x) λ x x because x is an eigenvector It is easy to see that because zz is nonnegative for every complex number z Since Next, write x u+ i v, where u and v are real vectors hen Ax A( u+ iv) Au+ iav and λx λu+ iλv x x is real (and positive) x Ax is real, by Exercise 3, so is λ

300 3 CHAPER 5 Eigenvalues and Eigenvectors he real part of Ax is Au because the entries in A, u, and v are all real he real part of λx is λu because λ and the entries in u and v are real Since Ax and λx are equal, their real parts are equal, too (Apply the corresponding statement about complex numbers to each entry of Ax) hus A u λ u, which shows that the real part of x is an eigenvector of A 5 Write x Re x+ i (Im x), so that Ax A(Re x) + ia (Im x) Since A is real, so are A(Re x ) and A (Im x) hus A(Re x ) is the real part of Ax and A(Im x ) is the imaginary part of Ax 6 a If λ abi, then Av λ v ( a bi)(re v+ i Im v) ( a Re v+ b Im v) + i( a Im vb Re v) Re Av Im Av By Exercise 5, A(Re v) Re Av a Re v+ b Im v A(Im v) Im Av b Re v+ a Im v b Let [ v v] P Re Im By (a), a b A(Re v ) P, A(Im ) P b a v So [ (Re v) (Im v) ] AP A A a b a b P P P PC b a b a [ M] A ev eig(a) (+5i,-5i,3+i,3-i) For λ 5 i, an eigenvector is nulbasis(a-ev()*eye(4)) 5-5i - + i - i 5 5i so that v For λ 3 i, an eigenvector is nulbasis(a -ev(4)*eye(4)) -5 - i + 5i

301 55 Solutions i 5 5i so that v 75 5i Hence by heorem 9, P Re v Im v Re v Im v and C Other choices are possible, but C must equal P AP [ M ] A ev eig(a) (-4+i,-4-i,-+5i,--5i) For λ 4 i, an eigenvector is nulbasis(a-ev()*eye(4)) - - i - + i - i i + i so that v i For λ 5 i, an eigenvector is nulbasis(a-ev(4)*eye(4)) - i -5-5i i i so that v + i

302 3 CHAPER 5 Eigenvalues and Eigenvectors Hence by heorem 9, P Re v Im v Re v Im v and 4 4 C Other choices are possible, but C must equal P AP SOLUIONS he exercise does not specify the matrix A, but only lists the eigenvalues 3 and /3, and the 9 corresponding eigenvectors v and v Also, x a o find the action of A on x, express x in terms of v and v hat is, find c and c such that x cv+ c v his is certainly possible because the eigenvectors v and v are linearly independent (by inspection and also because they correspond to distinct eigenvalues) and hence form a basis for R (wo linearly independent vectors in R automatically span R ) he row reduction 9 5 v v x 4 shows that x 5v 4 v Since v and v are eigenvectors (for the eigenvalues 3 and /3): 5 4/ 3 49/ 3 x Ax 5Av 4Av 5 3v4 (/ 3) v 5 4/ 3 4/ 3 b Each time A acts on a linear combination of v and v, the v term is multiplied by the eigenvalue 3 and the v term is multiplied by the eigenvalue /3: A A[5 3 4(/ 3) ] 5(3) 4(/ 3) x x v v v v k In general, xk 5(3) v 4(/ 3) v, for k k 3 he vectors v 3 3, v, v are eigenvectors of a 3 3 matrix A, corresponding to eigenvalues 3, 4/5, and 3/5, respectively Also, x 5 o describe the solution of the equation 3 xk+ Axk( k,, ), first write x in terms of the eigenvectors 3 v v v3 x 3 5 x v + v + v

303 56 Solutions 33 hen, A A + A + A 3 + / + / 3 x (v v v ) v v v 3 v (4 5) v (3 5) v In general, k k k 3 xk 3 v + (45) / v + (35) / v For all k sufficiently large, k k xk 3 v det( ) ( 5 )( ) A, + + A λi λ λ λ λ his characteristic polynomial factors as ( λ 9)( λ 7), so the eigenvalues are 9 and 7 If v and v denote corresponding eigenvectors, and if x cv+ c v, then x Ac ( v+ cv) ca v+ ca v c(9) v+ c(7) v and for k, k k xk c (9) v + c (7) v For any choices of c and c, both the owl and wood rat populations decline over time A, det( ) ( 5 )( ) ( 4)( 5) A λi λ λ λ λ his characteristic polynomial factors as ( λ )( λ 6), so the eigenvalues are and 6 For the eigenvalue, solve ( A I ) x : 5 A basis for the eigenspace is v 5 Let v be an eigenvector for the eigenvalue 6 (he entries in v are not important for the long-term behavior of the system) If x cv+ c v, then x ca v+ cav cv+ c (6) v, and for k sufficiently large, 4 k 4 k c x + c(6) c 5 v 5 Provided that c, the owl and wood rat populations each stabilize in size, and eventually the populations are in the ratio of 4 owls for each 5 thousand rats If some aspect of the model were to change slightly, the characteristic equation would change slightly and the perturbed matrix A might not have as an eigenvalue If the eigenvalue becomes slightly large than, the two populations will grow; if the eigenvalue becomes slightly less than, both populations will decline det( ) A, + 35 A λi λ λ he quadratic formula provides the roots of the characteristic equation: 6 ± 6 4( 5775) 6 ± 5 λ 5 and 55 Because one eigenvalue is larger than one, both populations grow in size heir relative sizes are determined eventually by the entries in the eigenvector corresponding to 5 Solve ( A 5 I ) x : An eigenvector is 35 5 v 3 Eventually, there will be about 6 spotted owls for every 3 (thousand) flying squirrels

304 34 CHAPER 5 Eigenvalues and Eigenvectors When p 5, A, 5 and det( A λi) λ 6λ + 63 ( λ 9)( λ 7) he eigenvalues of A are 9 and 7, both less than in magnitude he origin is an attractor for the dynamical system and each trajectory tends toward So both populations of owls and squirrels eventually perish he calculations in Exercise 4 (as well as those in Exercises 35 and 7 in Section 5) show that if the largest eigenvalue of A is, then in most cases the population vector x k will tend toward a multiple of the eigenvector corresponding to the eigenvalue [If v and v are eigenvectors, with v corresponding to λ, and if x cv+ c v, then x k tends toward c v, provided c is not zero] So the problem here is to determine the value of the predation parameter p such that the largest eigenvalue of A is Compute the characteristic polynomial: 4 3 det λ (4 λ)( λ) + 3p λ 6 λ + ( p) p λ By the quadratic formula, 6 ± 6 4( p) λ he larger eigenvalue is when ( p) and 56 9 p 4 In this case, 64 p 6, and p 4 7 a he matrix A in Exercise has eigenvalues 3 and /3 Since 3 > and / 3 <, the origin is a saddle point b he direction of greatest attraction is determined by v, the eigenvector corresponding to the eigenvalue with absolute value less than he direction of greatest repulsion is determined by v, the eigenvector corresponding to the eigenvalue greater than c he drawing below shows: () lines through the eigenvectors and the origin, () arrows toward the origin (showing attraction) on the line through v and arrows away from the origin (showing repulsion) on the line through v, (3) several typical trajectories (with arrows) that show the general flow of points No specific points other than v and v were computed his type of drawing is about all that one can make without using a computer to plot points

305 56 Solutions 35 Note: If you wish your class to sketch trajectories for anything except saddle points, you will need to go beyond the discussion in the text he following remarks from the Study Guide are relevant Sketching trajectories for a dynamical system in which the origin is an attractor or a repellor is more difficult than the sketch in Exercise 7 here has been no discussion of the direction in which the trajectories bend as they move toward or away from the origin For instance, if you rotate Figure of Section 56 through a quarter-turn and relabel the axes so that x is on the horizontal axis, then the new figure corresponds to the matrix A with the diagonal entries 8 and 64 interchanged In general, if A is a diagonal matrix, with positive diagonal entries a and d, unequal to, then the trajectories lie on the axes or on curves s whose equations have the form x rx ( ), where s (ln d) /(ln a) and r depends on the initial point x (See Encounters with Chaos, by Denny Gulick, New York: McGraw-Hill, 99, pp 47 5) 8 he matrix from Exercise has eigenvalues 3, 4/5, and 3/5 Since one eigenvalue is greater than and the others are less than one in magnitude, the origin is a saddle point he direction of greatest repulsion is the line through the origin and the eigenvector (,, 3) for the eigenvalue 3 he direction of greatest attraction is the line through the origin and the eigenvector (,, 3 3 7) for the smallest eigenvalue 3/ det( ) λ 5λ A, Aλ I ± 5 4() 5 ± 5 5 ± 5 λ and 5 he origin is a saddle point because one eigenvalue is greater than and the other eigenvalue is less than in magnitude he direction of greatest repulsion is through the origin and the eigenvector v found 3 3 below Solve ( A I) x :, so x x, and x is free ake v he direction of greatest attraction is through the origin and the eigenvector v found below Solve 3 5 ( A 5 I ) x :, 3 so x 5 x, and x is free ake v 4 A, A I det( λ ) λ 4λ 45 4 ± 4 4(45) 4 ± 6 4 ± 4 λ 5 and 9 he origin is an attractor because both eigenvalues are less than in magnitude he direction of greatest attraction is through the origin and the eigenvector v found below Solve 4 ( A 5 I ) x :, 3 6 so x x, and x is free ake v 4 5 det( λ ) λ 7λ 7 A, A I ± 7 4(7) 7 ± 7 ± λ 8 and 9 he origin is an attractor because both eigenvalues are less than in magnitude he direction of greatest attraction is through the origin and the eigenvector v found below Solve ( A 8 I ) x :, 4 5 so x 5 x, and x is free ake v 4

306 36 CHAPER 5 Eigenvalues and Eigenvectors A, A I det( λ ) λ 9λ 88 9 ± 9 4( 88) 9 ± 9 9 ± 3 λ 8 and he origin is a saddle point because one eigenvalue is greater than and the other eigenvalue is less than in magnitude he direction of greatest repulsion is through the origin and the eigenvector v found 6 6 below Solve ( A I ) x :, 3 3 so x x, and x is free ake v he direction of greatest attraction is through the origin and the eigenvector v found below Solve 3 6 ( A 8 I ) x :, 3 6 so x x, and x is free ake v 3 A, A I det( λ ) λ 3λ 3 3 ± 3 4(3) 3 ± 3 ± λ and he origin is a repellor because both eigenvalues are greater than in magnitude he direction of greatest repulsion is through the origin and the eigenvector v found below Solve ( A I ) x :, 4 3 so x 75 x, and x is free ake v det( λ ) λ 4λ 43 A, A I ± 4 4(43) 4 ± 4 4 ± λ and 3 he origin is a repellor because both eigenvalues are greater than in magnitude he direction of greatest repulsion is through the origin and the eigenvector v found below Solve ( A 3 I ) x :, 4 6 so x 5 x, and x is free ake v 5 4 A Given eigenvector v 6 and eigenvalues 5 and o find the eigenvalue for v, compute 4 Av v hus v is an eigenvector for λ x x3 For λ 5: , x 3 x Set v x3 isfree

307 56 Solutions 37 x x For λ : 3 6 3, Set 3 x v3 3 3 x3 is free Given x (, 3, 7), find weights such that x cv+ cv + c 3v3 6 [M] v v v3 x x v+ v + 3v3 x Av+ Av + 3Av3 v+ (5) v + 3() v3, and x v + ( 5) kv + 3( ) kv As k increases, x approaches v k 3 k 9 9 A 9 89 ev eig(a) o four decimal places, / v 99 Exact : 9/ 99 nulbasis(a -eye(3)) v nulbasis(a -ev()*eye(3)) v 3 nulbasis(a -ev(3)*eye(3)) he general solution of the dynamical system is xk cv+ c(89) v + c 3(8) v3 Note: When working with stochastic matrices and starting with a probability vector (having nonnegative entries whose sum is ), it helps to scale v to make its entries sum to If v (9/ 9, 9/ 9, 99/ 9), or ( 435, 9, 474) to three decimal places, then the weight c above turns out to be See the text s discussion of Exercise 7 in Section 5 k k 7 a b 6 A 3 8 λ 6 det λ λ λ he eigenvalues of A are given by 8 ± ( 8) 4( 48) 8± 56 8± 6 λ and 4 he numbers of juveniles and adults are increasing because the largest eigenvalue is greater than he eventual growth rate of each age class is, which is % per year

308 38 CHAPER 5 Eigenvalues and Eigenvectors o find the eventual relative population sizes, solve ( A I ) x : 6 / 43 x (4/ 3) x 4 Set 3 4 v x is free 3 Eventually, there will be about 4 juveniles for every 3 adults c [M] Suppose that the initial populations are given by x (5, ) he Study Guide describes how to generate the trajectory for as many years as desired and then to plot the values for each population Let x k (j k, a k ) hen we need to plot the sequences {j k},{a k},{jk + a k }, and {jk/ a k } Adjacent points in a sequence can be connected with a line segment When a sequence is plotted, the resulting graph can be captured on the screen and printed (if done on a computer) or copied by hand onto paper (if working with a graphics calculator) 8 a b 4 A i ev eig(a) i 48 he long-term growth rate is 5, about 5 % per year 38 v nulbasis(a -ev(3)*eye(3)) 64 For each adults, there will be approximately 38 calves and yearlings Note: he MALAB box in the Study Guide and the various technology appendices all give directions for generating the sequence of points in a trajectory of a dynamical system Details for producing a graphical representation of a trajectory are also given, with several options available in MALAB, Maple, and Mathematica 57 SOLUIONS From the eigendata (eigenvalues and corresponding eigenvectors) given, the eigenfunctions for the 4t t differential equation x Ax are v e and v e he general solution of x Ax has the form 3 c e c e 4t t + 6 he initial condition x () determines c and c : 3 4() () 6 c e + c e 3 6 5/ / 3 hus c 5 /, c 3, / and 5 3 4t 3 t x() t e e

309 57 Solutions 39 From the eigendata given, the eigenfunctions for the differential equation x Ax are he general solution of x Ax has the form 3t t c e + c e he initial condition x () 3 determines c and c : 3() () c e + c e 3 / 3 5/ hus c /, c 5, / and 3t 5 t x() t + e e 3t e v and v t e det( λ ) λ (λ )(λ ) A, + A I Eigenvalues: and For λ : For λ : 3 3 3, 3 so x 3x with x free ake x and v For the initial condition 3 3, so x x with x free ake x and v 3 x (), find c and c such that cv+ c v x() : 3 3 5/ v v x() 9/ 3 hus c 5 /, c 9, / and 5 t () 9 t x t + e e Since one eigenvalue is positive and the other is negative, the origin is a saddle point of the dynamical system described by x A x he direction of greatest attraction is the line through v and the origin he direction of greatest repulsion is the line through v and the origin A, A I + 4 For λ 3: For λ : 5 det( λ ) λ λ 3 (λ )(λ 3) Eigenvalues: and 3 5 5, so x x with x free ake x and v 5 5, 5 so 5 x 5x with x free ake x and v

310 3 CHAPER 5 Eigenvalues and Eigenvectors For the initial condition 3 x (), find c and c such that cv+ cv x () : 5 3 3/ 4 v v x() 5/ 4 3 3t 5 5 t hus c 3/ 4, c 5/ 4, and x() t 4 e 4 e Since one eigenvalue is positive and the other is negative, the origin is a saddle point of the dynamical system described by x A x he direction of greatest attraction is the line through v and the origin he direction of greatest repulsion is the line through v and the origin 5 6 7, A 3 3 det ( A λ I ) λ λ + 4 (λ 4)(λ 6) Eigenvalues: 4 and 6 For λ 4: For λ 6: 3 / 3, 3 so x (/ 3) x with x free ake x 3 and v 3, 3 3 so x x with x free ake x and v 3 For the initial condition x (), find c and c such that cv+ c v x() : 3 / v v x() 3 7/ hus c /, c 7, / and 4t 7 6t x() t + 3 e e Since both eigenvalues are positive, the origin is a repellor of the dynamical system described by x A x he direction of greatest repulsion is the line through v and the origin, A 3 4 det ( A λ I ) λ + 3λ + (λ + )(λ + ) Eigenvalues: and For λ : For λ : 3 / 3, 3 so x (/ 3) x with x free ake x 3 and v 3, 3 3 so x x with x free ake x and v 3 For the initial condition x (), find c and c such that cv+ cv x () : 3 [ v v x()] 3 5 hus c, c 5, and t t x() t e e Since both eigenvalues are negative, the origin is an attractor of the dynamical system described by x A x he direction of greatest attraction is the line through v and the origin

311 57 Solutions From Exercise 5, A, 3 3 with eigenvectors v 3 and v corresponding to eigenvalues 4 4 and 6 respectively o decouple the equation x A x, set P [ v v] 3 and let D, 6 so that A PDP and D P AP Substituting x( t) Py ( t) into x Ax we have d ( P y) A ( P y) PDP ( P y) PD y dt d d d Since P has constant entries, ( Py) P ( ( y)), so that left-multiplying the equality P( ( y)) PD y by P yields y D y, or y () t 4 y() t () 6 () y t y t dt dt 8 From Exercise 6, A, 3 4 with eigenvectors v 3 and v corresponding to eigenvalues and respectively o decouple the equation x A x, set P v v 3 and let D, so that A PDP and D P AP Substituting x( t) Py ( t) into x Ax we have d ( P ) A ( P ) PDP ( P ) PD dt y y y y d d d Since P has constant entries, ( Py) P ( ( y) ), so that left-multiplying the equality P( ( y) ) by P yields y D y, or y () t y() t () () y t y t dt dt dt dt PDy 9 3 i A An eigenvalue of A is + i with corresponding eigenvector v he complex eigenfunctions v e λt λ and ve t form a basis for the set of all complex solutions to x A x he general complex solution is i ( + it ) + i ( it ) c e + c e where c and c are arbitrary complex numbers o build the general real solution, rewrite ( + it ) i t it i t ve (cos sin ) e e e t+ i t cost icost+ isin ti sin t t e cost+ isin t cost+ sin t sin tcost t+ t cos e i sin e t t ( ) v e +it as:

312 3 CHAPER 5 Eigenvalues and Eigenvectors he general real solution has the form cost+ sin t t sin tcost t c e c e cost + sin t where c and c now are real numbers he trajectories are spirals because the eigenvalues are complex he spirals tend toward the origin because the real parts of the eigenvalues are negative 3 + i A An eigenvalue of A is + i with corresponding eigenvector v he complex λt λt eigenfunctions v e and v e form a basis for the set of all complex solutions to x Ax he general complex solution is + i ( + it ) i ( it ) c e + c e where c and c are arbitrary complex numbers o build the general real solution, rewrite ( + it ) + i t it + i t ve (cos sin ) e e e t+ i t cost+ icost+ isin t+ i sin t t e costisint cost sin t t sin t+ cost t e + i e cost sint ( ) v e +it as: he general real solution has the form cost sin t t sin t+ cost t c e + c e cost sint where c and c now are real numbers he trajectories are spirals because the eigenvalues are complex he spirals tend away from the origin because the real parts of the eigenvalues are positive i A 3 An eigenvalue of A is 3i with corresponding eigenvector v he complex λt λt eigenfunctions v e and v e form a basis for the set of all complex solutions to x Ax he general complex solution is 3+ 3i (3 it ) 33i ( 3 it ) c e + c e (3 it ) where c and c are arbitrary complex numbers o build the general real solution, rewrite v e as: (3 it ) 3+ 3 i ve (cos3 t i sin 3 t ) + 3cos3t3sin3t 3sin3t+ 3cos3t + i cos3t sin3t he general real solution has the form 3cos3t3sin3t 3sin3t+ 3cos3t c c cos3t + sin3t where c and c now are real numbers he trajectories are ellipses about the origin because the real parts of the eigenvalues are zero

313 57 Solutions i A 4 5 An eigenvalue of A is + i with corresponding eigenvector v he complex λt λt eigenfunctions v e and v e form a basis for the set of all complex solutions to x Ax he general complex solution is 3 i ( + it ) 3+ i ( it ) c e + c e ( it ) where c and c are arbitrary complex numbers o build the general real solution, rewrite v e + as: ( + it ) 3 i t ve e (cos t isin t) + 3cost+ sint t 3sintcost e + i e cost sint he general real solution has the form 3cost+ sint t 3sintcost t c e c e cost + sint where c and c now are real numbers he trajectories are spirals because the eigenvalues are complex he spirals tend toward the origin because the real parts of the eigenvalues are negative i A 6 An eigenvalue of A is + 3i with corresponding eigenvector v he complex λt λt eigenfunctions v e and v e form a basis for the set of all complex solutions to x Ax he general complex solution is + i (+ 3) it i (3) it c e + c e ( 3 it ) where c and c are arbitrary complex numbers o build the general real solution, rewrite v e + as: (+ 3 it ) + i t ve (cos3 sin 3 ) e t+ i t cos3t sin 3t t sin 3t+ cos3t e + i e cos3t sin3t he general real solution has the form cos3t sin 3t t sin 3t+ cos3t t c e c e cos3t + sin3t where c and c now are real numbers he trajectories are spirals because the eigenvalues are complex he spirals tend away from the origin because the real parts of the eigenvalues are positive t t 4 i A 8 An eigenvalue of A is i with corresponding eigenvector v 4 he complex λt λt eigenfunctions v e and v e form a basis for the set of all complex solutions to x Ax he general complex solution is i ( it ) + i ( it ) c e + c e 4 4

314 34 CHAPER 5 Eigenvalues and Eigenvectors where c and c are arbitrary complex numbers o build the general real solution, rewrite ( it ) i ve (cos sin ) 4 t+ i t cost+ sin t sin tcost + i 4cost 4sint ( it ) v e as: he general real solution has the form cos t+ sin t sin tcos t c c 4cost + 4sint where c and c now are real numbers he trajectories are ellipses about the origin because the real parts of the eigenvalues are zero 5 [M] 8 6 A he eigenvalues of A are: 7 5 ev eig(a) - - nulbasis(a-ev()*eye(3)) so that v 4 nulbasis(a-ev()*eye(3)) - 6 so that v 5 nulbasis (A-ev(3)*eye(3)) - so that v 3

315 57 Solutions t t t Hence the general solution is x () t c e c e c e he origin is a saddle point 4 5 A solution with c is attracted to the origin while a solution with c c3 is repelled [M] A 5 4 he eigenvalues of A are: 4 5 ev eig(a) 4 3 nulbasis(a-ev()*eye(3)) so that v 3 nulbasis(a-ev()*eye(3)) 3-3 so that v nulbasis(a-ev(3)*eye(3)) so that v t 3t t Hence the general solution is x () t c e c e c e he origin is a repellor, because 3 all eigenvalues are positive All trajectories tend away from the origin

316 36 CHAPER 5 Eigenvalues and Eigenvectors [M] A 3 9 he eigenvalues of A are: ev eig(a) 5 + i 5 - i nulbasis(a-ev()*eye(3)) i i 3 34i so that v 9+ 4i 3 nulbasis (A-ev()*eye(3)) i i i so that v 94i 3 nulbasis (A-ev(3)*eye(3)) -3 3 so that v 3 Hence the general complex solution is 3 34i i 3 x () t c 9 4i e c 9 4i e c e 3 3 (5+ ) it (5) it t 3 Rewriting the first eigenfunction yields 3 34i 3cos t+ 34sin t 3sin t 34cos t 9 4 i e (cost isin t) 9cost 4sint e i 9sint 4cost e 3 3cost 3sint 5t 5t 5t

317 57 Solutions 37 Hence the general real solution is 3cost+ 34sin t 3sin t34cost x () t c 9cost 4sint e c 9sint 4cost e c e 3cost 3sint t t t where c, c, and c 3 are real he origin is a repellor, because the real parts of all eigenvalues are positive All trajectories spiral away from the origin 8 [M] 53 3 A he eigenvalues of A are: ev eig(a) i 5 - i nulbasis(a-ev()*eye(3)) 5 so that v nulbasis(a-ev()*eye(3)) 6 + i 9 + 3i 6+ i so that v 9+ 3i nulbasis(a-ev(3)*eye(3)) i 6 i so that v 3 93i Hence the general complex solution is 6+ i 6i x () t c e c 9 3i e c 3 9 3i e 7 t (5 + i) t (5 i) t

318 38 CHAPER 5 Eigenvalues and Eigenvectors Rewriting the second eigenfunction yields 6+ i 6cost sint 6sint+ cost (cos + sin ) 9cos 3sin + 9sin + 3cos i e t i t t t e i t t e cost sint 5t 5t 5t Hence the general real solution is 6cost sint 6sint+ cost x ( t) c e c 9cost 3sint e c 3 9sin t 3cost e cost sin t 7t 5t 5t where c, c, and c 3 are real When c c3 the trajectories tend toward the origin, and in other cases the trajectories spiral away from the origin 9 [M] Substitute R 5 /, R 3 /, C 4, and C 3 into the formula for A given in Example, and use a matrix program to find the eigenvalues and eigenvectors: 3/ 4 3 A λ 5 λ 5, : v, : v 5t 3 5 t 4 he general solution is thus x() t c + e c e he condition x () 4 implies 3 c 4 that 4 By a matrix program, c 5 / and c, / so that c v () t 5 5t 3 5 t () t e e v () t x [M] Substitute R 5 /, R 3 /, C 4, and C into the formula for A given in Example, and use a matrix program to find the eigenvalues and eigenvectors: / 3 A λ λ 5 3 3, : v, : 3 v / / 3 t 5 t 3 he general solution is thus x() t c + 3 e c 3 e he condition x () 3 implies c 3 that By a matrix program, c 53 / and c 3, / so that c v () t 5 t 5 t () t e e v () t x [M] A 5 5 Using a matrix program we find that an eigenvalue of A is 3 + 6i with + 6 i corresponding eigenvector v 5 he conjugates of these form the second eigenvalue-eigenvector pair he general complex solution is + 6i ( 3+ 6) it 6i ( 36) it x () t c e + c e 5 5

319 57 Solutions 39 where c and c are arbitrary complex numbers Rewriting the first eigenfunction and taking its real and imaginary parts, we have ( 3+ 6 it ) + 6i 3t ve (cos6 + sin 6 ) 5 e t i t cos6t 6sin6t sin6t+ 6cos6t + 5cos6 e i 5sin6 e t t 3t 3t he general real solution has the form cos6t 6sin6t t sin6t+ 6cos6t x () t c e + c e 5cos6t 5sin6t 3 3t where c and c now are real numbers o satisfy the initial condition x (), 5 we solve 6 c c to get c 3, c We now have i () t cos6t 6sin6t sin6t+ 6cos6t sin6t () 3 () x t 5cos6 e 5sin6 e 5cos6 5sin6 e vc t t t t t L 3t 3t 3t [M] A 4 8 Using a matrix program we find that an eigenvalue of A is 4 + 8i with i corresponding eigenvector v he conjugates of these form the second eigenvalueeigenvector pair he general complex solution is i ( 4+ 8 it ) + i ( 4 8 it ) x () t c e + c e where c and c are arbitrary complex numbers Rewriting the first eigenfunction and taking its real and imaginary parts, we have ( 4+ 8 it ) i 4t ve (cos 8 sin 8 ) e t+ i t cos 8t+ sin 8t sin 8t cos 8t e + i e cos 8t sin 8t 4t 4t he general real solution has the form cos 8t+ sin 8t t sin 8t cos 8t x () t c e + c e cos 8t sin 8t 4 4t where c and c now are real numbers o satisfy the initial condition x (), we solve c c + to get c, c 6 We now have i () t cos 8t+ sin 8t sin 8t cos 8 t 3sin 8t () t e 6 e e vc () t x cos 8t sin 8t cos 8t 6sin 8t L 4 t 4 t 4 t

320 3 CHAPER 5 Eigenvalues and Eigenvectors 58 SOLUIONS he vectors in the given sequence approach an eigenvector v he last vector in the sequence, x 4, 336 is probably the best estimate for v o compute an estimate for λ, examine A x4 665 his vector is approximately λv From the first entry in this vector, an estimate of λ is he vectors in the given sequence approach an eigenvector v he last vector in the sequence, 5 x 4, is probably the best estimate for v o compute an estimate for λ, examine 536 A x4 564 his vector is approximately λv From the second entry in this vector, an estimate of λ is he vectors in the given sequence approach an eigenvector v he last vector in the sequence, 588 x 4, is probably the best estimate for v o compute an estimate for λ, examine 4594 A x4 975 his vector is approximately λv From the second entry in this vector, an estimate of λ is he vectors in the given sequence approach an eigenvector v he last vector in the sequence, x 4, 75 is probably the best estimate for v o compute an estimate for λ, examine 4 A x4 39 his vector is approximately λv From the first entry in this vector, an estimate of λ is 4 5 Since A x 34 is an estimate for an eigenvector, the vector v is a vector with a in its second entry that is close to an eigenvector of A o estimate the dominant 45 eigenvalue λ of A, compute A v 5 From the second entry in this vector, an estimate of λ is 5 6 Since A x 493 is an estimate for an eigenvector, the vector v is a vector with a in its second entry that is close to an eigenvector of A o estimate the dominant 8 eigenvalue λ of A, compute A v 4 4 From the second entry in this vector, an estimate of λ is 44

321 58 Solutions [M] A, 8 5 x he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k Ax k µ k he actual eigenvalue is 3 8 [M] A, 4 5 x he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k Ax k µ k he actual eigenvalue is [M] A, x he data in the table below was calculated using Mathematica, which 3 carried more digits than shown here k x k Ax k µ k hus µ and µ he actual eigenvalue is (7 + 97) /, or to five decimal places

322 3 CHAPER 5 Eigenvalues and Eigenvectors [M] A 9, x he data in the table below was calculated using Mathematica, which 9 carried more digits than shown here k x k Ax k µ k hus µ and µ he actual eigenvalue is 5 [M] A, x he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k Ax k µ k R( x k ) he actual eigenvalue is 6 he bottom two columns of the table show that R( x k ) estimates the eigenvalue more accurately than µ k 3 [M] A, x he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k Ax k µ k R( x k )

323 58 Solutions 33 he actual eigenvalue is 4 he bottom two columns of the table show that R( x k ) estimates the eigenvalue more accurately than µ k 3 If the eigenvalues close to 4 and 4 have different absolute values, then one of these is a strictly dominant eigenvalue, so the power method will work But the power method depends on powers of the quotients λ/λ and λ/λ 3 going to zero If λ /λ is close to, its powers will go to zero slowly, and the power method will converge slowly 4 If the eigenvalues close to 4 and 4 have the same absolute value, then neither of these is a strictly dominant eigenvalue, so the power method will not work However, the inverse power method may still be used If the initial estimate is chosen near the eigenvalue close to 4, then the inverse power method should produce a sequence that estimates the eigenvalue close to 4 5 Suppose A xλx, with x For any α, Ax αix ( λα ) x If α is not an eigenvalue of A, then A αi is invertible and λ α is not ; hence x ( AαI) ( λα) x and ( λ α) x ( AαI) x his last equation shows that x is an eigenvector of ( A αi) corresponding to the eigenvalue ( λα ) 6 Suppose that µ is an eigenvalue of ( A αi) with corresponding eigenvector x Since ( A α I) x µ x, x ( A α I)( µ x) A( µ x) ( α I)( µ x) µ ( Ax) αµ x Solving this equation for Ax, we find that A x ( αµ x + x ) α + x µ µ hus λ α + (/ µ ) is an eigenvalue of A with corresponding eigenvector x [M] A 8 3 4, x, α 33 he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k y k µ k ν k hus an estimate for the eigenvalue to four decimal places is 33 he actual eigenvalue is (5 337), / or 33 to seven decimal places

324 34 CHAPER 5 Eigenvalues and Eigenvectors 8 8 [M] A, x, α 4 he data in the table below was calculated using 3 Mathematica, which carried more digits than shown here k x k y k µ k ν k hus an estimate for the eigenvalue to four decimal places is 444 he actual eigenvalue is (7 97) /, or 4449 to six decimal places [M] A, x (a) he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k Ax k µ k

325 58 Solutions 35 k x k Ax k µ k hus an estimate for the eigenvalue to four decimal places is 3887 he actual eigenvalue is to seven decimal places An estimate for the corresponding eigenvector is (b) he data in the table below was calculated using Mathematica, which carried more digits than shown here k x k y k µ k ν k hus an estimate for the eigenvalue to five decimal places is 5 he actual eigenvalue is to eight decimal places An estimate for the corresponding eigenvector is

326 36 CHAPER 5 Eigenvalues and Eigenvectors [M] , x A (a) he data in the table below was calculated using Mathematica, which carried more digits than shown here k 3 4 k x k Ax k µ k k x k Ax k µ hus an estimate for the eigenvalue to four decimal places is 98 he actual eigenvalue is to seven decimal places An estimate for the corresponding eigenvector is

327 58 Solutions 37 a (b) he data in the table below was calculated using Mathematica, which carried more digits than shown here b k x k y k µ k ν k hus an estimate for the eigenvalue to four decimal places is he actual eigenvalue is to eight decimal places An estimate for the corresponding eigenvector is , 5 x k A Here is the sequence A x for k, 5: ,,,, Notice that A 5 4 x is approximately 8( A x) Conclusion: If the eigenvalues of A are all less than in magnitude, and if x, then approximately an eigenvector for large k 5, 8 5 x k A Here is the sequence A x for k, 5: ,,,, k A x is Notice that A k 5 x seems to be converging to Conclusion: If the strictly dominant eigenvalue of A is, and if x has a component in the direction of k the corresponding eigenvector, then { A x } will converge to a multiple of that eigenvector 8 5 c k A, x 5 Here is the sequence A x for k, 5: ,,,, 4 8 6

328 38 CHAPER 5 Eigenvalues and Eigenvectors k Notice that the distance of A x from either eigenvector of A is increasing rapidly as k increases Conclusion: If the eigenvalues of A are all greater than in magnitude, and if x is not an eigenvector, k then the distance from A x to the nearest eigenvector will increase as k Chapter 5 SUPPLEMENARY EXERCISES a rue If A is invertible and if A x x for some nonzero x, then left-multiply by to obtain x A x, which may be rewritten as A x x Since x is nonzero, this shows is an eigenvalue of A b False If A is row equivalent to the identity matrix, then A is invertible he matrix in Example 4 of Section 53 shows that an invertible matrix need not be diagonalizable Also, see Exercise 3 in Section 53 c rue If A contains a row or column of zeros, then A is not row equivalent to the identity matrix and thus is not invertible By the Invertible Matrix heorem (as stated in Section 5), is an eigenvalue of A d False Consider a diagonal matrix D whose eigenvalues are and 3, that is, its diagonal entries are and 3 hen D is a diagonal matrix whose eigenvalues (diagonal entries) are and 9 In general, the eigenvalues of A are the squares of the eigenvalues of A e rue Suppose a nonzero vector x satisfies Ax λ x, then A x A( Ax) A( λx) λax λ x A his shows that x is also an eigenvector for f rue Suppose a nonzero vector x satisfies Ax λ x, then left-multiply by A to obtain x A ( λx) λ A x Since A is invertible, the eigenvalue λ is not zero So λ x A x, which shows that x is also an eigenvector of A g False Zero is an eigenvalue of each singular square matrix h rue By definition, an eigenvector must be nonzero i False Let v be an eigenvector for A hen v and v are distinct eigenvectors for the same eigenvalue (because the eigenspace is a subspace), but v and v are linearly dependent j rue his follows from heorem 4 in Section 5 k False Let A be the 3 3 matrix in Example 3 of Section 53 hen A is similar to a diagonal matrix D he eigenvectors of D are the columns of I 3, but the eigenvectors of A are entirely different l False Let A 3 hen e and e are eigenvectors of A, but e+ e is not (Actually, it can be shown that if two eigenvectors of A correspond to distinct eigenvalues, then their sum cannot be an eigenvector) m False All the diagonal entries of an upper triangular matrix are the eigenvalues of the matrix (heorem in Section 5) A diagonal entry may be zero n rue Matrices A and A have the same characteristic polynomial, because det( A λ I) det( Aλ I) det( AλI ), by the determinant transpose property o False Counterexample: Let A be the 5 5 identity matrix p rue For example, let A be the matrix that rotates vectors through π/ radians about the origin hen Ax is not a multiple of x when x is nonzero A

329 Chapter 5 Supplementary Exercises 39 q False If A is a diagonal matrix with on the diagonal, then the columns of A are not linearly independent r rue If Ax λ x and Ax λ x, then λx λx and ( λ λ ) x If x, then λ must equal λ s False Let A be a singular matrix that is diagonalizable (For instance, let A be a diagonal matrix with on the diagonal) hen, by heorem 8 in Section 54, the transformation x Ax is represented by a diagonal matrix relative to a coordinate system determined by eigenvectors of A t rue By definition of matrix multiplication, A AI A[ e e e ] [ Ae Ae Ae ] n If Ae d e for j,, n, then A is a diagonal matrix with diagonal entries d,, d n u rue If j j j, B PDP where D is a diagonal matrix, and if n, A QBQ then A Q( PDP ) Q ( QP) D( PQ ), which shows that A is diagonalizable v rue Since B is invertible, AB is similar to B( AB) B, which equals BA w False Having n linearly independent eigenvectors makes an n n matrix diagonalizable (by the Diagonalization heorem 5 in Section 53), but not necessarily invertible One of the eigenvalues of the matrix could be zero x rue If A is diagonalizable, then by the Diagonalization heorem, A has n linearly independent eigenvectors v,, v n in R n By the Basis heorem, { v,, v n } spans R n his means that each n vector in R can be written as a linear combination of v,, Suppose Bx and AB xλx for some λ hen AB ( x) λx Left-multiply each side by B and obtain BA( Bx) B( λ x) λ( B x) his equation says that Bx is an eigenvector of BA, because B x 3 a Suppose A xλx, with x hen (5 I A) x 5x A x 5 xλ x (5 λ) x he eigenvalue is 5 λ b (5I 3 A+ A ) x 5x 3 Ax+ A( A x) 5x3( λ x) +λ x (53 λ+λ ) x he eigenvalue is 53 λ+λ 4 Assume that Ax λx for some nonzero vector x he desired statement is true for m, by the assumption about λ Suppose that for some k, the statement holds when m k hat is, suppose k k k+ k k that A x λ x hen A x A( A x) A( λ x ) by the induction hypothesis Continuing, k+ k k+ A x λ Ax λ x, because x is an eigenvector of A corresponding to A Since x is nonzero, this k equation shows that λ + k is an eigenvalue of A +, with corresponding eigenvector x hus the desired statement is true when m k + By the principle of induction, the statement is true for each positive integer m 5 Suppose A xλx, with x hen pa ( ) x ( ci + ca + ca + + ca n ) x n cx+ cax+ ca x+ + cn A x n c x+ cλ x+ c λ x+ + c λ x p( λ) x So p ( λ) is an eigenvalue of p( A ) n n v n

330 33 CHAPER 5 Eigenvalues and Eigenvectors, k k, 6 a If A PDP then A PD P and B 5I 3A+ A 5PIP 3PDP + PD P P(5I 3 D+ D) P Since D is diagonal, so is 5I 3 D+ D hus B is similar to a diagonal matrix n b p( A) ci + cpdp + cpd P + + cnpd P PcI ( + cd + cd + + cdn) n P Pp( D) P his shows that p( A ) is diagonalizable, because p( D ) is a linear combination of diagonal matrices and hence is diagonal In fact, because D is diagonal, it is easy to see that p() pd ( ) p(7), 7 If A PDP then p( A) Pp( D) P, as shown in Exercise 6 If the ( j, j ) entry in D is λ, then the k ( j, j ) entry in D is λ k, and so the ( j, j ) entry in p( D ) is p ( λ) If p is the characteristic polynomial of A, then p ( λ ) for each diagonal entry of D, because these entries in D are the eigenvalues of A hus p( D ) is the zero matrix hus pa ( ) P P 8 a If λ is an eigenvalue of an n n diagonalizable matrix A, then A PDP for an invertible matrix P and an n n diagonal matrix D whose diagonal entries are the eigenvalues of A If the multiplicity of λ is n, then λ must appear in every diagonal entry of D hat is, D λ I In this case, A P( λi) P λpip λpp λ I 3 b Since the matrix A 3 is triangular, its eigenvalues are on the diagonal hus 3 is an eigenvalue with multiplicity If the matrix A were diagonalizable, then A would be 3I, by part (a) his is not the case, so A is not diagonalizable 9 If I A were not invertible, then the equation ( I A ) x would have a nontrivial solution x hen x Ax and A x x, which shows that A would have as an eigenvalue his cannot happen if all the eigenvalues are less than in magnitude So I A must be invertible k o show that A tends to the zero matrix, it suffices to show that each column of A can be made as close to the zero vector as desired by taking k sufficiently large he jth column of A is A e j, where e j is the jth column of the identity matrix Since A is diagonalizable, there is a basis for n consisting of eigenvectors v,, v n, corresponding to eigenvalues λ,,λ n So there exist scalars c,, c n, such that ej cv + + c nvn (an eigenvector decomposition of ej) hen, for k,,, k k k ej v n n vn A c ( λ ) + + c ( λ ) ( ) If the eigenvalues are all less than in absolute value, then their kth powers all tend to zero So ( ) k shows that A e tends to the zero vector, as desired j k

331 Chapter 5 Supplementary Exercises 33 a ake x in H hen x cu for some scalar c So Ax A( cu) c( Au) c( λ u) ( c λ) u, which shows that Ax is in H b Let x be a nonzero vector in K Since K is one-dimensional, K must be the set of all scalar multiples of x If K is invariant under A, then Ax is in K and hence Ax is a multiple of x hus x is an eigenvector of A Let U and V be echelon forms of A and B, obtained with r and s row interchanges, respectively, and no r s scaling hen det A ( ) det U and det B ( ) det V U Y Using first the row operations that reduce A to U, we can reduce G to a matrix of the form G B U Y hen, using the row operations that reduce B to V, we can further reduce G to G here V A X r+ s U Y will be r+ s row interchanges, and so det G det ( ) det B V Since U Y V is upper triangular, its determinant equals the product of the diagonal entries, and since U and V are upper triangular, this product also equals (det U ) (det V ) hus r+ s det G ( ) (det U)(det V) (det A)(det B ) For any scalar λ, the matrix GλI has the same partitioned form as G, with AλI and B λi as its diagonal blocks (Here I represents various identity matrices of appropriate sizes) Hence the result about det G shows that det( Gλ I) det( AλI) det( BλI ) 3 By Exercise, the eigenvalues of A are the eigenvalues of the matrix [ 3 ] together with the eigenvalues 5 of 4 3 eigenvalues of A are, 3, and 7 he only eigenvalue of [ ] 3 is 3, while the eigenvalues of are and 7 hus the 5 4 By Exercise, the eigenvalues of A are the eigenvalues of the matrix 4 together with the 7 4 eigenvalues of 3 he eigenvalues of 5 4 are and 6, while the eigenvalues of are 5 and hus the eigenvalues of A are, 5, and 6, and the eigenvalue has multiplicity 5 Replace A by A λ in the determinant formula from Exercise 6 in Chapter 3 Supplementary Exercises n det( Aλ I) ( abλ) [ aλ+ ( n) b ] his determinant is zero only if ab λ or aλ+ ( n ) b hus λ is an eigenvalue of A if and only if λ ab or λ a+ ( n ) From the formula for det( AλI ) above, the algebraic multiplicity is n for a b and for a+ ( n) b 6 he 3 3 matrix has eigenvalues and + ()(), that is, and 5 he eigenvalues of the 5 5 matrix are 7 3 and 7 + (4)(3), that is 4 and 9

332 33 CHAPER 5 Eigenvalues and Eigenvectors 7 Note that det( Aλ I) ( a λ)( a λ) a a λ ( a + a ) λ+ ( a a a a ) λ (tr A) λ+ det A, and use the quadratic formula to solve the characteristic equation: tr A± (tr A) 4det A λ he eigenvalues are both real if and only if the discriminant is nonnegative, that is, his inequality simplifies to (tr A) tra 4det A and det A (tr A) 4det A k 8 he eigenvalues of A are and 6 Use this to factor A and A 3 3 A 6 4 k k 3 3 A k (6) (6) k k k + 6(6) 3+ 3(6) k 4 4 4(6) k 6 (6) k 3 as k 9 det( ) 6 5 p ( ) Cp ; λ λ+λ λ 6 5 C I p C p ; det( C λ I) 4 6λ+ 9 λ λ p ( λ) p If p is a polynomial of order, then a calculation such as in Exercise 9 shows that the characteristic polynomial of C p is p( λ ) ( ) p ( λ), so the result is true for n Suppose the result is true for n k for some k, and consider a polynomial p of degree k + hen expanding det( Cp λi ) by cofactors down the first column, the determinant of λi equals λ ( )det λ + ( ) a a ak λ k + a Cp

333 Chapter 5 Supplementary Exercises 333 a he k k matrix shown is C λi, where the determinant of C λi is ( ) k q ( λ) hus q q qt () a + at+ + at + t By the induction assumption, k+ k k+ k k a a ak k + det( C λ I) ( ) a + ( λ)( ) q( λ) p ( ) [ +λ ( + + λ +λ )] ( ) p( λ) k k So the formula holds for n k + when it holds for n k By the principle of induction, the formula for det( C λi ) is true for all n C p p a a a b Since λ is a zero of p, 3 + λ+ λ +λ a a a and C p λ λ λ a a a 3 λ λ λ λ λ hat is, C p (,λ,λ ) λ (,λ,λ ), which shows that to the eigenvalue λ λ 3 a aλa λ λ hus k (,λ,λ ) is an eigenvector of C p corresponding 3 From Exercise, the columns of the Vandermonde matrix V are eigenvectors of C p, corresponding to the eigenvalues λ,λ,λ 3 (the roots of the polynomial p) Since these eigenvalues are distinct, the eigenvectors from a linearly independent set, by heorem in Section 5 hus V has linearly independent columns and hence is invertible, by the Invertible Matrix heorem Finally, since the columns of V are eigenvectors of C p, the Diagonalization heorem (heorem 5 in Section 53) shows that V C V is diagonal p 4 [M] he MALAB command roots (p) requires as input a row vector p whose entries are the coefficients of a polynomial, with the highest order coefficient listed first MALAB constructs a companion matrix C p whose characteristic polynomial is p, so the roots of p are the eigenvalues of C p he numerical values of the eigenvalues (roots) are found by the same QR algorithm used by the command eig(a) 5 [M] he MALAB command [P D] eig(a) produces a matrix P, whose condition number is 8 6, and a diagonal matrix D, whose entries are almost,, However, the exact eigenvalues of A are,,, and A is not diagonalizable 6 [M] his matrix may cause the same sort of trouble as the matrix in Exercise 5 A matrix program that computes eigenvalues by an interative process may indicate that A has four distinct eigenvalues, all close 4 to zero However, the only eigenvalue is, with multiplicity 4, because A

334

335 6 SOLUIONS Notes: he first half of this section is computational and is easily learned he second half concerns the concepts of orthogonality and orthogonal complements, which are essential for later work heorem 3 is an important general fact, but is needed only for Supplementary Exercise 3 at the end of the chapter and in Section 74 he optional material on angles is not used later Exercises 7 3 concern facts used later Since u ª º and ¼ v ª 4 º, 6 ¼ u u ( ) 5, v u 4( ) + 6() 8, and v u u u 8 5 Since x w w w ª 3º w and 5 ¼ x ª 6º, 3 ¼ w w 3 ( ) ( 5) 35, x w 6(3) + ( )( ) + 3( 5) 5, and 3 Since w ª 3º, 5 ¼ w w 3 ( ) ( 5) 35, and w w w ª 3/35º /35 /7 ¼ 4 Since u ª º, ¼ u u ( ) 5 and ª /5º u u u /5 ¼ 5 Since u ª º and 4, ¼ v ª º 6 u v ( )(4) + (6) 8, ¼ u v ª 4º ª 8/3º v 3 6 /3 v v¹ ¼ ¼ v v 4 6 5, and ª 6º ª 3º 6 Since x and w, x w 6(3) + ( )( ) + 3( 5) 5, 3 ¼ 5 ¼ ª 6º ª 3/49º x w 5 /49 x x x ¹ 49 3 ¼ 5/49 ¼ x x 6 ( ) 3 49, and 335

336 336 CHAPER 6 Orthogonality and Least Squares 7 Since w ª 3º, 5 ¼ w w w 3 ( ) ( 5) 35 8 Since x ª 6º, 3 ¼ x x x 6 ( ) A unit vector in the direction of the given vector is ª 3º ª 3º ª 3/5º ( 3) /5 ¼ ¼ ¼ A unit vector in the direction of the given vector is ª 6º ª 6º ª 6/ 6º 4 4 4/ 6 ( 6) 4 ( 3) ¼ ¼ 3 6 ¼ A unit vector in the direction of the given vector is ª 7/4º ª 7/4º ª 7/ 69º / / / 69 (7/ 4) (/ ) 69/6 ¼ ¼ 4/ 69 ¼ A unit vector in the direction of the given vector is ª 8/3º ª 8/3º ª 4/5º (8/ 3) /9 3/5 ¼ ¼ ¼ 3 Since x ª º 3 and ¼ y ª º, 5 ¼ x y [ ( )] [ 3 ( 5)] 5 and dist ( xy, ) ª º 4 Since u 5 and z ¼ dist ( uz, ) 68 7 ª 4º, 8 ¼ u z [ ( 4)] [ 5 ( )] [ 8] 68 and 5 Since a b 8( ) + ( 5)( 3) z, a and b are not orthogonal 6 Since u v () + (3)( 3) + ( 5)(3), u and v are orthogonal 7 Since u v 3( 4) + () + ( 5)( ) + (6), u and v are orthogonal 8 Since y z ( 3)() + 7( 8) + 4(5) + ( 7) z, y and z are not orthogonal 9 a rue See the definition of v b rue See heorem (c) c rue See the discussion of Figure 5

337 6 Solutions 337 ª d False Counterexample: º ¼ e rue See the box following Example 6 a rue See Example and heorem (a) b False he absolute value sign is missing See the box before Example c rue See the defintion of orthogonal complement d rue See the Pythagorean heorem e rue See heorem 3 heorem (b): ( uv) w ( u v) w ( u v ) w u w v w u wv w he second and third equalities used heorems 3(b) and (c), respectively, from Section heorem (c): ( ) ( ) cu v cu v c( u v) c( u v ) he second and third equalities used heorems 3(c) and (d), respectively, from Section Since u u is the sum of the squares of the entries in u, u ut he sum of squares of numbers is zero if and only if all the numbers are themselves zero 3 One computes that u v ( 7) + ( 5)( 4) + ( )6, u u u ( 5) ( ) 3, v v v ( 7) ( 4) 6, and u v ( uv) ( u v ) ( ( 7)) ( 5 ( 4)) ( 6) 3 4 One computes that u v ( uv) ( u v) u u u vv v u u v v and u v ( uv) ( u v) u u u vv v u u v v so uv u v u u v v u u v v u v 5 When v ª aº, b the set H of all vectors ª xº ¼ y that are orthogonal to Y is the subspace of vectors whose ¼ entries satisfy ax + by If a z, then x (b/a)y with y a free variable, and H is a line through the ª bº ½ origin A natural choice for a basis for H in this case is ¾ a If a and b z, then by Since ¼ b z, y and x is a free variable he subspace H is again a line through the origin A natural choice ª º ½ ª bº ½ for a basis for H in this case is ¾, but ¾ ¼ a is still a basis for H since a and b z If a ¼ and b, then H since the equation x + y places no restrictions on x or y 6 heorem in Chapter 4 may be used to show that W is a subspace of the u3 matrix u Geometrically, W is a plane through the origin 3, because W is the null space of

338 338 CHAPER 6 Orthogonality and Least Squares 7 If y is orthogonal to u and v, then y u y v, and hence by a property of the inner product, y (u + v) y u + y v + hus y is orthogonal to u+ v 8 An arbitrary w in Span{u, v} has the form w cucv If y is orthogonal to u and v, then u y v y By heorem (b) and (c), w y ( cuc v) y c ( u y) c ( v y ) 9 A typical vector in W has the form w cv }c p v p If x is orthogonal to each v j, then by heorems (b) and (c), w x ( c v }c v ) y c ( v x) }c ( v x ) p p p p So x is orthogonal to each w in W 3 a If z is in W A, u is in W, and c is any scalar, then (cz) u c(z u) c Since u is any element of W, c z is in W A b Let z and z be in W A hen for any u in W, ( zz) u z uz u hus z z is in W A c Since is orthogonal to every vector, is in W A hus W A is a subspace 3 Suppose that x is in W and W A Since x is in, x is orthogonal to every vector in W, including x itself So x x, which happens only when x W A 3 [M] a One computes that a a a 3 a 4 and that ai a j for i zj b Answers will vary, but it should be that Au u and Av v c Answers will again vary, but the cosines should be equal d A conjecture is that multiplying by A does not change the lengths of vectors or the angles between vectors x v 33 [M] Answers to the calculations will vary, but will demonstrate that the mapping x ( x) v v v¹ (for vz) is a linear transformation o confirm this, let x and y be in n, and let c be any scalar hen ( xy) v ( x v) ( y v) x v y v ( x y) v v v v ( x) ( y) v v ¹ v v ¹ v v¹ v v¹ and ( cx) v c( x v) x v c ( x) v v c v c( x) v v ¹ v v ¹ v v¹ 34 [M] One finds that N ª 5 º 4 ª 5 /3º, R 4/3 /3 ¼ 3 ¼

339 6 Solutions 339 he row-column rule for computing RN produces the 3 u zero matrix, which shows that the rows of R are orthogonal to the columns of N his is expected by heorem 3 since each row of R is in Row A and each column of N is in Nul A 6 SOLUIONS Notes: he nonsquare matrices in heorems 6 and 7 are needed for the QR factorizarion in Section 64 It is important to emphasize that the term orthogonal matrix applies only to certain square matrices he subsection on orthogonal projections not only sets the stage for the general case in Section 63, it also provides what is needed for the orthogonal diagonalization exercises in Section 7, because none of the eigenspaces there have dimension greater than For this reason, the Gram-Schmidt process (Section 64) is not really needed in Chapter 7 Exercises 3 and 4 prepare for Section 63 Since ª º ª 3º 4 4 z, 3 ¼ 7 ¼ the set is not orthogonal Since ª º ª º ª º ª 5º ª º ª 5º, ¼ ¼ ¼ ¼ ¼ ¼ the set is orthogonal 3 Since ª 6º ª 3º 3 3z, 9 ¼ ¼ the set is not orthogonal 4 Since ª º ª º ª º ª 4º ª º ª 4º 5 5, 3 ¼ ¼ 3 ¼ 6 ¼ ¼ 6 ¼ the set is orthogonal 5 Since ª 3º ª º ª 3º ª 3º ª º ª 3º ¼ 4 ¼ 3 ¼ ¼ 4 ¼ ¼, the set is orthogonal 6 Since ª 4º ª 3º 3 3 z, ¼ ¼ the set is not orthogonal 7 Since u u, { u, u } is an orthogonal set Since the vectors are non-zero, u and u are linearly independent by heorem 4 wo such vectors in automatically form a basis for So { u, u } is an orthogonal basis for By heorem 5, x u x u x u 3u u u u u u

340 34 CHAPER 6 Orthogonality and Least Squares 8 Since u u 6 6, { u, u } is an orthogonal set Since the vectors are non-zero, u and u are linearly independent by heorem 4 wo such vectors in automatically form a basis for So { u, u } is an orthogonal basis for By heorem 5, x u x u 3 3 x u u u u u u u 4 9 Since u u u u 3 u u 3, { u, u, u 3} is an orthogonal set Since the vectors are non-zero, u, u, and u 3 are linearly independent by heorem 4 hree such vectors in 3 automatically form a basis for 3 So { u, u, u 3} is an orthogonal basis for 3 By heorem 5, x u x u x u3 5 3 x u u3 u u u3 u u u u u u 3 3 Since u u u u 3 u u 3, { u, u, u 3} is an orthogonal set Since the vectors are non-zero, u, u, and u 3 are linearly independent by heorem 4 hree such vectors in 3 automatically form a basis for 3 So { u, u, u 3} is an orthogonal basis for 3 By heorem 5, x u x u x u3 4 x u u3 u u u3 u u u u u u Let y ª º 7 and 4 ¼ u ª º he orthogonal projection of y onto the line through u and the origin is the ¼ orthogonal projection of y onto u, and this vector is y u ª º yˆ u u u u ¼ Let y ª º and ¼ u ª º 3 he orthogonal projection of y onto the line through u and the origin is the ¼ orthogonal projection of y onto u, and this vector is y u ª /5º yˆ u u u u 5 6/5 ¼ 3 he orthogonal projection of y onto u is y u 3 ª 4/5º yˆ u u u u 65 7/5 ¼ he component of y orthogonal to u is ª º y y ˆ ¼ hus y y ª º ª º ˆ y y ˆ ¼ ¼ 4 he orthogonal projection of y onto u is y u ª 4/5º yˆ u u u u 5 /5 ¼

341 6 Solutions 34 he component of y orthogonal to u is ª º y y ˆ ¼ ª º ª º hus y yˆ yy ˆ ¼ ¼ 5 he distance from y to the line through u and the origin is y ŷ One computes that y u ª 3º 3 ª 8º ª 3/5º y yˆ y u u u 6 4/5 ¼ ¼ ¼ so y y ˆ is the desired distance 6 he distance from y to the line through u and the origin is y ŷ One computes that y u ª 3º ª º ª 6º y yˆ y u 3 u u 9 3 ¼ ¼ ¼ so y y ˆ is the desired distance 7 Let u ª /3º /3, /3 ¼ ª /º v Since u v, {u, v} is an orthogonal set However, u u u / 3 and / ¼ v v v /, so {u, v} is not an orthonormal set he vectors u and v may be normalized to form the orthonormal set ª 3/3º ª /º ½ u v ½, ¾ 3/3, ¾ u v 3/3 ¼ / ¼ 8 Let u ª º, ¼ ª º v Since u v z, {u, v} is not an orthogonal set ¼ ª 6 º 9 Let u, 8 ¼ v ª 8 º 6 Since u v, {u, v} is an orthogonal set Also, u u u and ¼ v v v, so {u, v} is an orthonormal set ª /3º ª /3º Let u /3, /3 v Since u v, {u, v} is an orthogonal set However, u u u and /3 ¼ ¼ v v v 5/ 9, so {u, v} is not an orthonormal set he vectors u and v may be normalized to form the orthonormal set /3 / 5 ½ ª º ª º u v ½, /3 ¾, / 5 ¾ u v /3 ¼ ¼

342 34 CHAPER 6 Orthogonality and Least Squares Let u ª / º ª 3/ º ª º 3/, v /, and w / Since u v u w v w, {u, v, w} is an 3/ ¼ / ¼ / ¼ orthogonal set Also, u u u, v v v, and w w w, so {u, v, w} is an orthonormal set Let u ª / 8º 4/ 8, / 8 ¼ ª / º ª /3º v, and w /3 Since u v u w v w, {u, v, w} is an / ¼ /3 ¼ orthogonal set Also, u u u, v v v, and w w w, so {u, v, w} is an orthonormal set 3 a rue For example, the vectors u and y in Example 3 are linearly independent but not orthogonal b rue he formulas for the weights are given in heorem 5 c False See the paragraph following Example 5 d False he matrix must also be square See the paragraph before Example 7 e False See Example 4 he distance is y ŷ 4 a rue But every orthogonal set of nonzero vectors is linearly independent See heorem 4 b False o be orthonormal, the vectors is S must be unit vectors as well as being orthogonal to each other c rue See heorem 7(a) d rue See the paragraph before Example 3 e rue See the paragraph before Example 7 5 o prove part (b), note that ( Ux) ( Uy) ( Ux) ( Uy) x U Uy x y x y because U U I If y x in part (b), (Ux) (Ux) x x, which implies part (a) Part (c) of the heorem follows immediately fom part (b) 6 A set of n nonzero orthogonal vectors must be linearly independent by heorem 4, so if such a set spans W it is a basis for W hus W is an n-dimensional subspace of n, and W n 7 If U has orthonormal columns, then U U I by heorem 6 If U is also a square matrix, then the equation U U I implies that U is invertible by the Invertible Matrix heorem 8 If U is an n un orthogonal matrix, then I UU UU Since U is the transpose of U, heorem 6 applied to U says that U has orthogonal columns In particular, the columns of U are linearly independent and hence form a basis for n by the Invertible Matrix heorem hat is, the rows of U form a basis (an orthonormal basis) for n 9 Since U and V are orthogonal, each is invertible By heorem 6 in Section, UV is invertible and ( UV ) V U V U ( UV ), where the final equality holds by heorem 3 in Section hus UV is an orthogonal matrix

343 6 Solutions If U is an orthogonal matrix, its columns are orthonormal Interchanging the columns does not change their orthonormality, so the new matrix say, V still has orthonormal columns By heorem 6, V V I Since V is square, V V by the Invertible Matrix heorem y u 3 Suppose that yˆ u Replacing u by cu with c z gives u u y ( c u ) ( ) c( y c u ) ( ) c ( y c u ) y u u u u u yˆ ( cu) ( cu) c ( u u) c ( u u) u u So ŷ does not depend on the choice of a nonzero u in the line L used in the formula 3 If v v, then by heorem (c) in Section 6, ( cv ) ( c v ) c[ v ( c v )] cc ( v v ) cc x u 33 Let L Span{u}, where u is nonzero, and let ( x) u u u For any vectors x and y in n and any scalars c and d, the properties of the inner product (heorem ) show that ( cxdy) u c ( x dy) u u u cx udy u u u u cx u dy u u u u u u u c ( x) d ( y ) hus is a linear transformation Another approach is to view as the composition of the following three linear mappings: xa x v, a b a / v v, and b bv 34 Let L Span{u}, where u is nonzero, and let ( x) reflly projlyy By Exercise 33, the mapping y proj L y is linear hus for any vectors y and z in n and any scalars c and d, c ( y dz) proj L ( cydz) ( cydz ) ( cprojlyd proj Lz) cydz cprojlycyd projlzdz c( proj Lyy) d( proj Lzz ) c ( y) d ( z ) hus is a linear transformation 35 [M] One can compute that A A I 4 Since the off-diagonal entries in A are orthogonal A A are zero, the columns of

344 344 CHAPER 6 Orthogonality and Least Squares 36 [M] a One computes that U U I 4, while UU ª º ¹ ¼ he matrices U U and UU are of different sizes and look nothing like each other b Answers will vary he vector p UU y is in Col U because p UU ( y ) Since the columns of U are simply scaled versions of the columns of A, Col U Col A hus each p is in Col A c One computes that U z d From (c), z is orthogonal to each column of A By Exercise 9 in Section 6, z must be orthogonal to every vector in Col A; that is, z is in (Col A) A 63 SOLUIONS Notes: Example seems to help students understand heorem 8 heorem 8 is needed for the Gram-Schmidt process (but only for a subspace that itself has an orthogonal basis) heorems 8 and 9 are needed for the discussions of least squares in Sections 65 and 66 heorem is used with the QR factorization to provide a good numerical method for solving least squares problems, in Section 65 Exercises 9 and lead naturally into consideration of the Gram-Schmidt process he vector in Span{ u 4} is ª º 6 x u4 7 u 4 u 4 u 4 u4 u4 36 ¼ x u4 Since x cu c u c u u, the vector u u ª º ª º ª º 8 6 x u 4 x u 4 u4 u 4 4 ¼ ¼ ¼ is in Span{ u, u, u 3}

345 63 Solutions 345 he vector in Span{ u } is ª º 4 v u 4 u u u u u 7 ¼ v u Since x ucu c3u3 c4u4, u u ª 4º ª º ª º 5 4 v u v u u u ¼ ¼ ¼ is in Span{ u, u3, u 4} the vector 3 Since u u, { u, u } is an orthogonal set he orthogonal projection of y onto Span{ u, u } is ª º ª º ª º y u y u yˆ u u u u 4 u u u u ¼ ¼ ¼ 4 Since u u, { u, u } is an orthogonal set he orthogonal projection of y onto Span{ u, u } is ª 3º ª 4º ª 6º y u y u yˆ u u u u u u u u ¼ ¼ ¼ 5 Since u u 3 4, { u, u } is an orthogonal set he orthogonal projection of y onto Span{ u, u } is ª 3º ª º ª º y u y u yˆ u u u u u u u u 4 6 ¼ ¼ 6 ¼ 6 Since u u, { u, u } is an orthogonal set he orthogonal projection of y onto Span{ u, u } is ª 4º ª º ª 6º y u y u yˆ u u u u 4 u u u u 8 ¼ ¼ ¼ 7 Since u u 53 8, { u, u } is an orthogonal set By the Orthogonal Decomposition heorem, ª /3º ª 7 / 3º y u y u yˆ u u u u /3, ˆ 7/3 3 z y y u u u u 8/3 ¼ 7/3 ¼ and y ŷ + z, where ŷ is in W and z is in W A

346 346 CHAPER 6 Orthogonality and Least Squares 8 Since u u 3, { u, u } is an orthogonal set By the Orthogonal Decomposition heorem, ª 3/º ª 5/º y u y u yˆ u u u u 7/, ˆ / z y y u u u u ¼ ¼ and y ŷ + z, where ŷ is in W and z is in W A 9 Since u u u u 3 u u 3, { u, u, u 3} is an orthogonal set By the Orthogonal Decomposition heorem, ª º ª º 4 y u y u y u3 yˆ u u u3 u u u3, z y yˆ u u u u u3 u ¼ ¼ and y ŷ + z, where ŷ is in W and z is in W A Since u u u u 3 u u 3, { u, u, u 3} is an orthogonal set By the Orthogonal Decomposition heorem, ª 5º ª º y u y u y u3 4 5 yˆ u u u3 u u u 3, z y yˆ u u u u u3 u ¼ ¼ and y ŷ + z, where ŷ is in W and z is in W A Note that v and v are orthogonal he Best Approximation heorem says that ŷ, which is the orthogonal projection of y onto W Span{ v, v }, is the closest point to y in W his vector is ª 3º y v y v 3 yˆ v v v v v v v v ¼ Note that v and v are orthogonal he Best Approximation heorem says that ŷ, which is the orthogonal projection of \ onto W Span{ v, v }, is the closest point to y in W his vector is ª º 5 y v y v yˆ v v 3v v v v v v 3 9 ¼ 3 Note that v and v are orthogonal By the Best Approximation heorem, the closest point in Span{ v, v } to z is ª º 3 z v z v 7 zˆ v v v v v v v v ¼

347 63 Solutions Note that v and v are orthogonal By the Best Approximation heorem, the closest point in Span{ v, v } to z is ª º z v z v zˆ v v v v v v v v / 3/ ¼ 5 he distance from the point y in 3 to a subspace W is defined as the distance from y to the closest point in W Since the closest point in W to y is yˆ proj y, the desired distance is y ŷ One computes that 3 yˆ 9, y y ˆ, and y y ˆ 4 6 W 6 he distance from the point y in 4 to a subspace W is defined as the distance from y to the closest point in W Since the closest point in W to y is yˆ proj y, the desired distance is y ŷ One computes that 7 a ª º ª º yˆ y y ˆ, and y ŷ 8 ¼ ¼ U U ª 8/9 /9 /9º ª º, UU /9 5/9 4/9 ¼ /9 4/9 5/9 ¼ b Since U U I, the columns of U form an orthonormal basis for W, and by heorem ª 8/9 /9 /9ºª 4º ª º projw y UU y /9 5/ 9 4/9 8 4 /9 4/9 5/9 ¼ ¼ 5 ¼ 8 a U U >@, UU ª / 3/º 3/ 9/ ¼ b Since U U, { u } forms an orthonormal basis for W, and by heorem ª / 3/ºª 7º ª º proj W y UU y 3/ 9/ 9 6 ¼ ¼ ¼ W 9 By the Orthogonal Decomposition heorem, u 3 is the sum of a vector in W Span{ u, u } and a vector v orthogonal to W his exercise asks for the vector v: ª º ª º ª º v u3 projw u3 u3 /5 /5 u u 3 5 ¹ ¼ 4/5 ¼ /5 ¼ Any multiple of the vector v will also be in W A

348 348 CHAPER 6 Orthogonality and Least Squares By the Orthogonal Decomposition heorem, u 4 is the sum of a vector in W Span{ u, u } and a vector v orthogonal to W his exercise asks for the vector v: ª º ª º ª º v u4 projw u4 u4 / 5 4 / 5 u u 6 3 ¹ ¼ /5 ¼ /5 ¼ Any multiple of the vector v will also be in W A a rue See the calculations for z in Example or the box after Example 6 in Section 6 b rue See the Orthogonal Decomposition heorem c False See the last paragraph in the proof of heorem 8, or see the second paragraph after the statement of heorem 9 d rue See the box before the Best Approximation heorem e rue heorem applies to the column space W of U because the columns of U are linearly independent and hence form a basis for W a rue See the proof of the Orthogonal Decomposition heorem b rue See the subsection A Geometric Interpretation of the Orthogonal Projection c rue he orthgonal decomposition in heorem 8 is unique d False he Best Approximation heorem says that the best approximation to y is proj W y e False his statement is only true if x is in the column space of U If n > p, then the column space of U will not be all of n, so the statement cannot be true for all x in n 3 By the Orthogonal Decomposition heorem, each x in n can be written uniquely as x p + u, with p in Row A and u in (Row A) A A By heorem 3 in Section 6, (Row A) Nul A, so u is in Nul A Next, suppose Ax b is consistent Let x be a solution and write x p + u as above hen Ap A(x u) Ax Au b b, so the equation Ax b has at least one solution p in Row A Finally, suppose that p and p are both in Row A and both satisfy Ax b hen p p is in Nul A (Row A) A, since A( p p) Ap Ap b b he equations p p ( pp ) and p p+ both then decompose p as the sum of a vector in Row A and a vector in (Row A) A By the uniqueness of the orthogonal decomposition (heorem 8), p p, and p is unique 4 a By hypothesis, the vectors w, }, w p are pairwise orthogonal, and the vectors v, }, v q are pairwise orthogonal Since w i is in W for any i and v j is in W A for any j, wi v j for any i and j hus { w, }, wp, v, }, v q} forms an orthogonal set b For any y in n, write y ŷ + z as in the Orthogonal Decomposition heorem, with ŷ in W and z in W A hen there exist scalars c, }, cp and d, }, dq such that y yˆ z cw}cpwp dv} dqv q hus the set { w, }, wp, v, }, v q} spans n c he set { w, }, wp, v, }, v q} is linearly independent by (a) and spans n by (b), and is thus a basis for n A Hence dimw dimw p q dim n

349 64 Solutions [M] Since U U I 4, U has orthonormal columns by heorem 6 in Section 6 he closest point to y in Col U is the orthogonal projection ŷ of y onto Col U From heorem, yˆ UU 7 y ª º ¼ 6 [M] he distance from b to Col U is b ˆb, where b ˆ UU 7 b One computes that ª º ª º ˆ UU 7 ˆ ˆ b b b b b b ¼ ¼ which is 66 to four decimal places 64 SOLUIONS Notes: he QR factorization encapsulates the essential outcome of the Gram-Schmidt process, just as the LU factorization describes the result of a row reduction process For practical use of linear algebra, the factorizations are more important than the algorithms that produce them In fact, the Gram-Schmidt process is not the appropriate way to compute the QR factorization For that reason, one should consider deemphasizing the hand calculation of the Gram-Schmidt process, even though it provides easy exam questions he Gram-Schmidt process is used in Sections 67 and 68, in connection with various sets of orthogonal polynomials he process is mentioned in Sections 7 and 74, but the one-dimensional projection constructed in Section 6 will suffice he QR factorization is used in an optional subsection of Section 65, and it is needed in Supplementary Exercise 7 of Chapter 7 to produce the Cholesky factorization of a positive definite matrix ª º x v Set v x and compute that v x v x 3v 5 hus an orthogonal basis for W is v v 3 ¼ ª 3º ª º ½, 5 ¾ 3 ¼ ¼

350 35 CHAPER 6 Orthogonality and Least Squares ª 5º x v Set v x and compute that v x v x v 4 v v 8 ¼ ª º ª 5º ½ 4, 4 ¾ 8 ¼ ¼ hus an orthogonal basis for W is ª 3º x v 3 Set v x and compute that v x v x v 3/ v v 3/ ¼ ª º ª 3º ½ 5, 3/ ¾ 3/ ¼ ¼ ª 3º x v 4 Set v x and compute that v x v x ( ) v 6 v v 3 ¼ ª 3º ª 3º ½ 4, 6 ¾ 5 3 ¼ ¼ hus an orthogonal basis for W is hus an orthogonal basis for W is ª 5º x v 5 Set v x and compute that v x v x v hus an orthogonal basis for W is v v 4 ¼ ª º ª 5º ½ 4, ¾ 4 ¼ ¼ ª 4º 6 x v 6 Set v x and compute that v x v x ( 3) v hus an orthogonal basis for W is v v 3 ¼ ª 3º ª 4º ½ 6, ¾ 3 ¼ ¼

351 64 Solutions 35 7 Since v 3 and v 7 / 3 6 /, an orthonormal basis for W is ª / 3º ª / 6º ½ v v ½, ¾ 5/ 3, / 6 ¾ v v / 3 ¼ / 6 ¼ 8 Since v 5 and v , an orthonormal basis for W is ª 3/ 5º ª / 6º ½ v v ½, ¾ 4/ 5, / 6 ¾ v v 5/ 5 ¼ / 6 ¼ 9 Call the columns of the matrix x, x, and x 3 and perform the Gram-Schmidt process on these vectors: v x v ª º 3 x v x v x ( ) v v v 3 ¼ v3 ª 3º x3 v x3 v 3 x3 v v x 3 v v v v v v ¹ 3 ¼ hus an orthogonal basis for W is ª 3º ª º ª 3º ½ 3,, ¾ 3 3 ¼ ¼ 3 ¼ Call the columns of the matrix x, x, and x 3 and perform the Gram-Schmidt process on these vectors: v x v ª 3º x v x v x ( 3) v v v ¼ v3 ª º x3 v x3 v 5 x3 v v x 3 v v v v v v 3 ¼ ª º ª 3º ª º ½ 3 hus an orthogonal basis for W is,, ¾ 3 ¼ ¼ ¼

352 35 CHAPER 6 Orthogonality and Least Squares Call the columns of the matrix x, x, and x 3 and perform the Gram-Schmidt process on these vectors: v x ª 3º x v v x v x ( ) v 3 v v 3 ¼ ª º x3 v x3 v v3 x3 v v x3 4v v v v v v 3 ¹ ¼ hus an orthogonal basis for W is ª º ª 3º ª º ½, 3, ¾ 3 3 ¼ ¼ ¼ Call the columns of the matrix x, x, and x 3 and perform the Gram-Schmidt process on these vectors: v x v ª º x v x v x 4v v v ¼ ª 3º 3 x3 v x3 v 7 3 v3 x3 v v x3 v v v v v v 3 3 ¼ ª º ª º ª 3º ½ 3 hus an orthogonal basis for W is,, ¾ 3 3 ¼ ¼ ¼

353 64 Solutions Since A and Q are given, R Q A 4 Since A and Q are given, R Q A ª 5 9º 5/6 /6 3/6 /6 7 ª º ª 6 º /6 5/6 /6 3/ ¼ ¼ 5 ¼ ª 3º /7 5/7 /7 4/7 5 7 ª º ª 7 7º 5/7 /7 4/7 /7 7 ¼ ¼ 4 6 ¼ 5 he columns of Q will be normalized versions of the vectors v, v, and v 3 found in Exercise hus ª / 5 / /º / 5 ª º Q / 5 / /, R Q A 6 / 5 / / 4 ¼ / 5 / / ¼ 6 he columns of Q will be normalized versions of the vectors v, v, and v 3 found in Exercise hus ª / / /º / / / ª 8 7º Q /, R Q A 3 / / / 6 ¼ / / / ¼ 7 a False Scaling was used in Example, but the scale factor was nonzero b rue See () in the statement of heorem c rue See the solution of Example 4 8 a False he three orthogonal vectors must be nonzero to be a basis for a three-dimensional subspace (his was the case in Step 3 of the solution of Example ) b rue If x is not in a subspace w, then x cannot equal proj W x, because proj W x is in W his idea was used for v k in the proof of heorem c rue See heorem 9 Suppose that x satisfies Rx ; then Q Rx Q, and Ax Since the columns of A are linearly independent, x must be his fact, in turn, shows that the columns of R are linearly indepedent Since R is square, it is invertible by the Invertible Matrix heorem

354 354 CHAPER 6 Orthogonality and Least Squares If y is in Col A, then y Ax for some x hen y QRx Q(Rx), which shows that y is a linear combination of the columns of Q using the entries in Rx as weights Conversly, suppose that y Qx for some x Since R is invertible, the equation A QR implies that Q AR So y AR x A( R x ), which shows that y is in Col A Denote the columns of Q by { q, }, q n } Note that n dm, because A is m un and has linearly independent columns he columns of Q can be extended to an orthonormal basis for m as follows Let f be the first vector in the standard basis for m that is not in W n Span{ q, }, q n}, let u f projw n f, and let q u n / u hen { q, }, qn, q n} is an orthonormal basis for Wn Span{ q, }, qn, q n } Next let f be the first vector in the standard basis for m that is not in Wn, let u f proj W, n f and let qn u/ u hen { q, }, qn, qn, q n} is an orthogonal basis for Wn Span{ q, }, qn, qn, q n } his process will continue until m n vectors have been added to the original n vectors, and { q, }, qn, qn, }, q m} is an orthonormal basis for m Q } Q Q Q hen, using partitioned matrix multiplication, Let > q and n ª Rº Q QR A O ¼ m We may assume that { u, }, u p } is an orthonormal basis for W, by normalizing the vectors in the original basis given for W, if necessary Let U be the matrix whose columns are u, }, u hen, by heorem in Section 63, ( x) proj W x ( UU ) x for x in hence is a linear transformation, as was shown in Section 8 n hus is a matrix transformation and 3 Given A QR, partition A > A where A has p columns Partition Q as has p columns, and partition R as R ª R R º, O R ¼ where R is a p up matrix hen ª R R º A A A QR Q Q Q R Q R Q R ¼ O R p Q Q Q where Q hus A Q R he matrix Q has orthonormal columns because its columns come from Q he matrix R is square and upper triangular due to its position within the upper triangular matrix R he diagonal entries of R are positive because they are diagonal entries of R hus QR is a QR factorization of A 4 [M] Call the columns of the matrix x, x, x 3, and x 4 and perform the Gram-Schmidt process on these vectors: v x ª 3º 3 x v v x v x ( ) v 3 v v 3 ¼

355 65 Solutions 355 ª 6º x3 v x3 v 4 v3 x3 v v x3 v v 6 v v v v ¹ 3¹ 6 ¼ x v x v x v v x v v v x v ( ) v v v v v v v3 v3 ¹ ª º ª 3º ª 6º ª º ½ 3 5 hus an orthogonal basis for W is 6, 3, 6, ¾ ¼ ¼ ¼ ¼ ª º 5 5 ¼ 5 [M] he columns of Q will be normalized versions of the vectors v, v, and v 3 found in Exercise 4 hus ª / / / 3 º ª º / / / Q 3/ / / 3, R Q A /5 / 3 5 ¼ / / / ¼ 6 [M] In MALAB, when A has n columns, suitable commands are Q A(:,)/norm(A(:,)) % he first column of Q for j: n va(:,j) Q*(Q *A(:,j)) Q(:,j)v/norm(v) % Add a new column to Q end 65 SOLUIONS Notes: his is a core section the basic geometric principles in this section provide the foundation for all the applications in Sections Yet this section need not take a full day Each example provides a stopping place heorem 3 and Example are all that is needed for Section 66 heorem 5, however, gives an illustration of why the QR factorization is important Example 4 is related to Exercise 7 in Section 66

356 356 CHAPER 6 Orthogonality and Least Squares o find the normal equations and to find ˆx, compute A A A b ª º ª º 6 3 ª º 3 3 ¼ ¼ 3 ¼ ª 4º ª º 4 ª º 3 3 ¼ ¼ ¼ a he normal equations are ( A A) x A b: b Compute 6 ª x º 4 x ª º ª º ¼ ¼ ¼ ª 6 º ª 4º ª ºª 4º ˆx ( A A) A b 6 ¼ ¼ ¼ ¼ ª 33º ª 3º ¼ ¼ o find the normal equations and to find x ˆ, compute A A A b ª º ª º 8 ª º 3 8 ¼ ¼ 3 ¼ ª 5º ª º 4 8 ª º 3 ¼ ¼ ¼ 8 ª x º 4 8 x a he normal equations are ( ª º ª º A A) x A b: ¼ ¼ ¼ b Compute ª 8º ª 4º ª 8ºª 4º ˆx ( A A) A b ¼ ¼ ¼ ¼ ª 4º ª 4º ¼ ¼ 3 o find the normal equations and to find ˆx, compute A A A b ª º ª º ª 6 6º ¼ ¼ 5 ¼ ª 3º ª º ª 6º ¼ ¼ ¼

357 65 Solutions 357 a he normal equations are ( ª 6 6º ª x º ª 6º A A) x A b: 6 4 x 6 ¼ ¼ ¼ b Compute xˆ (Α Α) Α b ª 88º ª 4/3º 6 7 /3 ¼ ¼ 4 o find the normal equations and to find ˆx, compute ª 3º ª º 3 3 A A ª º 3 3 ¼ ¼ ¼ ª 5º ª º ª 6º A b 3 4 ¼ ¼ ¼ a he normal equations are ( ª 3 3º ª x º ª 6º A A) x A b: 3 x 4 ¼ ¼ ¼ b Compute xˆ (Α Α) Α b ª 4º ª º 4 4 ¼ ¼ 5 o find the least squares solutions to Ax b, compute and row reduce the augmented matrix for the system A Ax A b: ª 4 4º ª 5º ª A A A º 4 3 b ¼ a ¼ ¼ 5 so all vectors of the form x ˆ 3 + x 3 are the least-squares solutions of Ax b 6 o find the least squares solutions to Ax b, compute and row reduce the augmented matrix for the system A Ax A b: ª º ª 5º ª A A A º 3 3 b ¼ a ¼ ¼ 5 so all vectors of the form x ˆ + x 3 are the least-squares solutions of Ax b

358 358 CHAPER 6 Orthogonality and Least Squares 7 From Exercise 3, A ª º, 3 5 ¼ ª 3º b, and 4 x ª º ˆ Since ¼ ¼ ª º ª º ª º ª º ª º ª º Axˆ b ¼ ¼ ¼ ¼ ¼ ¼ the least squares error is Axˆ b 8 From Exercise 4, A ª 3º, ¼ ª 5º b, and x ªº ˆ Since ¼ ¼ ª 3º ª 5º ª 4º ª 5º ª º ªº Axˆ b ¼ ¼ ¼ ¼ ¼ ¼ the least squares error is Axˆ b 9 (a) Because the columns a and a of A are orthogonal, the method of Example 4 may be used to find ˆb, the orthogonal projection of b onto Col A: ª º ª 5º ª º ˆ b a b a b a a a a 3 a a a a ¼ 4 ¼ ¼ (b) he vector ˆx contains the weights which must be placed on a and a to produce ˆb hese weights ª º are easily read from the above equation, so x ˆ ¼ (a) Because the columns a and a of A are orthogonal, the method of Example 4 may be used to find ˆb, the orthogonal projection of b onto Col A: ª º ª º ª 4º ˆ b a b a b a a 3a a 3 4 a a a a ¼ ¼ 4 ¼ (b) he vector ˆx contains the weights which must be placed on a and a to produce ˆb hese weights ª º are easily read from the above equation, so x ˆ ¼

359 65 Solutions 359 (a) Because the columns a, a and a 3 of A are orthogonal, the method of Example 4 may be used to find ˆb, the orthogonal projection of b onto Col A: ˆ b a b a b a3 b a a a3 aa a3 a a a a a a ª 4º ª º ª º ª 3º ¼ ¼ 5 ¼ ¼ (b) he vector ˆx contains the weights which must be placed on a, a, and a 3 to produce ˆb hese ª º weights are easily read from the above equation, so x ˆ ¼ (a) Because the columns a, a and a 3 of A are orthogonal, the method of Example 4 may be used to find ˆb, the orthogonal projection of b onto Col A: ˆ b a b a b a3 4 5 b a a a3 a a a3 a a a a a a 3 3 3¹ 3 3 ª º ª º ª º ª 5º ¼ ¼ ¼ 6 ¼ (b) he vector ˆx contains the weights which must be placed on a, a, and a 3 to produce ˆb hese ª º weights are easily read from the above equation, so x ˆ ¼ 3 One computes that ª º ª º Au, A b u, b Au 4 ¼ 6 ¼ ª 7º ª 4º Av, A 3 b v, b Av 9 7 ¼ ¼ Since Av is closer to b than Au is, Au is not the closest point in Col A to b hus u cannot be a leastsquares solution of Ax b

360 36 CHAPER 6 Orthogonality and Least Squares 4 One computes that ª 3º ª º Au 8, A 4 b u, b Au 4 ¼ ¼ ª 7º ª º Av, A b v, b Av 4 8 ¼ 4 ¼ Since Au and Au are equally close to b, and the orthogonal projection is the unique closest point in Col A to b, neither Au nor Av can be the closest point in Col A to b hus neither u nor v can be a least-squares solution of Ax b 5 he least squares solution satisfies Rxˆ Q b Since for the system may be row reduced to find ª 3 5 7º ª 4º ª R Q º b ¼ a ¼ ¼ and so x ª º ˆ is the least squares solution of Ax b ¼ 6 he least squares solution satisfies Rxˆ Q b Since matrix for the system may be row reduced to find ª 3 7 / º ª 9º ª R Q º b ¼ a 5 9/ 9 ¼ ¼ and so x ª º ˆ is the least squares solution of Ax b ¼ 3 5 R ª º ¼ 3 R ª º 5 ¼ and and Q Q b b ª 7º, the augmented matrix ¼ ª 7 / º 9/, the augmented ¼ 7 a rue See the beginning of the section he distance from Ax to b is Ax b b rue See the comments about equation () c False he inequality points in the wrong direction See the definition of a least-squares solution d rue See heorem 3 e rue See heorem 4 8 a rue See the paragraph following the definition of a least-squares solution b False If ˆx is the least-squares solution, then A ˆx is the point in the column space of A closest to b See Figure and the paragraph preceding it c rue See the discussion following equation () d False he formula applies only when the columns of A are linearly independent See heorem 4 e False See the comments after Example 4 f False See the Numerical Note

361 66 Solutions 36 9 a If Ax, then A Ax A his shows that Nul A is contained in Nul A A b If A A x, then x A A x x So ( Ax) ( A x ), which means that A x, and hence Ax his shows that Nul A A is contained in Nul A Suppose that Ax hen A Ax A Since A are linearly independent A A is invertible, x must be Hence the columns of a If A has linearly independent columns, then the equation Ax has only the trivial solution By Exercise 7, the equation A A x also has only the trivial solution Since A A is a square matrix, it must be invertible by the Invertible Matrix heorem b Since the n linearly independent columns of A belong to m, m could not be less than n c he n linearly independent columns of A form a basis for Col A, so the rank of A is n Note that A A has n columns because A does hen by the Rank heorem and Exercise 9, rank A A n dim Nul A A n dim Nul A rank A 3 By heorem 4, b ˆ Axˆ AA A A b he matrix statistics AAA ( ) A is sometimes called the hat-matrix in 4 Since in this case A A I, the normal equations give xˆ A b ª ºª xº ª 6 º 5 he normal equations are, y 6 whose solution is the set of all (x, y) such that x + y 3 ¼ ¼ ¼ he solutions correspond to the points on the line midway between the lines x + y and x + y 4 6 [M] Using 7 as an approximation for /, a a and a 5 Using 77 as an approximation for /, a a , a 5 66 SOLUIONS Notes: his section is a valuable reference for any person who works with data that requires statistical analysis Many graduate fields require such work Science students in particular will benefit from Example he general linear model and the subsequent examples are aimed at students who may take a multivariate statistics course hat may include more students than one might expect he design matrix X and the observation vector y are X ª º ª º, y, 3 ¼ ¼ and one can compute ª 4 6º 6 ˆ 9, ª º, ( X X X X X) X ª º 6 4 y C y 4 ¼ ¼ ¼ he least-squares line y C Cx is thus y 9 + 4x

362 36 CHAPER 6 Orthogonality and Least Squares he design matrix X and the observation vector y are X ª º ª º, y, 4 5 ¼ 3 ¼ and one can compute ª 4 º 6 ˆ 6, ª º, ( X X X X X) X ª º 46 y 5 C y 7 ¼ ¼ ¼ he least-squares line y C Cx is thus y 6 + 7x 3 he design matrix X and the observation vector \ are X ª º ª º, y, ¼ 4 ¼ and one can compute ª 4 º 7 ˆ, ª º, ( X X X X X) X ª º 6 y C y 3 ¼ ¼ ¼ he least-squares line y C Cx is thus y + 3x 4 he design matrix X and the observation vector y are X ª º ª 3º 3, y, 5 6 ¼ ¼ and one can compute ª 4 6º 6 ˆ 43, ª º, ( X X X X X) X ª º 6 74 y 7 C y 7 ¼ ¼ ¼ he least-squares line y C Cx is thus y 43 7x 5 If two data points have different x-coordinates, then the two columns of the design matrix X cannot be multiples of each other and hence are linearly independent By heorem 4 in Section 65, the normal equations have a unique solution 6 If the columns of X were linearly dependent, then the same dependence relation would hold for the vectors in 3 formed from the top three entries in each column hat is, the columns of the matrix ª x x º x x would also be linearly dependent, and so this matrix (called a Vandermonde matrix) x3 x3 ¼ would be noninvertible Note that the determinant of this matrix is ( x x)( x3 x)( x3 x) z since x, x, and x 3 are distinct hus this matrix is invertible, which means that the columns of X are in fact linearly independent By heorem 4 in Section 65, the normal equations have a unique solution

363 66 Solutions a he model that produces the correct least-squares fit is y XC+ F, where ª º ª 8º ª F º 4 7 F ª C º X 3 9, y 34, C,and 3 C F F ¼ F4 5 5 ¼ 39 ¼ F 5 ¼ b [M] One computes that (to two decimal places) y 76 x x ˆ ª 76º C, so the desired least-squares equation is ¼ 8 a he model that produces the correct least-squares fit is y XC+ F, where X 3 ª x x x º ª yº ª Cº ª Fº, y, C,and C F 3 x y n xn x n n ¼ C3 ¼ Fn ¼ ¼ b [M] For the given data, ª º ª 58º X and y ¼ 43 ¼ ª 53º so ˆ ( X X) X 3348 C y, and the least-squares curve is 6 ¼ 3 y 53 x3348 x 6 x 9 he model that produces the correct least-squares fit is y XC+ F, where ª cos sin º ª 79º ª F º A X cos sin, 54 ª º,, and y C B F F ¼ cos 3 sin 3 ¼ 9 ¼ F3 ¼ a he model that produces the correct least-squares fit is y XC + F, where () 7() ª e e º ª 34º ª F º () 7() e e 68 F M () 7() A X ª º e e, 5,,and 3 y C, M F F B (4) 7(4) e e ¼ 887 F4 (5) 7(5) e e 83 ¼ F5 ¼ ¼

364 364 CHAPER 6 Orthogonality and Least Squares b [M] One computes that (to two decimal places) t 7t y 994e e ˆ ª 994º C, so the desired least-squares equation is ¼ [M] he model that produces the correct least-squares fit is y XCÁ+ F, where ª 3cos88º ª 3º ª F º 3cos 3 F ª C º X 65 cos4, y 65, C,and 3 e F F ¼ 5 cos77 5 F4 cos 4 ¼ ¼ F 5 ¼ 45 One computes that (to two decimal places) ˆ ª º C 8 Since e 8 < the orbit is an ellipse he ¼ equation r C / ( e cos +) produces r 33 when + 46 [M] he model that produces the correct least-squares fit is y XCÁ+ F, where X ª 378º ª 9º ª F º 4 98 F ª C º 44, y 3, C,and 3 C F F ¼ 473 F4 488 ¼ ¼ F 5 ¼ 856 One computes that (to two decimal places) ˆ ª º C 94, so the desired least-squares equation is ¼ p ln w When w, p 7 millimeters of mercury 3 [M] a he model that produces the correct least-squares fit is y XC + F, where ª º ª º 3 88 ª F º F F F ª C º F X C F, y 6, C,and F C 3 F C ¼ F F F F 3 F ¼ 89 3 ¼ ¼

365 66 Solutions 365 One computes that (to four decimal places) 3 ª 8558º ˆ 475 C, so the desired least-squares polynomial is ¼ yt ( ) t55554 t 74 t b he velocity v(t) is the derivative of the position function y(t), so vt ( ) t 8 t, and v(45) 53 ft/sec 4 Write the design matrix as > Since the residual vector F y X C ˆ is orthogonal to Col X, ˆ F ( y XC) y( X) C ˆ ª Cˆ º ( y ˆ ˆ ˆ ˆ } yn ) ª n x º ¼ ync C x ny nc ncx Cˆ ¼ his equation may be solved for y to find y C ˆ ˆ Cx 5 From equation () on page 4, ª x º ª } n x X X º ª º x x } n ¼ x ( x) x ¼ n ¼ ª y º ª } y X º ª º y x x } n ¼ xy y ¼ n ¼ he equations (7) in the text follow immediately from the normal equations X XC X y 6 he determinant of the coefficient matrix of the equations in (7) is n x ( x) formula for the inverse of the coefficient matrix, ª Cˆ º ª x xº ª y º ˆ C n x ( x) x n xy ¼ ¼ ¼ Hence Cˆ ˆ, C ( x )( y) ( x)( xy) n xy ( x)( y) n x ( x) n x ( x) Using the u ( ) ˆ ˆ y x C nc, which provides a simple formula Note: A simple algebraic calculation shows that for C ˆ once C ˆ is known 7 a he mean of the data in Example is x 55, so the data in mean-deviation form are ( 35, ), ª 35º 5 ( 5, ), (5, 3), (5, 3), and the associated design matrix is X he columns of X are 5 5 ¼ orthogonal because the entries in the second column sum to

366 366 CHAPER 6 Orthogonality and Least Squares b he normal equations are X XC X y, or so the desired least-squares line is ª 4 º ª C º ª 9 º C 75 ¼ ¼ ¼ * One computes that y (9/ 4) (5/4) x (9/ 4) (5/4)( x 55) ˆ ª 9/4º C, 5/4 ¼ 8 Since X X ª x º ª } º ª n x º x x } x x ¼ n ¼ ( ) x ¼ n X X is a diagonal matrix when x 9 he residual vector F y X C ˆ is orthogonal to Col X, while ŷ X C ˆ is in Col X Since F and ŷ are thus orthogonal, apply the Pythagorean heorem to these vectors to obtain ˆ ˆ SS() y yˆ F yˆ F XC y XC SS(R) SS(E) Since C ˆ satisfies the normal equations, ˆ X XC X y, and ˆ C Cˆ Cˆ Cˆ Cˆ C ˆ X ( X ) ( X ) X X X y Since ˆ X C SS(R) and y y y SS(), Exercise 9 shows that SS(E) SS() SS(R) y y C ˆ X y 67 SOLUIONS Notes: he three types of inner products described here (in Examples,, and 7) are matched by examples in Section 68 It is possible to spend just one day on selected portions of both sections Example matches the weighted least squares in Section 68 Examples 6 are applied to trend analysis in Seciton 68 his material is aimed at students who have not had much calculus or who intend to take more than one course in statistics For students who have seen some calculus, Example 7 is needed to develop the Fourier series in Section 68 Example 8 is used to motivate the inner product on C[a, b] he Cauchy-Schwarz and triangle inequalities are not used here, but they should be part of the training of every mathematics student he inner product is x, y² 4xy 5xy Let x (, ), y (5, ) a Since x x, x² 9, x 3 Since y y, y² 5, x 5 Finally, xy, ² 5 5 b A vector z is orthogonal to y if and only if x, y², that is, z 5z, or 4 z z hus all ª º multiples of 4 are orthogonal to y ¼ he inner product is xy, ² 4xy 5 xy Let x (3, ), y (, ) Compute that x xx, ² 56, y y, y², x y 56 76, x, y² 34, and x, y² 56 hus xy, ² d x y, as the Cauchy-Schwarz inequality predicts 3 he inner product is p, q² p( )q( ) + p()q() + p()q(), so 4 t,5 4t ² 3() 4(5) 5() 8

367 67 Solutions he inner product is p, q² p( )q( ) + p()q() + p()q(), so 3 tt,3 t ² ( 4)(5) (3) (5) 5 he inner product is p, q² p( )q( ) + p()q() + p()q(), so pq, 4 + t,4+ t and p p, p² 5 5 Likewise qq, ² 54 t,5 4t ² 5 7 and q q, q² he inner product is p, q² p( )q( ) + p()q() + p()q(), so p, p² 3 tt,3t t ² ( 4) and p p, p² 5 Likewise and q q, q² 59 7 he orthogonal projection ˆq of q onto the subspace spanned by p is qp, ² qˆ p (4 t) t pp, ² he orthogonal projection ˆq of q onto the subspace spanned by p is qp, ² 3 qˆ p (3 t t ) t t pp, ² qq, ² 3 t,3 t ² 9 he inner product is p, q² p( 3)q( 3) + p( )q( ) + p()q() + p(3)q(3) a he orthogonal projection ˆp of p onto the subspace spanned by p and p is p, p² p, p² pˆ p p () t 5 p, p² p, p² 4 b he vector q p3 pˆ t will be orthogonal to both p and p and { p, p, q } will be an orthogonal basis for Span{ p, p, p } he vector of values for q at ( 3,,, 3) is (4, 4, 4, 4), so scaling by /4 yields the new vector q (/ 4)( t 5) he best approximation to 3 p t by vectors in W Span{ p, p, q } will be pp, ² p, p² p, q² 64 t 5 4 pˆ proj W p p p q () ( t) t p, p² p, p² q, q² ¹ 5 3 he orthogonal projection of p t onto W Span{ p, p, p } will be pp, ² pp, ² pp, ² 34 7 pˆ proj W p p p p () ( t) ( t ) t p, p ² p, p ² p, p ² Let W Span{ p, p, p } he vector p3 p proj W p t (7 / 5) t will make { p, p, p, p 3} an orthogonal basis for the subspace 3 of 4 he vector of values for p 3 at (,,,, ) is 3 ( 6/5, /5,, /5, 6/5), so scaling by 5/6 yields the new vector p3 (5/ 6)( t (7 / 5) t) 3 (5/6) t (7/6) t 3

368 368 CHAPER 6 Orthogonality and Least Squares 3 Suppose that A is invertible and that u, v² (Au) (Av) for u and v in n Check each axiom in the definition on page 48, using the properties of the dot product i u, v² (Au) (Av) (Av) (Au) v, u² ii u + v, w² (A(u + v)) (Aw) (Au + Av) (Aw) (Au) (Aw) + (Av) (Aw) u, w²+ v, w² iii c u, v² (A( cu)) (Av) (c(au)) (Av) c((au) (Av)) c u, v² iv cuu, ² ( Au) ( Au) Au t, and this quantity is zero if and only if the vector Au is But Au if and only u because A is invertible 4 Suppose that is a one-to-one linear transformation from a vector space V into n and that u, v² (u) (v) for u and v in n Check each axiom in the definition on page 48, using the properties of the dot product and he linearity of is used often in the following i u, v² (u) (v) (v) (u) v, u² ii u+ v, w² (u + v) (w) ((u) + (v)) (w) (u) (w) + (v) (w) u, Z²+ v, w² iii cu, v² (cu) (v) (c(u)) (v) c((u) (v)) c u, v² iv uu, ² ( u) ( u) ( u ) t, and this quantity is zero if and only if u since is a one-toone transformation 5 Using Axioms and 3, u, c v² c v, u² c v, u² c u, v² 6 Using Axioms, and 3, u v uv, u v² u, u v² v, u v ² uu, ² uv, ² vu, ² vv, ² uu, ² uv, ² vv, ² u u, v² v Since {u, v} is orthonormal, u v and u, v² So u v 7 Following the method in Exercise 6, u v uv, u v² u, u v² v, u v ² uu, ² uv, ² vu, ² vv, ² uu, ² uv, ² vv, ² u u, v² v Subtracting these results, one finds that uv u v 4 u, v ², and dividing by 4 gives the desired identity 8 In Exercises 6 and 7, it has been shown that u v u u, v² v and u v u u, v² v Adding these two results gives uv u v u v 9 let u ª a º ª b º and v hen u ab, v b ¼ a ¼ ab, and uv, ² ab Since a and b are nonnegative, u ab, v ab Plugging these values into the Cauchy-Schwarz inequality gives ab uv, ² d u v ab a b ab Dividing both sides of this equation by gives the desired inequality

369 67 Solutions 369 he Cauchy-Schwarz inequality may be altered by dividing both sides of the inequality by and then squaring both sides of the inequality he result is uv, ² u v d ¹ 4 a Now let u ª º b and ªº v hen u a b, v, and u, v² a + b Plugging these values ¼ ¼ into the inequality above yields the desired inequality he inner product is f, g² ³ f( t) g( t) dt Let ³ ³ f() t 3 t, f, g² (3 t )( t t ) dt 3t 4t t dt he inner product is f, g² ³ f( t) g( t) dt Let f (t) 5t 3, ³ ³ f, g² (5t3)( t t ) dt 5t 8t 3t dt gt () t t hen 3 3 gt () t t hen 3 he inner product is f, g² ³ f( t) g( t) dt, so f f, f² / 5 4 he inner product is f, g² ³ f( t) g( t) dt, so g g, g² / 5 4 ³ ³ and f, f² ( 3 t ) dt 9t 6t dt 4/5, ³ ³ and gg, ² ( t t) dt t t tdt /5, 5 he inner product is f, g² ³ f( t) g( t) dt hen and t are orthogonal because, t² ³ t dt So and t can be in an orthogonal basis for Span{, tt, } By the Gram-Schmidt process, the third basis element in the orthogonal basis can be,, t t ² t t² t, ² tt, ² 3 Since t,² ³ t dt / 3,,² dt, ³ and t, t² t dt, ³ the third basis element can be written as t (/ 3) his element can be scaled by 3, which gives the orthogonal basis as {, t, 3t } 6 he inner product is f, g² ³ f( t) g( t) dt hen and t are orthogonal because, t² ³ t dt So and t can be in an orthogonal basis for Span{, tt, } By the Gram-Schmidt process, the third basis element in the orthogonal basis can be,, t t ² t t² t, ² tt, ² 3 Since t,² ³ t dt 6/3,, ² dt 4, ³ and t, t² t dt, ³ the third basis element can be written as t (4/3) his element can be scaled by 3, which gives the orthogonal basis as {, t, 3t 4}

370 37 CHAPER 6 Orthogonality and Least Squares 7 [M] he new orthogonal polynomials are multiples of 7t 5t and 7 55t 35 t hese polynomials may be scaled so that their values at,,,, and are small integers [M] he orthogonal basis is f () t, f () t cos t, f ( t ) cos t (/ ) (/ )cos t, and f 3 ( t ) cos t (3/ 4)cos t (/ 4)cos 3 t 3 68 SOLUIONS Notes: he connections between this section and Section 67 are described in the notes for that section For my junior-senior class, I spend three days on the following topics: heorems 3 and 5 in Section 65, plus Examples, 3, and 5; Example in Section 66; Examples and 3 in Section 67, with the motivation for the definite integral; and Fourier series in Section 68 he weighting matrix W, design matrix X, parameter vector C, and observation vector y are: ª º ª º ª º ª C º W, X, C, y C ¼ 4 ¼ ¼ 4 ¼ he design matrix X and the observation vector y are scaled by W: ª º ª º WX, Wy 4 8 ¼ 4 ¼ Further compute 4 8 ( WX ) WX ª º,( WX ) W ª º 6 y 4 ¼ ¼ and find that ˆ /4 8 (( WX ) WX ) ( WX ) W ª ºª º ª º C y /6 4 3/ ¼ ¼ ¼ hus the weighted least-squares line is y + (3/)x Let X be the original design matrix, and let y be the original observation vector Let W be the weighting matrix for the first method hen W is the weighting matrix for the second method he weighted leastsquares by the first method is equivalent to the ordinary least-squares for an equation whose normal equation is ( ) ˆ WX WX C ( WX ) Wy () while the second method is equivalent to the ordinary least-squares for an equation whose normal equation is ( ) ( ) ˆ WX W X C ( WX ) ( W ) y () Since equation () can be written as 4( ) ˆ WX WX C 4( WX ) Wy, it has the same solutions as equation ()

371 68 Solutions 37 3 From Example and the statement of the problem, p () t, p () t t, p () t t, p 3 3 () t (5/6) t (7/6), t and g (3, 5, 5, 4, 3) he cubic trend function for g is the orthogonal projection ˆp of g onto the subspace spanned by p, p, p, and p 3 : g, p g, p g, p g, p pˆ ² ² ² p p p ² p p p ² p p ² p p ² p p ² 3 3,,, 3, () t t t t ¹ t t t t 5 t t t ¹ 3 6 his polynomial happens to fit the data exactly he inner product is p, q² p( 5)q( 5) + p( 3)q( 3) + p( )q( ) + p()q() + p(3)q(3) + p(5)q(5) a Begin with the basis {, tt, } for Since and t are orthogonal, let p () t and p () t t hen the Gram-Schmidt process gives t ² t t²,, 7 35 p () t t t t t, ² tt, ² 6 3 he vector of values for p is (4/3, 8/3, 3/3, 3/3, 8/3, 4/3), so scaling by 3/8 yields the new function p (3/8)( t (35/ 3)) (3/8) t (35/8) b he data vector is g (,, 4, 4, 6, 8) he quadratic trend function for g is the orthogonal projection ˆp of g onto the subspace spanned by p, p and p : g, p ² g, p ² g, p ² pˆ p p p () t t p, p ² p, p ² p, p ² ¹ t t t t ¹ 6 7 Q Q 5 he inner product is f, g² ³ f( t) g( t) dt Let m zn hen ³ ³ Q sin mt, sin nt² sin mt sin nt dt cos(( m n) t) cos(( m n) t) dt hus sin mt and sin nt are orthogonal Q Q 6 he inner product is f, g² ³ f( t) g( t) dt Let m and n be positive integers hen ³ ³ Q sin mt,cos nt² sin mt cos nt dt sin(( m n) t) sin(( m n) t) dt hus sinmt and cosnt are orthogonal

372 37 CHAPER 6 Orthogonality and Least Squares Q 7 he inner product is f, g² ³ f( t) g( t) dt Let k be a positive integer hen Q Q cos kt cos kt, cos kt² cos kt dt cos kt dt Q ³ ³ and Q Q sin kt sin kt,sin kt² sin kt dt cos kt dt Q ³ ³ 8 Let f(t) t he Fourier coefficients for f are: a Q Q () ³ f t dt t dt ³ Q Q and for k >, a Q f()cos t kt dt Q ( t )coskt dt k Q ³ ³ Q Q Q ³ ³ bk f()sin t kt dt ( t )sinkt dt Q Q k he third-order Fourier approximation to f is thus a bsin tbsin t b3sin 3t sin tsin t sin 3t Q 3 9 Let f(t) Q t he Fourier coefficients for f are: a Q Q () ³ f t dt ³ Q Q and for k >, Q t dt a Q Q f( t) cos kt dt ( Q t) cos kt dt k Q ³ ³ Q Q Q ³ ³ bk f()sin t kt dt ( Q t)sinkt dt Q Q k he third-order Fourier approximation to f is thus a bsin tbsin t b3sin 3t Q sin tsin t sin 3t 3 fordt Q Let f() t he Fourier coefficients for f are: for Q d t Q a f () t dt dt dt Q Q Q ³ ³ Q Q Q ³ Q and for k >, a Q f()cos t ktdt cosktdt Q cosktdt Q k ³ ³ ³ Q Q Q Q Q Q 4/( kq ) for k odd bk ³ f( t) sin ktdt sin ktdt sin ktdt ³ ³ Q Q Q Q for k even he third-order Fourier approximation to f is thus 4 4 bsin t b3sin 3t sin t sin 3t Q 3Q Q Q Q

11. The trigonometric identity cos 2t = 1 − 2 sin² t shows that

sin² t = 1/2 − (1/2)cos 2t.

The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to sin² t.

12. The trigonometric identity cos 3t = 4 cos³ t − 3 cos t shows that

cos³ t = (3/4)cos t + (1/4)cos 3t.

The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to cos³ t.

13. Let f and g be in C[0, 2π] and let m be a nonnegative integer. Then the linearity of the inner product shows that

⟨(f + g), cos mt⟩ = ⟨f, cos mt⟩ + ⟨g, cos mt⟩,  ⟨(f + g), sin mt⟩ = ⟨f, sin mt⟩ + ⟨g, sin mt⟩.

Dividing these identities respectively by ⟨cos mt, cos mt⟩ and ⟨sin mt, sin mt⟩ shows that the Fourier coefficients aₘ and bₘ for f + g are the sums of the corresponding Fourier coefficients of f and of g.

14. Note that g and h are both in the subspace H spanned by the trigonometric polynomials of order 2 or less. Since h is the second-order Fourier approximation to f, it is closer to f than any other function in the subspace H.
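Exercise 12 can also be checked directly: computing the Fourier coefficients of cos³ t numerically recovers exactly the coefficients 3/4 and 1/4 from the identity. A short Python/SciPy sketch (added here for illustration):

    import numpy as np
    from scipy.integrate import quad

    # Cosine Fourier coefficients of cos^3 t; sine coefficients vanish
    # by symmetry, so the identity IS the third-order approximation.
    f = lambda t: np.cos(t)**3
    a = [quad(lambda t: f(t)*np.cos(k*t), 0, 2*np.pi)[0] / np.pi for k in (1, 2, 3)]
    print(a)   # ~[0.75, 0.0, 0.25]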

15. [M] The weighting matrix W is the 13×13 diagonal matrix with diagonal entries 1, 1, 1, .9, .9, .8, .7, .6, .5, .4, .3, .2, .1. The design matrix X has rows [1 t t² t³] for t = 0, 1, …, 12, the parameter vector is β = [β₀; β₁; β₂; β₃], and the observation vector y holds the thirteen observed positions from Exercise 13 of Section 6.6. The design matrix X and the observation vector y are scaled by W. Further compute (WX)ᵀWX and (WX)ᵀWy, and find that

β̂ = ((WX)ᵀWX)⁻¹(WX)ᵀWy = [−.2685; 3.6095; 5.8576; −.0477].

Thus the weighted least-squares cubic is y = g(t) = −.2685 + 3.6095t + 5.8576t² − .0477t³. The velocity at t = 4.5 seconds is g′(4.5) = 53.4 ft/sec. This is about .7% faster than the estimate obtained in Exercise 13 of Section 6.6.

16. [M] Let f(t) = 1 for 0 ≤ t < π and f(t) = −1 for π ≤ t < 2π. The Fourier coefficients for f have already been found to be aₖ = 0 for all k ≥ 0 and bₖ = 4/(kπ) for k odd, bₖ = 0 for k even. Thus

f₄(t) = (4/π)sin t + (4/(3π))sin 3t and f₅(t) = (4/π)sin t + (4/(3π))sin 3t + (4/(5π))sin 5t.

Graphs of f₄ over the interval [0, 2π], of f₅ over [0, 2π], and of f₅ over [−2π, 2π] accompany the solution. [graphs not reproduced]
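The partial sums in Exercise 16 are easy to evaluate (and plot) numerically. A minimal Python/NumPy sketch, with arbitrary sample points (an added illustration, not part of the original solution):

    import numpy as np

    # Partial sums of the square-wave Fourier series in Exercise 16:
    # f_n(t) = sum over odd k <= n of (4/(k*pi)) * sin(k*t).
    def f_n(t, n):
        return sum(4/(k*np.pi) * np.sin(k*t) for k in range(1, n + 1, 2))

    t = np.linspace(0, 2*np.pi, 9)[1:-1]   # sample points in (0, 2*pi)
    print(np.round(f_n(t, 5), 3))
    # values near +1 on (0, pi), 0 at the jump t = pi, near -1 on (pi, 2*pi)

Feeding f_n(t, n) to a plotting library over a fine grid reproduces the graphs described above, including the characteristic overshoot near the jump.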

Chapter 6 SUPPLEMENTARY EXERCISES

1. a. False. The length of the zero vector is zero.

b. True. By the equation ‖cx‖ = |c|‖x‖ displayed in Section 6.1, with c = −1, ‖−x‖ = ‖(−1)x‖ = |−1|‖x‖ = ‖x‖.

c. True. This is the definition of distance.

d. False. This equation would be true only if r‖v‖ were replaced by |r|‖v‖.

e. False. Orthogonal nonzero vectors are linearly independent.

f. True. If x·u = 0 and x·v = 0, then x·(u − v) = x·u − x·v = 0.

g. True. This is the "only if" part of the Pythagorean Theorem in Section 6.1.

h. True. This is the "only if" part of the Pythagorean Theorem in Section 6.1, where v is replaced by −v, because ‖−v‖² is the same as ‖v‖².

i. False. The orthogonal projection of y onto u is a scalar multiple of u, not of y (except when y itself is already a multiple of u).

j. True. The orthogonal projection of any vector y onto W is always a vector in W.

k. True. This is a special case of the statement in the box following Example 6 in Section 6.1 (and proved in Exercise 30 of Section 6.1).

l. False. The zero vector is in both W and W⊥.

m. True. See Exercise 32 in Section 6.2. If vᵢ·vⱼ = 0, then (cᵢvᵢ)·(cⱼvⱼ) = cᵢcⱼ(vᵢ·vⱼ) = 0.

n. False. This statement is true only for a square matrix. See Theorem 10 in Section 6.3.

o. False. An orthogonal matrix is square and has orthonormal columns.
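Items (i) and (j) above, and item (q) below, are easy to see numerically. A small Python/NumPy sketch (added here; the vectors u and y are arbitrary choices for illustration):

    import numpy as np

    # The projection of y onto u is a multiple of u, and y splits into
    # orthogonal pieces y_hat and y - y_hat.
    u = np.array([2., 1., -2.])
    y = np.array([1., 4., 1.])
    y_hat = (y @ u) / (u @ u) * u      # projection of y onto the line through u

    print(np.cross(y_hat, u))          # zero vector: y_hat is a multiple of u
    print((y - y_hat) @ u)             # ~0: the residual is orthogonal to u
    print(np.allclose(y @ y, y_hat @ y_hat + (y - y_hat) @ (y - y_hat)))  # True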

p. True. See Exercises 27 and 28 in Section 6.2. If U has orthonormal columns, then UᵀU = I. If U is also square, then the Invertible Matrix Theorem shows that U is invertible and U⁻¹ = Uᵀ. In this case UUᵀ = I, which shows that the columns of Uᵀ are orthonormal; that is, the rows of U are orthonormal.

q. True. By the Orthogonal Decomposition Theorem, the vectors proj_W v and v − proj_W v are orthogonal, so the stated equality follows from the Pythagorean Theorem.

r. False. A least-squares solution is a vector x̂ (not Ax̂) such that Ax̂ is the closest point to b in Col A.

s. False. The equation x̂ = (AᵀA)⁻¹Aᵀb describes the solution of the normal equations, not the matrix form of the normal equations. Furthermore, this equation makes sense only when AᵀA is invertible.

2. If {v₁, v₂} is an orthonormal set and x = c₁v₁ + c₂v₂, then the vectors c₁v₁ and c₂v₂ are orthogonal (Exercise 32 in Section 6.2). By the Pythagorean Theorem and properties of the norm,

‖x‖² = ‖c₁v₁ + c₂v₂‖² = ‖c₁v₁‖² + ‖c₂v₂‖² = (|c₁|‖v₁‖)² + (|c₂|‖v₂‖)² = |c₁|² + |c₂|².

So the stated equality holds for p = 2. Now suppose that the equality holds for p = k, with k ≥ 2. Let {v₁, …, vₖ₊₁} be an orthonormal set, and consider x = c₁v₁ + … + cₖvₖ + cₖ₊₁vₖ₊₁ = uₖ + cₖ₊₁vₖ₊₁, where uₖ = c₁v₁ + … + cₖvₖ. Observe that uₖ and cₖ₊₁vₖ₊₁ are orthogonal because vⱼ·vₖ₊₁ = 0 for j = 1, …, k. By the Pythagorean Theorem and the assumption that the stated equality holds for k, and because ‖cₖ₊₁vₖ₊₁‖² = |cₖ₊₁|²‖vₖ₊₁‖² = |cₖ₊₁|²,

‖x‖² = ‖uₖ + cₖ₊₁vₖ₊₁‖² = ‖uₖ‖² + |cₖ₊₁|² = |c₁|² + … + |cₖ|² + |cₖ₊₁|².

Thus the truth of the equality for p = k implies its truth for p = k + 1. By the principle of induction, the equality is true for all integers p ≥ 2.

3. Given x and an orthonormal set {v₁, …, vₚ} in ℝⁿ, let x̂ be the orthogonal projection of x onto the subspace spanned by v₁, …, vₚ. By Theorem 10 in Section 6.3,

x̂ = (x·v₁)v₁ + … + (x·vₚ)vₚ.

By Exercise 2, ‖x̂‖² = |x·v₁|² + … + |x·vₚ|². Bessel's inequality follows from the fact that ‖x̂‖² ≤ ‖x‖², which is noted before the proof of the Cauchy-Schwarz inequality in Section 6.7.

4. By parts (a) and (c) of Theorem 7 in Section 6.2, {Uv₁, …, Uvₙ} is an orthonormal set in ℝⁿ. Since there are n vectors in this linearly independent set, the set is a basis for ℝⁿ.

5. Suppose that (Ux)·(Uy) = x·y for all x, y in ℝⁿ, and let e₁, …, eₙ be the standard basis for ℝⁿ. For j = 1, …, n, Ueⱼ is the jth column of U. Since ‖Ueⱼ‖² = (Ueⱼ)·(Ueⱼ) = eⱼ·eⱼ = 1, the columns of U are unit vectors; since (Ueⱼ)·(Ueₖ) = eⱼ·eₖ = 0 for j ≠ k, the columns are pairwise orthogonal.

6. If Ux = λx for some x ≠ 0, then by Theorem 7(a) in Section 6.2 and by a property of the norm, ‖x‖ = ‖Ux‖ = ‖λx‖ = |λ|‖x‖, which shows that |λ| = 1, because x ≠ 0.
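Exercises 2 and 3 can be illustrated numerically. The sketch below (Python/NumPy, added here; v₁, v₂, and x are arbitrary choices) checks the coefficient identity and Bessel's inequality for an orthonormal pair in ℝ³:

    import numpy as np

    # v1, v2 form an orthonormal set in R^3.
    v1 = np.array([1., 2., 2.]) / 3.0
    v2 = np.array([2., 1., -2.]) / 3.0
    x = np.array([1., -1., 4.])

    c = np.array([x @ v1, x @ v2])                   # coefficients x.v_j
    x_hat = c[0]*v1 + c[1]*v2                        # projection of x
    print(np.isclose(x_hat @ x_hat, np.sum(c**2)))   # True (Exercise 2)
    print(np.sum(c**2) <= x @ x)                     # True (Bessel's inequality)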

7. Let u be a unit vector, and let Q = I − 2uuᵀ. Since (uuᵀ)ᵀ = uuᵀ,

Qᵀ = (I − 2uuᵀ)ᵀ = I − 2(uuᵀ)ᵀ = I − 2uuᵀ = Q.

Then

QQᵀ = Q² = (I − 2uuᵀ)² = I − 4uuᵀ + 4(uuᵀ)(uuᵀ).

Since u is a unit vector, uᵀu = 1, so (uuᵀ)(uuᵀ) = u(uᵀu)uᵀ = uuᵀ, and

QQᵀ = I − 4uuᵀ + 4uuᵀ = I.

Thus Q is an orthogonal matrix.

8. a. Suppose that x·y = 0. By the Pythagorean Theorem, ‖x‖² + ‖y‖² = ‖x + y‖². Since T preserves lengths and is linear,

‖T(x)‖² + ‖T(y)‖² = ‖T(x + y)‖² = ‖T(x) + T(y)‖².

This equation shows that T(x) and T(y) are orthogonal, because of the Pythagorean Theorem. Thus T preserves orthogonality.

b. The standard matrix of T is [T(e₁) … T(eₙ)], where e₁, …, eₙ are the columns of the identity matrix. Then {T(e₁), …, T(eₙ)} is an orthonormal set because T preserves both orthogonality and lengths (and because the columns of the identity matrix form an orthonormal set). Finally, a square matrix with orthonormal columns is an orthogonal matrix, as was observed in Section 6.2.

9. Let W = Span{u, v}. Given z in ℝⁿ, let ẑ = proj_W z. Then ẑ is in Col A, where A = [u v]. Thus there is a vector, say x̂, in ℝ², with Ax̂ = ẑ. So x̂ is a least-squares solution of Ax = z. The normal equations may be solved to find x̂, and then ẑ may be found by computing Ax̂.

10. Use Theorem 14 in Section 6.5. If c ≠ 0, the least-squares solution of Ax = cb is given by (AᵀA)⁻¹Aᵀ(cb), which equals c(AᵀA)⁻¹Aᵀb by linearity of matrix multiplication. This solution is c times the least-squares solution of Ax = b.

11. Let x = [x; y; z], b = [a; b; c], v = [1; −2; 5], and let A be the 3×3 matrix whose three rows all equal vᵀ. Then the given set of equations is Ax = b, and the set of all least-squares solutions coincides with the set of solutions of the normal equations AᵀAx = Aᵀb. The column-row expansions of AᵀA and Aᵀb give

AᵀA = vvᵀ + vvᵀ + vvᵀ = 3vvᵀ,  Aᵀb = av + bv + cv = (a + b + c)v.

Thus AᵀAx = 3(vvᵀ)x = 3v(vᵀx) = 3(vᵀx)v since vᵀx is a scalar, and the normal equations have become 3(vᵀx)v = (a + b + c)v, so 3(vᵀx) = a + b + c, or vᵀx = (a + b + c)/3. Computing vᵀx gives the equation

x − 2y + 5z = (a + b + c)/3,

which must be satisfied by all least-squares solutions to Ax = b.

12. The equation (1) in the exercise has been written as Vλ = b, where V is a single nonzero column vector v, and b = Av. The least-squares solution λ̂ of Vλ = b is the exact solution of the normal equations VᵀVλ = Vᵀb. In the original notation, this equation is vᵀvλ = vᵀAv. Since vᵀv is nonzero, the least-squares solution λ̂ is vᵀAv/(vᵀv). This expression is the Rayleigh quotient discussed in the Exercises for Section 5.8.

13. a. The row-column calculation of Au shows that each row of A is orthogonal to every u in Nul A. So each row of A is in (Nul A)⊥. Since (Nul A)⊥ is a subspace, it must contain all linear combinations of the rows of A; hence (Nul A)⊥ contains Row A.

b. If rank A = r, then dim Nul A = n − r by the Rank Theorem. By Exercise 24(c) in Section 6.3, dim Nul A + dim(Nul A)⊥ = n, so dim(Nul A)⊥ must be r. But Row A is an r-dimensional subspace of (Nul A)⊥ by the Rank Theorem and part (a). Therefore, Row A = (Nul A)⊥.

c. Replace A by Aᵀ in part (b) and conclude that Row Aᵀ = (Nul Aᵀ)⊥. Since Row Aᵀ = Col A, this shows that Col A = (Nul Aᵀ)⊥.
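The matrix Q = I − 2uuᵀ of Exercise 7 is a Householder reflection, and its properties check out numerically. A short Python/NumPy sketch (added illustration; the unit vector u is an arbitrary choice):

    import numpy as np

    # Q = I - 2 u u^T is symmetric and orthogonal when u is a unit vector.
    u = np.array([1., 2., 2.]) / 3.0              # a unit vector
    Q = np.eye(3) - 2.0 * np.outer(u, u)

    print(np.allclose(Q, Q.T))                    # True: Q^T = Q
    print(np.allclose(Q @ Q.T, np.eye(3)))        # True: Q Q^T = I
    print(np.allclose(Q @ u, -u))                 # Q reflects u to -u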

14. The equation Ax = b has a solution if and only if b is in Col A. By Exercise 13(c), Ax = b has a solution if and only if b is orthogonal to Nul Aᵀ. This happens if and only if b is orthogonal to all solutions of Aᵀx = 0.

15. If A = URUᵀ with U orthogonal, then A is similar to R (because U is invertible and U⁻¹ = Uᵀ), so A has the same eigenvalues as R by Theorem 4 in Section 5.2. Since the eigenvalues of R are its n real diagonal entries, A has n real eigenvalues.

16. a. If U = [u₁ u₂ … uₙ], then AU = [Au₁ Au₂ … Auₙ] = [λ₁u₁ Au₂ … Auₙ]. Since u₁ is a unit vector and u₂, …, uₙ are orthogonal to u₁, the first column of UᵀAU is Uᵀ(λ₁u₁) = λ₁Uᵀu₁ = λ₁e₁.

b. From (a),

UᵀAU = [λ₁ *; 0 A₁].

View UᵀAU as a 2×2 block upper triangular matrix, with A₁ as the (2,2)-block. Then from a Supplementary Exercise in Chapter 5,

det(UᵀAU − λIₙ) = det((λ₁ − λ)I₁)·det(A₁ − λIₙ₋₁) = (λ₁ − λ)·det(A₁ − λIₙ₋₁).

This shows that the eigenvalues of UᵀAU, namely λ₁, …, λₙ, consist of λ₁ and the eigenvalues of A₁. So the eigenvalues of A₁ are λ₂, …, λₙ.

17. [M] Compute that ‖Δx‖/‖x‖ = .4618 and cond(A)×(‖Δb‖/‖b‖) = 3363×(1.548×10⁻⁴) = .5206. In this case, ‖Δx‖/‖x‖ is almost the same as cond(A)×‖Δb‖/‖b‖.

18. [M] Compute that ‖Δx‖/‖x‖ = .00212 and cond(A)×(‖Δb‖/‖b‖) = 3363×(.00212) = 7.130. In this case, ‖Δx‖/‖x‖ is almost the same as ‖Δb‖/‖b‖, even though the large condition number suggests that ‖Δx‖/‖x‖ could be much larger.

19. [M] Compute that ‖Δx‖/‖x‖ = 7.178×10⁻⁸ and cond(A)×(‖Δb‖/‖b‖) = 23683×(2.832×10⁻⁴) = 6.707. Observe that the relative change in x is much smaller than the relative change in b. In fact the theoretical bound on the relative change in x is 6.707 (to four significant figures). This exercise shows that even when a condition number is large, the relative error in the solution need not be as large as you suspect.

20. [M] Compute that ‖Δx‖/‖x‖ = .2597 and cond(A)×(‖Δb‖/‖b‖) = 23683×(1.097×10⁻⁵) = .2598. This calculation shows that the relative change in x, for this particular b and Δb, should not exceed .2598. In this case, the theoretical maximum change is almost achieved.
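Exercises 17-20 all test the bound ‖Δx‖/‖x‖ ≤ cond(A)·‖Δb‖/‖b‖ for Ax = b and A(x + Δx) = b + Δb. The experiment is easy to rerun in Python/NumPy; the sketch below uses a random system, since the particular matrices from the exercises are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    x = rng.standard_normal(4)
    b = A @ x
    db = 1e-6 * rng.standard_normal(4)      # small perturbation of b
    dx = np.linalg.solve(A, b + db) - x     # resulting change in the solution

    lhs = np.linalg.norm(dx) / np.linalg.norm(x)
    rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
    print(lhs, rhs, lhs <= rhs)   # the bound holds, often with room to spare

As Exercises 18 and 19 show, the bound is frequently far from tight; it is attained (nearly) only for specially aligned b and Δb, as in Exercises 17 and 20.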

7.1 SOLUTIONS

Notes: Students can profit by reviewing Section 5.3 (focusing on the Diagonalization Theorem) before working on this section. Theorems 1 and 2 and the calculations in Examples 2 and 3 are important for the sections that follow. Note that "symmetric matrix" means "real symmetric matrix," because all matrices in the text have real entries, as mentioned at the beginning of this chapter. The exercises in this section have been constructed so that mastery of the Gram-Schmidt process is not needed.

Theorem 2 is easily proved for the 2×2 case: If

A = [a b; b d],

then

λ = (1/2)[(a + d) ± √((a − d)² + 4b²)].

If b = 0 there is nothing to prove. Otherwise, there are two distinct eigenvalues, so A must be diagonalizable. In each case, an eigenvector for λ is [λ − d; b].

1. Since Aᵀ = A, the matrix is symmetric.

2. Since Aᵀ ≠ A, the matrix is not symmetric.

3. Since Aᵀ ≠ A, the matrix is not symmetric.

4. Since Aᵀ = A, the matrix is symmetric.

5. Since Aᵀ ≠ A, the matrix is not symmetric.

6. Since A is not a square matrix, Aᵀ ≠ A and the matrix is not symmetric.
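The 2×2 formula above is straightforward to verify numerically. A Python/NumPy sketch (added illustration; the entries a, b, d are arbitrary choices):

    import numpy as np

    # Eigenvalues of the symmetric matrix A = [[a, b], [b, d]] are
    # (1/2)*((a + d) +/- sqrt((a - d)**2 + 4*b**2)), with eigenvector
    # [lam - d, b] for each eigenvalue lam.
    a, b, d = 2.0, 3.0, -1.0
    A = np.array([[a, b], [b, d]])

    disc = np.sqrt((a - d)**2 + 4*b**2)
    for lam in ((a + d + disc) / 2, (a + d - disc) / 2):
        v = np.array([lam - d, b])
        print(np.allclose(A @ v, lam * v))   # True, True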
