Elementary Linear Algebra


1 Instructor's Manual with Sample Tests for Elementary Linear Algebra Fourth Edition Stephen Andrilli and David Hecker

2 Dedication To all the instructors who have used the various editions of our book over the years

3 Table of Contents
Preface
Answers to Exercises
Chapter 1 Answer Key
Chapter 2 Answer Key
Chapter 3 Answer Key
Chapter 4 Answer Key
Chapter 5 Answer Key
Chapter 6 Answer Key
Chapter 7 Answer Key
Chapter 8 Answer Key
Chapter 9 Answer Key
Appendix B Answer Key
Appendix C Answer Key
Chapter Tests
Chapter 1 Chapter Tests
Chapter 2 Chapter Tests
Chapter 3 Chapter Tests
Chapter 4 Chapter Tests
Chapter 5 Chapter Tests
Chapter 6 Chapter Tests
Chapter 7 Chapter Tests
Answers to Chapter Tests
Chapter 1 Answers for Chapter Tests
Chapter 2 Answers for Chapter Tests
Chapter 3 Answers for Chapter Tests
Chapter 4 Answers for Chapter Tests
Chapter 5 Answers for Chapter Tests
Chapter 6 Answers for Chapter Tests
Chapter 7 Answers for Chapter Tests

4 Preface This Instructor's Manual with Sample Tests is designed to accompany Elementary Linear Algebra, 4th edition, by Stephen Andrilli and David Hecker. This manual contains answers for all the computational exercises in the textbook and detailed solutions for virtually all of the problems that ask students for proofs. The exceptions are typically those exercises that ask students to verify a particular computation, or that ask for a proof for which detailed hints have already been supplied in the textbook. A few proofs that are extremely trivial in nature have also been omitted. This manual also contains sample Chapter Tests for the material in Chapters 1 through 7, as well as answer keys for these tests. Additional information regarding the textbook, this manual, the Student Solutions Manual, and linear algebra in general can be found at the web site for the textbook, where you found this manual. Thank you for using our textbook. Stephen Andrilli David Hecker August 2009

5 Andrilli/Hecker - Answers to Exercises Section. Answers to Exercises Chapter Section. () (a) [9; ], distance = p 9 (b) [ ; ; ], distance = p 0 () (a) (; ; ) (see Figure ) (b) (0; ; ) (see Figure ) (c) [ ; ; ; ; ], distance = p (c) (; ; 0) (see Figure ) (d) (; 0; 0) (see Figure ) () (a) (; ) (b) ( ; ; ) (c) ( ; ; ; ; ) () (a) ; ; 8 (b) ; 0; ; h () (a) p 0 ; p 0 ; p 0 i; shorter, since length of original vector is > h (b) p ; p ; 0; p i; shorter, since length of original vector is > (c) [0:; h (d) 0:8]; neither, since given vector is a unit vector p ; p ; p ; p ; = p < p i; longer, since length of original vector () (a) Parallel (b) Parallel (c) Not parallel (d) Not parallel () (a) [ ; ; ] (b) [; 0; ] (c) [ ; ; 8] (d) [ ; ; ] (e) [; 0; ] (f) [ ; ; ] (8) (a) x + y = [; ], x y = [ ; 9], y x = [; 9] (see Figure ) (b) x + y = [; ], x y = [; ], y x = [ ; ] (see Figure ) (c) x+y = [; 8; ], x y = [; ; ], y x = [ ; ; ] (see Figure ) (d) x+y = [ ; ; ], x y = [; 0; ], y x = [ ; 0; ] (see Figure 8) (9) With A = (; ; ), B = (; ; ), and C = (0; ; 8), the length of side AB = length of side AC = p 9. The triangle is isosceles, but not equilateral, since the length of side BC is p 0.


8 Andrilli/Hecker - Answers to Exercises Section. (0) (a) [0; 0] (b) [ p ; ] (c) [0; 0] = 0 () See Figures. and.8 in Section. of the textbook. () See Figure 9. Both represent the same diagonal vector by the associative law of addition for vectors. Figure 9: x + (y + z) () [0: 0: p ; 0: p ] [ 0:8; 0:] () (a) If x = [a ; a ] is a unit vector, then cos = a kxk = a = a. Similarly, cos = a. (b) Analogous to part (a). () Net velocity = [ p ; + p ], resultant speed.8 km/hr h i () Net velocity = ; speed.8 km/hr p ; 8 p () [ 8 p ; p ] (8) Acceleration = 0 [ ; (9) Acceleration = ; 0; ; 9 (0) 80 p [ ; ; ] [ 9:; :; 8:] () a = [ mg + p ; mg + p ]; b = [ mg + p ; mg p + p ] ] [0:0; 0:; 0:0] () Let a = [a ; : : : ; a n ]. Then, kak = a + +a n is a sum of squares, which must be nonnegative. The square root of a nonnegative real number is a nonnegative real number. The sum can only be zero if every a i = 0.

9 Andrilli/Hecker - Answers to Exercises Section. () Let x = [x ; x ; : : : ; x n ]. Then kcxk = p (cx ) + + (cx n ) = p c (x + + x n) = jcj p x + + x n = jcj kxk: () In each part, suppose that x = [x ; : : : ; x n ], y = [y ; : : : ; y n ], and z = [z ; : : : ; z n ]. (a) x + (y + z) = [x ; : : : ; x n ] + [(y + z ); : : : ; (y n + z n )] = [(x + (y + z )); : : : ; (x n + (y n + z n ))] = [((x + y ) + z ); : : : ; ((x n + y n ) + z n )] = [(x + y ); : : : ; (x n + y n )] + [z ; : : : ; z n ] = (x + y) + z (b) x + ( x) = [x ; : : : ; x n ] + [ x ; : : : ; x n ] = [(x + ( x )); : : : ; (x n + ( x n ))] = [0; : : : ; 0]. Also, ( x) + x = x + ( x) (by part () of Theorem.) = 0, by the above. (c) c(x+y) = c([(x +y ); : : : ; (x n +y n )]) = [c(x +y ); : : : ; c(x n +y n )] = [(cx +cy ); : : : ; (cx n +cy n )] = [cx ; : : : ; cx n ]+[cy ; : : : ; cy n ] = cx+cy (d) (cd)x = [((cd)x ); : : : ; ((cd)x n )] = [(c(dx )); : : : ; (c(dx n ))] = c[(dx ); : : : ; (dx n )] = c(d[x ; : : : ; x n ]) = c(dx) () If c = 0, done. Otherwise, ( c )(cx) = c (0) ) ( c c)x = 0 (by part () of Theorem.) ) x = 0: Thus either c = 0 or x = 0. () Let x = [x ; x ; : : : ; x n ]. Then [0; 0; : : : ; 0] = c x c x = [(c c )x ; : : : ; (c c )x n ]. Since c c = 0, we have x = x = = x n = 0. () (a) F (b) T (c) T (d) F (e) T (f) F (g) F Section. () (a) arccos( p ), or about :, or. radians (b) arccos( p p ), or about :, or. radians (c) arccos(0), which is 90, or radians (d) arccos( ), which is 80, or radians (since x = y) () The vector from A to A is [; ; ], and the vector from A to A is [; ; ]. These vectors are orthogonal. () (a) [a; b] [ b; a] = a( b) + ba = 0: Similarly, [a; b] [b; a] = 0: (b) A vector in the direction of the line ax + by + c = 0 is [b; a vector in the direction of bx ay + d = 0 is [a; b]. a], while () (a) joules (b) 00p 9, or 8: joules (c) joules () Note that yz is a scalar, so x(yz) is not de ned.
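The norm facts established in the exercises above can also be checked numerically. The following is a minimal sketch, assuming NumPy is available; the vector and scalar are arbitrary illustrative values, not taken from any exercise.

import numpy as np

x = np.array([3.0, -4.0, 12.0])   # arbitrary sample vector
c = -2.5                          # arbitrary scalar

# ||x|| is the square root of a sum of squares, hence nonnegative
print(np.linalg.norm(x))                                               # 13.0

# ||c x|| = |c| ||x||
print(np.isclose(np.linalg.norm(c * x), abs(c) * np.linalg.norm(x)))   # True

# normalizing a nonzero vector produces a unit vector
u = x / np.linalg.norm(x)
print(np.isclose(np.linalg.norm(u), 1.0))                              # True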

10 Andrilli/Hecker - Answers to Exercises Section. () In all parts, let x = [x ; x ; : : : ; x n ] ; y = [y ; y ; : : : ; y n ] ; and z = [z ; z ; : : : ; z n ] : Part (): x y = [x ; x ; : : : ; x n ] [y ; y ; : : : ; y n ] = x y + + x n y n = y x + + y n x n = [y ; y ; : : : ; y n ] [x ; x ; : : : ; x n ] = y x Part (): x x = [x ; x ; : : : ; x n ] [x ; x ; : : : ; x n ] = x x + + x n x n = x + +x n. Now x + +x n is a sum of squares, each of which must be nonnegative. Hence, the sum is also nonnegative, and so its square root is de ned. Thus, 0 x x = x + + x n = p x + + x n = kxk. Part (): Suppose x x = 0. From part (), 0 = x x = x + + x n x i, for each i, since the sum of the remaining squares (without x i ), which is nonnegative. Hence, 0 x i for each i. But x i 0, because it is a square. Hence each x i = 0. Therefore, x = 0. Next, suppose x = 0. Then x x = [0; : : : ; 0] [0; : : : ; 0] = (0)(0) + + (0)(0) = 0. Part (): c(x y) = c ([x ; x ; : : : ; x n ] [y ; y ; : : : ; y n ]) = c (x y + + x n y n ) = cx y + + cx n y n = [cx ; cx ; : : : ; cx n ] [y ; y ; : : : ; y n ] = (cx) y. Next, c(x y) = c(y x) (by part ()) = (cy) x (by the above) = x (cy), by part (): Part (): (x + y) z = ([x ; x ; : : : ; x n ] + [y ; y ; : : : ; y n ]) [z ; z ; : : : ; z n ] = [x + y ; x + y ; : : : ; x n + y n ] [z ; z ; : : : ; z n ] = (x + y )z + (x + y )z + + (x n + y n )z n = (x z + x z + + x n z n ) + (y z + y z + + y n z n ): Also, (x z) + (y z) = ([x ; x ; : : : ; x n ] [z ; z ; : : : ; z n ]) + ([y ; y ; : : : ; y n ] [z ; z ; : : : ; z n ]) = (x z + x z + + x n z n ) + (y z + y z + + y n z n ). Hence, (x + y) z = (x z) + (y z). () No; consider x = [; 0], y = [0; ], and z = [; ]. (8) A method similar to the rst part of the proof of Theorem. in the textbook yields: ka bk 0 ) (a a) (b a) (a b) + (b b) 0 ) (a b) + 0 ) a b. (9) Note that (x + y)(x y) = (xx) + (yx) (xy) (yy) = kxk kyk. (0) Note that kx + yk = kxk + (xy) + kyk, while kx yk = kxk (xy) + kyk. () (a) This follows immediately from the solution to Exercise 0 above. (b) Note that kx + y + zk = k(x + y) + zk = kx + yk + ((x + y)z) + kzk = kxk + (xy) + kyk + (xz) + (yz) + kzk. (c) This follows immediately from the solution to Exercise 0 above.
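As a numerical companion to the dot product properties proved above, the sketch below spot-checks commutativity, the relation between the dot product and the norm, the scalar rule, and distributivity. It assumes NumPy; the test vectors are made up for illustration only.

import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])
z = np.array([2.0, 5.0, 2.0])
c = 3.0

print(np.isclose(x @ y, y @ x))                    # x . y = y . x
print(np.isclose(x @ x, np.linalg.norm(x) ** 2))   # x . x = ||x||^2
print(np.isclose(c * (x @ y), (c * x) @ y))        # c(x . y) = (cx) . y
print(np.isclose((x + y) @ z, x @ z + y @ z))      # (x + y) . z = (x . z) + (y . z)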

11 Andrilli/Hecker - Answers to Exercises Section. () Note that x(c y + c z) = c (xy) + c (xz). () cos = a p a +b +c, cos = () (a) Length = p s b p a +b +c, and cos = (b) angle = arccos( p ) :, or 0:9 radians () (a) [ ; 0 ; ] (c) [ (b) [ 90 ; 0; ; ] ; ; 0] [:9; :8; 0] c p a +b +c () (a) 0 (zero vector). The dropped perpendicular travels along b to the common initial point of a and b. (b) The vector b. The terminal point of b lies on the line through a, so the dropped perpendicular has length zero. () ai, bj, ck (8) (a) Parallel: [ 0 9 ; 0 9 ; 0 9 ], orthogonal: [ 9 9 ; 88 9 ; 9 ] (b) Parallel: [ ; ], orthogonal: [ ; ; ] (c) Parallel: [ 0 9 ; 0 9 ; 0 9 ], orthogonal: [ 9 ; 8 9 ; 9 ] (9) From the lower triangle in the gure, we have (proj r x) + (proj r x x) = re ection of x (see Figure 0). Figure 0: Re ection of a vector x through a line. (0) For the case kxk kyk: j kxk kyk j = kyk kxk = kx+y+( x)k kxk kx+yk+k xk kxk (by the Triangle Inequality) = kx+yk+kxk kxk = kx + yk. The case kxk kyk is done similarly, with the roles of x and y reversed. () (a) Let c = (xy)=(kxk ), and w = y proj x y. (In this way, cx = proj x y.)

12 Andrilli/Hecker - Answers to Exercises Section. (b) Suppose cx + w = dx + v. Then (d c)x = w v, and (d c)x(w v) = (w v)(w v) = kw vk. But (d c)x(w v) = (d c)(xw) (d c)(xv) = 0, since v and w are orthogonal to x. Hence kw vk = 0 =) w v = 0 =) w = v. Then, cx = dx =) c = d, from Exercise in Section., since x is nonzero. () If is the angle between x and y, and is the angle between proj x y and proj y x, then cos = xy yx kxk kyk (xy) = jxyj jyxj kxk kxk kyk kyk proj x y proj y x kproj x ykkproj y xk = (xy) kxk kyk (xy) kxkkyk xy kxk x yx kyk y xy kxk x yx = kyk y = (xy) kxkkyk = cos. Hence =. () (a) T (b) T (c) F (d) F (e) T (f) F Section. () (a) We have kx + yk kxk + kyk = kxk + kyk kxk + kyk = (kxk + kyk). (b) Let m = maxfjcj; jdjg. Then kcx dyk m(kxk + kyk). () (a) Note that j = ((j )) +. Let k = (j ). (b) Consider the number. () Note that since x = 0 and y = 0, proj x y = 0 i (xy)=(kxk ) = 0 i xy = 0 i yx = 0 i (yx)=(kyk ) = 0 i proj y x = 0. () If y = cx (for c > 0), then kx+yk = kx+cxk = (+c)kxk = kxk+ckxk = kxk + kcxk = kxk + kyk. On the other hand, if kx + yk = kxk + kyk, then kx + yk = (kxk + kyk). Now kx+yk = (x+y)(x+y) = kxk +(xy)+kyk, while (kxk+kyk) = kxk + kxkkyk + kyk. Hence xy = kxkkyk. By Result, y = cx, for some c > 0. () (a) Consider x = [; 0; 0] and y = [; ; 0]. (b) If x = y, then x y = kxk. (c) Yes () (a) Suppose y = 0. We must show that x is not orthogonal to y. Now kx + yk = kxk, so kxk + (xy) + kyk = kxk. Hence kyk = (xy). Since y = 0, we have kyk = 0, and so xy = 0. (b) Suppose x is not a unit vector. We must show that xy =. Now proj x y = x =) ((xy)=(kxk ))x = x =) (xy)=(kxk ) = =) xy = kxk. But then kxk = =) kxk = =) xy =. () Use Exercise (a) in Section.. 8
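The projection computations above all rest on the formula proj_x y = ((x . y)/||x||^2) x and on the decomposition y = proj_x y + w with w orthogonal to x. A minimal numerical sketch follows; it assumes NumPy, and the helper name proj and the sample vectors are ours, chosen only for illustration.

import numpy as np

def proj(x, y):
    # projection of y onto x: ((x . y) / ||x||^2) x, for nonzero x
    return (x @ y) / (x @ x) * x

x = np.array([2.0, 1.0, -2.0])
y = np.array([1.0, 4.0, 5.0])

p = proj(x, y)    # component of y parallel to x
w = y - p         # component of y orthogonal to x

print(np.isclose(x @ w, 0.0))   # w is orthogonal to x
print(np.allclose(p + w, y))    # y = proj_x y + w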

13 Andrilli/Hecker - Answers to Exercises Section. (8) (a) Contrapositive: If x = 0, then x is not a unit vector. Converse: If x is nonzero, then x is a unit vector. Inverse: If x is not a unit vector, then x = 0. (b) (Let x and y be nonzero vectors.) Contrapositive: If y = proj x y, then x is not parallel to y. Converse: If y = proj x y, then x k y. Inverse: If x is not parallel to y, then y = proj x y. (c) (Let x, y be nonzero vectors.) Contrapositive: If proj y x = 0, then proj x y = 0. Converse: If proj y x = 0, then proj x y = 0. Inverse: If proj x y = 0, then proj y x = 0. (9) (a) Converse: Let x and y be nonzero vectors in R n. If kx + yk > kyk, then xy 0. (b) Let x = [; ] and y = [0; ]. (0) (a) Converse: Let x, y, and z be vectors in R n. If y = z, then xy = xz. The converse is obviously true, but the original statement is false in general, with counterexample x = [; ], y = [; ], and z = [ ; ]. (b) Converse: Let x and y be vectors in R n. If kx + yk kyk, then x y = 0. The original statement is true, but the converse is false in general. Proof of the original statement follows from kx + yk = (x + y) (x + y) = kxk + (x y) + kyk = kxk + kyk kyk. Counterexample to converse: let x = [; 0], y = [; ]. (c) Converse: Let x, y be vectors in R n, with n >. If x = 0 or y = 0, then xy = 0. The converse is obviously true, but the original statement is false in general, with counterexample x = [; ] and y = [; ]. () Suppose x? y and n is odd. Then x y = 0. Now x y = P n i= x iy i. But each product x i y i equals either or. If exactly k of these products equal, then x y = k (n k) = n + k. Hence n + k = 0, and so n = k, contradicting n odd. () Suppose that the vectors [; a], [; b], and [z ; z ] as constructed in the hint are all mutually orthogonal. Then z + az = 0 = z + bz. Hence (a b)z = 0, so either a = b or z = 0. But if a = b, then [; a] and [; b] are not orthogonal. Alternatively, if z = 0, then z + az = 0 =) z = 0, so [z ; z ] = [0; 0], a contradiction. () Base Step (n = ): x = x. Inductive Step: Assume x +x + +x n +x n = x n +x n + +x +x, for some n : Prove: x + x + + x n + x n + x n+ = x n+ + x n + x n + + x + x : But x + x + + x n + x n + x n+ = (x + x + + x n + x n ) + x n+ = x n+ + (x + x + + x n + x n ) = x n+ + (x n + x n + + x + x ) (by the assumption) = x n+ + x n + x n + + x + x : 9
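The two induction results above (the generalized Triangle Inequality and the Pythagorean-style identity for mutually orthogonal vectors) are easy to verify numerically for a small example. A sketch assuming NumPy, with made-up mutually orthogonal vectors:

import numpy as np

# three mutually orthogonal vectors (scaled standard basis vectors)
vs = [np.array([3.0, 0.0, 0.0]),
      np.array([0.0, -4.0, 0.0]),
      np.array([0.0, 0.0, 12.0])]
s = sum(vs)

# ||x1 + ... + xm|| <= ||x1|| + ... + ||xm||
print(np.linalg.norm(s) <= sum(np.linalg.norm(v) for v in vs))   # True

# for mutually orthogonal vectors, ||x1 + ... + xk||^2 = ||x1||^2 + ... + ||xk||^2
print(np.isclose(np.linalg.norm(s) ** 2,
                 sum(np.linalg.norm(v) ** 2 for v in vs)))       # True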

14 Andrilli/Hecker - Answers to Exercises Section. () Base Step (m = ): kx k kx k. Inductive Step: Assume kx + + x m k kx k + + kx m k, for some m. Prove kx + +x m +x m+ k kx k+ +kx m k+kx m+ k. But, by the Triangle Inequality, k(x + +x m )+x m+ k kx + +x m k+kx m+ k kx k + + kx m k + kx m+ k. () Base Step (k = ): kx k = kx k. Inductive Step: Assume kx + + x k k = kx k + + kx k k. Prove kx + +x k +x k+ k = kx k + +kx k k +kx k+ k. Use the fact that k(x + +x k )+x k+ k = kx + +x k k +((x + +x k )x k+ )+kx k+ k = kx + +x k k +kx k+ k, since x k+ is orthogonal to all of x ; : : : ; x k. () Base Step (k = ): We must show (a x )y ja j kyk. But (a x )y j(a x )yj ka x k kyk (by the Cauchy-Schwarz Inequality) = ja j kx k kyk = ja j kyk, since x is a unit vector. Inductive Step: Assume (a x + + a k x k )y (ja j + + ja k j)kyk, for some k. Prove (a x + + a k x k + a k+ x k+ )y (ja j + + ja k j + ja k+ j)kyk. Use the fact that ((a x + + a k x k ) +a k+ x k+ )y = (a x + + a k x k )y + (a k+ x k+ )y and the Base Step. () Base Step (n = ): We must show that k[x ]k jx j. This is obvious since k[x ]k = p x = jx j. q Pk Inductive Step: Assume i= x i P q k i= jx Pk+ ij and prove i= x i P k+ i= jx ij. Let y = [x ; : : : ; x k ; x k+ ], z = [x ; : : : ; x k ; 0], and w = [0; : : : ; 0; x k+ ]. Note q that y = z+w. So, by the Triangle Inequality, kyk Pk+ q kzk+kwk. Thus, i= x i Pk q i= x i + x k+ P q k i= jx ij+ x k+ (by the inductive hypothesis) = P k i= jx ij + jx k+ j = P k+ i= jx ij. (8) Step cannot be reversed, because y could equal (x + ). Step cannot be reversed, because y could equal x + x + c. Step cannot be reversed, because in general y does not have to equal x +. Step cannot be reversed, since dy dx could equal x + c. All other steps remain true when reversed. (9) (a) For every unit vector x in R, x [; ; ] = 0. (b) x = 0 and xy 0, for some vectors x and y in R n. (c) x = 0 or kx + yk = kyk, for all vectors x and y in R n. (d) There is some vector x R n for which xx 0. (e) There is an x R such that for every nonzero y R, x y = 0. (f) For every x R, there is some y R such that xy = 0. 0

15 Andrilli/Hecker - Answers to Exercises Section. (0) (a) Contrapositive: If x = 0 and kx yk kyk, then x y = 0. Converse: If x = 0 or kx yk > kyk, then x y = 0. Inverse: If x y = 0, then x = 0 and kx yk kyk. (b) Contrapositive: If kx yk kyk, then either x = 0 or xy = 0. Converse: If kx yk > kyk, then x = 0 and xy = 0. Inverse: If x = 0 or xy = 0, then kx yk kyk. () Suppose x = 0. We must prove xy = 0 for some vector y R n. Let y = x. () The contrapositive is: If u and v are (nonzero) vectors that are not in opposite directions, then there is a vector x such that u x > 0 and v x > 0. If u and v are in the same direction, we can let x = u. So, we can assume that u and v are not parallel. Consider a vector x that bisects the angle between u and v. Geometrically, it is clear that such a vector makes an acute angle with both u and v, which nishes the proof. (Algebraically, a formula for such a vector would be x = u kuk + v kvk. Then, u x = u u kuk + v kvk = uu kuk + uv kvk kuk the Cauchy-Schwarz inequality) = kuk similar proof shows that v x > 0.) () Let x = [; ], y = [; ]. uv > kuk kvk kukkvk kvk (by kuk = 0. Hence u x > 0. A () Let y = [; ; ]. Then since xy 0, Result implies that kx + yk > kyk =. () (a) F (b) T (c) T (d) F (e) F (f) F (g) F (h) T (i) F Section. () (a) 9 0 (b) Impossible 8 (c) (d) (e) Impossible (f) (g) (h) Impossible (i) 8 8 (j) (k) (l) Impossible (m) Impossible (n) We used > rather than in the Cauchy-Schwarz inequality because equality occurs in the Cauchy-Schwarz inequality only when the vectors are parallel (see Result ).

16 Andrilli/Hecker - Answers to Exercises Section. () Square: B; C; E; F; G; H; J; K; L; M; N; P; Q Diagonal: B; G; N Upper triangular: B; G; L; N Lower triangular: B; G; M; N; Q Symmetric: B; F; G; J; N; P Skew-symmetric: H (but not E; C; K) Transposes: A T = on () (a) (b) (c) (d) , B T = B, C T = , and so () If A T = B T, then A T T = B T T, implying A = B, by part () of Theorem.. () (a) If A is an m n matrix, then A T is n m. If A = A T, then m = n. (b) If A is a diagonal matrix and if i = j, then a ij = 0 = a ji. (c) Follows directly from part (b), since I n is diagonal. (d) The matrix must be a square zero matrix; that is, O n, for some n. () (a) If i = j, then a ij + b ij = = 0. (b) Use the fact that a ij + b ij = a ji + b ji. () Use induction on n. Base Step (n = ): Obvious. Inductive Step: Assume A ; : : : ; A k+ and B = P k i= A i are upper triangular. Prove that D = P k+ i= A i is upper triangular. Let C = A k+. Then D = B + C. Hence, d ij = b ij + c ij = = 0 if i > j (by the inductive hypothesis). Hence D is upper triangular.

17 Andrilli/Hecker - Answers to Exercises Section. (8) (a) Let B = A T. Then b ij = a ji = a ij = b ji. Let D = ca. Then d ij = ca ij = ca ji = d ji. (b) Let B = A T. Then b ij = a ji = a ij = b ji. Let D = ca. Then d ij = ca ij = c( a ji ) = ca ji = d ji. (9) I n is de ned as the matrix whose (i; j) entry is ij. (0) Part (): Let B = A + ( A) (= ( A) + A, by part ()). Then b ij = a ij + ( a ij ) = 0. Part (): Let D = c(a+b) and let E = ca+cb. Then, d ij = c(a ij +b ij ) = ca ij + cb ij = e ij. Part (): Let B = (cd)a and let E = c(da). Then b ij = (cd)a ij = c(da ij ) = e ij. () Part (): Let B = A T and C = (A T ) T. Then c ij = b ji = a ij. Part (): Let B = c(a T ), D = ca and F = (ca) T. Then f ij = d ji = ca ji = b ij. () Assume c = 0. We must show A = O mn. But for all i; j, i m, j n, ca ij = 0 with c = 0. Hence, all a ij = 0. () (a) ( (A + AT )) T = (A + AT ) T = (AT + (A T ) T ) = (AT + A) = (A + AT ) (b) ( (A AT )) T = (A AT ) T = (AT (A T ) T ) = (AT A) = (A AT ) (c) (A + AT ) + (A AT ) = (A) = A (d) S V = S V (e) Follows immediately from part (d). (f) Follows immediately from parts (d) and (e). (g) Parts (a) through (c) show A can be decomposed as the sum of a symmetric matrix and a skew-symmetric matrix, while parts (d) through (f) show that the decomposition for A is unique. () (a) Trace (B) =, trace (C) = 0, trace (E) =, trace (F) =, trace (G) = 8, trace (H) = 0, trace (J) =, trace (K) =, trace (L) =, trace (M) = 0, trace (N) =, trace (P) = 0, trace (Q) = (b) Part (i): Let D = A + B. Then trace(d) = P n P i= d ii = n i= a ii + P n i= b ii = trace(a) + trace(b). Part (ii): Let B = ca. Then trace(b) = P n i= b ii = P n i= ca ii = c P n i= a ii = c(trace(a)). Part (iii): Let B = A T. Then trace(b) = P n i= b ii = P n i= a ii (since b ii = a ii for all i) = trace(a). (c) Not necessarily: consider the matrices L and N in Exercise. (Note: If n =, the statement is true.)
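The decomposition exercise above splits A into its symmetric part (1/2)(A + A^T) and skew-symmetric part (1/2)(A - A^T), and the trace exercise lists the additivity and transpose properties of the trace. The sketch below, assuming NumPy and using made-up matrices, checks both.

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [5.0, 3.0, -1.0],
              [4.0, 7.0, 6.0]])

S = (A + A.T) / 2   # symmetric part
V = (A - A.T) / 2   # skew-symmetric part

print(np.allclose(S, S.T))     # S is symmetric
print(np.allclose(V, -V.T))    # V is skew-symmetric
print(np.allclose(S + V, A))   # A = S + V

B = np.array([[2.0, 1.0, 1.0],
              [0.0, -3.0, 2.0],
              [1.0, 0.0, 4.0]])
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))   # trace is additive
print(np.isclose(np.trace(A.T), np.trace(A)))                   # trace(A^T) = trace(A)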

18 Andrilli/Hecker - Answers to Exercises Section. () (a) F (b) T (c) F (d) T (e) T Section. () (a) Impossible (e) [ 8] 8 (k) (b) 9 (f) 8 8 (c) Impossible (g) Impossible (l) (h) [; 8] 0 (d) (i) Impossible 9 (j) Impossible (m) [98] 0 (o) Impossible (n) () (a) No (b) Yes () (a) [; ; 8] (b) (c) No (d) Yes (c) [] (d) [; 8; ; ] (e) No () (a) Valid, by Theorem., part () (b) Invalid (c) Valid, by Theorem., part () (d) Valid, by Theorem., part () (e) Valid, by Theorem. (f) Invalid (g) Valid, by Theorem., part () (h) Valid, by Theorem., part () (i) Invalid (j) Valid, by Theorem., part (), and Theorem. () Outlet Outlet Outlet Outlet Salary $00 $8000 $000 $8000 $000 $000 $0000 $00 Fringe Bene ts () June July August Tickets Food Souvenirs $00 $0900 $900 $0000 $00 $800 $9800 $900 $9000

19 Andrilli/Hecker - Answers to Exercises Section. () Nitrogen Phosphate Potash Field Field Field :00 0: 0: 0:90 0: 0: (in tons) 0:9 0: 0:8 (8) Rocket Rocket Rocket Rocket Chip Chip Chip Chip (9) (a) One example: 0 (b) One example: (0) (a) Third row, fourth column entry of AB (b) Fourth row, rst column entry of AB (c) Third row, second column entry of BA (d) Second row, fth column entry of BA (c) Consider () (a) P n k= a kb k (b) P m k= b ka k () (a) [ ; ; ] (b) 8 () (a) [ ; ; 0; 9] () (a) Let B = Ai. Then B is m and b k = P m j= a kji j = (a k )() + (a k )(0) + (a k )(0) = a k. (b) Ae i = ith column of A. (c) By part (b), each column of A is easily seen to be the zero vector by letting x equal each of e ; : : : ; e n in turn. (b) () Proof of Part (): The (i; j) entry of A(B + C) = (ith row of A)(jth column of (B + C)) = (ith row of A)(jth column of B + jth column of C) = (ith row of A)(jth column of B) + (ith row of A)(jth column of C) = (i; j) entry of AB + (i; j) entry of AC = (i; j) entry of (AB + AC). The proof of Part () is similar. For the rst equation in part (), the

20 Andrilli/Hecker - Answers to Exercises Section. (i; j) entry of c(ab) = c ((ith row of A)(jth column of B)) = (c(ith row of A))(jth column of B) = (ith row of ca)(jth column of B) = (i; j) entry of (ca)b. The proof of the second equation is similar. () Let B = AO np. Clearly B is an m p matrix and b ij = P n k= a ik0 = 0. () Proof that AI n = A: Let B = AI n. Then B is clearly an m n matrix and b jl = P n k= a jki kl = a jl, since i kl = 0 unless k = l, in which case i kk =. The proof that I m A = A is similar. (8) (a) We need to show c ij = 0 if i = j. Now, if i = j, both factors in each term of the formula for c ij in the formal de nition of matrix multiplication are zero, except possibly for the terms a ii b ij and a ij b jj. But since the factors b ij and a ij also equal zero, all terms in the formula for c ij equal zero. (b) Assume i > j. Consider the term a ik b kj in the formula for c ij. If i > k, then a ik = 0. If i k, then j < k, so b kj = 0. Hence all terms in the formula for c ij equal zero. (c) Let L and L be lower triangular matrices. U = L T and U = L T are then upper triangular, and L L = U T U T = (U U ) T (by Theorem.). But by part (b), U U is upper triangular. So L L is lower triangular. (9) Base Step: Clearly, (ca) = c A. Inductive Step: Assume (ca) n = c n A n, and prove (ca) n+ = c n+ A n+. Now, (ca) n+ = (ca) n (ca) = (c n A n )(ca) (by the inductive hypothesis) = c n ca n A (by part () of Theorem.) = c n+ A n+. (0) Proof of Part (): Base Step: A s+0 = A s = A s I = A s A 0. Inductive Step: Assume A s+t = A s A t for some t 0. We must prove A s+(t+) = A s A t+. But A s+(t+) = A (s+t)+ = A s+t A = (A s A t )A (by the inductive hypothesis) = A s (A t A) = A s A t+. Proof of Part (): (We only need prove (A s ) t = A st.) Base Step: (A s ) 0 = I = A 0 = A s0. Inductive Step: Assume (A s ) t = A st for some integer t 0. We must prove (A s ) t+ = A s(t+). But (A s ) t+ = (A s ) t A s = A st A s (by the inductive hypothesis) = A st+s (by part ()) = A s(t+). () (a) If A is an m n matrix, and B is a p r matrix, then the fact that AB exists means n = p, and the fact that BA exists means m = r. Then AB is an m m matrix, while BA is an n n matrix. If AB = BA, then m = n. (b) Note that by the Distributive Law, (A+B) = A +AB+BA+B. () (AB)C = A(BC) = A(CB) = (AC)B = (CA)B = C(AB).
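The power rules proved by induction above, (cA)^n = c^n A^n and A^(s+t) = A^s A^t, can be spot-checked numerically. A sketch assuming NumPy, with an arbitrary 2 x 2 matrix:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
c, n = 2.0, 3
s, t = 2, 3

# (cA)^n = c^n A^n
print(np.allclose(np.linalg.matrix_power(c * A, n),
                  c ** n * np.linalg.matrix_power(A, n)))
# A^(s+t) = A^s A^t
print(np.allclose(np.linalg.matrix_power(A, s + t),
                  np.linalg.matrix_power(A, s) @ np.linalg.matrix_power(A, t)))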

21 Andrilli/Hecker - Answers to Exercises Section. () Use the fact that A T B T = (BA) T, while B T A T = (AB) T. () (AA T ) T = (A T ) T A T (by Theorem.) = AA T. (Similarly for A T A.) () (a) If A; B are both skew-symmetric, then (AB) T = B T A T = ( B)( A) = ( )( )BA = BA. The symmetric case is similar. (b) Use the fact that (AB) T = B T A T = BA, since A; B are symmetric. () (a) The (i; i) entry of AA T = (ith row of A)(ith column of A T ) = (ith row of A)(ith row of A) = sum of the squares of the entries in the ith row of A. Hence, trace(aa T ) is the sum of the squares of the entries from all rows of A. (c) Trace(AB) = P n i= (P n k= a ikb ki ) = P n P n P i= k= b kia ik = n P n k= i= b kia ik. Reversing the roles of the dummy variables k and i gives P n P n i= k= b ika ki, which is equal to trace(ba). 0 () (a) Consider any matrix of the form. x 0 (c) (I n A) = I n I n A AI n + A = I n A A + A = I n A. (d) 0 0 (e) A = (A)A = (AB)A = A(BA) = AB = A. (8) (a) (ith row of A) (jth column of B) = (i; j)th entry of O mp = 0. (b) Consider A = and B = 0. 0 (c) Let C = (9) See Exercise 0(c) (0) Throughout this solution, we use the fact that Ae i = (ith column of A). (See Exercise (b).) Similarly, if e i is thought of as a row matrix, e i A = (ith row of A). (a) (jth column of A ij) = A(jth column of ij) = Ae i = (ith column of A). And, if k = j, (kth column of A ij) = A(kth column of ij) = A0 = 0. (b) (ith row of ija) = (ith row of ij)a = e j A = (jth row of A). And, if k = i, (kth row of ija) = (kth row of ij)a = 0A = 0.
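Several of the identities used above, namely (AB)^T = B^T A^T, the symmetry of A A^T, and trace(AB) = trace(BA), are convenient to confirm on small rectangular matrices. A minimal sketch assuming NumPy, with made-up entries:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [0.0, -1.0]])          # 3 x 2
B = np.array([[2.0, 0.0, 1.0],
              [-1.0, 5.0, 3.0]])     # 2 x 3

print(np.allclose((A @ B).T, B.T @ A.T))              # (AB)^T = B^T A^T
print(np.allclose(A @ A.T, (A @ A.T).T))              # A A^T is symmetric
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # trace(AB) = trace(BA)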

22 Andrilli/Hecker - Answers to Exercises Chapter Review (c) Suppose A is an n n matrix that commutes with all other n n matrices. Suppose i = j. Then a ij = ith entry of (jth column of A) = ith entry of (ith column of A ji) (by reversing the roles of i and j in part (a)) = (i; i)th entry of A ji = (i; i)th entry of jia (since A commutes with ji) = ith entry of (ith row of jia) = 0 (since all rows other than the jth row of jia have all zero entries, by part (b)). Thus, A is diagonal. Next, a ii = ith entry of (ith column of A) = ith entry of (jth column of A ij) (by part (a)) = (i; j)th entry of A ij = (i; j)th entry of ija (since A commutes with ij) = jth entry of (ith row of ija) = jth entry of (jth row of A) (by part (b)) = a jj. Thus, all the main diagonal terms of A are equal, and so A = ci n for some c R. () (a) T (b) T (c) T (d) F (e) F (f) F Chapter Review Exercises () Yes. Vectors corresponding to adjacent sides are orthogonal. Vectors corresponding to opposite sides are parallel, with one pair having slope and the other pair having slope. h i () u = p 9 ; p 9 ; p 9 [0:8; 0:9; 0:]; slightly longer. () Net velocity = p ; p [0:9; :9]; speed :9 mi/hr. () a = [ 0; 9; 0] m/sec () jx yj = kxk kyk 90:9 () () proj a b = ; 8 ; 9 ; = [:; :; 0:; :8]; b proj a b = ; ; ; = [ 0:; :8; :; :8]; a (b proj a b) = 0. (8) 8 joules (9) We must prove that (x + y) (x y) = 0 ) kxk = kyk : But, (x + y) (x y) = 0 ) x x x y + y x y y = 0 ) kxk kyk = 0 ) kxk = kyk = kxk = kyk (0) First, x = 0, or else proj x y is not de ned. Also, y = 0, since that would imply proj x y = y. Now, assume xky. Then, there is a scalar c = 0 such xy kxk xcx that y = cx. Hence, proj x y = x = x = = y, contradicting the assumption that y = proj x y. kxk ckxk kxk x = cx 8

23 Andrilli/Hecker - Answers to Exercises () (a) A C T = () S = 9 0 ; AB = 0 BA is not de ned; AC = ; 0 CA = 0 8 ; A is not de ned; B = (b) Third row of BC = [ 8]. ; V = Chapter Review () (a) ((A B) T ) T = (A B). Also, ((A B) T ) = ( )(A T B T ) = ( )(( A) ( B)) (de nition of skew-symmetric) = (A B). Hence, ((A B) T ) T = ((A B) T ), so (A B) T is skew-symmetric. (b) Let C = A + B. Now, c ij = a ij + b ij. But for i < j, a ij = b ij = 0. Hence, for i < j, c ij = 0. Thus, C is lower triangular. ; () Company I Company II Company III Price Shipping Cost $800 $00 $000 $900. $000 $00 () Take the transpose of both sides of A T B T = B T A T to get BA = AB. Then, (AB) = (AB)(AB) = A(BA)B = A(AB)B = A B. () Negation: For every square matrix A, A = A. Counterexample: A = [ ]. () If A = O, then some row of A, say the ith row, is nonzero. Apply Result in Section. with x = (ith row of A). (8) Base Step (k = ): Suppose A and B are upper triangular n n matrices, and let C = AB. Then a ij = b ij = 0, for i > j. Hence, for i > j, c ij = P n m= a imb mj = P i m= 0b mj +a ii b ij + P n m=i+ a im 0 = a ii (0) = 0. Thus, C is upper triangular. Inductive Step: Let A ; : : : ; A k+ be upper triangular matrices. Then, the product C = A A k is upper triangular by the Induction Hypothesis, 9

24 Andrilli/Hecker - Answers to Exercises Chapter Review and so the product A A k+ = CA k+ is upper triangular by the Base Step. (9) (a) Let A and B be n n matrices having the properties given in the exercise. Let C = AB. Then we know that a ij = 0 for all i < j, b ij = 0 for all i > j, c ij = 0 for all i = j, and that a ii = 0 and b ii = 0 for all i. We need to prove that a ij = 0 for all i > j. We use induction on j. Base Step (j = ): Let i > j =. Then 0 = c i = P n k= a ikb k = a i b + P n k= a ikb k = a i b + P n k= a ik 0 = a i b. Since b = 0, this implies that a i = 0, completing the Base Step. Inductive Step: Assume for j = ; ; : : : ; m that a ij = 0 for all i > j: That is, assume we have already proved that, for some m, that the rst m columns of A have all zeros below the main diagonal. To complete the inductive step, we need to prove that the mth column of A has all zeros below the main diagonal. That is, we must prove that a im = 0 for all i > m. Let i > m. Then, 0 = c im = P n k= a ikb km = P m k= a ikb km +a im b mm + P n k=m+ a ikb km = P m k= 0 b km + a im b mm + P n k=m+ a ik 0 = a im b mm. But, since b mm = 0, we must have a im = 0, completing the proof. (b) Apply part (a) to B T and A T to prove that B T is diagonal. Hence B is also diagonal. (0) (a) F (b) T (c) F (d) F (e) F (f) T (g) F (h) F (i) F (j) T (k) T (l) T (m) T (n) F (o) F (p) F (q) F (r) T 0

25 Andrilli/Hecker - Answers to Exercises Section. Chapter Section. () (a) f( ; ; )g (b) f(; ; )g (c) fg (d) f(c + ; c ; c; ) j c Rg; (; ; 0; ); (8; ; ; ); (9; ; ; ) (e) f(b d ; b; d + ; d; ) j b; d Rg; ( ; 0; ; 0; ); ( ; ; ; 0; ); ( ; 0; ; ; ) (f) f(b + c ; b; c; 8) j b; c Rg; ( ; 0; 0; 8); ( ; ; 0; 8); (; 0; ; 8) (g) f(; ; )g (h) fg () (a) f(c + e + ; c + e + ; c; e + ; e) j c; e Rg (b) f(b d + 0e ; b; d 8e + ; d; e; ) j b; d; e Rg (c) f( 0c+9d f 8; c d+f +; c; d; f +; f) j c; d; f Rg (d) fg () nickels, dimes, quarters () y = x x + () y = x + x x + () x + y x 8y = 0, or (x ) + (y ) = () In each part, R(AB) = (R(A))B, which equals the given matrix. (a) 0 0 (b) 0 8 (8) (a) To save space, we write hci i for the ith row of a matrix C. For the Type (I) operation R : hii chii: Now, hr(ab)i i = chabi i = chai i B (by the hint in the text) = hr(a)i i B = hr(a)bi i. But, if k = i, hr(ab)i k = habi k = hai k B (by the hint in the text) = hr(a)i k B = hr(a)bi k. For the Type (II) operation R : hii chji + hii: Now, hr(ab)i i = chabi j + habi i = chai j B + hai i B (by the hint in the text) = (chai j + hai i )B = hr(a)i i B = hr(a)bi i. But, if k = i, hr(ab)i k = habi k = hai k B (by the hint in the text) = hr(a)i k B = hr(a)bi k.
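The row operation exercise above shows that performing a row operation on a product AB gives the same result as performing it on A first and then multiplying by B, that is, R(AB) = R(A)B. The sketch below checks this for one Type (II) operation; it assumes NumPy, and the helper row_op_add and the matrices are ours, for illustration only.

import numpy as np

def row_op_add(M, i, j, c):
    # Type (II) row operation <i> <- c<j> + <i>, applied to a copy of M
    M = np.array(M, dtype=float)
    M[i] += c * M[j]
    return M

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0, -1.0],
              [2.0, 5.0, 3.0]])

# R(AB) = R(A) B
print(np.allclose(row_op_add(A @ B, 0, 1, 4.0),
                  row_op_add(A, 0, 1, 4.0) @ B))   # True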

26 Andrilli/Hecker - Answers to Exercises Section. For the Type (III) operation R : hii! hji: Now, hr(ab)i i = habi j = hai j B (by the hint in the text) = hr(a)i i B = hr(a)bi i. Similarly, hr(ab)i j = habi i = hai i B (by the hint in the text) = hr(a)i j B = hr(a)bi j. And, if k = i and k = j, hr(ab)i k = habi k = hai k B (by the hint in the text) = hr(a)i k B = hr(a)bi k. (b) Use induction on n, the number of row operations used. Base Step: The case n = is part () of Theorem., and is proven in part (a) of this exercise. Inductive Step: Assume that R n ( (R (R (AB))) ) = R n ( (R (R (A))) )B and prove that R n+ (R n ( (R (R (AB))) )) = R n+ (R n ( (R (R (A))) ))B: Now, R n+ (R n ( (R (R (AB))) )) = R n+ (R n ( (R (R (A))) )B) (by the inductive hypothesis) (by part (a)). = R n+ (R n ( (R (R (A))) ))B (9) Multiplying a row by zero changes all of its entries to zero, essentially erasing all of the information in the row. (0) Follow the hint in the textbook. First, A(X + c(x X )) = AX + ca(x X ) = B + cax cax = B + cb cb = B. Then we want to show that X + c(x X ) = X + d(x X ) implies that c = d to prove that each real number c produces a di erent solution. But X + c(x X ) = X + d(x X ) ) c(x X ) = d(x X ) ) (c d)(x X ) = 0 ) (c d) = 0 or (X X ) = 0. However, X = X, and so c = d. () (a) T (b) F (c) F (d) F (e) T (f) T Section. () Matrices in (a), (b), (c), (d), and (f) are not in reduced row echelon form. Matrix in (a) fails condition of the de nition. Matrix in (b) fails condition of the de nition. Matrix in (c) fails condition of the de nition. Matrix in (d) fails conditions,, and of the de nition. Matrix in (f) fails condition of the de nition.

27 Andrilli/Hecker - Answers to Exercises Section . () Matrices in (a), (b), (c), (d), and (f) are not in reduced row echelon form. Matrix in (a) fails condition of the definition. Matrix in (b) fails condition of the definition. Matrix in (c) fails condition of the definition. Matrix in (d) fails conditions,, and of the definition. Matrix in (f) fails condition of the definition.
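For readers who want to experiment with reduced row echelon form directly, here is a small Gauss-Jordan sketch in Python (assuming NumPy). The function name rref and the sample matrix are ours; the routine simply applies Type (I), (II), and (III) row operations column by column.

import numpy as np

def rref(M, tol=1e-12):
    # reduce M to reduced row echelon form with Gauss-Jordan elimination
    M = np.array(M, dtype=float)
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # find a row at or below pivot_row with a nonzero entry in this column
        nonzero = np.where(np.abs(M[pivot_row:, col]) > tol)[0]
        if nonzero.size == 0:
            continue
        swap = pivot_row + nonzero[0]
        M[[pivot_row, swap]] = M[[swap, pivot_row]]   # Type (III): swap rows
        M[pivot_row] /= M[pivot_row, col]             # Type (I): scale the pivot row
        for r in range(rows):                         # Type (II): clear the column
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
    return M

A = np.array([[1.0, 2.0, -1.0, 3.0],
              [2.0, 4.0, 1.0, 0.0],
              [1.0, 2.0, 2.0, -3.0]])
print(rref(A))   # rows [1 2 0 1], [0 0 1 -2], [0 0 0 0]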

28 Andrilli/Hecker - Answers to Exercises Section. (0) (a) R : (III): hi! hi R : (I): hi hi R : (II): hi hi R : (II): hi hi + hi ; (b) AB = ; 0 R (R (R (R (AB)))) = R (R (R (R (A))))B = 0 9 () (a) A(X + X ) = AX + AX = O + O = O; A(cX ) = c(ax ) = co = O. (b) Any nonhomogeneous system with two equations and two unknowns that has a unique solution will serve as a counterexample. For instance, consider x + y = x y = : This system has a unique solution: (; 0). Let (s ; s ) and (t ; t ) both equal (; 0). Then the sum of solutions is not a solution in this case. Also, if c =, then the scalar multiple of a solution by c is not a solution. (c) A(X + X ) = AX + AX = B + O = B. (d) Let X be the unique solution to AX = B. Suppose AX = O has a nontrivial solution X. Then, by (c), X + X is a solution to AX = B, and X = X + X since X = O. This contradicts the uniqueness of X. a b () If a = 0, then pivoting in the rst column of 0 c d yields 0 " # b a 0. There is a nontrivial solution if and only if the 0 d c b 0 a (; ) entry of this matrix is zero, which occurs if and only if ad bc = 0. a b Similarly, if c = 0, swapping the rst and second rows of 0 c d 0 " # d c 0 and then pivoting in the rst column yields. There 0 b a d 0 c is a nontrivial solution if and only if the (; ) entry of this matrix is zero, which occurs if and only if ad bc = 0. Finally, if both a and c equal 0, then ad bc = 0 and (; 0) is a nontrivial solution. () (a) The contrapositive is: If AX = O has only the trivial solution, then A X = O has only the trivial solution. Let X be a solution to A X = O. Then O = A X = A(AX ). Thus AX is a solution to AX = O. Hence AX = O by the premise. Thus, X = O, using the premise again. :

29 Andrilli/Hecker - Answers to Exercises Section. (b) The contrapositive is: Let t be a positive integer. If AX = O has only the trivial solution, then A t X = O has only the trivial solution. Proceed by induction. The statement is clearly true when t =, completing the Base Step. Inductive Step: Assume that if AX = O has only the trivial solution, then A t X = O has only the trivial solution. We must prove that if AX = O has only the trivial solution, then A t+ X = O has only the trivial solution. Let X be a solution to A t+ X = O. Then A(A t X ) = O. Thus A t X = O, since it is a solution to AX = O. But then X = O by the inductive hypothesis. () (a) T (b) T (c) F (d) T (e) F (f) F Section. () (a) A row operation of type (I) converts A to B: hi hi. (b) A row operation of type (III) converts A to B: hi! hi. (c) A row operation of type (II) converts A to B: hi hi + hi. () (a) B = I. The sequence of row operations converting A to B is: (I): hi hi (II): hi hi + hi (II): hi hi + hi (III): hi! hi (II): hi hi + hi (b) The sequence of row operations converting B to A is: (II): hi hi + hi (III): hi! hi (II): hi hi + hi (II): hi hi + hi (I): hi hi () (a) The common reduced row echelon form is I. (b) The sequence of row operations is: (II): hi hi + hi (I): hi hi (II): hi 9 hi + hi (II): hi hi + hi 9 (II): hi hi + hi

30 Andrilli/Hecker - Answers to Exercises Section. (II): hi hi + hi (I): hi hi (II): hi (II): hi (I): hi hi + hi hi + hi hi () A reduces to ; while B reduces to () (a) (b) (c) (d) (e) (f) () Corollary. cannot be used in either part (a) or part (b) because there are more equations than variables. (a) Rank =. Theorem. predicts that there is only the trivial solution. Solution set = f(0; 0; 0)g (b) Rank =. Theorem. predicts that nontrivial solutions exist. Solution set = f(c; c; c) j c Rg () In the following answers, the asterisk represents any real entry: (a) Smallest rank = Largest rank = : (b) Smallest rank = Largest rank = (c) Smallest rank = Largest rank =

31 Andrilli/Hecker - Answers to Exercises Section. (d) Smallest rank = Largest rank = (8) (a) x = a + a (c) Not possible (b) x = a a + a (d) x = a a + a (e) The answer is not unique; one possible answer is x = a +a +0a. (f) Not possible (g) x = a a a (h) Not possible (9) (a) Yes: (row ) (row ) (row ) (b) Not in row space (c) Not in row space (d) Yes: (row ) + (row ) (e) Yes, but the linear combination of the rows is not unique; one possible expression for the given vector is (row ) + (row ) + 0(row ). (0) (a) [; ; 0] = q + q + q (b) q = r r r ; q = r + r r ; q = r r + r (c) [; ; 0] = r r + r () (a) (i) B = ; (ii) [; 0; ; ] = 8 [0; ; ; 8] + [; ; 9; 8] + 0[; ; ; ]; [0; ; ; ] = [0; ; ; 8] + 0[; ; 9; 8] + 0[; ; ; ]; (iii) [0; ; ; 8] = 0[; 0; ; ] + [0; ; ; ]; [; ; 9; 8] = [; 0; ; ] + [0; ; ; ]; [; ; ; ] = [; 0; ; ]+ [0; ; ; ] 0 (b) (i) B = ; (ii) [; ; ; 0; ] = [; ; ; ; ] [ ; ; ; ; ]+0[; ; 9; ; ]+0[; ; ; ; ]; [0; 0; 0; ; ] = [; ; ; ; ] [ ; ; ; ; ] + 0[; ; 9; ; ] + 0[; ; ; ; ]; (iii) [; ; ; ; ] = [; ; ; 0; ] [0; 0; 0; ; ]; [ ; ; ; ; ] = [; ; ; 0; ] + [0; 0; 0; ; ]; [; ; 9; ; ] = [; ; ; 0; ] + [0; 0; 0; ; ]; [; ; ; ; ] = [; ; ; 0; ] [0; 0; 0; ; ]
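Rank questions like the ones above are easy to explore numerically as well. The sketch below, assuming NumPy and using made-up matrices, computes the rank of a matrix with a repeated row direction and checks that the rank of a product AB cannot exceed the rank of A.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # a multiple of the first row
              [0.0, 1.0, 1.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

print(np.linalg.matrix_rank(A))                                     # 2
print(np.linalg.matrix_rank(A @ B) <= np.linalg.matrix_rank(A))     # True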

32 Andrilli/Hecker - Answers to Exercises Section. () Suppose that all main diagonal entries of A are nonzero. Then, for each i, perform the row operation hii (=a ii )hii on the matrix A. This will convert A into I n. We prove the converse by contrapositive. Suppose some diagonal entry a ii equals 0. Then the ith column of A has all zero entries. No step in the row reduction process will alter this column of zeroes, and so the unique reduced row echelon form for the matrix must contain at least one column of zeroes, and so cannot equal I n. () (a) Suppose we are performing row operations on an m n matrix A. Throughout this part, we will write hbi i for the ith row of a matrix B. For the Type (I) operation R : hii chii: Now R is hii c hii. Clearly, R and R change only the ith row of A. We want to show that R R leaves hai i unchanged. But hr (R(A))i i = c hr(a)i i = c (chai i) = hai i. For the Type (II) operation R : hii chji+hii: Now R is hii chji + hii. Again, R and R change only the ith row of A, and we need to show that R R leaves hai i unchanged. But hr (R(A))i i = chr(a)i j +hr(a)i i = chai j +hr(a)i i = chai j +chai j +hai i = hai i. For the Type (III) operation R : hii! hji: Now, R = R. Also, R changes only the ith and jth rows of A, and these get swapped. Obviously, a second application of R swaps them back to where they were, proving that R is indeed its own inverse. (b) An approach similar to that used for Type (II) operations in the abridged proof of Theorem. in the text works just as easily for Type (I) and Type (III) operations. However, here is a di erent approach: Suppose R is a row operation, and let X satisfy AX = B. Multiplying both sides of this matrix equation by the matrix R(I) yields R(I)AX = R(I)B, implying R(IA)X = R(IB), by Theorem.. Thus, R(A)X = R(B), showing that X is a solution to the new linear system obtained from AX = B after the row operation R is performed. () The zero vector is a solution to AX = O, but it is not a solution for AX = B. () Consider the systems x + y = x + y = 0 and x y = x y = : The reduced row echelon matrices for these inconsistent systems are, respectively, and : Thus, the original augmented matrices are not row equivalent, since their reduced row echelon forms are di erent. 8

33 Andrilli/Hecker - Answers to Exercises Section. () (a) Plug each of the points in turn into the equation for the conic. This will give a homogeneous system of equations in the variables a, b, c, d, e, and f. This system has a nontrivial solution, by Corollary.. (b) Yes. In this case, there will be even fewer equations, so Corollary. again applies. () The fact that the system is homogeneous was used in the proof of Theorem. to show that the system is consistent (because it has the trivial solution). However, nonhomogeneous systems can be inconsistent, and so the proof of Theorem. does not work for nonhomogeneous systems. (8) (a) R(A) and A are row equivalent, and hence have the same reduced row echelon form (by Theorem.), and the same rank. (b) The reduced row echelon form of A cannot have more nonzero rows than A. (c) If A has k rows of zeroes, then rank(a) = m least k rows of zeroes, so rank(ab) m k. k. But AB has at (d) Let A = R n ( (R (R (D))) ), where D is in reduced row echelon form. Then rank(ab) = rank(r n ( (R (R (D))) )B) = rank(r n ( (R (R (DB))) )) (by part () of Theorem.) = rank(db) (by repeated use of part (a)) rank(d) (by part (c)) = rank(a) (by de nition of rank). (9) (a) As in the abridged proof of Theorem.8 in the text, let a ; : : : ; a m represent the rows of A, and let b ; : : : ; b m represent the rows of B. For the Type (I) operation R : hii chii: Now b i = 0a + 0a + + ca i + 0a i a m, and, for k = i, b k = 0a + 0a + + a k +0a k+ + +0a m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A. For the Type (II) operation R : hii chji + hii: Now b i = 0a + 0a + + ca j + 0a j+ + + a i + 0a i+ + : : : + 0a m, where our notation assumes i > j. (An analogous argument works for i < j.) And, for k = i, b k = 0a +0a + +a k +0a k+ + +0a m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A. For the Type (III) operation R : hii! hji: Now, b i = 0a + 0a + + a j + 0a j a m, b j = 0a + 0a + + a i + 0a i a m, and, for k = i, k = j, b k = 0a + 0a + + a k + 0a k a m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A. (b) This follows directly from part (a) and Lemma.. (0) Let k be the number of matrices between A and B when performing row operations to get from A to B. Use a proof by induction on k. Base Step: If k = 0, then there are no intermediary matrices, and Exercise 9

34 Andrilli/Hecker - Answers to Exercises Section. 9 shows that the row space of B is contained in the row space of A. Inductive Step: Given the chain A! D! D!! D k! D k+! B; we must show that the row space B is contained in the row space of A. The inductive hypothesis shows that the row space of D k+ is in the row space of A, since there are only k matrices between A and D k+ in the chain. Thus, each row of D k+ can be expressed as a linear combination of the rows of A. But by Exercise 9, each row of B can be expressed as a linear combination of the rows of D k+. Hence, by Lemma., each row of B can be expressed as a linear combination of the rows of A, and therefore is in the row space of A. By Lemma. again, the row space of B is contained in the row space of A. () (a) Let x ij represent the jth coordinate of x i. The corresponding homogeneous system in variables a ; : : : ; a n+ is 8 >< a x + a x + + a n+ x n+; = ; >: a x n + a x n + + a n+ x n+;n = 0 which has a nontrivial solution for a ; : : : ; a n+, by Corollary.. (b) Using part (a), we can suppose that a x + + a n+ x n+ = 0, with a i = 0 for some i. Then x i = a a a i x i a a i x i+ i a i x i+ a n+ a i x n+. Let b j = aj a i for j n +, j = i. () (a) T (b) T (c) F (d) F (e) F (f) T Section. () The product of each given pair of matrices equals I. () (a) Rank = ; nonsingular (b) Rank = ; singular (c) Rank = ; nonsingular (d) Rank = ; nonsingular (e) Rank = ; singular () No inverse exists for (b), (e) and (f). (a) " 0 0 # (c) " 8 8 # (d) " # 0

35 Andrilli/Hecker - Answers to Exercises Section. () No inverse exists for (b) and (e). (a) 0 (c) () (a) (b) 0 8 a 0 0 a a a a () (a) The general inverse is cos sin " p When =, the inverse is (d) (f) (c) " p When =, the inverse is When =, the inverse is (b) The general inverse is When =, the inverse is When =, the inverse is When =, the inverse is () (a) Inverse = " # sin. cos # p 0 0 p p p. cos sin 0 sin cos p a a a nn. #.. 0 p p p. p 0 p ; solution set = f(; )g.

36 Andrilli/Hecker - Answers to Exercises Section. (b) Inverse = (c) Inverse = (8) (a) Consider ; solution set = f( ; ; )g ; solution set = f(; 8; ; )g (b) Consider (c) A = A if A is involutory. (9) (a) A = I, B = I, A + B = O (b) A =, B = 0 0 0, A + B = 0 0 (c) If A = B = I, then A + B = I, A = B = I, A + B = I, and (A + B) = I, so A + B = (A + B). (0) (a) B must be the zero matrix. (b) No. AB = I n implies A exists (and equals B). Multiply both sides of AC = O n on the left by A. () : : : ; A 9 ; A ; A ; A ; A ; A ; : : : () B A is the inverse of A B. () (A ) T = (A T ) (by Theorem., part ()) = A. () (a) No step in the row reduction process will alter the column of zeroes, and so the unique reduced row echelon form for the matrix must contain a column of zeroes, and so cannot equal I n. (b) Such a matrix will contain a column of all zeroes. Then use part (a). (c) When pivoting in the ith column of the matrix during row reduction, the (i; i) entry will be nonzero, allowing a pivot in that position. Then, since all entries in that column below the main diagonal are already zero, none of the rows below the ith row are changed in that column. Hence, none of the entries below the ith row are changed when that row is used as the pivot row. Thus, none of the nonzero entries on the main diagonal are a ected by the row reduction steps on previous columns (and the matrix stays in upper triangular form throughout the process). Since this is true for each main diagonal position, the matrix row reduces to I n.

37 Andrilli/Hecker - Answers to Exercises Section. (d) If A is lower triangular with no zeroes on the main diagonal, then A T is upper triangular with no zeroes on the main diagonal. By part (c), A T is nonsingular. Hence A = (A T ) T is nonsingular by part () of Theorem.. (e) Note that when row reducing the ith column of A, all of the following occur:. The pivot a ii is nonzero, so we use a type (I) operation to change it to a. This changes only row i.. No nonzero targets appear below row i, so all type (II) row operations only change rows above row i.. Because of (), no entries are changed below the main diagonal, and no main diagonal entries are changed by type (II) operations. When row reducing [AjI n ], we use the exact same row operations we use to reduce A. Since I n is also upper triangular, fact () above shows that all the zeroes below the main diagonal of I n remain zero when the row operations are applied. Thus, the result of the row operations, namely A, is upper triangular. () (a) Part (): Since AA = I, we must have (A ) = A. Part (): For k > 0, to show (A k ) = (A ) k, we must show that A k (A ) k = I. Proceed by induction on k. Base Step: For k =, clearly AA = I. Inductive Step: Assume A k (A ) k = I. Prove A k+ (A ) k+ = I. Now, A k+ (A ) k+ = AA k (A ) k A = AIA = AA = I. This concludes the proof for k > 0. We now show A k (A ) k = I for k 0. For k = 0, clearly A 0 (A ) 0 = I I = I. The case k = is covered by part () of the theorem. For k, (A k ) = ((A ) k ) (by de nition) = ((A k ) ) (by the k > 0 case) = A k (by part ()). (b) To show (A A n ) = An A, we must prove that (A A n )(An A ) = I. Use induction on n. Base Step: For n =, clearly A A = I. Inductive Step: Assume that (A A n )(An A ) = I. Prove that (A A n+ )(An+ A ) = I. Now, (A A n+ )(An+ A ) = (A A n )A n+ A n+ (A n A ) = (A A n )I(A n A ) = (A A n )(A n A ) = I. () We must prove that (ca)( c A ) = I. But, (ca)( c A ) = c c AA = I = I. () (a) Let p = s, q = t. Then p; q > 0. Now, A s+t = A (p+q) = (A ) p+q = (A ) p (A ) q (by Theorem.) = A p A q = A s A t.
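The inverse identities proved above, including the product rule (whose two-matrix case is (AB)^(-1) = B^(-1) A^(-1)), the power rule (A^k)^(-1) = (A^(-1))^k, and (cA)^(-1) = (1/c) A^(-1), can be confirmed numerically for any pair of nonsingular matrices. A sketch assuming NumPy, with made-up matrices of determinant 1:

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # det = 1, so A is nonsingular
B = np.array([[1.0, 4.0],
              [1.0, 5.0]])   # det = 1, so B is nonsingular
c = 3.0

Ainv = np.linalg.inv(A)
Binv = np.linalg.inv(B)

print(np.allclose(np.linalg.inv(A @ B), Binv @ Ainv))        # (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(c * A), Ainv / c))           # (cA)^-1 = (1/c) A^-1
print(np.allclose(np.linalg.inv(np.linalg.matrix_power(A, 3)),
                  np.linalg.matrix_power(Ainv, 3)))          # (A^3)^-1 = (A^-1)^3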

38 Andrilli/Hecker - Answers to Exercises Section. (b) Let q = t. Then (A s ) t = (A s ) q = ((A s ) ) q = ((A ) s ) q (by Theorem., part ()) = (A ) sq (by Theorem.) = A sq (by Theorem., part ()) = A s( q) = A st. Similarly, (A s ) t = ((A ) s ) q (as before) = ((A ) q ) s (by Theorem.) = (A q ) s = (A t ) s. (8) First assume AB = BA. Then (AB) = ABAB = A(BA)B = A(AB)B = A B. Conversely, if (AB) = A B, then ABAB = AABB =) A ABABB = A AABBB =) BA = AB. (9) If (AB) q = A q B q for all q, use q = and the proof in Exercise 8 to show BA = AB. Conversely, we need to show that BA = AB ) (AB) q = A q B q for all q. First, we prove that BA = AB ) AB q = B q A for all q. We use induction on q. Base Step (q = ): AB = A(BB) = (AB)B = (BA)B = B(AB) = B(BA) = (BB)A = B A. Inductive Step: AB q+ = A(B q B) = (AB q )B = (B q A)B (by the inductive hypothesis) = B q (AB) = B q (BA) = (B q B)A = B q+ A. Now we use this lemma (BA = AB ) AB q = B q A for all q ) to prove BA = AB ) (AB) q = A q B q for all q. Again, we proceed by induction on q. Base Step (q = ): (AB) = (AB)(AB) = A(BA)B = A(AB)B = A B. Inductive Step: (AB) q+ = (AB) q (AB) = (A q B q )(AB) (by the inductive hypothesis) = A q (B q A)B = A q (AB q )B (by the lemma) = (A q A)(B q B) = A q+ B q+. (0) Base Step (k = 0): I n = (A I n )(A I n ). Inductive Step: Assume I n +A+A + +A k = (A k+ I n )(A I n ) ; for some k: Prove I n +A+A + +A k +A k+ = (A k+ I n )(A I n ) : Now, I n +A+A + +A k +A k+ = (I n +A+A + +A k )+A k+ = (A k+ I n )(A I n ) + A k+ (A I n )(A I n ) (where the rst term is obtained from the inductive hypothesis) = ((A k+ I n ) + A k+ (A I n ))(A I n ) = (A k+ I n )(A I n ) : () (a) Suppose that n > k. Then, by Corollary., there is a nontrivial X such that BX = O. Hence, (AB)X = A(BX) = AO = O. But, (AB)X = I n X = X. Therefore, X = O, which gives a contradiction. (b) Suppose that k > n. Then, by Corollary., there is a nontrivial Y such that AY = O. Hence, (BA)Y = B(AY) = BO = O. But, (BA)Y = I k Y = X. Therefore, Y = O, which gives a contradiction. (c) Parts (a) and (b) combine to prove that n = k. Hence A and B are both square matrices. They are nonsingular with A = B by the de nition of a nonsingular matrix, since AB = I n = BA.


More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Chapter 2: Matrix Algebra

Chapter 2: Matrix Algebra Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

MATH 2030: MATRICES. Example 0.2. Q:Define A 1 =, A. 3 4 A: We wish to find c 1, c 2, and c 3 such that. c 1 + c c

MATH 2030: MATRICES. Example 0.2. Q:Define A 1 =, A. 3 4 A: We wish to find c 1, c 2, and c 3 such that. c 1 + c c MATH 2030: MATRICES Matrix Algebra As with vectors, we may use the algebra of matrices to simplify calculations. However, matrices have operations that vectors do not possess, and so it will be of interest

More information

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey

Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018)

MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018) MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018) COURSEWORK 3 SOLUTIONS Exercise ( ) 1. (a) Write A = (a ij ) n n and B = (b ij ) n n. Since A and B are diagonal, we have a ij = 0 and

More information

Preliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100

Preliminary Linear Algebra 1. Copyright c 2012 Dan Nettleton (Iowa State University) Statistics / 100 Preliminary Linear Algebra 1 Copyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 100 Notation for all there exists such that therefore because end of proof (QED) Copyright c 2012

More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

ICS 6N Computational Linear Algebra Matrix Algebra

ICS 6N Computational Linear Algebra Matrix Algebra ICS 6N Computational Linear Algebra Matrix Algebra Xiaohui Xie University of California, Irvine xhx@uci.edu February 2, 2017 Xiaohui Xie (UCI) ICS 6N February 2, 2017 1 / 24 Matrix Consider an m n matrix

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

NOTES FOR LINEAR ALGEBRA 133

NOTES FOR LINEAR ALGEBRA 133 NOTES FOR LINEAR ALGEBRA 33 William J Anderson McGill University These are not official notes for Math 33 identical to the notes projected in class They are intended for Anderson s section 4, and are 2

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

Applied Matrix Algebra Lecture Notes Section 2.2. Gerald Höhn Department of Mathematics, Kansas State University

Applied Matrix Algebra Lecture Notes Section 2.2. Gerald Höhn Department of Mathematics, Kansas State University Applied Matrix Algebra Lecture Notes Section 22 Gerald Höhn Department of Mathematics, Kansas State University September, 216 Chapter 2 Matrices 22 Inverses Let (S) a 11 x 1 + a 12 x 2 + +a 1n x n = b

More information

Offline Exercises for Linear Algebra XM511 Lectures 1 12

Offline Exercises for Linear Algebra XM511 Lectures 1 12 This document lists the offline exercises for Lectures 1 12 of XM511, which correspond to Chapter 1 of the textbook. These exercises should be be done in the traditional paper and pencil format. The section

More information

APPM 3310 Problem Set 4 Solutions

APPM 3310 Problem Set 4 Solutions APPM 33 Problem Set 4 Solutions. Problem.. Note: Since these are nonstandard definitions of addition and scalar multiplication, be sure to show that they satisfy all of the vector space axioms. Solution:

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Matrix Operations. Linear Combination Vector Algebra Angle Between Vectors Projections and Reflections Equality of matrices, Augmented Matrix

Matrix Operations. Linear Combination Vector Algebra Angle Between Vectors Projections and Reflections Equality of matrices, Augmented Matrix Linear Combination Vector Algebra Angle Between Vectors Projections and Reflections Equality of matrices, Augmented Matrix Matrix Operations Matrix Addition and Matrix Scalar Multiply Matrix Multiply Matrix

More information

MATRIX ALGEBRA. or x = (x 1,..., x n ) R n. y 1 y 2. x 2. x m. y m. y = cos θ 1 = x 1 L x. sin θ 1 = x 2. cos θ 2 = y 1 L y.

MATRIX ALGEBRA. or x = (x 1,..., x n ) R n. y 1 y 2. x 2. x m. y m. y = cos θ 1 = x 1 L x. sin θ 1 = x 2. cos θ 2 = y 1 L y. as Basics Vectors MATRIX ALGEBRA An array of n real numbers x, x,, x n is called a vector and it is written x = x x n or x = x,, x n R n prime operation=transposing a column to a row Basic vector operations

More information

Elementary Row Operations on Matrices

Elementary Row Operations on Matrices King Saud University September 17, 018 Table of contents 1 Definition A real matrix is a rectangular array whose entries are real numbers. These numbers are organized on rows and columns. An m n matrix

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

Appendix C Vector and matrix algebra

Appendix C Vector and matrix algebra Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Matrices and Determinants

Matrices and Determinants Chapter1 Matrices and Determinants 11 INTRODUCTION Matrix means an arrangement or array Matrices (plural of matrix) were introduced by Cayley in 1860 A matrix A is rectangular array of m n numbers (or

More information

Chapter 1 Matrices and Systems of Equations

Chapter 1 Matrices and Systems of Equations Chapter 1 Matrices and Systems of Equations System of Linear Equations 1. A linear equation in n unknowns is an equation of the form n i=1 a i x i = b where a 1,..., a n, b R and x 1,..., x n are variables.

More information

Linear Equations in Linear Algebra

Linear Equations in Linear Algebra 1 Linear Equations in Linear Algebra 1.7 LINEAR INDEPENDENCE LINEAR INDEPENDENCE Definition: An indexed set of vectors {v 1,, v p } in n is said to be linearly independent if the vector equation x x x

More information

3 Matrix Algebra. 3.1 Operations on matrices

3 Matrix Algebra. 3.1 Operations on matrices 3 Matrix Algebra A matrix is a rectangular array of numbers; it is of size m n if it has m rows and n columns. A 1 n matrix is a row vector; an m 1 matrix is a column vector. For example: 1 5 3 5 3 5 8

More information

Announcements Wednesday, October 10

Announcements Wednesday, October 10 Announcements Wednesday, October 10 The second midterm is on Friday, October 19 That is one week from this Friday The exam covers 35, 36, 37, 39, 41, 42, 43, 44 (through today s material) WeBWorK 42, 43

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

Chapter SSM: Linear Algebra Section Fails to be invertible; since det = 6 6 = Invertible; since det = = 2.

Chapter SSM: Linear Algebra Section Fails to be invertible; since det = 6 6 = Invertible; since det = = 2. SSM: Linear Algebra Section 61 61 Chapter 6 1 2 1 Fails to be invertible; since det = 6 6 = 0 3 6 3 5 3 Invertible; since det = 33 35 = 2 7 11 5 Invertible; since det 2 5 7 0 11 7 = 2 11 5 + 0 + 0 0 0

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE

94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE 94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE 3.3 Dot Product We haven t yet de ned a multiplication between vectors. It turns out there are di erent ways this can be done. In this section, we present

More information

We could express the left side as a sum of vectors and obtain the Vector Form of a Linear System: a 12 a x n. a m2

We could express the left side as a sum of vectors and obtain the Vector Form of a Linear System: a 12 a x n. a m2 Week 22 Equations, Matrices and Transformations Coefficient Matrix and Vector Forms of a Linear System Suppose we have a system of m linear equations in n unknowns a 11 x 1 + a 12 x 2 + + a 1n x n b 1

More information

Introduction to Matrices

Introduction to Matrices POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder

More information

MATH10212 Linear Algebra B Homework Week 5

MATH10212 Linear Algebra B Homework Week 5 MATH Linear Algebra B Homework Week 5 Students are strongly advised to acquire a copy of the Textbook: D C Lay Linear Algebra its Applications Pearson 6 (or other editions) Normally homework assignments

More information

Math 314/814 Topics for first exam

Math 314/814 Topics for first exam Chapter 2: Systems of linear equations Math 314/814 Topics for first exam Some examples Systems of linear equations: 2x 3y z = 6 3x + 2y + z = 7 Goal: find simultaneous solutions: all x, y, z satisfying

More information

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1... Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Announcements Monday, October 02

Announcements Monday, October 02 Announcements Monday, October 02 Please fill out the mid-semester survey under Quizzes on Canvas WeBWorK 18, 19 are due Wednesday at 11:59pm The quiz on Friday covers 17, 18, and 19 My office is Skiles

More information

Math 313 Chapter 1 Review

Math 313 Chapter 1 Review Math 313 Chapter 1 Review Howard Anton, 9th Edition May 2010 Do NOT write on me! Contents 1 1.1 Introduction to Systems of Linear Equations 2 2 1.2 Gaussian Elimination 3 3 1.3 Matrices and Matrix Operations

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Chapter 1: Systems of Linear Equations

Chapter 1: Systems of Linear Equations Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where

More information

b 1 b 2.. b = b m A = [a 1,a 2,...,a n ] where a 1,j a 2,j a j = a m,j Let A R m n and x 1 x 2 x = x n

b 1 b 2.. b = b m A = [a 1,a 2,...,a n ] where a 1,j a 2,j a j = a m,j Let A R m n and x 1 x 2 x = x n Lectures -2: Linear Algebra Background Almost all linear and nonlinear problems in scientific computation require the use of linear algebra These lectures review basic concepts in a way that has proven

More information

4.1 Distance and Length

4.1 Distance and Length Chapter Vector Geometry In this chapter we will look more closely at certain geometric aspects of vectors in R n. We will first develop an intuitive understanding of some basic concepts by looking at vectors

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

Chapter y. 8. n cd (x y) 14. (2a b) 15. (a) 3(x 2y) = 3x 3(2y) = 3x 6y. 16. (a)

Chapter y. 8. n cd (x y) 14. (2a b) 15. (a) 3(x 2y) = 3x 3(2y) = 3x 6y. 16. (a) Chapter 6 Chapter 6 opener A. B. C. D. 6 E. 5 F. 8 G. H. I. J.. 7. 8 5. 6 6. 7. y 8. n 9. w z. 5cd.. xy z 5r s t. (x y). (a b) 5. (a) (x y) = x (y) = x 6y x 6y = x (y) = (x y) 6. (a) a (5 a+ b) = a (5

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions.

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. Review for Exam. Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. x + y z = 2 x + 2y + z = 3 x + y + (a 2 5)z = a 2 The augmented matrix for

More information

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix

This operation is - associative A + (B + C) = (A + B) + C; - commutative A + B = B + A; - has a neutral element O + A = A, here O is the null matrix 1 Matrix Algebra Reading [SB] 81-85, pp 153-180 11 Matrix Operations 1 Addition a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn + b 11 b 12 b 1n b 21 b 22 b 2n b m1 b m2 b mn a 11 + b 11 a 12 + b 12 a 1n

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information