Student Solutions Manual for Web Sections, Elementary Linear Algebra, 4th Edition. Stephen Andrilli and David Hecker.


Copyright, Elsevier Inc. All rights reserved.

Table of Contents

Lines and Planes and the Cross Product in R^3
Change of Variables and the Jacobian
Function Spaces
Max-Min Problems in R^n and the Hessian Matrix
Jordan Canonical Form

3 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Lines and Planes Lines and Planes and the Cross Product in R () (a) Let (x ; y ; z ) = (; ; ) and [a; b; c] = [; ; ]. Then, from Theorem, parametric equations for the line are x = +t; y = +t; z = t (t R); that is, x = ; y = +t; z = t (t R). (c) Let (x ; y ; z ) = (; ; ) and (x ; y ; z ) = (; ; ): Then a vector in the direction of the line is [x x ; y y ; z z ] = [ ; ; ]: Let [a; b; c] = [ ; ; ]. Then, from Theorem, parametric equations for the line are x = t; y = t; z = + t (t R). (Reversing the order of the points gives (x ; y ; z ) = (; ; ) and [a; b; c] = [; ; ], and so another valid set of parametric equations for the line is: x = + t; y = + t; z = t (t R):) (e) Let (x ; y ; z ) = (; ; ): A vector in the direction of the given line is [a; b; c] = [ ; ; ] (since these values are the respective coe cients of the parameter t); and therefore this is a vector in the direction of the desired line as well. Then, from Theorem, parametric equations for the line are: x = t; y = + t; z = + t = (t R). () (a) Setting the expressions for x equal gives + t = 9 + s, which means s = t. Setting the expressions for y equal gives t = + s, which means s + t =. Substituting t for s into s + t = leads to t = and s =. Plugging these values into both expressions for x, y, and z in turn results in the same values: x =, y =, and z =. Hence, the given lines intersect at a single point: (; ; ): (c) Setting the expressions for x equal gives t = 8 s, which means s = + t: (The same result is obtained by setting the expressions for y equal.) Setting the expressions for z equal gives t = 9 s, which means s = t +. Since it is impossible for these two expressions for s to be equal ( + t = t + ) for any value of t, the given lines do not intersect. (e) Setting the expressions for x equal gives t = 8s 9, which means t = s. 
Similarly, setting the expressions for y equal and setting the expressions for z equal also gives t = s : (In other words, substituting s for t in the expressions for the rst given line for x, y, z, respectively, gives the expressions for the second given line for x, y, z, respectively.) Thus, the lines are actually identical. The intersection of the lines is therefore the set of all points on (either) line; that is, the intersection consists of all points on the line x = t ; y = t + ; z = t (t R). () (b) A vector in the direction of l is a = [ ; ; ]: A vector in the direction of l is b = [ ; ; ]. If is the angle between vectors a and b, then cos = ab kakkbk = 9++ p = p = p : Then, p = cos = : () (b) Let (x ; y ; z ) = (; ; ): In the solution to Exercise (c) above, we found that [a; b; c] = [ ; ; ] is a vector in the direction of the line. Substituting these values into the formula for the symmetric equations of the line gives x = y = z (If, instead, we had considered the points in reverse order, and chosen (x ; y ; z ) = (; ; ) and calculated [a; b; c] = [; ; ] as a vector in the direction of the line, we would have obtained another valid form for the symmetric equations of the line: x = y+ = z :) () (a) Let (x ; y ; z ) = (; ; ), and let [a; b; c] = [; ; ]. Then by Theorem, the equation of the plane is x + y + z = ()() + ()() + ()( ); which simpli es to x + y + z =.
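The intersection test used in these solutions (equate the x and y expressions, solve for s and t, then confirm the z expressions agree) can be spot-checked numerically. The two lines below are hypothetical stand-ins, since the exercises' own coefficients did not survive extraction:

```python
import numpy as np

# Hypothetical lines (the exercises' own coefficients were lost):
#   l1: (x, y, z) = P1 + t*d1,   l2: (x, y, z) = P2 + s*d2
P1, d1 = np.array([1., 3., 0.]), np.array([2., -1., 1.])
P2, d2 = np.array([3., 1., 2.]), np.array([1., 1., -1.])

# Equate the x and y components, as in the solutions above:
#   d1_x*t - d2_x*s = P2_x - P1_x,   d1_y*t - d2_y*s = P2_y - P1_y
A = np.array([[d1[0], -d2[0]],
              [d1[1], -d2[1]]])
b = np.array([P2[0] - P1[0], P2[1] - P1[1]])
t, s = np.linalg.solve(A, b)

# The lines intersect only if the z components agree as well.
pt1, pt2 = P1 + t * d1, P2 + s * d2
print(np.allclose(pt1, pt2))   # True: they meet at (11/3, 5/3, 4/3)
```

For lines like those in Exercise (c), the z check fails (or the 2x2 system is inconsistent/singular), signaling skew or parallel lines rather than an intersection point.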

4 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Lines and Planes (d) This plane goes through the origin (x ; y ; z ) = (; ; ) with normal vector [a; b; c] = [; ; ]. Then by Theorem, the equation for the plane is x + y + z = ()() + ()() + ()(), which simpli es to z = : () (a) A normal vector to the given plane is a = [; ; ]. Since jjajj = ; a unit normal vector to the plane is given by ; ; : () (a) Normal vectors to the given planes are, respectively, a = [; ; 8] and b = [; ; ]. If is the angle between vectors a and b, then cos = ab kakkbk = p p = 8 = : Then, = cos = : (8) (a) Let x = [x ; x ; x ] = [; ; ] and y = [y ; y ; y ] = [; ; ]. Then [x y x y ; x y x y ; x y x y ] = [()() ( )(); ( )() ()(); ()() ()()] = [; ; ]: (g) First note that [; ; ] [ ; ; ] = [()() ( )(); ( )( ) ()(); ()() ()( )] = [; ; ]: Thus, [; ; ] ([; ; ] [ ; ; ]) = [; ; ] [; ; ] = ()() + ()() + ( )() = 8: () (a) Let A = (; ; ), B = ( ; ; ), C = (; ; ). Let x be the vector from A to B. Then x = [ ; ; ] = [ ; ; ]. Let y be the vector from A to C. Then y = [ ; ; ] = [; ; ]: Hence, a normal vector to the plane is x y = [; ; ]: Therefore, the equation of the plane is x + y + z = ()() + ()() + ()(), or, x + y =, which reduces to x + y = : () (b) From the solution to Exercise (b) above, we know that a vector in the direction of l is [ ; ; ], and a vector in the direction of l is [ ; ; ]. The cross product of these vectors gives [ ; ; ] as a normal vector to the plane. Dividing by, we nd that [; ; ] is also a normal vector to the plane. We next nd a point of intersection for the lines. Setting the expressions for y equal gives = s,.which means that s =. Setting the expressions for x equal gives 9 t = s, which means s = t. Substituting for s into s = t leads to t =. Plugging these values for s and t into both expressions for x, y and z, respectively, yields the same values: x = ; y = ; z =. 
Hence, the given lines intersect at a single point: (; ; ): Finally, the equation for the plane is x + y + z = ()() + ()() + ()(), which reduces to x + z = 9. (8) (c) The normal vector for the plane x+z = is [; ; ], and the normal vector for the plane y +z = 9 is [; ; ]. The cross product [ ; ; ] of these vectors produces a vector in the direction of the line of intersection. To nd a point of intersection of the planes, choose z = to obtain x = and y = 9. That is, one point where the planes intersect is (; 9; ). Thus, the line of intersection is: x = t; y = 9 t; z = t (t R). (9) (a) A vector in the direction of the line l is [a; b; c] = [ ; ; ]. By setting t =, we obtain the point (x ; y ; z ) = (; ; ) on l. The vector from this point to the given point (x ; y ; z ) = (; ; ) is [x x ; y y ; z z ] = [ ; ; ]. Finally, using Theorem, the shortest distance from the given point to l is: k[ ; ; ] [ ; ; ]k k[ ; 8; ]k p = = ( ) + + p :8 () (a) Since the given point is (x ; y ; z ) = (; ; ), and the given plane is x y z =, we have a =, b =, c =, and d =. From the formula in Theorem, the shortest distance from

5 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Lines and Planes the given point to the given plane is: jax + by + cz dj p = a + b + c j + ( ) + ( ) j p + ( ) + ( ) = j j (c) The given point is (x ; y ; z ) = (; ; ): First, we nd the equation of the plane through the points A = (; ; ), B = (; ; ), and C = (; ; ). Note that the vector from A to B is [ ; ; ] and the vector from A to C is [; ; ]. Thus, a normal vector to the plane is the cross product [; ; ] of these vectors. Hence, the equation of the plane is x + ( )y + ( )z = ()() + ( )() + ( )(), or, x y z =. That is, a =, b =, c =, and d =. From the formula in Theorem, the shortest distance from the given point to the given plane is: jax + by + cz dj p = a + b + c = j + ( ) + ( ) ( ) j p + ( ) + ( ) = :9. () (a) A vector in the direction of the line l is [ ; ; ], and a vector in the direction of the line l is [; ; ]. (Note that the lines are not parallel.) The cross product of these vectors is v = [; 8; ]: By setting t = and s =, respectively, we nd that (x ; y ; z ) = (; ; ) is a point on l, and (x ; y ; z ) = (; ; ) is a point on l. Hence, the vector from (x ; y ; z ) to (x ; y ; z ) is w = [x x ; y y ; z z ] = [ ; ; ]. Finally, from the formula given in this exercise, we nd that the shortest distance between the given lines is j(v w)j kvk = j + j p ( ) = p = p 9 :. (c) A vector in the direction of the line l is [ ; ; ], and a vector in the direction of the line l is [ ; ; ]. (Note that the lines are not parallel.) The cross product of these vectors is v = [ ; ; ]: By setting t = and s =, respectively, we nd that (x ; y ; z ) = (; ; 8) is a point on l, and (x ; y ; z ) = ( ; ; ) is a point on l. Hence, the vector from (x ; y ; z ) to (x ; y ; z ) is w = [x x ; y y ; z z ] = [ ; ; 9]. Finally, from the formula given in this exercise, we nd that the shortest distance between the given lines is j(v w)j kvk j8 + j = p ( ) + ( ) + ( ) = p =. 
That is, the given lines actually intersect. (In fact, it can easily be shown that the lines intersect at ( ; ; 8).) () (a) Labeling the first plane as ax + by + cz = d₁ and the second plane as ax + by + cz = d₂, we have [a; b; c] = [; ; ], and d₁ =, d₂ =. From the formula in Exercise, the shortest distance between the given planes is |d₁ − d₂| / √(a² + b² + c²) ≈ .88. (c) Divide the equation of the first plane by, and divide the equation of the second plane by, to obtain equations for the planes whose coefficients agree: x + y − z = 9 and x + y − z =. Labeling the first of these new equations as ax + by + cz = d₁ and the second as ax + by + cz = d₂,

6 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Lines and Planes we have [a; b; c] = [; ; ], and d = 9, d =. From the formula in Exercise, the shortest distance between the given planes is jd d j p a + b + c = j 9 j p + () + ( ) = p = p 9 :. 9 () (a) Let (x ; y ; z ) = (; ; ), (x ; y ; z ) = (; ; ), and (x ; y ; z ) = (; ; ). Then we have [x x ; y y ; z z ] = [; ; ]; and [x x ; y y ; z z ] = [; ; ]: Hence, from the formula in this exercise, the area of the triangle with vertices (; ; ), (; ; ), and (; ; ) is p k [x k[; ; ]k x ; y y ; z z ] [x x ; y y ; z z ] k = = :. (9) (a) The vector r = [; ; ] ft, and the vector! = [; ; ] = [; ; ] rad/sec. Hence, the velocity vector v =! r = [ ; ; ] ft/sec; speed = jjvjj = p : ft/sec. () (a) Let r = [x; y; z] represent the direction vector from the origin to a point (x; y; z) on the Earth s surface at latitude. Since the rotation is counterclockwise about the z-axis and one rotation of radians occurs every hours = 8 seconds, we nd that! = [; ; 8 ] = [; ; ] rad/sec. Then, y v =! r = ; x ; : By considering the right triangle with hypotenuse = Earth s radius = 9 km, and altitude z, we see that z = 9 sin. But since jjrjj = p x + y + z = 9, we must have x +y +(9 sin ) = (9) ; which means x + y = (9 cos ) : Thus, jjvjj = s : cos km/sec. y + x + = p (x + y ) = 9 cos () (a) False. If [a; b; c] is a direction vector for a given line, then any nonzero scalar multiple of [a; b; c] is another direction vector for that same line. For example, the line x = t, y =, z = (t R) (that is, the x-axis) has many distinct direction vectors, such as [; ; ] and [; ; ]. (b) False. If the lines are skew lines, they do not intersect each other, and the angle between the lines is not de ned. For example, the lines in Exercise (c) do not intersect, and hence, there is no angle de ned between them. (c) True. 
Two distinct nonparallel planes always intersect each other (in a line), and the angle between the planes is the (minimal) angle between a pair of normal vectors to the planes. (d) True. The vector [a; b; c] is normal to the given plane, and the vector [ a; b; c] is a nonzero scalar multiple of [a; b; c]; so [ a; b; c] is also perpendicular to the given plane. (e) True. For all vectors x, y, and z, we have: (x + y) z = (z (x + y)) (by part () of Theorem ) = ((z x) + (z y)) (by part () of Theorem ) = (z x) (z y) = (x z) + (y z) (by part () of Theorem ). (f) True. For all vectors x, y, and z, we have: y (z x) = (y z) x, by part () of Theorem. (g) False. From Corollary, if x and y are parallel, then x y = : For example, [; ; ] [; ; ] = [; ; ].

7 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Lines and Planes (h) True. From Theorem, if x and y are nonzero, then kx yk = (jjxjj jjyjj) sin, and dividing both sides by (jjxjj jjyjj) (which is nonzero) gives the desired result. (i) False. k i = j, while i k = j. (Since the results of the cross product operations are nonzero here, the anti-commutative property from part () of Theorem also shows that the given statement is false.) (j) True. The vectors x, y, and x y, in that order, form a right-handed system, as in Figure 8, where we can imagine the vectors x, y, and x y lying along the positive x-, y-, and z-axes, respectively. Then, moving the (right) hand in that gure, so that its ngers are curling instead from the vector x y (positive z-axis) toward the vector x (positive x-axis) will cause the thumb of the (right) hand to point in the direction of y (positive y-axis). Thus, the vectors x y, x, and y, taken in that order, also form a right-handed system. (k) True. This is precisely the method outlined before Theorem for nding the distance from a point to a plane. (l) False. If! is the angular velocity, v is the velocity, and r is the position vector, then v =! r, but it is not generally true that v r =!: For example, in Example, v = ; ; ; r = [ ; ; ]; and! = ; ;, so, v r = ; ; [ ; ; ] = ; ; =!:
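The distance formulas used throughout this section can be collected into a small numeric toolbox. All inputs below are hypothetical stand-ins, since the exercises' own values were lost in extraction:

```python
import numpy as np

def point_plane_dist(n, d, p):
    """Distance from point p to the plane n . [x,y,z] = d."""
    return abs(n.dot(p) - d) / np.linalg.norm(n)

def skew_line_dist(p1, d1, p2, d2):
    """Distance between lines p1 + t*d1 and p2 + s*d2 via |v . w| / ||v||,
    where v = d1 x d2 and w = p2 - p1 (the formula from the exercise)."""
    v = np.cross(d1, d2)
    return abs(v.dot(p2 - p1)) / np.linalg.norm(v)

def parallel_plane_dist(n, d1, d2):
    """Distance between parallel planes n . [x,y,z] = d1 and n . [x,y,z] = d2."""
    return abs(d1 - d2) / np.linalg.norm(n)

def triangle_area(a, b, c):
    """Area of the triangle with vertices a, b, c: half the cross-product norm."""
    return np.linalg.norm(np.cross(b - a, c - a)) / 2

# Hypothetical sample data:
n = np.array([2., -1., 2.])
print(point_plane_dist(n, 3., np.array([1., 4., 0.])))    # 5/3
print(parallel_plane_dist(n, 4., -5.))                    # 3.0
print(triangle_area(np.zeros(3), np.array([2., 0., 0.]),
                    np.array([0., 3., 0.])))              # 3.0
print(skew_line_dist(np.zeros(3), np.array([1., 0., 1.]),
                     np.array([2., 3., -1.]), np.array([0., 1., 1.])))
```

Note that `skew_line_dist` returns 0 exactly when the (non-parallel) lines intersect, which is how the solution above detects the intersecting pair.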

8 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jacobian Change of Variables and the Jacobian () (a) The Jacobian matrix J = " # = u (c) The Jacobian matrix J = = v dx dy = jjj du dv = (u + v ) du dv. # ((u+) +v )() u((u+)) ((u+) +v ) (e) The Jacobian matrix J =. Therefore, dx dy = jjj du dv = du dv. v u. Therefore, uv ((u+) +v ) ((u+) +v )( u) ( (u +v ))((u+)) ((u+) +v )( v) ( (u +v ))(v) ((u+) +v ) ((u+) +v ) J = ((u+) +v ) = ((u+) +v ) " (u + ) + v # u(u + ) uv u (u + ) + v (u + ) u + v uv v " # u + v + uv. u u + v uv v Now, for a matrix A, jkaj = k jaj, and so, jjj = u + v + ( uv v) ( uv) u u + v ((u+) +v ). Simplifying yields = u v uv uv + u v v v u v u v uv + uv ((u+) +v ) = uv u v v v 8 = u v + uv + v + v ((u+) +v ) ((u+) +v ) 8 = (u + ) + v v 8v = ((u+) +v ) 8jvj Hence, dx dy = jjj du dv = du dv. ((u+) +v ) () (a) The Jacobian matrix ((u+) @w =. Thus, jjj = ()()() + ()()() + ()()() ()()() ()()() ()()() =. Therefore, dx dy dz = jjj du dv dw = du dv dw. (c) The Jacobian matrix J = (u +v @w = u + v + w u uv uw uv u + v + w v vw uw vw u + v + w w To make this expression simpler, let A = u + v + w. Now, for a matrix M, jkmj = k jmj, and so by basketweaving,.

9 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jacobian jjj = A A u A v A w + ( uv)( vw)( uw) + ( uw)( uv)( vw) ( uw)(a v )( uw) (A u )( vw)( vw) ( uv)( uv)(a w ) = A A u A v A w A + u v A + u w A + v w A 8u v w 8u v w 8u v w u w A + 8u v w v w A + 8u v w u v A + 8u v w = A A (u + v + w )A = A A = A. Therefore, dx dy dz = jjj du dv dw = (u +v +w ) du dv dw. () (a) For the given region R, r = x + y ranges from to 9, and so r ranges from to. Since R is restricted to the rst quadrant, ranges from to. Therefore, using x = r cos, y = r sin, and dx dy = r dr d yields RR (x + y) dx dy = RR (r cos + r sin ) r dr d R R = R R (r cos + r sin ) r dr d = R r (cos + sin ) d = R (cos + sin ) d = (sin cos ) =. (c) For the given sphere R, ranges from to. Since R is restricted to the upper half of the xy-plane, ranges from to. However, ranges from to, since R is the entire upper half of the sphere; that is, all the way around the z-axis. Therefore, using z = cos and dx dy dz = ( sin ) d d d, we get RRR z dx dy dz = RRR cos ( sin ) d d d R R = R R R cos sin d d d = R R cos sin d d R = R cos sin d d = R sin R d = d = 8 =. (e) Since r = x + y, the condition x + y on the region R means that r ranges from to and ranges from to. Therefore, using x + y = r and dx dy dz = r dr d dz produces RRR x + y + z dx dy dz = R R R r + z r dr d dz R = R R r + r z d dz = R R + z d dz = R + z dz = R 8 + z dz = 8z + z = + ( ) = 8. () (a) True. This is because all of the partial derivatives in the Jacobian matrix will be constants, since they are derivatives of linear functions. Hence, the determinant of the Jacobian will be a constant.

(b) True. The determinant of the Jacobian matrix J is ( ); however, dx dy = |J| du dv = | | du dv = du dv. (c) True. This is explained and illustrated in the Jacobian section, just after Example, in Figure, and again in Example. (d) False. The scaling factor is the absolute value of the determinant of the Jacobian matrix, which will not equal the determinant of the Jacobian matrix when that determinant is negative. For a counterexample, see the solution to part (b) of this Exercise.
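The section's two-step recipe (compute the Jacobian determinant, then use dx dy = |det J| du dv inside the integral) can be run symbolically. This sketch uses the standard polar substitution with hypothetical integration bounds, since the exercises' own substitutions and bounds were lost in extraction:

```python
import sympy as sp

# Polar change of variables: x = r*cos(t), y = r*sin(t).
r, t = sp.symbols('r t', positive=True)
x, y = r * sp.cos(t), r * sp.sin(t)

# Jacobian matrix of (x, y) with respect to (u, v) = (r, t).
J = sp.Matrix([x, y]).jacobian(sp.Matrix([r, t]))
detJ = sp.simplify(J.det())
print(detJ)            # r, so dx dy = |r| dr dt = r dr dt  (r > 0)

# Using it, as in part (a): integrate x + y over the first-quadrant
# ring 1 <= r <= 3 (hypothetical bounds).
val = sp.integrate((x + y) * detJ, (r, 1, 3), (t, 0, sp.pi / 2))
print(val)             # 52/3
```

The same `jacobian` call on a 3-variable substitution reproduces the triple-integral scaling dx dy dz = |det J| du dv dw used in the later parts.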

11 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Function Spaces Function Spaces () (a) To test for linear independence, we must determine whether ae x + be x + ce x = has any nontrivial solutions. Using this equation we substitute the following values for x: But 8 < : e e e e e e Letting x = =) a + b + c = Letting x = =) a(e) + b(e ) + c(e ) = Letting x = =) a(e ) + b(e ) + c(e ) = row reduces to showing that the system has only the trivial solution a = b = c =. Hence, the given set is linearly independent. (c) To test for linear independence, we must check for nontrivial solutions to x x + x x + x a + x + b + x + c x + x = : + Using this equation we substitute the following values for x: 8 >< But >: Letting x = =) a + b c = Letting x = =) a + b + 8 c = Letting x = =) a b c = : 8 which has nontrivial solution a = x ( ) + x row reduces to + (),, b =, c =. Algebraic simpli cation veri es that x + x x + x + x + () x + x = + is a functional identity. Hence, since a nontrivial linear combination of the elements of S produces, the set S is linearly dependent. () (a) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. The familiar trigonometric identities sin x + cos x = and sin x cos x = sin x show that the last two functions in S are linear combinations of earlier functions. Thus, the subset S = fsin x; cos x; sin x; cos xg has the same span as S. However, the identity cos x = cos x+sin x (more commonly stated as cos x = cos x sin x) shows that B = fsin x; cos x; sin xg also has the same span. We suspect that we have now eliminated all of the functions that we can, and that B is linearly independent, making it a basis for span(s). We verify the linear independence of B by considering the equation a sin x + b cos x + c sin x =., : 9

12 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Function Spaces Using this equation we substitute the following values for x: 8 >< Letting x = =) + b = Letting x = p =) >: a + b + c = : Letting x = =) a + c = Next, row reduce p to Since the system has only the trivial solution, B is linearly independent, and is thus the desired basis for span(s). (c) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. We start with the familiar trigonometric identities sin(a + b) = sin a cos b + sin b cos a and cos(a + b) = cos a cos b sin a sin b. Hence, sin(x + ) = (sin x)(cos ) + (sin )(cos x); cos(x + ) = (cos x)(cos ) (sin x)(sin ); sin(x + ) = (sin x)(cos ) + (sin )(cos x); and cos(x + ) = (cos x)(cos ) (sin x)(sin ). Therefore, each of the elements of S is a linear combination of sin x and cos x. That is, span(s) span(fsin x; cos xg). Thus, dim(span(s)). Now, if we can nd two linearly independent vectors in S, they form a basis for span(s). We claim that the subset B = fsin(x + ); cos(x + )g of S is linearly independent, and hence is a basis contained in S. To show that B is linearly independent, we plug two values for x into a sin(x+)+b cos(x+) = : Letting x = =) (sin )a + (cos )b = Letting x = =) b = : Using back substitution and the fact that sin = shows that a = b =, proving that B is linearly independent. () (a) Since v = e x + e x + ( )e x, [v] B = [; ; ] by the de nition of [v] B. (c) We start with the familiar trigonometric identity sin(a + b) = sin a cos b + sin b cos a, which yields sin(x + ) = (sin x)(cos ) + (sin )(cos x); and sin(x + ) = (sin x)(cos ) + (sin )(cos x). Dividing the rst equation by cos and the second by cos, and then subtracting and rearranging terms yields cos sin(x + ) sin sin cos sin(x + ) = cos cos cos x. 
Now, sin sin cos cos = sin cos sin cos cos cos = sin( ) cos cos = sin cos cos Therefore, the de nition of [v] B yields [v] B = [ cos.. Hence, cos x = cos sin sin ; cos sin cos sin(x + ) + sin sin(x + ). ], or approximately [:9; :]. An alternate approach is to solve for a and b in the equation cos x = a sin(x + ) + b sin(x + ). Plug in the following values for x: Letting x = =) cos = (sin )b ; Letting x = =) cos = (sin )a producing the same results for a and b as above.
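The "alternate approach" above (plug sample values of x into the coordinate equation and solve the resulting linear system) is easy to automate. The basis shifts here are hypothetical (1 and 2), standing in for the manual's lost constants:

```python
import numpy as np

# Coordinatize v = cos x relative to B = {sin(x + 1), sin(x + 2)}
# (hypothetical shifts) by solving  cos x = a*sin(x+1) + b*sin(x+2)
# at two sample values of x.
xs = [0.0, 1.0]
M = np.array([[np.sin(xv + 1), np.sin(xv + 2)] for xv in xs])
rhs = np.array([np.cos(xv) for xv in xs])
a, b = np.linalg.solve(M, rhs)

# The coordinates must reproduce cos x at every x, not just the two used:
check = np.linspace(-3, 3, 7)
assert np.allclose(a * np.sin(check + 1) + b * np.sin(check + 2),
                   np.cos(check))
print(a, b)    # [v]_B = [a, b]
```

The final assertion is the functional-identity check the section insists on: agreement at the two chosen points alone would not prove that cos x lies in span(B).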

() (a) True. In any vector space, S = {v₁, v₂} is linearly dependent if and only if either of the vectors in S can be expressed as a linear combination of the other. Since there are only two vectors in the set, neither of which is zero, this happens precisely when v₂ is a nonzero constant multiple of v₁. (b) True. Since the polynomials all have different degrees, none of them is a linear combination of the others. (c) False. By the definition of linear independence, the set of vectors would be linearly independent, not linearly dependent. (d) False. It is possible that the three values chosen for x do not produce equations proving linear independence, but choosing a different set of three values might. This principle is illustrated directly after Example in the Function Spaces section. We are only assured of linear dependence when we have found values of a, b, and c, not all zero, such that a·f₁(x) + b·f₂(x) + c·f₃(x) = 0 is a functional identity.
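The section's substitution test for linear independence of functions can be run numerically: plug n sample values of x into the dependence equation and check whether the evaluation matrix is nonsingular. Shown for {e^x, e^(2x), e^(3x)} with sample points 0, 1, 2 (the manual's own choices were lost in extraction):

```python
import numpy as np

# Evaluate each candidate function at each sample point; the columns of M
# are the coefficient columns of the linear system in a, b, c.
funcs = [np.exp, lambda x: np.exp(2 * x), lambda x: np.exp(3 * x)]
xs = [0.0, 1.0, 2.0]

M = np.array([[f(xv) for f in funcs] for xv in xs])

# Nonzero determinant => only the trivial solution => independent.
print(abs(np.linalg.det(M)) > 1e-9)    # True
```

As part (d) above warns, a singular M is inconclusive: it only means these particular sample points failed to prove independence, not that the functions are dependent.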

14 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Hessian Matrix Max-Min Problems in R n and the Hessian Matrix () (a) First, rf = [x + x + y ; x + y]. We nd critical points by setting rf =. Setting the second coordinate of rf equal to zero yields y = x. Plugging this into x + x + y = produces x =, which has solutions x = and x =. This gives the two critical points x + 8 (; ) and ( ; ). Next, H =. At the rst critical point, H =, (; ) which is positive de nite because its determinant is, which is positive, and its (; ) entry is 8, which is also positive. (See the comment just before Example in the Hessian section for easily veri ed necessary and su cient conditions for a matrix to be either positive de nite or negative de nite.) So, f has a local minimum at (; ). At the second critical point, H =. We refer to this matrix as A. Now A is neither positive de nite nor ( ;) negative de nite, since its determinant is negative. Also note that p A (x) = x + x, which, by the Quadratic Formula, has roots p. Since one of these is positive and the other is negative, we see that ( ; ) is not a local extreme value. (c) First, rf = [x+y +; x+y ]. We nd critical points by setting rf =. This corresponds to a linear system with two equations and two variables which has the unique solution ( ; ). Next, H =. H is positive de nite since its determinant is, which is positive, and its (; ) entry is, which is positive. Hence, the critical point ( ; ) is a local minimum. (e) First, rf = [x + y + z; x + y + y z + yz y + z z; x+y +y z+yz y+z z]. We nd critical points by setting rf =. To make things easier, notice that at the critical This yields y = z. With this, =, we obtain x = y. Substituting for y and z = and solving, produces 8x x =. Hence, 8x( x ) = 8x( x)( + x) =. This yields the three critical points (; ; ), ; ;, and ; ;. Next, H = y + yz + z y + yz + z. y + yz + z y + yz + z At the rst critical point, H = (;;). We refer to this matrix as A. 
Then p A (x) = x x + = (x )(x + x ). Hence, the eigenvalues of A are ; and p. Since A has both positive and negative eigenvalues, (; ; ) is not an extreme point. At the second critical point, H ( ; ; ) = 8. We refer to this matrix as B. Then 8 p B (x) = x x + 8x 8 = (x )(x x + ). Hence, the eigenvalues of B are all positive ( and p ). So f has a local minimum at ; ; by Theorem.. At the third critical point, H ( ; ; ) = H, and so f also has a local minimum at ( ; ; ) ; ;.
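The whole workflow of these exercises (set the gradient to zero, substitute each critical point into the Hessian, classify by definiteness) can be sketched symbolically. The function below is a hypothetical stand-in, since the exercises' own functions were lost in extraction:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + 3 * y**2 - 2 * x * y - 4 * x    # hypothetical f

# Critical points: solve grad f = 0.
grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)

# Classify each critical point by the Hessian's eigenvalue signs.
H = sp.hessian(f, (x, y))
for pt in crit:
    eigs = list(H.subs(pt).eigenvals())
    if all(e > 0 for e in eigs):
        kind = "local minimum"        # positive definite Hessian
    elif all(e < 0 for e in eigs):
        kind = "local maximum"        # negative definite Hessian
    else:
        kind = "not a local extremum"
    print(pt, kind)
```

For this f the single critical point is (3, 1) with Hessian eigenvalues 4 ± 2√2, both positive, so it is a local minimum, matching the determinant-and-(1,1)-entry test used in the solutions.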

() (a) True. We know from calculus that if a real-valued function f on R^n has continuous second partial derivatives, then ∂²f/∂x_i ∂x_j = ∂²f/∂x_j ∂x_i for all i, j, so the Hessian matrix is symmetric. (b) False. For example, the given matrix represents neither a positive definite nor a negative definite quadratic form since it has both a positive and a negative eigenvalue. (c) True. In part (a) we noted that the Hessian matrix is symmetric. Theorems .8 and . then establish that the matrix is orthogonally diagonalizable, hence diagonalizable. (d) True. We established that a symmetric matrix A with |A| > 0 and a₁₁ > 0 represents a positive definite quadratic form (see the comment just before Example in the Hessian section). In this problem, |A| > 0 and a₁₁ > 0. (e) False. The eigenvalues of the given matrix are clearly , 9, and . Hence, the given matrix has both a positive and a negative eigenvalue, and so cannot represent a positive definite quadratic form.
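The two definiteness tests cited above, the 2x2 minor test (det(A) > 0 and a11 > 0) and the eigenvalue test (all eigenvalues positive), can be compared directly on hypothetical symmetric matrices:

```python
import numpy as np

def posdef_by_minors(A):
    """2x2 criterion from the text: det(A) > 0 and the (1,1) entry > 0."""
    return bool(np.linalg.det(A) > 0 and A[0, 0] > 0)

def posdef_by_eigs(A):
    """General criterion: all eigenvalues of the symmetric matrix positive."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

A = np.array([[2., -1.], [-1., 3.]])   # positive definite
B = np.array([[1., 3.], [3., 1.]])     # indefinite: eigenvalues 4 and -2

print(posdef_by_minors(A), posdef_by_eigs(A))   # True True
print(posdef_by_minors(B), posdef_by_eigs(B))   # False False
```

The two tests agree on every symmetric 2x2 matrix; for larger matrices the minor test generalizes to all leading principal minors being positive.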

16 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jordan Canonical Form Jordan Canonical Form () (c) By parts (a) and (b), (A I k ) e = k, and (A I k ) e i = e i for i k. We use induction to prove that (A I k ) i e i = k for all i, i k. The Base Step holds since we already noted that (A I k ) e = k. For the Inductive Step, we assume that (A I k ) i e i = k and show that (A I k ) i e i = k. But (A I k ) i e i = (A I k ) i (A I k ) e i = (A I k ) i e i = k, completing the induction proof. Using (A I k ) i e i = k, we see that (A I k ) k e i = (A I k ) k i (A I k ) i e i = (A I k ) k i k = k for all i, i k: Now, for i k, the i th column of (A I k ) k equals (A I k ) k e i. Hence, we have shown that every column of (A I k ) k is zero, and so (A I k ) k = O k. () Note for all the parts below that two matrices in Jordan canonical form are similar to each other if and only if they contain the same Jordan blocks, rearranged in any order. (a) [] [], [] [ ], [ ] [] [ ] [] [] [ ], [ ] The three matrices on the rst line are all similar to each other. The two matrices on the second line are similar to each other, but not to those on the rst line. (c) Using the quadratic formula, x + x + has roots + i and i, each having multiplicity ; [ + i] [ i] and, which are similar to each other [ i] [ + i] (d) x + x + x = x (x + ) ;, to none of the others; [] [], [] [], which are similar to each other but, which are similar to each other but to none of the others; [ ] [ ],, [ ] [ ] which are similar to each other but to none of the others; [] [] [ ] [ ],,

17 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jordan Canonical Form [ ] [ ] [] [] [] [ ] [ ] [],, [ ] [] [ ] [] [] [ ] [] [ ],, which are similar to each other but to none of the others [ ] [] [] [ ] [] [] [ ] [ ] (f) x x = x x + = (x ) (x + ) (x i) (x + i); There are possible Jordan forms. Because each eigenvalue has algebraic multiplicity, all of the Jordan blocks have size. Hence, any Jordan canonical form matrix with these blocks is diagonal with the eigenvalues,, i, and i on the main diagonal. The possibilities result from all of the possible orders in which these eigenvalues can appear on the diagonal. All possible Jordan canonical form matrices are similar to each other because they all have the same twenty-four Jordan blocks, only in a di erent order. () (a) Consider the matrix B = We nd a Jordan canonical form for B. You can quickly calculate that p B (x) = x x x + = (x ) (x + ). Thus, the eigenvalues for B are = and =. We must nd the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. The following is one possible solution. (Other answers are possible if, for example, the eigenvalues are taken in a di erent order or if di erent generalized eigenvectors are chosen.) The Cayley-Hamilton Theorem tells us that p B (B) = (B I ) (B + I ) = O : Step A: We begin with the eigenvalue = : Step A: Let D = (B + I ) = : Then (B I ) D = O. Step A: Next, we search for the smallest positive integer k such that (B B I = ; 8 8 and so, (B I ) D = 8 8 : = O ;,, I ) k D = O. Now, while, as we have seen, (B I ) D = O. Hence, k =. Step A: We choose a maximal linearly independent subset of the columns of (B I ) k D =

18 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jordan Canonical Form (B I ) D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the rst, the rst column alone su ces. Thus, v = [ ; 8; ]. We decide whether to simplify v (by multiplying every entry by ) after v is determined. Step A: Next, we work backwards through the products of the form (B I ) k j D for j running from up to k, choosing the same column in which we found the generalized eigenvector v. Because k =, the only value we need to consider here is j =. Hence, we let v = [ 8; ; 8], the rst column of (B I ) ( ) D = D. Since the entries of v are exactly divisible by, we simplify the entries of both v and v by multiplying by. Hence, we will use v = [; ; ] and v = [; ; ]. Steps A and A: Now by construction, (B I ) v = v and (B I ) v =. Since the total number of generalized eigenvectors we have found for equals the algebraic multiplicity of, which is, we can stop our work for. We therefore have a fundamental sequence fv ; v g of generalized eigenvectors corresponding to a Jordan block associated with = in a Jordan canonical form for B. To complete this example, we still must nd a fundamental sequence of generalized eigenvectors corresponding to =. We repeat Step A for this eigenvalue. Step A: Let D = (B I ) = 8 8 Then (B + I ) D = O. Step A: Next, we search for the smallest positive integer k such that (B + I ) k D = O. However, it is obvious here that k =. Step A: Since k =, (B + I ) k D = D: Hence, each nonzero column of D = (B I ) is a generalized eigenvector for B corresponding to =. In particular, the rst column of (B I ) serves nicely as a generalized eigenvector v for B corresponding to =. The other columns of (B I ) are scalar multiples of the rst column. 
Steps A, A, and A: No further work for is needed here because = has algebraic multiplicity, and hence only one generalized eigenvector corresponding to is required. However, we can simplify v by multiplying the rst column of (B I ) by, yielding v = [; ; ]. Thus, fv g is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with = in a Jordan canonical form for B. Thus, we have completed Step A for both eigenvalues. Step B: Finally, we now have an ordered basis (v ; v ; v ) comprised of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we nd that A = P BP = = ; [ ] : which gives a Jordan canonical form for B. Note that we could have used the generalized eigenvectors in the order v ; v ; v instead, which would have resulted in the other possible Jordan

19 Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jordan Canonical Form canonical form for B, [ ] : (d) Consider the matrix B = 8 We nd a Jordan canonical form for B. You can quickly calculate that p B (x) = x x +8x = (x )(x ( i))(x ( + i)). Thus, the eigenvalues for B are = ; = i, and = + i. Because each eigenvalue has geometric multiplicity, each of the three Jordan blocks will be blocks. Hence, the Jordan canonical form matrix will be diagonal. We could just solve this problem using our method for diagonalization from Section. of the textbook. However, we shall use the Method of this section here. In what follows, we consider the eigenvalues in the order given above. (Other answers are possible if, for example, the eigenvalues are taken in a di erent order or if di erent eigenvectors are chosen.) The Cayley-Hamilton Theorem tells us that p B (B) = (B I ) (B ( i)i ) (B ( + i)i ) = O : Step A: We begin with the eigenvalue = : Step A: Let D = (B ( i)i ) (B ( + i)i ) = : Then (B I ) D = O. Step A: Next, we search for the smallest positive integer k such that (B I ) k D = O. Clearly, k =. Step A: We choose a maximal linearly independent subset of the columns of (B I ) k D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the rst, the rst column alone su ces. We can multiply this column by, obtaining v = [; ; ]. Steps A, A, and A: No further work for is needed here because = has algebraic multiplicity, and hence only one generalized eigenvector corresponding to is required. Thus, fv g is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with = in a Jordan canonical form for B. Thus, we have completed Step A for this eigenvalue. To complete this example, we still must nd a fundamental sequences of generalized eigenvectors corresponding to and. We now repeat Step A for = i. 
Step A: Let D = (B − λ₁I₃)(B − λ₃I₃), a matrix with complex entries. Then (B − λ₂I₃)D = O₃. Step A: Next, we search for the smallest positive integer k such that (B − λ₂I₃)^k D = O₃.

Clearly, k = 1. Step A: We choose a maximal linearly independent subset of the columns of (B − λ₂I₃)^(k−1)D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (The second and third columns are complex scalar multiples of the first column. We can discover these scalars by dividing the first entry in each column by the first entry in the first column, and then verify that these scalars are correct for the remaining entries.) We can multiply this column by a convenient scalar, obtaining the eigenvector v₂. Steps A, A, and A: No further work for λ₂ is needed here because λ₂ has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₂ is required. Thus, {v₂} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ₂ in a Jordan canonical form for B, and we have completed Step A for this eigenvalue. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₃. We now repeat Step A for λ₃. Step A: Let D = (B − λ₁I₃)(B − λ₂I₃). Then (B − λ₃I₃)D = O₃. Step A: Next, we search for the smallest positive integer k such that (B − λ₃I₃)^k D = O₃. Clearly, k = 1. Step A: We choose a maximal linearly independent subset of the columns of (B − λ₃I₃)^(k−1)D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (As before, the second and third columns are complex scalar multiples of the first column.) We can multiply this column by a convenient scalar, obtaining the eigenvector v₃. Steps A, A, and A: No further work for λ₃ is needed here because λ₃ has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₃ is required.
Thus, {v₃} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ₃ in a Jordan canonical form for B, and we have completed Step A for all three eigenvalues. Step B: Finally, we now have an ordered basis (v₁, v₂, v₃) consisting of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we find that A = P⁻¹BP is the diagonal matrix whose 1 × 1 Jordan blocks are [λ₁], [λ₂], and [λ₃], which gives a Jordan canonical form for B.

(g) We are given that p_B(x) = (x + 1)⁵. Therefore, the only eigenvalue for B is λ = −1. (Other answers are possible if, for example, different generalized eigenvectors are chosen or they are used in a different order.)
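Part (d)'s conclusion, that three distinct eigenvalues force a diagonal Jordan canonical form, can be sanity-checked numerically. The sketch below is Python with NumPy (code is not part of the manual), and it uses a stand-in matrix, since the entries of B in part (d) are not reproduced here: the companion matrix of (x - 2)(x² - 2x + 2), whose eigenvalues are 2, 1 - i, and 1 + i.

```python
import numpy as np

# Stand-in 3x3 real matrix with one real eigenvalue and a complex-conjugate
# pair: the companion matrix of x^3 - 4x^2 + 6x - 4 = (x - 2)(x^2 - 2x + 2).
B = np.array([[0.0, 0.0,  4.0],
              [1.0, 0.0, -6.0],
              [0.0, 1.0,  4.0]])

vals, vecs = np.linalg.eig(B)     # columns of vecs are eigenvectors
P = vecs
J = np.linalg.inv(P) @ B @ P      # P^{-1} B P

# With three distinct eigenvalues, every Jordan block is 1x1, so J is
# (numerically) diagonal with the eigenvalues on its diagonal.
off_diag = J - np.diag(np.diag(J))
print(np.allclose(off_diag, 0))          # True
print(np.sort_complex(np.diag(J)))       # approx 1-1j, 1+1j, 2
```

Because the eigenvalues are distinct, the eigenvector matrix P is invertible, and P⁻¹BP recovers the diagonal Jordan form consisting of three 1 × 1 blocks, just as in part (d).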

Step A: We begin by finding generalized eigenvectors for λ = −1. Step A: p_B(B) = (B + I₅)⁵ = O₅. We let D = I₅. Step A: We calculate as follows: (B + I₅)D = B + I₅ ≠ O₅, while (B + I₅)²D = (B + I₅)² = O₅. Hence k = 2. Step A: Each nonzero column of (B + I₅)^(k−1)D = (B + I₅)D is a generalized eigenvector for B corresponding to λ = −1. We let v₁ be the first column of (B + I₅)D. (We do not divide this column by its common factor here, since not all of the entries of the vector v₃ calculated in the next step would then remain integers.) The second column of (B + I₅)D is not a scalar multiple of v₁, so we let v₂ be the second column of (B + I₅)D. Row reduction shows that the remaining columns of (B + I₅)D are linear combinations of the first two (see the adjustment step, below). Step A: Next, we let v₃ = [1, 0, 0, 0, 0] and v₄ = [0, 1, 0, 0, 0], the first and second columns of D (= I₅), respectively. Thus, (B + I₅)v₃ = v₁ and (B + I₅)v₄ = v₂. This gives us our first two fundamental sequences of generalized eigenvectors, {v₁, v₃} and {v₂, v₄}, corresponding to two Jordan blocks for λ = −1. Steps A and A: We have two fundamental sequences of generalized eigenvectors consisting of a total of 4 vectors. But, since the algebraic multiplicity of λ = −1 is 5, we must still find another generalized eigenvector. We do this by making an adjustment to the matrix D. Recall that each column of (B + I₅)D is a linear combination of v₁ and v₂. In fact, the row reduction we did in Step A shows that

1st column of (B + I₅)D = f₁v₁ + g₁v₂,
2nd column of (B + I₅)D = f₂v₁ + g₂v₂,
3rd column of (B + I₅)D = f₃v₁ + g₃v₂,
4th column of (B + I₅)D = f₄v₁ + g₄v₂, and
5th column of (B + I₅)D = f₅v₁ + g₅v₂,

where the fᵢs and gᵢs represent the respective coefficients of v₁ and v₂ for each column of (B + I₅)D. We create a new matrix H whose i-th column is fᵢv₃ + gᵢv₄.
Thus, H is the 5 × 5 matrix with these columns. Then, since (B + I₅)v₃ = v₁ and (B + I₅)v₄ = v₂, we get that (B + I₅)H = (B + I₅)D. Let D′ = D − H = I₅ − H. Clearly, (B + I₅)²D′ = O₅. We now revisit Steps A through A using the matrix D′ instead of D. The purpose of this adjustment to the matrix D is to attempt to eliminate the effects of the two fundamental sequences of length 2, thus unmasking shorter fundamental sequences.

Step A: We have (B + I₅)D′ = O₅. Hence k = 1. Step A: We look for new generalized eigenvectors among the columns of D′. We must choose columns of D′ that are not only linearly independent of each other, but also of our previously computed generalized eigenvectors (at the first level). We check this using the Independence Test Method by row reducing the matrix whose first two columns are v₁ and v₂ and whose remaining columns are the nonzero columns of D′. The resulting reduced row echelon form matrix shows that the third column of D′ is linearly independent of v₁ and v₂, and that the remaining columns of D′ are linear combinations of v₁, v₂, and the third column of D′. Therefore, we let v₅ be a convenient nonzero scalar multiple of the 3rd column of D′. Steps A, A, and A: Because k = 1, the generalized eigenvector v₅ for λ = −1 corresponds to a 1 × 1 Jordan block. This gives the sequence {v₅}, and we do not need to find more vectors for this sequence. We now have the following three fundamental sequences of generalized eigenvectors for λ = −1: {v₁, v₃}, {v₂, v₄}, and {v₅}. Since we have now found 5 generalized eigenvectors for B corresponding to λ = −1, and since the algebraic multiplicity of λ = −1 is 5, we are finished with Step A. Step B: We now have the ordered basis (v₁, v₃, v₂, v₄, v₅) for C⁵ consisting of these fundamental sequences of generalized eigenvectors for B. If we let P be the matrix whose columns are the vectors in this ordered basis, we obtain the following

Jordan canonical form for B: A = P⁻¹BP, which consists of two 2 × 2 Jordan blocks and one 1 × 1 Jordan block, all for the eigenvalue λ = −1.

() (b) Consider the given 3 × 3 matrix B. We find a Jordan canonical form for B. You can quickly calculate that p_B(x) factors as (x − λ₁)²(x − λ₂) for two distinct eigenvalues λ₁ and λ₂. We must find the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. Now, the Cayley-Hamilton Theorem tells us that p_B(B) = (B − λ₁I₃)²(B − λ₂I₃) = O₃. Step A: We begin with the eigenvalue λ₁. Step A: Let D = B − λ₂I₃. Then (B − λ₁I₃)²D = O₃. Step A: Next, we search for the smallest positive integer k such that (B − λ₁I₃)^k D = O₃. Now, (B − λ₁I₃)D ≠ O₃, while, as we have seen, (B − λ₁I₃)²D = O₃. Hence, k = 2. Step A: We choose a maximal linearly independent subset of the columns of (B − λ₁I₃)^(k−1)D = (B − λ₁I₃)D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. Thus, we let u₁ be the first column of (B − λ₁I₃)D. Step A: Next, we work backwards through the products of the form (B − λ₁I₃)^(k−1−j)D for j running from 1 up to k − 1, choosing the same column in which we found the generalized eigenvector u₁. Because k = 2, the only value we need to consider here is j = 1. Hence, we let u₂ be the first column of (B − λ₁I₃)⁰D = D. Steps A and A: Now by construction, (B − λ₁I₃)u₂ = u₁ and (B − λ₁I₃)u₁ = 0. Hence, {u₁, u₂} forms a fundamental sequence of generalized eigenvectors for λ₁. Since the total number of generalized eigenvectors we have found for λ₁ equals the algebraic multiplicity of λ₁,

which is 2, we can stop our work for λ₁. We therefore have a fundamental sequence {u₁, u₂} of generalized eigenvectors corresponding to a 2 × 2 Jordan block associated with λ₁ in a Jordan canonical form for B. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₂. We repeat Step A for this eigenvalue. Step A: Let D = (B − λ₁I₃)². Then (B − λ₂I₃)D = O₃. Step A: Next, we search for the smallest positive integer k such that (B − λ₂I₃)^k D = O₃. However, it is obvious here that k = 1. Step A: Since k = 1, (B − λ₂I₃)^(k−1)D = D. Hence, each nonzero column of D = (B − λ₁I₃)² is a generalized eigenvector for B corresponding to λ₂. In particular, the first column of (B − λ₁I₃)² serves nicely as a generalized eigenvector u₃ for B corresponding to λ₂. The other columns of (B − λ₁I₃)² are scalar multiples of the first column. Steps A, A, and A: No further work for λ₂ is needed here because λ₂ has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₂ is required. Thus, {u₃} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₂ in a Jordan canonical form for B, and we have completed Step A for both eigenvalues. Step B: Finally, we now have an ordered basis (u₁, u₂, u₃) consisting of fundamental sequences of generalized eigenvectors for B. Letting Q be the matrix whose columns are these basis vectors, we find that A = Q⁻¹BQ consists of a 2 × 2 Jordan block for λ₁ followed by the 1 × 1 block [λ₂], which gives a Jordan canonical form for B.

() (a) (A + aI_n)(A + bI_n) = (A + aI_n)A + (A + aI_n)(bI_n) = A² + aI_nA + A(bI_n) + (aI_n)(bI_n) = A² + aA + bA + abI_n = A² + bA + aA + baI_n = A² + bI_nA + A(aI_n) + (bI_n)(aI_n) = (A + bI_n)A + (A + bI_n)(aI_n) = (A + bI_n)(A + aI_n).

() Let A, B, P, and q(x) be as given in the statement of the problem. We proceed using induction on k, the degree of the polynomial q. Base Step: Suppose k = 0.
Then q(x) = c for some c ∈ C. Hence, q(A) = cI_n = q(B). Therefore, P⁻¹q(A)P = P⁻¹(cI_n)P = cP⁻¹I_nP = cP⁻¹P = cI_n = q(B). Inductive Step: Suppose that s(B) = P⁻¹s(A)P for every kth degree polynomial s(x). We need to prove that q(B) = P⁻¹q(A)P, where q(x) has degree k + 1. Suppose that q(x) = a_{k+1}x^{k+1} + a_k x^k + ··· + a_1 x + a_0. Then q(A) = a_{k+1}A^{k+1} + a_k A^k + ··· + a_1 A + a_0 I_n = (a_{k+1}A^k + a_k A^{k−1} + ··· + a_1 I_n)A + a_0 I_n = s(A)A + a_0 I_n, where s(x) = a_{k+1}x^k + a_k x^{k−1} + ··· + a_1. Similarly, q(B) = s(B)B + a_0 I_n for the same polynomial s(x). Therefore,

P⁻¹q(A)P = P⁻¹(s(A)A + a_0 I_n)P = P⁻¹s(A)AP + P⁻¹(a_0 I_n)P = P⁻¹s(A)(PP⁻¹)AP + a_0 P⁻¹I_nP = (P⁻¹s(A)P)(P⁻¹AP) + a_0 P⁻¹P = s(B)(P⁻¹AP) + a_0 I_n (by the inductive hypothesis) = s(B)B + a_0 I_n = q(B).

() (a) We compute the (i, j) entry of A². First consider the case in which i ≤ k and j ≤ k. The (i, j) entry of A² = a_{i1}a_{1j} + ··· + a_{ik}a_{kj} + a_{i(k+1)}a_{(k+1)j} + ··· + a_{i(k+m)}a_{(k+m)j} = ((i, j) entry of A₁₁²) + ((i, j) entry of A₁₂A₂₁). If i > k and j ≤ k, then the (i, j) entry of A² = a_{i1}a_{1j} + ··· + a_{ik}a_{kj} + a_{i(k+1)}a_{(k+1)j} + ··· + a_{i(k+m)}a_{(k+m)j} = ((i, j) entry of A₂₁A₁₁) + ((i, j) entry of A₂₂A₂₁). The other two cases are handled similarly.

(9) (g) The number of Jordan blocks having size exactly k × k is the number having size at least k × k minus the number having size at least (k + 1) × (k + 1). By part (f), r_{k−1}(A) − r_k(A) gives the total number of Jordan blocks having size at least k × k corresponding to λ. Similarly, the number of Jordan blocks having size at least (k + 1) × (k + 1) is r_k(A) − r_{k+1}(A). Hence the number of Jordan blocks having size exactly k × k equals (r_{k−1}(A) − r_k(A)) − (r_k(A) − r_{k+1}(A)) = r_{k−1}(A) − 2r_k(A) + r_{k+1}(A).

() (a) False. For example, any matrix in Jordan canonical form that has more than one Jordan block cannot be similar to a matrix with a single Jordan block. This follows from the uniqueness statement in Theorem. The 2 × 2 identity matrix, which consists of two 1 × 1 Jordan blocks, is a specific example of such a matrix. (b) True. This is stated in the first paragraph of the subsection entitled Finding A Jordan Canonical Form. (c) False. The geometric multiplicity of an eigenvalue equals the dimension of the eigenspace corresponding to that eigenvalue, not the dimension of the generalized eigenspace. For a specific example, consider the 2 × 2 matrix A whose rows are [0, 1] and [0, 0], whose only eigenvalue is λ = 0. The eigenspace for λ = 0 is spanned by e₁, and so the dimension of the eigenspace, which equals the geometric multiplicity of λ = 0, is 1.
However, the generalized eigenspace for A is {v ∈ C² | (A − 0I₂)^k v = 0 for some positive integer k}. But A − 0I₂ = A, so (A − 0I₂)² = A² = O₂. Thus, the generalized eigenspace for A is C², which has dimension 2. (Note, by the way, that p_A(x) = x², and so the eigenvalue 0 has algebraic multiplicity 2.) (d) False. According to Theorem, a Jordan canonical form for a matrix is unique, except for the order in which the Jordan blocks appear on the main diagonal. However, if there are two or more Jordan blocks, and if they are all identical, then all of the Jordan canonical form matrices are the same, since changing the order of the blocks in such a case does not change the matrix. One example is the Jordan canonical form matrix I₂, which has two identical 1 × 1 Jordan blocks. (e) False. For a simple counterexample, suppose A = J = I₂, and let P and Q be any two distinct nonsingular matrices, such as I₂ and 2I₂. (f) True. Theorem asserts that there is a nonsingular matrix P such that P⁻¹AP is in Jordan canonical form. The comments just before Example show that the columns of P are a basis for Cⁿ consisting of generalized eigenvectors for A. Since these columns span Cⁿ, every vector in Cⁿ can be expressed as a linear combination of these column vectors.
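The similarity fact proved in the induction exercise above, q(P⁻¹AP) = P⁻¹q(A)P for any polynomial q, is easy to spot-check numerically. In this sketch (Python with NumPy; the matrix A, the invertible P, and the polynomial q(x) = x³ − 2x + 5 are arbitrary choices for illustration, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # a random 3x3 matrix is generically invertible

def q(M):
    # evaluate q(x) = x^3 - 2x + 5 at a square matrix M
    I = np.eye(M.shape[0])
    return np.linalg.matrix_power(M, 3) - 2 * M + 5 * I

B = np.linalg.inv(P) @ A @ P         # B = P^{-1} A P
print(np.allclose(q(B), np.linalg.inv(P) @ q(A) @ P))   # True
```

The check works for any polynomial, since each power satisfies (P⁻¹AP)^m = P⁻¹A^m P.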
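The counting formula from Exercise (9)(g), namely that the number of k × k Jordan blocks for λ equals r_{k−1}(A) − 2r_k(A) + r_{k+1}(A) with r_k(A) = rank((A − λI)^k), and the eigenspace comparison from part (c) can both be checked on the 2 × 2 matrix used in part (c). The code below is an illustrative Python/NumPy sketch, not part of the manual.

```python
import numpy as np

# A has the single eigenvalue lambda = 0, geometric multiplicity 1,
# algebraic multiplicity 2, and a single 2x2 Jordan block.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam = 0.0
n = A.shape[0]
M = A - lam * np.eye(n)

def r(k):
    # r_k(A) = rank((A - lam*I)^k); note M^0 = I has full rank n
    return np.linalg.matrix_rank(np.linalg.matrix_power(M, k))

geom_mult = n - r(1)    # dimension of the ordinary eigenspace
gen_dim = n - r(n)      # dimension of the generalized eigenspace (ranks stabilize by k = n)

def blocks(k):
    # number of Jordan blocks of size exactly k x k for lambda
    return r(k - 1) - 2 * r(k) + r(k + 1)

print(geom_mult, gen_dim)        # 1 2
print(blocks(1), blocks(2))      # 0 1  -- one 2x2 block, no 1x1 blocks
```

The mismatch between geom_mult = 1 and gen_dim = 2 is exactly the point of part (c), and the block counts confirm the single 2 × 2 block.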
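The Method of this section can also be sketched end to end in code. The example below (Python with NumPy, not from the manual) manufactures a 3 × 3 matrix B = SJS⁻¹ with eigenvalues 2 (algebraic multiplicity 2, one 2 × 2 Jordan block) and 3 (multiplicity 1), mirroring the block structure of Exercise ()(b); the matrices S and J here are assumptions chosen for illustration.

```python
import numpy as np

# Manufacture B = S J S^{-1} with one 2x2 Jordan block (eigenvalue 2)
# and one 1x1 block (eigenvalue 3); S and J are illustrative choices.
J_true = np.array([[2.0, 1.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 3.0]])
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
B = S @ J_true @ np.linalg.inv(S)
I = np.eye(3)

# Step A for lambda_1 = 2: D is the product of the remaining factors of p_B(x).
D = B - 3.0 * I
# Smallest k with (B - 2I)^k D = O; here (B - 2I) D != O but (B - 2I)^2 D = O, so k = 2.
assert not np.allclose((B - 2.0 * I) @ D, 0)
assert np.allclose(np.linalg.matrix_power(B - 2.0 * I, 2) @ D, 0)
# u1 = a nonzero column of (B - 2I) D is an eigenvector; u2 = the same
# column of D satisfies (B - 2I) u2 = u1, giving the chain {u1, u2}.
col = int(np.flatnonzero(np.linalg.norm((B - 2.0 * I) @ D, axis=0) > 1e-10)[0])
u1 = ((B - 2.0 * I) @ D)[:, col]
u2 = D[:, col]

# Step A for lambda_2 = 3: D2 = (B - 2I)^2; any nonzero column is an eigenvector.
D2 = np.linalg.matrix_power(B - 2.0 * I, 2)
col = int(np.flatnonzero(np.linalg.norm(D2, axis=0) > 1e-10)[0])
u3 = D2[:, col]

# Step B: the ordered basis (u1, u2, u3) makes Q^{-1} B Q a Jordan canonical form.
Q = np.column_stack([u1, u2, u3])
print(np.round(np.linalg.inv(Q) @ B @ Q, 6))   # blocks [[2, 1], [0, 2]] and [3]
```

The chain relation (B − 2I)u₂ = u₁ holds exactly by construction, so Q⁻¹BQ reproduces the 2 × 2 block for eigenvalue 2 followed by the 1 × 1 block [3], regardless of how the columns were scaled.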


Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

The minimal polynomial

The minimal polynomial The minimal polynomial Michael H Mertens October 22, 2015 Introduction In these short notes we explain some of the important features of the minimal polynomial of a square matrix A and recall some basic

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

SOLUTIONS TO EXERCISES FOR MATHEMATICS 133 Part 1. I. Topics from linear algebra

SOLUTIONS TO EXERCISES FOR MATHEMATICS 133 Part 1. I. Topics from linear algebra SOLUTIONS TO EXERCISES FOR MATHEMATICS 133 Part 1 Winter 2009 I. Topics from linear algebra I.0 : Background 1. Suppose that {x, y} is linearly dependent. Then there are scalars a, b which are not both

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information

2 Vector Products. 2.0 Complex Numbers. (a~e 1 + b~e 2 )(c~e 1 + d~e 2 )=(ac bd)~e 1 +(ad + bc)~e 2

2 Vector Products. 2.0 Complex Numbers. (a~e 1 + b~e 2 )(c~e 1 + d~e 2 )=(ac bd)~e 1 +(ad + bc)~e 2 Vector Products Copyright 07, Gregory G. Smith 8 September 07 Unlike for a pair of scalars, we did not define multiplication between two vectors. For almost all positive n N, the coordinate space R n cannot

More information

Linear Algebra. James Je Heon Kim

Linear Algebra. James Je Heon Kim Linear lgebra James Je Heon Kim (jjk9columbia.edu) If you are unfamiliar with linear or matrix algebra, you will nd that it is very di erent from basic algebra or calculus. For the duration of this session,

More information

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)

More information

Web Solutions for How to Read and Do Proofs

Web Solutions for How to Read and Do Proofs Web Solutions for How to Read and Do Proofs An Introduction to Mathematical Thought Processes Sixth Edition Daniel Solow Department of Operations Weatherhead School of Management Case Western Reserve University

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Linear Algebra. and

Linear Algebra. and Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Vector Geometry. Chapter 5

Vector Geometry. Chapter 5 Chapter 5 Vector Geometry In this chapter we will look more closely at certain geometric aspects of vectors in R n. We will first develop an intuitive understanding of some basic concepts by looking at

More information

DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS

DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS DIAGONALIZATION BY SIMILARITY TRANSFORMATIONS The correct choice of a coordinate system (or basis) often can simplify the form of an equation or the analysis of a particular problem. For example, consider

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Exercise Set 7.2. Skills

Exercise Set 7.2. Skills Orthogonally diagonalizable matrix Spectral decomposition (or eigenvalue decomposition) Schur decomposition Subdiagonal Upper Hessenburg form Upper Hessenburg decomposition Skills Be able to recognize

More information

Chapter 7: Symmetric Matrices and Quadratic Forms

Chapter 7: Symmetric Matrices and Quadratic Forms Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

The Gauss-Jordan Elimination Algorithm

The Gauss-Jordan Elimination Algorithm The Gauss-Jordan Elimination Algorithm Solving Systems of Real Linear Equations A. Havens Department of Mathematics University of Massachusetts, Amherst January 24, 2018 Outline 1 Definitions Echelon Forms

More information

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

av 1 x 2 + 4y 2 + xy + 4z 2 = 16. 74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

LINEAR ALGEBRA. VECTOR CALCULUS

LINEAR ALGEBRA. VECTOR CALCULUS im7.qxd 9//5 :9 PM Page 5 Part B LINEAR ALGEBRA. VECTOR CALCULUS Part B consists of Chap. 7 Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Chap. 8 Linear Algebra: Matrix Eigenvalue Problems

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information