Student Solutions Manual for Web Sections
Elementary Linear Algebra, 4th Edition
Stephen Andrilli and David Hecker


Copyright, Elsevier Inc. All rights reserved.

Table of Contents
Lines and Planes and the Cross Product in R^3
Change of Variables and the Jacobian
Function Spaces
Max-Min Problems in R^n and the Hessian Matrix
Jordan Canonical Form

Lines and Planes and the Cross Product in R^3

() (a) Let (x_1, y_1, z_1) = ( , , ) and [a, b, c] = [ , , ]. Then, from Theorem , parametric equations for the line are x = + t, y = + t, z = t (t ∈ R); that is, x = , y = + t, z = t (t ∈ R).

(c) Let (x_1, y_1, z_1) = ( , , ) and (x_2, y_2, z_2) = ( , , ). Then a vector in the direction of the line is [x_2 − x_1, y_2 − y_1, z_2 − z_1] = [ , , ]. Let [a, b, c] = [ , , ]. Then, from Theorem , parametric equations for the line are x = t, y = t, z = + t (t ∈ R). (Reversing the order of the points gives (x_1, y_1, z_1) = ( , , ) and [a, b, c] = [ , , ], and so another valid set of parametric equations for the line is x = + t, y = + t, z = t (t ∈ R).)

(e) Let (x_1, y_1, z_1) = ( , , ). A vector in the direction of the given line is [a, b, c] = [ , , ] (since these values are the respective coefficients of the parameter t), and therefore this is a vector in the direction of the desired line as well. Then, from Theorem , parametric equations for the line are x = t, y = + t, z = + t (t ∈ R).

() (a) Setting the expressions for x equal gives + t = 9 + s, which means s = t. Setting the expressions for y equal gives t = + s, which means s + t = . Substituting t for s into s + t = leads to t = and s = . Plugging these values into both expressions for x, y, and z in turn results in the same values: x = , y = , and z = . Hence, the given lines intersect at a single point ( , , ).

(c) Setting the expressions for x equal gives t = 8 s, which means s = + t. (The same result is obtained by setting the expressions for y equal.) Setting the expressions for z equal gives t = 9 s, which means s = t + . Since it is impossible for these two expressions for s to be equal ( + t = t + ) for any value of t, the given lines do not intersect.

(e) Setting the expressions for x equal gives t = 8s 9, which means t = s. Similarly, setting the expressions for y equal and setting the expressions for z equal also gives t = s. (In other words, substituting s for t in the expressions for the first given line for x, y, z, respectively, gives the expressions for the second given line for x, y, z, respectively.) Thus, the lines are actually identical. The intersection of the lines is therefore the set of all points on (either) line; that is, the intersection consists of all points on the line x = t, y = t + , z = t (t ∈ R).

() (b) A vector in the direction of l_1 is a = [ , , ]. A vector in the direction of l_2 is b = [ , , ]. If θ is the angle between vectors a and b, then cos θ = (a · b)/(||a|| ||b||) = … . Then θ = cos⁻¹(…) ≈ … .

() (b) Let (x_1, y_1, z_1) = ( , , ). In the solution to Exercise (c) above, we found that [a, b, c] = [ , , ] is a vector in the direction of the line. Substituting these values into the formula for the symmetric equations of the line gives (x − x_1)/a = (y − y_1)/b = (z − z_1)/c. (If, instead, we had considered the points in reverse order and chosen (x_1, y_1, z_1) = ( , , ), and calculated [a, b, c] = [ , , ] as a vector in the direction of the line, we would have obtained another valid form for the symmetric equations of the line.)

() (a) Let (x_1, y_1, z_1) = ( , , ), and let [a, b, c] = [ , , ]. Then by Theorem , the equation of the plane is ax + by + cz = ax_1 + by_1 + cz_1, which simplifies to x + y + z = .
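The line computations above are easy to sanity-check numerically. The following is a minimal sketch, not from the manual, that builds the direction vector [x_2 − x_1, y_2 − y_1, z_2 − z_1] of a line through two points and tests whether two parametric lines intersect; all coordinates below are invented for illustration.

```python
# Illustrative sketch (not from the manual): parametric lines in R^3.
import numpy as np

def line_through(p1, p2):
    """Return (point, direction) for the line through p1 and p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return p1, p2 - p1                        # direction [x2-x1, y2-y1, z2-z1]

def intersect(p1, d1, p2, d2, tol=1e-9):
    """Solve p1 + t*d1 = p2 + s*d2 in the least-squares sense; return the
    common point if the residual is (numerically) zero, else None."""
    A = np.column_stack([d1, -d2])            # unknowns are (t, s)
    (t, s), *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    q1, q2 = p1 + t * d1, p2 + s * d2
    return q1 if np.allclose(q1, q2, atol=tol) else None

p0, d = line_through([1, 2, 3], [4, 6, 3])    # hypothetical points
print("point:", p0, "direction:", d)          # parametrize as p0 + t*d
print(intersect(p0, d, np.array([4.0, 6, 3]), np.array([0.0, 0, 1])))
```

A `None` result corresponds to non-intersecting (parallel or skew) lines, as in Exercise () (c) above.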

(d) This plane goes through the origin (x_1, y_1, z_1) = (0, 0, 0) with normal vector [a, b, c] = [ , , ]. Then by Theorem , the equation for the plane is ax + by + cz = ax_1 + by_1 + cz_1, which simplifies to z = 0.

() (a) A normal vector to the given plane is a = [ , , ]. Since ||a|| = …, a unit normal vector to the plane is given by [ , , ].

() (a) Normal vectors to the given planes are, respectively, a = [ , , 8] and b = [ , , ]. If θ is the angle between vectors a and b, then cos θ = (a · b)/(||a|| ||b||) = … . Then θ = cos⁻¹(…) ≈ … .

(8) (a) Let x = [x_1, x_2, x_3] = [ , , ] and y = [y_1, y_2, y_3] = [ , , ]. Then x × y = [x_2y_3 − x_3y_2, x_3y_1 − x_1y_3, x_1y_2 − x_2y_1] = [ , , ].

(g) First note that [ , , ] × [ , , ] = [ , , ]. Thus, [ , , ] · ([ , , ] × [ , , ]) = [ , , ] · [ , , ] = 8.

() (a) Let A = ( , , ), B = ( , , ), C = ( , , ). Let x be the vector from A to B; then x = [ , , ]. Let y be the vector from A to C; then y = [ , , ]. Hence, a normal vector to the plane is x × y = [ , , ]. Therefore, the equation of the plane is ax + by + cz = ax_1 + by_1 + cz_1, or x + y = …, which reduces to x + y = … .

() (b) From the solution to Exercise (b) above, we know that a vector in the direction of l_1 is [ , , ], and a vector in the direction of l_2 is [ , , ]. The cross product of these vectors gives [ , , ] as a normal vector to the plane. Dividing by …, we find that [ , , ] is also a normal vector to the plane. We next find a point of intersection for the lines. Setting the expressions for y equal gives = s, which means that s = … . Setting the expressions for x equal gives 9 t = s, which means s = t. Substituting for s into s = t leads to t = … . Plugging these values for s and t into both expressions for x, y, and z, respectively, yields the same values: x = …, y = …, z = … . Hence, the given lines intersect at a single point ( , , ). Finally, the equation for the plane is ax + by + cz = ax_1 + by_1 + cz_1, which reduces to x + z = 9.

(8) (c) The normal vector for the plane x + z = … is [ , , ], and the normal vector for the plane y + z = 9 is [ , , ]. The cross product [ , , ] of these vectors produces a vector in the direction of the line of intersection. To find a point of intersection of the planes, choose z = … to obtain x = … and y = 9. That is, one point where the planes intersect is ( , 9, ). Thus, the line of intersection is x = t, y = 9 t, z = t (t ∈ R).

(9) (a) A vector in the direction of the line l is [a, b, c] = [ , , ]. By setting t = …, we obtain the point (x_1, y_1, z_1) = ( , , ) on l. The vector from this point to the given point (x_2, y_2, z_2) = ( , , ) is [x_2 − x_1, y_2 − y_1, z_2 − z_1] = [ , , ]. Finally, using Theorem , the shortest distance from the given point to l is ||[ , , ] × [ , , ]|| / ||[ , , ]|| = ||[ , 8, ]|| / … ≈ … .

() (a) Since the given point is (x_0, y_0, z_0) = ( , , ), and the given plane is x y z = …, we have a = …, b = …, c = …, and d = … .
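Since the numeric data did not survive transcription, here is a small illustrative sketch, not from the manual, of the cross-product construction used in Exercise () (a): a normal vector n = x × y from two edge vectors, and the plane equation ax + by + cz = ax_1 + by_1 + cz_1. The three vertices are made up.

```python
# Illustrative sketch (not from the manual): plane through three points.
import numpy as np

A = np.array([1.0, 0.0, 2.0])   # hypothetical vertices
B = np.array([3.0, 1.0, 0.0])
C = np.array([0.0, 2.0, 1.0])

x = B - A                        # vector from A to B
y = C - A                        # vector from A to C
n = np.cross(x, y)               # normal vector [a, b, c] to the plane

a, b, c = n
d = n @ A                        # a*x1 + b*y1 + c*z1
print(f"plane: {a}x + {b}y + {c}z = {d}")
print(np.isclose(n @ B, d), np.isclose(n @ C, d))  # B and C satisfy it too
```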

From the formula in Theorem , the shortest distance from the given point to the given plane is |ax_0 + by_0 + cz_0 − d| / √(a² + b² + c²) = … .

(c) The given point is (x_0, y_0, z_0) = ( , , ). First, we find the equation of the plane through the points A = ( , , ), B = ( , , ), and C = ( , , ). Note that the vector from A to B is [ , , ] and the vector from A to C is [ , , ]. Thus, a normal vector to the plane is the cross product [ , , ] of these vectors. Hence, the equation of the plane is x + ( )y + ( )z = …, or x y z = … . That is, a = …, b = …, c = …, and d = … . From the formula in Theorem , the shortest distance from the given point to the given plane is |ax_0 + by_0 + cz_0 − d| / √(a² + b² + c²) ≈ … .

() (a) A vector in the direction of the line l_1 is [ , , ], and a vector in the direction of the line l_2 is [ , , ]. (Note that the lines are not parallel.) The cross product of these vectors is v = [ , 8, ]. By setting t = … and s = …, respectively, we find that (x_1, y_1, z_1) = ( , , ) is a point on l_1, and (x_2, y_2, z_2) = ( , , ) is a point on l_2. Hence, the vector from (x_1, y_1, z_1) to (x_2, y_2, z_2) is w = [x_2 − x_1, y_2 − y_1, z_2 − z_1] = [ , , ]. Finally, from the formula given in this exercise, we find that the shortest distance between the given lines is |v · w| / ||v|| = … ≈ … .

(c) A vector in the direction of the line l_1 is [ , , ], and a vector in the direction of the line l_2 is [ , , ]. (Note that the lines are not parallel.) The cross product of these vectors is v = [ , , ]. By setting t = … and s = …, respectively, we find that (x_1, y_1, z_1) = ( , , 8) is a point on l_1, and (x_2, y_2, z_2) = ( , , ) is a point on l_2. Hence, the vector from (x_1, y_1, z_1) to (x_2, y_2, z_2) is w = [ , , 9]. Finally, from the formula given in this exercise, we find that the shortest distance between the given lines is |v · w| / ||v|| = 0. That is, the given lines actually intersect. (In fact, it can easily be shown that the lines intersect at ( , , 8).)

() (a) Labeling the first plane as ax + by + cz = d_1 and the second plane as ax + by + cz = d_2, we have [a, b, c] = [ , , ], and d_1 = …, d_2 = … . From the formula in Exercise , the shortest distance between the given planes is |d_2 − d_1| / √(a² + b² + c²) = … ≈ … .

(c) Divide the equation of the first plane by …, and divide the equation of the second plane by …, to obtain equations for the planes whose coefficients agree: x + y z = 9 and x + y z = … .
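A quick hedged sketch, not from the manual, of the point-to-plane distance formula |ax_0 + by_0 + cz_0 − d| / √(a² + b² + c²) applied above; the point and plane coefficients are invented.

```python
# Illustrative sketch (not from the manual): point-to-plane distance.
import math

def point_plane_distance(p, a, b, c, d):
    """Shortest distance from point p = (x0, y0, z0) to the plane ax+by+cz = d."""
    x0, y0, z0 = p
    return abs(a * x0 + b * y0 + c * z0 - d) / math.sqrt(a * a + b * b + c * c)

print(point_plane_distance((1, -2, 3), 2, -1, 2, 4))  # hypothetical data
```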

Labeling the first of these new equations as ax + by + cz = d_1 and the second as ax + by + cz = d_2, we have [a, b, c] = [ , , ], and d_1 = 9, d_2 = … . From the formula in Exercise , the shortest distance between the given planes is |d_2 − d_1| / √(a² + b² + c²) = … ≈ … .

() (a) Let (x_1, y_1, z_1) = ( , , ), (x_2, y_2, z_2) = ( , , ), and (x_3, y_3, z_3) = ( , , ). Then we have [x_2 − x_1, y_2 − y_1, z_2 − z_1] = [ , , ] and [x_3 − x_1, y_3 − y_1, z_3 − z_1] = [ , , ]. Hence, from the formula in this exercise, the area of the triangle with these three vertices is (1/2) ||[x_2 − x_1, y_2 − y_1, z_2 − z_1] × [x_3 − x_1, y_3 − y_1, z_3 − z_1]|| = (1/2) ||[ , , ]|| = … .

(9) (a) The vector r = [ , , ] ft, and the vector ω = [ , , ] rad/sec. Hence, the velocity vector v = ω × r = [ , , ] ft/sec; speed = ||v|| = … ft/sec.

() (a) Let r = [x, y, z] represent the direction vector from the origin to a point (x, y, z) on the Earth's surface at latitude θ. Since the rotation is counterclockwise about the z-axis and one rotation of 2π radians occurs every 24 hours = 86,400 seconds, we find that ω = [0, 0, 2π/86,400] = [0, 0, π/43,200] rad/sec. Then v = ω × r = [−πy/43,200, πx/43,200, 0]. By considering the right triangle with hypotenuse = Earth's radius ≈ … km and altitude z, we see that z = … sin θ. But since ||r|| = √(x² + y² + z²) = …, we must have x² + y² + (… sin θ)² = (…)², which means x² + y² = (… cos θ)². Thus, ||v|| = (π/43,200)√(x² + y²) = … cos θ km/sec.

() (a) False. If [a, b, c] is a direction vector for a given line, then any nonzero scalar multiple of [a, b, c] is another direction vector for that same line. For example, the line x = t, y = 0, z = 0 (t ∈ R) (that is, the x-axis) has many distinct direction vectors, such as [ , 0, 0] and [ , 0, 0].

(b) False. If the lines are skew lines, they do not intersect each other, and the angle between the lines is not defined. For example, the lines in Exercise (c) do not intersect, and hence there is no angle defined between them.

(c) True. Two distinct nonparallel planes always intersect each other (in a line), and the angle between the planes is the (minimal) angle between a pair of normal vectors to the planes.

(d) True. The vector [a, b, c] is normal to the given plane, and the vector [−a, −b, −c] is a nonzero scalar multiple of [a, b, c]; so [−a, −b, −c] is also perpendicular to the given plane.

(e) True. For all vectors x, y, and z, we have (x + y) × z = −(z × (x + y)) (by part () of Theorem ) = −((z × x) + (z × y)) (by part () of Theorem ) = −(z × x) − (z × y) = (x × z) + (y × z) (by part () of Theorem ).

(f) True. For all vectors x, y, and z, we have y · (z × x) = (y × z) · x, by part () of Theorem .

(g) False. From Corollary , if x and y are parallel, then x × y = 0. For example, [ , , ] × [ , , ] = [0, 0, 0].
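The skew-line distance formula |v · w| / ||v|| used in Exercise () above can likewise be checked in a few lines. This sketch is not from the manual, and the two sample lines are made up; a result of 0 would indicate intersecting lines, as in part (c).

```python
# Illustrative sketch (not from the manual): distance between two lines.
import numpy as np

def skew_line_distance(p1, d1, p2, d2):
    """Distance between lines p1 + t*d1 and p2 + s*d2 (directions not parallel)."""
    v = np.cross(d1, d2)              # normal to both direction vectors
    w = np.asarray(p2) - np.asarray(p1)
    return abs(v @ w) / np.linalg.norm(v)

d = skew_line_distance([1, 0, 0], [0, 1, 0], [0, 0, 2], [1, 0, 0])
print(d)  # 2.0 for these sample lines; 0.0 would mean the lines intersect
```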

(h) True. From Theorem , if x and y are nonzero, then ||x × y|| = ||x|| ||y|| sin θ, and dividing both sides by ||x|| ||y|| (which is nonzero) gives the desired result.

(i) False. k × i = j, while i × k = −j. (Since the results of the cross product operations are nonzero here, the anti-commutative property from part () of Theorem also shows that the given statement is false.)

(j) True. The vectors x, y, and x × y, in that order, form a right-handed system, as in Figure 8, where we can imagine the vectors x, y, and x × y lying along the positive x-, y-, and z-axes, respectively. Then, moving the (right) hand in that figure so that its fingers are curling instead from the vector x × y (positive z-axis) toward the vector x (positive x-axis) will cause the thumb of the (right) hand to point in the direction of y (positive y-axis). Thus, the vectors x × y, x, and y, taken in that order, also form a right-handed system.

(k) True. This is precisely the method outlined before Theorem for finding the distance from a point to a plane.

(l) False. If ω is the angular velocity, v is the velocity, and r is the position vector, then v = ω × r, but it is not generally true that v × r = ω. For example, in Example , v = [ , , ], r = [ , , ], and ω = [ , , ], so v × r = [ , , ] × [ , , ] = [ , , ] ≠ ω.
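The identities cited in parts (e) and (f) can be spot-checked numerically. This sketch is not from the manual; the vectors are random.

```python
# Illustrative sketch (not from the manual): checking cross-product identities.
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))

# (x + y) x z = (x x z) + (y x z): distributivity plus anticommutativity
lhs = np.cross(x + y, z)
rhs = np.cross(x, z) + np.cross(y, z)
print(np.allclose(lhs, rhs))                                # True

# y . (z x x) = (y x z) . x: scalar triple product identity
print(np.isclose(y @ np.cross(z, x), np.cross(y, z) @ x))   # True
```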

Change of Variables and the Jacobian

() (a) The Jacobian matrix J = [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v] = [ … ]. Therefore, dx dy = |J| du dv = … du dv.

(c) The Jacobian matrix J = [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v] = [ … ]. Therefore, dx dy = |J| du dv = (u² + v²) du dv.

(e) The Jacobian matrix J = [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v], whose entries are quotients with denominator ((u+1)² + v²)². Factoring out this denominator gives J = (1/((u+1)² + v²)²) [ … ]. Now, for a 2×2 matrix A, |kA| = k²|A|, and so |J| = (1/((u+1)² + v²)⁴) | … |. Simplifying yields |J| = 8|v| / ((u+1)² + v²)². Hence, dx dy = |J| du dv = (8|v| / ((u+1)² + v²)²) du dv.

() (a) The Jacobian matrix J = [∂x/∂u ∂x/∂v ∂x/∂w; ∂y/∂u ∂y/∂v ∂y/∂w; ∂z/∂u ∂z/∂v ∂z/∂w] = [ … ]. Thus, |J| = ()()() + ()()() + ()()() − ()()() − ()()() − ()()() = … . Therefore, dx dy dz = |J| du dv dw = … du dv dw.

(c) The Jacobian matrix J = [∂x/∂u ∂x/∂v ∂x/∂w; ∂y/∂u ∂y/∂v ∂y/∂w; ∂z/∂u ∂z/∂v ∂z/∂w] = (1/(u² + v² + w²)²) [u² + v² + w² − 2u², −2uv, −2uw; −2uv, u² + v² + w² − 2v², −2vw; −2uw, −2vw, u² + v² + w² − 2w²]. To make this expression simpler, let A = u² + v² + w². Now, for a 3×3 matrix M, |kM| = k³|M|, and so by basketweaving,

|J| = (1/A⁶)[(A − 2u²)(A − 2v²)(A − 2w²) + (−2uv)(−2vw)(−2uw) + (−2uw)(−2uv)(−2vw) − (−2uw)(A − 2v²)(−2uw) − (A − 2u²)(−2vw)(−2vw) − (−2uv)(−2uv)(A − 2w²)]. Expanding and collecting the terms in A (the 8u²v²w² terms cancel) simplifies this to −A³/A⁶ = −1/A³, so that |det J| = 1/A³. Therefore, dx dy dz = |J| du dv dw = du dv dw / (u² + v² + w²)³.

() (a) For the given region R, r² = x² + y² ranges from … to 9, and so r ranges from … to 3. Since R is restricted to the first quadrant, θ ranges from 0 to π/2. Therefore, using x = r cos θ, y = r sin θ, and dx dy = r dr dθ yields ∬_R (x + y) dx dy = ∬ (r cos θ + r sin θ) r dr dθ = ∫∫ r²(cos θ + sin θ) dr dθ = (∫ r² dr) ∫ (cos θ + sin θ) dθ = … (sin θ − cos θ) evaluated from 0 to π/2 = … .

(c) For the given sphere R, ρ ranges from … to … . Since R is restricted to the upper half-space, φ ranges from 0 to π/2. However, θ ranges from 0 to 2π, since R is the entire upper half of the sphere; that is, all the way around the z-axis. Therefore, using z = ρ cos φ and dx dy dz = ρ² sin φ dρ dφ dθ, we get ∭_R z dx dy dz = ∭ (ρ cos φ)(ρ² sin φ) dρ dφ dθ = ∫∫∫ ρ³ cos φ sin φ dρ dφ dθ = … .

(e) Since r² = x² + y², the condition x² + y² ≤ … on the region R means that r ranges from … to … and θ ranges from 0 to 2π. Therefore, using x² + y² = r² and dx dy dz = r dr dθ dz produces ∭_R (x² + y² + z²) dx dy dz = ∫∫∫ (r² + z²) r dr dθ dz = … .

() (a) True. This is because all of the partial derivatives in the Jacobian matrix will be constants, since they are derivatives of linear functions. Hence, the determinant of the Jacobian will be a constant.
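For the change-of-variables computations above, a computer algebra system can produce the Jacobian determinant directly. This sketch, not from the manual, uses the standard polar substitution x = r cos θ, y = r sin θ, which recovers the factor r in dx dy = r dr dθ.

```python
# Illustrative sketch (not from the manual): a symbolic Jacobian determinant.
import sympy as sp

r, th = sp.symbols("r theta", positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

J = sp.Matrix([x, y]).jacobian([r, th])   # [[dx/dr, dx/dth], [dy/dr, dy/dth]]
print(sp.simplify(J.det()))               # r
```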

(b) True. The determinant of the Jacobian matrix J = [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v] is negative here; however, dx dy = |J| du dv = |−…| du dv = … du dv.

(c) True. This is explained and illustrated in the Jacobian section, just after Example , in Figure , and again in Example .

(d) False. The scaling factor is the absolute value of the determinant of the Jacobian matrix, which will not equal the determinant of the Jacobian matrix when that determinant is negative. For a counterexample, see the solution to part (b) of this Exercise.

Function Spaces

() (a) To test for linear independence, we must determine whether a e^x + b e^(…x) + c e^(…x) = 0 has any nontrivial solutions. Using this equation we substitute the following values for x:

Letting x = 0 ⟹ a + b + c = 0
Letting x = … ⟹ a(e) + b(e^…) + c(e^…) = 0
Letting x = … ⟹ a(e^…) + b(e^…) + c(e^…) = 0.

But the coefficient matrix of this system row reduces to the identity, showing that the system has only the trivial solution a = b = c = 0. Hence, the given set is linearly independent.

(c) To test for linear independence, we must check for nontrivial solutions to a f_1(x) + b f_2(x) + c f_3(x) = 0, where f_1, f_2, and f_3 are the given functions. Using this equation we substitute three values for x; the resulting system has a nontrivial solution a = …, b = …, c = … . Algebraic simplification verifies that the corresponding linear combination of f_1, f_2, and f_3 is identically 0; that is, it is a functional identity. Hence, since a nontrivial linear combination of the elements of S produces 0, the set S is linearly dependent.

() (a) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. The familiar trigonometric identities sin²x + cos²x = 1 and 2 sin x cos x = sin 2x show that the last two functions in S are linear combinations of earlier functions. Thus, the subset S' = {sin²x, cos²x, sin 2x, cos 2x} has the same span as S. However, the identity cos 2x = cos²x − sin²x shows that B = {sin²x, cos²x, sin 2x} also has the same span. We suspect that we have now eliminated all of the functions that we can, and that B is linearly independent, making it a basis for span(S). We verify the linear independence of B by considering the equation a sin²x + b cos²x + c sin 2x = 0.

Using this equation we substitute the following values for x:

Letting x = 0 ⟹ b = 0
Letting x = π/4 ⟹ (1/2)a + (1/2)b + c = 0
Letting x = … ⟹ a + c = 0.

Next, row reduce the coefficient matrix of this system to the identity. Since the system has only the trivial solution, B is linearly independent, and is thus the desired basis for span(S).

(c) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. We start with the familiar trigonometric identities sin(a + b) = sin a cos b + sin b cos a and cos(a + b) = cos a cos b − sin a sin b. Hence, sin(x + k) = (sin x)(cos k) + (sin k)(cos x) and cos(x + k) = (cos x)(cos k) − (sin x)(sin k) for each of the shifts k appearing in S. Therefore, each of the elements of S is a linear combination of sin x and cos x. That is, span(S) ⊆ span({sin x, cos x}). Thus, dim(span(S)) ≤ 2. Now, if we can find two linearly independent vectors in S, they form a basis for span(S). We claim that the subset B = {sin(x + …), cos(x + …)} of S is linearly independent, and hence is a basis contained in S. To show that B is linearly independent, we plug two values for x into a sin(x + …) + b cos(x + …) = 0:

Letting x = … ⟹ (sin …)a + (cos …)b = 0
Letting x = … ⟹ b = 0.

Using back substitution and the fact that sin … ≠ 0 shows that a = b = 0, proving that B is linearly independent.

() (a) Since v = … e^x + … e^(…x) + (−…) e^(…x), we have [v]_B = [ , , ] by the definition of [v]_B.

(c) We start with the familiar trigonometric identity sin(a + b) = sin a cos b + sin b cos a, which yields sin(x + …) = (sin x)(cos …) + (sin …)(cos x) and sin(x + …) = (sin x)(cos …) + (sin …)(cos x). Dividing the first equation by cos …, dividing the second by cos …, and then subtracting and rearranging terms (using the identity sin(a − b) = sin a cos b − cos a sin b) expresses cos x as a linear combination of sin(x + …) and sin(x + …). Therefore, the definition of [v]_B yields [v]_B = [ , ], or approximately [ , ]. An alternate approach is to solve for a and b in the equation cos x = a sin(x + …) + b sin(x + …). Plug in the following values for x:

Letting x = … ⟹ cos … = (sin …)b
Letting x = … ⟹ cos … = (sin …)a,

producing the same results for a and b as above.
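The substitution method used throughout this section (plugging sample values of x into a f_1(x) + b f_2(x) + c f_3(x) = 0) amounts to asking whether a sample matrix has full column rank. The sketch below is not from the manual; the function set and sample points are invented. Note, as the true/false answers below point out, that a rank-deficient sample matrix does not by itself prove dependence.

```python
# Illustrative sketch (not from the manual): independence testing by sampling.
import numpy as np

funcs = [np.sin, np.cos, lambda x: np.sin(2 * x)]   # hypothetical function set
xs = np.array([0.3, 1.1, 2.5])                      # three sample points

# Each row is the equation a*f1(x) + b*f2(x) + c*f3(x) = 0 at one sample x.
M = np.array([[f(x) for f in funcs] for x in xs])
print(np.linalg.matrix_rank(M))  # 3 => only the trivial solution; independent
```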

() (a) True. In any vector space, a two-element set S = {v_1, v_2} is linearly dependent if and only if one of the vectors in S can be expressed as a linear combination of the other. Since there are only two vectors in the set, neither of which is zero, this happens precisely when v_2 is a nonzero constant multiple of v_1.

(b) True. Since the polynomials all have different degrees, none of them is a linear combination of the others.

(c) False. By the definition of linear independence, the set of vectors would be linearly independent, not linearly dependent.

(d) False. It is possible that the three values chosen for x do not produce equations proving linear independence, but choosing a different set of three values might. This principle is illustrated directly after Example in the Function Spaces section. We are only assured of linear dependence when we have found values of a, b, and c, not all zero, such that a f_1(x) + b f_2(x) + c f_3(x) = 0 is a functional identity.

Max-Min Problems in R^n and the Hessian Matrix

() (a) First, ∇f = [ …, … ]. We find critical points by setting ∇f = 0. Setting the second coordinate of ∇f equal to zero yields y = …x. Plugging this into the first coordinate of ∇f = 0 produces an equation in x with solutions x = … and x = … . This gives the two critical points ( , ) and ( , ). Next, the Hessian matrix H = [ … ]. At the first critical point, H = [ … ], which is positive definite because its determinant is positive and its (1,1) entry is 8, which is also positive. (See the comment just before Example in the Hessian section for easily verified necessary and sufficient conditions for a matrix to be either positive definite or negative definite.) So, f has a local minimum at ( , ). At the second critical point, H = [ … ]. We refer to this matrix as A. Now A is neither positive definite nor negative definite, since its determinant is negative. Also note that p_A(x) = x² − …x − …, which, by the Quadratic Formula, has roots … ± √… . Since one of these is positive and the other is negative, we see that ( , ) is not a local extreme value.

(c) First, ∇f = [ …, … ]. We find critical points by setting ∇f = 0. This corresponds to a linear system with two equations and two variables which has the unique solution ( , ). Next, H = [ … ]. H is positive definite since its determinant is positive and its (1,1) entry is positive. Hence, the critical point ( , ) is a local minimum.

(e) First, ∇f = [ …, …, … ]. We find critical points by setting ∇f = 0. To make things easier, notice that at a critical point, ∂f/∂y = ∂f/∂z = 0. This yields y = z. With this, and ∂f/∂x = 0, we obtain x = y. Substituting for y and z into ∂f/∂x = 0 and solving produces 8x − …x³ = 0. Hence, 8x(… − x)(… + x) = 0. This yields the three critical points ( , , ), ( , , ), and ( , , ). Next, H is the 3×3 matrix of second partial derivatives, whose entries involve expressions such as y² + yz + z². At the first critical point, H = [ … ]. We refer to this matrix as A. Then p_A(x) = x³ − …x² + … = (x − …)(x² + …x − …). Hence, the eigenvalues of A are …, …, and … . Since A has both positive and negative eigenvalues, ( , , ) is not an extreme point. At the second critical point, H = [ … ]. We refer to this matrix as B. Then p_B(x) = x³ − …x² + 8x − 8 = (x − …)(x² − …x + …). Hence, the eigenvalues of B are all positive. So f has a local minimum at ( , , ) by Theorem . At the third critical point, the Hessian equals the Hessian at the second critical point, and so f also has a local minimum at ( , , ).
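The critical-point workflow above (solve ∇f = 0, then examine the Hessian's definiteness or eigenvalues) can be automated symbolically. This sketch is not from the manual, and the function f in it is made up.

```python
# Illustrative sketch (not from the manual): classifying critical points.
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**3 - 3 * x + y**2                      # hypothetical f

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)     # solve grad f = 0
H = sp.hessian(f, (x, y))

for pt in crit:
    eigs = list(H.subs(pt).eigenvals())
    kind = ("local min" if all(e > 0 for e in eigs)
            else "local max" if all(e < 0 for e in eigs) else "saddle")
    print(pt, eigs, kind)
```

For this sample f, the code reports a local minimum at (1, 0) and a saddle at (−1, 0), mirroring the eigenvalue tests used in part (e) above.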

() (a) True. We know from calculus that if a real-valued function f on R^n has continuous second partial derivatives, then ∂²f/∂x_i∂x_j = ∂²f/∂x_j∂x_i for all i, j.

(b) False. For example, the matrix [ … ] represents neither a positive definite nor a negative definite quadratic form, since it has both a positive and a negative eigenvalue.

(c) True. In part (a) we noted that the Hessian matrix is symmetric. Theorems .8 and . then establish that the matrix is orthogonally diagonalizable, hence diagonalizable.

(d) True. We established that a symmetric matrix A with |A| > 0 and a_11 > 0 represents a positive definite quadratic form (see the comment just before Example in the Hessian section). In this problem, |A| = … > 0 and a_11 = … > 0.

(e) False. The eigenvalues of the given matrix are clearly …, 9, and … . Hence, the given matrix has both a positive and a negative eigenvalue, and so cannot represent a positive definite quadratic form.

Jordan Canonical Form

() (c) By parts (a) and (b), (A − λI_k)e_1 = 0_k, and (A − λI_k)e_i = e_(i−1) for 2 ≤ i ≤ k. We use induction to prove that (A − λI_k)^i e_i = 0_k for all i, 1 ≤ i ≤ k. The Base Step holds since we already noted that (A − λI_k)e_1 = 0_k. For the Inductive Step, we assume that (A − λI_k)^i e_i = 0_k and show that (A − λI_k)^(i+1) e_(i+1) = 0_k. But (A − λI_k)^(i+1) e_(i+1) = (A − λI_k)^i ((A − λI_k) e_(i+1)) = (A − λI_k)^i e_i = 0_k, completing the induction proof. Using (A − λI_k)^i e_i = 0_k, we see that (A − λI_k)^k e_i = (A − λI_k)^(k−i) ((A − λI_k)^i e_i) = (A − λI_k)^(k−i) 0_k = 0_k for all i, 1 ≤ i ≤ k. Now, for 1 ≤ i ≤ k, the i-th column of (A − λI_k)^k equals (A − λI_k)^k e_i. Hence, we have shown that every column of (A − λI_k)^k is zero, and so (A − λI_k)^k = O_k.

() Note for all the parts below that two matrices in Jordan canonical form are similar to each other if and only if they contain the same Jordan blocks, rearranged in any order.

(a) The possible matrices are formed from 1×1 and 2×2 Jordan blocks. The three matrices on the first line are all similar to each other. The two matrices on the second line are similar to each other, but not to those on the first line.

(c) Using the quadratic formula, x² + …x + … has roots … + i and … − i, each having multiplicity … . One possible Jordan canonical form has the blocks [… + i] and [… − i] on its diagonal; the form with the blocks in the opposite order is similar to it.

(d) The characteristic polynomial factors as x^…(x + …)^… . The possible Jordan canonical forms fall into several classes; within each class the matrices are similar to each other but to none of the others.

(f) x⁴ − 1 = (x² − 1)(x² + 1) = (x − 1)(x + 1)(x − i)(x + i). There are 24 possible Jordan forms. Because each eigenvalue has algebraic multiplicity 1, all of the Jordan blocks have size 1×1. Hence, any Jordan canonical form matrix with these blocks is diagonal with the eigenvalues 1, −1, i, and −i on the main diagonal. The possibilities result from all of the possible orders in which these eigenvalues can appear on the diagonal. All twenty-four possible Jordan canonical form matrices are similar to each other because they all have the same Jordan blocks, only in a different order.

() (a) Consider the matrix B = [ … ]. We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − …x² − …x + … = (x − …)²(x + …). Thus, the eigenvalues for B are λ_1 = … and λ_2 = … . We must find the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. The following is one possible solution. (Other answers are possible if, for example, the eigenvalues are taken in a different order or if different generalized eigenvectors are chosen.) The Cayley-Hamilton Theorem tells us that p_B(B) = (B − λ_1I_3)²(B − λ_2I_3) = O_3.

Step A: We begin with the eigenvalue λ_1.

Step A: Let D = (B − λ_2I_3) = [ … ]. Then (B − λ_1I_3)²D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_1I_3)^k D = O_3. Now, B − λ_1I_3 = [ … ], and so (B − λ_1I_3)D = [ … ] ≠ O_3, while, as we have seen, (B − λ_1I_3)²D = O_3. Hence, k = 2.

Step A: We choose a maximal linearly independent subset of the columns of (B − λ_1I_3)^(k−1)D = (B − λ_1I_3)D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. Thus, v_1 = [ , 8, ]. We decide whether to simplify v_1 (by multiplying every entry by a convenient scalar) after v_2 is determined.

Step A: Next, we work backwards through the products of the form (B − λ_1I_3)^(k−j)D for j running from 2 up to k, choosing the same column in which we found the generalized eigenvector v_1. Because k = 2, the only value we need to consider here is j = 2. Hence, we let v_2 = [ 8, , 8], the first column of (B − λ_1I_3)^(k−2)D = D. Since the entries of v_2 are exactly divisible by …, we simplify the entries of both v_1 and v_2 by multiplying by … . Hence, we will use v_1 = [ , , ] and v_2 = [ , , ].

Steps A and A: Now by construction, (B − λ_1I_3)v_2 = v_1 and (B − λ_1I_3)v_1 = 0. Since the total number of generalized eigenvectors we have found for λ_1 equals the algebraic multiplicity of λ_1, which is 2, we can stop our work for λ_1. We therefore have a fundamental sequence {v_1, v_2} of generalized eigenvectors corresponding to a Jordan block associated with λ_1 in a Jordan canonical form for B.

To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ_2. We repeat Step A for this eigenvalue.

Step A: Let D = (B − λ_1I_3)² = [ … ]. Then (B − λ_2I_3)D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_2I_3)^k D = O_3. However, it is obvious here that k = 1.

Step A: Since k = 1, (B − λ_2I_3)^(k−1)D = D. Hence, each nonzero column of D = (B − λ_1I_3)² is a generalized eigenvector for B corresponding to λ_2. In particular, the first column of (B − λ_1I_3)² serves nicely as a generalized eigenvector v_3 for B corresponding to λ_2. The other columns of (B − λ_1I_3)² are scalar multiples of the first column.

Steps A, A, and A: No further work for λ_2 is needed here because λ_2 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ_2 is required. However, we can simplify v_3 by multiplying the first column of (B − λ_1I_3)² by …, yielding v_3 = [ , , ]. Thus, {v_3} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ_2 in a Jordan canonical form for B. Thus, we have completed Step A for both eigenvalues.

Step B: Finally, we now have an ordered basis (v_1, v_2, v_3) comprised of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we find that A = P⁻¹BP = [ … ], which gives a Jordan canonical form for B. Note that we could have used the generalized eigenvectors in the order v_3, v_1, v_2 instead, which would have resulted in the other possible Jordan

canonical form for B, [ … ].

(d) Consider the matrix B = [ … ]. We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − …x² + 8x − … = (x − …)(x − (… − i))(x − (… + i)). Thus, the eigenvalues for B are λ_1 = …, λ_2 = … − i, and λ_3 = … + i. Because each eigenvalue has geometric multiplicity 1, each of the three Jordan blocks will be a 1×1 block. Hence, the Jordan canonical form matrix will be diagonal. We could just solve this problem using our method for diagonalization from Section . of the textbook. However, we shall use the Method of this section here. In what follows, we consider the eigenvalues in the order given above. (Other answers are possible if, for example, the eigenvalues are taken in a different order or if different eigenvectors are chosen.) The Cayley-Hamilton Theorem tells us that p_B(B) = (B − λ_1I_3)(B − λ_2I_3)(B − λ_3I_3) = O_3.

Step A: We begin with the eigenvalue λ_1.

Step A: Let D = (B − λ_2I_3)(B − λ_3I_3) = [ … ]. Then (B − λ_1I_3)D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_1I_3)^k D = O_3. Clearly, k = 1.

Step A: We choose a maximal linearly independent subset of the columns of (B − λ_1I_3)^(k−1)D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. We can multiply this column by …, obtaining v_1 = [ , , ].

Steps A, A, and A: No further work for λ_1 is needed here because λ_1 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ_1 is required. Thus, {v_1} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ_1 in a Jordan canonical form for B. Thus, we have completed Step A for this eigenvalue. To complete this example, we still must find fundamental sequences of generalized eigenvectors corresponding to λ_2 and λ_3. We now repeat Step A for λ_2 = … − i.

Step A: Let D = (B − λ_1I_3)(B − λ_3I_3) = [ … ], a matrix with complex entries such as … − i and … + i. Then (B − λ_2I_3)D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_2I_3)^k D = O_3.

Clearly, k = 1.

Step A: We choose a maximal linearly independent subset of the columns of (B − λ_2I_3)^(k−1)D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (The second column is … − i times the first column, and the third column is …i times the first column. We can discover these scalars by dividing the first entry in each column by the first entry in the first column, and then verifying that these scalars are correct for the remaining entries.) We can multiply this column by …, obtaining v_2 = [ − i, + i, i].

Steps A, A, and A: No further work for λ_2 is needed here because λ_2 = … − i has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ_2 is required. Thus, {v_2} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ_2 = … − i in a Jordan canonical form for B. Thus, we have completed Step A for this eigenvalue. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ_3. We now repeat Step A for λ_3 = … + i.

Step A: Let D = (B − λ_1I_3)(B − λ_2I_3) = [ … ]. Then (B − λ_3I_3)D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_3I_3)^k D = O_3. Clearly, k = 1.

Step A: We choose a maximal linearly independent subset of the columns of (B − λ_3I_3)^(k−1)D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (The second column is … + i times the first column, and the third column is … + i times the first column.) We can multiply this column by …, obtaining v_3 = [ + i, i, + i].

Steps A, A, and A: No further work for λ_3 is needed here because λ_3 = … + i has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ_3 is required. Thus, {v_3} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ_3 = … + i in a Jordan canonical form for B. Thus, we have completed Step A for all three eigenvalues.

Step B: Finally, we now have an ordered basis (v_1, v_2, v_3) comprised of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we find that A = P⁻¹BP = [ … ], a diagonal matrix with the blocks [ ], [ − i], and [ + i] on its main diagonal, which gives a Jordan canonical form for B.
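Hand computations like the ones in this section can be checked against a computer algebra system, which returns both the transition matrix P and the Jordan form J with B = PJP⁻¹. This sketch is not from the manual; the matrix B in it is made up, chosen so that its Jordan form has one 2×2 block and one 1×1 block for the eigenvalue 2.

```python
# Illustrative sketch (not from the manual): Jordan form via sympy.
import sympy as sp

B = sp.Matrix([[3, 1, 0],
               [-1, 1, 0],
               [0, 0, 2]])        # hypothetical matrix

P, J = B.jordan_form()            # B = P * J * P^(-1)
sp.pprint(J)                      # Jordan blocks on the diagonal
print(sp.simplify(P.inv() * B * P - J) == sp.zeros(3, 3))  # True
```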

(g) We are given that p_B(x) = x⁵ + …x⁴ + …x³ + …x² + …x + … = (x + …)⁵. Therefore, the only eigenvalue for B is λ = … . (Other answers are possible if, for example, different generalized eigenvectors are chosen or they are used in a different order.)

Step A: We begin by finding generalized eigenvectors for λ.

Step A: p_B(B) = (B − λI_5)⁵ = O_5. We let D = I_5.

Step A: We calculate as follows: (B − λI_5)D = (B − λI_5) = [ … ], and (B − λI_5)²D = (B − λI_5)² = O_5. Hence k = 2.

Step A: Each nonzero column of (B − λI_5)^(k−1)D = (B − λI_5)D is a generalized eigenvector for B corresponding to λ. We let v_1 = [ , 8, , , ], the first column of (B − λI_5)D. (We do not divide out a common factor here since not all of the entries of the vector v_2 calculated in the next step are divisible by it.) The second column of (B − λI_5)D is not a scalar multiple of v_1, so we let v_3 = [ , , , , ], the second column of (B − λI_5)D. Row reduction shows that the remaining columns of (B − λI_5)D are linear combinations of the first two (see the adjustment step, below).

Step A: Next, we let v_2 = [1, 0, 0, 0, 0] and v_4 = [0, 1, 0, 0, 0], the first and second columns of D (= I_5), respectively. Thus, (B − λI_5)v_2 = v_1 and (B − λI_5)v_4 = v_3. This gives us our first two fundamental sequences of generalized eigenvectors, {v_1, v_2} and {v_3, v_4}, corresponding to two Jordan blocks for λ.

Steps A and A: We have two fundamental sequences of generalized eigenvectors consisting of a total of 4 vectors. But, since the algebraic multiplicity of λ is 5, we must still find another generalized eigenvector. We do this by making an adjustment to the matrix D. Recall that each column of (B − λI_5)D is a linear combination of v_1 and v_3. In fact, the row reduction we did in Step A shows that

1st column of (B − λI_5)D = … v_1 + … v_3 = f_1 v_1 + g_1 v_3
2nd column of (B − λI_5)D = … v_1 + … v_3 = f_2 v_1 + g_2 v_3
3rd column of (B − λI_5)D = … v_1 + … v_3 = f_3 v_1 + g_3 v_3
4th column of (B − λI_5)D = … v_1 + (−…)v_3 = f_4 v_1 + g_4 v_3
5th column of (B − λI_5)D = … v_1 + (−…)v_3 = f_5 v_1 + g_5 v_3,

where the f_i's and g_i's represent the respective coefficients of v_1 and v_3 for each column of (B − λI_5)D. We create a new matrix H whose i-th column is f_i v_2 + g_i v_4. Thus, H = [ … ]. Then, since (B − λI_5)v_2 = v_1 and (B − λI_5)v_4 = v_3, we get that (B − λI_5)H = (B − λI_5)D. Let D' = D − H = I_5 − H. Clearly, (B − λI_5)D' = O_5. We now revisit Steps A through A using the matrix D' instead of D. The purpose of this adjustment to the matrix D is to attempt to eliminate the effects of the two fundamental sequences of length 2, thus unmasking shorter fundamental sequences.

Step A: We have D' = [ … ] and (B − λI_5)D' = O_5. Hence k = 1.

Step A: We look for new generalized eigenvectors among the columns of D'. We must choose columns of D' that are not only linearly independent of each other, but also of our previously computed generalized eigenvectors (at the first level). We check this using the Independence Test Method by row reducing the matrix [ … ], where the first two columns are v_1 and v_3, and the other columns are the nonzero columns of D'. The resulting reduced row echelon form matrix is [ … ]. Hence, the third column of D' is linearly independent of v_1 and v_3, and the remaining columns of D' are linear combinations of v_1, v_3, and the third column of D'. Therefore, we let v_5 = [ , , , , ], which is … times the 3rd column of D'.

Steps A, A, and A: Because k = 1, the generalized eigenvector v_5 for λ corresponds to a 1×1 Jordan block. This gives the sequence {v_5}, and we do not need to find more vectors for this sequence. We now have the following three fundamental sequences of generalized eigenvectors for λ: {v_1, v_2}, {v_3, v_4}, and {v_5}. Since we have now found 5 generalized eigenvectors for B corresponding to λ, and since the algebraic multiplicity of λ is 5, we are finished with Step A.

Step B: We now have the ordered basis (v_1, v_2, v_3, v_4, v_5) for C⁵ consisting of fundamental sequences of generalized eigenvectors for B. We let P = [ … ], the matrix whose columns are the vectors in this ordered basis. Using this, we obtain the following

Jordan canonical form for B: A = P⁻¹BP = [ … ].

() (b) Consider the matrix B = [ … ]. We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − …x² + …x − … = (x − …)²(x − …). Thus, the eigenvalues for B are λ_1 = … and λ_2 = … . We must find the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. Now, the Cayley-Hamilton Theorem tells us that p_B(B) = (B − λ_1I_3)²(B − λ_2I_3) = O_3.

Step A: We begin with the eigenvalue λ_1.

Step A: Let D = (B − λ_2I_3) = [ … ]. Then (B − λ_1I_3)²D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_1I_3)^k D = O_3. Now, (B − λ_1I_3)D = [ … ] ≠ O_3, while, as we have seen, (B − λ_1I_3)²D = O_3. Hence, k = 2.

Step A: We choose a maximal linearly independent subset of the columns of (B − λ_1I_3)^(k−1)D = (B − λ_1I_3)D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. Thus, u_1 = [ , , ].

Step A: Next, we work backwards through the products of the form (B − λ_1I_3)^(k−j)D for j running from 2 up to k, choosing the same column in which we found the generalized eigenvector u_1. Because k = 2, the only value we need to consider here is j = 2. Hence, we let u_2 = [ , , ], the first column of (B − λ_1I_3)^(k−2)D = D.

Steps A and A: Now by construction, (B − λ_1I_3)u_2 = u_1 and (B − λ_1I_3)u_1 = 0. Hence, {u_1, u_2} forms a fundamental sequence of generalized eigenvectors for λ_1. Since the total number of generalized eigenvectors we have found for λ_1 equals the algebraic multiplicity of λ_1,

which is 2, we can stop our work for λ_1. We therefore have a fundamental sequence {u_1, u_2} of generalized eigenvectors corresponding to a Jordan block associated with λ_1 in a Jordan canonical form for B. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ_2. We repeat Step A for this eigenvalue.

Step A: Let D = (B − λ_1I_3)² = [ … ]. Then (B − λ_2I_3)D = O_3.

Step A: Next, we search for the smallest positive integer k such that (B − λ_2I_3)^k D = O_3. However, it is obvious here that k = 1.

Step A: Since k = 1, (B − λ_2I_3)^(k−1)D = D. Hence, each nonzero column of D = (B − λ_1I_3)² is a generalized eigenvector for B corresponding to λ_2. In particular, the first column of (B − λ_1I_3)² serves nicely as a generalized eigenvector u_3 = [ , , ] for B corresponding to λ_2. The other columns of (B − λ_1I_3)² are scalar multiples of the first column.

Steps A, A, and A: No further work for λ_2 is needed here because λ_2 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ_2 is required. Thus, {u_3} is a fundamental sequence of generalized eigenvectors corresponding to the Jordan block associated with λ_2 in a Jordan canonical form for B. Thus, we have completed Step A for both eigenvalues.

Step B: Finally, we now have an ordered basis (u_1, u_2, u_3) comprised of fundamental sequences of generalized eigenvectors for B. Letting Q be the matrix whose columns are these basis vectors, we find that A = Q⁻¹BQ = [ … ], which gives a Jordan canonical form for B.

() (a) (A + aI_n)(A + bI_n) = (A + aI_n)A + (A + aI_n)(bI_n) = A² + aI_nA + A(bI_n) + (aI_n)(bI_n) = A² + aA + bA + abI_n = A² + bA + aA + baI_n = A² + bI_nA + A(aI_n) + (bI_n)(aI_n) = (A + bI_n)A + (A + bI_n)(aI_n) = (A + bI_n)(A + aI_n).

() Let A, B, P, and q(x) be as given in the statement of the problem. We proceed using induction on k, the degree of the polynomial q.

Base Step: Suppose k = 0. Then q(x) = c for some c ∈ C. Hence, q(A) = cI_n = q(B). Therefore, P⁻¹q(A)P = P⁻¹(cI_n)P = cP⁻¹I_nP = cP⁻¹P = cI_n = q(B).

Inductive Step: Suppose that s(B) = P⁻¹s(A)P for every k-th degree polynomial s(x). We need to prove that q(B) = P⁻¹q(A)P, where q(x) has degree k + 1. Suppose that q(x) = a_(k+1)x^(k+1) + a_kx^k + … + a_1x + a_0. Then q(A) = a_(k+1)A^(k+1) + a_kA^k + … + a_1A + a_0I_n = (a_(k+1)A^k + a_kA^(k−1) + … + a_1I_n)A + a_0I_n = s(A)A + a_0I_n, where s(x) = a_(k+1)x^k + a_kx^(k−1) + … + a_1. Similarly, q(B) = s(B)B + a_0I_n for the same polynomial s(x). Therefore, P⁻¹q(A)P = P⁻¹(s(A)A + a_0I_n)P = P⁻¹s(A)AP + P⁻¹(a_0I_n)P = P⁻¹s(A)(PP⁻¹)AP + a_0P⁻¹I_nP = (P⁻¹s(A)P)(P⁻¹AP) + a_0P⁻¹P = s(B)(P⁻¹AP) + a_0I_n (by the inductive hypothesis) = s(B)B + a_0I_n = q(B).

() (a) We compute the (i, j) entry of A². First consider the case in which i ≤ k and j ≤ k. The (i, j) entry of A² is a_(i1)a_(1j) + … + a_(ik)a_(kj) + a_(i(k+1))a_((k+1)j) + … + a_(i(k+m))a_((k+m)j) = ((i, j) entry of A_11²) + ((i, j) entry of A_12A_21). If i > k and j ≤ k, then the (i, j) entry of A² is a_(i1)a_(1j) + … + a_(ik)a_(kj) + a_(i(k+1))a_((k+1)j) + … + a_(i(k+m))a_((k+m)j) = ((i, j) entry of A_21A_11) + ((i, j) entry of A_22A_21). The other two cases are handled similarly.

(9) (g) The number of Jordan blocks having size exactly k × k is the number having size at least k × k minus the number of size at least (k + 1) × (k + 1). By part (f), r_(k−1)(A) − r_k(A) gives the total number of Jordan blocks having size at least k × k corresponding to λ. Similarly, the number of Jordan blocks having size at least (k + 1) × (k + 1) is r_k(A) − r_(k+1)(A). Hence the number of Jordan blocks having size exactly k × k equals (r_(k−1)(A) − r_k(A)) − (r_k(A) − r_(k+1)(A)) = r_(k−1)(A) − 2r_k(A) + r_(k+1)(A).

() (a) False. For example, any matrix in Jordan canonical form that has more than one Jordan block cannot be similar to a matrix with a single Jordan block. This follows from the uniqueness statement in Theorem . The matrix [ … ] is a specific example of such a matrix.

(b) True. This is stated in the first paragraph of the subsection entitled "Finding a Jordan Canonical Form."

(c) False. The geometric multiplicity of an eigenvalue equals the dimension of the eigenspace corresponding to that eigenvalue, not the dimension of the generalized eigenspace. For a specific example, consider the matrix A = [ … ], whose only eigenvalue is λ = 0. The eigenspace for λ is spanned by e_1, and so the dimension of the eigenspace, which equals the geometric multiplicity of λ = 0, is 1. However, the generalized eigenspace for A is {v ∈ C² | (A − 0I_2)^k v = 0 for some positive integer k}. But A − 0I_2 = A, so (A − 0I_2)² = O_2. Thus, the generalized eigenspace for A is C², which has dimension 2. (Note, by the way, that p_A(x) = x², and so the eigenvalue λ = 0 has algebraic multiplicity 2.)

(d) False. According to Theorem , a Jordan canonical form for a matrix is unique, except for the order in which the Jordan blocks appear on the main diagonal. However, if there are two or more Jordan blocks, and if they are all identical, then all of the Jordan canonical form matrices are the same, since changing the order of the blocks in such a case does not change the matrix. One example is the Jordan canonical form matrix I_2, which has two identical Jordan blocks.

(e) False. For a simple counterexample, suppose A = J = I_2, and let P and Q be any two distinct nonsingular matrices, such as I_2 and …I_2.

(f) True. Theorem asserts that there is a nonsingular matrix P such that P⁻¹AP is in Jordan canonical form. The comments just before Example show that the columns of P are a basis for C^n consisting of generalized eigenvectors for A. Since these columns span C^n, every vector in C^n can be expressed as a linear combination of these column vectors.
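The identity proved by induction above, q(P⁻¹AP) = P⁻¹q(A)P, is easy to confirm numerically for a sample polynomial. This sketch is not from the manual; A, P, and q are all invented.

```python
# Illustrative sketch (not from the manual): q(P^(-1) A P) = P^(-1) q(A) P.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))          # almost surely nonsingular
Pinv = np.linalg.inv(P)
I = np.eye(4)

def q(M):
    """Evaluate q(x) = x^3 - 2x + 5 at a square matrix M."""
    return np.linalg.matrix_power(M, 3) - 2 * M + 5 * I

B = Pinv @ A @ P
print(np.allclose(q(B), Pinv @ q(A) @ P))   # True
```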

Student Manual for Andrilli and Hecker - Elementary Linear Algebra, th edition Jordan Canonical Form P q(a)p = P (s(a)a + a I n ) P = P s(a)a + P (a I n ) P = P s(a)ap + P (a I n ) P = P s(a) PP AP+a P I n P = P s(a)p P AP +a P P = s(b) P AP +a P P (by the inductive hypothesis) = s(b)b + a I n = q(b). () (a) We compute the (i; j) entry of A. First consider the case in which i k and j k. The (i; j) entry of A = a i a j + +a ik a kj +a i(k+) a (k+)j + +a i(k+m) a (k+m)j = (i; j) entry of A + ((i; j) entry of A A ) : If i > k and j k, then the (i; j) entry of A = a i a j + + a ik a kj + a i(k+) a (k+)j + + a i(k+m) a (k+m)j = ((i; j) entry of A A ) + ((i; j) entry of A A ). The other two cases are handled similarly. (9) (g) The number of Jordan blocks having size exactly k k is the number having size at least k k minus the number of size at least (k + ) (k + ). By part (f), r k (A) r k (A) gives the total number of Jordan blocks having size at least k k corresponding to. Similarly, the number of Jordan blocks having size at least (k + ) (k + ) is r k (A) r k+ (A). Hence the number of Jordan blocks having size exactly k k equals (r k (A) r k (A)) (r k (A) r k+ (A)) = r k (A) r k (A) + r k+ (A). () (a) False. For example, any matrix in Jordan canonical form that has more than one Jordan block can not be similar to a matrix with a single Jordan block. This follows from the uniqueness statement in Theorem. The matrix is a speci c example of such a matrix. (b) True. This is stated in the rst paragraph of the subsection entitled Finding A Jordan Canonical Form. (c) False. The geometric multiplicity of an eigenvalue equals the dimension of the eigenspace corresponding to that eigenvalue, notthe dimension of the generalized eigenspace. For a speci c example, consider the matrix A =, whose only eigenvalue is =. The eigenspace for is spanned by e, and so the dimension of the eigenspace, which equals the geometric multiplicity of =, is. However, the generalized eigenspace for A is fv C j(a I ) k v = g for some positive integer k. But A I = A; so (A I ) = O : Thus, the generalized eigenspace for A is C, which has dimension. (Note, by the way, that p A (x) = x, and so the eigenvalue has algebraic multiplicity.) (d) False. According to Theorem, a Jordan canonical form for a matrix is unique, except for the order in which the Jordan blocks appear on the main diagonal. However, if there are two or more Jordan blocks, and if they are all identical, then all of the Jordan canonical form matrices are the same, since changing the order of the blocks in such a case does not change the matrix. One example is the Jordan canonical form matrix I, which has two identical Jordan blocks. (e) False. For a simple counterexample, suppose A = J = I ; and let P and Q be any two distinct nonsingular matrices, such as I and I. (f) True. Theorem asserts that there is a nonsingular matrix P such that P AP is in Jordan canonical form. The comments just before Example show that the columns of P are a basis for C n consisting of generalized eigenvectors for A. Since these columns span C n, every vector in C n can be expressed as a linear combination of these column vectors.