Math 113 Solutions: Homework 8. November 28, 2007


3) Define $c_j = \sqrt{j}\,a_j$ and $d_j = \frac{1}{\sqrt{j}}\,b_j$ for $j = 1, 2, \ldots, n$, and let $c = (c_1, \ldots, c_n)$ and $d = (d_1, \ldots, d_n)$ be vectors in $\mathbf{R}^n$. Then the Cauchy-Schwarz inequality on Euclidean $n$-space gives
$$\Big( \sum_{j=1}^n c_j d_j \Big)^2 \le \Big( \sum_{j=1}^n c_j^2 \Big) \Big( \sum_{j=1}^n d_j^2 \Big).$$
On writing $c_j$ and $d_j$ in terms of $a_j$ and $b_j$ in the above inequality, we have
$$\Big( \sum_{j=1}^n a_j b_j \Big)^2 \le \Big( \sum_{j=1}^n j a_j^2 \Big) \Big( \sum_{j=1}^n \frac{b_j^2}{j} \Big),$$
as desired.

5) No such inner product exists. We prove this by showing that the parallelogram equality is violated for the vectors $u = (1, 0)$ and $v = (0, 1)$. We have $\|u + v\| = \|(1, 1)\| = 2$ and $\|u - v\| = \|(1, -1)\| = 2$, while $\|u\| = \|v\| = 1$. Thus $\|u + v\|^2 + \|u - v\|^2 = 8 \neq 4 = 2(\|u\|^2 + \|v\|^2)$.

6) We have $\|u + v\|^2 - \|u - v\|^2 = \langle u + v, u + v \rangle - \langle u - v, u - v \rangle$, which, after expanding, yields
$$(\langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle) - (\langle u, u \rangle - \langle u, v \rangle - \langle v, u \rangle + \langle v, v \rangle).$$
Collecting terms, and using that the inner product is real (so $\langle v, u \rangle = \langle u, v \rangle$), we have $\|u + v\|^2 - \|u - v\|^2 = 2\langle u, v \rangle + 2\langle v, u \rangle = 4\langle u, v \rangle$, and, on dividing by 4, we have the result.

7) Expanding the right-hand side,
$$i^p \|u + i^p v\|^2 = i^p \langle u + i^p v, u + i^p v \rangle = i^p \big( \langle u, u \rangle + \langle u, i^p v \rangle + \langle i^p v, u \rangle + \langle i^p v, i^p v \rangle \big).$$
Now $\langle u, i^p v \rangle = \overline{i^p}\,\langle u, v \rangle = i^{-p} \langle u, v \rangle$ and $\langle i^p v, u \rangle = i^p \langle v, u \rangle$, while $\langle i^p v, i^p v \rangle = \langle v, v \rangle$. Our sum can thus be split as
$$\sum_{p=1}^4 i^p \|u + i^p v\|^2 = \langle u, u \rangle \sum_{p=1}^4 i^p + \langle u, v \rangle \sum_{p=1}^4 1 + \langle v, u \rangle \sum_{p=1}^4 i^{2p} + \langle v, v \rangle \sum_{p=1}^4 i^p = 4 \langle u, v \rangle,$$
since $\sum_{p=1}^4 i^p = \sum_{p=1}^4 i^{2p} = 0$. On dividing by 4 we have the result.
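The identities above are easy to sanity-check numerically. Below is a minimal sketch, assuming NumPy, reading the norm in Problem 5 as $\|(x, y)\| = |x| + |y|$ (which matches the values computed in the solution); the vectors and the dimension are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem 3: (sum a_j b_j)^2 <= (sum j a_j^2) (sum b_j^2 / j).
n = 6
a, b = rng.normal(size=n), rng.normal(size=n)
j = np.arange(1, n + 1)
assert np.dot(a, b) ** 2 <= np.dot(j, a ** 2) * np.sum(b ** 2 / j) + 1e-12

# Problem 5: the norm above violates the parallelogram equality (8 != 4).
one_norm = lambda x: np.abs(x).sum()
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lhs = one_norm(u + v) ** 2 + one_norm(u - v) ** 2   # = 8
rhs = 2 * (one_norm(u) ** 2 + one_norm(v) ** 2)     # = 4
assert lhs == 8.0 and rhs == 4.0

# Problem 7: <u, v> = (1/4) sum_{p=1}^4 i^p ||u + i^p v||^2.
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
inner = np.vdot(v, u)  # vdot conjugates its first argument, so this is <u, v>
polar = sum(1j ** p * np.linalg.norm(u + 1j ** p * v) ** 2 for p in range(1, 5)) / 4
assert np.isclose(inner, polar)
```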

9) We have $\int_{-\pi}^{\pi} \big( \frac{1}{\sqrt{2\pi}} \big)^2 \, dx = \frac{1}{2\pi} \cdot 2\pi = 1$, so $\big\| \frac{1}{\sqrt{2\pi}} \big\| = 1$. Also $\sin\big( k \big( x + \frac{\pi}{2k} \big) \big) = \cos kx$, so
$$\int_{-\pi}^{\pi} \sin^2\Big( k \Big( x + \frac{\pi}{2k} \Big) \Big) \, dx = \int_{-\pi + \frac{\pi}{2k}}^{\pi + \frac{\pi}{2k}} \sin^2 kx \, dx = \int_{-\pi}^{\pi} \sin^2 kx \, dx,$$
where the last equality holds because $\sin^2 kx$ is $2\pi$-periodic (actually its period is $\frac{\pi}{k}$). Thus $\big\| \frac{\cos kx}{\sqrt{\pi}} \big\|^2 = \big\| \frac{\sin kx}{\sqrt{\pi}} \big\|^2$. But $\cos^2 kx + \sin^2 kx = 1$, so
$$2 = \int_{-\pi}^{\pi} \Big[ \Big( \frac{\cos kx}{\sqrt{\pi}} \Big)^2 + \Big( \frac{\sin kx}{\sqrt{\pi}} \Big)^2 \Big] \, dx,$$
and we get $\big\| \frac{\cos kx}{\sqrt{\pi}} \big\|^2 = \big\| \frac{\sin kx}{\sqrt{\pi}} \big\|^2 = 1$. Together, our calculations show that $\frac{\sin kx}{\sqrt{\pi}}$ and $\frac{\cos kx}{\sqrt{\pi}}$ have norm 1 for $k = 1, 2, \ldots, n$. It remains to show that pairs of distinct functions in the list are orthogonal. To prove orthogonality with the constant function, observe that $\int_{-\pi}^{\pi} \sin kx \, dx = 0$ since $\sin$ is odd. Since $\cos kx$ is just a phase shift of $\sin kx$, this proves that $\int_{-\pi}^{\pi} \cos kx \, dx = 0$ as well. The same reasoning shows that $\int_{-\pi}^{\pi} \sin ax \cos bx \, dx = 0$, since this integrand is odd as well. The remaining integrals to consider have the form $\int \sin ax \sin bx$ and $\int \cos ax \cos bx$ for $a \neq b$. But now
$$\cos ax \cos bx + \sin ax \sin bx = \cos((a - b)x) \quad \text{and} \quad \cos ax \cos bx - \sin ax \sin bx = \cos((a + b)x).$$
For $a \neq b$ we have both $\int_{-\pi}^{\pi} \cos((a - b)x) \, dx = 0$ and $\int_{-\pi}^{\pi} \cos((a + b)x) \, dx = 0$, so that
$$\int_{-\pi}^{\pi} \cos ax \cos bx \, dx = \frac{1}{2} \int_{-\pi}^{\pi} \big( \cos((a + b)x) + \cos((a - b)x) \big) \, dx = 0,$$
$$\int_{-\pi}^{\pi} \sin ax \sin bx \, dx = \frac{1}{2} \int_{-\pi}^{\pi} \big( \cos((a - b)x) - \cos((a + b)x) \big) \, dx = 0.$$
Thus the entire list is orthonormal.

10) We can put $e_1 = 1$ since this has norm 1. Then $x - \langle x, e_1 \rangle e_1 = x - \int_0^1 x \, dx = x - 1/2$. This has squared norm $\|x - 1/2\|^2 = \int_0^1 (x - 1/2)^2 \, dx = 1/12$, so we put $e_2 = \sqrt{12}\,(x - 1/2)$. Finally,
$$x^2 - \langle x^2, e_1 \rangle e_1 - \langle x^2, e_2 \rangle e_2 = x^2 - \int_0^1 x^2 \, dx - \sqrt{12}\,(x - 1/2) \int_0^1 \sqrt{12}\,\big( x^3 - \tfrac{1}{2} x^2 \big) \, dx$$
$$= x^2 - 1/3 - 12\,(x - 1/2)(1/4 - 1/6) = x^2 - x + 1/6.$$
This has squared norm $\|x^2 - x + 1/6\|^2 = \int_0^1 \big( x^4 - 2x^3 + \tfrac{4}{3} x^2 - \tfrac{1}{3} x + \tfrac{1}{36} \big) \, dx = 1/180$. Thus $e_3 = \sqrt{180}\,(x^2 - x + 1/6) = 6\sqrt{5}\,(x^2 - x + 1/6)$.
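The integrals in Problems 9 and 10 can be reproduced symbolically. Here is a short sketch, assuming SymPy; the Gram-Schmidt loop is a generic implementation written for this check, not code from the course.

```python
import sympy as sp

x = sp.symbols('x')

# Problem 9 (spot check): distinct cosines are orthogonal, cos(kx)/sqrt(pi) has norm 1.
assert sp.integrate(sp.cos(2 * x) * sp.cos(3 * x), (x, -sp.pi, sp.pi)) == 0
assert sp.integrate((sp.cos(2 * x) / sp.sqrt(sp.pi)) ** 2, (x, -sp.pi, sp.pi)) == 1

# Problem 10: Gram-Schmidt on (1, x, x^2) with <p, q> = integral_0^1 p q dx.
inner = lambda p, q: sp.integrate(p * q, (x, 0, 1))
basis = []
for v in (sp.Integer(1), x, x ** 2):
    for e in basis:
        v = v - inner(v, e) * e        # subtract projections onto the earlier e's
    basis.append(sp.expand(v / sp.sqrt(inner(v, v))))
print(basis)  # [1, 2*sqrt(3)*x - sqrt(3), 6*sqrt(5)*x**2 - 6*sqrt(5)*x + sqrt(5)]
```

Note that $\sqrt{12}\,(x - 1/2) = 2\sqrt{3}\,x - \sqrt{3}$ and $6\sqrt{5}\,(x^2 - x + 1/6) = 6\sqrt{5}\,x^2 - 6\sqrt{5}\,x + \sqrt{5}$, so the printed basis agrees with the solution above.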

17) Given that $P^2 = P$ and that every vector in $\operatorname{null} P$ is orthogonal to every vector in $\operatorname{range} P$, we show that $P$ is the orthogonal projection onto $\operatorname{range} P$. Given $v \in V$ we can write $v = u + w$ with $u \in \operatorname{range} P$ and $w \in (\operatorname{range} P)^\perp$; we need to show that $Pv = u$. We know that $Pv = Pu + Pw$. Observe that $u \in \operatorname{range} P$ implies there exists $x$ with $Px = u$. Then $Pu = P^2 x = Px = u$. We claim that $(\operatorname{range} P)^\perp = \operatorname{null} P$, so that $Pw = 0$. Indeed, the condition that each vector of $\operatorname{null} P$ is orthogonal to each vector of $\operatorname{range} P$ implies that $\operatorname{null} P \subseteq (\operatorname{range} P)^\perp$. But $\dim \operatorname{null} P = \dim V - \dim \operatorname{range} P$ by Rank-Nullity, and $\dim (\operatorname{range} P)^\perp = \dim V - \dim \operatorname{range} P$ since $V = \operatorname{range} P \oplus (\operatorname{range} P)^\perp$. Thus $\operatorname{null} P$ and $(\operatorname{range} P)^\perp$ have the same dimension, so they are in fact equal. This completes our proof, since we have shown $Pv = Pu + Pw = u + 0 = u$.

20) Suppose $U$ and $U^\perp$ are invariant under $T$. Write $v \in V$ as $v = u + w$ with $u \in U$, $w \in U^\perp$. Then $P_U Tv = P_U(Tu + Tw)$, and $Tu \in U$, $Tw \in U^\perp$ implies $P_U(Tu + Tw) = Tu$. Meanwhile, $T P_U(u + w) = Tu$, so $T P_U v = P_U T v$ for every $v \in V$. Now suppose that $T P_U = P_U T$. Take $u \in U$. Then $Tu = T P_U u = P_U T u$, which is an element of $U$ since $\operatorname{range} P_U = U$. Thus $Tu$ is contained in $U$. Now take $w \in U^\perp$. We have $P_U T w = T P_U w = T0 = 0$. It follows that when we write $Tw = u + w'$ with $u \in U$, $w' \in U^\perp$, we have $u = 0$. Thus $Tw \in U^\perp$, so $U^\perp$ is also $T$-invariant.

22) Since we are given that $p(0) = p'(0) = 0$, $p$ is a polynomial without constant or linear term. This condition is satisfied if and only if $p$ lies in the subspace of $\mathcal{P}_3(\mathbf{R})$ spanned by $(x^2, x^3)$. The quantity to be minimized is $\|2 + 3x - p(x)\|^2$, where the norm is the one inherited from the inner product $\langle \cdot, \cdot \rangle$ on $\mathcal{P}_3(\mathbf{R})$ given by $\langle P, Q \rangle = \int_0^1 P(x) Q(x) \, dx$. By Proposition 6.36, this minimization is achieved exactly when $p(x)$ is the orthogonal projection of $2 + 3x$ onto the span of $(x^2, x^3)$. To calculate this projection, we first find an orthonormal basis for $\operatorname{span}(x^2, x^3)$ with respect to $\langle \cdot, \cdot \rangle$. We have $\|x^2\|^2 = \int_0^1 x^4 \, dx = 1/5$, so we choose $e_1 = \sqrt{5}\,x^2$. Then $x^3 - \langle x^3, e_1 \rangle e_1 = x^3 - 5x^2 \int_0^1 x^5 \, dx = x^3 - \tfrac{5}{6} x^2$. This has squared norm
$$\big\| x^3 - \tfrac{5}{6} x^2 \big\|^2 = \int_0^1 \big( x^6 - \tfrac{5}{3} x^5 + \tfrac{25}{36} x^4 \big) \, dx = 1/252.$$
Thus we set $e_2 = \sqrt{252}\,\big( x^3 - \tfrac{5}{6} x^2 \big) = 6\sqrt{7}\,\big( x^3 - \tfrac{5}{6} x^2 \big)$. The best choice for $p(x)$ is then given by $p(x) = \langle 2 + 3x, e_1 \rangle e_1 + \langle 2 + 3x, e_2 \rangle e_2$, i.e.
$$p(x) = \Big( \int_0^1 \sqrt{5}\,(2 + 3x)\,x^2 \, dx \Big) \sqrt{5}\,x^2 + \Big( \int_0^1 6\sqrt{7}\,(2 + 3x)\big( x^3 - \tfrac{5}{6} x^2 \big) \, dx \Big)\, 6\sqrt{7}\,\big( x^3 - \tfrac{5}{6} x^2 \big)$$
$$= \tfrac{85}{12}\,x^2 - \tfrac{203}{10}\,\big( x^3 - \tfrac{5}{6} x^2 \big) = -\tfrac{203}{10}\,x^3 + 24\,x^2.$$
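The projection in Problem 22 can be re-derived symbolically as well. A minimal sketch, assuming SymPy, where e1 and e2 are the orthonormal basis found above:

```python
import sympy as sp

x = sp.symbols('x')
inner = lambda p, q: sp.integrate(p * q, (x, 0, 1))

e1 = sp.sqrt(5) * x ** 2                 # x^2 normalized, since ||x^2||^2 = 1/5
v = x ** 3 - inner(x ** 3, e1) * e1      # = x^3 - 5*x**2/6
e2 = v / sp.sqrt(inner(v, v))            # = 6*sqrt(7)*(x^3 - 5*x**2/6)

f = 2 + 3 * x
p = sp.expand(inner(f, e1) * e1 + inner(f, e2) * e2)
print(p)                                 # -203*x**3/10 + 24*x**2
```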

Extra Problem 1: We first show that $V$ is a vector space. Given $(u_1, w_1), (u_2, w_2) \in V$ we have $(u_1, w_1) + (u_2, w_2) = (u_1 + u_2, w_1 + w_2) = (u_2 + u_1, w_2 + w_1)$, since addition in both $U$ and $W$ is commutative. But then $(u_1, w_1) + (u_2, w_2) = (u_2 + u_1, w_2 + w_1) = (u_2, w_2) + (u_1, w_1)$, so addition on $V$ is commutative. Given a third vector $(u_3, w_3) \in V$, then
$$((u_1, w_1) + (u_2, w_2)) + (u_3, w_3) = (u_1 + u_2, w_1 + w_2) + (u_3, w_3) = ((u_1 + u_2) + u_3, (w_1 + w_2) + w_3)$$
$$= (u_1 + (u_2 + u_3), w_1 + (w_2 + w_3)) = (u_1, w_1) + (u_2 + u_3, w_2 + w_3) = (u_1, w_1) + ((u_2, w_2) + (u_3, w_3)),$$
by invoking the associativity of addition in both $U$ and $W$. This proves addition on $V$ is associative. To show scalar multiplication on $V$ is associative, take scalars $a, b$ and $(u, w) \in V$. Then $(ab)(u, w) = (abu, abw) = a(bu, bw) = a(b(u, w))$. The additive identity is $(0, 0)$, where the first $0$ is the identity in $U$ and the second is the identity in $W$. Indeed, $(u_1, w_1) + (0, 0) = (u_1 + 0, w_1 + 0) = (u_1, w_1)$. For the additive inverse of $(u_1, w_1)$ we may choose $(-u_1, -w_1)$, where $-u_1$ is the additive inverse of $u_1$ in $U$ and $-w_1$ is the additive inverse of $w_1$ in $W$. We have $(u_1, w_1) + (-u_1, -w_1) = (u_1 - u_1, w_1 - w_1) = (0, 0)$, which is the additive identity we chose for $V$. We calculate $1(u_1, w_1) = (1 u_1, 1 w_1) = (u_1, w_1)$, so $1$ is the multiplicative identity. Also
$$a((u_1, w_1) + (u_2, w_2)) = a(u_1 + u_2, w_1 + w_2) = (a(u_1 + u_2), a(w_1 + w_2)) = (au_1 + au_2, aw_1 + aw_2) = (au_1, aw_1) + (au_2, aw_2) = a(u_1, w_1) + a(u_2, w_2).$$
Similarly, $(a + b)(u_1, w_1) = ((a + b)u_1, (a + b)w_1) = (au_1 + bu_1, aw_1 + bw_1) = (au_1, aw_1) + (bu_1, bw_1) = a(u_1, w_1) + b(u_1, w_1)$, so the distributive properties hold for $V$; we have used in these two calculations the distributive properties of $U$ and $W$. Together the above calculations show that $V$ is a vector space.

To show that $((u_1, 0), \ldots, (u_m, 0), (0, w_1), \ldots, (0, w_k))$ forms a basis for $V$, we first check that it spans $V$. Take $(u, w) \in V$. Then we may write $u = a_1 u_1 + \cdots + a_m u_m$ for some $a_1, \ldots, a_m$, since the $u_i$ span $U$. Similarly we may write $w = b_1 w_1 + \cdots + b_k w_k$, since the $w_j$ span $W$. But then $a_1(u_1, 0) + \cdots + a_m(u_m, 0) + b_1(0, w_1) + \cdots + b_k(0, w_k) = (a_1 u_1 + \cdots + a_m u_m, b_1 w_1 + \cdots + b_k w_k) = (u, w)$. This proves that $((u_1, 0), \ldots, (0, w_k))$ spans $V$. To check that the list is linearly independent, suppose that there exist constants $c_1, \ldots, c_m, d_1, \ldots, d_k$ so that $c_1(u_1, 0) + \cdots + c_m(u_m, 0) + d_1(0, w_1) + \cdots + d_k(0, w_k) = (0, 0)$. Then $(c_1 u_1 + \cdots + c_m u_m, d_1 w_1 + \cdots + d_k w_k) = (0, 0)$, and in particular $c_1 u_1 + \cdots + c_m u_m = 0$ in $U$ and $d_1 w_1 + \cdots + d_k w_k = 0$ in $W$. But then the linear independence of the $u_i$ and the $w_j$ implies $c_1 = \cdots = c_m = d_1 = \cdots = d_k = 0$. This proves that $((u_1, 0), \ldots, (0, w_k))$ is independent. Thus it forms a basis.

To prove that $\iota \in \mathcal{L}(V, X)$ is an isomorphism we first show that it is linear, and then show that it is a bijection. Take a scalar $a$ and two vectors $(u, w)$ and $(u', w')$ in $V$. Then $\iota(a(u, w) + (u', w')) = \iota(au + u', aw + w') = au + u' + aw + w' = a(u + w) + (u' + w') = a\,\iota(u, w) + \iota(u', w')$. This proves that $\iota$ is linear. To show that $\iota$ is a bijection, observe that $X$ is the set of vectors $\{z \in Z : z = u + w,\ u \in U,\ w \in W\}$, so $\iota$ is a genuine map from $V$ to $X$ (that is, its image is indeed contained in $X$). Moreover, it is onto, since for $x \in X$ we have $x = u + w$ for some $u \in U$, $w \in W$, and $x = \iota(u, w)$. Furthermore, it is injective, since $\iota(u_1, w_1) = \iota(u_2, w_2)$ implies $u_1 + w_1 = u_2 + w_2$, or $u_1 - u_2 = w_2 - w_1$. The left side here is a member of $U$ and the right side of $W$, so each lies in the intersection. Since $U \cap W = \{0\}$ we have $u_1 - u_2 = w_2 - w_1 = 0$; that is, $u_1 = u_2$ and $w_1 = w_2$, so $(u_1, w_1) = (u_2, w_2)$.
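As a concrete illustration of Extra Problem 1 (not part of the assigned proof), one can realize $U$ and $W$ as subspaces of $Z = \mathbf{R}^4$ with $U \cap W = \{0\}$ and verify that $\iota$ is injective. A minimal sketch, assuming NumPy; the spanning vectors are arbitrary choices:

```python
import numpy as np

# U = span(u1, u2) and W = span(w1) inside Z = R^4, chosen so that U ∩ W = {0}.
u1, u2 = np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.])
w1 = np.array([1., 1., 1., 0.])

# Matrix of ι with respect to the basis ((u1, 0), (u2, 0), (0, w1)) of V = U x W:
# ι(a1 u1 + a2 u2, b1 w1) = a1 u1 + a2 u2 + b1 w1.
iota = np.column_stack([u1, u2, w1])
assert np.linalg.matrix_rank(iota) == 3  # full rank, so ι : V -> U + W is an isomorphism
```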

Extra Problem 2: (1) To check that $W$ has commutative and associative addition, an additive identity, and additive inverses is the same set of calculations as in Extra Problem 1. We check that $1 = 1 + 0i$ is the multiplicative identity on $W$. We have $(1 + 0i)(u, w) = 1(u, w) + 0(-w, u) = (u, w) + (0(-w), 0u) = (u, w)$, where this calculation uses the fact that $1$ is the multiplicative identity on $V \times V$ and that $0v = 0$ for all $v \in V$. Next we show that scalar multiplication by complex numbers is associative. For $a, b, a', b'$ real numbers and $(u, v) \in W$ we have
$$((a + bi)(a' + b'i))(u, v) = ((aa' - bb') + (ab' + a'b)i)(u, v) = (aa' - bb')(u, v) + (ab' + a'b)(-v, u)$$
$$= (aa'u - bb'u - ab'v - a'bv,\ aa'v - bb'v + ab'u + a'bu).$$
Meanwhile,
$$(a + bi)((a' + b'i)(u, v)) = (a + bi)(a'u - b'v, a'v + b'u) = a(a'u - b'v, a'v + b'u) + b(-(a'v + b'u), a'u - b'v)$$
$$= (aa'u - bb'u - ab'v - a'bv,\ aa'v - bb'v + ab'u + a'bu).$$
Since these two expressions are equal, multiplication by complex scalars is associative. It remains to prove the distributive properties. Take $(u_1, w_1), (u_2, w_2) \in W$ and $a, b \in \mathbf{R}$. Then
$$(a + bi)((u_1, w_1) + (u_2, w_2)) = (a + bi)(u_1 + u_2, w_1 + w_2) = a(u_1 + u_2, w_1 + w_2) + b(-(w_1 + w_2), u_1 + u_2)$$
$$= (au_1 - bw_1 + au_2 - bw_2,\ aw_1 + bu_1 + aw_2 + bu_2) = ((au_1, aw_1) + (-bw_1, bu_1)) + ((au_2, aw_2) + (-bw_2, bu_2))$$
$$= (a + bi)(u_1, w_1) + (a + bi)(u_2, w_2).$$
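The complex scalar action on $W = V \times V$ can be exercised numerically as well. A minimal sketch, assuming NumPy, with $V = \mathbf{R}^3$ and randomly chosen data:

```python
import numpy as np

def cmul(z, pair):
    """(a + bi)(u, v) = (au - bv, av + bu), the rule defining W = V x V."""
    u, v = pair
    return (z.real * u - z.imag * v, z.real * v + z.imag * u)

rng = np.random.default_rng(1)
u, v = rng.normal(size=3), rng.normal(size=3)
z1, z2 = 2.0 - 1.0j, 0.5 + 3.0j

# Associativity of complex scalar multiplication, as proved above:
lhs = cmul(z1 * z2, (u, v))
rhs = cmul(z1, cmul(z2, (u, v)))
assert all(np.allclose(s, t) for s, t in zip(lhs, rhs))
```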

Now take a second pair of real numbers $a', b'$. Then
$$((a + bi) + (a' + b'i))(u, w) = ((a + a') + (b + b')i)(u, w) = (a + a')(u, w) + (b + b')(-w, u)$$
$$= a(u, w) + a'(u, w) + b(-w, u) + b'(-w, u) = (a(u, w) + b(-w, u)) + (a'(u, w) + b'(-w, u)) = (a + bi)(u, w) + (a' + b'i)(u, w).$$
These two calculations prove that scalar multiplication by complex numbers distributes over addition, and together with the previous facts, this shows that $W$ is a vector space over the complex numbers.

(2) To show that $((e_1, 0), \ldots, (e_n, 0))$ forms a basis for $W$ we first show that this list spans, and then show that it is linearly independent. Take $(u, w) \in W$. Then, since $(e_1, \ldots, e_n)$ is a basis for $V$ and $u, w \in V$, we can find real scalars $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ so that $u = a_1 e_1 + \cdots + a_n e_n$ and $w = b_1 e_1 + \cdots + b_n e_n$. But then
$$(a_1 + b_1 i)(e_1, 0) + \cdots + (a_n + b_n i)(e_n, 0) = (a_1 e_1 - b_1 0, a_1 0 + b_1 e_1) + \cdots + (a_n e_n - b_n 0, a_n 0 + b_n e_n)$$
$$= (a_1 e_1 + \cdots + a_n e_n,\ b_1 e_1 + \cdots + b_n e_n) = (u, w).$$
Hence $((e_1, 0), \ldots, (e_n, 0))$ spans all of $W$. To show that this list is linearly independent, suppose there exist real numbers $c_1, \ldots, c_n, d_1, \ldots, d_n$ with $\sum_{k=1}^n (c_k + d_k i)(e_k, 0) = (0, 0)$. Then $\sum_{k=1}^n (c_k e_k, d_k e_k) = (0, 0)$, so in $V$ we have that $\sum_{k=1}^n c_k e_k = 0$ and $\sum_{k=1}^n d_k e_k = 0$. But $(e_1, \ldots, e_n)$ is linearly independent in $V$, so we must have $c_1 = \cdots = c_n = d_1 = \cdots = d_n = 0$. Hence $((e_1, 0), \ldots, (e_n, 0))$ is independent in $W$. Thus it is a basis.

(3) Since $T$ maps $V$ to $V$, we have $S(v_1, v_2) = (Tv_1, Tv_2) \in W$, so $S$ maps $W$ to $W$. We show that it is linear. Take $(u_1, w_1), (u_2, w_2) \in W$ and $a, b \in \mathbf{R}$. Then
$$S((u_1, w_1) + (u_2, w_2)) = S(u_1 + u_2, w_1 + w_2) = (T(u_1 + u_2), T(w_1 + w_2)) = (Tu_1 + Tu_2, Tw_1 + Tw_2) = (Tu_1, Tw_1) + (Tu_2, Tw_2) = S(u_1, w_1) + S(u_2, w_2).$$
Also,
$$S((a + bi)(u_1, w_1)) = S(au_1 - bw_1, aw_1 + bu_1) = (T(au_1 - bw_1), T(aw_1 + bu_1)) = (aTu_1 - bTw_1, aTw_1 + bTu_1) = (a + bi)(Tu_1, Tw_1) = (a + bi)S(u_1, w_1).$$
Together the last two equations show that $S$ is linear.
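In the same spirit, with $T$ realized as a random real matrix on $V = \mathbf{R}^3$, one can check numerically that $S(v_1, v_2) = (Tv_1, Tv_2)$ is homogeneous with respect to the complex scalar action; cmul below repeats the rule from the previous sketch, again assuming NumPy:

```python
import numpy as np

def cmul(z, pair):
    """(a + bi)(u, v) = (au - bv, av + bu), as in part (1)."""
    u, v = pair
    return (z.real * u - z.imag * v, z.real * v + z.imag * u)

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))                   # a real operator on V = R^3
S = lambda pair: (T @ pair[0], T @ pair[1])   # S(v1, v2) = (T v1, T v2)

u, v = rng.normal(size=3), rng.normal(size=3)
z = 2.0 - 1.0j
# Complex homogeneity S(z (u, v)) = z S(u, v), matching the computation above:
lhs, rhs = S(cmul(z, (u, v))), cmul(z, S((u, v)))
assert all(np.allclose(s, t) for s, t in zip(lhs, rhs))
```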

For the matrix computation, suppose that $Te_k = a_{1k} e_1 + \cdots + a_{nk} e_n$. Note that the $a_{ik}$ are real. Then
$$S(e_k, 0) = (Te_k, T0) = (Te_k, 0) = (a_{1k} e_1, 0) + \cdots + (a_{nk} e_n, 0) = (a_{1k} + 0i)(e_1, 0) + \cdots + (a_{nk} + 0i)(e_n, 0).$$
Thus the $k$th column of $\mathcal{M}(T, (e_1, \ldots, e_n))$ is $\begin{pmatrix} a_{1k} \\ \vdots \\ a_{nk} \end{pmatrix}$ and the $k$th column of $\mathcal{M}(S, ((e_1, 0), \ldots, (e_n, 0)))$ is $\begin{pmatrix} a_{1k} + 0i \\ \vdots \\ a_{nk} + 0i \end{pmatrix}$. This holds for every $k$, so the two matrices are the same.

Suppose that $\lambda \in \mathbf{R}$ is an eigenvalue of $T$ with associated non-zero eigenvector $v$. Then $S(v, 0) = (Tv, T0) = (\lambda v, 0) = (\lambda + 0i)(v, 0)$, so $\lambda$ is also an eigenvalue of $S$. Suppose instead that $\lambda$ is a real eigenvalue of $S$ with non-zero eigenvector $(u, w)$. Then $S(u, w) = (Tu, Tw) = (\lambda + 0i)(u, w) = (\lambda u, \lambda w)$. We must have one of $u, w$ non-zero, say $u$. Then $Tu = \lambda u$, so $\lambda$ is an eigenvalue of $T$.

Before considering the case of complex eigenvalues, we prove the claims made in the hint regarding the conjugation map $c(v_1, v_2) = (v_1, -v_2)$. We have $c(S(v_1, v_2)) = c(Tv_1, Tv_2) = (Tv_1, -Tv_2)$ and $S(c(v_1, v_2)) = S(v_1, -v_2) = (Tv_1, T(-v_2)) = (Tv_1, -Tv_2)$. Thus indeed $c(S(v_1, v_2)) = S(c(v_1, v_2))$. Also, for $\lambda = a + bi$ with $a, b \in \mathbf{R}$, we calculate $c(\lambda(v_1, v_2)) = c((a + bi)(v_1, v_2)) = c(av_1 - bv_2, av_2 + bv_1) = (av_1 - bv_2, -av_2 - bv_1)$ and $\bar{\lambda}\,c(v_1, v_2) = (a - bi)(v_1, -v_2) = (av_1 - bv_2, -av_2 - bv_1)$, so $c(\lambda(v_1, v_2)) = \bar{\lambda}\,c(v_1, v_2)$. Also, $c((v_1, v_2) + (v_1', v_2')) = c(v_1 + v_1', v_2 + v_2') = (v_1 + v_1', -v_2 - v_2') = c(v_1, v_2) + c(v_1', v_2')$ and, for $r$ real, $r(v_1, v_2) = (rv_1, rv_2)$, so $c(r(v_1, v_2)) = (rv_1, -rv_2) = r\,c(v_1, v_2)$. Hence $c$ is real-linear. Thus we have checked the hint.

The second statement that we are to prove, that $(v_1, v_2) \in \operatorname{null}(S - \lambda I)^k$ only if $c(v_1, v_2) \in \operatorname{null}(S - \bar{\lambda} I)^k$, contains the first statement ($\lambda$ an eigenvalue only if $\bar{\lambda}$ is) in the case $k = 1$, so we only prove the more general claim. Suppose $(S - \lambda I)^k (v_1, v_2) = (0, 0)$. Then $(0, 0) = c((S - \lambda I)^k (v_1, v_2))$. We show that $c((S - \lambda I)^k (v_1, v_2)) = (S - \bar{\lambda} I)^k c(v_1, v_2)$, which suffices to prove the result, since then $(S - \bar{\lambda} I)^k c(v_1, v_2) = (0, 0)$. The proof is by induction on $k$. In the base case $k = 0$ it is certainly true that $c((S - \lambda I)^0 (v_1, v_2)) = c(v_1, v_2) = (S - \bar{\lambda} I)^0 c(v_1, v_2)$. Moreover, we know from the hint that $c(S(v_1, v_2)) = S(c(v_1, v_2))$ and $c(\lambda I (v_1, v_2)) = \bar{\lambda} I (c(v_1, v_2))$, so that, for all $(v_1, v_2) \in W$, $c((S - \lambda I)(v_1, v_2)) = (S - \bar{\lambda} I)(c(v_1, v_2))$ (we have used the real-linearity of $c$). Assuming inductively that $c((S - \lambda I)^k (v_1, v_2)) = (S - \bar{\lambda} I)^k c(v_1, v_2)$, we have that
$$c((S - \lambda I)^{k+1} (v_1, v_2)) = (S - \bar{\lambda} I)\,c((S - \lambda I)^k (v_1, v_2)) = (S - \bar{\lambda} I)(S - \bar{\lambda} I)^k c(v_1, v_2) = (S - \bar{\lambda} I)^{k+1} c(v_1, v_2).$$
Thus our claim holds by induction, which completes the proof.
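Finally, a quick numerical illustration of the conjugate-pairing claim, assuming NumPy: identifying $(v_1, v_2)$ with $v_1 + i v_2$, $S$ acts as the real matrix $T$ (whose matrix equals $\mathcal{M}(S)$ as shown above), and the non-real eigenvalues of a real matrix occur in conjugate pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3))          # a real matrix, so M(S) = M(T) as shown above
eigenvalues = np.linalg.eigvals(T)

# For every eigenvalue lambda of S, its conjugate is also (numerically) an eigenvalue:
for lam in eigenvalues:
    assert np.isclose(eigenvalues, lam.conjugate()).any()
```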