Math 113 Final Exam: Solutions

Thursday, June 11, 2013, 3:30-6:30pm.

1. (25 points total) Let $P_2(\mathbb{R})$ denote the real vector space of polynomials of degree $\le 2$. Consider the following inner product on $P_2(\mathbb{R})$:
$$\langle p, q \rangle := \frac{1}{\sqrt{2}} \int_{-1}^{1} p(x)\,q(x)\,dx.$$

(a) (10 points) Use the Gram-Schmidt method to find an orthonormal basis of $P_2(\mathbb{R})$.

Solution: Denote $c = \frac{1}{\sqrt{2}}$. Let's begin with the basis $(v_1, v_2, v_3) = (1, x, x^2)$. We'll use the following facts: $\int_{-1}^{1} x^{2k+1}\,dx = 0$ for any odd exponent $2k+1$, as $x^{2k+1}$ is an odd function, and $\int_{-1}^{1} x^{2k}\,dx = \frac{x^{2k+1}}{2k+1}\Big|_{-1}^{1} = \frac{2}{2k+1}$. In particular, this implies that $\langle x^2, x \rangle = \langle x, 1 \rangle = 0$. Also, note that the norm $\|1\| = \sqrt{\langle 1, 1 \rangle} = \sqrt{2/\sqrt{2}} = 2^{1/4}$.

Step 1. Set $e_1 = 1/\|1\| = k_1 \cdot 1$, where $k_1 = 1/2^{1/4}$.

Step 2. Set $f_2 = v_2 - \langle v_2, e_1 \rangle e_1 = x - \langle x, k_1 \cdot 1 \rangle k_1 \cdot 1 = x$, because $\int_{-1}^{1} x\,dx = 0$. Now normalize: first compute $\|f_2\|^2 = \frac{1}{\sqrt{2}} \int_{-1}^{1} x^2\,dx = \frac{\sqrt{2}}{3}$, so $e_2 = f_2/\|f_2\| = k_2 x$, where $k_2 = \sqrt{3}/2^{1/4}$. Finally,

Step 3. Set
$$f_3 = v_3 - \langle v_3, e_2 \rangle e_2 - \langle v_3, e_1 \rangle e_1 = x^2 - \langle x^2, k_2 x \rangle k_2 x - \langle x^2, k_1 \cdot 1 \rangle k_1 \cdot 1 = x^2 - k_1^2 \left( \frac{1}{\sqrt{2}} \int_{-1}^{1} x^2\,dx \right) \cdot 1 = x^2 - \frac{1}{3},$$
using $\langle x^2, x \rangle = 0$ and $k_1^2 = 1/\sqrt{2}$. Then normalize: first compute
$$\|f_3\|^2 = c \int_{-1}^{1} x^4\,dx - \frac{2}{3}\, c \int_{-1}^{1} x^2\,dx + \frac{1}{9}\, c \int_{-1}^{1} dx = \frac{2c}{5} - \frac{4c}{9} + \frac{2c}{9} = \frac{8c}{45} = \frac{4\sqrt{2}}{45}.$$
Letting $1/k_3$ be the square root of this number, we set $e_3 = f_3/\|f_3\| = k_3 \left( x^2 - \frac{1}{3} \right)$.

The result $(e_1, e_2, e_3)$ is an orthonormal basis, by the Gram-Schmidt method.

Remarks: The Gram-Schmidt formula, properly applied (but not necessarily simplified), received full credit. A common mistake (only worth 1 or 2 points off) was to assume that the vector $1$ has norm $1$, which it does not.
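Since the arithmetic above is easy to slip on, here is a quick symbolic cross-check, a minimal sketch assuming sympy is available (it is not part of the graded solution); the helper `ip` below encodes the inner product defined in the problem.

```python
# Run Gram-Schmidt on (1, x, x^2) under <p,q> = (1/sqrt(2)) * int_{-1}^{1} p q dx
# and confirm the result is orthonormal.
import sympy as sp

x = sp.symbols('x')
c = 1 / sp.sqrt(2)

def ip(p, q):
    """The inner product from Problem 1."""
    return c * sp.integrate(p * q, (x, -1, 1))

ortho = []
for v in [sp.Integer(1), x, x**2]:
    f = v - sum(ip(v, e) * e for e in ortho)          # subtract projections
    ortho.append(sp.simplify(f / sp.sqrt(ip(f, f))))  # normalize

print(ortho)
# The Gram matrix <e_i, e_j> should be the 3x3 identity:
print(sp.Matrix(3, 3, lambda i, j: sp.simplify(ip(ortho[i], ortho[j]))))
```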

(b) (5 points) Find an isomorphism $T : P_2(\mathbb{R}) \to \mathbb{R}^3$ such that
$$\langle p, q \rangle = \langle Tp, Tq \rangle_{\mathrm{Eucl}}, \qquad (1)$$
where $\langle \cdot, \cdot \rangle_{\mathrm{Eucl}}$ denotes the Euclidean dot product on $\mathbb{R}^3$ (you do not need to prove that this is an isomorphism, but you will need to verify (1)).

Solution: Let $(e_1, e_2, e_3)$ be as above. Then define $T : P_2(\mathbb{R}) \to \mathbb{R}^3$ by sending $e_i$ to the $i$th standard basis vector $\tilde{e}_i$ (here we're using $\tilde{e}_i$ because the symbol $e_i$ is already taken). Note that $T$ sends an orthonormal basis to an orthonormal basis, so by Axler Chapter 7 it is an isometry, which in particular implies that $\langle Tu, Tv \rangle_{\mathrm{Eucl}} = \langle u, v \rangle$ for any pair of vectors $u, v$ (by the same lemma in Axler). Alternatively, one could check this directly.

(c) (10 points) Find the polynomial $p(x) \in P_2(\mathbb{R})$ which best approximates the function $f(x) = x^3$ on $[-1, 1]$, in the sense that it minimizes $\|p(x) - f(x)\|$ (using the above inner product).

Solution: The answer is given by the orthogonal projection $P_U$ of $x^3$ onto the subspace $U = P_2(\mathbb{R})$ of $P_3(\mathbb{R})$. One way to compute this orthogonal projection is as
$$\langle x^3, e_1 \rangle e_1 + \langle x^3, e_2 \rangle e_2 + \langle x^3, e_3 \rangle e_3 = 0 + \left( c\, k_2 \int_{-1}^{1} x^4\,dx \right) (k_2 x) + 0 = \frac{2 c k_2^2}{5}\, x = 2 \left( \frac{1}{\sqrt{2}} \right) \left( \frac{3}{\sqrt{2}} \right) \left( \frac{1}{5} \right) x = \frac{3}{5}\, x,$$
where the first and last terms are zero because they are integrals of odd functions over the interval $[-1, 1]$.

Remarks: A common approach here was to simply write out the expression $\|p(x) - x^3\|^2$ for an arbitrary polynomial $p(x) = a_0 + a_1 x + a_2 x^2$ and then to try to minimize this expression in $a_0$, $a_1$, and $a_2$. While such a method is certainly valid, it is much more computationally difficult (and as a result, very few such approaches succeeded). On the other hand, simply stating the Axler result that orthogonal projection provides the correct answer, along with a correct formula for orthogonal projection, was worth most of the points.

2. (35 points total; 7 points each) Prove or disprove. For each of the following statements, say whether the statement is True or False. Then prove the statement if it is true, or disprove it (find a counterexample with justification) if it is false. (Note: simply stating True or False will receive no credit.)
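As a cross-check of part (c), the minimization approach can also be carried out symbolically (a sketch assuming sympy, not part of the solution): the best approximation $p$ is characterized by $x^3 - p$ being orthogonal to $1, x, x^2$, which gives a small linear system.

```python
# Solve <x^3 - p, x^j> = 0 for j = 0, 1, 2; this should recover p = (3/5) x.
import sympy as sp

x, a0, a1, a2 = sp.symbols('x a0 a1 a2')
c = 1 / sp.sqrt(2)
ip = lambda p, q: c * sp.integrate(p * q, (x, -1, 1))

p = a0 + a1*x + a2*x**2
eqs = [sp.Eq(ip(x**3 - p, x**j), 0) for j in range(3)]
print(p.subs(sp.solve(eqs, [a0, a1, a2])))   # expected output: 3*x/5
```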

(a) If $V$ is an inner product space and $S : V \to V$ is an isometry, then $S^* = S$.

Solution: False. All that is guaranteed is that $S^* = S^{-1}$. For an example, consider the isometry $S : \mathbb{C} \to \mathbb{C}$ given by multiplication by $i$. Then $S^*$ is multiplication by $-i$, which is not equal to $S$.

(b) If $V$ is a finite-dimensional inner product space, and $T : V \to V$ is a map satisfying $T^* = p(T)$ for some polynomial $p(z)$, then $\|Tv\| = \|T^*v\|$ for every $v \in V$.

Solution: True. We check first that $T$ is normal: note that $T^*T = p(T)\,T = T\,p(T) = TT^*$, because any two polynomials in $T$ commute. Then, by a lemma in Axler, normality of $T$ is equivalent to $Tv$ and $T^*v$ having the same norm for every $v$.

(c) If $S$ and $T$ are two nilpotent linear operators on a finite-dimensional vector space $V$, and $ST = TS$, then $S + T$ is nilpotent.

Solution: True. Suppose that $S^k = 0$ and $T^l = 0$ for some $k, l$. Then, using the binomial expansion,
$$(S + T)^{k+l} = \sum_{i=0}^{k+l} \binom{k+l}{i} S^i T^{k+l-i},$$
where we've critically used the fact that $ST = TS$ to equate expressions like $SSTS$ and $SSST$. But note that in each term of the form $S^i T^{k+l-i}$, if $i < k$, then $k + l - i > l$. Namely, either the exponent of $S$ is at least $k$ or the exponent of $T$ is at least $l$. Thus, each term is $0$.

(d) If $V$ is an inner product space and $T : V \to V$ is self-adjoint, then for any basis $v = (v_1, \ldots, v_n)$ of $V$ the matrices of $T$ and $T^*$ with respect to $v$ are conjugate transposes, i.e., $\mathcal{M}(T, v) = \overline{\mathcal{M}(T, v)^T}$.

Solution: False; this is only guaranteed with respect to orthonormal bases. For a counterexample, let $T : \mathbb{R}^2 \to \mathbb{R}^2$ send $e_1$ to $e_2$ and $e_2$ to $e_1$ (where $e_i$ are the standard basis vectors). The matrix of $T$ with respect to the orthonormal basis $(e_1, e_2)$ is
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$
which is equal to its conjugate transpose, hence $T$ is self-adjoint. However, with respect to the basis $v = (v_1, v_2) = (e_1 + e_2, e_2)$, $T$ maps $v_1$ to $v_1$ and $v_2$ to $v_1 - v_2$. Thus, the matrix of $T$ with respect to this basis is
$$\mathcal{M}(T, v) = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix},$$
which is not equal to its conjugate transpose.
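The 2(d) counterexample can be checked mechanically (a sketch assuming sympy, not part of the solution): the matrix of $T$ in the basis $v$ is obtained by conjugating by the change-of-basis matrix whose columns are the $v_i$.

```python
# M(T, v) = B^{-1} A B should come out as [[1, 1], [0, -1]], not symmetric.
import sympy as sp

A = sp.Matrix([[0, 1], [1, 0]])   # M(T) in the standard (orthonormal) basis
B = sp.Matrix([[1, 0], [1, 1]])   # columns: v1 = e1 + e2, v2 = e2
M = B.inv() * A * B               # M(T, v), by change of basis
print(M)                          # Matrix([[1, 1], [0, -1]])
print(M == M.T)                   # False: not equal to its (conjugate) transpose
```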

(e) If $V$ is a finite-dimensional inner product space and $T : V \to W$ is a linear map such that $T^* : W \to V$ is injective, then $T$ is surjective.

Solution: True. Note that by Axler, $\ker T^* = (\operatorname{im} T)^{\perp}$. Now, if $T^*$ is injective, then the former space is $0$. This means $(\operatorname{im} T)^{\perp} = 0$, which implies $\operatorname{im} T = W$, i.e. $T$ is surjective.

3. (30 points total)

(a) (5 points) State the Jordan Normal Form theorem for linear maps $T : V \to V$, where $V$ is a finite-dimensional complex vector space.

Solution: If $V$ is a finite-dimensional complex vector space and $T : V \to V$ is a linear operator, then there exists a basis $v = (v_1, \ldots, v_n)$ of $V$, called a Jordan basis, such that the matrix of $T$ with respect to $v$ is block-diagonal, i.e. of the form
$$\begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{pmatrix},$$
where each $A_i$ is a square upper triangular matrix of the form
$$A_i = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & & & \lambda & 1 \\ 0 & 0 & \cdots & 0 & \lambda \end{pmatrix},$$
where $\lambda$ is some eigenvalue of $T$.
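As an aside (not part of the solution), sympy can produce such a Jordan basis and block matrix directly; the $3 \times 3$ matrix below is a made-up example, and `jordan_form()` returns $P$ and $J$ with $A = P J P^{-1}$.

```python
# A has eigenvalues 2, 2, 3 with a 1-dimensional 2-eigenspace, so its
# Jordan form has one 2x2 block for lambda = 2 and one 1x1 block for lambda = 3.
import sympy as sp

A = sp.Matrix([[2, 1, 1],
               [0, 2, 0],
               [0, 0, 3]])
P, J = A.jordan_form()
print(J)
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(3, 3)
```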

(b) (15 points) Suppose $T : \mathbb{C}^7 \to \mathbb{C}^7$ is a linear map with characteristic polynomial $q_T(z) = (z-2)^2 (z-3)^2 (z-4)^3$. Suppose also that $\dim \ker(T - 2I) = \dim \ker(T - 4I) = 1$ and $\dim \ker(T - 3I) = 2$. Find a matrix which is a Jordan normal form for $T$. Be sure to justify your answer.

Solution: By Axler, this form for the characteristic polynomial implies that $\mathbb{C}^7$ decomposes as $U_2 \oplus U_3 \oplus U_4$, where $U_\lambda$ denotes the generalized $\lambda$-eigenspace of $T$. Moreover, by Axler, the exponent of $(z - \lambda)$ in the characteristic polynomial is the dimension of $U_\lambda$, so $\dim U_2 = \dim U_3 = 2$ and $\dim U_4 = 3$. Let $W_\lambda$ denote the $\lambda$-eigenspace of $T$, so $W_\lambda \subseteq U_\lambda$.

By JNF, there exists a Jordan basis for $T$, with associated matrix a collection of Jordan blocks. We claim that the number of Jordan blocks with a given $\lambda$ on the diagonal is in fact the dimension of the eigenspace $W_\lambda$. Namely, for each Jordan block, the basis vector corresponding to the top left corner of that Jordan block is a genuine eigenvector. Moreover, by explicit computation, it is the only basis vector within that given Jordan block in the kernel of $T - \lambda I$. More explicitly, if $v_1, \ldots, v_k$ is a substring of a basis of $V$ corresponding to a single $k \times k$ Jordan block with $\lambda$ on the diagonal, then on this collection of vectors $T - \lambda I$, whose corresponding matrix is the same-sized Jordan block with $0$'s on the diagonal, acts as
$$v_1 \mapsto 0, \quad v_2 \mapsto v_1, \quad \ldots, \quad v_k \mapsto v_{k-1},$$
and thus $T - \lambda I$ has one-dimensional kernel on $\operatorname{span}(v_1, \ldots, v_k)$. On all of $V$, the kernel of $T - \lambda I$ is the direct sum of the kernels of $T - \lambda I$ on each subspace corresponding to a $\lambda$-Jordan block, each of which is one-dimensional. Thus, the total dimension of $\ker(T - \lambda I)$ is the number of $\lambda$-Jordan blocks.

Let us now examine which Jordan blocks must appear in a Jordan matrix for $T$, for each eigenvalue $\lambda$. First consider the cases $\lambda = 2$ and $\lambda = 4$, for which $\dim W_\lambda = 1$. Using the above claim, we conclude there can be only one Jordan block for each such $\lambda$. For $\lambda = 2$, since $\dim U_2 = 2$, we conclude there is one $2 \times 2$ Jordan block with $2$ on the diagonal. Similarly, since $\dim U_4 = 3$, we conclude that there is one $3 \times 3$ Jordan block with $4$ on the diagonal. Now, for $\lambda = 3$: since $\dim W_3 = 2$, there must be two Jordan blocks with $3$'s on the diagonal. Since the total dimension of $U_3$, the sum of the sizes of all $3$-Jordan blocks, is $2$, both eigenvalue-$3$ blocks must be $1 \times 1$. Thus, up to reordering the Jordan blocks, the Jordan matrix for $T$ looks like
$$\begin{pmatrix}
2 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 4 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 4 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}.$$

Remarks: A number of students thought $\dim \ker(T - \lambda I)$ was the maximal size of a Jordan block, which led to an incorrect answer on this part. As a simple example to see why this is not the case, consider a linear transformation $T : \mathbb{R}^3 \to \mathbb{R}^3$ whose matrix with respect to some basis is
$$3I = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}.$$
Note that this matrix is a collection of three $1 \times 1$ Jordan blocks, so the maximal size of a Jordan block is $1$. However, the dimension of the $3$-eigenspace $\ker(T - 3I)$ is $3$.

Also, it was possible to solve this problem without verifying the entire claim, just by observing that if there are, e.g., $k$ $\lambda$-Jordan blocks, then there are at least $k$ linearly independent $\lambda$-eigenvectors (and then using process of elimination).
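A consistency check of this answer (a sketch assuming sympy, not part of the solution): the proposed Jordan matrix should reproduce both the given characteristic polynomial and the given eigenspace dimensions.

```python
# Verify the charpoly and that dim ker(J - lam*I) = 1, 2, 1 for lam = 2, 3, 4.
import sympy as sp

z = sp.symbols('z')
J = sp.diag(sp.Matrix([[2, 1], [0, 2]]), 3, 3,
            sp.Matrix([[4, 1, 0], [0, 4, 1], [0, 0, 4]]))

print(sp.factor(J.charpoly(z).as_expr()))   # (z - 2)**2*(z - 3)**2*(z - 4)**3
for lam in (2, 3, 4):
    print(lam, len((J - lam * sp.eye(7)).nullspace()))   # 2 1, 3 2, 4 1
```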

(c) (10 points) What is the minimal polynomial of $T$?

Solution: The minimal polynomial is
$$\prod_{\lambda \text{ an eigenvalue of } T} (z - \lambda)^{k_\lambda},$$
where $k_\lambda$ is the size of the largest Jordan block with $\lambda$ on the diagonal in any Jordan matrix for $T$. Using the Jordan form determined in the previous part, we can read off that the minimal polynomial is $(z-2)^2 (z-3) (z-4)^3$.
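This, too, can be checked directly (a sketch assuming sympy, not part of the solution): the candidate polynomial annihilates the Jordan matrix from part (b), while lowering any exponent by one does not.

```python
# f(a, b, c) computes (J - 2I)^a (J - 3I)^b (J - 4I)^c.
import sympy as sp

J = sp.diag(sp.Matrix([[2, 1], [0, 2]]), 3, 3,
            sp.Matrix([[4, 1, 0], [0, 4, 1], [0, 0, 4]]))
I7, Z = sp.eye(7), sp.zeros(7, 7)
f = lambda a, b, c: (J - 2*I7)**a * (J - 3*I7)**b * (J - 4*I7)**c

print(f(2, 1, 3) == Z)                                       # True
print([f(1, 1, 3) == Z, f(2, 0, 3) == Z, f(2, 1, 2) == Z])   # all False
```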

4. (20 points total; 10 points each) Suppose that $(V, \langle \cdot, \cdot \rangle)$ is a finite-dimensional inner product space, and $T : V \to V$ is a positive linear operator with all eigenvalues strictly greater than $0$.

(a) (10 points) Prove that
$$\langle u, v \rangle' := \langle Tu, v \rangle \qquad (2)$$
defines a new inner product on $V$.

Solution: Note first that since $T$ is positive with all eigenvalues strictly greater than zero, there exists a unique square root $P$ which is positive with all eigenvalues strictly greater than zero. Then
$$\langle Tu, v \rangle = \langle PPu, v \rangle = \langle Pu, Pv \rangle,$$
with $P$ positive and all eigenvalues $> 0$. Now we can quickly verify the conditions of being an inner product (it's not strictly necessary to introduce $P$, but our solution will use it):

Positivity: Note that $\langle v, v \rangle' = \langle Pv, Pv \rangle = \|Pv\|^2 \ge 0$. (Alternatively, this follows by definition of $T$ being positive.)

Definiteness: Since $P$ is positive, it is self-adjoint, so it admits an orthonormal eigenbasis $e_1, \ldots, e_n$. Let $(\lambda_1, \ldots, \lambda_n)$ be the eigenvalues; these are strictly greater than zero by construction of $P$. Then note that for any non-zero $v = \sum_i a_i e_i$,
$$\|Pv\|^2 = \sum_i |a_i|^2 \lambda_i^2,$$
which is strictly positive, as each $\lambda_i$ is positive and at least one $a_i$ is non-zero.

Linearity in the first slot: This follows from linearity of $P$ and linearity in the first slot of the original inner product.

Conjugate symmetry: This follows from conjugate symmetry of the original inner product, together with either self-adjointness of $T$ or the symmetry of the expression $\langle Pu, Pv \rangle$.

Remarks: It was not necessary to take a square root to prove that $\langle \cdot, \cdot \rangle'$ is an inner product; one can simply use self-adjointness of $T$, the fact that $T$ is linear, the positivity condition, the fact that all eigenvalues are strictly positive, and the fact that $\langle \cdot, \cdot \rangle$ is an inner product to obtain the above properties for $\langle \cdot, \cdot \rangle'$. This exercise did not require using any bases, although some students successfully took this approach.
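A numerical spot-check of part (a) on $\mathbb{R}^4$ (a sketch assuming numpy; a check, not a proof): for a randomly generated positive definite $T$, the pairing $\langle u, v \rangle' = \langle Tu, v \rangle$ is symmetric, linear in the first slot, and positive on nonzero vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T = A @ A.T + np.eye(4)              # positive definite: eigenvalues >= 1 > 0

ip = lambda u, v: (T @ u) @ v        # the modified inner product <u, v>'

u, v, w = rng.standard_normal((3, 4))
print(np.isclose(ip(u, v), ip(v, u)))                          # symmetry
print(np.isclose(ip(2*u + 3*w, v), 2*ip(u, v) + 3*ip(w, v)))   # linearity
print(ip(u, u) > 0)                                            # definiteness
```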

(b) (10 points) Suppose that $T$ is as above, and $S : V \to V$ is a self-adjoint linear map (with respect to the original inner product $\langle \cdot, \cdot \rangle$). Prove that $ST$ is diagonalizable (i.e. that it admits a basis of eigenvectors).

Hint/warning: $ST$ is not self-adjoint with respect to the original inner product $\langle \cdot, \cdot \rangle$; in fact, if $*$ denotes taking adjoints with respect to the original inner product, note that $(ST)^* = T^* S^* = TS$, which is not in general equal to $ST$.

Solution: If $ST$ is self-adjoint with respect to the modified inner product $\langle \cdot, \cdot \rangle'$, then by the Spectral Theorem $ST$ has an orthonormal eigenbasis (with respect to the modified inner product). This in particular implies that $ST$ has an eigenbasis, as desired. So it suffices to verify that $ST$ is self-adjoint with respect to $\langle \cdot, \cdot \rangle'$, which follows from a computation:
$$\langle STu, v \rangle' = \langle T(STu), v \rangle = \langle (TS)(Tu), v \rangle = \langle Tu, S^* T^* v \rangle = \langle Tu, STv \rangle = \langle u, STv \rangle',$$
where the second-to-last equality follows from $S$ and $T$ individually being self-adjoint.
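A numerical illustration of part (b) (a sketch assuming numpy, not part of the solution): with $S$ symmetric and $T$ positive definite, $ST$ is not symmetric, yet it has a full eigenbasis, consistent with the argument above.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 4, 4))
T = A @ A.T + np.eye(4)              # positive, all eigenvalues > 0
S = B + B.T                          # self-adjoint (real symmetric)

M = S @ T
w, vecs = np.linalg.eig(M)
print(np.allclose(M, M.T))           # False: ST is not self-adjoint for <.,.>
print(np.linalg.matrix_rank(vecs))   # 4: the eigenvectors form a basis
print(np.allclose(w.imag, 0))        # True: the eigenvalues are even real
```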

Solution: We compute:
$$\begin{aligned}
T(v_1 \wedge v_2 \wedge v_3 \wedge v_4) &= Tv_1 \wedge Tv_2 \wedge Tv_3 \wedge Tv_4 \\
&= (3v_2 - v_3) \wedge (2v_4) \wedge (v_1 + v_4) \wedge (v_2 + v_3) \\
&= (3v_2 - v_3) \wedge (2v_4) \wedge v_1 \wedge (v_2 + v_3) \\
&= 3v_2 \wedge (2v_4) \wedge v_1 \wedge v_3 + (-v_3) \wedge (2v_4) \wedge v_1 \wedge v_2 \\
&= 6\, v_2 \wedge v_4 \wedge v_1 \wedge v_3 - 2\, v_3 \wedge v_4 \wedge v_1 \wedge v_2, \qquad (3)
\end{aligned}$$
where we have expanded using multilinearity and canceled using the alternating condition. Now, note that
$$v_2 \wedge v_4 \wedge v_1 \wedge v_3 = -\,v_4 \wedge v_2 \wedge v_1 \wedge v_3 = v_1 \wedge v_2 \wedge v_4 \wedge v_3 = -\,v_1 \wedge v_2 \wedge v_3 \wedge v_4,$$
where we've swapped vectors one pair at a time, with a sign flip for each swap. Similarly,
$$v_3 \wedge v_4 \wedge v_1 \wedge v_2 = -\,v_2 \wedge v_4 \wedge v_1 \wedge v_3 = v_1 \wedge v_2 \wedge v_3 \wedge v_4.$$
So the above expression becomes
$$(-6 - 2)\, v_1 \wedge v_2 \wedge v_3 \wedge v_4 = -8\, v_1 \wedge v_2 \wedge v_3 \wedge v_4,$$
hence $\det(T) = -8$.

6. (15 points) Norm bounds. Let $V$ be a finite-dimensional inner product space, and $T : V \to V$ an invertible linear map. Prove that there exists a positive real constant $c > 0$ such that
$$\|Tv\| \ge c\,\|v\|$$
for all vectors $v \in V$.

Hint: Singular Value Decomposition or Polar Decomposition may be helpful.
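A quick numerical check of the Problem 5 answer (a sketch assuming numpy, not part of the solution): the matrix of $T$, with columns recording the images of $v_1, \ldots, v_4$, should have determinant $-8$.

```python
import numpy as np

M = np.array([[ 0, 0, 1, 0],    # Tv1 = 3v2 - v3 -> column (0, 3, -1, 0)
              [ 3, 0, 0, 1],    # Tv2 = 2v4      -> column (0, 0, 0, 2)
              [-1, 0, 0, 1],    # Tv3 = v1 + v4  -> column (1, 0, 0, 1)
              [ 0, 2, 1, 0]])   # Tv4 = v2 + v3  -> column (0, 1, 1, 0)
print(round(np.linalg.det(M)))  # -8, matching the wedge-product computation
```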

Solution: As indicated during the exam, SVD is probably the simpler way to proceed. By SVD, for any $T$ there exists a pair of orthonormal bases $e_1, \ldots, e_n$ and $f_1, \ldots, f_n$ of $V$, and non-negative real numbers $s_1, \ldots, s_n$, called the singular values of $T$, such that $Te_i = s_i f_i$. Note first that if $T$ is invertible, then $T$ is injective, so each $s_i$ is strictly positive (if some $s_i = 0$, then $Te_i = 0 \cdot f_i = 0$, a contradiction to injectivity). Set $c = \min(s_1, \ldots, s_n) > 0$. Then note that for any $v \in V$, writing $v = a_1 e_1 + \cdots + a_n e_n$,
$$\|Tv\|^2 = \|a_1 s_1 f_1 + \cdots + a_n s_n f_n\|^2 = \sum_{i=1}^{n} |a_i|^2 s_i^2 \ge c^2 \sum_{i=1}^{n} |a_i|^2 = c^2 \|v\|^2,$$
where the second and fourth equalities used the Pythagorean theorem, and the inequality used the facts that every term in the sum is non-negative and each $s_i \ge c$ by definition. Taking square roots implies the desired result.

7. (20 points total; 10 points each) Composition as a map from the tensor product. Let $V$ be a finite-dimensional vector space of dimension $n$, and let $L(V)$ denote the space of linear maps from $V$ to $V$. Consider the map
$$\mathrm{comp} : L(V) \otimes L(V) \to L(V)$$
defined on pure tensors by $S \otimes T \mapsto ST$ and extended by linearity to a general element of the tensor product.

(a) Prove that comp is a linear map.
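A numerical illustration of the Problem 6 bound (a sketch assuming numpy, not part of the solution): $c$ may be taken to be the smallest singular value of $T$, and random vectors never beat it.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5))                 # generic, hence invertible
c = np.linalg.svd(T, compute_uv=False).min()    # smallest singular value

vs = rng.standard_normal((1000, 5))
ratios = np.linalg.norm(vs @ T.T, axis=1) / np.linalg.norm(vs, axis=1)
print(c, ratios.min())   # ratios.min() >= c (up to rounding), as proved
```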

Solution: By our class notes on the tensor product, it suffices to show that the map
$$\widehat{\mathrm{comp}} : L(V) \times L(V) \to L(V), \qquad (S, T) \mapsto ST,$$
is bilinear; comp is then the induced linear map on the tensor product, satisfying $\mathrm{comp}(S \otimes T) = ST$. We check that
$$\widehat{\mathrm{comp}}(aS + bS', T) = (aS + bS')T = aST + bS'T = a\,\widehat{\mathrm{comp}}(S, T) + b\,\widehat{\mathrm{comp}}(S', T),$$
verifying linearity in the first slot. Similarly,
$$\widehat{\mathrm{comp}}(S, aT + bT') = S(aT + bT') = aST + bST' = a\,\widehat{\mathrm{comp}}(S, T) + b\,\widehat{\mathrm{comp}}(S, T'),$$
verifying linearity in the second slot. (Both of these verifications use properties of composition which were discussed in class and in Axler.)

(b) Calculate $\dim \ker \mathrm{comp}$.

Solution: First, by Axler we know that $\dim L(V) = n^2$, and by the class notes we know that $\dim(L(V) \otimes L(V)) = n^2 \cdot n^2 = n^4$. Now, for any $S \in L(V)$, the pure tensor $S \otimes I$ maps to $S$ via comp; hence comp is surjective, i.e.
$$\dim \operatorname{im} \mathrm{comp} = n^2.$$
Now we can apply Rank-Nullity:
$$\dim \ker \mathrm{comp} = \dim(L(V) \otimes L(V)) - \dim \operatorname{im} \mathrm{comp} = n^4 - n^2.$$
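A concrete check of part (b) for $n = 2$ (a sketch assuming numpy, not part of the solution): in the basis of matrix units $E_{ij} \otimes E_{kl}$, comp becomes an $n^2 \times n^4$ matrix, whose rank should be $n^2 = 4$ and whose nullity should be $n^4 - n^2 = 12$.

```python
import numpy as np

n = 2
units = []                    # the matrix units E_ij, a basis of L(V)
for a in range(n * n):
    E = np.zeros((n, n))
    E[divmod(a, n)] = 1.0
    units.append(E)

# column indexed by (a, b) is the flattening of the product E_a E_b
C = np.column_stack([(Ea @ Eb).ravel() for Ea in units for Eb in units])
rank = np.linalg.matrix_rank(C)
print(C.shape, rank, C.shape[1] - rank)   # (4, 16) 4 12
```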