Math 113 Practice Final Solutions


There are 9 problems; attempt all of them. Problem 9(i) is a regular problem, but 9(ii)-(iii) are bonus problems, and they are not part of your regular score. So do them only if you have completed all other problems, as well as 9(i). This is a closed book, closed notes exam, with no calculators allowed (they shouldn't be useful anyway). You may use any theorem, proposition, etc., proved in class or in the book, provided that you quote it precisely, and provided you are not asked explicitly to prove it. Make sure that you justify your answer to each question, including the verification that all assumptions of any theorem you quote hold. Try to be brief, though. If on a problem you cannot do part (i) or (ii), you may assume its result for the subsequent parts. All unspecified vector spaces can be assumed finite dimensional unless stated otherwise. Allotted time: 3 hours. Total regular score: 160 points.

Problem 1. (20 points)
(i) Suppose V is finite dimensional and S, T ∈ L(V). Show that ST = I if and only if TS = I.
(ii) Suppose V, W are finite dimensional, S ∈ L(W, V), T ∈ L(V, W), and ST = I. Does this imply that TS = I? Prove this, or give a counterexample.
(iii) Suppose that S, T ∈ L(V) and ST is nilpotent. Show that TS is nilpotent.

Solution 1.
(i) We prove that ST = I implies TS = I; the converse follows by reversing the roles of S and T. So suppose ST = I. Then S is surjective (since for v ∈ V, v = S(Tv)) and T is injective (for Tv = 0 implies v = (ST)v = S(Tv) = 0), so as V is finite dimensional, they are both invertible. Thus S has an inverse R, so RS = I in particular. But T = IT = (RS)T = R(ST) = RI = R, so TS = RS = I as claimed.

(ii) No. Suppose V = {0}, W = F, T = 0 (V has only one element!), and Sw = 0 for all w ∈ W. Then ST = 0 = I on V = {0}, but TSw = 0 for all w ∈ W, and as W is not the trivial vector space, TS ≠ I. (While this is the lowest dimensional possibility, with dim V = 0 and dim W = 1, it is easy to add dimensions: e.g. V = F, W = F², Tv = (v, 0), S(w₁, w₂) = w₁, so STv = v for v ∈ F, while TS(w₁, w₂) = (w₁, 0), which is not the identity map.)

(iii) Suppose (ST)^k = 0. Then (TS)^{k+1} = T(ST)^k S = T·0·S = 0, so TS is nilpotent.

Problem 2. (15 points) Suppose that T ∈ L(V, W) is injective and (v₁, ..., v_n) is linearly independent in V. Show that (Tv₁, ..., Tv_n) is linearly independent in W.

Solution 2. Suppose that Σ_{j=1}^n a_j Tv_j = 0. Then T(Σ_{j=1}^n a_j v_j) = Σ_{j=1}^n a_j Tv_j = 0. As T is injective, this implies Σ_{j=1}^n a_j v_j = 0. Since (v₁, ..., v_n) is linearly independent in V, all a_j are 0. Thus, (Tv₁, ..., Tv_n) is linearly independent in W.

Problem 3. (20 points) Consider the following map ⟨·,·⟩ : R² × R² → R:

⟨x, y⟩ = (x₁ + x₂)(y₁ + y₂) + (2x₁ + x₂)(2y₁ + y₂); x = (x₁, x₂), y = (y₁, y₂).

(i) Show that ⟨·,·⟩ is an inner product on R².
(ii) Find an orthonormal basis of R² with respect to this inner product.
(iii) Let φ(x) = x₁, x = (x₁, x₂). Find y ∈ R² such that φ(x) = ⟨x, y⟩ for all x ∈ R².

Solution 3.
(i) ⟨x, x⟩ = (x₁ + x₂)² + (2x₁ + x₂)² ≥ 0, and it is equal to 0 if and only if x₁ = −x₂ and 2x₁ = −x₂, so x₁ = x₂ = 0. Thus ⟨·,·⟩ is positive definite. As x and y enter the definition symmetrically, it is clearly symmetric, and ⟨ax, y⟩ = (ax₁ + ax₂)(y₁ + y₂) + (2ax₁ + ax₂)(2y₁ + y₂) = a((x₁ + x₂)(y₁ + y₂) + (2x₁ + x₂)(2y₁ + y₂)) = a⟨x, y⟩, with a similar argument for ⟨x + x′, y⟩ = ⟨x, y⟩ + ⟨x′, y⟩.

(ii) One can apply Gram–Schmidt to the basis ((1, 0), (0, 1)). As ‖(1, 0)‖² = 1² + 2² = 5, e₁ = (1/√5)(1, 0). Then ⟨(1, 0), (0, 1)⟩ = 1·1 + 1·2 = 3, so (0, 1) − ⟨(0, 1), e₁⟩e₁ = (0, 1) − (3/5)(1, 0) = (−3/5, 1). Normalizing: ‖(−3/5, 1)‖² = (2/5)² + (−1/5)² = 1/5, so e₂ = √5(−3/5, 1). Then (e₁, e₂) is an orthonormal basis.

(iii) If (e₁, e₂) is an orthonormal basis, then y = φ(e₁)e₁ + φ(e₂)e₂. Using the basis from the previous part gives y = (1/√5)e₁ + √5(−3/5)e₂ = (2, −3). We check: ⟨(x₁, x₂), (2, −3)⟩ = (x₁ + x₂)(−1) + (2x₁ + x₂)·1 = x₁.

Problem 4. (15 points) Suppose V is a complex inner product space and T ∈ L(V). Let (e₁, e₂, ..., e_n) be an orthonormal basis of V. Show that trace(T*T) = Σ_{j=1}^n ‖Te_j‖².

Solution 4. If A ∈ L(V), then the matrix entries of A with respect to the basis (e₁, ..., e_n) are ⟨Ae_j, e_i⟩, since, as (e₁, ..., e_n) is orthonormal,

Ae_j = ⟨Ae_j, e₁⟩e₁ + ... + ⟨Ae_j, e_n⟩e_n.

Thus

trace A = trace M(A, (e₁, ..., e_n)) = Σ_{j=1}^n ⟨Ae_j, e_j⟩.

Applying this with A = T*T:

trace T*T = Σ_{j=1}^n ⟨T*Te_j, e_j⟩ = Σ_{j=1}^n ⟨Te_j, Te_j⟩ = Σ_{j=1}^n ‖Te_j‖².

Problem 5. (20 points) Find p ∈ P₃(R) such that p(0) = 0, p′(0) = 0, and

∫₀¹ |1 − p(x)|² dx

is as small as possible.
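Problem 5 is an orthogonal-projection (least-squares) computation, so before working through the pencil-and-paper solution, the answer can be previewed by solving the normal equations exactly in rational arithmetic. This sketch is an illustration added here, not part of the exam; it encodes the conditions ⟨1 − p, x²⟩ = ⟨1 − p, x³⟩ = 0 for p = ax² + bx³ using the moments ∫₀¹ x^k dx = 1/(k+1):

```python
# Problem 5 asks for p(x) = a*x^2 + b*x^3 minimizing the integral of
# (1 - p(x))^2 over [0, 1].  The minimizer (the orthogonal projection of 1
# onto span{x^2, x^3}) satisfies <1 - p, x^2> = <1 - p, x^3> = 0.
from fractions import Fraction as F

def moment(k):
    # integral of x^k over [0, 1] is 1/(k+1)
    return F(1, k + 1)

# <1 - a x^2 - b x^3, x^2> = 0:  moment(2) - a*moment(4) - b*moment(5) = 0
# <1 - a x^2 - b x^3, x^3> = 0:  moment(3) - a*moment(5) - b*moment(6) = 0
m2, m3, m4, m5, m6 = (moment(k) for k in (2, 3, 4, 5, 6))

# Solve the 2x2 system by Cramer's rule.
det = m4 * m6 - m5 * m5
a = (m2 * m6 - m3 * m5) / det
b = (m4 * m3 - m5 * m2) / det

print(a, b)  # -> 15/2 -7
```

This agrees with the closed-form answer p(x) = (15/2)x² − 7x³ derived in the solution that follows.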

Solution 5. Let U be the subspace of P₃(R) consisting of polynomials p with p(0) = 0 and p′(0) = 0. These are exactly the polynomials ax² + bx³, a, b ∈ R. Let ⟨p, q⟩ = ∫₀¹ p(x)q(x) dx be the inner product on P₃(R). As the constant function 1 ∈ P₃(R), the p ∈ U that minimizes ‖1 − p‖² (which is what we want) is the orthogonal projection P_U 1 of 1 onto U, so we just need to find this. There are different ways one can proceed.

One can find an orthonormal basis (e₁, e₂) of U using Gram–Schmidt applied to the basis (x², x³), and then P_U 1 = ⟨1, e₁⟩e₁ + ⟨1, e₂⟩e₂. Explicitly, as ∫₀¹ (x²)² dx = 1/5, e₁ = √5 x²; then, as ∫₀¹ x³·x² dx = 1/6, x³ − ⟨x³, e₁⟩e₁ = x³ − (5/6)x², and finally ∫₀¹ (x³ − (5/6)x²)² dx = 1/252, one has e₂ = 6√7 (x³ − (5/6)x²). Thus, using ∫₀¹ x² dx = 1/3 and ∫₀¹ (x³ − (5/6)x²) dx = 1/4 − 5/18 = −1/36, we deduce that

P_U 1 = (5/3)x² − 7(x³ − (5/6)x²) = (15/2)x² − 7x³.

A different way of proceeding is to work without orthonormal bases, and rather use the characterization of P_U 1 as the unique element of U with 1 − P_U 1 orthogonal to all vectors in U. So suppose P_U 1 = ax² + bx³. Then we must have ⟨1 − P_U 1, x²⟩ = 0 and ⟨1 − P_U 1, x³⟩ = 0, which give

0 = ∫₀¹ (1 − ax² − bx³)x² dx = 1/3 − a/5 − b/6,
0 = ∫₀¹ (1 − ax² − bx³)x³ dx = 1/4 − a/6 − b/7,

so 10 = 6a + 5b and 21/2 = 7a + 6b. Taking 7 times the first minus 6 times the second gives 70 − 63 = −b, so b = −7, while taking 6 times the first minus 5 times the second gives 60 − 105/2 = a, so a = 15/2. Thus, again, P_U 1 = (15/2)x² − 7x³.

Problem 6. (20 points) Suppose that S, T ∈ L(V), with S invertible.
(i) Prove that if p ∈ P(F) is a polynomial then p(STS⁻¹) = S p(T) S⁻¹.
(ii) Show that the generalized eigenspaces of STS⁻¹ are given by

null(STS⁻¹ − λI)^{dim V} = {Sv : v ∈ null(T − λI)^{dim V}}.

(iii) Show that trace(STS⁻¹) = trace(T), where trace is the operator trace (sum of eigenvalues, with multiplicity). (Assume that F = C in this part.)
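The identity in Problem 6(i) can be sanity-checked on a concrete 2×2 example before proving it in general. The matrices and the polynomial below are arbitrary choices made for this check, not part of the exam; exact rational arithmetic avoids floating-point noise:

```python
# Check p(S T S^{-1}) = S p(T) S^{-1} for p(z) = 1 + 2z + 3z^2 on 2x2 matrices.
from fractions import Fraction as F

def mat(rows):
    return [[F(x) for x in r] for r in rows]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def inv(A):
    # explicit 2x2 inverse
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

I = mat([[1, 0], [0, 1]])

def p(A):
    # p(A) = I + 2A + 3A^2
    return add(I, add(scal(2, A), scal(3, mul(A, A))))

S = mat([[2, 1], [1, 1]])   # invertible: det = 1
T = mat([[0, 1], [4, 3]])   # arbitrary
Sinv = inv(S)
conj = mul(S, mul(T, Sinv))                  # S T S^{-1}
assert p(conj) == mul(S, mul(p(T), Sinv))    # the identity of Problem 6(i)
print("conjugation identity verified")
```

The same check passes for any polynomial and any invertible S, which is exactly what part (i) asserts.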

Solution 6.
(i) Observe that (STS⁻¹)^k = ST^kS⁻¹ for all non-negative integers k. Indeed, this is immediate for k = 0, 1. In general, it is easy to prove this by induction: if it holds for some k, then

(STS⁻¹)^{k+1} = (STS⁻¹)^k STS⁻¹ = ST^kS⁻¹STS⁻¹ = ST^{k+1}S⁻¹,

providing the inductive step and proving the claim. Now, if p ∈ P(F) then p = Σ_{j=0}^m a_j z^j, a_j ∈ F, and

p(STS⁻¹) = Σ_{j=0}^m a_j (STS⁻¹)^j = Σ_{j=0}^m a_j ST^jS⁻¹ = S(Σ_{j=0}^m a_j T^j)S⁻¹ = S p(T) S⁻¹

as claimed.

(ii) We first show

{Sv : v ∈ null(T − λI)^{dim V}} ⊂ null(STS⁻¹ − λI)^{dim V}.

Indeed, let p(z) = (z − λ)^{dim V}. If v ∈ null(T − λI)^{dim V}, i.e. p(T)v = 0, then

0 = S p(T)v = S p(T) S⁻¹ Sv = p(STS⁻¹) Sv,

so Sv ∈ null(STS⁻¹ − λI)^{dim V} as claimed. As T = S⁻¹(STS⁻¹)S, the same argument also gives

{S⁻¹w : w ∈ null(STS⁻¹ − λI)^{dim V}} ⊂ null(T − λI)^{dim V}.

As S(S⁻¹w) = w, we deduce that for w ∈ null(STS⁻¹ − λI)^{dim V} there is in fact v ∈ null(T − λI)^{dim V} with Sv = w, namely v = S⁻¹w, so in summary

{Sv : v ∈ null(T − λI)^{dim V}} = null(STS⁻¹ − λI)^{dim V}.

(iii) Let λ_j, j = 1, ..., k, be the distinct eigenvalues of T with multiplicities m₁, ..., m_k, respectively; that is, dim null(T − λ_jI)^{dim V} = m_j. By what we have shown, the distinct eigenvalues of STS⁻¹ are also exactly the λ_j, the generalized eigenspaces are {Sv : v ∈ null(T − λ_jI)^{dim V}}, and as the restriction of S,

S : null(T − λ_jI)^{dim V} → null(STS⁻¹ − λ_jI)^{dim V},

is a bijection, hence an isomorphism, we deduce that these generalized eigenspaces also have dimension m_j. Thus

trace T = Σ_{j=1}^k m_j λ_j = trace(STS⁻¹).

Problem 7. (20 points) Suppose that V is a complex inner product space.
(i) Suppose T ∈ L(V) and there is a constant C such that ‖Tv‖ ≤ C‖v‖ for all v ∈ V. Show that all eigenvalues λ of T satisfy |λ| ≤ C.
(ii) Suppose now that T ∈ L(V) is normal, and all eigenvalues satisfy |λ| ≤ C. Show that ‖Tv‖ ≤ C‖v‖ for all v ∈ V.
(iii) Give an example of an operator T ∈ L(C²) for which there is a constant C such that |λ| ≤ C for all eigenvalues λ of T, but there is a vector v ∈ C² with ‖Tv‖ > C‖v‖.
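Before the solution, the phenomenon behind part (iii) can be checked mechanically for the non-normal shift on C² (the same operator used as the example in the solution below); this sketch is an added illustration, not part of the exam:

```python
# For the shift T(z1, z2) = (z2, 0) on C^2, every eigenvalue is 0 (T^2 = 0),
# yet T is not the zero operator, so no bound ||Tv|| <= C ||v|| with C = 0
# can hold.  Normality fails here, which is why part (ii) does not apply.

def T(v):
    z1, z2 = v
    return (z2, 0)

def norm(v):
    # Euclidean norm on C^2; abs() gives the modulus of a complex number
    return (abs(v[0]) ** 2 + abs(v[1]) ** 2) ** 0.5

# T^2 = 0, so T is nilpotent and 0 is the only eigenvalue:
assert T(T((3 + 1j, -2j))) == (0, 0)

v = (0, 1)
assert norm(T(v)) == 1.0 > 0 * norm(v)   # ||Tv|| > C ||v|| with C = 0
print("non-normal counterexample verified")
```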
Solution 7.
(i) Suppose that λ is an eigenvalue of T. Then there is v ∈ V, v ≠ 0, such that Tv = λv. Thus, taking norms, C‖v‖ ≥ ‖Tv‖ = ‖λv‖ = |λ|‖v‖. Dividing by ‖v‖ > 0 gives the conclusion.

(ii) Since T is normal, V has an orthonormal basis (e₁, ..., e_n) consisting of eigenvectors of T, with eigenvalues λ₁, ..., λ_n. Recall that for any v ∈ V, v = Σ_{j=1}^n ⟨v, e_j⟩e_j, and that for any a_j ∈ C, ‖Σ_{j=1}^n a_j e_j‖² = Σ_{j=1}^n |a_j|². Now we compute for v ∈ V:

‖Tv‖² = ‖T(Σ_{j=1}^n ⟨v, e_j⟩e_j)‖² = ‖Σ_{j=1}^n ⟨v, e_j⟩Te_j‖² = ‖Σ_{j=1}^n ⟨v, e_j⟩λ_j e_j‖² = Σ_{j=1}^n |⟨v, e_j⟩|² |λ_j|² ≤ C² Σ_{j=1}^n |⟨v, e_j⟩|² = C²‖v‖².

Taking positive square roots gives the conclusion.

(iii) Take any non-zero nilpotent operator: the eigenvalues are all 0, so one can take C = 0, but the operator is not the zero operator, so there is some v such that Tv ≠ 0, hence ‖Tv‖ > 0 = C‖v‖. For example, take T ∈ L(C²) given by T(z₁, z₂) = (z₂, 0); then T² = 0, hence T is nilpotent, but T(0, 1) = (1, 0) ≠ 0.

Problem 8. (20 points)
(i) Suppose that V, W are inner product spaces. Show that for all T ∈ L(V, W),

dim range T* = dim range T, dim null T* = dim null T + dim W − dim V.

(ii) Consider the map T : Rⁿ → R^{n+1}, T(x₁, ..., x_n) = (x₁, ..., x_n, 0). Find T* explicitly. What is null T*?

Solution 8.
(i) As range T is the orthocomplement of null T* in W,

dim range T + dim null T* = dim W.

As range T* is the orthocomplement of null T in V,

dim range T* + dim null T = dim V.

By the rank–nullity theorem applied to T, one has

dim range T + dim null T = dim V.

Subtracting the first of the three displayed equations from the third, we get dim null T − dim null T* = dim V − dim W, which proves the claim regarding nullspaces, while subtracting the second displayed equation from the third gives dim range T − dim range T* = 0, which proves the conclusion regarding ranges.

(ii) Recall that unless otherwise specified, we have the standard inner product on R^p, p = n, n + 1 here. For y = (y₁, ..., y_{n+1}) ∈ R^{n+1}, T*y ∈ Rⁿ is defined by the property that for all x ∈ Rⁿ,

⟨x, T*y⟩_{Rⁿ} = ⟨Tx, y⟩_{R^{n+1}} = Σ_{j=1}^{n+1} (Tx)_j y_j = Σ_{j=1}^n x_j y_j = ⟨x, (y₁, ..., y_n)⟩.

Thus, T*y = (y₁, ..., y_n). Correspondingly,

null T* = {(0, ..., 0, y_{n+1}) : y_{n+1} ∈ R}.
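The formula T*y = (y₁, ..., y_n) from Solution 8(ii) can be checked directly against the defining property ⟨Tx, y⟩ = ⟨x, T*y⟩. A sketch for n = 3, added as an illustration (the sample vectors are arbitrary):

```python
# Inclusion T : R^n -> R^{n+1}, T x = (x_1, ..., x_n, 0); claimed adjoint
# T* y = (y_1, ..., y_n).  Verify <T x, y> = <x, T* y> on sample vectors.

def T(x):
    return list(x) + [0]          # append a zero coordinate

def T_star(y):
    return list(y[:-1])           # drop the last coordinate

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [1, -2, 3]
y = [4, 5, -6, 7]
assert dot(T(x), y) == dot(x, T_star(y))   # defining property of the adjoint

# null T* is the span of the last standard basis vector:
assert T_star([0, 0, 0, 1]) == [0, 0, 0]
print("adjoint check passed")
```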

Problem 9. (Bonus problem: do it only if you have solved all other problems.) If V, W, Z are vector spaces over a field F, and F : V × W → Z, we say that F is bilinear if

F(v₁ + v₂, w) = F(v₁, w) + F(v₂, w), F(cv, w) = cF(v, w),
F(v, w₁ + w₂) = F(v, w₁) + F(v, w₂), F(v, cw) = cF(v, w)

for all v, v₁, v₂ ∈ V, w, w₁, w₂ ∈ W, c ∈ F, i.e. if F is linear in both slots.

(i) (10 points) Show that the set B(V, W; Z) of bilinear maps from V × W to Z is a vector space over F under the usual addition and multiplication by scalars:

(F + G)(v, w) = F(v, w) + G(v, w), (λF)(v, w) = λF(v, w) (v ∈ V, w ∈ W, λ ∈ F);

make sure to check that F + G and λF are indeed bilinear.

(ii) (Bonus problem: do it only if you have solved all other problems.) Recall that V′ = L(V, F) is the dual of V. For f ∈ V′, g ∈ W′, let f ⊗ g : V × W → F be the map (f ⊗ g)(v, w) = f(v)g(w) ∈ F. Show that f ⊗ g ∈ B(V, W; F), and that the map j : V′ × W′ → B(V, W; F), j(f, g) = f ⊗ g, is bilinear.

(iii) (Bonus problem: do it only if you have solved all other problems.) If V, W are finite dimensional with bases (v₁, ..., v_m), (w₁, ..., w_n), (f₁, ..., f_m) is the basis of V′ dual to (v₁, ..., v_m), and (g₁, ..., g_n) is the basis of W′ dual to (w₁, ..., w_n), let e_ij = f_i ⊗ g_j, so e_ij(v_r, w_s) = 0 if r ≠ i or s ≠ j, and e_ij(v_r, w_s) = 1 if r = i and s = j. Show that {e_ij : i = 1, ..., m, j = 1, ..., n} is a basis for B(V, W; F). In particular, conclude that dim B(V, W; F) = (dim V)(dim W).

Remark: One could now define the tensor product of V′ and W′ as V′ ⊗ W′ = B(V, W; F), or the tensor product of V and W, using (V′)′ = V, as V ⊗ W = B(V′, W′; F).

Solution 9.
(i) First, if F is bilinear and λ ∈ F, then

(λF)(cv, w) = λF(cv, w) = λ(cF(v, w)) = c(λF(v, w)) = c(λF)(v, w),
(λF)(v₁ + v₂, w) = λF(v₁ + v₂, w) = λ(F(v₁, w) + F(v₂, w)) = λF(v₁, w) + λF(v₂, w) = (λF)(v₁, w) + (λF)(v₂, w),

so λF is linear in the first slot.
Here the first and last equalities in both chains are the definition of λF, the second is the fact that F is linear in the first slot, while the third uses the vector space structure of Z, namely that λ(cz) = c(λz) for all z ∈ Z, respectively the distributive law. The calculation showing that λF is linear in the second slot is identical, with the roles of the v's and w's interchanged.

A different way of phrasing this is the following: for each fixed w ∈ W, let F_w(v) = F(v, w), and for each fixed v ∈ V let F^v(w) = F(v, w). Then the statement that F is bilinear is equivalent to F_w : V → Z being linear for each w ∈ W and F^v : W → Z being linear for each v ∈ V. Since (λF)_w(v) = (λF)(v, w) = λF(v, w) = λF_w(v), we have (λF)_w = λF_w, so the fact that (λF)_w is linear follows from the fact that λF_w is, and the latter we have already shown (a scalar multiple of a linear map is linear). A similar calculation for F^v shows that λF is bilinear as claimed.

Similarly, F + G is bilinear if F and G are, because (F + G)_w = F_w + G_w, where F_w and G_w are linear, with analogous formulae for (F + G)^v. Now, if U is any set (and Z is a vector space as above), the set M(U; Z) of all maps from U to Z is a vector space with the usual addition and multiplication by scalars: (λF)(u) = λF(u), (F + G)(u) = F(u) + G(u). B(V, W; Z) is a subset of M(V × W; Z), and we have shown that it is closed under addition and multiplication by scalars. As the 0-map is bilinear, we deduce that B(V, W; Z) is a subspace of M(V × W; Z), so in particular it is a vector space itself.

(ii) (f ⊗ g)_w(v) = (f ⊗ g)(v, w) = f(v)g(w) = g(w)f(v), so (f ⊗ g)_w = g(w)f. As f is linear and g(w) ∈ F, we deduce that (f ⊗ g)_w is linear. The proof that (f ⊗ g)^v is linear is similar, and we deduce that f ⊗ g is bilinear, so f ⊗ g ∈ B(V, W; F). Now consider the map j : V′ × W′ → B(V, W; F), j(f, g) = f ⊗ g. Then

j(λf, g)(v, w) = ((λf) ⊗ g)(v, w) = (λf)(v)g(w) = λf(v)g(w) = λ(f ⊗ g)(v, w) = λj(f, g)(v, w),

with a similar calculation for homogeneity in the second slot. Additivity is also similar:

j(f₁ + f₂, g)(v, w) = ((f₁ + f₂) ⊗ g)(v, w) = (f₁ + f₂)(v)g(w) = (f₁(v) + f₂(v))g(w) = f₁(v)g(w) + f₂(v)g(w) = (f₁ ⊗ g)(v, w) + (f₂ ⊗ g)(v, w) = (j(f₁, g) + j(f₂, g))(v, w),

with a similar calculation in the second slot. Thus, j is indeed bilinear.

(iii) First, Σ_{i,j} a_ij e_ij = 0 means that Σ_{i,j} a_ij e_ij(v, w) = 0 for all v ∈ V, w ∈ W. Letting v = v_r, w = w_s, we deduce that a_rs = 0 for all r, s, so {e_ij : i = 1, ..., m, j = 1, ..., n} is linearly independent. To see that it spans, suppose that F is a bilinear map. For v ∈ V, w ∈ W, write v = Σ_i b_i v_i, w = Σ_j c_j w_j. Then

F(v, w) = F(Σ_i b_i v_i, Σ_j c_j w_j) = Σ_{i,j} b_i c_j F(v_i, w_j) = Σ_{i,j} F(v_i, w_j) e_ij(Σ_r b_r v_r, Σ_s c_s w_s) = Σ_{i,j} F(v_i, w_j) e_ij(v, w).

Thus F = Σ_{i,j} F(v_i, w_j) e_ij, proving that {e_ij : i = 1, ..., m, j = 1, ..., n} spans.
Combined with the linear independence shown above, we deduce that it is a basis, so dim B(V, W; F) = mn = (dim V)(dim W), as claimed.
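The basis {e_ij} of Problem 9(iii) can be made concrete for V = F^m, W = F^n with the standard bases: in coordinates, e_ij(v, w) = v_i w_j, a bilinear form is determined by the matrix of values F(v_i, w_j), and the expansion F = Σ F(v_i, w_j) e_ij can be verified numerically. The following sketch (with an arbitrarily chosen bilinear form, not from the exam; `F_form` and `M` are made-up names for this illustration) checks both the duality property of the e_ij and the expansion for m = 2, n = 3:

```python
# V = Q^2, W = Q^3 with standard bases (v_i), (w_j).  A bilinear form is
# determined by the m*n values M[i][j] = F(v_i, w_j), and
# F = sum_{i,j} M[i][j] e_{ij}, where e_{ij}(v, w) = v[i] * w[j].

m, n = 2, 3
M = [[1, 2, 3], [4, 5, 6]]            # arbitrary sample values F(v_i, w_j)

def F_form(v, w):                      # the bilinear form itself: v^T M w
    return sum(v[i] * M[i][j] * w[j] for i in range(m) for j in range(n))

def e(i, j):                           # the basis bilinear maps e_{ij}
    return lambda v, w: v[i] * w[j]

# e_{ij}(v_r, w_s) is 1 iff (r, s) == (i, j), else 0:
basis_V = [[1, 0], [0, 1]]
basis_W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for i in range(m):
    for j in range(n):
        for r in range(m):
            for s in range(n):
                expected = 1 if (r, s) == (i, j) else 0
                assert e(i, j)(basis_V[r], basis_W[s]) == expected

# The expansion sum M[i][j] e_{ij} agrees with F on arbitrary vectors:
v, w = [1, -2], [3, 0, 5]
assert F_form(v, w) == sum(M[i][j] * e(i, j)(v, w)
                           for i in range(m) for j in range(n))
print("dim B(V, W; F) =", m * n)       # (dim V)(dim W) = 6
```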