CHAPTER 1

MULTILINEAR ALGEBRA

1.1 Background

We will list below some definitions and theorems that are part of the curriculum of a standard theory-based sophomore-level course in linear algebra. (Such a course is a prerequisite for reading these notes.)

A vector space is a set, V, the elements of which we will refer to as vectors. It is equipped with two vector space operations:

Vector space addition. Given two vectors, v_1 and v_2, one can add them to get a third vector, v_1 + v_2.

Scalar multiplication. Given a vector, v, and a real number, λ, one can multiply v by λ to get a vector, λv.

These operations satisfy a number of standard rules: associativity, commutativity, distributive laws, etc., which we assume you're familiar with. (See exercise 1 below.) In addition we'll assume you're familiar with the following definitions and theorems.

1. The zero vector. This vector has the property that for every vector, v, v + 0 = 0 + v = v, and λv = 0 if λ is the real number zero.

2. Linear independence. A collection of vectors, v_i, i = 1,...,k, is linearly independent if the map

(1.1.1)  R^k → V,  (c_1,...,c_k) ↦ c_1v_1 + ··· + c_kv_k

is one–one (injective).

3. The spanning property. A collection of vectors, v_i, i = 1,...,k, spans V if the map (1.1.1) is onto.

4. The notion of basis. The vectors, v_i, in items 2 and 3 are a basis of V if they span V and are linearly independent; in other words, if the map (1.1.1) is bijective. This means that every vector, v, can be written uniquely as a sum

(1.1.2)  v = Σ c_i v_i.
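Items 2–4 can be checked numerically: the map (1.1.1) is represented by the matrix whose columns are the v_i, and injectivity and surjectivity become rank conditions. The following sketch (not part of the original notes; the helper names are ours) uses NumPy for V = R^n:

```python
import numpy as np

def rank_of_map(vectors):
    # The map (1.1.1) R^k -> R^n has the v_i as the columns of its matrix.
    return np.linalg.matrix_rank(np.column_stack(vectors))

def is_independent(vectors):
    # (1.1.1) is one-one iff the matrix has full column rank
    return rank_of_map(vectors) == len(vectors)

def spans(vectors, n):
    # (1.1.1) is onto R^n iff the rank is n
    return rank_of_map(vectors) == n

def is_basis(vectors, n):
    # bijective = one-one and onto
    return is_independent(vectors) and spans(vectors, n)

v1, v2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(is_basis([v1, v2], 2))               # a basis of R^2
print(is_independent([v1, v2, v1 + v2]))   # three vectors in R^2: dependent
```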

5. The dimension of a vector space. If V possesses a basis, v_i, i = 1,...,k, V is said to be finite dimensional, and k is, by definition, the dimension of V. (It is a theorem that this definition is legitimate: every basis has to have the same number of vectors.) In this chapter all the vector spaces we'll encounter will be finite dimensional.

6. A subset, U, of V is a subspace if it is a vector space in its own right, i.e., for v, v_1 and v_2 in U and λ in R, λv and v_1 + v_2 are in U.

7. Let V and W be vector spaces. A map, A : V → W, is linear if, for v, v_1 and v_2 in V and λ ∈ R,

(1.1.3)  A(λv) = λAv

and

(1.1.4)  A(v_1 + v_2) = Av_1 + Av_2.

8. The kernel of A. This is the set of vectors, v, in V which get mapped by A into the zero vector in W. By (1.1.3) and (1.1.4) this set is a subspace of V. We'll denote it by Ker A.

9. The image of A. By (1.1.3) and (1.1.4) the image of A, which we'll denote by Im A, is a subspace of W. The following is an important rule for keeping track of the dimensions of Ker A and Im A:

(1.1.5)  dim V = dim Ker A + dim Im A.

Example 1. The map (1.1.1) is a linear map. The v_i's span V if its image is V, and the v_i's are linearly independent if its kernel is just the zero vector in R^k.

10. Linear mappings and matrices. Let v_1,...,v_n be a basis of V and w_1,...,w_m a basis of W. Then by (1.1.2) Av_j can be written uniquely as a sum

(1.1.6)  Av_j = Σ_{i=1}^m c_{i,j} w_i,  c_{i,j} ∈ R.

The m × n matrix of real numbers, [c_{i,j}], is the matrix associated with A. Conversely, given such an m × n matrix, there is a unique linear map, A, with the property (1.1.6).
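The dimension formula (1.1.5) can be tested on a concrete matrix: the rank gives dim Im A, and an explicit basis of Ker A can be read off from the singular value decomposition. A small sketch (ours, not part of the notes):

```python
import numpy as np

# A hypothetical linear map A : R^4 -> R^3 (the third row is the sum of
# the first two, so the rank is 2).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))        # dim Im A
null_basis = Vt[rank:]               # rows of Vt spanning Ker A
assert np.allclose(A @ null_basis.T, 0.0)      # they really lie in the kernel
assert A.shape[1] == rank + len(null_basis)    # (1.1.5): dim V = dim Im + dim Ker
print(rank, len(null_basis))
```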

An inner product on a vector space is a map

B : V × V → R

having the three properties below.

(a) For vectors, v, v_1, v_2 and w, and λ ∈ R,

B(v_1 + v_2, w) = B(v_1, w) + B(v_2, w)

and

B(λv, w) = λB(v, w).

(b) For vectors, v and w, B(v, w) = B(w, v).

(c) For every vector, v, B(v, v) ≥ 0. Moreover, if v ≠ 0, B(v, v) is positive.

Notice that by property (b), property (a) is equivalent to

B(w, λv) = λB(w, v)

and

B(w, v_1 + v_2) = B(w, v_1) + B(w, v_2).

The items on the list above are just a few of the topics in linear algebra that we're assuming our readers are familiar with. We've highlighted them because they're easy to state. However, understanding them requires a heavy dollop of that indefinable quality "mathematical sophistication", a quality which will be in heavy demand in the next few sections of this chapter.

We will also assume that our readers are familiar with a number of more low-brow linear algebra notions: matrix multiplication, row and column operations on matrices, transposes of matrices, determinants of n × n matrices, inverses of matrices, Cramer's rule, recipes for solving systems of linear equations, etc. (See §1.1 and §1.2 of Munkres' book for a quick review of this material.)
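For the standard inner product on R^n (exercise 3 below), properties (a)–(c) are easy to spot-check numerically. A quick sketch with randomly chosen vectors (our own illustration, not from the notes):

```python
import numpy as np

def B(v, w):
    # the standard inner product on R^n
    return float(np.dot(v, w))

rng = np.random.default_rng(0)
v1, v2, w = rng.standard_normal((3, 4))
lam = 2.5

assert np.isclose(B(v1 + v2, w), B(v1, w) + B(v2, w))   # (a) additivity
assert np.isclose(B(lam * v1, w), lam * B(v1, w))       # (a) homogeneity
assert np.isclose(B(v1, w), B(w, v1))                   # (b) symmetry
assert B(v1, v1) > 0                                    # (c) positivity for v != 0
```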

Exercises.

1. Our basic example of a vector space in this course is R^n, equipped with the vector addition operation

(a_1,...,a_n) + (b_1,...,b_n) = (a_1 + b_1,...,a_n + b_n)

and the scalar multiplication operation

λ(a_1,...,a_n) = (λa_1,...,λa_n).

Check that these operations satisfy the axioms below.

(a) Commutativity: v + w = w + v.

(b) Associativity: u + (v + w) = (u + v) + w.

(c) For the zero vector, 0 = (0,...,0), v + 0 = 0 + v = v.

(d) v + (−1)v = 0.

(e) 1v = v.

(f) Associative law for scalar multiplication: (ab)v = a(bv).

(g) Distributive law for scalar addition: (a + b)v = av + bv.

(h) Distributive law for vector addition: a(v + w) = av + aw.

2. Check that the standard basis vectors of R^n: e_1 = (1,0,...,0), e_2 = (0,1,0,...,0), etc., are a basis.

3. Check that the standard inner product on R^n,

B((a_1,...,a_n), (b_1,...,b_n)) = Σ_{i=1}^n a_i b_i,

is an inner product.

1.2 Quotient spaces and dual spaces

In this section we will discuss a couple of items which are frequently, but not always, covered in linear algebra courses, but which we'll need for our treatment of multilinear algebra in §1.3.

The quotient spaces of a vector space

Let V be a vector space and W a vector subspace of V. A W-coset is a set of the form

v + W = {v + w, w ∈ W}.

It is easy to check that if v_1 − v_2 ∈ W, the cosets, v_1 + W and v_2 + W, coincide, while if v_1 − v_2 ∉ W, they are disjoint. Thus the W-cosets decompose V into a disjoint collection of subsets of V. We will denote this collection of sets by V/W.

One defines a vector addition operation on V/W by defining the sum of two cosets, v_1 + W and v_2 + W, to be the coset

(1.2.1)  (v_1 + v_2) + W,

and one defines a scalar multiplication operation by defining the scalar multiple of v + W by λ to be the coset

(1.2.2)  λv + W.

It is easy to see that these operations are well defined. For instance, suppose v_1 + W = v'_1 + W and v_2 + W = v'_2 + W. Then v_1 − v'_1 and v_2 − v'_2 are in W; so (v_1 + v_2) − (v'_1 + v'_2) is in W, and hence v_1 + v_2 + W = v'_1 + v'_2 + W.

These operations make V/W into a vector space, and one calls this space the quotient space of V by W.

We define a mapping

(1.2.3)  π : V → V/W

by setting π(v) = v + W. It's clear from (1.2.1) and (1.2.2) that π is a linear mapping. Moreover, for every coset, v + W, π(v) = v + W; so the mapping, π, is onto. Also note that the zero vector in the vector space, V/W, is the zero coset, 0 + W = W. Hence v is in the kernel of π if v + W = W, i.e., v ∈ W. In other words the kernel of π is W.

In the definition above, V and W don't have to be finite dimensional, but if they are, then

(1.2.4)  dim V/W = dim V − dim W

by (1.1.5).

The following, which is easy to prove, we'll leave as an exercise.
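The well-definedness argument above can be illustrated with the example of exercise 5 of this section: V = R^2 and W = the x_1-axis. Each W-coset is a horizontal line, labeled by the x_2-coordinate of any representative. A small sketch (ours):

```python
import numpy as np

def coset_label(v):
    # The W-coset v + W is the line x2 = v[1]; label it by that number.
    return v[1]

v1  = np.array([3.0, 5.0])
v1p = np.array([-1.0, 5.0])          # v1 - v1p lies in W: same coset
v2  = np.array([0.0, 2.0])

assert coset_label(v1) == coset_label(v1p)
# (1.2.1): the sum of cosets does not depend on the chosen representatives
assert coset_label(v1 + v2) == coset_label(v1p + v2)
# (1.2.2): neither does the scalar multiple
assert coset_label(4.0 * v1) == coset_label(4.0 * v1p)
```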

Proposition. Let U be a vector space and A : V → U a linear map. If W ⊆ Ker A, there exists a unique linear map, A^# : V/W → U, with the property A = A^# ∘ π.

The dual space of a vector space

We'll denote by V* the set of all linear functions, ℓ : V → R. If ℓ_1 and ℓ_2 are linear functions, their sum, ℓ_1 + ℓ_2, is linear, and if ℓ is a linear function and λ is a real number, the function, λℓ, is linear. Hence V* is a vector space. One calls this space the dual space of V.

Suppose V is n-dimensional, and let e_1,...,e_n be a basis of V. Then every vector, v ∈ V, can be written uniquely as a sum

v = c_1e_1 + ··· + c_ne_n,  c_i ∈ R.

Let

(1.2.5)  e*_i(v) = c_i.

If v = c_1e_1 + ··· + c_ne_n and v' = c'_1e_1 + ··· + c'_ne_n, then v + v' = (c_1 + c'_1)e_1 + ··· + (c_n + c'_n)e_n, so

e*_i(v + v') = c_i + c'_i = e*_i(v) + e*_i(v').

This shows that e*_i(v) is a linear function of v, and hence e*_i ∈ V*.

Claim: e*_i, i = 1,...,n, is a basis of V*.

Proof. First of all note that by (1.2.5),

(1.2.6)  e*_i(e_j) = 1 if i = j, and 0 if i ≠ j.

If ℓ ∈ V*, let λ_i = ℓ(e_i) and let ℓ' = Σ λ_i e*_i. Then by (1.2.6),

(1.2.7)  ℓ'(e_j) = Σ λ_i e*_i(e_j) = λ_j = ℓ(e_j),

i.e., ℓ and ℓ' take identical values on the basis vectors, e_j. Hence ℓ = ℓ'.

Suppose next that Σ λ_i e*_i = 0. Then by (1.2.6), λ_j = (Σ λ_i e*_i)(e_j) = 0 for all j = 1,...,n. Hence the e*_j's are linearly independent. □
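In coordinates, if the basis vectors e_1,...,e_n are the columns of an invertible matrix E, then the dual basis vectors e*_i are the rows of E^{−1}: the identity E^{−1}E = I is exactly (1.2.6). A numerical sketch of the claim and its proof (our illustration, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
E = rng.standard_normal((n, n))      # columns e_1,...,e_n: a (generic) basis
assert abs(np.linalg.det(E)) > 1e-8  # generic random matrices are invertible
Estar = np.linalg.inv(E)             # rows: the dual basis e*_1,...,e*_n

assert np.allclose(Estar @ E, np.eye(n))    # (1.2.6): e*_i(e_j) = delta_ij

# Any functional l (a row vector) satisfies l = sum_i l(e_i) e*_i,
# which is the spanning half of the claim.
l = rng.standard_normal(n)
lam = l @ E                          # lam_i = l(e_i)
assert np.allclose(lam @ Estar, l)
```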

Let V and W be vector spaces and A : V → W a linear map. Given ℓ ∈ W*, the composition, ℓ ∘ A, of A with the linear map, ℓ : W → R, is linear, and hence is an element of V*. We will denote this element by A*ℓ, and we will denote by

A* : W* → V*

the map, ℓ ↦ A*ℓ. It's clear from the definition that

A*(ℓ_1 + ℓ_2) = A*ℓ_1 + A*ℓ_2

and that

A*(λℓ) = λA*ℓ,

i.e., that A* is linear.

Definition. A* is the transpose of the mapping A.

We will conclude this section by giving a matrix description of A*. Let e_1,...,e_n be a basis of V and f_1,...,f_m a basis of W; let e*_1,...,e*_n and f*_1,...,f*_m be the dual bases of V* and W*. Suppose A is defined in terms of e_1,...,e_n and f_1,...,f_m by the m × n matrix, [a_{i,j}], i.e., suppose

Ae_j = Σ a_{i,j} f_i.

Claim. A* is defined, in terms of f*_1,...,f*_m and e*_1,...,e*_n, by the transpose matrix, [a_{j,i}].

Proof. Let

A*f*_i = Σ c_{j,i} e*_j.

Then

A*f*_i(e_j) = Σ_k c_{k,i} e*_k(e_j) = c_{j,i}

by (1.2.6). On the other hand,

A*f*_i(e_j) = f*_i(Ae_j) = f*_i(Σ a_{k,j} f_k) = Σ_k a_{k,j} f*_i(f_k) = a_{i,j},

so a_{i,j} = c_{j,i}. □
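The claim can be checked numerically: representing functionals as row vectors, the pull-back A*ℓ = ℓ ∘ A is the row–matrix product ℓA, and assembling the images of the dual basis into a matrix recovers the transpose. A sketch (ours, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))      # a linear map A : R^4 -> R^3

def transpose_map(l):
    # a functional l in W* is a row vector; A*l = l o A is the row vector l A
    return l @ A

# Columns of the matrix of A* are the images of the dual basis f*_1, f*_2, f*_3.
matrix_of_Astar = np.column_stack([transpose_map(f) for f in np.eye(3)])
assert np.allclose(matrix_of_Astar, A.T)     # the claim: A* has matrix [a_{j,i}]
```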

Exercises.

1. Let V be an n-dimensional vector space and W a k-dimensional subspace. Show that there exists a basis, e_1,...,e_n, of V with the property that e_1,...,e_k is a basis of W. Hint: Induction on n − k. To start the induction suppose that n − k = 1. Let e_1,...,e_{n−1} be a basis of W and e_n any vector in V not in W.

2. In exercise 1 show that the vectors f_i = π(e_{k+i}), i = 1,...,n − k, are a basis of V/W.

3. In exercise 1 let U be the linear span of the vectors, e_{k+i}, i = 1,...,n − k. Show that the map

U → V/W,  u ↦ π(u),

is a vector space isomorphism, i.e., show that it maps U bijectively onto V/W.

4. Let U, V and W be vector spaces and let A : V → W and B : U → V be linear mappings. Show that (AB)* = B*A*.

5. Let V = R^2 and let W be the x_1-axis, i.e., the one-dimensional subspace

{(x_1, 0) ; x_1 ∈ R}

of R^2.

(a) Show that the W-cosets are the lines, x_2 = a, parallel to the x_1-axis.

(b) Show that the sum of the cosets, "x_2 = a" and "x_2 = b", is the coset "x_2 = a + b".

(c) Show that the scalar multiple of the coset, "x_2 = c", by the number, λ, is the coset "x_2 = λc".

6. (a) Let (V*)* be the dual of the vector space, V*. For every v ∈ V, let µ_v : V* → R be the function, µ_v(ℓ) = ℓ(v). Show that µ_v is a linear function on V*, i.e., an element of (V*)*, and show that the map

(1.2.8)  µ : V → (V*)*,  v ↦ µ_v,

is a linear map of V into (V*)*.

(b) Show that the map (1.2.8) is bijective. (Hint: dim (V*)* = dim V* = dim V, so by (1.1.5) it suffices to show that (1.2.8) is injective.) Conclude that there is a natural identification of V with (V*)*, i.e., that V and (V*)* are two descriptions of the same object.

7. Let W be a vector subspace of V and let

W^⊥ = {ℓ ∈ V*, ℓ(w) = 0 if w ∈ W}.

Show that W^⊥ is a subspace of V* and that its dimension is equal to dim V − dim W. (Hint: By exercise 1 we can choose a basis, e_1,...,e_n, of V such that e_1,...,e_k is a basis of W. Show that e*_{k+1},...,e*_n is a basis of W^⊥.) W^⊥ is called the annihilator of W in V*.

8. Let V and V' be vector spaces and A : V → V' a linear map. Show that if W is the kernel of A there exists a linear map, B : V/W → V', with the property: A = B ∘ π, π being the map (1.2.3). In addition show that this linear map is injective.

9. Let W be a subspace of a finite-dimensional vector space, V. From the inclusion map, ι : W^⊥ → V*, one gets a transpose map,

ι* : (V*)* → (W^⊥)*,

and, by composing this with (1.2.8), a map

ι* ∘ µ : V → (W^⊥)*.

Show that this map is onto and that its kernel is W. Conclude from exercise 8 that there is a natural bijective linear map

ν : V/W → (W^⊥)*

with the property ν ∘ π = ι* ∘ µ. In other words, V/W and (W^⊥)* are two descriptions of the same object. (This shows that the quotient space operation and the dual space operation are closely related.)

10. Let V_1 and V_2 be vector spaces and A : V_1 → V_2 a linear map. Verify that for the transpose map, A* : V*_2 → V*_1,

Ker A* = (Im A)^⊥

and

Im A* = (Ker A)^⊥.

11. (a) Let B : V × V → R be an inner product on V. For v ∈ V, let ℓ_v : V → R be the function: ℓ_v(w) = B(v, w). Show that ℓ_v is linear and show that the map

(1.2.9)  L : V → V*,  v ↦ ℓ_v,

is a linear mapping.

(b) Prove that this mapping is bijective. (Hint: Since dim V = dim V*, it suffices by (1.1.5) to show that its kernel is zero. Now note that if v ≠ 0, ℓ_v(v) = B(v, v) is a positive number.) Conclude that if V has an inner product one gets from it a natural identification of V with V*.

12. Let V be an n-dimensional vector space and B : V × V → R an inner product on V. A basis, e_1,...,e_n, of V is orthonormal if

(1.2.10)  B(e_i, e_j) = 1 if i = j, and 0 if i ≠ j.

(a) Show that an orthonormal basis exists. Hint: By induction, let e_i, i = 1,...,k, be vectors with the property (1.2.10) and let v be a vector which is not a linear combination of these vectors. Show that the vector

w = v − Σ B(e_i, v) e_i

is non-zero and is orthogonal to the e_i's. Now let e_{k+1} = λw, where λ = B(w, w)^{−1/2}.

(b) Let e_1,...,e_n and e'_1,...,e'_n be two orthonormal bases of V and let

(1.2.11)  e'_j = Σ a_{i,j} e_i.

Show that

(1.2.12)  Σ_i a_{i,j} a_{i,k} = 1 if j = k, and 0 if j ≠ k.

(c) Let A be the matrix [a_{i,j}]. Show that (1.2.12) can be written more compactly as the matrix identity

(1.2.13)  AA^t = I,

where I is the identity matrix.

(d) Let e_1,...,e_n be an orthonormal basis of V and e*_1,...,e*_n the dual basis of V*. Show that the mapping (1.2.9) is the mapping, Le_i = e*_i, i = 1,...,n.

1.3 Tensors

Let V be an n-dimensional vector space and let V^k be the set of all k-tuples, (v_1,...,v_k), v_i ∈ V. A function

T : V^k → R

is said to be linear in its i-th variable if, when we fix vectors, v_1,...,v_{i−1}, v_{i+1},...,v_k, the map

(1.3.1)  v ∈ V ↦ T(v_1,...,v_{i−1}, v, v_{i+1},...,v_k)

is linear. If T is linear in its i-th variable for i = 1,...,k, it is said to be k-linear, or alternatively is said to be a k-tensor. We denote the set of all k-tensors by L^k(V). We will agree that 0-tensors are just the real numbers, that is, L^0(V) = R.

Let T_1 and T_2 be functions on V^k. It is clear from (1.3.1) that if T_1 and T_2 are k-linear, so is T_1 + T_2. Similarly, if T is k-linear and λ is a real number, λT is k-linear. Hence L^k(V) is a vector space. Note that for k = 1, "k-linear" just means "linear", so L^1(V) = V*.

Let I = (i_1,...,i_k) be a sequence of integers with 1 ≤ i_r ≤ n, r = 1,...,k. We will call such a sequence a multi-index of length k. For instance the multi-indices of length 2 are the pairs of integers, (i, j), 1 ≤ i, j ≤ n, and there are exactly n^2 of them.

Exercise. Show that there are exactly n^k multi-indices of length k.

Now fix a basis, e_1,...,e_n, of V, and for T ∈ L^k(V) let

(1.3.2)  T_I = T(e_{i_1},...,e_{i_k})

for every multi-index I of length k.

Proposition. The T_I's determine T, i.e., if T and T' are k-tensors and T_I = T'_I for all I, then T = T'.

Proof. By induction on k. For k = 1 we proved this result in §1.1. Let's prove that if this assertion is true for k − 1, it's true for k. For each e_i, let T_i be the (k−1)-tensor

(v_1,...,v_{k−1}) ↦ T(v_1,...,v_{k−1}, e_i).

Then for v = c_1e_1 + ··· + c_ne_n,

T(v_1,...,v_{k−1}, v) = Σ c_i T_i(v_1,...,v_{k−1}),

so the T_i's determine T. Now apply induction. □

The tensor product operation

If T_1 is a k-tensor and T_2 is an ℓ-tensor, one can define a (k+ℓ)-tensor, T_1 ⊗ T_2, by setting

(T_1 ⊗ T_2)(v_1,...,v_{k+ℓ}) = T_1(v_1,...,v_k) T_2(v_{k+1},...,v_{k+ℓ}).

This tensor is called the tensor product of T_1 and T_2. We note that if T_1 or T_2 is a 0-tensor, i.e., a scalar, then tensor product with it is just scalar multiplication by it, that is,

a ⊗ T = T ⊗ a = aT  (a ∈ R, T ∈ L^k(V)).

Similarly, given a k-tensor, T_1, an ℓ-tensor, T_2, and an m-tensor, T_3, one can define a (k + ℓ + m)-tensor, T_1 ⊗ T_2 ⊗ T_3, by setting

(1.3.3)  T_1 ⊗ T_2 ⊗ T_3 (v_1,...,v_{k+ℓ+m}) = T_1(v_1,...,v_k) T_2(v_{k+1},...,v_{k+ℓ}) T_3(v_{k+ℓ+1},...,v_{k+ℓ+m}).

Alternatively, one can define (1.3.3) by defining it to be the tensor product of T_1 ⊗ T_2 and T_3, or the tensor product of T_1 and T_2 ⊗ T_3. It's easy to see that both these tensor products are identical with (1.3.3):

(1.3.4)  (T_1 ⊗ T_2) ⊗ T_3 = T_1 ⊗ (T_2 ⊗ T_3) = T_1 ⊗ T_2 ⊗ T_3.

We leave for you to check that if λ is a real number,

(1.3.5)  λ(T_1 ⊗ T_2) = (λT_1) ⊗ T_2 = T_1 ⊗ (λT_2),

and that the left and right distributive laws are valid: denoting by k_i the degree of T_i, for k_1 = k_2,

(1.3.6)  (T_1 + T_2) ⊗ T_3 = T_1 ⊗ T_3 + T_2 ⊗ T_3,

and for k_2 = k_3,

(1.3.7)  T_1 ⊗ (T_2 + T_3) = T_1 ⊗ T_2 + T_1 ⊗ T_3.

A particularly interesting tensor product is the following. For i = 1,...,k, let ℓ_i ∈ V* and let

(1.3.8)  T = ℓ_1 ⊗ ··· ⊗ ℓ_k.

Thus, by definition,

(1.3.9)  T(v_1,...,v_k) = ℓ_1(v_1) ··· ℓ_k(v_k).

A tensor of the form (1.3.8) is called a decomposable k-tensor. These tensors, as we will see, play an important role in what follows. In particular, let e_1,...,e_n be a basis of V and e*_1,...,e*_n the dual basis of V*. For every multi-index, I, of length k, let

e*_I = e*_{i_1} ⊗ ··· ⊗ e*_{i_k}.

Then if J = (j_1,...,j_k) is another multi-index of length k,

(1.3.10)  e*_I(e_{j_1},...,e_{j_k}) = 1 if I = J, and 0 if I ≠ J,

by (1.2.6), (1.3.8) and (1.3.9). From (1.3.10) it's easy to conclude:

Theorem. The e*_I's are a basis of L^k(V).

Proof. Given T ∈ L^k(V), let T' = Σ T_I e*_I, where the T_I's are defined by (1.3.2). Then

(1.3.11)  T'(e_{j_1},...,e_{j_k}) = Σ T_I e*_I(e_{j_1},...,e_{j_k}) = T_J

by (1.3.10); however, by the proposition above, the T_J's determine T, so T' = T. This proves that the e*_I's are a spanning set of vectors for L^k(V). To prove they're a basis, suppose

Σ C_I e*_I = 0

for constants, C_I ∈ R. Then by (1.3.11) with T' = 0, C_J = 0, so the e*_I's are linearly independent. □

As we noted above there are exactly n^k multi-indices of length k, and hence n^k basis vectors in the set, {e*_I}, so we've proved:

Corollary. dim L^k(V) = n^k.
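Concretely, once a basis is fixed, a k-tensor on R^n is determined by its n^k components T_I, and its values are recovered by multilinear expansion, mirroring the theorem above. A small sketch (our own, not from the notes; indices are 0-based):

```python
import itertools
import numpy as np

n, k = 3, 2
multi_indices = list(itertools.product(range(n), repeat=k))
assert len(multi_indices) == n ** k          # n^k multi-indices of length k

# Components T_I of a k-tensor on R^n, stored as an n x ... x n array.
T = np.arange(n ** k, dtype=float).reshape((n,) * k)

def evaluate(T, vectors):
    """Multilinear extension: T(v_1,...,v_k) = sum_I v_1[i_1]...v_k[i_k] T_I."""
    total = 0.0
    for I in itertools.product(range(T.shape[0]), repeat=T.ndim):
        coeff = 1.0
        for slot, i in enumerate(I):
            coeff *= vectors[slot][i]
        total += coeff * T[I]
    return total

v, w = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
# linearity in each slot, e.g. the first:
assert np.isclose(evaluate(T, [v + w, w]),
                  evaluate(T, [v, w]) + evaluate(T, [w, w]))
```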

The pull-back operation

Let V and W be finite dimensional vector spaces and let A : V → W be a linear mapping. If T ∈ L^k(W), we define

A*T : V^k → R

to be the function

(1.3.12)  A*T(v_1,...,v_k) = T(Av_1,...,Av_k).

It's clear from the linearity of A that this function is linear in its i-th variable for all i, and hence is a k-tensor. We will call A*T the pull-back of T by the map, A.

Proposition. The map

(1.3.13)  A* : L^k(W) → L^k(V),  T ↦ A*T,

is a linear mapping.

We leave this as an exercise. We also leave as an exercise the identity

(1.3.14)  A*(T_1 ⊗ T_2) = A*T_1 ⊗ A*T_2

for T_1 ∈ L^k(W) and T_2 ∈ L^m(W). Also, if U is a vector space and B : U → V a linear mapping, we leave for you to check that

(1.3.15)  (AB)*T = B*(A*T)

for all T ∈ L^k(W).

Exercises.

1. Verify that there are exactly n^k multi-indices of length k.

2. Prove that the map (1.3.13) is a linear mapping.

3. Verify (1.3.14).

4. Verify (1.3.15).
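For 2-tensors the pull-back has a familiar matrix form: if T is stored as an m × m component array with T(v, w) = vᵗTw, then A*T has component array AᵗTA. A sketch checking (1.3.12) on a pair of vectors (ours, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 2, 3
A = rng.standard_normal((m, n))      # a linear map A : R^n -> R^m
T = rng.standard_normal((m, m))      # a 2-tensor on R^m: T(v, w) = v^t T w

pullback_T = A.T @ T @ A             # component array of A*T, now n x n
v, w = rng.standard_normal((2, n))
# (1.3.12): (A*T)(v, w) = T(Av, Aw)
assert np.isclose(v @ pullback_T @ w, (A @ v) @ T @ (A @ w))
```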

5. Let A : V → W be a linear map. Show that if ℓ_i, i = 1,...,k, are elements of W*,

A*(ℓ_1 ⊗ ··· ⊗ ℓ_k) = A*ℓ_1 ⊗ ··· ⊗ A*ℓ_k.

Conclude that A* maps decomposable k-tensors to decomposable k-tensors.

6. Let V be an n-dimensional vector space and ℓ_i, i = 1, 2, elements of V*. Show that ℓ_1 ⊗ ℓ_2 = ℓ_2 ⊗ ℓ_1 if and only if ℓ_1 and ℓ_2 are linearly dependent. (Hint: Show that if ℓ_1 and ℓ_2 are linearly independent there exist vectors, v_i, i = 1, 2, in V with the property

ℓ_i(v_j) = 1 if i = j, and 0 if i ≠ j.

Now compare (ℓ_1 ⊗ ℓ_2)(v_1, v_2) and (ℓ_2 ⊗ ℓ_1)(v_1, v_2).) Conclude that if dim V ≥ 2 the tensor product operation isn't commutative, i.e., it's usually not true that ℓ_1 ⊗ ℓ_2 = ℓ_2 ⊗ ℓ_1.

7. Let T be a k-tensor and v a vector. Define T_v : V^{k−1} → R to be the map

(1.3.16)  T_v(v_1,...,v_{k−1}) = T(v, v_1,...,v_{k−1}).

Show that T_v is a (k−1)-tensor.

8. Show that if T_1 is an r-tensor and T_2 is an s-tensor, then if r > 0,

(T_1 ⊗ T_2)_v = (T_1)_v ⊗ T_2.

9. Let A : V → W be a linear map mapping v ∈ V to w ∈ W. Show that for T ∈ L^k(W),

A*(T_w) = (A*T)_v.

1.4 Alternating k-tensors

We will discuss in this section a class of k-tensors which play an important role in multivariable calculus. In this discussion we will need some standard facts about the permutation group. For those of you who are already familiar with this object (and I suspect most of you are) you can regard the paragraph below as a chance to refamiliarize yourselves with these facts.

Permutations

A permutation of order k is a bijective map, σ, of the k-element set, {1, 2,...,k}, onto itself. Given two permutations, σ_1 and σ_2, their product, σ_1σ_2, is the composition of σ_1 and σ_2, i.e., the map,

i ↦ σ_1(σ_2(i)),

and for every permutation, σ, one denotes by σ^{−1} the inverse permutation:

σ(i) = j ⇔ σ^{−1}(j) = i.

Let S_k be the set of all permutations of order k. One calls S_k the permutation group of {1,...,k} or, alternatively, the symmetric group on k letters.

Check: There are k! elements in S_k.

For every 1 ≤ i < j ≤ k, let τ = τ_{i,j} be the permutation

(1.4.1)  τ(i) = j,  τ(j) = i,  τ(ℓ) = ℓ for ℓ ≠ i, j.

τ is called a transposition, and if j = i + 1, τ is called an elementary transposition.

Theorem. Every permutation can be written as a product of a finite number of transpositions.

Proof. Induction on k: the case k = 2 is obvious. The induction step ("k − 1 implies k"): Given σ ∈ S_k, let σ(k) = i. Then τ_{ik}σ(k) = k, so τ_{ik}σ is, in effect, a permutation of {1,...,k−1}. By induction, τ_{ik}σ can be written as a product of transpositions, so

σ = τ_{ik}(τ_{ik}σ)

can be written as a product of transpositions. □

Theorem. Every transposition can be written as a product of elementary transpositions.

Proof. Let τ = τ_{i,j}, i < j. With i fixed, argue by induction on j. Note that for j > i + 1,

τ_{i,j} = τ_{j−1,j} τ_{i,j−1} τ_{j−1,j}.

Now apply induction to τ_{i,j−1}. □

Corollary. Every permutation can be written as a product of elementary transpositions.

The sign of a permutation

Let x_1,...,x_k be the coordinate functions on R^k. For σ ∈ S_k we define

(1.4.2)  (−1)^σ = ∏_{i<j} (x_{σ(i)} − x_{σ(j)}) / (x_i − x_j).

Notice that the numerator and denominator in this expression are identical up to sign. Indeed, if p = σ(i) < σ(j) = q, the term, x_p − x_q, occurs once and just once in the numerator and once and just once in the denominator; and if q = σ(i) > σ(j) = p, the term, x_p − x_q, occurs once and just once in the denominator and its negative, x_q − x_p, once and just once in the numerator. Thus

(1.4.3)  (−1)^σ = ±1.

Claim: For σ, τ ∈ S_k,

(1.4.4)  (−1)^{στ} = (−1)^σ (−1)^τ.

Proof. By definition,

(−1)^{στ} = ∏_{i<j} (x_{στ(i)} − x_{στ(j)}) / (x_i − x_j).

We write the right hand side as the product of

(1.4.5)  ∏_{i<j} (x_{τ(i)} − x_{τ(j)}) / (x_i − x_j) = (−1)^τ

and

(1.4.6)  ∏_{i<j} (x_{στ(i)} − x_{στ(j)}) / (x_{τ(i)} − x_{τ(j)}).

For i < j, let p = τ(i) and q = τ(j) when τ(i) < τ(j), and let p = τ(j) and q = τ(i) when τ(j) < τ(i). Then

(x_{στ(i)} − x_{στ(j)}) / (x_{τ(i)} − x_{τ(j)}) = (x_{σ(p)} − x_{σ(q)}) / (x_p − x_q)

(i.e., if τ(i) < τ(j), the numerator and denominator on the right equal the numerator and denominator on the left and, if τ(j) < τ(i), they are the negatives of the numerator and denominator on the left). Thus (1.4.6) becomes

∏_{p<q} (x_{σ(p)} − x_{σ(q)}) / (x_p − x_q) = (−1)^σ. □

We'll leave for you to check that if τ is a transposition, (−1)^τ = −1, and to conclude from this:

Proposition. If σ is the product of an odd number of transpositions, (−1)^σ = −1, and if σ is the product of an even number of transpositions, (−1)^σ = +1.
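Computationally, (−1)^σ is most easily evaluated by counting the factors of (1.4.2) in which numerator and denominator have opposite signs, i.e. the inversions of σ. The sketch below (ours; permutations are 0-based tuples) also spot-checks the multiplicativity (1.4.4):

```python
import itertools

def sign(sigma):
    # (-1)^sigma: each inversion (i < j with sigma(i) > sigma(j)) is a factor
    # of the product (1.4.2) whose numerator and denominator differ in sign.
    k = len(sigma)
    inversions = sum(1 for i in range(k) for j in range(i + 1, k)
                     if sigma[i] > sigma[j])
    return (-1) ** inversions

def compose(s1, s2):
    # the product sigma1*sigma2 : i -> sigma1(sigma2(i))
    return tuple(s1[s2[i]] for i in range(len(s1)))

perms = list(itertools.permutations(range(4)))
assert len(perms) == 24                       # k! permutations of order k
assert sign((1, 0, 2, 3)) == -1               # a transposition has sign -1
for s1 in perms:
    for s2 in perms[:8]:
        assert sign(compose(s1, s2)) == sign(s1) * sign(s2)   # (1.4.4)
```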

Alternation

Let V be an n-dimensional vector space and T ∈ L^k(V) a k-tensor. If σ ∈ S_k, let T^σ ∈ L^k(V) be the k-tensor

(1.4.7)  T^σ(v_1,...,v_k) = T(v_{σ^{−1}(1)},...,v_{σ^{−1}(k)}).

Proposition.

1. If T = ℓ_1 ⊗ ··· ⊗ ℓ_k, ℓ_i ∈ V*, then T^σ = ℓ_{σ(1)} ⊗ ··· ⊗ ℓ_{σ(k)}.

2. The map, T ∈ L^k(V) ↦ T^σ ∈ L^k(V), is a linear map.

3. T^{στ} = (T^σ)^τ.

Proof. To prove 1, we note that by (1.4.7),

(ℓ_1 ⊗ ··· ⊗ ℓ_k)^σ(v_1,...,v_k) = ℓ_1(v_{σ^{−1}(1)}) ··· ℓ_k(v_{σ^{−1}(k)}).

Setting σ^{−1}(i) = q, the i-th term in this product is ℓ_{σ(q)}(v_q); so the product can be rewritten as

ℓ_{σ(1)}(v_1) ··· ℓ_{σ(k)}(v_k)

or

(ℓ_{σ(1)} ⊗ ··· ⊗ ℓ_{σ(k)})(v_1,...,v_k).

The proof of 2 we'll leave as an exercise.

Proof of 3: By item 2, it suffices to check 3 for decomposable tensors. However, by 1,

(ℓ_1 ⊗ ··· ⊗ ℓ_k)^{στ} = ℓ_{στ(1)} ⊗ ··· ⊗ ℓ_{στ(k)} = (ℓ_{σ(1)} ⊗ ··· ⊗ ℓ_{σ(k)})^τ = ((ℓ_1 ⊗ ··· ⊗ ℓ_k)^σ)^τ. □

Definition. T ∈ L^k(V) is alternating if T^σ = (−1)^σ T for all σ ∈ S_k.

We will denote by A^k(V) the set of all alternating k-tensors in L^k(V). By item 2 of the proposition above, this set is a vector subspace of L^k(V).

It is not easy to write down simple examples of alternating k-tensors; however, there is a method, called the alternation operation, for constructing such tensors: Given T ∈ L^k(V), let

(1.4.8)  Alt T = Σ_{τ ∈ S_k} (−1)^τ T^τ.

We claim:

Proposition. For T ∈ L^k(V) and σ ∈ S_k,

1. (Alt T)^σ = (−1)^σ Alt T;

2. if T ∈ A^k(V), Alt T = k! T;

3. Alt(T^σ) = (Alt T)^σ;

4. the map

Alt : L^k(V) → L^k(V),  T ↦ Alt(T),

is linear.

Proof. To prove 1 we note that by item 3 of the previous proposition and (1.4.4),

(Alt T)^σ = Σ_τ (−1)^τ (T^τ)^σ = Σ_τ (−1)^τ T^{τσ} = (−1)^σ Σ_τ (−1)^{τσ} T^{τσ}.

But as τ runs over S_k, τσ runs over S_k, and hence the right hand side is (−1)^σ Alt(T).

Proof of 2: If T ∈ A^k,

Alt T = Σ (−1)^τ T^τ = Σ (−1)^τ (−1)^τ T = k! T.

Proof of 3:

Alt(T^σ) = Σ_τ (−1)^τ (T^σ)^τ = Σ_τ (−1)^τ T^{στ} = (−1)^σ Σ_τ (−1)^{στ} T^{στ} = (−1)^σ Alt T = (Alt T)^σ,

the last equality by item 1.
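With tensors stored as component arrays, T^τ is an axis permutation, and (1.4.8) becomes a signed sum over S_k. The sketch below (ours; the helper names are not from the notes) checks that Alt T is alternating and that Alt multiplies an alternating tensor by k!, as in items 1 and 2:

```python
import itertools
import math
import numpy as np

def sign(sigma):
    k = len(sigma)
    inv = sum(1 for i in range(k) for j in range(i + 1, k) if sigma[i] > sigma[j])
    return (-1) ** inv

def act(T, sigma):
    # T^sigma on component arrays is an axis permutation (up to the
    # sigma-versus-sigma-inverse convention, which does not affect the
    # checks below, since we sum over the whole group).
    return np.transpose(T, axes=sigma)

def alt(T):
    # (1.4.8): Alt T = sum over tau in S_k of (-1)^tau T^tau
    return sum(sign(tau) * act(T, tau)
               for tau in itertools.permutations(range(T.ndim)))

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3, 3))           # a 3-tensor on R^3
AT = alt(T)
# item 1: Alt T is alternating (a transposition of arguments flips the sign)
assert np.allclose(act(AT, (1, 0, 2)), -AT)
# item 2: on an alternating tensor, Alt multiplies by k!
assert np.allclose(alt(AT), math.factorial(3) * AT)
```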

Finally, item 4 is an easy corollary of item 2 of the previous proposition (the linearity of the map T ↦ T^τ). □

We will use this alternation operation to construct a basis for A^k(V). First, however, we require some notation:

Let I = (i_1,...,i_k) be a multi-index of length k.

Definition.

1. I is repeating if i_r = i_s for some r ≠ s.

2. I is strictly increasing if i_1 < i_2 < ··· < i_k.

3. For σ ∈ S_k, I^σ = (i_{σ(1)},...,i_{σ(k)}).

Remark: If I is non-repeating there is a unique σ ∈ S_k so that I^σ is strictly increasing.

Let e_1,...,e_n be a basis of V and let

e*_I = e*_{i_1} ⊗ ··· ⊗ e*_{i_k}  and  ψ_I = Alt(e*_I).

Proposition.

1. ψ_{I^σ} = (−1)^σ ψ_I.

2. If I is repeating, ψ_I = 0.

3. If I and J are strictly increasing,

ψ_I(e_{j_1},...,e_{j_k}) = 1 if I = J, and 0 if I ≠ J.

Proof. To prove 1 we note that (e*_I)^σ = e*_{I^σ}; so

Alt(e*_{I^σ}) = Alt((e*_I)^σ) = (Alt e*_I)^σ = (−1)^σ Alt(e*_I).

Proof of 2: Suppose I = (i_1,...,i_k) with i_r = i_s for some r ≠ s. Then if τ = τ_{r,s}, I^τ = I, so

ψ_I = ψ_{I^τ} = (−1)^τ ψ_I = −ψ_I,

and hence ψ_I = 0.

Proof of 3: By definition,

ψ_I(e_{j_1},...,e_{j_k}) = Σ_τ (−1)^τ (e*_I)^τ(e_{j_1},...,e_{j_k}).

But by (1.3.10),

(1.4.9)  (e*_I)^τ(e_{j_1},...,e_{j_k}) = 1 if I^τ = J, and 0 if I^τ ≠ J.

Thus if I and J are strictly increasing, I^τ is strictly increasing if and only if I^τ = I, and (1.4.9) is non-zero if and only if I = J. □

Now let T be in A^k. By Proposition 1.3.2,

T = Σ a_J e*_J,  a_J ∈ R.

Since k! T = Alt(T),

T = (1/k!) Σ a_J Alt(e*_J) = Σ b_J ψ_J.

We can discard all repeating terms in this sum since they are zero; and for every non-repeating term, J, we can write J = I^σ, where I is strictly increasing, and hence ψ_J = (−1)^σ ψ_I.

Conclusion: We can write T as a sum

(1.4.10)  T = Σ c_I ψ_I,

with I's strictly increasing.

Claim. The c_I's are unique.

Proof. For J strictly increasing,

(1.4.11)  T(e_{j_1},...,e_{j_k}) = Σ c_I ψ_I(e_{j_1},...,e_{j_k}) = c_J. □

By (1.4.10) the ψ_I's, I strictly increasing, are a spanning set of vectors for A^k(V), and by (1.4.11) they are linearly independent, so we've proved:

Proposition. The alternating tensors, ψ_I, I strictly increasing, are a basis for A^k(V).

Thus dim A^k(V) is equal to the number of strictly increasing multi-indices, I, of length k. We leave for you as an exercise to show that this number is equal to

(1.4.12)  C(n, k) = n! / ((n − k)! k!) = "n choose k"

if 1 ≤ k ≤ n. Hint: Show that every strictly increasing multi-index of length k determines a k-element subset of {1,...,n}, and vice versa.

Note also that if k > n, every multi-index I = (i_1,...,i_k) of length k has to be repeating: i_r = i_s for some r ≠ s, since the i_p's lie in the interval 1 ≤ i ≤ n. Thus by item 2 of the proposition above, ψ_I = 0 for all multi-indices of length k > n, and

(1.4.13)  A^k(V) = {0}  for k > n.

Exercises.

1. Show that there are exactly k! permutations of order k. Hint: Induction on k: Let σ ∈ S_k, and let σ(k) = i, 1 ≤ i ≤ k. Show that τ_{ik}σ leaves k fixed and hence is, in effect, a permutation of {1,...,k−1}.

2. Prove that if τ ∈ S_k is a transposition, (−1)^τ = −1, and deduce from this the proposition above on products of odd and even numbers of transpositions.
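The count (1.4.12) and the vanishing (1.4.13) are easy to confirm by brute force (a sketch of ours, not part of the notes):

```python
import itertools
from math import comb

n, k = 5, 3
strictly_increasing = [I for I in itertools.product(range(1, n + 1), repeat=k)
                       if all(I[r] < I[r + 1] for r in range(k - 1))]
assert len(strictly_increasing) == comb(n, k)        # (1.4.12): n choose k

# (1.4.13): for k > n every multi-index repeats, so none is strictly increasing
longer = [I for I in itertools.product(range(1, n + 1), repeat=n + 1)
          if all(I[r] < I[r + 1] for r in range(n))]
assert longer == []
```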

3. Prove assertion 2 in the proposition of the "Alternation" subsection (the linearity of the map T ↦ T^σ).

4. Prove that dim A^k(V) is given by (1.4.12).

5. Verify that for i < j − 1,

τ_{i,j} = τ_{j−1,j} τ_{i,j−1} τ_{j−1,j}.

6. For k = 3 show that every one of the six elements of S_3 is either a transposition or can be written as a product of two transpositions.

7. Let σ ∈ S_k be the cyclic permutation

σ(i) = i + 1,  i = 1,...,k − 1,  and  σ(k) = 1.

Show explicitly how to write σ as a product of transpositions and compute (−1)^σ. Hint: Same hint as in exercise 1.

8. In exercise 7 of §1.3 show that if T is in A^k, T_v is in A^{k−1}. Show in addition that for v, w ∈ V and T ∈ A^k,

(T_v)_w = −(T_w)_v.

9. Let A : V → W be a linear mapping. Show that if T is in A^k(W), A*T is in A^k(V).

10. In exercise 9 show that if T is in L^k(W), Alt(A*T) = A*(Alt(T)), i.e., show that the Alt operation commutes with the pull-back operation.

1.5 The space Λ^k(V*)

In §1.4 we showed that the image of the alternation operation,

Alt : L^k(V) → L^k(V),

is A^k(V). In this section we will compute the kernel of Alt.

Definition. A decomposable k-tensor, ℓ_1 ⊗ ··· ⊗ ℓ_k, ℓ_i ∈ V*, is redundant if for some index, i, ℓ_i = ℓ_{i+1}.

Let I^k be the linear span of the set of redundant k-tensors. Note that for k = 1 the notion of redundant doesn't really make sense; a single vector ℓ ∈ L^1(V) can't be redundant, so we decree I^1(V) = {0}.

Proposition. If T ∈ I^k, Alt(T) = 0.

Proof. Let T = ℓ_1 ⊗ ··· ⊗ ℓ_k with ℓ_i = ℓ_{i+1}. Then if τ = τ_{i,i+1}, T^τ = T and (−1)^τ = −1. Hence

Alt(T) = Alt(T^τ) = (Alt T)^τ = −Alt(T);

so Alt(T) = 0. □

To simplify notation let's abbreviate L^k(V), A^k(V) and I^k(V) to L^k, A^k and I^k.

Proposition. If T ∈ I^r and T' ∈ L^s, then T ⊗ T' and T' ⊗ T are in I^{r+s}.

Proof. We can assume that T and T' are decomposable, i.e., T = ℓ_1 ⊗ ··· ⊗ ℓ_r and T' = ℓ'_1 ⊗ ··· ⊗ ℓ'_s, and that T is redundant: ℓ_i = ℓ_{i+1}. Then

T ⊗ T' = ℓ_1 ⊗ ··· ⊗ ℓ_{i−1} ⊗ ℓ_i ⊗ ℓ_i ⊗ ℓ_{i+2} ⊗ ··· ⊗ ℓ_r ⊗ ℓ'_1 ⊗ ··· ⊗ ℓ'_s

is redundant and hence in I^{r+s}. The argument for T' ⊗ T is similar. □

Proposition. If T ∈ L^k and σ ∈ S_k, then

(1.5.1)  T^σ = (−1)^σ T + S,

where S is in I^k.

Proof. We can assume T is decomposable, i.e., T = ℓ_1 ⊗ ··· ⊗ ℓ_k. Let's first look at the simplest possible case: k = 2 and σ = τ_{1,2}. Then

T^σ − (−1)^σ T = ℓ_1 ⊗ ℓ_2 + ℓ_2 ⊗ ℓ_1 = (ℓ_1 + ℓ_2) ⊗ (ℓ_1 + ℓ_2) − ℓ_1 ⊗ ℓ_1 − ℓ_2 ⊗ ℓ_2,

and the terms on the right are redundant, and hence in I^2.

Next let k be arbitrary and σ = τ_{i,i+1}. If T_1 = ℓ_1 ⊗ ··· ⊗ ℓ_{i−1} and T_2 = ℓ_{i+2} ⊗ ··· ⊗ ℓ_k, then

T^σ − (−1)^σ T = T_1 ⊗ (ℓ_i ⊗ ℓ_{i+1} + ℓ_{i+1} ⊗ ℓ_i) ⊗ T_2

is in I^k by the previous proposition and the computation above.

The general case: By Theorem 1.4.2, σ can be written as a product of m elementary transpositions, and we'll prove (1.5.1) by induction on m. We've just dealt with the case m = 1. The induction step ("m − 1 implies m"): Let σ = βτ, where β is a product of m − 1 elementary transpositions and τ is an elementary transposition. Then

T^σ = (T^β)^τ = (−1)^τ T^β + ··· = (−1)^τ (−1)^β T + ··· = (−1)^σ T + ···,

where the dots are elements of I^k (the induction hypothesis being used in the passage from T^β to (−1)^β T). □

Corollary. If T ∈ L^k, then

(1.5.2)  Alt(T) = k! T + W,

where W is in I^k.

Proof. By definition, Alt(T) = Σ_σ (−1)^σ T^σ, and by the proposition above, T^σ = (−1)^σ T + W_σ, with W_σ ∈ I^k. Thus

Alt(T) = Σ_σ (−1)^σ (−1)^σ T + Σ_σ (−1)^σ W_σ = k! T + W,

where W = Σ_σ (−1)^σ W_σ. □

Corollary. I^k is the kernel of Alt.

Proof. We've already proved that if T ∈ I^k, Alt(T) = 0. To prove the converse assertion we note that if Alt(T) = 0, then by (1.5.2),

T = −(1/k!) W,

with W ∈ I^k. □

Putting these results together we conclude:

Theorem. Every element, T, of L^k can be written uniquely as a sum, T = T_1 + T_2, where T_1 ∈ A^k and T_2 ∈ I^k.

Proof. By (1.5.2), T = T_1 + T_2 with

T_1 = (1/k!) Alt(T)  and  T_2 = −(1/k!) W.

To prove that this decomposition is unique, suppose T_1 + T_2 = 0, with T_1 ∈ A^k and T_2 ∈ I^k. Then

0 = Alt(T_1 + T_2) = k! T_1,

so T_1 = 0, and hence T_2 = 0. □

Let

(1.5.3)  Λ^k(V*) = L^k(V)/I^k(V),

i.e., let Λ^k = Λ^k(V*) be the quotient of the vector space L^k by the subspace, I^k, of L^k. By (1.2.3) one has a linear map,

(1.5.4)  π : L^k → Λ^k,  T ↦ T + I^k,

which is onto and has I^k as kernel. We claim:

Theorem. The map, π, maps A^k bijectively onto Λ^k.

Proof. By the theorem above, every I^k coset, T + I^k, contains a unique element, T_1, of A^k. Hence for every element of Λ^k there is a unique element of A^k which gets mapped onto it by π. □
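The decomposition T = T_1 + T_2 is explicit: T_1 = (1/k!) Alt(T) and T_2 = T − T_1. The sketch below (ours; tensors are stored as component arrays, as in the earlier sketches) verifies on a random 3-tensor that T_1 is alternating and T_2 lies in the kernel of Alt:

```python
import itertools
import math
import numpy as np

def sign(sigma):
    k = len(sigma)
    inv = sum(1 for i in range(k) for j in range(i + 1, k) if sigma[i] > sigma[j])
    return (-1) ** inv

def alt(T):
    # Alt as a signed sum of axis permutations of the component array
    return sum(sign(tau) * np.transpose(T, axes=tau)
               for tau in itertools.permutations(range(T.ndim)))

rng = np.random.default_rng(5)
T = rng.standard_normal((4, 4, 4))           # an arbitrary 3-tensor on R^4
k = T.ndim
T1 = alt(T) / math.factorial(k)              # the component in A^k
T2 = T - T1                                  # the component in I^k

assert np.allclose(np.transpose(T1, (1, 0, 2)), -T1)   # T1 is alternating
assert np.allclose(alt(T2), 0.0)                       # T2 is killed by Alt
```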

Remark. Since Λ^k and A^k are isomorphic as vector spaces, many treatments of multilinear algebra avoid mentioning Λ^k, reasoning that A^k is a perfectly good substitute for it and that one should, if possible, not make two different definitions for what is essentially the same object. This is a justifiable point of view (and is the point of view taken by Spivak and Munkres¹). There are, however, some advantages to distinguishing between A^k and Λ^k, as we'll see in §1.6.

Exercises.

1. A k-tensor T ∈ L^k(V*) is symmetric if T^σ = T for all σ ∈ S_k. Show that the set, S^k(V*), of symmetric k-tensors is a vector subspace of L^k(V*).

2. Let e_1, ..., e_n be a basis of V. Show that every symmetric 2-tensor is of the form

Σ a_{i,j} e_i* ⊗ e_j*,

where a_{i,j} = a_{j,i} and e_1*, ..., e_n* are the dual basis vectors of V*.

3. Show that if T is a symmetric k-tensor, then for k ≥ 2, T is in I^k. Hint: Let σ be a transposition and deduce from the identity T^σ = T that T has to be in the kernel of Alt.

4. Warning: In general S^k(V*) ≠ I^k(V*). Show, however, that if k = 2 these two spaces are equal.

5. Show that if l ∈ V* and T ∈ I^{k−2}, then l ⊗ T ⊗ l is in I^k.

6. Show that if l_1 and l_2 are in V* and T is in I^{k−2}, then l_1 ⊗ T ⊗ l_2 + l_2 ⊗ T ⊗ l_1 is in I^k.

7. Given a permutation σ ∈ S_k and T ∈ I^k, show that T^σ ∈ I^k.

8. Let W be a subspace of L^k having the following two properties.

(a) For S ∈ S^2(V*) and T ∈ L^{k−2}, S ⊗ T is in W.

(b) For T in W and σ ∈ S_k, T^σ is in W.

¹ and by the author of these notes in his book with Alan Pollack, Differential Topology

Show that W has to contain I^k, and conclude that I^k is the smallest subspace of L^k having properties (a) and (b).

9. Show that there is a bijective linear map

α : Λ^k → A^k

with the property

(1.5.5) α π(T) = (1/k!) Alt(T)

for all T ∈ L^k, and show that α is the inverse of the map of A^k onto Λ^k described in the theorem above. (Hint: §1.2, exercise 8.)

10. Let V be an n-dimensional vector space. Compute the dimension of S^k(V*). Some hints:

(a) Introduce the following symmetrization operation on tensors T ∈ L^k(V*):

Sym(T) = Σ_{τ ∈ S_k} T^τ.

Prove that this operation has properties 2, 3 and 4 of the corresponding proposition for Alt in §1.4 and, as a substitute for property 1, has the property: (Sym T)^σ = Sym T.

(b) Let ϕ_I = Sym(e*_I), e*_I = e*_{i_1} ⊗ ⋯ ⊗ e*_{i_k}. Prove that the ϕ_I, with I non-decreasing, form a basis of S^k(V*).

(c) Conclude from (b) that dim S^k(V*) is equal to the number of non-decreasing multi-indices of length k:

1 ≤ i_1 ≤ i_2 ≤ ⋯ ≤ i_k ≤ n.

(d) Compute this number by noticing that

(i_1, ..., i_k) ↦ (i_1 + 0, i_2 + 1, ..., i_k + k − 1)

is a bijection between the set of these non-decreasing multi-indices and the set of increasing multi-indices 1 ≤ j_1 < ⋯ < j_k ≤ n + k − 1.
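Hints (a)–(d), and exercise 3 above, can be spot-checked numerically. The sketch below uses the same array encoding of k-tensors as before, T[i_1, ..., i_k] = T(e_{i_1}, ..., e_{i_k}); the encoding and function names are ours, for illustration only.

```python
# Symmetrization, Alt on symmetric tensors, and the count in hint (d).
import itertools
import math

import numpy as np

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def sym(T):
    """Sym(T) = sum over tau in S_k of T^tau  (hint (a))."""
    return sum(np.transpose(T, p) for p in itertools.permutations(range(T.ndim)))

def alt(T):
    """Alt(T) = sum over sigma in S_k of (-1)^sigma T^sigma."""
    return sum(sign(p) * np.transpose(T, p)
               for p in itertools.permutations(range(T.ndim)))

rng = np.random.default_rng(4)
S = sym(rng.standard_normal((3, 3, 3)))
assert np.allclose(np.swapaxes(S, 0, 1), S)   # (Sym T)^sigma = Sym T
assert np.allclose(alt(S), 0)                 # exercise 3: symmetric => in ker Alt

# Hint (d): non-decreasing multi-indices of length k in {1,...,n}
# are counted by the binomial coefficient C(n + k - 1, k).
def num_nondecreasing(n, k):
    return sum(1 for _ in itertools.combinations_with_replacement(range(1, n + 1), k))

assert all(num_nondecreasing(n, k) == math.comb(n + k - 1, k)
           for n in range(1, 7) for k in range(1, 5))
```

The last assertion is exactly the dimension count dim S^k(V*) = C(n + k − 1, k) that the exercise is after, checked for small n and k.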

1.6 The wedge product

The tensor algebra operations on the spaces L^k(V*) which we discussed in Sections 1.2 and 1.3, i.e., the tensor product operation and the pull-back operation, give rise to similar operations on the spaces Λ^k. We will discuss in this section the analogue of the tensor product operation. As in §1.4 we'll abbreviate L^k(V*) to L^k and Λ^k(V*) to Λ^k when it's clear which V is intended.

Given ω_i ∈ Λ^{k_i}, i = 1, 2, we can, by (1.5.4), find T_i ∈ L^{k_i} with ω_i = π(T_i). Then T_1 ⊗ T_2 ∈ L^{k_1+k_2}. Let

(1.6.1) ω_1 ∧ ω_2 = π(T_1 ⊗ T_2) ∈ Λ^{k_1+k_2}.

Claim. This wedge product is well defined, i.e., doesn't depend on our choices of T_1 and T_2.

Proof. Let π(T_1) = π(T_1') = ω_1. Then T_1' = T_1 + W_1 for some W_1 ∈ I^{k_1}, so

T_1' ⊗ T_2 = T_1 ⊗ T_2 + W_1 ⊗ T_2.

But W_1 ∈ I^{k_1} implies W_1 ⊗ T_2 ∈ I^{k_1+k_2}, and this implies

π(T_1' ⊗ T_2) = π(T_1 ⊗ T_2).

A similar argument shows that (1.6.1) is well defined independent of the choice of T_2.

More generally, let ω_i ∈ Λ^{k_i}, i = 1, 2, 3, and let ω_i = π(T_i), T_i ∈ L^{k_i}. Define ω_1 ∧ ω_2 ∧ ω_3 ∈ Λ^{k_1+k_2+k_3} by setting

ω_1 ∧ ω_2 ∧ ω_3 = π(T_1 ⊗ T_2 ⊗ T_3).

As above it's easy to see that this is well defined independent of the choice of T_1, T_2 and T_3. It is also easy to see that this triple wedge product is just the wedge product of ω_1 ∧ ω_2 with ω_3 or, alternatively, the wedge product of ω_1 with ω_2 ∧ ω_3, i.e.,

(1.6.2) ω_1 ∧ ω_2 ∧ ω_3 = (ω_1 ∧ ω_2) ∧ ω_3 = ω_1 ∧ (ω_2 ∧ ω_3).

We leave for you to check: For λ ∈ R,

(1.6.3) λ(ω_1 ∧ ω_2) = (λω_1) ∧ ω_2 = ω_1 ∧ (λω_2),

and verify the two distributive laws:

(1.6.4) (ω_1 + ω_2) ∧ ω_3 = ω_1 ∧ ω_3 + ω_2 ∧ ω_3

and

(1.6.5) ω_1 ∧ (ω_2 + ω_3) = ω_1 ∧ ω_2 + ω_1 ∧ ω_3.

As we noted in §1.4, I^k = {0} for k = 1, i.e., there are no non-zero redundant k-tensors in degree k = 1. Thus

(1.6.6) Λ^1(V*) = V* = L^1(V*).

A particularly interesting example of a wedge product is the following. Let l_i ∈ V* = Λ^1(V*), i = 1, ..., k. Then if T = l_1 ⊗ ⋯ ⊗ l_k,

(1.6.7) l_1 ∧ ⋯ ∧ l_k = π(T) ∈ Λ^k(V*).

We will call (1.6.7) a decomposable element of Λ^k(V*).

We will prove that these elements satisfy the following wedge product identity. For σ ∈ S_k:

(1.6.8) l_{σ(1)} ∧ ⋯ ∧ l_{σ(k)} = (−1)^σ l_1 ∧ ⋯ ∧ l_k.

Proof. For every T ∈ L^k, T^σ = (−1)^σ T + W for some W ∈ I^k, by Proposition 1.5.4. Therefore, since π(W) = 0,

(1.6.9) π(T^σ) = (−1)^σ π(T).

In particular, if T = l_1 ⊗ ⋯ ⊗ l_k, then T^σ = l_{σ(1)} ⊗ ⋯ ⊗ l_{σ(k)}, so

π(T^σ) = l_{σ(1)} ∧ ⋯ ∧ l_{σ(k)} = (−1)^σ π(T) = (−1)^σ l_1 ∧ ⋯ ∧ l_k.

In particular, for l_1 and l_2 in V*,

(1.6.10) l_1 ∧ l_2 = −l_2 ∧ l_1,

and for l_1, l_2 and l_3 in V*,

(1.6.11) l_1 ∧ l_2 ∧ l_3 = −l_2 ∧ l_1 ∧ l_3 = l_2 ∧ l_3 ∧ l_1.

More generally, it's easy to deduce from (1.6.8) the following result (which we'll leave as an exercise).

Theorem. If ω_1 ∈ Λ^r and ω_2 ∈ Λ^s, then

(1.6.12) ω_1 ∧ ω_2 = (−1)^{rs} ω_2 ∧ ω_1.

Hint: It suffices to prove this for decomposable elements, i.e., for ω_1 = l_1 ∧ ⋯ ∧ l_r and ω_2 = l_1' ∧ ⋯ ∧ l_s'. Now make rs applications of (1.6.10).

Let e_1, ..., e_n be a basis of V and let e_1*, ..., e_n* be the dual basis of V*. For every multi-index, I, of length k,

(1.6.13) e*_{i_1} ∧ ⋯ ∧ e*_{i_k} = π(e*_I) = π(e*_{i_1} ⊗ ⋯ ⊗ e*_{i_k}).

Theorem. The elements (1.6.13), with I strictly increasing, are basis vectors of Λ^k.

Proof. The elements ψ_I = Alt(e*_I), I strictly increasing, are basis vectors of A^k by Proposition 3.6; so their images, π(ψ_I), are a basis of Λ^k. But

π(ψ_I) = π(Σ_σ (−1)^σ (e*_I)^σ) = Σ_σ (−1)^σ π((e*_I)^σ) = Σ_σ (−1)^σ (−1)^σ π(e*_I) = k! π(e*_I).

Exercises:

1. Prove the assertions (1.6.3), (1.6.4) and (1.6.5).

2. Verify the multiplication law (1.6.12) for the wedge product.

3. Given ω ∈ Λ^r, let ω^k be the k-fold wedge product of ω with itself, i.e., let ω^2 = ω ∧ ω, ω^3 = ω ∧ ω ∧ ω, etc.

(a) Show that if r is odd, then for k > 1, ω^k = 0.

(b) Show that if ω is decomposable, then for k > 1, ω^k = 0.

4. If ω and µ are in Λ^{2r}, prove:

(ω + µ)^k = Σ_{l=0}^{k} (k choose l) ω^l ∧ µ^{k−l}.

Hint: As in freshman calculus, prove this binomial theorem by induction using the identity

(k choose l) = (k−1 choose l−1) + (k−1 choose l).

5. Let ω be an element of Λ^2. By definition the rank of ω is k if ω^k ≠ 0 and ω^{k+1} = 0. Show that if

ω = e_1 ∧ f_1 + ⋯ + e_k ∧ f_k,

with e_i, f_i ∈ V*, then ω is of rank k. Hint: Show that

ω^k = k! e_1 ∧ f_1 ∧ ⋯ ∧ e_k ∧ f_k.

6. Given e_i ∈ V*, i = 1, ..., k, show that e_1 ∧ ⋯ ∧ e_k ≠ 0 if and only if the e_i's are linearly independent. Hint: Induction on k.
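Exercise 6 and the identity (1.6.8) can be probed numerically. In the sketch below (an illustration in our own encoding, not the text's notation) we represent the coset l_1 ∧ ⋯ ∧ l_k by its alternating representative Alt(l_1 ⊗ ⋯ ⊗ l_k), which is legitimate by the bijection between A^k and Λ^k established in §1.5; covectors are stored as coordinate arrays.

```python
# Wedge of covectors modeled by the alternating representative
# Alt(l_1 ⊗ ... ⊗ l_k); encoding and names are ours, for illustration.
import itertools

import numpy as np

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge(*ls):
    """Alternating representative of l_1 ∧ ... ∧ l_k for covectors on R^n."""
    T = np.asarray(ls[0], dtype=float)
    for l in ls[1:]:
        T = np.tensordot(T, np.asarray(l, dtype=float), axes=0)  # tensor product
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(T.ndim)):
        out += sign(perm) * np.transpose(T, perm)
    return out

rng = np.random.default_rng(1)
l1, l2, l3 = (rng.standard_normal(4) for _ in range(3))
# (1.6.8) for a transposition:
assert np.allclose(wedge(l1, l2), -wedge(l2, l1))
# exercise 6: linearly dependent covectors wedge to zero ...
assert np.allclose(wedge(l1, l2, 2 * l1 - 3 * l2), 0)
# ... while three (generically) independent covectors do not:
assert not np.allclose(wedge(l1, l2, l3), 0)
```

The vanishing in the dependent case is forced by multilinearity plus antisymmetry, which is exactly the mechanism the induction in the exercise's hint exploits.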

1.7 The interior product

We'll describe in this section another basic product operation on the spaces Λ^k(V*). As above, we'll begin by defining this operation on the L^k(V*)'s. Given T ∈ L^k(V*) and v ∈ V, let ι_v T be the (k−1)-tensor which takes the value

(1.7.1) ι_v T(v_1, ..., v_{k−1}) = Σ_{r=1}^{k} (−1)^{r−1} T(v_1, ..., v_{r−1}, v, v_r, ..., v_{k−1})

on the (k−1)-tuple of vectors v_1, ..., v_{k−1}; i.e., in the r-th summand on the right, v gets inserted between v_{r−1} and v_r. (In particular the first summand is T(v, v_1, ..., v_{k−1}) and the last summand is (−1)^{k−1} T(v_1, ..., v_{k−1}, v).)

It's clear from the definition that if v = v_1 + v_2, then

(1.7.2) ι_v T = ι_{v_1} T + ι_{v_2} T,

and if T = T_1 + T_2, then

(1.7.3) ι_v T = ι_v T_1 + ι_v T_2,

and we will leave for you to verify by inspection the following two lemmas:

Lemma. If T is the decomposable k-tensor l_1 ⊗ ⋯ ⊗ l_k, then

(1.7.4) ι_v T = Σ_{r=1}^{k} (−1)^{r−1} l_r(v) l_1 ⊗ ⋯ ⊗ l̂_r ⊗ ⋯ ⊗ l_k,

where the cap over l_r means that it's deleted from the tensor product, and

Lemma. If T_1 ∈ L^p and T_2 ∈ L^q, then

(1.7.5) ι_v(T_1 ⊗ T_2) = ι_v T_1 ⊗ T_2 + (−1)^p T_1 ⊗ ι_v T_2.

We will next prove the important identity

(1.7.6) ι_v(ι_v T) = 0.

Proof. It suffices by linearity to prove this for decomposable tensors, and since (1.7.6) is trivially true for T ∈ L^1, we can by induction

assume (1.7.6) is true for decomposable tensors of degree k − 1. Let l_1 ⊗ ⋯ ⊗ l_k be a decomposable tensor of degree k. Setting T = l_1 ⊗ ⋯ ⊗ l_{k−1} and l = l_k, we have

ι_v(l_1 ⊗ ⋯ ⊗ l_k) = ι_v(T ⊗ l) = ι_v T ⊗ l + (−1)^{k−1} l(v) T

by (1.7.5). Hence

ι_v(ι_v(T ⊗ l)) = ι_v(ι_v T) ⊗ l + (−1)^{k−2} l(v) ι_v T + (−1)^{k−1} l(v) ι_v T.

But by induction the first summand on the right is zero, and the two remaining summands cancel each other out.

From (1.7.6) we can deduce a slightly stronger result: For v_1, v_2 ∈ V,

(1.7.7) ι_{v_1} ι_{v_2} = −ι_{v_2} ι_{v_1}.

Proof. Let v = v_1 + v_2. Then ι_v = ι_{v_1} + ι_{v_2}, so

0 = ι_v ι_v = (ι_{v_1} + ι_{v_2})(ι_{v_1} + ι_{v_2}) = ι_{v_1}ι_{v_1} + ι_{v_1}ι_{v_2} + ι_{v_2}ι_{v_1} + ι_{v_2}ι_{v_2} = ι_{v_1}ι_{v_2} + ι_{v_2}ι_{v_1},

since the first and last summands are zero by (1.7.6).

We'll now show how to define the operation ι_v on Λ^k(V*). We'll first prove

Lemma. If T ∈ L^k is redundant, then so is ι_v T.

Proof. Let T = T_1 ⊗ l ⊗ l ⊗ T_2, where l is in V*, T_1 is in L^p and T_2 is in L^q. Then by (1.7.5),

ι_v T = ι_v T_1 ⊗ l ⊗ l ⊗ T_2 + (−1)^p T_1 ⊗ ι_v(l ⊗ l) ⊗ T_2 + (−1)^{p+2} T_1 ⊗ l ⊗ l ⊗ ι_v T_2.

However, the first and the third terms on the right are redundant, and

ι_v(l ⊗ l) = l(v) l − l(v) l = 0

by (1.7.4).

Now let π be the projection (1.5.4) of L^k onto Λ^k, and for ω = π(T) ∈ Λ^k define

(1.7.8) ι_v ω = π(ι_v T).

To show that this definition is legitimate, we note that if ω = π(T_1) = π(T_2), then T_1 − T_2 ∈ I^k, so by the lemma above ι_v T_1 − ι_v T_2 ∈ I^{k−1}, and hence π(ι_v T_1) = π(ι_v T_2). Therefore, (1.7.8) doesn't depend on the choice of T.

By definition ι_v is a linear mapping of Λ^k(V*) into Λ^{k−1}(V*). We will call this the interior product operation. From the identities (1.7.2)–(1.7.8) one gets, for v, v_1, v_2 ∈ V, ω ∈ Λ^k, ω_1 ∈ Λ^p and ω_2 ∈ Λ^q:

(1.7.9) ι_{v_1+v_2} ω = ι_{v_1} ω + ι_{v_2} ω,

(1.7.10) ι_v(ω_1 ∧ ω_2) = ι_v ω_1 ∧ ω_2 + (−1)^p ω_1 ∧ ι_v ω_2,

(1.7.11) ι_v(ι_v ω) = 0,

and

(1.7.12) ι_{v_1} ι_{v_2} ω = −ι_{v_2} ι_{v_1} ω.

Moreover, if ω = l_1 ∧ ⋯ ∧ l_k is a decomposable element of Λ^k, one gets from (1.7.4)

(1.7.13) ι_v ω = Σ_{r=1}^{k} (−1)^{r−1} l_r(v) l_1 ∧ ⋯ ∧ l̂_r ∧ ⋯ ∧ l_k.

In particular, if e_1, ..., e_n is a basis of V, e_1*, ..., e_n* the dual basis of V*, and ω_I = e*_{i_1} ∧ ⋯ ∧ e*_{i_k}, 1 ≤ i_1 < ⋯ < i_k ≤ n, then ι(e_j) ω_I = 0 if j ∉ I, and if j = i_r,

(1.7.14) ι(e_j) ω_I = (−1)^{r−1} ω_{I_r},

where I_r = (i_1, ..., î_r, ..., i_k) (i.e., I_r is obtained from the multi-index I by deleting i_r).
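The contraction (1.7.1) and the identities (1.7.6)–(1.7.7) are easy to verify numerically in the array encoding of k-tensors used earlier. The sketch below is an illustration in our own notation, not the text's construction:

```python
# (1.7.1): insert v into each slot with alternating signs and contract.
import numpy as np

def interior(v, T):
    """(iota_v T)(v_1,...,v_{k-1}) = sum_r (-1)^{r-1} T(v_1,...,v,...,v_{k-1})."""
    v = np.asarray(v, dtype=float)
    out = np.zeros(T.shape[1:])
    for r in range(T.ndim):          # r = slot receiving v (0-based)
        out += (-1) ** r * np.tensordot(T, v, axes=([r], [0]))
    return out

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3, 3))
v, w = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(interior(v, interior(v, T)), 0)          # (1.7.6)
assert np.allclose(interior(v, interior(w, T)),
                   -interior(w, interior(v, T)))            # (1.7.7)
```

Note that the checks hold for an arbitrary (not necessarily alternating) 3-tensor, just as (1.7.6) and (1.7.7) are stated for all of L^k.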

Exercises:

1. Prove the first lemma of this section, formula (1.7.4).

2. Prove the second lemma, formula (1.7.5).

3. Show that if T ∈ A^k, then ι_v T = k T_v, where T_v is the tensor (1.3.16). In particular, conclude that ι_v T ∈ A^{k−1}. (See §1.4, exercise 8.)

4. Assume the dimension of V is n, and let Ω be a non-zero element of the one-dimensional vector space Λ^n. Show that the map

(1.7.15) ρ : V → Λ^{n−1}, v ↦ ι_v Ω,

is a bijective linear map. Hint: One can assume Ω = e_1* ∧ ⋯ ∧ e_n*, where e_1, ..., e_n is a basis of V and e_1*, ..., e_n* the dual basis of V*. Now use (1.7.14) to compute this map on basis elements.

5. (The cross-product.) Let V be a 3-dimensional vector space, B an inner product on V, and Ω a non-zero element of Λ^3. Define a map

V × V → V

by setting

(1.7.16) v_1 × v_2 = ρ^{−1}(L v_1 ∧ L v_2),

where ρ is the map (1.7.15) and L : V → V* the map (1.2.9). Show that this map is linear in v_1 with v_2 fixed, and linear in v_2 with v_1 fixed, and show that v_1 × v_2 = −v_2 × v_1.

6. For V = R^3, let e_1, e_2 and e_3 be the standard basis vectors and B the standard inner product. (See §1.1.) Show that if Ω = e_1* ∧ e_2* ∧ e_3*, the cross-product above is the standard cross-product:

(1.7.17) e_1 × e_2 = e_3, e_2 × e_3 = e_1, e_3 × e_1 = e_2.

Hint: If B is the standard inner product, L e_i = e_i*.

Remark. One can make this standard cross-product look even more standard by using the calculus notation: e_1 = î, e_2 = ĵ and e_3 = k̂.

1.8 The pull-back operation on Λ^k

Let V and W be vector spaces and let A be a linear map of V into W. Given a k-tensor T ∈ L^k(W*), the pull-back, A*T, is the k-tensor

(1.8.1) A*T(v_1, ..., v_k) = T(Av_1, ..., Av_k)

in L^k(V*). (See the corresponding equation in §1.3.) In this section we'll show how to define a similar pull-back operation on Λ^k.

Lemma. If T ∈ I^k(W*), then A*T ∈ I^k(V*).

Proof. It suffices to verify this when T is a redundant k-tensor, i.e., a tensor of the form

T = l_1 ⊗ ⋯ ⊗ l_k,

where l_r ∈ W* and l_i = l_{i+1} for some index i. But by (1.3.14),

A*T = A*l_1 ⊗ ⋯ ⊗ A*l_k,

and the tensor on the right is redundant since A*l_i = A*l_{i+1}.

Now let ω be an element of Λ^k(W*) and let ω = π(T), where T is in L^k(W*). We define

(1.8.2) A*ω = π(A*T).

Claim: The left-hand side of (1.8.2) is well defined.

Proof. If ω = π(T) = π(T'), then T' = T + S for some S ∈ I^k(W*), and A*T' = A*T + A*S. But A*S ∈ I^k(V*), so

π(A*T') = π(A*T).

Proposition. The map

A* : Λ^k(W*) → Λ^k(V*),

mapping ω to A*ω, is linear. Moreover,

(i) If ω_i ∈ Λ^{k_i}(W*), i = 1, 2, then

(1.8.3) A*(ω_1 ∧ ω_2) = A*ω_1 ∧ A*ω_2.

(ii) If U is a vector space and B : U → V a linear map, then for ω ∈ Λ^k(W*),

(1.8.4) B*A*ω = (AB)*ω.

We'll leave the proof of these three assertions as exercises. Hint: They follow immediately from the analogous assertions for the pull-back operation on tensors. (See (1.3.14) and (1.3.15).)

As an application of the pull-back operation, we'll show how to use it to define the notion of determinant for a linear mapping. Let V be an n-dimensional vector space. Then dim Λ^n(V*) = (n choose n) = 1; i.e., Λ^n(V*) is a one-dimensional vector space. Thus if A : V → V is a linear mapping, the induced pull-back mapping

A* : Λ^n(V*) → Λ^n(V*)

is just multiplication by a constant. We denote this constant by det(A) and call it the determinant of A. Hence, by definition,

(1.8.5) A*ω = det(A) ω

for all ω in Λ^n(V*).

From (1.8.5) it's easy to derive a number of basic facts about determinants.

Proposition. If A and B are linear mappings of V into V, then

(1.8.6) det(AB) = det(A) det(B).

Proof. By (1.8.4),

(AB)*ω = B*(A*ω) = det(B) A*ω = det(B) det(A) ω,

and (AB)*ω = det(AB) ω, so det(AB) = det(A) det(B).
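For finite-dimensional computations, the pull-back (1.8.1) and the definition (1.8.5) of the determinant can be checked directly. The sketch below (our own construction, in the array encoding of tensors used earlier) pulls back a k-tensor slot by slot through the matrix of A, then verifies that on an alternating n-tensor the pull-back is multiplication by the usual determinant, and that pull-backs compose as in (1.8.4).

```python
# Pull-back by the matrix M of A: (A*T)(v_1,...,v_k) = T(Av_1,...,Av_k).
import itertools

import numpy as np

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def pullback(M, T):
    """Contract every slot of T with M (column i of M = A e_i in coordinates)."""
    for _ in range(T.ndim):
        # contract the current first slot with M; the new slot is appended
        # at the end, so after T.ndim passes the slot order is restored.
        T = np.tensordot(T, M, axes=([0], [0]))
    return T

# The alternating 3-tensor eps spans A^3 on R^3, and (1.8.5) says A*eps = det(A) eps.
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = sign(p)

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
assert np.allclose(pullback(M, eps), np.linalg.det(M) * eps)   # (1.8.5) + (1.8.9)
# (1.8.4): B*A*omega = (AB)*omega, in matrix form:
N = rng.standard_normal((3, 3))
assert np.allclose(pullback(N, pullback(M, eps)), pullback(M @ N, eps))
```

The last assertion is the composition law whose consequence det(AB) = det(A) det(B) is proved in the proposition above.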

41 1.8 The pull-back operation on Λ k 41 Proposition If I : V V is the identity map, Iv = v for all v V, det(i) = 1. We ll leave the proof as an exercise. Hint: I is the identity map on Λ n (V ). Proposition If A : V V is not onto, det(a) = 0. Proof. Let W be the image of A. Then if A is not onto, the dimension of W is less than n, so Λ n (W ) = {0}. Now let A = I W B where I W is the inclusion map of W into V and B is the mapping, A, regarded as a mapping from V to W. Thus if ω is in Λ n (V ), then by (1.8.4) A ω = B I W ω and since I W ω is in Λn (W) it is zero. We will derive by wedge product arguments the familiar matrix formula for the determinant. Let V and W be n-dimensional vector spaces and let e 1,...,e n be a basis for V and f 1,...,f n a basis for W. From these bases we get dual bases, e 1,...,e n and f 1,...,f n, for V and W. Moreover, if A is a linear map of V into W and [a i,j ] the n n matrix describing A in terms of these bases, then the transpose map, A : W V, is described in terms of these dual bases by the n n transpose matrix, i.e., if then Ae j = a i,j f i, A f j = a j,i e i. (See 2.) Consider now A (f 1 f n). By (1.8.3) A (f1 fn) = A f1 A fn = (a 1,k1 e k 1 ) (a n,kn e k n ) the sum being over all k 1,...,k n, with 1 k r n. Thus, A (f1 fn) = a 1,k1...a n,kn e k 1 e k n.

If the multi-index, k_1, ..., k_n, is repeating, then e*_{k_1} ∧ ⋯ ∧ e*_{k_n} is zero, and if it's not repeating, then we can write

k_i = σ(i), i = 1, ..., n,

for some permutation σ, and hence we can rewrite A*(f_1* ∧ ⋯ ∧ f_n*) as the sum over σ ∈ S_n of

a_{1,σ(1)} ⋯ a_{n,σ(n)} (e_1* ∧ ⋯ ∧ e_n*)^σ.

But

(e_1* ∧ ⋯ ∧ e_n*)^σ = (−1)^σ e_1* ∧ ⋯ ∧ e_n*,

so we get finally the formula

(1.8.7) A*(f_1* ∧ ⋯ ∧ f_n*) = det[a_{i,j}] e_1* ∧ ⋯ ∧ e_n*,

where

(1.8.8) det[a_{i,j}] = Σ_{σ ∈ S_n} (−1)^σ a_{1,σ(1)} ⋯ a_{n,σ(n)}.

The sum on the right is (as most of you know) the determinant of [a_{i,j}].

Notice that if V = W and e_i = f_i, i = 1, ..., n, then ω = e_1* ∧ ⋯ ∧ e_n* = f_1* ∧ ⋯ ∧ f_n*, hence by (1.8.5) and (1.8.7),

(1.8.9) det(A) = det[a_{i,j}].

Exercises.

1. Verify the three assertions of the proposition of this section on the pull-back map A* (linearity, (1.8.3) and (1.8.4)).

2. Deduce from the proposition above ("if A is not onto, det(A) = 0") a well-known fact about determinants of n × n matrices: If two columns are equal, the determinant is zero.

3. Deduce from the multiplicative property (1.8.6) another well-known fact about determinants of n × n matrices: If one interchanges two columns, then one changes the sign of the determinant.

Hint: Let e_1, ..., e_n be a basis of V and let B : V → V be the linear mapping: B e_i = e_j, B e_j = e_i and B e_l = e_l for l ≠ i, j. What is B*(e_1* ∧ ⋯ ∧ e_n*)?

4. Deduce from the propositions above another well-known fact about determinants of n × n matrices: If [b_{i,j}] is the inverse of [a_{i,j}], its determinant is the inverse of the determinant of [a_{i,j}].

5. Extract from (1.8.8) a well-known formula for determinants of 2 × 2 matrices:

det [ a_{1,1}  a_{1,2} ]
    [ a_{2,1}  a_{2,2} ]  =  a_{1,1} a_{2,2} − a_{1,2} a_{2,1}.

6. Show that if A = [a_{i,j}] is an n × n matrix and A^t = [a_{j,i}] is its transpose, then det A = det A^t. Hint: You are required to show that the sums

Σ_{σ ∈ S_n} (−1)^σ a_{1,σ(1)} ⋯ a_{n,σ(n)}  and  Σ_{σ ∈ S_n} (−1)^σ a_{σ(1),1} ⋯ a_{σ(n),n}

are the same. Show that the second sum is identical with

Σ (−1)^τ a_{1,τ(1)} ⋯ a_{n,τ(n)},

summed over τ = σ^{−1} ∈ S_n.

7. Let A be an n × n matrix of the form

A = [ B  ∗ ]
    [ 0  C ],

where B is a k × k matrix, C an l × l matrix, and the bottom l × k block is zero. Show that

det A = det B det C.

Hint: Show that in (1.8.8) every non-zero term is of the form

(−1)^{στ} b_{1,σ(1)} ⋯ b_{k,σ(k)} c_{1,τ(1)} ⋯ c_{l,τ(l)},

where σ ∈ S_k and τ ∈ S_l.

8. Let V and W be vector spaces and let A : V → W be a linear map. Show that if Av = w, then for ω ∈ Λ^p(W*),

A* ι(w) ω = ι(v) A* ω.

(Hint: By (1.7.10) and the proposition on A* above, it suffices to prove this for ω ∈ Λ^1(W*), i.e., for ω ∈ W*.)
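Formula (1.8.8) can be implemented verbatim, which makes exercises 5–7 easy to spot-check on small matrices. A sketch (function names are ours, for illustration):

```python
# det [a_ij] = sum over sigma of (-1)^sigma a_{1,sigma(1)} ... a_{n,sigma(n)}
# -- formula (1.8.8), implemented literally.
import itertools
import math

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(a):
    n = len(a)
    return sum(sign(p) * math.prod(a[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

# exercise 5: the 2 x 2 formula
assert det([[3, 4], [5, 6]]) == 3 * 6 - 4 * 5
# exercise 6: det A = det A^t
A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
assert det(A) == det([list(row) for row in zip(*A)])
# exercise 7: block upper-triangular matrix, det A = det B * det C
T = [[1, 2, 9, 9],
     [0, 3, 9, 9],
     [0, 0, 4, 5],
     [0, 0, 6, 7]]
assert det(T) == det([[1, 2], [0, 3]]) * det([[4, 5], [6, 7]])
```

The n! terms make this implementation impractical beyond tiny n; its only purpose is to mirror (1.8.8) exactly.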


More information

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch Definitions, Theorems and Exercises Abstract Algebra Math 332 Ethan D. Bloch December 26, 2013 ii Contents 1 Binary Operations 3 1.1 Binary Operations............................... 4 1.2 Isomorphic Binary

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Throughout these notes, F denotes a field (often called the scalars in this context). 1 Definition of a vector space Definition 1.1. A F -vector space or simply a vector space

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

1 Fields and vector spaces

1 Fields and vector spaces 1 Fields and vector spaces In this section we revise some algebraic preliminaries and establish notation. 1.1 Division rings and fields A division ring, or skew field, is a structure F with two binary

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

0.2 Vector spaces. J.A.Beachy 1

0.2 Vector spaces. J.A.Beachy 1 J.A.Beachy 1 0.2 Vector spaces I m going to begin this section at a rather basic level, giving the definitions of a field and of a vector space in much that same detail as you would have met them in a

More information

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg Math 52H: Multilinear algebra, differential forms and Stokes theorem Yakov Eliashberg March 202 2 Contents I Multilinear Algebra 7 Linear and multilinear functions 9. Dual space.........................................

More information

MATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants.

MATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants. MATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants. Elementary matrices Theorem 1 Any elementary row operation σ on matrices with n rows can be simulated as left multiplication

More information

EXTERIOR POWERS KEITH CONRAD

EXTERIOR POWERS KEITH CONRAD EXTERIOR POWERS KEITH CONRAD 1. Introduction Let R be a commutative ring. Unless indicated otherwise, all modules are R-modules and all tensor products are taken over R, so we abbreviate R to. A bilinear

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Let V, W be two finite-dimensional vector spaces over R. We are going to define a new vector space V W with two properties:

Let V, W be two finite-dimensional vector spaces over R. We are going to define a new vector space V W with two properties: 5 Tensor products We have so far encountered vector fields and the derivatives of smooth functions as analytical objects on manifolds. These are examples of a general class of objects called tensors which

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Determinants. Beifang Chen

Determinants. Beifang Chen Determinants Beifang Chen 1 Motivation Determinant is a function that each square real matrix A is assigned a real number, denoted det A, satisfying certain properties If A is a 3 3 matrix, writing A [u,

More information

Math 346 Notes on Linear Algebra

Math 346 Notes on Linear Algebra Math 346 Notes on Linear Algebra Ethan Akin Mathematics Department Fall, 2014 1 Vector Spaces Anton Chapter 4, Section 4.1 You should recall the definition of a vector as an object with magnitude and direction

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Duality of finite-dimensional vector spaces

Duality of finite-dimensional vector spaces CHAPTER I Duality of finite-dimensional vector spaces 1 Dual space Let E be a finite-dimensional vector space over a field K The vector space of linear maps E K is denoted by E, so E = L(E, K) This vector

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

II. Determinant Functions

II. Determinant Functions Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function

More information

HW2 Solutions Problem 1: 2.22 Find the sign and inverse of the permutation shown in the book (and below).

HW2 Solutions Problem 1: 2.22 Find the sign and inverse of the permutation shown in the book (and below). Teddy Einstein Math 430 HW Solutions Problem 1:. Find the sign and inverse of the permutation shown in the book (and below). Proof. Its disjoint cycle decomposition is: (19)(8)(37)(46) which immediately

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

M. Matrices and Linear Algebra

M. Matrices and Linear Algebra M. Matrices and Linear Algebra. Matrix algebra. In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices.

More information

Survey on exterior algebra and differential forms

Survey on exterior algebra and differential forms Survey on exterior algebra and differential forms Daniel Grieser 16. Mai 2013 Inhaltsverzeichnis 1 Exterior algebra for a vector space 1 1.1 Alternating forms, wedge and interior product.....................

More information

REPRESENTATIONS OF S n AND GL(n, C)

REPRESENTATIONS OF S n AND GL(n, C) REPRESENTATIONS OF S n AND GL(n, C) SEAN MCAFEE 1 outline For a given finite group G, we have that the number of irreducible representations of G is equal to the number of conjugacy classes of G Although

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

Linear and Bilinear Algebra (2WF04) Jan Draisma

Linear and Bilinear Algebra (2WF04) Jan Draisma Linear and Bilinear Algebra (2WF04) Jan Draisma CHAPTER 1 Basics We will assume familiarity with the terms field, vector space, subspace, basis, dimension, and direct sums. If you are not sure what these

More information

Abstract Algebra II Groups ( )

Abstract Algebra II Groups ( ) Abstract Algebra II Groups ( ) Melchior Grützmann / melchiorgfreehostingcom/algebra October 15, 2012 Outline Group homomorphisms Free groups, free products, and presentations Free products ( ) Definition

More information

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Vector spaces. EE 387, Notes 8, Handout #12

Vector spaces. EE 387, Notes 8, Handout #12 Vector spaces EE 387, Notes 8, Handout #12 A vector space V of vectors over a field F of scalars is a set with a binary operator + on V and a scalar-vector product satisfying these axioms: 1. (V, +) is

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

V (v i + W i ) (v i + W i ) is path-connected and hence is connected.

V (v i + W i ) (v i + W i ) is path-connected and hence is connected. Math 396. Connectedness of hyperplane complements Note that the complement of a point in R is disconnected and the complement of a (translated) line in R 2 is disconnected. Quite generally, we claim that

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

NOTES (1) FOR MATH 375, FALL 2012

NOTES (1) FOR MATH 375, FALL 2012 NOTES 1) FOR MATH 375, FALL 2012 1 Vector Spaces 11 Axioms Linear algebra grows out of the problem of solving simultaneous systems of linear equations such as 3x + 2y = 5, 111) x 3y = 9, or 2x + 3y z =

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

Chapter SSM: Linear Algebra. 5. Find all x such that A x = , so that x 1 = x 2 = 0.

Chapter SSM: Linear Algebra. 5. Find all x such that A x = , so that x 1 = x 2 = 0. Chapter Find all x such that A x : Chapter, so that x x ker(a) { } Find all x such that A x ; note that all x in R satisfy the equation, so that ker(a) R span( e, e ) 5 Find all x such that A x 5 ; x x

More information

Apprentice Program: Linear Algebra

Apprentice Program: Linear Algebra Apprentice Program: Linear Algebra Instructor: Miklós Abért Notes taken by Matt Holden and Kate Ponto June 26,2006 1 Matrices An n k matrix A over a ring R is a collection of nk elements of R, arranged

More information

1 Invariant subspaces

1 Invariant subspaces MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

More information

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS

MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS MATH 320: PRACTICE PROBLEMS FOR THE FINAL AND SOLUTIONS There will be eight problems on the final. The following are sample problems. Problem 1. Let F be the vector space of all real valued functions on

More information

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V.

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V. 1 Clifford algebras and Lie triple systems Section 1 Preliminaries Good general references for this section are [LM, chapter 1] and [FH, pp. 299-315]. We are especially indebted to D. Shapiro [S2] who

More information

Parameterizing orbits in flag varieties

Parameterizing orbits in flag varieties Parameterizing orbits in flag varieties W. Ethan Duckworth April 2008 Abstract In this document we parameterize the orbits of certain groups acting on partial flag varieties with finitely many orbits.

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

1.1 Definition. A monoid is a set M together with a map. 1.3 Definition. A monoid is commutative if x y = y x for all x, y M.

1.1 Definition. A monoid is a set M together with a map. 1.3 Definition. A monoid is commutative if x y = y x for all x, y M. 1 Monoids and groups 1.1 Definition. A monoid is a set M together with a map M M M, (x, y) x y such that (i) (x y) z = x (y z) x, y, z M (associativity); (ii) e M such that x e = e x = x for all x M (e

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

Math 594. Solutions 5

Math 594. Solutions 5 Math 594. Solutions 5 Book problems 6.1: 7. Prove that subgroups and quotient groups of nilpotent groups are nilpotent (your proof should work for infinite groups). Give an example of a group G which possesses

More information