Selected exercises from Abstract Algebra by Dummit and Foote (3rd edition). Bryan Félix. April 12, 2017.

Section 11.1

Exercise 6. Let $V$ be a vector space of finite dimension. If $\varphi$ is any linear transformation from $V$ to $V$, prove that there is an integer $m$ such that the intersection of the image of $\varphi^m$ and the kernel of $\varphi^m$ is $\{0\}$.

Proof. Note that for all $i$ we have
\[ \ker(\varphi^i) \subseteq \ker(\varphi^{i+1}) \quad \text{and} \quad \operatorname{Im}(\varphi^{i+1}) \subseteq \operatorname{Im}(\varphi^i). \]
Since $V$ is finite dimensional, both chains must stabilize; therefore there exist $a, b$ such that $\ker(\varphi^{a+k}) = \ker(\varphi^a)$ and $\operatorname{Im}(\varphi^{b+k}) = \operatorname{Im}(\varphi^b)$ for all $k \geq 0$. Without loss of generality let $a \leq b$. We will show that $\ker(\varphi^b) \cap \operatorname{Im}(\varphi^b) = \{0\}$. Any element $v \in \ker(\varphi^b) \cap \operatorname{Im}(\varphi^b)$ satisfies: i) $v = \varphi^b(w)$ for some $w \in V$, and ii) $0 = \varphi^b(v)$. By substitution,
\[ 0 = \varphi^b(v) = \varphi^b(\varphi^b(w)) = \varphi^{2b}(w). \]
Therefore $w \in \ker(\varphi^{2b})$, and since the kernel chain is stationary from $a \leq b$ onward, $\ker(\varphi^{2b}) = \ker(\varphi^b)$, so $w \in \ker(\varphi^b)$ as well. Therefore $v = \varphi^b(w) = 0$, as desired.

Exercise 7. Let $\varphi$ be a linear transformation from a vector space $V$ of dimension $n$ to itself that satisfies $\varphi^2 = 0$. Prove that the image of $\varphi$ is contained in the kernel of $\varphi$ and hence that the rank of $\varphi$ is at most $n/2$.

Proof. We wish to show that $\operatorname{Im}(\varphi) \subseteq \ker(\varphi)$. Take $v \in \operatorname{Im}(\varphi)$. Then there exists $w$ such that $\varphi(w) = v$. Now observe that
\[ \varphi(v) = \varphi(\varphi(w)) = \varphi^2(w) = 0. \]
Hence $v$ is in the kernel of $\varphi$, and therefore $\operatorname{Im}(\varphi) \subseteq \ker(\varphi)$. It follows that
\[ n = \dim(V) = \dim(\operatorname{Im}(\varphi)) + \dim(\ker(\varphi)) \geq 2 \dim(\operatorname{Im}(\varphi)), \]
that is, $n \geq 2 \dim(\operatorname{Im}(\varphi))$, so the rank of $\varphi$ is at most $n/2$.
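As a quick numerical illustration of Exercise 7 (my own sketch, not part of the text), the block matrix $A = \begin{pmatrix} 0 & B \\ 0 & 0 \end{pmatrix}$ squares to zero for any choice of $B$; the matrix below is an arbitrary example:

```python
import numpy as np

# A hypothetical 4x4 matrix with A @ A = 0, built as a block matrix
# [[0, B], [0, 0]]; squaring such a matrix always gives zero.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.block([[np.zeros((2, 2)), B],
              [np.zeros((2, 2)), np.zeros((2, 2))]])

assert np.allclose(A @ A, 0)                    # phi^2 = 0

# Every column of A (i.e. the image of phi) is killed by A,
# so Im(phi) is contained in ker(phi).
assert np.allclose(A @ A, np.zeros_like(A))

# The rank is at most n/2, as Exercise 7 predicts.
n = A.shape[0]
assert np.linalg.matrix_rank(A) <= n // 2
```

Here $\operatorname{rank}(A) = 2 = n/2$, so the bound of the exercise is attained.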
Exercise 8. Let $V$ be a vector space over $F$ and let $\varphi$ be a linear transformation of the vector space $V$ to itself. A nonzero element $v \in V$ satisfying $\varphi(v) = \lambda v$ for some $\lambda \in F$ is called an eigenvector of $\varphi$ with eigenvalue $\lambda$. Prove that for any fixed $\lambda \in F$ the collection of eigenvectors of $\varphi$ with eigenvalue $\lambda$, together with $0$, forms a subspace of $V$.

Proof. We proceed by the standard method. Let $v_1$ and $v_2$ be eigenvectors of $\varphi$ with eigenvalue $\lambda$ and let $c_1$ and $c_2$ be elements of $F$. Observe that
\[ \varphi(c_1 v_1 + c_2 v_2) = c_1 \varphi(v_1) + c_2 \varphi(v_2) = c_1 (\lambda v_1) + c_2 (\lambda v_2) = \lambda (c_1 v_1 + c_2 v_2). \]
Therefore the set is closed under scalar multiplication and vector addition, as desired. Note that $0$ is in the set by construction.

Exercise 9. Let $V$ be a vector space over $F$ and let $\varphi$ be a linear transformation of the vector space $V$ to itself. Suppose for $i = 1, 2, \ldots, k$ that $v_i \in V$ is an eigenvector of $\varphi$ with eigenvalue $\lambda_i \in F$ and that all the eigenvalues $\lambda_i$ are distinct. Prove that $v_1, v_2, \ldots, v_k$ are linearly independent. Conclude that any linear transformation on an $n$-dimensional vector space has at most $n$ eigenvalues.

Proof. We proceed by induction on $k$. For $k = 1$, $\{v_1\}$ is trivially a linearly independent set since, by construction, $v_1 \neq 0$. Assume that $\{v_1, v_2, \ldots, v_k\}$ as described in the problem is linearly independent and consider the set $\{v_1, v_2, \ldots, v_k\} \cup \{v_{k+1}\}$ as prescribed in the problem. Now consider the linear combination
\[ c_1 v_1 + c_2 v_2 + \cdots + c_k v_k + c_{k+1} v_{k+1} = 0. \]
We will show that the only solution is $c_1 = c_2 = \cdots = c_k = c_{k+1} = 0$ by inspecting the following system:
\[ \begin{cases} \lambda_1 \left( c_1 v_1 + c_2 v_2 + \cdots + c_k v_k + c_{k+1} v_{k+1} \right) = \lambda_1 \cdot 0 \\ \varphi \left( c_1 v_1 + c_2 v_2 + \cdots + c_k v_k + c_{k+1} v_{k+1} \right) = \varphi(0), \end{cases} \]
equivalent to
\[ \begin{cases} c_1 \lambda_1 v_1 + c_2 \lambda_1 v_2 + \cdots + c_k \lambda_1 v_k + c_{k+1} \lambda_1 v_{k+1} = 0 \\ c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2 + \cdots + c_k \lambda_k v_k + c_{k+1} \lambda_{k+1} v_{k+1} = 0. \end{cases} \]
If we subtract the second equation from the first one we have
\[ (\lambda_1 - \lambda_2) c_2 v_2 + (\lambda_1 - \lambda_3) c_3 v_3 + \cdots + (\lambda_1 - \lambda_k) c_k v_k + (\lambda_1 - \lambda_{k+1}) c_{k+1} v_{k+1} = 0. \]
Observe that the factors $(\lambda_1 - \lambda_i)$ are nonzero since the eigenvalues are distinct. Furthermore, by the induction hypothesis (applied to the $k$ eigenvectors $v_2, \ldots, v_{k+1}$, whose eigenvalues are distinct), the only solution to the previous linear combination is the trivial one; that is, $c_2 = c_3 = \cdots = c_k = c_{k+1} = 0$. Then it must be the case that $c_1 v_1 = 0$. But that implies $c_1 = 0$, since $v_1 \neq 0$ by construction. The result then follows.

For the second part, recall that any linearly independent subset of a vector space of dimension $n$ has at most $n$ vectors. Since eigenvectors belonging to distinct eigenvalues are linearly independent by the above, $\varphi$ has at most $n$ eigenvalues.

Section 11.2

Exercise 9. If $W$ is a subspace of the vector space $V$ stable under the linear transformation $\varphi$ (i.e., $\varphi(W) \subseteq W$), show that $\varphi$ induces linear transformations $\varphi|_W$ on $W$ and $\bar{\varphi}$ on the quotient
vector space $V/W$. If $\varphi|_W$ and $\bar{\varphi}$ are nonsingular, prove that $\varphi$ is nonsingular. Prove that the converse holds if $V$ has finite dimension, and give a counterexample when $V$ is infinite dimensional.

Exercise 11. Let $\varphi$ be a linear transformation from the finite dimensional vector space $V$ to itself such that $\varphi^2 = \varphi$.

a) Prove that $\operatorname{Im}(\varphi) \cap \ker(\varphi) = \{0\}$.

Proof. Assume there exists a nonzero element $v$ in $\operatorname{Im}(\varphi) \cap \ker(\varphi)$. Then, i) $\varphi(w) = v$ for some $w \in V$, and ii) $\varphi(v) = 0$. We apply the transformation $\varphi$ to $\varphi(w) = v$ and see that
\[ 0 = \varphi(v) = \varphi(\varphi(w)) = \varphi^2(w) = \varphi(w) = v, \]
a contradiction.

b) Prove that $V = \operatorname{Im}(\varphi) \oplus \ker(\varphi)$.

Proof. Write $v = \varphi(v) + (v - \varphi(v))$. Observe that
\[ \varphi(v - \varphi(v)) = \varphi(v) - \varphi^2(v) = \varphi(v) - \varphi(v) = 0. \]
Therefore $(v - \varphi(v)) \in \ker(\varphi)$ and, trivially, $\varphi(v) \in \operatorname{Im}(\varphi)$. Together with part a), the result then follows.

c) Prove that there is a basis of $V$ such that the matrix of $\varphi$ with respect to this basis is a diagonal matrix whose entries are all $0$ or $1$.

Proof. We choose a basis $B_1$ for $\operatorname{Im}(\varphi)$ and a basis $B_2$ for $\ker(\varphi)$ separately. Note that by part b) of the exercise, $B_1 \cup B_2$ is a basis for $V$. Furthermore, if $v \in B_1$, we know there exists $w$ such that $\varphi(w) = v$, for which
\[ \varphi(v) = \varphi^2(w) = \varphi(w) = v. \]
This shows that the matrix representation of $\varphi$ restricted to the ordered basis $B_1$ is the identity matrix. Trivially, the matrix of $\varphi$ restricted to the kernel is the zero matrix. We conclude that the matrix representation of $\varphi$ under the ordered basis $B_1 \cup B_2$ is equal to
\[ \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} \]
as desired.

Exercise 12. Let $V = \mathbb{R}^2$, $v_1 = (1, 0)$, $v_2 = (0, 1)$, so that $v_1, v_2$ are a basis for $V$. Let $\varphi$ be the linear transformation of $V$ to itself whose matrix with respect to this basis is
\[ \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}. \]
Prove that if $W$ is the subspace generated by $v_1$ then $W$ is stable under the action of $\varphi$. Prove that there is no subspace $W'$ invariant under $\varphi$ so that $V = W \oplus W'$.
Proof. Note that $W = \operatorname{span}(v_1)$. For arbitrary $v \in \operatorname{span}(v_1)$ we have $v = c v_1$ (for $c$ in the field of the vector space); then
\[ \varphi(v) = \varphi(c v_1) = c \varphi(v_1) = 2c v_1 \in \operatorname{span}(v_1). \]
Hence $W$ is stable under $\varphi$.

For the second part, assume that such a $W'$ exists. Then $V$ decomposes into two one-dimensional $\varphi$-invariant subspaces, which implies that the matrix of $\varphi$ is diagonalizable. Observe that the characteristic polynomial of $\varphi$ is $(2 - \lambda)^2$, and hence $\varphi$ has a unique eigenvalue. It can easily be shown that the corresponding eigenspace has dimension $1$ and is spanned by $v_1$. Therefore the matrix is not diagonalizable and we conclude that $W'$ cannot exist.

Exercise 36. Let $V$ be the 6-dimensional vector space over $\mathbb{Q}$ consisting of polynomials in the variable $x$ of degree at most 5. Let $\varphi$ be the map of $V$ to itself defined by $\varphi(f) = x^2 f'' - 6x f' + 12 f$.

a) Prove that $\varphi$ is a linear transformation of $V$ to itself.

Proof. Take $p_1$ and $p_2$ to be polynomials in the space and let $\alpha$ and $\beta$ be rational numbers. Then
\[ \varphi(\alpha p_1 + \beta p_2) = x^2 (\alpha p_1 + \beta p_2)'' - 6x (\alpha p_1 + \beta p_2)' + 12 (\alpha p_1 + \beta p_2) = \alpha \left( x^2 p_1'' - 6x p_1' + 12 p_1 \right) + \beta \left( x^2 p_2'' - 6x p_2' + 12 p_2 \right) = \alpha \varphi(p_1) + \beta \varphi(p_2). \]
It follows that $\varphi$ is a linear transformation. Note that the space is preserved, as all of the terms in the map are polynomials of degree at most 5.

b) Determine a basis for the image and for the kernel of $\varphi$.

Proof. Take $\{1, x, x^2, x^3, x^4, x^5\}$ as the basis for the space. Since $\varphi(x^k) = (k(k-1) - 6k + 12) x^k = (k-3)(k-4) x^k$, the matrix representation of $\varphi$ is
\[ \begin{pmatrix} 12 & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 \end{pmatrix}. \]
Clearly the first three columns together with the last one are linearly independent. Therefore the image of the transformation has basis $\{1, x, x^2, x^5\}$ (note that $\dim \operatorname{Im}(\varphi) = 4$). For the basis of the kernel it suffices to find two linearly independent vectors in the kernel (by the rank-nullity theorem). Observe that $\varphi(x^3) = \varphi(x^4) = 0$. Therefore $\{x^3, x^4\}$ is a basis for the kernel.

Exercise 37.
Let $V$ be the 7-dimensional vector space over the field $F$ consisting of the polynomials in the variable $x$ of degree at most 6. Let $\varphi$ be the linear transformation of $V$ to itself defined by $\varphi(f) = f'$. For each of the fields below, determine a basis for the image and for the
kernel of $\varphi$.

Note. For all the following cases I proceed as in the previous exercise. I use the standard basis $\{1, x, \ldots, x^6\}$ to construct a matrix for the transformation over the field; then I use the column space to read off the basis of the image (denoted $I$) and the basis of the kernel (denoted $K$).

a) $F = \mathbb{R}$.

Proof. By baby calculus, any polynomial of degree at most 5 has an antiderivative of degree at most 6, so every such polynomial is in the image. Hence $I = \{1, x, x^2, x^3, x^4, x^5\}$. Trivially $K = \{1\}$, as any constant has derivative equal to $0$.

b) $F = \mathbb{F}_2$.

Proof. Since $\varphi(x^j) = j x^{j-1}$, the corresponding matrix representation is
\[ \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
The linearly independent columns give $I = \{1, x^2, x^4\}$ and the zero columns give $K = \{1, x^2, x^4, x^6\}$.

c) $F = \mathbb{F}_3$.

Proof. The corresponding matrix representation is
\[ \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
The linearly independent columns give $I = \{1, x, x^3, x^4\}$ and the zero columns give $K = \{1, x^3, x^6\}$.

d) $F = \mathbb{F}_5$.

Proof. The corresponding matrix representation is
\[ \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \]
The linearly independent columns give $I = \{1, x, x^2, x^3, x^5\}$ and the zero columns give $K = \{1, x^5\}$.
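Because the derivative sends each monomial $x^j$ to $(j \bmod p)\, x^{j-1}$, the bases above can be checked mechanically. A short Python sketch of my own (the helper `derivative_bases` is a hypothetical name, not from the text):

```python
# x^j lies in the kernel of d/dx over F_p exactly when p divides j, and
# x^(j-1) spans a nonzero column of the matrix exactly when it does not.

def derivative_bases(p, max_deg=6):
    """Monomial bases (as exponent lists) of ker and im of d/dx
    on polynomials of degree <= max_deg over F_p."""
    kernel = [j for j in range(max_deg + 1) if j % p == 0]
    image = [j - 1 for j in range(1, max_deg + 1) if j % p != 0]
    return kernel, image

# F_2: K = {1, x^2, x^4, x^6}, I = {1, x^2, x^4}
assert derivative_bases(2) == ([0, 2, 4, 6], [0, 2, 4])
# F_3: K = {1, x^3, x^6}, I = {1, x, x^3, x^4}
assert derivative_bases(3) == ([0, 3, 6], [0, 1, 3, 4])
# F_5: K = {1, x^5}, I = {1, x, x^2, x^3, x^5}
assert derivative_bases(5) == ([0, 5], [0, 1, 2, 3, 5])
# Rank-nullity: dim ker + dim im = 7 in every case.
assert all(len(k) + len(i) == 7 for k, i in map(derivative_bases, (2, 3, 5)))
```

This shortcut works because the matrix of the derivative in the monomial basis has at most one nonzero entry per column, so the nonzero columns are automatically linearly independent.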
Section 11.3

Exercise 2. Let $V$ be the collection of polynomials with coefficients in $\mathbb{Q}$ in the variable $x$ of degree at most 5, with $1, x, x^2, \ldots, x^5$ as basis. Prove that the following are elements of the dual space of $V$ and express them as linear combinations of the dual basis.

Note. I will use the natural basis for the dual space, i.e.
\[ v_i^*(v_j) = \begin{cases} 1 & i = j \\ 0 & i \neq j, \end{cases} \]
where $v_j$ is the $j$-th element of the ordered basis for $V$.

a) $E : V \to \mathbb{Q}$ defined by $E(p(x)) = p(3)$.

Proof. Let $\alpha p_1 + \beta p_2$ be an element of $V$. Note that
\[ E(\alpha p_1 + \beta p_2) = (\alpha p_1 + \beta p_2)(3) = \alpha p_1(3) + \beta p_2(3) = \alpha E(p_1) + \beta E(p_2). \]
Since $E$ is linear, $E$ is an element of the dual space of $V$. Note that $E(x^i) = 3^i$. Then, under the standard dual basis,
\[ E = \sum_{i=0}^{5} 3^i v_i^*. \]

b) $\varphi : V \to \mathbb{Q}$ defined by $\varphi(p(x)) = \int_0^1 p(t)\,dt$.

Proof. Let $\alpha p_1 + \beta p_2$ be an element of $V$. Note that
\[ \varphi(\alpha p_1 + \beta p_2) = \int_0^1 (\alpha p_1 + \beta p_2)\,dt = \alpha \int_0^1 p_1\,dt + \beta \int_0^1 p_2\,dt = \alpha \varphi(p_1) + \beta \varphi(p_2). \]
Since $\varphi$ is linear, $\varphi$ is an element of the dual space of $V$. Note that $\varphi(x^i) = \int_0^1 t^i\,dt = \frac{1}{i+1}$. Then, under the standard dual basis,
\[ \varphi = \sum_{i=0}^{5} \frac{1}{i+1} v_i^*. \]

c) $\varphi : V \to \mathbb{Q}$ defined by $\varphi(p(x)) = \int_0^1 t^2 p(t)\,dt$.

Proof. Let $\alpha p_1 + \beta p_2$ be an element of $V$. Note that
\[ \varphi(\alpha p_1 + \beta p_2) = \int_0^1 t^2 (\alpha p_1 + \beta p_2)\,dt = \alpha \int_0^1 t^2 p_1\,dt + \beta \int_0^1 t^2 p_2\,dt = \alpha \varphi(p_1) + \beta \varphi(p_2). \]
Since $\varphi$ is linear, $\varphi$ is an element of the dual space of $V$. Note that $\varphi(x^i) = \int_0^1 t^{i+2}\,dt = \frac{1}{i+3}$. Then, under the standard dual basis,
\[ \varphi = \sum_{i=0}^{5} \frac{1}{i+3} v_i^*. \]
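The dual-basis coefficients found in parts a)-c) can be verified on a sample polynomial. A sketch of my own using sympy; the polynomial `p` below is an arbitrary test choice, not from the text:

```python
import sympy as sp

x = sp.symbols('x')
p = 3*x**5 - sp.Rational(1, 2)*x**2 + 7          # hypothetical test polynomial
coeffs = [p.coeff(x, i) for i in range(6)]        # coordinates in basis 1, x, ..., x^5

# (a) evaluation at 3:  E = sum_i 3^i v_i*
assert sum(c * 3**i for i, c in enumerate(coeffs)) == p.subs(x, 3)

# (b) integration over [0, 1]:  coefficients 1/(i+1)
assert sum(c * sp.Rational(1, i + 1) for i, c in enumerate(coeffs)) \
       == sp.integrate(p, (x, 0, 1))

# (c) integration of t^2 p(t) over [0, 1]:  coefficients 1/(i+3)
assert sum(c * sp.Rational(1, i + 3) for i, c in enumerate(coeffs)) \
       == sp.integrate(x**2 * p, (x, 0, 1))
```

Each assertion compares the pairing of the claimed dual-basis coefficients with the coordinates of $p$ against the functional applied directly to $p$.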
d) $\varphi : V \to \mathbb{Q}$ defined by $\varphi(p(x)) = p'(5)$.

Proof. Let $\alpha p_1 + \beta p_2$ be an element of $V$. Note that
\[ \varphi(\alpha p_1 + \beta p_2) = (\alpha p_1 + \beta p_2)'(5) = \alpha p_1'(5) + \beta p_2'(5) = \alpha \varphi(p_1) + \beta \varphi(p_2). \]
Since $\varphi$ is linear, $\varphi$ is an element of the dual space of $V$. Note that $\varphi(x^i) = (x^i)'\big|_{x=5} = i\,5^{i-1}$. Then, under the standard dual basis,
\[ \varphi = \sum_{i=0}^{5} i\,5^{i-1} v_i^*. \]

Exercise 4. If $V$ is infinite dimensional with basis $\mathcal{A}$, prove that $\mathcal{A}^* = \{v^* \mid v \in \mathcal{A}\}$ does not span $V^*$.

Proof. Let $\varphi : V \to F$ be defined by $\varphi(v) = 1$ for all $v \in \mathcal{A}$. Any element of the span of $\mathcal{A}^*$ has only finitely many nonzero terms; that is, if $\varphi$ were in the span, then $\varphi = \sum_{v \in \mathcal{B}} v^*$ for some finite set $\mathcal{B} \subseteq \mathcal{A}$. But this implies that $\varphi(w) = 0$ for $w \in \mathcal{A} \setminus \mathcal{B}$, and such $w$ exist since $\mathcal{A}$ is infinite, contradicting $\varphi(w) = 1$. Therefore $\varphi$ is not in the span of $\mathcal{A}^*$.

Section 11.4

Exercise 6. (Minkowski's Criterion) Suppose $A$ is an $n \times n$ matrix with real entries such that the diagonal elements are all positive, the off-diagonal elements are all negative and the row sums are all positive. Prove that $\det A \neq 0$.

Proof. Assume $\det(A) = 0$. Then there exists a nonzero solution $X$ of the system $AX = 0$. Let $x_m$ be the coordinate of $X$ of maximum absolute value, so that $|x_j| \leq |x_m|$ for all $j$ and $|x_m| > 0$. Row $m$ of $AX = 0$ reads
\[ a_{m,m} x_m + \sum_{j \neq m} a_{m,j} x_j = 0, \quad \text{i.e.} \quad a_{m,m} x_m = -\sum_{j \neq m} a_{m,j} x_j. \]
Taking absolute values and applying the triangle inequality,
\[ a_{m,m} |x_m| = \Big| \sum_{j \neq m} a_{m,j} x_j \Big| \leq \sum_{j \neq m} |a_{m,j}|\,|x_j| \leq \sum_{j \neq m} |a_{m,j}|\,|x_m| = -\sum_{j \neq m} a_{m,j}\,|x_m|, \]
where the last equality holds because the off-diagonal entries $a_{m,j}$ are negative. Rearranging,
\[ \Big( a_{m,m} + \sum_{j \neq m} a_{m,j} \Big) |x_m| \leq 0. \]
But the factor in parentheses is the $m$-th row sum, which is positive by hypothesis, and $|x_m| > 0$, so the left hand side is positive. This is a contradiction, and hence $\det A \neq 0$.

Section 11.5

Exercise 13. Let $F$ be any field in which $1 \neq -1$ and let $V$ be a vector space over $F$. Prove that $V \otimes_F V = S^2(V) \oplus \wedge^2(V)$, i.e., that every 2-tensor may be written uniquely as a sum of a symmetric and an alternating tensor.

Proof. First, observe that $\dim S^2(V) = \frac{n(n+1)}{2}$ and that $\dim \wedge^2(V) = \frac{n(n-1)}{2}$. Hence $\dim\left( S^2(V) \oplus \wedge^2(V) \right) = n^2 = \dim(V \otimes_F V)$. The result follows if we prove that the spaces $S^2(V)$ and $\wedge^2(V)$ intersect trivially. To that end, assume $v \in S^2(V) \cap \wedge^2(V)$. Then, by definition,
\[ \begin{cases} \sigma v = v \\ \sigma v = \operatorname{sign}(\sigma)\, v \end{cases} \quad \Longrightarrow \quad v\left(1 - \operatorname{sign}(\sigma)\right) = 0. \]
For the nontrivial $\sigma$ in the symmetric group $S_2$ we have $\operatorname{sign}(\sigma) = -1$, so the above equation implies $v(1 + 1) = 0$. By hypothesis $1 \neq -1$, so $1 + 1 \neq 0$, and thus $v$ is forced to be equal to $0$, as desired.
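Identifying a 2-tensor over $\mathbb{R}$ with an $n \times n$ matrix, so that $\sigma$ acts by transposition, the decomposition is the familiar split $A = \frac{A + A^T}{2} + \frac{A - A^T}{2}$. A numpy sketch of my own with an arbitrary sample matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, -1.0],
              [7.0, 8.0, 3.0]])   # a hypothetical 2-tensor, written as a matrix

S = (A + A.T) / 2   # symmetric part, in S^2(V):      sigma v = v
K = (A - A.T) / 2   # alternating part, in wedge^2(V): sigma v = -v

assert np.allclose(S, S.T)     # S is symmetric
assert np.allclose(K, -K.T)    # K is alternating
assert np.allclose(S + K, A)   # every 2-tensor is such a sum
```

The division by 2 is exactly where the hypothesis $1 \neq -1$ enters: uniqueness follows since $A = S' + K'$ with $S'$ symmetric and $K'$ alternating forces $A^T = S' - K'$, hence $S' = (A + A^T)/2$ and $K' = (A - A^T)/2$.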