Selected exercises from Abstract Algebra by Dummit and Foote (3rd edition).


Bryan Félix. April 12, 2017.

Section 11.1

Exercise 6. Let V be a vector space of finite dimension. If ϕ is any linear transformation from V to V, prove that there is an integer m such that the intersection of the image of ϕ^m and the kernel of ϕ^m is {0}.

Proof. Note that for all i we have

ker(ϕ^i) ⊆ ker(ϕ^(i+1))  and  Im(ϕ^(i+1)) ⊆ Im(ϕ^i).

Since V is finite dimensional, so are the kernels and the images; therefore there exist a and b such that ker(ϕ^a) = ker(ϕ^(a+1)) and Im(ϕ^(b+1)) = Im(ϕ^b). Once the kernel chain stabilizes it stays constant, so ker(ϕ^a) = ker(ϕ^k) for all k ≥ a. Without loss of generality let b ≥ a. We will show that ker(ϕ^b) ∩ Im(ϕ^b) = {0}. Any element v ∈ ker(ϕ^b) ∩ Im(ϕ^b) satisfies:

i) v = ϕ^b(w) for some w ∈ V, and
ii) 0 = ϕ^b(v).

By substitution, 0 = ϕ^b(v) = ϕ^b(ϕ^b(w)) = ϕ^(2b)(w). Therefore w ∈ ker(ϕ^(2b)), and since 2b ≥ b ≥ a we have ker(ϕ^(2b)) = ker(ϕ^b), so w ∈ ker(ϕ^b) as well. Therefore v = ϕ^b(w) = 0, as desired.

Exercise 7. Let ϕ be a linear transformation from a vector space V of dimension n to itself that satisfies ϕ² = 0. Prove that the image of ϕ is contained in the kernel of ϕ and hence that the rank of ϕ is at most n/2.

Proof. We wish to show that Im(ϕ) ⊆ ker(ϕ). Take v ∈ Im(ϕ). Then there exists w such that ϕ(w) = v. Now observe that

ϕ(v) = ϕ(ϕ(w)) = ϕ²(w) = 0.

Hence v is in the kernel of ϕ, and furthermore Im(ϕ) ⊆ ker(ϕ). In particular dim(ker(ϕ)) ≥ dim(Im(ϕ)), so

n = dim(V) = dim(Im(ϕ)) + dim(ker(ϕ)) ≥ 2 dim(Im(ϕ)),

that is to say, n ≥ 2 dim(Im(ϕ)).
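As a quick numerical sanity check of Exercises 6 and 7 (a sketch only; the matrices B and N below are made-up examples, not taken from the text), one can watch the ranks of the powers of a matrix stabilize and then verify that the image and kernel of the stable power meet trivially:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis for ker(M), read off from the SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T

# Exercise 6 on a made-up 3x3 example: B is neither nilpotent nor
# invertible, and the chains stabilize at m = 2.
B = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 2.]])
powers = [np.linalg.matrix_power(B, k) for k in range(1, 5)]
ranks = [int(np.linalg.matrix_rank(P)) for P in powers]
print(ranks)  # [2, 1, 1, 1] -- the image (hence kernel) chain has stabilized

# Im(B^2) and ker(B^2) intersect trivially exactly when a basis of the
# image stacked next to a basis of the kernel has rank dim Im + dim ker.
B2 = powers[1]
u, s, _ = np.linalg.svd(B2)
image_basis = u[:, : int((s > 1e-10).sum())]
combined = np.hstack([image_basis, null_space(B2)])
print(np.linalg.matrix_rank(combined))  # 3

# Exercise 7 on a made-up nilpotent N with N^2 = 0: rank(N) <= n/2.
N = np.zeros((4, 4))
N[0, 2] = N[1, 3] = 1.0
assert not (N @ N).any()
print(np.linalg.matrix_rank(N))  # 2, and 2 <= 4/2
```

The rank test in the middle is just the statement that dim(U + U′) = dim U + dim U′ forces U ∩ U′ = {0}.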

Exercise 8. Let V be a vector space over F and let ϕ be a linear transformation of the vector space V to itself. A non-zero element v ∈ V satisfying ϕ(v) = λv for some λ ∈ F is called an eigenvector of ϕ with eigenvalue λ. Prove that for any fixed λ ∈ F the collection of eigenvectors of ϕ with eigenvalue λ, together with 0, forms a subspace of V.

Proof. We proceed by the standard method. Let v₁ and v₂ be eigenvectors of ϕ with eigenvalue λ and let c₁ and c₂ be elements of F. Observe that

ϕ(c₁v₁ + c₂v₂) = c₁ϕ(v₁) + c₂ϕ(v₂) = c₁(λv₁) + c₂(λv₂) = λ(c₁v₁ + c₂v₂).

Therefore the space is closed under scalar multiplication and vector addition, as desired. Note that 0 is in the space by construction.

Exercise 9. Let V be a vector space over F and let ϕ be a linear transformation of the vector space V to itself. Suppose, for i = 1, 2, ..., k, that vᵢ ∈ V is an eigenvector of ϕ with eigenvalue λᵢ ∈ F and that all the eigenvalues λᵢ are distinct. Prove that v₁, v₂, ..., v_k are linearly independent. Conclude that any linear transformation on an n-dimensional vector space has at most n distinct eigenvalues.

Proof. We proceed by induction over k. For k = 1, {v₁} is trivially a linearly independent set since, by definition, v₁ ≠ 0. Assume that {v₁, v₂, ..., v_k} as described in the problem is linearly independent and consider an additional eigenvector v_(k+1) as prescribed in the problem. Now, consider the linear combination

c₁v₁ + c₂v₂ + ⋯ + c_k v_k + c_(k+1) v_(k+1) = 0.

We will show that the only solution is c₁ = c₂ = ⋯ = c_k = c_(k+1) = 0 by inspecting the following system

λ₁ (c₁v₁ + c₂v₂ + ⋯ + c_(k+1)v_(k+1)) = λ₁ · 0
ϕ (c₁v₁ + c₂v₂ + ⋯ + c_(k+1)v_(k+1)) = ϕ(0),

equivalent to

c₁λ₁v₁ + c₂λ₁v₂ + ⋯ + c_kλ₁v_k + c_(k+1)λ₁v_(k+1) = 0
c₁λ₁v₁ + c₂λ₂v₂ + ⋯ + c_kλ_kv_k + c_(k+1)λ_(k+1)v_(k+1) = 0.

If we subtract the second equation from the first one we have

(λ₁ − λ₂)c₂v₂ + (λ₁ − λ₃)c₃v₃ + ⋯ + (λ₁ − λ_k)c_kv_k + (λ₁ − λ_(k+1))c_(k+1)v_(k+1) = 0.
Observe that the factors (λ₁ − λᵢ) are non-zero since the eigenvalues are distinct. Furthermore, by the induction hypothesis applied to the k eigenvectors v₂, ..., v_(k+1) (which have distinct eigenvalues), the only solution to the previous linear combination is the trivial solution. That is to say, c₂ = c₃ = ⋯ = c_(k+1) = 0. Then it must be the case that c₁v₁ = 0. But that implies c₁ = 0, since v₁ ≠ 0 by construction. The result then follows.

For the second part, eigenvectors corresponding to distinct eigenvalues are linearly independent by the above, and any subset of a vector space of dimension n has at most n linearly independent vectors. Since {v₁, v₂, ..., v_k} ⊆ V, a linear transformation on V has at most n distinct eigenvalues.

Section 11.2

Exercise 9. If W is a subspace of the vector space V stable under the linear transformation ϕ (i.e., ϕ(W) ⊆ W), show that ϕ induces linear transformations ϕ|_W on W and ϕ̄ on the quotient

vector space V/W. If ϕ|_W and ϕ̄ are non-singular, prove that ϕ is non-singular. Prove that the converse holds if V has finite dimension, and give a counterexample with V infinite dimensional.

Exercise 11. Let ϕ be a linear transformation from the finite dimensional vector space V to itself such that ϕ² = ϕ.

a) Prove that Im(ϕ) ∩ ker(ϕ) = {0}.

Proof. Assume there exists a non-zero element v in Im(ϕ) ∩ ker(ϕ). Then, i) for some w in V, ϕ(w) = v, and ii) ϕ(v) = 0. We apply the transformation ϕ to ϕ(w) = v and see that

ϕ(ϕ(w)) = ϕ(v) = 0, while ϕ(ϕ(w)) = ϕ²(w) = ϕ(w) = v.

Hence v = 0, a contradiction.

b) Prove that V = Im(ϕ) ⊕ ker(ϕ).

Proof. Write v = ϕ(v) + (v − ϕ(v)). Observe that

ϕ(v − ϕ(v)) = ϕ(v) − ϕ²(v) = ϕ(v) − ϕ(v) = 0.

Therefore (v − ϕ(v)) ∈ ker(ϕ) and, trivially, ϕ(v) ∈ Im(ϕ). Together with part a), the result follows.

c) Prove that there is a basis of V such that the matrix of ϕ with respect to this basis is a diagonal matrix whose entries are all 0 or 1.

Proof. We choose a basis B₁ for Im(ϕ) and a basis B₂ for ker(ϕ) separately. Note that by part b) of the exercise, B₁ ∪ B₂ is a basis for V. Furthermore, if v ∈ B₁, we know there exists w such that ϕ(w) = v, for which ϕ(v) = ϕ²(w) = ϕ(w) = v. This shows that the matrix representation of ϕ restricted to the ordered basis B₁ is the identity matrix. Trivially, the matrix of ϕ restricted to the kernel is the zero matrix. We conclude that the matrix representation of ϕ under the ordered basis B₁ ∪ B₂ is equal to

( I 0 )
( 0 0 )

as desired.

Exercise 12. Let V = R², v₁ = (1, 0), v₂ = (0, 1), so that v₁, v₂ are a basis for V. Let ϕ be the linear transformation of V to itself whose matrix with respect to this basis is

( 2 1 )
( 0 2 ).

Prove that if W is the subspace generated by v₁ then W is stable under the action of ϕ. Prove that there is no subspace W′ invariant under ϕ such that V = W ⊕ W′.

Proof. Note that W = span(v₁). For arbitrary v ∈ span(v₁) we have v = cv₁ (for c in the field of the vector space); then ϕ(v) = ϕ(cv₁) = cϕ(v₁) = 2cv₁ ∈ span(v₁). Hence W is stable under ϕ.

For the second part, assume that such a W′ exists. Since dim V = 2 and W is one-dimensional, W′ is a one-dimensional invariant subspace, so it is spanned by an eigenvector of ϕ, and it follows that the matrix of ϕ is diagonalizable. Observe that the characteristic polynomial of ϕ is (2 − λ)², and hence ϕ has a unique eigenvalue. It can easily be shown that the corresponding eigenspace has dimension 1 and is spanned by v₁. Therefore the matrix is not diagonalizable, and we conclude that W′ cannot exist.

Exercise 36. Let V be the 6-dimensional vector space over Q consisting of polynomials in the variable x of degree at most 5. Let ϕ be the map of V to itself defined by ϕ(f) = x²f″ − 6xf′ + 12f.

a) Prove that ϕ is a linear transformation of V to itself.

Proof. Take p₁ and p₂ to be polynomials in the space and let α and β be rational numbers. Then

ϕ(αp₁ + βp₂) = x²(αp₁ + βp₂)″ − 6x(αp₁ + βp₂)′ + 12(αp₁ + βp₂)
= αx²p₁″ + βx²p₂″ − 6αxp₁′ − 6βxp₂′ + 12αp₁ + 12βp₂
= α(x²p₁″ − 6xp₁′ + 12p₁) + β(x²p₂″ − 6xp₂′ + 12p₂)
= αϕ(p₁) + βϕ(p₂).

It follows that ϕ is a linear transformation. Note that the space is preserved, as all of the terms in the map are polynomials of degree at most 5.

b) Determine a basis for the image and for the kernel of ϕ.

Proof. Let {1, x, x², x³, x⁴, x⁵} be the basis for the space. Applying ϕ to each basis element (for example, ϕ(x³) = 6x³ − 18x³ + 12x³ = 0), the matrix representation of ϕ is the diagonal matrix

diag(12, 6, 2, 0, 0, 2).

Clearly, the first three columns together with the last one are linearly independent. Therefore the image of the transformation has basis {1, x, x², x⁵} (note that dim Im ϕ = 4). For the basis of the kernel it suffices to find two linearly independent vectors in the kernel (by the rank–nullity theorem). Observe that ϕ(x³) = ϕ(x⁴) = 0. Therefore {x³, x⁴} is a basis for the kernel.

Exercise 37.
Let V be the 7-dimensional vector space over the field F consisting of the polynomials in the variable x of degree at most 6. Let ϕ be the linear transformation of V to itself defined by ϕ(f) = f′. For each of the fields below, determine a basis for the image and for the

kernel of ϕ.

Note. For all the following cases I proceed as in the previous exercise: I use the standard basis {1, x, ..., x⁶} to construct a matrix for the transformation over the field, then I use the column space to read off a basis of the image (denoted I) and a basis of the kernel (denoted K). Since (x^j)′ = jx^(j−1), the j-th column of this matrix has the single entry j (reduced in F) in row j − 1.

a) F = R

Proof. By baby calculus, any polynomial of degree at most 5 has an antiderivative of degree at most 6, which lies in V. Hence I = {1, x, x², x³, x⁴, x⁵}. Trivially K = {1}, as precisely the constants have derivative equal to 0.

b) F = F₂

Proof. In the corresponding matrix representation the non-zero columns are those for x, x³ and x⁵ (the even coefficients vanish). The linearly independent columns give I = {1, x², x⁴} and the zero columns give K = {1, x², x⁴, x⁶}.

c) F = F₃

Proof. Here the non-zero columns are those for x, x², x⁴ and x⁵, with entries 1, 2, 1, 2 respectively. The linearly independent columns give I = {1, x, x³, x⁴} and the zero columns give K = {1, x³, x⁶}.

d) F = F₅

Proof. Here the non-zero columns are those for x, x², x³, x⁴ and x⁶, with entries 1, 2, 3, 4, 1 respectively. The linearly independent columns give I = {1, x, x², x³, x⁵} and the zero columns give K = {1, x⁵}.
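The four cases above can be checked mechanically. The helper below is my own construction, not from the text: it records, for each basis monomial x^j, the coefficient j mod p of its derivative, and reads off which exponents span the image and which lie in the kernel.

```python
# A quick check of Exercise 37 (a sketch; `derivative_bases` is a
# hypothetical helper). The matrix of d/dx on {1, x, ..., x^6} over F_p
# has one non-zero entry per column: column j holds (j mod p) in row
# j - 1, since (x^j)' = j x^(j-1). Non-zero columns index the image
# basis; zero columns index the kernel basis.
def derivative_bases(p, deg=6):
    col = [j % p for j in range(deg + 1)]  # coefficient of x^(j-1) in (x^j)'
    image = sorted({j - 1 for j in range(1, deg + 1) if col[j] != 0})
    kernel = [j for j in range(deg + 1) if col[j] == 0]
    return image, kernel                   # lists of exponents

for p in (2, 3, 5):
    image, kernel = derivative_bases(p)
    print(f"F_{p}: I = {image}, K = {kernel}")
# F_2: I = [0, 2, 4], K = [0, 2, 4, 6]
# F_3: I = [0, 1, 3, 4], K = [0, 3, 6]
# F_5: I = [0, 1, 2, 3, 5], K = [0, 5]
```

The printed exponent lists agree with the bases found above, e.g. I = {1, x², x⁴} over F₂.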

Section 11.3

Exercise 2. Let V be the collection of polynomials with coefficients in Q in the variable x of degree at most 5, with 1, x, x², ..., x⁵ as basis. Prove that the following are elements of the dual space of V and express them as linear combinations of the dual basis.

Note. I will use the natural basis for the dual space; i.e.

vᵢ*(v_j) = 1 if i = j, and vᵢ*(v_j) = 0 if i ≠ j,

where v_j is the j-th element of the ordered basis for V.

a) E : V → Q defined by E(p(x)) = p(3).

Proof. Let αp₁ + βp₂ be an element of V. Note that

E(αp₁ + βp₂) = (αp₁ + βp₂)(3) = αp₁(3) + βp₂(3) = αE(p₁) + βE(p₂).

Since E is linear, E is an element of the dual space of V. Note that E(xⁱ) = 3ⁱ. Then, under the standard dual basis,

E = Σ_{i=0}^{5} 3ⁱ vᵢ*.

b) ϕ : V → Q defined by ϕ(p(x)) = ∫₀¹ p(t) dt.

Proof. Let αp₁ + βp₂ be an element of V. Note that

ϕ(αp₁ + βp₂) = ∫₀¹ (αp₁ + βp₂) dt = α ∫₀¹ p₁ dt + β ∫₀¹ p₂ dt = αϕ(p₁) + βϕ(p₂).

Since ϕ is linear, ϕ is an element of the dual space of V. Note that ϕ(xⁱ) = ∫₀¹ tⁱ dt = 1/(i+1). Then, under the standard dual basis,

ϕ = Σ_{i=0}^{5} 1/(i+1) vᵢ*.

c) ϕ : V → Q defined by ϕ(p(x)) = ∫₀¹ t² p(t) dt.

Proof. Let αp₁ + βp₂ be an element of V. Note that

ϕ(αp₁ + βp₂) = ∫₀¹ t²(αp₁ + βp₂) dt = α ∫₀¹ t² p₁ dt + β ∫₀¹ t² p₂ dt = αϕ(p₁) + βϕ(p₂).

Since ϕ is linear, ϕ is an element of the dual space of V. Note that ϕ(xⁱ) = ∫₀¹ t^(i+2) dt = 1/(i+3). Then, under the standard dual basis,

ϕ = Σ_{i=0}^{5} 1/(i+3) vᵢ*.

d) ϕ : V → Q defined by ϕ(p(x)) = p′(5).

Proof. Let αp₁ + βp₂ be an element of V. Note that

ϕ(αp₁ + βp₂) = (αp₁ + βp₂)′(5) = αp₁′(5) + βp₂′(5) = αϕ(p₁) + βϕ(p₂).

Since ϕ is linear, ϕ is an element of the dual space of V. Note that ϕ(xⁱ) = (xⁱ)′(5) = i·5^(i−1). Then, under the standard dual basis,

ϕ = Σ_{i=0}^{5} i·5^(i−1) vᵢ*.

Exercise 4. If V is infinite dimensional with basis A, prove that A* = {v* : v ∈ A} does not span V*.

Proof. Let ϕ : V → F be defined by ϕ(v) = 1 for all v ∈ A. If ϕ were in the span of A*, then, since a linear combination can only have finitely many non-zero terms, we could write ϕ = Σ_{v∈B} c_v v* for some finite subset B of A. But this implies ϕ(w) = 0 for any w ∈ A \ B, and such w exist because A is infinite. This contradicts ϕ(w) = 1, therefore ϕ is not in the span of A*.

Section 11.4

Exercise 6. (Minkowski's Criterion) Suppose A is an n × n matrix with real entries such that the diagonal elements are all positive, the off-diagonal elements are all negative and the row sums are all positive. Prove that det A ≠ 0.

Proof. Assume det(A) = 0. Then, there exists a non-zero solution X for the system AX = 0. Let x_m be a coordinate of X of maximum absolute value, so that |x_j| ≤ |x_m| for all j and |x_m| > 0. Then, we have the following for every row i:

|Σ_j a_{i,j} x_j| ≤ Σ_j |a_{i,j}| |x_j| ≤ Σ_j |a_{i,j}| |x_m|.

Taking the last inequality, we fix i to be equal to m and we multiply by −1 to get

−Σ_j |a_{m,j}| |x_j| ≥ −Σ_j |a_{m,j}| |x_m|.

Now we add 2 a_{m,m} |x_m| to both sides (noting that |a_{m,m}| = a_{m,m}, since the diagonal is positive) and simplify to

a_{m,m} |x_m| − Σ_{j≠m} |a_{m,j}| |x_j| ≥ a_{m,m} |x_m| − Σ_{j≠m} |a_{m,j}| |x_m|.

Note that the right hand side of the inequality is equal to |x_m| Σ_{j=1}^{n} a_{m,j} (using |a_{m,j}| = −a_{m,j} for j ≠ m, since the off-diagonal entries are negative), which on its own is greater than 0 by construction: the row sums are positive and |x_m| > 0. It is to say

a_{m,m} |x_m| − Σ_{j≠m} |a_{m,j}| |x_j| ≥ |x_m| Σ_{j=1}^{n} a_{m,j} > 0,

or simply

a_{m,m} |x_m| − Σ_{j≠m} |a_{m,j}| |x_j| > 0.

The left hand side can be bounded below by means of the reverse triangle inequality, thus

|Σ_{j=1}^{n} a_{m,j} x_j| = |a_{m,m} x_m + Σ_{j≠m} a_{m,j} x_j| ≥ a_{m,m} |x_m| − Σ_{j≠m} |a_{m,j}| |x_j| > 0.

Hence the m-th coordinate of AX is non-zero, which is a contradiction to AX = 0.

Section 11.5

Exercise 13. Let F be any field in which 1 + 1 ≠ 0 and let V be a vector space over F. Prove that

V ⊗_F V = S²(V) ⊕ Λ²(V),

i.e., that every 2-tensor may be written uniquely as a sum of a symmetric and an alternating tensor.

Proof. First, observe that dim S²(V) = n(n+1)/2 and dim Λ²(V) = n(n−1)/2, where n = dim V. Hence dim(S²(V) ⊕ Λ²(V)) = n² = dim(V ⊗_F V). The result follows if we prove that the spaces S²(V) and Λ²(V) intersect trivially. To that end, assume v ∈ S²(V) ∩ Λ²(V). Then, by definition,

σv = v  and  σv = sign(σ) v,

so v(1 − sign(σ)) = 0. Note that sign(σ) is necessarily equal to −1 for the non-trivial σ, since we are working with the symmetric group S₂. The above equation then implies that v(1 + 1) = 0. By hypothesis 1 + 1 ≠ 0, and thus v is forced to be equal to 0, as desired.
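As a concrete illustration of Exercise 13 (a sketch only: I identify a 2-tensor on an n-dimensional space with its n × n coefficient matrix, and the example tensor is made up), the decomposition is T = (T + Tᵗ)/2 + (T − Tᵗ)/2, and the division by 2 is precisely where the hypothesis 1 + 1 ≠ 0 enters:

```python
from fractions import Fraction

# Identify a 2-tensor over Q with its n x n coefficient matrix T.
# Symmetric part: (T + T^t)/2. Alternating part: (T - T^t)/2.
# Dividing by 2 is legitimate exactly because 1 + 1 != 0 in Q.
def split_tensor(T):
    n = len(T)
    sym = [[(T[i][j] + T[j][i]) / 2 for j in range(n)] for i in range(n)]
    alt = [[(T[i][j] - T[j][i]) / 2 for j in range(n)] for i in range(n)]
    return sym, alt

# A made-up 2x2 example over Q.
T = [[Fraction(1), Fraction(2)],
     [Fraction(4), Fraction(3)]]
sym, alt = split_tensor(T)

# sym is symmetric, alt is alternating, and they sum back to T.
assert all(sym[i][j] == sym[j][i] for i in range(2) for j in range(2))
assert all(alt[i][j] == -alt[j][i] for i in range(2) for j in range(2))
assert all(sym[i][j] + alt[i][j] == T[i][j] for i in range(2) for j in range(2))
print(sym[0][1], alt[0][1])  # 3 -1
```

Uniqueness is visible here too: any symmetric S and alternating A with S + A = T must satisfy S = (T + Tᵗ)/2 and A = (T − Tᵗ)/2.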