MATRIX THEORY (WEEK 2)


JAMES FULLWOOD

Example 0.1. Let L : R^3 → R^4 be the linear map given by L(a, b, c) = (a, b, 0, 0). Then

ker(L) = {(x, y, z) ∈ R^3 : x = y = 0},

which is the z-axis in R^3, and

im(L) = {(x, y, z, w) ∈ R^4 : z = w = 0}.

The nullity of L is then 1, and the rank of L is 2.

Whenever you define a class of maps, the composition of two maps in the class should be of the same class. The case of linear maps follows from the following

Proposition 0.2. Let L : V → W and M : W → Z be linear maps. Then the composition M ∘ L : V → Z is linear as well.

Proof. Let v, w ∈ V and c ∈ K. Then

(M ∘ L)(cv + w) = M(L(cv + w)) = M(cL(v) + L(w)) = cM(L(v)) + M(L(w)) = c(M ∘ L)(v) + (M ∘ L)(w).

0.1. Dual spaces. Let V and W be vector spaces over a field K, and let

Hom(V, W) = {linear maps L : V → W}.

Then Hom(V, W) is a vector space over K. Indeed, given L_1, L_2 ∈ Hom(V, W) and c ∈ K, cL_1 + L_2 is the linear map given by

(cL_1 + L_2)(v) = cL_1(v) + L_2(v).

If V and W are finite dimensional then it is straightforward to show Hom(V, W) is finite dimensional as well, and moreover,

dim(Hom(V, W)) = dim(V) dim(W).    (0.1)

For any vector space V over K, the vector space Hom(V, K) is referred to as the dual space of V, and is denoted V*. From formula (0.1) we see

dim(V*) = dim(V) dim(K) = dim(V) · 1 = dim(V),

so that V* is isomorphic to V. Given a basis e = {v_1, ..., v_n} of V, the associated dual basis of V*, given by e* = {v_1*, ..., v_n*}, is such that for every v ∈ e,

v_i*(v) = 1 if v = v_i, and 0 otherwise.

An explicit isomorphism between V and V* is then induced by mapping v_i to v_i*.
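As a concrete sketch (an illustration added here, not from the notes): for V = R^3 with functionals written as row vectors, storing a basis as the columns of an invertible matrix B makes the rows of B⁻¹ the coordinate representation of the dual basis, since B⁻¹B = I is exactly the condition v_i*(v_j) = 1 when i = j and 0 otherwise.

```python
import numpy as np

# Dual-basis sketch for V = R^3 (a hypothetical basis chosen for illustration):
# the basis vectors are the columns of B, and the rows of inv(B) represent the
# dual basis functionals, because inv(B) @ B = I encodes v_i*(v_j) = delta_ij.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # basis vectors as columns
dual = np.linalg.inv(B)           # dual basis functionals as rows

# Check the defining property v_i*(v_j) = 1 if i == j, else 0.
for i in range(3):
    for j in range(3):
        val = dual[i] @ B[:, j]   # evaluate v_i* on v_j
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-12
```

The same matrix picture also makes formula (0.1) plausible: linear maps R^n → R^m correspond to m × n matrices, a space of dimension mn.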

Example 0.3 (On the meaning of differentials). Let p = (x_0, y_0) ∈ R^2, and consider the vector space T_pR^2 introduced in Example 1.4. Now given an arrow v_p ∈ T_pR^2 whose tip has coordinates (x_1, y_1) ∈ R^2, we will often write

v_p = ⟨x_1 − x_0, y_1 − y_0⟩_p,

which we refer to as the component notation of v_p. Now let (a, b) = (x_1 − x_0, y_1 − y_0), so that v_p = ⟨a, b⟩_p, and let F(R^2) be the set of smooth real-valued functions on R^2 (i.e., infinitely differentiable functions with continuous partial derivatives of all orders). Then with v_p ∈ T_pR^2 we can associate a map v_p : F(R^2) → R given by

v_p(f) = a ∂f/∂x(p) + b ∂f/∂y(p),

which we refer to as the vectorial derivative of f with respect to the vector v_p. Dynamically, we can think of v_p(f) as the rate of change of f along a curve passing through p with velocity v_p. By thinking of v_p as a real-valued map on differentiable functions, we can then associate with every f ∈ F(R^2) an element df in the dual space (T_pR^2)*, which we refer to as the differential of f. More precisely, df acts on a vector v_p ∈ T_pR^2 via the formula

df(v_p) = v_p(f).    (0.2)

In particular, from equation (0.2) it immediately follows that {dx, dy} is the dual basis of {∂/∂x, ∂/∂y}, where x denotes the function f(x, y) = x and y denotes the function g(x, y) = y. We then have

df(v_p) = a ∂f/∂x(p) + b ∂f/∂y(p)
        = v_p(x) ∂f/∂x(p) + v_p(y) ∂f/∂y(p)
        = ∂f/∂x(p) dx(v_p) + ∂f/∂y(p) dy(v_p)
        = (∂f/∂x(p) dx + ∂f/∂y(p) dy)(v_p),

so that

df = ∂f/∂x(p) dx + ∂f/∂y(p) dy.    (0.3)

Formula (0.3) should be familiar from calculus texts, and now it should have a clear meaning.

0.2. The concept of a canonical isomorphism. The isomorphism between a vector space V and its dual induced by sending a basis e = {v_1, ..., v_n} to its associated dual basis e* = {v_1*, ..., v_n*} depends manifestly on e. If we consider a basis e′ = {v′_1, ..., v′_n} different from e, then the isomorphism induced by mapping v′_i to v′_i* will be a different linear map than the one induced by mapping v_i to v_i*.
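This basis-dependence can be seen numerically. In the sketch below (my own illustration, not part of the notes), a basis of R^2 is stored as the columns of a matrix B; the dual basis is then represented by the rows of inv(B), and in standard coordinates the isomorphism v_i ↦ v_i* is represented by the matrix G = inv(B).T @ inv(B). Two different bases produce two different matrices G, i.e., two different isomorphisms.

```python
import numpy as np

# Assumed coordinate setup (illustration only): basis vectors are the columns
# of B, dual basis functionals are the rows of inv(B), and the induced map
# V -> V* sending v_i to v_i* is represented by G = inv(B).T @ inv(B).
def induced_iso(B):
    Binv = np.linalg.inv(B)
    return Binv.T @ Binv

B1 = np.eye(2)                      # the standard basis of R^2
B2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])         # a different basis of R^2

G1, G2 = induced_iso(B1), induced_iso(B2)
assert not np.allclose(G1, G2)      # different bases induce different isomorphisms
```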
Now let L : V → W be a general isomorphism of vector spaces. The map L is said to be a canonical isomorphism if and only if it may be defined without any reference to a basis of V, and in such a case, V and W are said to be canonically isomorphic. As the choice of a basis of a general vector space is arbitrary, a canonical isomorphism between vector spaces V and W is an isomorphism which doesn't depend on arbitrary choices, and thus is more natural

from an abstract point of view. In particular, the isomorphism between a vector space V and its dual V* obtained by sending a basis e of V to its dual basis e* is not a canonical isomorphism, and more generally, one can show that a canonical isomorphism between V and V* does not exist.

Now consider the double dual V** = (V*)* of a vector space V. An element λ of V** is then a linear map λ : V* → K. It turns out that V** is canonically isomorphic to V, which we now show by constructing a canonical isomorphism L : V → V**. For this, given an element v ∈ V we have to specify a linear map L(v) : V* → K, without making any reference to a basis of V. This turns out to be easy (because it's canonical). For f ∈ V*, simply let

L(v)(f) = f(v) ∈ K.    (0.4)

Since no reference to a basis appears in (0.4), and L is injective (hence an isomorphism, as dim(V) = dim(V**)), L is in fact a canonical isomorphism, so that a vector space V is always canonically isomorphic to its double dual V**.

Example 0.4. Let L ∈ Hom(V, W). Then L induces a map L* ∈ Hom(W*, V*), referred to as the dual of L. More precisely, L* takes an element f ∈ W*, which is a linear map of the form f : W → K, to the map L*(f) = f ∘ L : V → K, which is linear (since compositions of linear maps are linear). Moreover, the map L ↦ L* is in fact a canonical isomorphism between Hom(V, W) and Hom(W*, V*).

0.3. Direct sums. Given two vector spaces V and W over a field K, we can form another vector space V ⊕ W over K, referred to as the direct sum of V and W. As a set, the direct sum V ⊕ W is just the cartesian product V × W, and the vector space structure is given by

c(v_1, w_1) + (v_2, w_2) = (cv_1 + v_2, cw_1 + w_2),

for all v_1, v_2 ∈ V, w_1, w_2 ∈ W and c ∈ K. It then follows that 0_{V⊕W} = (0_V, 0_W). There are four linear maps canonically associated with the direct sum V ⊕ W, namely, the two injective maps ι_V : V → V ⊕ W and ι_W : W → V ⊕ W given by v ↦ (v, 0_W) and w ↦ (0_V, w) respectively, and the two surjective maps π_V : V ⊕ W → V and π_W : V ⊕ W → W given by (v, w) ↦ v and (v, w) ↦ w respectively.
The maps ι_V and ι_W are referred to as the inclusions of V and W into V ⊕ W, and the maps π_V and π_W are referred to as the projections from V ⊕ W onto V and W. The subspaces im(ι_V) and im(ι_W) of V ⊕ W are then canonically isomorphic to V and W respectively, so that we may naturally identify im(ι_V) with V and im(ι_W) with W. Moreover, im(ι_V) ⊕ im(ι_W) is canonically isomorphic to V ⊕ W. As such, when working with a direct sum V ⊕ W we will often denote im(ι_V) by V and im(ι_W) by W, and in such a case, an element (v, 0_W) ∈ im(ι_V) will then be denoted simply by v and an element (0_V, w) ∈ im(ι_W) will be denoted simply by w. Note that after identifying V with ι_V(V) and W with ι_W(W), we have 0_V = 0_W = (0_V, 0_W) = 0_{V⊕W}.

Proposition 0.5. Let e_V and e_W be bases of vector spaces V and W over a field K. Then e_V ∪ e_W is a basis of V ⊕ W.

Proof. Suppose we have a linear combination of elements of e_V and e_W which gives 0_{V⊕W}, namely,

(a_1 v_1 + ⋯ + a_n v_n) + (b_1 w_1 + ⋯ + b_m w_m) = 0_{V⊕W},

with v_i ∈ e_V, w_j ∈ e_W, and a_i, b_j ∈ K. Now let v = a_1 v_1 + ⋯ + a_n v_n and w = b_1 w_1 + ⋯ + b_m w_m. Then

0_{V⊕W} = v + w = (v, w).

But this implies v = 0_V and w = 0_W, so that e_V ∪ e_W is a linearly independent set. Moreover, any element (v, w) ∈ V ⊕ W may be expressed as the sum (v, w) = (v, 0_W) + (0_V, w), and since (v, 0_W) ∈ Span(e_V) and (0_V, w) ∈ Span(e_W), it follows that Span(e_V ∪ e_W) = V ⊕ W.

Corollary 0.6. Let V and W be vector spaces over a field K. Then every element ϑ of V ⊕ W may be expressed uniquely as a sum ϑ = v + w, with v ∈ V and w ∈ W. Moreover, if V and W are finite dimensional, then

dim(V ⊕ W) = dim(V) + dim(W).

Proof. Both claims are immediate consequences of Proposition 0.5.

In the definition of direct sum, we started with two vector spaces V and W, and cooked up a new vector space V ⊕ W in such a way that V and W may be canonically identified with subspaces of V ⊕ W. Moreover, the subspaces V and W of V ⊕ W are such that V ∩ W = {0_{V⊕W}} and dim(V) + dim(W) = dim(V ⊕ W). From a dual perspective, given a vector space V, the next proposition says precisely when V may be given the structure of a direct sum V = V_1 ⊕ V_2, where V_1 and V_2 are subspaces of V.

Proposition 0.7. Let V_1 and V_2 be subspaces of a finite dimensional vector space V over a field K. Then V is canonically isomorphic to V_1 ⊕ V_2 if and only if V_1 ∩ V_2 = {0_V} and dim(V_1) + dim(V_2) = dim(V). In such a case, we write V = V_1 ⊕ V_2, and refer to V = V_1 ⊕ V_2 as a direct sum decomposition of V.

Proof. Suppose V is canonically isomorphic to V_1 ⊕ V_2. Then

dim(V) = dim(V_1 ⊕ V_2) = dim(V_1) + dim(V_2),

where the second equality follows from Corollary 0.6. Moreover, since V_1 consists of pairs (v_1, 0_{V_2}) with v_1 ∈ V_1 and V_2 consists of pairs (0_{V_1}, v_2) with v_2 ∈ V_2, it follows that V_1 ∩ V_2 = {(0_{V_1}, 0_{V_2})} = {0_V}. For the reverse implication, suppose V_1 and V_2 are subspaces of V such that V_1 ∩ V_2 = {0_V} and dim(V) = dim(V_1) + dim(V_2).
We then have to show that V is canonically isomorphic to V_1 ⊕ V_2. Now since any element of V_1 ⊕ V_2 is of the form (v_1, v_2) with v_1 ∈ V_1 and v_2 ∈ V_2, we let L : V_1 ⊕ V_2 → V be the map given by

L((v_1, v_2)) = v_1 + v_2 ∈ V.

Then it is clear that the conditions V_1 ∩ V_2 = {0_V} and dim(V) = dim(V_1) + dim(V_2) imply L is a linear isomorphism, and since L doesn't depend on the choice of a basis, it is a canonical isomorphism.

Given vector spaces V_1, V_2, ..., V_n over K, one can iterate the process of taking direct sums to form the vector space

V = V_1 ⊕ V_2 ⊕ ⋯ ⊕ V_n.

In such a case, each V_i is canonically isomorphic to a subspace of V via an associated inclusion map ι_{V_i} : V_i → V, and we say that V_1 ⊕ V_2 ⊕ ⋯ ⊕ V_n is a decomposition of V into a direct sum of the subspaces im(ι_{V_i}) = V_i ⊆ V,

or rather, that V = V_1 ⊕ V_2 ⊕ ⋯ ⊕ V_n is a direct sum decomposition of V. In such a case, it follows from Proposition 0.7 and induction that V_i ∩ V_j = {0_V} for i ≠ j and that

dim(V) = dim(V_1) + ⋯ + dim(V_n).

Moreover, it follows that every v ∈ V may be expressed uniquely as a sum v = v_1 + ⋯ + v_n, with v_i ∈ V_i.

0.4. Eigenspaces. Let V be a finite dimensional vector space over a field K, and let L : V → V be a linear operator (which we recall is just a linear map from a vector space to itself). An element λ ∈ K is said to be an eigenvalue of L if and only if there exists a non-zero vector v ∈ V such that

L(v) = λv.

In such a case, v is said to be an eigenvector associated with the eigenvalue λ of L. If λ ∈ K is an eigenvalue of a linear operator L : V → V, let E_λ ⊆ V be the set given by

E_λ = {v ∈ V : L(v) = λv}.

For any c ∈ K and v, w ∈ E_λ we have

L(cv + w) = cL(v) + L(w) = c(λv) + (λw) = λ(cv + w),

so that E_λ is in fact a subspace of V, which we refer to as the eigenspace of V associated with the eigenvalue λ of L. Note that since an eigenvector is required to be non-zero, it follows that dim(E_λ) ≥ 1 for any eigenvalue λ of a linear operator L : V → V. Moreover, it follows from the definition of eigenspace that if λ_1 and λ_2 are distinct eigenvalues of a linear operator L : V → V, then E_{λ_1} ∩ E_{λ_2} = {0_V}, so that L can have no more than dim(V) distinct eigenvalues in the case that V is finite dimensional.

Example 0.8. Let L : K^2 → K^2 be the linear operator given by L(x, y) = (−y, x). As such, if λ is an eigenvalue of L then there exists a non-zero (x_0, y_0) ∈ K^2 such that L(x_0, y_0) = λ(x_0, y_0), i.e.,

(−y_0, x_0) = (λx_0, λy_0),

so that −y_0 = λx_0 and x_0 = λy_0. Substituting the first equation into the second yields x_0 = −λ^2 x_0, and since x_0 = 0 would force y_0 = 0 as well, we must have x_0 ≠ 0, so that

λ^2 + 1 = 0.

We then see L admits an eigenvalue if and only if the polynomial z^2 + 1 admits zeros in K. In particular, for K = R, L admits no eigenvalues, and for K = C, L admits the eigenvalues ±i, where i is the imaginary unit in C.
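This computation is easy to cross-check numerically (a sketch using numpy, added for illustration): in the standard basis, L is represented by the matrix A below, and its eigenvalues come out as ±i.

```python
import numpy as np

# L(x, y) = (-y, x) in the standard basis of C^2.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals = np.linalg.eigvals(A)
assert np.allclose(np.sort_complex(eigvals), [-1j, 1j])   # eigenvalues are ±i

# (1, -i) is an eigenvector for the eigenvalue i: A @ v = i * v.
v = np.array([1.0, -1.0j])
assert np.allclose(A @ v, 1j * v)
```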
For K = C, we have

E_i = {(x_0, −ix_0) : x_0 ∈ C}  and  E_{−i} = {(x_0, ix_0) : x_0 ∈ C}.

Moreover, {(1, −i)} is a basis of E_i, {(1, i)} is a basis of E_{−i}, and by Proposition 0.7 we have

C^2 = E_i ⊕ E_{−i}.

School of Mathematical Sciences, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, China
E-mail address: fullwood@sjtu.edu.cn