Mathematics Department Stanford University Math 61CM/DM Vector spaces and linear maps


We start with the definition of a vector space; you can find this in Section A.8 of the text (over R, but it works over any field).

Definition 1 A vector space (V, +, ·) over a field F is a set V with two maps + : V × V → V and · : F × V → V such that

1. (V, +) is a commutative group, with unit 0 and with the inverse of x denoted by −x,
2. for all x ∈ V, λ, µ ∈ F, (λµ) · x = λ · (µ · x) and 1 · x = x (here 1 is the multiplicative unit of the field; also note that in the expression λµ on the left hand side we are using the field multiplication; everywhere else the operation is · : F × V → V),
3. for all x, y ∈ V, λ, µ ∈ F,
(λ + µ) · x = λ · x + µ · x and λ · (x + y) = λ · x + λ · y,
i.e. two distributive laws hold.

Notice that the two distributive laws are different: in the first, on the left hand side, + is in F; in the second it is in V. The same argument as for fields shows that 0 · x = 0 and λ · 0 = 0, where in the first case the 0 on the left is the additive unit of the field, and all other 0's are the zero vector, i.e. the additive unit of the vector space. Another example of what one can prove in a vector space:

Lemma 1 Suppose (V, +, ·) is a vector space over F. Then (−1) · x = −x.

Proof: We have, using the distributive law and the fact 0 · x = 0 observed above,
0 = 0 · x = (1 + (−1)) · x = 1 · x + (−1) · x = x + (−1) · x,
which says exactly that (−1) · x is the additive inverse of x, which we denote by −x.

Now, if F is a field, F^n = F × ... × F, with n factors, can be thought of as the set of maps x : {1, ..., n} → F, where one writes x_j = x(j) ∈ F; of course one may just write x = (x_1, x_2, ..., x_n) as usual, i.e. think of elements of F^n as ordered n-tuples of elements of F. Then F^n is a vector space with componentwise addition and componentwise multiplication by scalars λ ∈ F, as is easy to check. A more complicated (but similar, in that it is also a function space!)
vector space, over R, is the set C([0, 1]) of continuous real-valued functions on the interval [0, 1]. Here addition and multiplication by elements of R are defined by
(f + g)(x) = f(x) + g(x), (λ · f)(x) = λf(x), λ ∈ R, f, g ∈ C([0, 1]), x ∈ [0, 1],
i.e. f + g is the continuous function on [0, 1] whose value at any x ∈ [0, 1] is the sum of the values of f and g at that point (this is a sum in R!), and λf is the continuous function on [0, 1] whose value at any x ∈ [0, 1] is the product of λ and the value of f at x (this is a product in R). (Notice that this is the same definition as +, · in F^n, if one writes elements of the latter as maps from {1, ..., n}: one adds components, i.e. values at j ∈ {1, ..., n}, so e.g. (x + y)(j) = x(j) + y(j) in the function notation corresponds to (x + y)_j = x_j + y_j in the component notation.)

For a vector space V (one often skips the operations and the field when understood), the notions of subspace, linear (in)dependence, spanning, etc., are just as for subspaces of R^n. In particular, as pointed out in lecture, the linear dependence lemma holds. However, not all vector spaces have bases in the sense we discussed.
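The parallel between F^n as maps from {1, ..., n} and C([0, 1]) as maps from [0, 1] can be made concrete. The following is a minimal illustrative sketch (the function names are my own, not from the notes), with F = R and n = 3:

```python
# Componentwise operations on R^3, written in the "maps from {1,...,n}" style:
# a vector is just a function of its index, stored as a tuple.

def vec_add(x, y):
    # (x + y)_j = x_j + y_j, the sum taken in the field R
    return tuple(xj + yj for xj, yj in zip(x, y))

def scal_mul(lam, x):
    # (lam . x)_j = lam * x_j
    return tuple(lam * xj for xj in x)

# The same definitions work verbatim for functions on [0, 1]:
def fun_add(f, g):
    return lambda t: f(t) + g(t)   # (f + g)(t) = f(t) + g(t), a sum in R

def fun_scal(lam, f):
    return lambda t: lam * f(t)    # (lam . f)(t) = lam * f(t), a product in R

x, y = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vec_add(x, y))     # (5.0, 7.0, 9.0)
print(scal_mul(2.0, x))  # (2.0, 4.0, 6.0)
h = fun_add(lambda t: t, lambda t: t**2)
print(h(0.5))            # 0.75
```

The point of the sketch is that `vec_add` and `fun_add` are the same definition; only the index set differs.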

Definition 2 A vector space V over F is called finite dimensional if there is a finite subset {v_1, ..., v_n} of elements of V such that Span{v_1, ..., v_n} = V.

Notice that F^n is finite dimensional (the standard basis spans), while C([0, 1]) is not. Just as for subspaces of R^n:

Definition 3 A basis of a general vector space V is a finite subset of V which is linearly independent and which spans V.

As pointed out in lecture, the proof of the basis theorem goes through without change in arbitrary finite dimensional vector spaces. This includes the statement that all bases of a finite dimensional vector space have the same number of elements, so the notion of dimension is well-defined. There is a more general notion of basis in which the requirement of being finite is dropped. (Linear independence and spanning still only involve finite linear combinations: in algebra one cannot take infinite sums; one needs a notion of convergence for the latter!) Using sophisticated tools (such as the axiom of choice), one can prove that every vector space has such a basis. However, this is not a very useful notion in practice, and we will not consider this possible definition any longer.

We now turn to linear maps, which are again defined as for subspaces of R^n.

Definition 4 Suppose V, W are vector spaces over a field F. A linear map T : V → W is a map V → W such that
T(λx + µy) = λTx + µTy, λ, µ ∈ F, x, y ∈ V.

It is easy to check that T0 = 0 and T(−x) = −Tx for any linear map, using 0 = 0 · 0 (with the first zero being the zero in the field) and −x = (−1) · x, pulling the 0, resp. −1, through T using linearity. Thus, a linear map is exactly one which preserves all aspects of the vector space structure (addition, multiplication, additive units and inverses). Two basic notions associated to a linear map T are:

Definition 5 The nullspace, or kernel, N(T) = Ker T of a linear map T : V → W is the set {x ∈ V : Tx = 0}.
The range, or image, Ran T = Im T of a linear map T : V → W is the set {Tx : x ∈ V}.

Both of these sets are subspaces, N(T) ⊆ V, Ran T ⊆ W, as is easily checked. Recall that any map f : X → Y between sets X, Y has a set-theoretic inverse if and only if it is one-to-one, i.e. injective, and onto, i.e. surjective (such a one-to-one and onto map is called bijective); one defines f^{-1}(y) to be the unique element x of X such that f(x) = y. (Existence of x follows from being onto, uniqueness from being one-to-one.)

Lemma 2 A linear map T : V → W is one-to-one if and only if N(T) = {0}.

Proof: Since T0 = 0 for any linear map, if T is one-to-one, the only element of V it may map to 0 is 0, so N(T) = {0}. Conversely, suppose that N(T) = {0}. If Tx = Tx′ for some x, x′ ∈ V, then Tx − Tx′ = 0, i.e. T(x − x′) = 0, so x − x′ = 0 since N(T) = {0}, so x = x′, showing injectivity.

The following is used in the proof of the rank-nullity theorem in the book in the special setting considered there:

Lemma 3 If T : V → W is a linear map (F the field), and v_1, ..., v_n span V, then Ran T = Span{Tv_1, ..., Tv_n}.

Notice that this lemma shows that if V is finite dimensional, then Ran T is finite dimensional, even if W is not.

Proof of lemma: We have
Ran T = {Tx : x ∈ V} = {T(∑ c_j v_j) : c_1, ..., c_n ∈ F} = {∑ c_j Tv_j : c_1, ..., c_n ∈ F} = Span{Tv_1, ..., Tv_n}.
Here the second equality uses that v_1, ..., v_n span V, and the third that T is linear.

Recall that if T : R^n → R^m is written as a matrix, then the jth column of T is Te_j, with e_j the jth standard basis vector. Thus, in this case, the column space C(T) of T (which is by definition the span of the Te_j, j = 1, ..., n) is Ran T by the lemma. The rank-nullity theorem then is valid in general, provided one defines the rank as the dimension of the range and the nullity as the dimension of the nullspace, with the same proof. (The proof in the textbook, on the third line of the long chain of equalities starting with C(A), shows C(A) = Ran A; just start here with the argument.) Thus:

Theorem 1 (Rank-nullity) Suppose T : V → W is linear with V finite dimensional. Then dim Ran T + dim N(T) = dim V.

This can be re-written in a slightly different way. Recall that N(T) = {0} is exactly the statement that T is injective. Thus, dim N(T) measures the failure of injectivity. On the other hand, if W is finite dimensional, dim W = dim Ran T if and only if W = Ran T, i.e. if and only if T is surjective. Thus, dim W − dim Ran T measures the failure of surjectivity of T. By rank-nullity, (dim Ran T − dim W) + dim N(T) = dim V − dim W, i.e. the difference

dim N(T) − (dim W − dim Ran T) = dim V − dim W    (1)

of the measures of the failure of injectivity and of surjectivity equals the difference of the dimensions of the domain and the target space. The expression on the left hand side is often called the index of T; note that bijections have index 0, but of course not every map of index 0 is bijective.
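Lemma 2 and the rank-nullity theorem can both be checked by brute force over the two-element field F_2, where every subspace is a finite set and its dimension is the base-2 logarithm of its number of elements. The following sketch is illustrative only (the matrix A and all names are my own choices, not from the notes):

```python
from itertools import product
from math import log2

# A 2x3 matrix over F_2 defining T : F_2^3 -> F_2^2; arithmetic is mod 2.
A = [[1, 0, 1],
     [0, 1, 1]]

def T(x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) % 2 for row in A)

domain = list(product([0, 1], repeat=3))
kernel = [x for x in domain if T(x) == (0, 0)]
image = {T(x) for x in domain}

# Lemma 2: T is injective iff N(T) = {0}.
injective = len(image) == len(domain)
print(kernel)     # [(0, 0, 0), (1, 1, 1)]: N(T) != {0}, so T is not injective
print(injective)  # False

# Rank-nullity: a subspace of F_2^k with 2^d elements has dimension d.
rank = int(log2(len(image)))      # dim Ran T
nullity = int(log2(len(kernel)))  # dim N(T)
print(rank, nullity)              # 2 1, and 2 + 1 = 3 = dim V
```

Note that the rank and nullity here are counted directly from the sizes of the image and kernel, not computed from each other, so the final check genuinely exercises the theorem.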
However, (1) shows that a map T between two vector spaces can only be a bijection if dim V = dim W, i.e. bijective (hence invertible) linear maps can only exist between two vector spaces of equal dimensions. Notice that if T : V → W is linear and bijective, so it has a set-theoretic inverse, this inverse is necessarily linear.

Lemma 4 If T : V → W is linear and bijective, then the inverse map T^{-1} is linear.

Proof: Indeed, let T^{-1} be this map. Then T(T^{-1}(λx + µy)) = λx + µy by the definition of T^{-1}, and
T(λT^{-1}x + µT^{-1}y) = λTT^{-1}x + µTT^{-1}y = λx + µy,

where the first equality is the linearity of T and the second the definition of T^{-1}. Thus, T(T^{-1}(λx + µy)) = T(λT^{-1}x + µT^{-1}y), so by injectivity of T, T^{-1}(λx + µy) = λT^{-1}x + µT^{-1}y, proving linearity.

As discussed above for C([0, 1]) (and this works equally well for any other set of maps with values in a vector space), if one has two linear maps S, T : V → W, one can add them pointwise,
(S + T)(x) = Sx + Tx, x ∈ V,
and multiply them by elements of the field pointwise,
(cT)(x) = c · Tx, c ∈ F, x ∈ V.
Then it is straightforward to check the following:

Lemma 5 For V, W vector spaces over a field F, S, T : V → W linear, and c ∈ F, the maps S + T and cT are linear.

Sketch of proof: For instance, we have
(S + T)(λx + µy) = S(λx + µy) + T(λx + µy) = λSx + µSy + λTx + µTy = λ(Sx + Tx) + µ(Sy + Ty) = λ(S + T)x + µ(S + T)y,
where the first and last equalities are the definition of S + T, the second one is the linearity of S and that of T, and the third one uses that W is a vector space.

The next lemma, which again is easy to check, says that the set of linear maps, with the addition and multiplication by scalars we just defined, is itself a vector space, i.e. the operations +, · satisfy all properties in the definition of a vector space.

Lemma 6 If V, W are vector spaces over a field F, then the set of linear maps L(V, W) is a vector space itself.

An important property of maps (in general) is that one can compose them if the domains/targets match: if f : X → Y, g : Y → Z, then (g ∘ f)(x) = g(f(x)), x ∈ X. For linear maps:

Lemma 7 Suppose T : V → W, S : W → Z are linear; then S ∘ T is linear as well.

Proof: (S ∘ T)(λx + µy) = S(T(λx + µy)) = S(λTx + µTy) = λS(Tx) + µS(Ty) = λ(S ∘ T)(x) + µ(S ∘ T)(y).

In particular, linear maps V → V carry both a vector space structure and a multiplication.
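The pointwise operations of Lemmas 5-7 can be written down directly when linear maps are represented as functions. A small sketch (the maps S, T and the helper names are my own illustrative choices), with S, T : R^2 → R^2:

```python
# Two concrete linear maps on R^2, represented as plain Python functions.
def S(x):
    return (x[0] + x[1], x[1])

def T(x):
    return (2 * x[0], -x[1])

def add_maps(S, T):
    return lambda x: tuple(s + t for s, t in zip(S(x), T(x)))  # (S + T)x = Sx + Tx

def scale_map(c, T):
    return lambda x: tuple(c * t for t in T(x))                # (cT)x = c . Tx

def compose(S, T):
    return lambda x: S(T(x))                                   # (S o T)x = S(Tx)

x = (1.0, 3.0)
print(add_maps(S, T)(x))     # (6.0, 0.0)
print(scale_map(3.0, T)(x))  # (6.0, -9.0)
print(compose(S, T)(x))      # S((2.0, -3.0)) = (-1.0, -3.0)
```

Since S and T here map R^2 to itself, `add_maps`, `scale_map`, and `compose` together realize exactly the vector space structure plus multiplication on maps V → V mentioned above.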
Recall from the Week 1 section that a ring is a set R with two operations +, · with the same properties as those of a field, except that · is not required to be commutative and non-zero elements are not required to have multiplicative inverses. With this, the following is easy to check:

Lemma 8 If V is a vector space, L(V, V) is a ring.

Finally, recall that in lecture we discussed the matrix of a linear map between finite dimensional vector spaces, and used this to motivate matrix multiplication: we want the product of two matrices corresponding to linear maps to be the matrix of the composite linear map. The matrix of a linear map is discussed in Section 3.6 of the textbook. Recall that if e_1, ..., e_n form a basis of V, f_1, ..., f_m a basis of W, and T : V → W is linear, we write

Te_j = ∑_{i=1}^m a_{ij} f_i,

using that the f_i form a basis of W, and then {a_{ij}}_{i=1,...,m, j=1,...,n} is the matrix of T. The computation in the lecture shows that the matrix of S ∘ T has (l, j) entry ∑_{i=1}^m b_{li} a_{ij} if {b_{lk}}_{l=1,...,p, k=1,...,m} is the matrix of S.

In general, two vector spaces V, W are considered the same as vector spaces, i.e. isomorphic, if there is an invertible linear map T : V → W (so T^{-1} is also linear, as discussed above). If W is an n-dimensional vector space, V = F^n, e_1, ..., e_n the standard basis of V, and f_1, ..., f_n a basis of W (which exists, but one needs to pick one), then there is such a map T, namely T(x_1, ..., x_n) = ∑_{j=1}^n x_j f_j. This is linear (it is the unique linear map with Te_j = f_j), injective and surjective (the f_j are linearly independent and span). This is the sense in which any n-dimensional vector space is the same as F^n, i.e. they are isomorphic; notice however that this isomorphism requires a choice, that of a basis of W. Notice now that if W, Z are n-dimensional vector spaces, they are also isomorphic: if T : F^n → W, S : F^n → Z are isomorphisms (such as those above, depending on choices of bases), then S ∘ T^{-1} : W → Z is the desired isomorphism.
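The recipe "column j of the matrix is the coefficients of Te_j" can be checked on a standard example: differentiation D on polynomials of degree ≤ 2 with basis (1, x, x^2). This is an illustrative sketch of my own (not the notes' example); a polynomial c_0 + c_1 x + c_2 x^2 is stored as the coefficient tuple (c_0, c_1, c_2):

```python
# D(c0 + c1 x + c2 x^2) = c1 + 2 c2 x, in coefficient-tuple form.
def D(p):
    c0, c1, c2 = p
    return (c1, 2 * c2, 0)

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # e_j <-> x^(j-1)

# a_ij = i-th coefficient of D(e_j); with coefficient tuples the basis is
# the standard one, so the coefficients can be read off directly.
A = [[D(e)[i] for e in basis] for i in range(3)]
print(A)  # [[0, 1, 0], [0, 0, 2], [0, 0, 0]]

# Matrix of D o D via the composition formula: (l, j) entry sum_i b_li a_ij.
def matmul(B, A):
    return [[sum(B[l][i] * A[i][j] for i in range(3)) for j in range(3)]
            for l in range(3)]

print(matmul(A, A))     # [[0, 0, 2], [0, 0, 0], [0, 0, 0]]
print(D(D((1, 1, 1))))  # second derivative of 1 + x + x^2 is 2, i.e. (2, 0, 0)
```

The last two lines confirm the motivation for matrix multiplication: applying `matmul(A, A)` to a coefficient tuple agrees with composing D with itself.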