Spanning, linear dependence, dimension

In the crudest possible measure of these things, the real line $\mathbb{R}$ and the plane $\mathbb{R}^2$ have the same size (and so does 3-space, $\mathbb{R}^3$). That is, there is a function between $\mathbb{R}$ and $\mathbb{R}^2$ which is one-to-one and onto. But everybody knows that there is a definite sense in which the plane is bigger than the line, and Euclidean 3-space is bigger still: the line is 1-dimensional, the plane 2-dimensional, and 3-space 3-dimensional. One of the purposes of these notes is to make this idea precise.

In what follows, we assume we have fixed a field $F$ and a vector space $V$ over $F$. We start, of course, with a

DEFINITION (LINEAR COMBINATION). Suppose that $S \subseteq V$ and $v \in V$. (So $S$ is a set of vectors from $V$.) We say that $v$ is a linear combination of the vectors in $S$ if there are $v_1, \dots, v_k$ in $S$ and scalars $\alpha_1, \dots, \alpha_k$ such that $v = \alpha_1 v_1 + \cdots + \alpha_k v_k$.

For a simple example, suppose that $F = \mathbb{R}$, $V = \mathbb{R}^3$ and $S = \{u_1, u_2\}$ consists of two specific vectors. A third vector $u_3$ is a linear combination of the vectors in $S$ if we can exhibit scalars $\alpha_1, \alpha_2$ with $\alpha_1 u_1 + \alpha_2 u_2 = u_3$; a fourth vector $u_4$ is not a linear combination of the vectors in $S$ if the system $\alpha_1 u_1 + \alpha_2 u_2 = u_4$ cannot be solved. (Try it with specific vectors; we will keep $u_1, u_2, u_3, u_4$, with these properties, around as a running example.)

(This example brings up a small matter of language. In case $S = \{v_1, \dots, v_k\}$ is a fairly small finite set, and it usually will be for us, and $v$ is a linear combo of the vectors in $S$, we will often just say that $v$ is a linear combination of $v_1, \dots, v_k$. So it is correct (and standard) to say that $u_3$ above is a linear combination of $u_1$ and $u_2$. However, we do want to leave open the possibility that $S$ is infinite. But even if $S$ is infinite, a linear combination involves only finitely many of the vectors from $S$.)

For a slightly more sophisticated example, again suppose that $F = \mathbb{R}$, but $V = C(\mathbb{R})$, the space of continuous functions on the real numbers. Then $\cos 2x$ is a linear combination of $1$ and $\sin^2 x$. This follows immediately from the well-known fact that $\cos 2x = 1 - 2\sin^2 x$ (for all $x$).
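
To make the membership test concrete, here is a small sketch using Python's sympy library (a tool not used in the notes); the vectors are made up for illustration and are not the ones from the example above. A vector is a linear combination of given vectors exactly when adjoining it as an extra column does not raise the rank.

    from sympy import Matrix

    # Illustrative vectors only (assumed for this sketch): S = {u1, u2} in R^3.
    u1 = Matrix([1, 2, 3])
    u2 = Matrix([0, 1, 7])
    u3 = Matrix([2, 5, 13])   # turns out to equal 2*u1 + 1*u2
    u4 = Matrix([2, 5, 14])   # not a linear combination of u1 and u2

    A = u1.row_join(u2)       # columns are the vectors of S

    # u is a linear combination of u1, u2 exactly when a1*u1 + a2*u2 = u is
    # solvable, i.e. when adjoining u does not raise the rank.
    print(A.rank() == A.row_join(u3).rank())   # True
    print(A.rank() == A.row_join(u4).rank())   # False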

[Please note that, as usual in this context, $1$ does not denote the number $1$ but the constant function $1$.]

Here's a closely related notion:

DEFINITION (DEPENDENCE RELATION). Suppose that $v_1, \dots, v_k$ are distinct vectors in $V$ and there are scalars $\alpha_1, \dots, \alpha_k$, not all zero, such that $\alpha_1 v_1 + \cdots + \alpha_k v_k = 0$. In such a case we call the expression $\alpha_1 v_1 + \cdots + \alpha_k v_k = 0$ a (nontrivial) dependence relation. We won't count the trivial case where all the $\alpha$'s are zero.

So, in our running example, rearranging $u_3 = \alpha_1 u_1 + \alpha_2 u_2$ gives the dependence relation $\alpha_1 u_1 + \alpha_2 u_2 + (-1)u_3 = 0$. On the other hand, there is no dependence relation involving $u_1$, $u_2$ and $u_4$. To see this, try to solve the system $\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_4 = 0$. (Do it; don't just take my word for it.)

As I hope you noticed, there's a very direct connection between linear combos and dependence relations. Let's spell it out. Another

DEFINITION (DEPENDENT SETS). Suppose that $S$ is a subset of the vector space $V$; $S$ is called (linearly) dependent if there is a nontrivial dependence relation among the elements of $S$, that is, if there are $v_1, \dots, v_k$ in $S$ and scalars $\alpha_1, \dots, \alpha_k$ such that $\alpha_1 v_1 + \cdots + \alpha_k v_k = 0$ and at least one $\alpha_j \neq 0$. In case this doesn't happen, $S$ is called (aw, you guessed it) independent.

So the set $\{u_1, u_2, u_3\}$ is dependent, and the set $\{u_1, u_2, u_4\}$ is independent.
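
In computational terms, a dependence relation among the columns of a matrix is exactly a nonzero vector of its null space. A hedged sympy sketch with made-up column vectors (not the vectors of the running example):

    from sympy import Matrix

    # Columns C1, C2, C3 are the vectors being tested (made-up numbers).
    A = Matrix([[1, 0, 2],
                [2, 1, 5],
                [3, 7, 13]])

    # A nonzero vector (a1, a2, a3) in null(A) encodes a dependence relation
    # a1*C1 + a2*C2 + a3*C3 = 0.
    print(A.nullspace())       # [Matrix([[-2], [-1], [1]])], i.e. -2*C1 - C2 + C3 = 0
    print(A.rank() < A.cols)   # True: the columns form a dependent set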

There are two points to be made here. First, why "linearly"? Well, there are other notions of dependence/independence in math, but we won't be dealing with them in this course. We will just say a set is either dependent or independent from now on.

The other point concerns order. A set doesn't come in any particular order; that is, $\{u_1, u_2, u_3\}$ is the same thing as $\{u_1, u_3, u_2\}$. In fact, it's also the same thing as $\{u_1, u_1, u_2, u_3\}$. A set is determined by what its elements are; how they are presented doesn't matter. But later in the course, we will be considering sets of vectors that come in a particular order, and that will be of some importance. The ordered sets $(u_1, u_2, u_3)$, $(u_1, u_3, u_2)$ and $(u_1, u_1, u_2, u_3)$ are all different things. [Which one of them is the same as the unordered set $S$? None of them.]

Note that any set $S$ which has the zero vector in it is automatically dependent. [Why?] Further, the only one-element set which is dependent is $\{0\}$. [Again, why?]

Here's the relationship between these two ideas.

PROPOSITION. Suppose that $S \subseteq V$. The following are equivalent:

(a) $S$ is dependent.

(b) There are $v \in S$ and $v_1, \dots, v_k \in S$ (different from $v$) such that $v$ is a linear combination of $v_1, \dots, v_k$.

PROOF: (a) $\Rightarrow$ (b): Suppose that $S$ is dependent; then there are distinct vectors $v_1, \dots, v_m \in S$ and scalars $\alpha_1, \dots, \alpha_m$ such that $\alpha_1 v_1 + \cdots + \alpha_m v_m = 0$. Also, not all of the $\alpha$'s are $0$. We may assume that $\alpha_m \neq 0$; then $\alpha_m v_m = -\alpha_1 v_1 - \cdots - \alpha_{m-1} v_{m-1}$ and hence $v_m = -\frac{\alpha_1}{\alpha_m} v_1 - \cdots - \frac{\alpha_{m-1}}{\alpha_m} v_{m-1}$. This is exactly what (b) says. [Get used to this kind of argument, using a dependence relation to solve for one vector in terms of the others. It comes up a lot.]

(b) $\Rightarrow$ (a): Suppose that $v$ is a linear combination of $v_1, \dots, v_k$ (all distinct vectors from $S$). Then $v = \beta_1 v_1 + \cdots + \beta_k v_k$, where $\beta_1, \dots, \beta_k$ are scalars. So $(-1)v + \beta_1 v_1 + \cdots + \beta_k v_k = 0$, and this is a (nontrivial) dependence relation. This completes the proof.

So, for instance, provided $\alpha_1 \neq 0$, the dependence relation $\alpha_1 u_1 + \alpha_2 u_2 + (-1)u_3 = 0$ yields $u_1 = -\frac{\alpha_2}{\alpha_1} u_2 + \frac{1}{\alpha_1} u_3$, expressing the first vector as a linear combo of the others.

I should mention that the notion of linear combination/dependence, um, depends on the field. For instance, $\mathbb{C}^3$ can be regarded as a vector space over $\mathbb{C}$, but also over $\mathbb{R}$. Three vectors with complex entries can be dependent over $\mathbb{C}$ (say, because one of them is $(1+i)$ times one of the others plus a multiple of the third) and yet independent over $\mathbb{R}$: there need be no nontrivial dependence relation among the three vectors involving only real scalars.
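
Here is a small sympy sketch of this phenomenon, with made-up vectors in $\mathbb{C}^2$ rather than $\mathbb{C}^3$: over $\mathbb{C}$ the two vectors below are dependent (one is $i$ times the other), but over $\mathbb{R}$, where only real scalars are allowed, they are independent. Testing independence over $\mathbb{R}$ amounts to splitting the equation into real and imaginary parts.

    from sympy import Matrix, I, re, im

    w1 = Matrix([1, I])        # made-up vectors in C^2
    w2 = Matrix([I, -1])       # note w2 = I*w1

    A = w1.row_join(w2)
    print(A.nullspace())       # [Matrix([[-I], [1]])]: -I*w1 + w2 = 0, dependent over C

    # Over R, a*w1 + b*w2 = 0 with a, b real means both the real and the imaginary
    # parts vanish; stack them into one real matrix and look at its null space.
    A_real = A.applyfunc(re).col_join(A.applyfunc(im))
    print(A_real.nullspace())  # []: no real dependence relation, independent over R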

DEFINITION (LINEAR SPAN). If $S \subseteq V$, where as usual $V$ is a vector space over the field $F$, then the (linear) span of $S$ is the set $W$ of all linear combinations of the vectors in $S$. We write $W = \mathrm{Span}(S)$ in this case, and also call $S$ a spanning set for $W$. [Some people write $\mathrm{span}(S)$ or $\langle S \rangle$ for the span of $S$; the notation in the definition will be used in this class. We will usually omit the word "linear" here, too. We will also make the following convention, which not everybody uses.]

CONVENTION. $\mathrm{Span}(\emptyset) = \{0\}$. ($\emptyset$ is the empty set; it has no elements.)

PROPOSITION. For any subset $S$ of the space $V$, $W = \mathrm{Span}(S)$ is a subspace of $V$. In fact, it is the smallest subspace of $V$ such that $S \subseteq W$. (That is, $S \subseteq W$, and for any subspace $U$ of $V$ with $S \subseteq U$, $W \subseteq U$.)

PROOF: The convention takes care of the trivial case where $S = \emptyset$, so we may assume that $S$ is not empty. For any $v \in S$, $v$ is a linear combination of vectors from $S$; thus $W \neq \emptyset$. If $w \in W$, then there are $v_1, \dots, v_k$ in $S$ and scalars $\alpha_1, \dots, \alpha_k$ such that $w = \alpha_1 v_1 + \cdots + \alpha_k v_k$. Now for any scalar $\beta$, $\beta w = (\beta\alpha_1)v_1 + \cdots + (\beta\alpha_k)v_k$ is a linear combination of vectors from $S$, so $\beta w \in W$, and hence $W$ is closed under scalar multiplication. Now suppose that $w_1$ and $w_2$ are in $W$. Then there are vectors $v_1, \dots, v_m$ in $S$ and scalars $\beta_1, \dots, \beta_m$ and $\gamma_1, \dots, \gamma_m$ such that $w_1 = \beta_1 v_1 + \cdots + \beta_m v_m$ and $w_2 = \gamma_1 v_1 + \cdots + \gamma_m v_m$. [We may assume that the vectors used to get $w_1$ as a linear combo of vectors in $S$ are the same ones used in getting $w_2$, as we can always add $0\,v$ to a linear combo without changing it.] So $w_1 + w_2 = (\beta_1 + \gamma_1)v_1 + \cdots + (\beta_m + \gamma_m)v_m$, which witnesses that $w_1 + w_2 \in W$ and thus that $W$ is closed under addition. So it is a subspace of $V$.

$S \subseteq W$, since for any $v \in S$, $v$ is a linear combo of vectors from $S$. But any subspace of $V$ which contains $S$ must also contain all linear combos of vectors from $S$, because each such combo is obtained by taking scalar multiples and sums of vectors, and any subspace is closed under these operations. That's it.

Note that this result tells us that $S \subseteq V$ is a subspace if and only if $\mathrm{Span}(S) = S$.

If $S = \{u_1, u_2\}$ (our running example) and $V = \mathbb{R}^3$ (as a vector space over $\mathbb{R}$), then it is not hard to see that $W = \mathrm{Span}(S)$ is a plane through the origin. There are many other spanning sets besides $S$ for this plane. For instance $\{u_1, u_2, u_3\}$ spans the same subspace $W$, and so does $\{u_1, u_3\}$. And so on. It is clear that for any $S_1 \subseteq S_2 \subseteq V$, $\mathrm{Span}(S_1) \subseteq \mathrm{Span}(S_2)$; when are they equal? Our convention says that $\mathrm{Span}(\emptyset) = \mathrm{Span}\{0\}$. Apart from that, to say that $\mathrm{Span}(S_1) = \mathrm{Span}(S_2)$ when $S_1 \subseteq S_2$ is just the same as saying that every vector in $S_2$ is a linear combination of the vectors in $S_1$. I hope this is clear.

As noted above, $\mathrm{Span}\{u_1, u_2\} = \mathrm{Span}\{u_1, u_2, u_3\}$. But $\mathrm{Span}\{u_1, u_2, u_4\}$ is all of $\mathbb{R}^3$. This can be seen directly, by showing that for any $(a, b, c) \in \mathbb{R}^3$ there is a (unique) solution to the system $x_1 u_1 + x_2 u_2 + x_3 u_4 = (a, b, c)$. (Try it.)

Another way to say what we just observed is that a dependent subset of a vector space is redundant in a certain way: you can remove at least one vector from it without changing the subspace it spans. But this is not true for independent sets. With this in mind, we make the following

DEFINITION (BASIS). If $W = \mathrm{Span}(S)$ and $S$ is independent, then $S$ is called a basis for $W$. An ordered basis for $W$ is simply an ordered set of vectors, with no repetition, where the underlying set is a basis.

(So an arbitrary independent set is a basis for something, namely its span. Here, of course, $S$ is a subset and $W$ a subspace of some given vector space $V$ over some given field $F$.)
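
For vectors in $F^n$ these spanning and independence questions reduce to rank computations. A hedged sympy sketch with made-up vectors (again, not the vectors of the running example):

    from sympy import Matrix

    # Three made-up vectors in R^3, assembled as the columns of a 3x3 matrix.
    A = Matrix([[1, 0, 1],
                [2, 1, 6],
                [3, 7, 9]])

    # They form a basis of R^3 exactly when the matrix has rank 3 (equivalently,
    # nonzero determinant): then they are independent, and they span all of R^3.
    print(A.rank() == 3)    # True
    print(A.det() != 0)     # True: the same test, via the determinant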

of course S is a subset and W a subspace of some given vector space V over some given field F ) In our example example above with the plane W = Span 3, 7 = Span 3, 6 9 = Span 3, 7, 6 9, the 3 two-element sets are bases for W, but the 3-element set is not, 7 is an ordered basis for W The only basis for the trivial subspace { } (of any V ) is the empty set by our convention This corresponds to the usual idea that a single point is -dimensional But most subspaces have many bases However, we will see shortly that the number of vectors in one basis for W is the same number as in any other basis (in case that number is finite) If the basis has a single element, necessarily nonzero, then the space it spans is -dimensional; in R or R 3 this just means it is a line through the origin (Note that any set consisting of a single nonzero vector by itself is independent) If a basis has elements, the space it spans is -dimensional, which makes it a plane in R or R 3 (Note that two vectors are dependent just if one is a scalar multiple of the other, and in R n this is usually easily determined by sight; for 3 or more vectors, it usually requires some work) In case the field is F and the vector space F n, we have the standard basis { e,, e n } where each e j is has all zero coordinates, except that the jth coordi- nate is For instance, the standard basis for F 3 is { e, e, e 3 } =,, It is readily seen that this set is independent and spans all of F n In case of R 3, the standard ordered basis is often denoted ( i, j, k) The independent set 3, 7, 6 9 is a nonstandard basis for R3 The vector space of polynomials R[x] also has a standard basis, which is {, x, x, x 3, } Note that this makes the space infinite-dimensional (The here is the constant polynomial, not the number) To show that the dimension of a space (possibly a subspace of a bigger space) is well-defined, we first prove a lemma, which contains most of the work LEMMA Suppose that V is a vector space over the field F and S, S are subsets of V such that S is finite, S is independent and Span(S ) = V Then S S ( S is the number of elements of the set S) Before starting the proof, we make a couple of comments First, the assump- 6

We will stick to the case when $S_1$ is finite. (We don't need to assume that $S_2$ is finite, but that will follow.) Also, of course we may assume that $S_2 \neq \emptyset$; otherwise there is nothing to do.

Here's the basic strategy of the proof. We start with a vector $v_1 \in S_2$ and show that there is a vector (which we will call $w_1$) in $S_1$ such that if we throw $w_1$ out of $S_1$ and replace it by $v_1$, we still have a spanning set for (all of) $V$. If $|S_2| = 1$, we would then be done. If not, we pick a second vector $v_2 \in S_2$ and we show that there is a vector $w_2 \in S_1$ different from $w_1$ such that if we throw $w_2$ out of $S_1$ and replace it by $v_2$, we still have a spanning set for $V$. This implies in particular that there is a second vector in $S_1$. Again, if $S_2$ has just two elements, we would at that stage be done. If not, we do it again, finding a third vector $v_3$ in $S_2$ and a third vector $w_3$ in $S_1$ such that $(S_1 \setminus \{w_1, w_2, w_3\}) \cup \{v_1, v_2, v_3\}$ still spans $V$. We continue like this as long as there are elements left in $S_2$, but we must run out before we run out of elements of $S_1$. Of course, we have to justify that all this can be done.

PROOF: As noted, we may assume that $S_2 \neq \emptyset$. Let $S_1 = \{w_1, \dots, w_k\}$ (with no repetitions). Let $v_1$ be any vector in $S_2$; it cannot be $0$. Since it is in $\mathrm{Span}(S_1)$, we must have $k \geq 1$ and we can write $v_1 = a_1 w_1 + \cdots + a_k w_k$ for some scalars $a_1, \dots, a_k$. These scalars cannot all be zero, and by relabelling the vectors in $S_1$, we may assume $a_1 \neq 0$. So $a_1 w_1 = v_1 - a_2 w_2 - \cdots - a_k w_k$, and $w_1 = \frac{1}{a_1} v_1 - \frac{a_2}{a_1} w_2 - \cdots - \frac{a_k}{a_1} w_k$. Thus $w_1$ is in the span of $\{v_1, w_2, \dots, w_k\}$, and so this span must be all of $V$. This completes the first step.

If $v_1$ is the only vector in $S_2$, we are finished, so assume there is another vector $v_2 \in S_2$. It is $b\,v_1 + c_2 w_2 + \cdots + c_k w_k$ for some scalars $b, c_2, \dots, c_k$. It cannot be the case that $c_2 = \cdots = c_k = 0$, because if that happened, $v_2$ would be equal to a scalar multiple of $v_1$, which is not possible by independence of $S_2$. Without loss of generality, $c_2 \neq 0$, and thus $c_2 w_2 = v_2 - b\,v_1 - c_3 w_3 - \cdots - c_k w_k$. (Possibly $k = 2$, but it can't be $1$.) Again solving for $w_2$, by dividing by $c_2$, shows that $w_2 \in \mathrm{Span}\{v_1, v_2, w_3, \dots, w_k\}$, and therefore this span must be all of $V$. Once again, if $S_2$ has only two elements, we are finished. Otherwise, choose any $v_3 \in S_2$ different from $v_1$ and $v_2$ and express it as $v_3 = d_1 v_1 + d_2 v_2 + e_3 w_3 + \cdots + e_k w_k$, where the $d$'s and $e$'s are scalars. It cannot be the case that either $k = 2$ or that all of the $e_j$'s are zero, because either of these would contradict the independence of $S_2$. We may assume that $e_3 \neq 0$, and so we can solve for $w_3$ in terms of $v_1, v_2, v_3$ and the vectors $w_4, \dots, w_k$, if there are any of those. So $w_3$ is in the span of $v_1, v_2, v_3$ and the $w_j$'s for $j \geq 4$ (if any). If $S_2$ has just 3 elements, we are now finished. If not, we keep going in the same manner. By using independence of $S_2$ and the fact that our new spanning set spans all of $V$, we can keep doing this as long as there are elements left in $S_2$; we can't run out of elements of $S_1$ first, because that would contradict the independence of $S_2$.

This is a little loose (the proper proof is by induction), but if you've made it this far, you should get the picture. I will call the proof completed.

COROLLARY. If $S_1$ and $S_2$ are bases for the same vector space $V$, and either of them is finite, then $|S_1| = |S_2|$.

PROOF: Suppose $S_1$ is finite. Then it is a spanning set for $V$ and $S_2$ is independent, and so $|S_2| \leq |S_1|$. But now $S_2$ is a finite spanning set for $V$ and $S_1$ is independent, so $|S_1| \leq |S_2|$. That's it.

DEFINITION (DIMENSION). Suppose that $V$ is a vector space with a finite spanning set. The dimension $\dim(V)$ is the number of vectors in some (any) basis for $V$.

We collect some basic facts in a

PROPOSITION. Suppose that $V$ is a vector space with finite dimension $n$.

1. If $S$ is an independent subset of $V$, and $v \in V \setminus S$, then $S \cup \{v\}$ is independent if and only if $v \notin \mathrm{Span}(S)$.

2. If $S$ is an independent subset of $V$, then there is a set $S' \supseteq S$ which is a basis for $V$. Consequently, if $S$ has exactly $n$ elements, then $S$ itself is a basis for $V$.

3. If $S$ is a finite spanning set for $V$, then there is $S' \subseteq S$ such that $S'$ is a basis for $V$. Consequently, if $S$ has exactly $n$ elements, then $S$ itself is a basis for $V$.

PROOF: 1. This part doesn't need finite-dimensionality. We already know that if $v \in \mathrm{Span}(S)$, then $S \cup \{v\}$ is dependent. On the other hand, if $S \cup \{v\}$ is dependent, then there are $v_1, v_2, \dots, v_k$ in $S$ and scalars $a_1, a_2, \dots, a_k$ and $b$ such that $a_1 v_1 + a_2 v_2 + \cdots + a_k v_k + b\,v = 0$ is a dependence relation. Here $b \neq 0$, since otherwise we'd have a dependence relation just involving vectors from $S$. So $b\,v = -a_1 v_1 - \cdots - a_k v_k$, hence $v = -\frac{a_1}{b} v_1 - \cdots - \frac{a_k}{b} v_k$ and $v \in \mathrm{Span}(S)$.

2. If $S$ itself is a basis for $V$, just take $S' = S$. If not, there must be some $v \notin \mathrm{Span}(S)$. Then $S \cup \{v\}$ is still independent and must span a larger subspace than $S$ does. If $S \cup \{v\}$ is a basis for all of $V$, we stop; if not, find $w \notin \mathrm{Span}(S \cup \{v\})$. Then $S \cup \{v, w\}$ is still independent; if it is a basis for $V$, we stop. Otherwise we continue; but we must eventually stop, in at most $n$ steps. Then we have $S'$. If $S$ is independent and has $n$ elements and we actually increased it to get a basis $S'$, that basis would have more than $n$ elements, which is impossible. So $S$ itself is a basis for $V$.

3. If $S$ is a spanning set for $V$ but not a basis, it must be dependent. So there is $v \in S$ which is a linear combination of other elements of $S$, and $S \setminus \{v\}$ still spans $V$. Maybe it's a basis for $V$; if not, repeat the process, throwing out another vector $w$ without changing the fact that we still have a spanning set for $V$. We repeat this process as long as necessary, and must eventually stop. When we do, we have a basis $S' \subseteq S$. If $S$ itself has exactly $n$ elements, we can't throw out any vectors to make our basis $S'$, because that basis wouldn't have enough elements. So $S$ itself is a basis.
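
Parts 2 and 3 are effectively algorithms when $V = F^n$: throw away non-pivot columns to shrink a spanning set, or keep adjoining standard basis vectors that raise the rank to grow an independent set. A sympy sketch of both, with made-up vectors:

    from sympy import Matrix, eye

    def shrink_to_basis(vectors):
        """Keep the pivot columns of a spanning set; they form a basis of its span."""
        _, pivots = Matrix.hstack(*vectors).rref()
        return [vectors[j] for j in pivots]

    def extend_to_basis(vectors, n):
        """Extend an independent set in F^n to a basis of F^n by adjoining
        standard basis vectors that are not yet in the span."""
        basis = list(vectors)
        for j in range(n):
            e_j = eye(n).col(j)
            if Matrix.hstack(*(basis + [e_j])).rank() == len(basis) + 1:
                basis.append(e_j)
        return basis

    # Made-up example: a dependent spanning set for a plane in R^3.
    S = [Matrix([1, 2, 3]), Matrix([2, 4, 6]), Matrix([0, 1, 1])]
    B = shrink_to_basis(S)                       # two of the vectors: a basis of Span(S)
    print(len(B), len(extend_to_basis(B, 3)))    # 2 3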

A basis for a space provides a useful and fairly economical way of presenting the space. Often, we simply use the standard basis, if there is one. But for some purposes it makes more sense to use a nonstandard basis, and some spaces don't have anything that could reasonably be called the standard basis. We describe some spaces associated with a matrix, and we will see how to find a basis for each of them.

DEFINITION (ROW, COLUMN AND NULL SPACES). Suppose that $A$ is an $m \times n$ matrix over the field $F$.

1. The row space of $A$, $\mathrm{row}(A)$, is the space spanned by the rows of $A$ (hence it is a subspace of $M_{1,n}(F)$).

2. The column space of $A$, $\mathrm{col}(A)$, is the space spanned by the columns of $A$; it is a subspace of $F^m$.

3. The null space of $A$, $\mathrm{null}(A)$, is the set of all solutions to the system $Ax = 0$; it is a subspace of $F^n$.

There is also something called the row null space, defined like the null space except with rows and multiplication on the other side; we won't deal with it. The null space is of course nothing new, except for its name. (It's the set of solutions to the homogeneous system $Ax = 0$.) We don't at this point need to justify that any of these are subspaces of the various larger spaces mentioned.

We know that row-reducing $A$ does not change the null space. It is easily seen that it doesn't change the row space, either. Obviously switching two rows will not affect the span of the set of rows. Also, replacing a row $R_j$ by $R'_j = \alpha R_j$, where $\alpha$ is a nonzero scalar, doesn't either, since $R'_j$ is clearly a linear combo of the rows of $A$, and also vice versa, since $R_j = \alpha^{-1} R'_j$. (If we allowed $\alpha = 0$, this would probably not be the case.) Finally, replacing $R_j$ by $R'_j = R_j + \alpha R_k$, where $k \neq j$, doesn't affect the row space either, since of course $R'_j$ is a linear combo of the rows of $A$, but also $R_j = R'_j - \alpha R_k$. (Each elementary row operation is reversible.)
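
These invariances are easy to check computationally. A brief sympy sketch on a made-up matrix (row reduction here is done all at once by .rref(), not by individual elementary operations):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [2, 4, 1, 3],
                [3, 6, 1, 4]])      # made-up entries
    B = A.rref()[0]                 # the row-reduced echelon form of A

    # Same row space: both have rank 2, and stacking the rows of A on top of the
    # rows of B produces nothing new.
    print(A.rank() == B.rank() == Matrix.vstack(A, B).rank())   # True

    # Same null space, i.e. the same solutions of the homogeneous system:
    print(A.nullspace() == B.nullspace())                       # True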

Row-reduction does usually change the column space, but what it doesn't change is any dependence relations among the columns. That is, suppose $B$ is obtained from $A$ by row-reduction, the columns of $A$ are $C_1, \dots, C_n$, and the columns of $B$ are $C'_1, \dots, C'_n$. Then for any scalars $\alpha_1, \dots, \alpha_n$ we have $\alpha_1 C_1 + \cdots + \alpha_n C_n = 0$ if and only if $\alpha_1 C'_1 + \cdots + \alpha_n C'_n = 0$. The reason for this is simple: $\alpha_1 C_1 + \cdots + \alpha_n C_n$ is simply $A\alpha$, where $\alpha = (\alpha_1, \dots, \alpha_n)^T$. So $A\alpha = 0$ if and only if $\alpha \in \mathrm{null}(A)$. This will be true if and only if $\alpha \in \mathrm{null}(B)$, if and only if $\alpha_1 C'_1 + \cdots + \alpha_n C'_n = 0$.

It is easy to read off bases for these spaces in case the matrix is in RREF. If $B$ is the RREF of $A$, then we can take the same basis for $\mathrm{row}(A)$ as for $\mathrm{row}(B)$, and the same basis for $\mathrm{null}(A)$ as for $\mathrm{null}(B)$. For $\mathrm{col}(A)$, we consider a basis for $\mathrm{col}(B)$ among the columns of $B$ (which is easy) and take the corresponding columns of $A$.

Let's see an example (over the reals). Start with a $3 \times 5$ matrix $A$ and produce its RREF $B$ by a sequence of elementary row operations; say $B$ turns out to have two nonzero rows, with leading $1$'s in the first and third columns. I trust it is clear that a basis for $\mathrm{row}(B)$ is just the set of its two nonzero rows. This is also a basis for $\mathrm{row}(A)$. The solutions of the homogeneous system $Ax = 0$ can be read off from $B$ in the usual way; they have the form $r\,n_1 + s\,n_2 + t\,n_3$, with one vector for each of the three non-pivot columns, and a basis for $\mathrm{null}(A)$ is then $\{n_1, n_2, n_3\}$, as these three vectors are easily seen to be independent. It is obvious that the first and third columns of $B$ are independent, and each of the remaining columns of $B$ is a linear combination of those two, with coefficients that can be read off directly from $B$. A basis for $\mathrm{col}(B)$ therefore consists of its first and third columns. The corresponding thing will be true for $\mathrm{col}(A)$; that is, a basis for $\mathrm{col}(A)$ consists of the first and third columns of $A$. It is instructive to note (and check!) that each of the other columns of $A$ is the same combination of the first and third columns of $A$ as the corresponding column of $B$ is of the first and third columns of $B$.
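
Here is the whole computation done at once with sympy, on a made-up $3 \times 5$ matrix (not the matrix of the notes' example):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1, 1],
                [2, 4, 1, 3, 0],
                [3, 6, 1, 4, 1]])   # made-up entries, rank 2

    B, pivots = A.rref()
    print(B)                  # its two nonzero rows are a basis for row(A) = row(B)
    print(A.nullspace())      # a basis for null(A) = null(B): one vector per non-pivot column
    print(pivots)             # (0, 2): the first and third columns of A are a basis for col(A)
    print([A.col(j) for j in pivots])

    # The dimension counts discussed below: rank 2, nullity 3, and 2 + 3 = 5 columns.
    print(A.rank(), len(A.nullspace()), A.cols)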

This kind of thing always works; it follows from what we said earlier about linear combos.

From the example, and the preceding comments, I hope the following further comments are clear. The dimension of the row space of a matrix $A$ is simply the number of nonzero rows in the RREF version of $A$. This is also the dimension of the column space, as the columns with the leading $1$'s in the RREF matrix $B$ are distinct elements of the standard basis for $F^m$ (if $A$ has $m$ rows), and this dimension doesn't change as we row-reduce. Finally, the dimension of the null space of an $m \times n$ matrix is $n - r$, where $r$ is the dimension of the row and column space.

DEFINITION (RANK/NULLITY). If $A$ is an $m \times n$ matrix, the rank of $A$, $r(A)$ (or sometimes $\mathrm{rk}(A)$), is the dimension of the row space (and column space). The nullity of $A$, $n(A)$, is the dimension of the null space of $A$.

To summarize what we just said, here are the slogans: row rank equals column rank, and rank plus nullity equals the number of columns.

Given an ordered basis $B = (v_1, \dots, v_n)$ for a space $V$ over $F$ and a vector $w \in V$, it is possible to assign (uniquely) a vector in $F^n$ to $w$. [It also, and crucially, depends on $B$.] Specifically, here's the result and the relevant definition.

PROPOSITION. Suppose that $V$ is a vector space of finite dimension $n$ (over the field $F$). Suppose that $B = (v_1, \dots, v_n)$ is an ordered basis for $V$. Then for any $w \in V$ there is a unique tuple $(a_1, \dots, a_n)$ of scalars such that $w = a_1 v_1 + \cdots + a_n v_n$.

PROOF: The fact that such $a_1, \dots, a_n$ exist follows immediately from the fact that $B$ spans $V$. The (minor) issue is their uniqueness. So suppose that $w = a_1 v_1 + \cdots + a_n v_n = b_1 v_1 + \cdots + b_n v_n$. Then $0 = (a_1 - b_1)v_1 + \cdots + (a_n - b_n)v_n$. Since $v_1, \dots, v_n$ are independent, this implies that $a_1 - b_1 = \cdots = a_n - b_n = 0$; that is, $a_1 = b_1$, $a_2 = b_2$, $\dots$, $a_n = b_n$, which is just what we had to show.

DEFINITION (COORDINATES OF A VECTOR WITH RESPECT TO A BASIS). Suppose that $V$ is a finite-dimensional vector space over the field $F$, and $B = (v_1, \dots, v_n)$ is an ordered basis for $V$. If $w \in V$, the coordinates of $w$ with respect to $B$ are the scalars $a_1, \dots, a_n$ such that $w = a_1 v_1 + \cdots + a_n v_n$. We write $[w]_B = (a_1, \dots, a_n)^T$, a column vector. (This is a case where the notation is more significant than the definition itself.)
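
Computing $[w]_B$ just means solving the linear system whose columns are the basis vectors. A sympy sketch with made-up basis vectors for a plane in $\mathbb{R}^3$ (the helper assumes the system is consistent and the basis vectors are independent):

    from sympy import Matrix

    v1 = Matrix([1, 2, 3])            # made-up ordered basis B = (v1, v2) of a plane in R^3
    v2 = Matrix([0, 1, 7])
    w  = Matrix([2, 5, 13])           # a vector of that plane: w = 2*v1 + 1*v2

    def coords(basis_vectors, w):
        """Solve a1*v1 + ... + an*vn = w by row-reducing the augmented matrix."""
        M = Matrix.hstack(*basis_vectors)
        R = M.row_join(w).rref()[0]
        return R[:len(basis_vectors), -1]

    print(coords([v1, v2], w))        # Matrix([[2], [1]]): a 2-vector, though w lives in R^3
    print(coords([v2, v1], w))        # Matrix([[1], [2]]): the order of the basis matters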

For a trivial example, if $w$ is the zero vector of $V$ and $B$ is any basis, $[w]_B$ will be the zero vector of $F^n$. Only slightly less trivial is this kind of thing: if $V = F^n$ and $B$ is the standard ordered basis and $w = (\alpha_1, \dots, \alpha_n)$, then $[w]_B = (\alpha_1, \dots, \alpha_n)^T$, too. If it were always this simple, we wouldn't bother with the definition/notation. Getting just slightly less trivial, consider $V = P_n(x)$, the vector space of polynomials over the reals of degree at most $n$, and $B = (1, x, \dots, x^n)$, the standard ordered basis. If $p(x) = a_0 + a_1 x + \cdots + a_n x^n$ is a polynomial, then $[p(x)]_B = (a_0, a_1, \dots, a_n)^T$. In this fashion, we can treat the space $P_n(x)$ just like the space $\mathbb{R}^{n+1}$, which you probably already knew.

Here's a nontrivial example. If $V = \mathrm{Span}\{u_1, u_2\}$ (our running plane) and $B$ is the basis in those brackets, in that order, and $w = u_3 = \alpha_1 u_1 + \alpha_2 u_2$, then $[w]_B = (\alpha_1, \alpha_2)^T$. Notice two things about this. One is that even though $w \in \mathbb{R}^3$, its representation in terms of $B$ is a 2-vector. Why does this happen? Another is that if we change $B$ to $B' = (u_2, u_1)$, then $[w]_{B'} = (\alpha_2, \alpha_1)^T$, which is different. The order matters.

This brings up the sticky but very important (if you care about linear algebra) question of what happens when you change the basis. Please pay close attention to this, because the traditional notation/terminology here is very screwed up (i.e., it's backwards) and many mathematicians are traditionalists. The notation here befuddles many students, even though the idea is quite simple; it caught me several times, and I'm a pro. So here's the situation: say $V$ is a finite-dimensional vector space over the field $F$, and each of $B = (v_1, \dots, v_n)$ and $B' = (w_1, \dots, w_n)$ is an ordered basis for $V$.

DEFINITION (CHANGE-OF-BASIS MATRIX). The change-of-basis matrix ${}_{B'}P_B$ is the $n \times n$ matrix which has $j$th column $[v_j]_{B'}$, for $1 \leq j \leq n$.

(Several texts use different notation, and often call this the change-of-basis matrix from $B$ to $B'$, where it really should be the other way around. We will use the notation just as above for this entire course, and will not say which basis we are changing "from" or "to".)

For a first example, consider $V = \mathbb{R}^2$ with the standard ordered basis $B = (e_1, e_2)$, and let $B' = (b_1, b_2)$ be a nonstandard ordered basis for $V$. To find ${}_{B'}P_B$, we must express the standard basis vectors $e_1$ and $e_2$ in terms of the new basis: if $e_1 = p_1 b_1 + p_2 b_2$, then the first column of this change-of-basis matrix is $(p_1, p_2)^T$, and the second column is obtained from $e_2$ in the same way. (Generally, to find these coordinates, you would have to solve a system of equations.)

In general, there will of course also be the other change-of-basis matrix ${}_{B}P_{B'}$, going the other way. In a case like the above example, this one requires no calculation: writing $b_1$ in terms of the standard basis just reproduces the entries of $b_1$, so the first column is $b_1$ itself, and the second, similarly, is $b_2$. That is, ${}_{B}P_{B'}$ has columns $b_1$ and $b_2$. It is easy to check that ${}_{B'}P_B$ and ${}_{B}P_{B'}$ are inverses of each other. As we shall soon see, this is no accident.
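
A quick sympy sketch of this example, with made-up entries for $b_1$ and $b_2$ (the entries used in the notes are not reproduced here). It also previews the proposition that follows: multiplying by ${}_{B'}P_B$ translates $B$-coordinates into $B'$-coordinates.

    from sympy import Matrix, eye

    b1, b2 = Matrix([3, 1]), Matrix([5, 2])   # made-up nonstandard basis B' = (b1, b2)

    Q = b1.row_join(b2)   # _B P_{B'}: its columns are just b1 and b2
    P = Q.inv()           # _{B'}P_B: its j-th column is [e_j]_{B'} (solve Q*x = e_j)
    print(P)              # Matrix([[2, -5], [-1, 3]]): e1 = 2*b1 - b2, e2 = -5*b1 + 3*b2
    print(P * Q == eye(2) and Q * P == eye(2))   # True: the two matrices are inverses

    v = Matrix([7, 3])                    # a vector, written in standard (B-) coordinates
    v_Bp = P * v                          # its B'-coordinates
    print(v_Bp)                           # Matrix([[-1], [2]])
    print(v_Bp[0]*b1 + v_Bp[1]*b2 == v)   # True: same vector, different presentation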

First we explain what the change-of-basis matrix does.

PROPOSITION. Let $V$ be a finite-dimensional vector space over a given field $F$, let each of $B$ and $B'$ be an ordered basis for $V$, and let $v$ be any vector in $V$. Then $[v]_{B'} = {}_{B'}P_B\,[v]_B$.

Thus, the change-of-basis matrix provides a means (by matrix multiplication) of translating the presentation of any given vector from the "input" basis $B$ into the "output" basis $B'$. Note that the vector itself doesn't change; it's just how we present it that changes.

PROOF: Suppose $B = (v_1, \dots, v_n)$, $B' = (w_1, \dots, w_n)$ and $P = {}_{B'}P_B = (p_{j,k})_{1 \leq j,k \leq n}$; finally, suppose that $[v]_B = (a_1, \dots, a_n)^T$. Then we have the following: $v = a_1 v_1 + \cdots + a_n v_n$ and, for each $1 \leq k \leq n$, $v_k = p_{1,k} w_1 + \cdots + p_{n,k} w_n$. Then $v = a_1(p_{1,1} w_1 + \cdots + p_{n,1} w_n) + \cdots + a_n(p_{1,n} w_1 + \cdots + p_{n,n} w_n) = (p_{1,1} a_1 + \cdots + p_{1,n} a_n) w_1 + \cdots + (p_{n,1} a_1 + \cdots + p_{n,n} a_n) w_n$. Thus the coordinates of $[v]_{B'}$ are (in order) $p_{1,1} a_1 + \cdots + p_{1,n} a_n$, $\dots$, $p_{n,1} a_1 + \cdots + p_{n,n} a_n$. These are just the coordinates of $P\,(a_1, \dots, a_n)^T$. That does it.

An example is in order, and let's keep it simple. Let $V = \mathbb{R}^2$ with the two bases $B$ and $B'$ mentioned above, and let $P = {}_{B'}P_B$ be the matrix found there. If $v$ is given by its column of coordinates with respect to the standard basis $B$, then $P\,[v]_B$ is $[v]_{B'}$; of course it is easy to check directly that the resulting combination of $b_1$ and $b_2$ really is $v$.

Now suppose $V$ is any finite-dimensional vector space over $F$ and $B$, $B'$ are ordered bases for $V$. Then the two change-of-basis matrices $P = {}_{B'}P_B$ and $P' = {}_{B}P_{B'}$ are inverses of each other; let's see why. Suppose $a$ is any vector in $F^n$; there is a unique vector $v \in V$ such that $a = [v]_B$. Specifically, if $a = (a_1, \dots, a_n)^T$ and $B = (v_1, \dots, v_n)$, then $v$ is $a_1 v_1 + \cdots + a_n v_n$. Now $(P'P)a = P'(Pa) = P'[v]_{B'} = [v]_B = a$. That is, if we multiply $P'P$ by any $a \in F^n$, we get $a$ back. So $P'P = I$; similarly $PP' = I$.

We will see more examples and applications of these ideas later. Now on to a different matter. There are several ways to construct new vector spaces from given ones. We mention two of them now. Suppose that $V$ is a vector space over $F$ and $W_1$ and $W_2$ are subspaces of $V$. Consider the intersection $W_1 \cap W_2$ (the set of all vectors in both $W_1$ and $W_2$); we will make it an exercise to show that this is itself a subspace of $V$. By contrast, the union $W_1 \cup W_2$ is almost never a subspace of $V$. (The union consists of those vectors in either $W_1$ or $W_2$, or both. Obviously it is a subspace if $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$, but these are the only cases. Again, this will be left as an exercise.) But there is a subspace corresponding to the union; in fact, it is the span of the union, but it is usually called the sum.

DEFINITION (THE SUM OF TWO SUBSPACES). Suppose that $V$ is a vector space, and $W_1$ and $W_2$ are subspaces of $V$. The sum of $W_1$ and $W_2$, denoted $W_1 + W_2$, is the set $\{w_1 + w_2 : w_1 \in W_1,\ w_2 \in W_2\}$.

(It will be an exercise to show that the sum is again a subspace.) In case $W_1 \cap W_2 = \{0\}$, we call the sum the direct sum of $W_1$ and $W_2$, and will often write $W_1 \oplus W_2$ in this case. [For instance, if $W_1$ and $W_2$ are distinct lines in Euclidean 3-space, both going through the origin, then $W_1 + W_2$ is the plane containing both of them; it is in fact a direct sum.]
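
A tiny sympy sketch of the difference between the union and the sum, using two made-up lines through the origin in $\mathbb{R}^2$: the sum of a vector from each line is in $W_1 + W_2$ but in neither line, so the union is not closed under addition.

    from sympy import Matrix

    w1 = Matrix([1, 0])    # spans W1 (the x-axis); made-up directions
    w2 = Matrix([0, 1])    # spans W2 (the y-axis)
    v = w1 + w2            # (1, 1): an element of W1 + W2

    def on_line(direction, v):
        """v lies on the line spanned by `direction` iff the two vectors are dependent."""
        return direction.row_join(v).rank() <= 1

    print(on_line(w1, v) or on_line(w2, v))    # False: v is not in the union of W1 and W2
    print(Matrix.hstack(w1, w2).rank() == 2)   # True: W1 + W2 is all of R^2 (a direct sum)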

(There are infinite versions of intersection and sum, but we will not deal with them.)

It is well known that for finite sets $A$ and $B$, $|A \cup B| + |A \cap B| = |A| + |B|$. There is a corresponding rule for the dimensions of finite-dimensional spaces. It is often known as the modular law, but I prefer to call it Lunch In Chinatown. Here it is.

PROPOSITION. Suppose that $V$ is a vector space, and $W_1$ and $W_2$ are finite-dimensional subspaces of $V$. Then $\dim(W_1 + W_2) + \dim(W_1 \cap W_2) = \dim(W_1) + \dim(W_2)$.

PROOF: Suppose that $\dim(W_1 \cap W_2) = k$, $\dim(W_1) = k + l$ and $\dim(W_2) = k + m$; clearly each of $W_1$ and $W_2$ has at least as big a dimension as the intersection. We must show that $\dim(W_1 + W_2) = k + l + m$. Let $\{u_1, \dots, u_k\}$ be a basis for $W_1 \cap W_2$. We can extend it to a basis $\{u_1, \dots, u_k, v_1, \dots, v_l\}$ for $W_1$, and we can also extend it to a basis $\{u_1, \dots, u_k, x_1, \dots, x_m\}$ for $W_2$. Once we show that $\{u_1, \dots, u_k, v_1, \dots, v_l, x_1, \dots, x_m\}$ is a basis for $W_1 + W_2$, we will be finished. Note that each of these vectors is in $W_1 + W_2$, since they are all in $W_1$ or $W_2$ or both (the $u$'s), and we can take either of $w_1$ or $w_2$ to be $0$ in the definition of $W_1 + W_2$.

Now we show that the set of all these vectors spans $W_1 + W_2$. Let $w = w_1 + w_2$ be any vector in $W_1 + W_2$, where $w_1 \in W_1$ and $w_2 \in W_2$. There must be scalars $a_1, \dots, a_k$ and $b_1, \dots, b_l$ such that $w_1 = a_1 u_1 + \cdots + a_k u_k + b_1 v_1 + \cdots + b_l v_l$, since the $u$'s and $v$'s together span $W_1$. Similarly there are scalars $c_1, \dots, c_k$ and $d_1, \dots, d_m$ such that $w_2 = c_1 u_1 + \cdots + c_k u_k + d_1 x_1 + \cdots + d_m x_m$. Then $w = (a_1 + c_1)u_1 + \cdots + (a_k + c_k)u_k + b_1 v_1 + \cdots + b_l v_l + d_1 x_1 + \cdots + d_m x_m$. So $w \in \mathrm{Span}\{u_1, \dots, u_k, v_1, \dots, v_l, x_1, \dots, x_m\}$, and so this set spans all of $W_1 + W_2$.

Now we show the set is independent. To this end, suppose there are scalars $e_1, \dots, e_k$, $f_1, \dots, f_l$, $g_1, \dots, g_m$ so that $e_1 u_1 + \cdots + e_k u_k + f_1 v_1 + \cdots + f_l v_l + g_1 x_1 + \cdots + g_m x_m = 0$. We must show that this can only happen if all the scalars are $0$. The long equation we just wrote can be rewritten as $e_1 u_1 + \cdots + e_k u_k + f_1 v_1 + \cdots + f_l v_l = -g_1 x_1 - \cdots - g_m x_m$. The left-hand side of this equation is a vector in $W_1$, as it is a linear combination of the basis vectors for $W_1$; for the same reason, the right-hand side is in $W_2$. Both sides being equal, this vector is in $W_1 \cap W_2$. Hence it is $h_1 u_1 + \cdots + h_k u_k$ for some scalars $h_1, \dots, h_k$. Using the right-hand side, we see that $h_1 u_1 + \cdots + h_k u_k + g_1 x_1 + \cdots + g_m x_m = 0$. By the independence of our basis for $W_2$, we must have that all the $h$'s and $g$'s are $0$. Now this forces $e_1 u_1 + \cdots + e_k u_k + f_1 v_1 + \cdots + f_l v_l = 0$, and, by the independence of our basis for $W_1$, that makes all the $e$'s and $f$'s $0$ too. This shows that the given set is indeed independent. Hence it's a basis for $W_1 + W_2$, and that finishes the proof.

A simple instance of this is the well-known fact that if two distinct planes in $\mathbb{R}^3$ meet at all, they meet in a line. (For two distinct planes through the origin, the sum is all of $\mathbb{R}^3$, so the formula gives $3 + \dim(W_1 \cap W_2) = 2 + 2$, and the intersection is 1-dimensional.)
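
A sympy check of the formula for exactly this situation, with two made-up planes through the origin in $\mathbb{R}^3$. The dimension of the intersection is computed from the null space of the matrix whose columns are the spanning vectors of $W_1$ followed by the negated spanning vectors of $W_2$ (this count is valid here because each spanning set is independent).

    from sympy import Matrix

    W1 = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]   # the xy-plane (made-up example)
    W2 = [Matrix([1, 0, 1]), Matrix([0, 1, 1])]   # the plane z = x + y

    A1, A2 = Matrix.hstack(*W1), Matrix.hstack(*W2)
    dim_sum = A1.row_join(A2).rank()              # W1 + W2 is spanned by all four columns
    dim_int = len(A1.row_join(-A2).nullspace())   # solutions of A1*x = A2*y give the intersection
    print(dim_sum, dim_int)                       # 3 1
    print(dim_sum + dim_int == A1.rank() + A2.rank())   # True: Lunch In Chinatown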

Let's see an example of this result in action. Let $V = \mathbb{R}^4$, let $W_1$ be the span of three given vectors, and let $W_2$ be the span of three other given vectors. We will find a basis for each of $W_1$, $W_2$, $W_1 + W_2$ and $W_1 \cap W_2$. We start by putting all six vectors in as the columns of one big matrix, the spanning vectors of $W_1$ on the left and those of $W_2$ on the right. (I trust you understand why there's a line down the middle.) We call the columns of this matrix $C_1, \dots, C_6$. Note that these columns span $W_1 + W_2$. We row-reduce this as a single matrix to get the RREF, and call its columns $C'_1, \dots, C'_6$.

Say the left half of the result is also in RREF, with pivots in the first two columns; from that we can just read off the fact that $\{C_1, C_2\}$ forms a basis for $W_1$. Say the right half is not quite in RREF, but close enough to see that the columns $C'_4$ and $C'_5$ are independent while $C'_6$ is a linear combination of them (in fact, their sum); if this were less clear, we could just row-reduce that half a bit more. Anyway, a basis for $W_2$ is then $\{C_4, C_5\}$. Also, looking at the whole thing, we see that $C'_1, C'_2, C'_4$ are independent but the rest are linear combos of them; a basis for $W_1 + W_2$ is then just $\{C_1, C_2, C_4\}$.

What about the intersection? First, the Lunch tells us its dimension is $1$ (as $2 + 2 = 3 + 1$). So we need a single nonzero vector in the intersection. Looking at the RREF, say it is clear that $C'_5 = C'_1 + C'_2 + C'_4$; this dependence relation will also hold for the original matrix. Thus $C_5 = C_1 + C_2 + C_4$, i.e., $C_5 - C_4 = C_1 + C_2$. By the left-hand side, this vector is in $W_2$; by the right-hand side, it's in $W_1$. So it is in $W_1 \cap W_2$, and the set with just this vector in it is a basis for $W_1 \cap W_2$.

A couple of comments. Notice that the hardest basis to find was the one for the intersection; this is typical of these kinds of problems. Also, we could have used $C_6 - 2C_4 = C_1 + C_2$, but that gives us the same vector.
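
The same bookkeeping can be done mechanically. A sympy sketch of the big-matrix method with made-up subspaces of $\mathbb{R}^4$ (not the ones above); here each spanning set is taken to be independent, which guarantees that the final step produces a genuine basis of the intersection.

    from sympy import Matrix

    W1 = [Matrix([1, 0, 1, 0]), Matrix([0, 1, 0, 1])]   # made-up basis of W1
    W2 = [Matrix([1, 1, 1, 1]), Matrix([1, 2, 0, 0])]   # made-up basis of W2

    A = Matrix.hstack(*(W1 + W2))   # the big matrix: columns C1, C2 | C3, C4
    R, pivots = A.rref()
    print(pivots)                   # (0, 1, 3): C1, C2, C4 form a basis for W1 + W2

    # Each dependence relation a1*C1 + a2*C2 + a3*C3 + a4*C4 = 0 rearranges to
    # a1*C1 + a2*C2 = -(a3*C3 + a4*C4), a vector lying in both W1 and W2.
    intersection = [rel[0]*W1[0] + rel[1]*W1[1] for rel in A.nullspace()]
    print(intersection)             # [Matrix([[-1], [-1], [-1], [-1]])]: a basis of the intersection

    # Lunch In Chinatown: 2 + 2 = 3 + 1.
    print(len(W1) + len(W2) == len(pivots) + len(intersection))   # True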

Another example, with the same question. Here $V = \mathbb{R}^5$, $W_1$ is the span of three given vectors and $W_2$ is the span of three other given vectors. We start by forming the big matrix as before, again calling its columns $C_1, \dots, C_6$, and once more we skip the easy row-reduction details, just moving straight to the result. From the RREF it is apparent that $\{C_1, C_2, C_3\}$ is a basis for $W_1$, nearly as apparent that a basis for $W_2$ is $\{C_4, C_5, C_6\}$, and a basis for $W_1 + W_2$ is $\{C_1, C_2, C_3, C_4\}$. Here (since $3 + 3 = 4 + 2$) $W_1 \cap W_2$ is 2-dimensional, so we need two independent vectors, and again we use the dependence relations apparent from the RREF. Say one of them reads $C'_5 = 3C'_1 - C'_2 + C'_4$; then $C_5 = 3C_1 - C_2 + C_4$, and $C_5 - C_4 = 3C_1 - C_2$ is good for one of our two vectors (as it's in both $W_1$ and $W_2$). Say the other reads $C'_6 = 4C'_1 + C'_3 + C'_4$; then $C_6 = 4C_1 + C_3 + C_4$, and $C_6 - C_4 = 4C_1 + C_3$ is also in the intersection. These two vectors are independent (why? because $C_1$, $C_2$, $C_3$ are), so a basis for $W_1 \cap W_2$ consists of exactly these two vectors.