Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence


We have seen that there are two operations defined on a given vector space $V$: (1) vector addition of two vectors, and (2) scalar multiplication of a vector by a scalar. The most fundamental way to combine vectors in a vector space is by employing these two operations on some collection of vectors and scalars. The definition below makes this idea precise:

Definition 2.3. A linear combination of a list $v_1, \dots, v_m$ of vectors in a vector space $V$ over a field $F$ is a vector of the form
$$\alpha_1 v_1 + \dots + \alpha_m v_m,$$
where $\alpha_1, \dots, \alpha_m \in F$.

Example. As a simple example, consider the vectors
$$w = \begin{pmatrix} 3 & -5 \\ 0 & 1 \end{pmatrix}, \quad e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad e_3 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
in the vector space $U_2(\mathbb{R})$ of $2 \times 2$ real upper triangular matrices. We can think of $w$ as a linear combination of the vectors $e_1$, $e_2$, and $e_3$: indeed, it is quite easy to see that
$$w = 3e_1 - 5e_2 + e_3.$$
In a sense, we can use $e_1$, $e_2$, and $e_3$, together with the two operations of addition and scalar multiplication in $U_2(\mathbb{R})$, to build $w$.

Example. Describe the set of all linear combinations of $e_1$, $e_2$, and $e_3$.

It should be clear that you can use the vectors $e_1$, $e_2$, and $e_3$ to build any other vector in $U_2(\mathbb{R})$: if
$$u = \begin{pmatrix} u_1 & u_2 \\ 0 & u_3 \end{pmatrix}$$
is a vector in $U_2(\mathbb{R})$, then we can think of $u$ as the linear combination
$$u = u_1 e_1 + u_2 e_2 + u_3 e_3.$$
On the other hand, it is easy to see that any linear combination of the vectors $e_1$, $e_2$, and $e_3$ will be another vector in $U_2(\mathbb{R})$. Thus there is a sense in which these three vectors give you all the information you need to know about $U_2(\mathbb{R})$.
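For readers who like to verify such computations, here is a minimal numpy sketch of the example above (the use of numpy and the variable names are our own illustration, not part of the original notes):

```python
import numpy as np

# The three "building block" matrices and the target w from the example.
e1 = np.array([[1, 0], [0, 0]])
e2 = np.array([[0, 1], [0, 0]])
e3 = np.array([[0, 0], [0, 1]])
w  = np.array([[3, -5], [0, 1]])

# A linear combination uses only scalar multiplication and addition.
combo = 3 * e1 - 5 * e2 + 1 * e3
print(np.allclose(combo, w))  # True: w = 3*e1 - 5*e2 + e3
```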

Span

In the example above, we saw that we could use three vectors to build all of $U_2(\mathbb{R})$. This leads us to a definition:

Definition 2.5. Let $S = (v_1, v_2, \dots, v_m)$ be a list of vectors in a vector space $V$. The span of $S$, denoted $\operatorname{span}(S)$ or $\operatorname{span}(v_1, v_2, \dots, v_m)$, is the set of all linear combinations of vectors in $S$. The span of the empty list $()$ is defined to be $\{0\}$; that is, $\operatorname{span}() = \{0\}$.

Remark. Most literature refers to the span of a set, as opposed to the span of a list (the difference being that a set is unordered, while a list has an order). However, order is invariably important when referring to some collection of vectors that span, so we solve the problem by thinking of our collection of vectors as a list (even though I will often refer to "spanning sets").

Example. Describe the span of the list $(e_1, e_2, e_3)$.

We saw above that the set of all linear combinations of vectors in this list is all of $U_2(\mathbb{R})$; we write $\operatorname{span}(e_1, e_2, e_3) = U_2(\mathbb{R})$.

It turns out that spans of sets are extremely important in the context of vector spaces:

Theorem 2.7. Let $S = (v_1, v_2, \dots, v_m)$ be a list of vectors in a vector space $V$. Then:
1. The span of $S$ is a subspace $W$ of $V$.
2. $W = \operatorname{span}(S)$ is the smallest subspace of $V$ that contains all of the vectors in $S$.

With these ideas in mind, we record a definition:

Definition 2.8. If there is a list of vectors $(v_1, v_2, \dots, v_n)$ so that $V = \operatorname{span}(v_1, v_2, \dots, v_n)$, we say that the vectors $v_1, v_2, \dots, v_n$ span $V$.

The theorem above tells us several useful facts. First of all, it guarantees that the set we create when we take the span of a list of vectors is automatically a subspace of the original vector space. Second, the theorem guarantees that if $U$ is a subspace of $V$ different from $W = \operatorname{span}(S)$, and $U$ contains all of the vectors in $S$, then $U$ contains all of the vectors in $W$ and is a larger vector space than $W$.
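Computationally, the membership question "is $b$ a linear combination of the vectors in $S$?" is a linear system, which we can attack with least squares. The sketch below is our own illustration (the helper name in_span is hypothetical); it flattens matrices into column vectors, so the same test works in $U_2(\mathbb{R})$ as in $\mathbb{R}^n$:

```python
import numpy as np

def in_span(vectors, b, tol=1e-10):
    """Check numerically whether b is a linear combination of `vectors`.

    Each vector may be a 1-D array or a matrix; matrices are flattened so
    they become columns of one coefficient matrix. Returns the coefficients
    if b lies in the span, and None otherwise.
    """
    A = np.column_stack([np.ravel(v) for v in vectors])  # columns = the list S
    coeffs, *_ = np.linalg.lstsq(A, np.ravel(b), rcond=None)
    residual = np.linalg.norm(A @ coeffs - np.ravel(b))
    return coeffs if residual < tol else None

# The example above: w = 3*e1 - 5*e2 + e3 in U_2(R).
e1 = np.array([[1.0, 0.0], [0.0, 0.0]])
e2 = np.array([[0.0, 1.0], [0.0, 0.0]])
e3 = np.array([[0.0, 0.0], [0.0, 1.0]])
w  = np.array([[3.0, -5.0], [0.0, 1.0]])

print(in_span([e1, e2, e3], w))  # approximately [ 3. -5.  1.]
```

Least squares is used (rather than np.linalg.solve) because the coefficient matrix need not be square: a zero residual means $b$ is exactly a linear combination, while a nonzero residual means it is not.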

Key Point. The key point of the theorem is that spans of sets are vector subspaces. In a sense, the theorem gives us the power to build many different subspaces of a vector space $V$: just choose your favorite vectors from $V$ and take the span (the set of all linear combinations) of those vectors. You automatically get a subspace.

Example 1. In a previous section, we saw that the set $U_2(\mathbb{R})$ of all $2 \times 2$ real upper triangular matrices is a subspace of the vector space $M_2(\mathbb{R})$ of all $2 \times 2$ real matrices, by checking that $U_2(\mathbb{R})$ is closed under the operations of addition and scalar multiplication. Theorem 2.7 gives us another way to check that $U_2(\mathbb{R})$ is a subspace: we know that $U_2(\mathbb{R}) = \operatorname{span}(e_1, e_2, e_3)$. Since the vectors $e_1$, $e_2$, and $e_3$ are also vectors in the vector space $M_2(\mathbb{R})$, Theorem 2.7 says that their span $U_2(\mathbb{R})$ is a subspace.

Example 2. The standard unit vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
in $\mathbb{R}^n$ actually span $\mathbb{R}^n$: any vector in $\mathbb{R}^n$ can be written as a linear combination of the vectors $e_1, e_2, \dots, e_n$.

Example 3. In the vector space $P_5(F)$ of all polynomials over $F$ of degree at most 5, the list $S = (1, x, x^2, x^3, x^4, x^5)$ spans $P_5(F)$, because any vector in $P_5(F)$ (i.e., any polynomial of degree at most 5) can be written as a linear combination of the vectors in $S$.

Example 4. In an earlier section of this unit, we saw that the set $sl_2(\mathbb{R})$ of $2 \times 2$ trace-zero matrices is a vector space. Recall that every trace-zero matrix has the form
$$\begin{pmatrix} a & b \\ c & -a \end{pmatrix}.$$
With this in mind, it should be clear that the vectors
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$
span $sl_2(\mathbb{R})$.

Example 5: A set that does not span. Let's return once more to the set $U_2(\mathbb{R})$ of all $2 \times 2$ upper triangular matrices. We saw earlier that the three vectors
$$e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad e_3 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$

span $U_2(\mathbb{R})$. Now, if we only consider the list $(e_1, e_2)$ (i.e., throw away $e_3$), I claim that we no longer have a spanning list of $U_2(\mathbb{R})$. It is actually quite easy to see that the list $(e_1, e_2)$ does not span $U_2(\mathbb{R})$: for example, there is no way to find a linear combination of
$$e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
that will yield the vector
$$w = \begin{pmatrix} 1 & 2 \\ 0 & 4 \end{pmatrix},$$
since every linear combination of $e_1$ and $e_2$ has a $0$ in its bottom-right entry. Since there are elements of $U_2(\mathbb{R})$ that are not linear combinations of $e_1$ and $e_2$, the list $(e_1, e_2)$ does not span $U_2(\mathbb{R})$.

Example. Do the vectors
$$v_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$$
span $\mathbb{R}^2$?

Determining if $v_1$ and $v_2$ span $\mathbb{R}^2$ amounts to checking that every vector in $\mathbb{R}^2$ can be written as a linear combination of $v_1$ and $v_2$. Given an arbitrary vector
$$b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$
in $\mathbb{R}^2$, we hope to be able to find scalars $k_1$ and $k_2$ so that $b = k_1 v_1 + k_2 v_2$; alternatively, we can write this equation as
$$\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 3k_1 \\ k_1 \end{pmatrix} + \begin{pmatrix} 2k_2 \\ 2k_2 \end{pmatrix}.$$
In equation form, we want to show that the system
$$\begin{aligned} 3k_1 + 2k_2 &= b_1 \\ k_1 + 2k_2 &= b_2 \end{aligned}$$
is consistent for any real numbers $b_1$ and $b_2$. Fortunately, we learned a theorem in a previous unit that will allow us to check this quite easily:

Theorem. Let $A$ be an $n \times n$ matrix. Then the following are equivalent:

1. $Ax = b$ is consistent for every $n \times 1$ matrix $b$.
2. $\det A \neq 0$.

Thinking of our system
$$\begin{aligned} 3k_1 + 2k_2 &= b_1 \\ k_1 + 2k_2 &= b_2 \end{aligned}$$
in matrix form as
$$\begin{pmatrix} 3 & 2 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} k_1 \\ k_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},$$
we see that the system will be consistent for every vector $b$ in $\mathbb{R}^2$, as desired, if and only if the determinant of the coefficient matrix for the system is nonzero. Let's check the determinant:
$$\det \begin{pmatrix} 3 & 2 \\ 1 & 2 \end{pmatrix} = 3 \cdot 2 - 2 \cdot 1 = 6 - 2 = 4.$$
Since the determinant of the coefficient matrix for the system is nonzero, the system is consistent for any vector $b$ in $\mathbb{R}^2$. In other words, no matter how we choose the vector $b$, we will always be able to find scalars $k_1$ and $k_2$ so that
$$\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} 3k_1 \\ k_1 \end{pmatrix} + \begin{pmatrix} 2k_2 \\ 2k_2 \end{pmatrix}.$$
Thus the vectors $v_1$ and $v_2$ span $\mathbb{R}^2$, so we may use them to build all of $\mathbb{R}^2$. For example, if we choose
$$b = \begin{pmatrix} 7 \\ -3 \end{pmatrix},$$
we can write $b$ as the linear combination
$$\begin{pmatrix} 7 \\ -3 \end{pmatrix} = 5 \begin{pmatrix} 3 \\ 1 \end{pmatrix} - 4 \begin{pmatrix} 2 \\ 2 \end{pmatrix}.$$

Key Point. At this point, we know of two different lists that span $\mathbb{R}^2$:
$$\left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) \quad \text{and} \quad \left( \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right).$$
In a sense, these two sets give us two different ways to build $\mathbb{R}^2$. The two alternate spanning sets are graphed below in $\mathbb{R}^2$.
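The determinant test and the search for coefficients can be reproduced in a few lines of numpy (our own sketch, not part of the notes):

```python
import numpy as np

# Columns of A are the proposed spanning vectors v1 = (3, 1) and v2 = (2, 2).
A = np.array([[3.0, 2.0],
              [1.0, 2.0]])

print(np.linalg.det(A))  # 4.0 (up to roundoff): nonzero, so (v1, v2) spans R^2

# Because det A != 0, every b has a unique pair of coefficients k1, k2.
b = np.array([7.0, -3.0])
k = np.linalg.solve(A, b)
print(k)  # [ 5. -4.], i.e. b = 5*v1 - 4*v2
```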

As indicated above, we can use either spanning set to build any vector in $\mathbb{R}^2$ that we like, as illustrated with the vector $\begin{pmatrix} 7 \\ -3 \end{pmatrix}$.

Finite and Infinite Dimensional Vector Spaces

We will soon see that the ideas of spanning and linear combination are closely tied to the idea of the size of a vector space. Thinking back to Example 2 above, in $\mathbb{R}^n$ we saw that there is a set of 2 vectors that spans $\mathbb{R}^2$, a set of 3 vectors that spans $\mathbb{R}^3$, etc. Of course, we also think of $\mathbb{R}^2$ as a smaller space than $\mathbb{R}^3$ (indeed, $\mathbb{R}^2$ is embedded in $\mathbb{R}^3$); we will soon see that the relative sizes of spanning sets correspond in a natural way to the relative sizes of the spaces themselves (we will discuss this idea further in the next few sections).

At this point, we would like to make an observation: there are vector spaces that are not spanned by any finite set.

Example. Let $P(\mathbb{R})$ be the vector space of all polynomials over $\mathbb{R}$, and let $S$ be any (finite) list of vectors in $P(\mathbb{R})$. The list is finite, so there is a vector in the list of highest degree, say
$$\alpha_m x^m + \dots + \alpha_1 x + \alpha_0.$$
Clearly the polynomial $x^{m+1}$ is not a linear combination of vectors from $S$, so no finite set of vectors spans $P(\mathbb{R})$.

With this example in mind, we record a definition and follow the definition with a reasonable assumption:

Definitions 2.10/2.15. A vector space $V$ is called finite dimensional if there is a finite list of vectors spanning $V$. Otherwise, $V$ is infinite dimensional.

Example. We saw above that there is no finite list spanning $P(\mathbb{R})$, so it is an example of an infinite dimensional vector space. In fact, the spaces of all real-valued functions and of all continuous functions with domain $(-\infty, \infty)$ are both infinite dimensional. On the other hand, $P_n(F)$ is finite dimensional for any $n < \infty$, as are $\mathbb{R}^n$ and $M_{mn}$.

Remark. Unless otherwise specified, we will assume throughout this course that all of our vector spaces are finite dimensional, i.e., that each one has a finite list of vectors spanning it.

Linear Independence

We saw above that if a list $S = (v_1, v_2, \dots, v_m)$ of vectors from a vector space $V$ spans $V$, then every vector in $V$ can be written as a linear combination of the vectors in $S$. In a sense, we can use the vectors in $S$, along with the operations of addition and scalar multiplication on $V$, to build any vector we want from $V$. We looked at an example above in $\mathbb{R}^2$: we saw that both of the lists
$$S = (e_1, e_2) = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) \quad \text{and} \quad T = (v_1, v_2) = \left( \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
span $\mathbb{R}^2$; the vectors from $S$ are graphed below in red and those from $T$ are graphed in blue.

Now, I claim that the list of four vectors
$$R = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
is also a spanning list. Of course, my claim is quite easy to check: since the list $R$ includes all of the vectors from the spanning lists $S$ and $T$, we can certainly write any vector in $\mathbb{R}^2$ as a linear combination of the vectors in $R$.

Unfortunately, this spanning list causes us some problems. As an example, consider the vector
$$w = \begin{pmatrix} 6 \\ 4 \end{pmatrix}.$$
Since $w$ is in $\mathbb{R}^2$, it is a linear combination of the vectors in the spanning set $R$, say
$$w = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \begin{pmatrix} 2 \\ 2 \end{pmatrix} = e_1 + e_2 + v_1 + v_2.$$
However, we could also write $w$ as a linear combination of the vectors in $R$ as
$$w = 6\begin{pmatrix} 1 \\ 0 \end{pmatrix} + 4\begin{pmatrix} 0 \\ 1 \end{pmatrix} = 6e_1 + 4e_2 + 0v_1 + 0v_2,$$
or even as
$$w = -4\begin{pmatrix} 1 \\ 0 \end{pmatrix} - 6\begin{pmatrix} 0 \\ 1 \end{pmatrix} + 5\begin{pmatrix} 2 \\ 2 \end{pmatrix} = -4e_1 - 6e_2 + 0v_1 + 5v_2.$$
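A small numpy check (our own illustration) confirms that all three coefficient lists really do build the same $w$:

```python
import numpy as np

# Columns of A are the four vectors in the (too big) spanning list R.
A = np.array([[1.0, 0.0, 3.0, 2.0],
              [0.0, 1.0, 1.0, 2.0]])

# Three different coefficient vectors, all of which build the same w = (6, 4).
for coeffs in ([1, 1, 1, 1], [6, 4, 0, 0], [-4, -6, 0, 5]):
    print(A @ np.array(coeffs, dtype=float))  # [6. 4.] each time
```

This is exactly the ambiguity discussed next: with four columns and only two rows, the system $A k = w$ is underdetermined, so it has infinitely many solutions.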

In a sense, this particular spanning set for $\mathbb{R}^2$ introduces ambiguity: it gives us multiple ways to write a particular vector in $\mathbb{R}^2$ as a linear combination of vectors in the spanning set. Now, if we restrict our lists to just
$$S = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)$$
or alternatively to
$$T = \left( \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right),$$
we won't have this difficulty: a vector in $\mathbb{R}^2$ can be written in only one way as a linear combination of vectors in $S$, and in only one way as a linear combination of vectors in $T$. So there is a sense in which the spanning list
$$R = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
for $\mathbb{R}^2$ is just too big: it gives us too much flexibility, introducing ambiguity into the way we build vectors (should we write $w$ as $w = e_1 + e_2 + v_1 + v_2$, or as $w = -4e_1 - 6e_2 + 5v_2$?).

Key Point. Saying that a list $S$ spans a vector space $V$ is the same as saying that we can use the vectors in $S$ to build any vector that we like from $V$; there is a sense in which $S$ is all we need to know in order to understand $V$. Of course, we want to avoid any ambiguity in our understanding of $V$, so we would like to choose the list $S$ to be as small as possible. We quantify what we mean when we say that a set is "too big" below, with a discussion of linear independence.

Linearly Independent Lists

The reason that the spanning set
$$R = \left( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
is too big is that we can use some of the vectors in $R$ to build other vectors in $R$. For example, you should check that $e_1$ can be written as the linear combination
$$e_1 = \tfrac{1}{2} v_1 - \tfrac{1}{4} v_2.$$
So, in a sense, we could throw out $e_1$ from our spanning list without losing any information: anytime we want to write $e_1$, we could just as well write $\tfrac{1}{2} v_1 - \tfrac{1}{4} v_2$. With these ideas in mind, we introduce the idea of a list of linearly independent vectors:

Definitions.7/.9. A nonempty list S = (v v... v m ) of one or more vectors in V is said to be linearly independent if the only choice of scalars α... α n F so that is α v +... + α m v m = α =... = α m =. Otherwise the list of vectors is linearly dependent. For purposes of convenience we choose to call the empty list () independent. Key Point. Saying that a set of vectors is linearly independent is the same as saying that they are unique or perhaps even essential; you can t build one of them using the others so losing one of the vectors in the list would result in a loss of information about the list and its span. Altering the observation from the example above just a bit we see that e v + 4 v = ; thus ( ( ) R = ( ) ( ) 3 ( )) is a list of dependent vectors. However we will see momentarily that lists ( ( ) S = ( )) and are both linearly independent. ( ( ) 3 T = ( )) Remark. Any (finite) set containing the vector is automatically linearly dependent.

Examples of Linearly Independent Vectors

Example 1. In $\mathbb{R}^n$, the standard unit vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
(which span $\mathbb{R}^n$) are also linearly independent. To see why, suppose that $\alpha_1, \dots, \alpha_n$ are constants so that
$$\alpha_1 e_1 + \dots + \alpha_n e_n = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
Then clearly $\alpha_1 = \dots = \alpha_n = 0$, and the vectors are linearly independent.

Example 2. The list
$$T = \left( \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
of vectors in $\mathbb{R}^2$ is an independent list: indeed, suppose that
$$\alpha_1 \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Then we have
$$\begin{pmatrix} 0 \\ 0 \end{pmatrix} = \alpha_1 \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 3\alpha_1 \\ \alpha_1 \end{pmatrix} + \begin{pmatrix} 2\alpha_2 \\ 2\alpha_2 \end{pmatrix} = \begin{pmatrix} 3\alpha_1 + 2\alpha_2 \\ \alpha_1 + 2\alpha_2 \end{pmatrix}.$$
In other words, we would like to know what values of $\alpha_1$ and $\alpha_2$ satisfy the system
$$\begin{aligned} 3\alpha_1 + 2\alpha_2 &= 0 \\ \alpha_1 + 2\alpha_2 &= 0. \end{aligned}$$
Once again, we return to a theorem from a previous unit:

Theorem. Let $A$ be an $n \times n$ matrix. Then the following are equivalent:
1. $Ax = 0$ has only the trivial solution.
2. $\det A \neq 0$.

Thinking of our system
$$\begin{aligned} 3\alpha_1 + 2\alpha_2 &= 0 \\ \alpha_1 + 2\alpha_2 &= 0 \end{aligned}$$
in matrix form as
$$\begin{pmatrix} 3 & 2 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
we see that we should check the determinant of the coefficient matrix:
$$\det \begin{pmatrix} 3 & 2 \\ 1 & 2 \end{pmatrix} = 3 \cdot 2 - 2 \cdot 1 = 6 - 2 = 4.$$
Since the determinant of the coefficient matrix is nonzero, the system has only the trivial solution $\alpha_1 = \alpha_2 = 0$: in other words, the only way to make the equation above true is to choose $\alpha_1 = 0$ and $\alpha_2 = 0$. Thus the vectors in the list
$$T = \left( \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 2 \end{pmatrix} \right)$$
are linearly independent.

Example 3. We saw above that the list
$$\left( \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \right)$$
does not span the vector space $U_2(\mathbb{R})$ of all $2 \times 2$ real upper triangular matrices. However, it is quite easy to see that this list is linearly independent: the only way to make
$$k_1 \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + k_2 \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
is by choosing $k_1 = k_2 = 0$.
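Example 3 can also be checked numerically, combining both ideas from this section: a rank test certifies independence, while a least-squares residual shows that the span misses part of $U_2(\mathbb{R})$. A small numpy sketch (our own illustration; the target matrix w is the one from Example 5 above):

```python
import numpy as np

# e1 and e2 from Example 3, flattened so each matrix becomes a column.
e1 = np.array([[1.0, 0.0], [0.0, 0.0]]).ravel()
e2 = np.array([[0.0, 1.0], [0.0, 0.0]]).ravel()
A = np.column_stack([e1, e2])

# Independent: rank 2 equals the number of vectors in the list.
print(np.linalg.matrix_rank(A))  # 2

# ...but not spanning: the target w = [[1, 2], [0, 4]] leaves a residual.
w = np.array([[1.0, 2.0], [0.0, 4.0]]).ravel()
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.linalg.norm(A @ coeffs - w))  # 4.0, nonzero: w is not in the span
```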

Example 4. The vectors
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$
span $sl_2(\mathbb{R})$; I claim that they are also linearly independent. This fact is easy to see: if
$$\alpha \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + \beta \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + \gamma \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},$$
then clearly $\alpha = \beta = \gamma = 0$.

Example of a List of Dependent Vectors. I claim that the vectors
$$v_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \\ 3 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \quad \text{and} \quad v_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \end{pmatrix}$$
are not linearly independent in $\mathbb{R}^4$. We can check that this is true by inspecting the coefficient matrix of the system of equations
$$\begin{aligned} \alpha_1 &= 0 \\ \alpha_1 + \alpha_2 &= 0 \\ 2\alpha_1 + \alpha_2 &= 0 \\ 3\alpha_1 + \alpha_2 + \alpha_3 + 3\alpha_4 &= 0. \end{aligned}$$
Again, we resort to the theorem from the previous unit:

Theorem. Let $A$ be an $n \times n$ matrix. Then the following are equivalent:
1. $Ax = 0$ has only the trivial solution.
2. $\det A \neq 0$.

Since the statements above are equivalent, we know that there are nontrivial solutions $\alpha_1, \alpha_2, \alpha_3,$ and $\alpha_4$ to the system if and only if the determinant of the coefficient matrix is $0$. The coefficient matrix is given by
$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 3 & 1 & 1 & 3 \end{pmatrix}.$$

Since the matrix is lower triangular, its determinant is just the product of its diagonal entries; we have
$$\det A = (1)(1)(0)(3) = 0,$$
which indicates that there are nontrivial solutions to the system. Thus the vectors are linearly dependent.

Relative Sizes of Spanning Lists and Independent Lists

We have considered the list
$$S_1 = \left( \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \right)$$
of vectors in $U_2(\mathbb{R})$ several times; in particular, we have seen that the list is independent but does not span $U_2(\mathbb{R})$. On the other hand, the list
$$S_2 = \left( \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \right)$$
certainly spans $U_2(\mathbb{R})$; however, it is not an independent list (you should check)!

The relative sizes of these lists are important to note when considering the phenomena described above. Notice that $S_1$ is a relatively small list, while $S_2$ is relatively large. Smaller lists are more likely to be independent, whereas larger lists are more likely to span. In a sense, the more vectors you add to a list, the more likely you are to introduce some dependence relations (thus preventing the list from being independent); but adding more vectors means that you are also more likely to be able to build the entire space (i.e., span). These ideas are made concrete by the following theorem; recall that the notation $|\cdot|$, when applied to a list or set, refers to the number of items in the set, or the length of the list.

Theorem 2.23. Let $V$ be a finite dimensional vector space. Suppose that $I$ is a list of independent vectors in the space, and that $S$ is a list of vectors that spans $V$. Then $|I| \leq |S|$.

Before we look at a proof of the theorem, let us discuss a helpful lemma:

Linear Dependence Lemma. Suppose that $(v_1, \dots, v_m)$ is a linearly dependent list in $V$. Then there is a $j$, $1 \leq j \leq m$, so that:
1. $v_j \in \operatorname{span}(v_1, v_2, \dots, v_{j-1})$, and
2. $\operatorname{span}(v_1, \dots, v_m) = \operatorname{span}(v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_m)$; that is, removing $v_j$ from the list does not affect the span of the list.

We will not give a detailed proof of the lemma, but will discuss it briefly. Clearly, if the list is dependent, then there are constants $\alpha_1, \dots, \alpha_m$, not all $0$, so that
$$\alpha_1 v_1 + \dots + \alpha_m v_m = 0.$$
We may assume that $\alpha_m \neq 0$ (reorder the list if necessary). Then
$$v_m = -\frac{\alpha_1}{\alpha_m} v_1 - \dots - \frac{\alpha_{m-1}}{\alpha_m} v_{m-1},$$
so that $v_m \in \operatorname{span}(v_1, \dots, v_{m-1})$. For part (2), suppose that $w$ is a linear combination of the vectors in the list; simply replace $v_m$ by
$$-\frac{\alpha_1}{\alpha_m} v_1 - \dots - \frac{\alpha_{m-1}}{\alpha_m} v_{m-1},$$
so that $w$ is a linear combination of the remaining vectors.

Key Point. If $v$ is a linear combination of other vectors in a list, removing $v$ from the list will not affect the span of the list.

Proof of Theorem 2.23. Given the independent list
$$I = (u_1, u_2, \dots, u_m)$$
and the (finite) spanning list
$$S = (w_1, w_2, \dots, w_n),$$
we wish to show that $m \leq n$. We proceed in an iterative fashion:

Step 1: Create the list $\hat{S}_1 = (u_1, w_1, \dots, w_n)$. Vector $u_1$ is in the span of $S$, so $\hat{S}_1$ is a dependent list with the same span as that of $S$; that is, $\operatorname{span}(\hat{S}_1) = \operatorname{span}(S) = V$. Now $u_1$ is a linear combination of the vectors in $S$, say
$$u_1 = \alpha_1 w_1 + \dots + \alpha_n w_n;$$
we may assume that $\alpha_n \neq 0$ (reorder the list if necessary). Thus $w_n$ is a linear combination of the vectors in the list
$$S_1 = (u_1, w_1, w_2, \dots, w_{n-1})$$
created from $\hat{S}_1$ by deleting $w_n$. By applying the Linear Dependence Lemma to the dependent list $\hat{S}_1$, we see that removing $w_n$ from the list does not change its span. We see that
$$\operatorname{span}(S_1) = \operatorname{span}(\hat{S}_1) = V.$$

Step 2: Vector $u_2$ is in the span of $S_1$, so the list $\hat{S}_2 = (u_2, u_1, w_1, \dots, w_{n-1})$ is dependent (and spans $V$). Now $u_2$ is a linear combination of the vectors in the list, say
$$u_2 = \beta_1 u_1 + \alpha_1 w_1 + \dots + \alpha_{n-1} w_{n-1}.$$
Now, the list $I$ is independent, so
$$u_2 \neq \beta_1 u_1.$$
Thus at least one $\alpha_i \neq 0$, say $\alpha_{n-1}$ (again, reorder the list if necessary). Thus $w_{n-1}$ is a linear combination of the vectors in the list
$$S_2 = (u_2, u_1, w_1, w_2, \dots, w_{n-2})$$
created from the dependent list $\hat{S}_2$ by removing $w_{n-1}$. Again, we have merely removed a vector that is a linear combination of the others from a dependent list, so the Linear Dependence Lemma applies:
$$\operatorname{span}(S_2) = \operatorname{span}(\hat{S}_2) = V.$$

Steps 3 through m: Iterate the process from Step 2 of adding $u_i$ to $S_{i-1}$ and deleting an appropriate $w_j$ to create the new spanning list $S_i$. At each step, there must be at least one $w_j$ available for deletion, because if not, we would have
$$u_i = \beta_1 u_1 + \dots + \beta_{i-1} u_{i-1},$$
violating the independence of $I$. Thus we will not run out of vectors $w_j$ before we run out of vectors $u_i$, implying that $|I| \leq |S|$.

We record without proof one more theorem in this section, which will become useful later:

Theorem 2.26. Every subspace of a finite dimensional vector space is finite dimensional. In other words, if $V$ has a finite spanning list, then so does any subspace of $V$.
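The exchange procedure in the proof can even be carried out numerically. Below is a minimal Python/numpy sketch under our own assumptions (the helper names rank and exchange are hypothetical, and a rank test stands in for the "reorder so some coefficient is nonzero" step): at each stage we adjoin the next independent vector, then delete some remaining $w_j$ whose removal keeps the span intact, mirroring Steps 1 through m.

```python
import numpy as np

def rank(vectors):
    """Dimension of the span of a list of vectors (0 for the empty list)."""
    return np.linalg.matrix_rank(np.column_stack(vectors)) if vectors else 0

def exchange(I, S):
    """Steinitz-style exchange from the proof of Theorem 2.23: trade one
    vector of the spanning list S for each independent vector in I, never
    shrinking the span. Succeeding for all of I demonstrates |I| <= |S|."""
    us, ws = [], list(S)
    for u in I:
        us.append(u)  # Step i: adjoin u_i; the combined list is now dependent
        for j in range(len(ws)):
            rest = us + ws[:j] + ws[j + 1:]
            if rank(rest) == rank(us + ws):  # deleting ws[j] keeps the span
                del ws[j]
                break
        else:
            raise ValueError("no w left to delete: S cannot span V")
    return us + ws

I = [np.array([3.0, 1.0]), np.array([2.0, 2.0])]  # the independent list T
S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # the spanning list (e1, e2)
print(exchange(I, S))  # [array([3., 1.]), array([2., 2.])]: still spans R^2
```

The for/else mirrors the proof's key observation: the else branch can only trigger if we run out of deletable vectors $w_j$, which the independence of $I$ rules out whenever $S$ truly spans $V$.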