SUPPLEMENT TO CHAPTER 3


1.1 Linear combinations and spanning sets

Consider the vector space $\mathbb{R}^3$ with the unit vectors $e_1 = (1,0,0)$, $e_2 = (0,1,0)$, $e_3 = (0,0,1)$. Every vector $v = (a,b,c) \in \mathbb{R}^3$ can be expressed in terms of the vectors $e_1, e_2, e_3$, namely
$$v = (a,b,c) = a e_1 + b e_2 + c e_3 = a(1,0,0) + b(0,1,0) + c(0,0,1);$$
this means that every vector of $\mathbb{R}^3$ is a sum of scalar multiples of $e_1$, $e_2$, and $e_3$. Similarly, in $M(2,2)$ consider the matrices
$$E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad E_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
For every $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in M(2,2)$ we have that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = aE_{11} + bE_{12} + cE_{21} + dE_{22}.$$

In the following we want to study this property more closely. To do so we first need a definition:

Definition. Let $V$ be a vector space and $v_1, \dots, v_n \in V$ a sequence of vectors in $V$. A vector $w \in V$ is called a linear combination of the vectors $v_1, \dots, v_n$ if there are scalars $r_1, \dots, r_n \in \mathbb{R}$ so that
$$w = r_1 v_1 + \dots + r_n v_n = \sum_{i=1}^{n} r_i v_i.$$
The scalars $r_1, \dots, r_n$ are called the coefficients of this linear combination.

We have just seen that every vector $v \in \mathbb{R}^3$ is a linear combination of the three vectors $e_1, e_2, e_3$ and that every matrix of $M(2,2)$ is a linear combination of the matrices $E_{11}, E_{12}, E_{21}, E_{22}$. Given any $n$ vectors $v_1, \dots, v_n$ in a vector space $V$ we want to investigate the set of all linear combinations of these $n$ vectors. Thus we make the definition:

Definition. Let $V$ be a vector space and $v_1, \dots, v_n \in V$ vectors in $V$. The span of the vectors $v_1, \dots, v_n$ is the set of all linear combinations of these $n$ vectors:
$$S = \operatorname{span}\{v_1, \dots, v_n\} = \{r_1 v_1 + \dots + r_n v_n \mid r_1, \dots, r_n \in \mathbb{R}\} = \Big\{ \sum_{i=1}^{n} r_i v_i \;\Big|\; r_1, \dots, r_n \in \mathbb{R} \Big\}.$$
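For instance, one can check membership in a span by solving a small linear system; the following verification is a routine illustration. Is $w = (1,2,3)$ in $\operatorname{span}\{(1,1,0), (0,1,1)\}$? We look for scalars $r_1, r_2 \in \mathbb{R}$ with
$$r_1(1,1,0) + r_2(0,1,1) = (r_1,\ r_1 + r_2,\ r_2) = (1,2,3).$$
The first and third coordinates force $r_1 = 1$ and $r_2 = 3$, but then the second coordinate is $r_1 + r_2 = 4 \ne 2$. The system has no solution, so $w \notin \operatorname{span}\{(1,1,0), (0,1,1)\}$. By contrast, $(1,4,3) = 1\cdot(1,1,0) + 3\cdot(0,1,1)$ does lie in this span.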

We also say that the set $S$ is spanned by the vectors $v_1, \dots, v_n$.

Whenever we define certain subsets of a vector space, the first question that arises is whether these subsets relate in some way to the vector space structure on the whole space. In almost all cases we want to know whether these subsets are subspaces. Here is the answer for spans:

Theorem 1.1. (Theorem 3.3 of the book) Let $V$ be a vector space and $v_1, \dots, v_n \in V$ vectors in $V$. Then
$$S = \operatorname{span}\{v_1, \dots, v_n\}$$
is a subspace of $V$.

Proof. In order to prove this theorem remember that by Theorem 1.11 (of the book) we have to show:
(0) $0 \in S$;
(i) $S$ is closed under addition;
(ii) $S$ is closed under scalar multiplication.

(0) We have
$$0 = \sum_{i=1}^{n} 0 v_i = 0 v_1 + \dots + 0 v_n \in S.$$
Thus $S$ contains (at least) the zero vector and is not empty.

(i) Next we want to show that $S$ is closed under vector addition. Let $v, w$ be vectors in $S$. Since $S$ is the set of all linear combinations of $v_1, \dots, v_n$, the vectors $v$ and $w$ are linear combinations of $v_1, \dots, v_n$; that is, there are scalars $r_1, \dots, r_n \in \mathbb{R}$ and $s_1, \dots, s_n \in \mathbb{R}$ with
$$v = \sum_{i=1}^{n} r_i v_i \quad\text{and}\quad w = \sum_{i=1}^{n} s_i v_i.$$
Then
$$v + w = \sum_{i=1}^{n} r_i v_i + \sum_{i=1}^{n} s_i v_i = \sum_{i=1}^{n} (r_i v_i + s_i v_i) = \sum_{i=1}^{n} (r_i + s_i) v_i,$$
and the vector $v + w$ is a linear combination of the vectors $v_1, \dots, v_n$. Hence $v + w \in S$.

(ii) In order to show that $S$ is closed under scalar multiplication, let $v \in S$ and $c \in \mathbb{R}$. Then
$$v = \sum_{i=1}^{n} r_i v_i,$$

where $r_i \in \mathbb{R}$, and
$$cv = c\sum_{i=1}^{n} r_i v_i = \sum_{i=1}^{n} c(r_i v_i) = \sum_{i=1}^{n} (c r_i) v_i,$$
so $cv \in S = \operatorname{span}\{v_1, \dots, v_n\}$. This proves that $\operatorname{span}\{v_1, \dots, v_n\}$ is a subspace of $V$.

The examples at the beginning of this section show that $e_1, e_2, e_3$ span the vector space $\mathbb{R}^3$ and that $E_{11}, E_{12}, E_{21}, E_{22}$ span $M(2,2)$:
$$\mathbb{R}^3 = \operatorname{span}\{e_1, e_2, e_3\} \quad\text{and}\quad M(2,2) = \operatorname{span}\{E_{11}, E_{12}, E_{21}, E_{22}\}.$$
You can easily convince yourself that when you leave out one of the vectors in the sequence $e_1, e_2, e_3$ (in $E_{11}, E_{12}, E_{21}, E_{22}$ in the case of $M(2,2)$), the vector (resp. matrix) left out is not in the span of the remaining vectors; for example,
$$e_3 = (0,0,1) \notin \operatorname{span}\{e_1, e_2\} \quad\text{and}\quad E_{21} \notin \operatorname{span}\{E_{11}, E_{12}, E_{22}\}.$$
We say that $\{e_1, e_2, e_3\}$ (resp. $\{E_{11}, E_{12}, E_{21}, E_{22}\}$) is a minimal spanning set of $\mathbb{R}^3$ (resp. $M(2,2)$).

Another obvious fact is that whenever a vector $v = (a,b,c)$ is written as a linear combination of the unit vectors $e_1, e_2, e_3$,
$$v = (a,b,c) = a e_1 + b e_2 + c e_3,$$
the scalars $a, b, c$ are unique; this means that if $w = (a', b', c') = a' e_1 + b' e_2 + c' e_3$ with $a \ne a'$ or $b \ne b'$ or $c \ne c'$, then $v \ne w$. Similarly, every matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in M(2,2)$ can be written in a unique way as a linear combination of the matrices $E_{11}, E_{12}, E_{21}, E_{22}$.

On the other hand, the vectors $e_1, e_2, e_3, u = (2,0,1)$ also span $\mathbb{R}^3$ (why?). Here the vector $v = (1,0,1)$ can be written as a linear combination of these 4 vectors in at least two different ways:
$$v = 1e_1 + 0e_2 + 1e_3 + 0u = (-1)e_1 + 0e_2 + 0e_3 + 1u.$$
Similarly, the matrices $E_{11}, E_{12}, E_{21}, E_{22}, D = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$ span $M(2,2)$, and the matrix $A = \begin{pmatrix} 1 & 2 \\ 4 & 0 \end{pmatrix}$ can be written as a linear combination of the matrices $E_{11}, E_{12}, E_{21}, E_{22}, D$ in at least two different ways.
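Explicitly, one possible pair of such expressions for $A$ is
$$A = \begin{pmatrix} 1 & 2 \\ 4 & 0 \end{pmatrix} = 1E_{11} + 2E_{12} + 4E_{21} + 0E_{22} + 0D = (-1)E_{11} + 2E_{12} + 4E_{21} + (-1)E_{22} + 1D,$$
where the second expression agrees with the first because $D = 2E_{11} + E_{22}$, so $(-1)E_{11} + (-1)E_{22} + D = E_{11}$.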

In the following sections we are going to address two questions:
(1) Are there shortest spanning sets, and when is a spanning set shortest?
(2) Suppose we can write a vector $v$ as a linear combination of vectors $v_1, \dots, v_n$, say $v = \sum_{i=1}^{n} r_i v_i$ where $r_i \in \mathbb{R}$. When are the coefficients $r_1, \dots, r_n$ unique?
The best tool for studying these questions is the notion of linear independence, which will be introduced next.

Linear independence

Definition. In a vector space $V$ a finite sequence of vectors $v_1, \dots, v_n \in V$ is called linearly independent if and only if the equation
$$r_1 v_1 + \dots + r_n v_n = 0$$
implies that $r_1 = r_2 = \dots = r_n = 0$. If it is possible for the equation to hold when one or more of the coefficients are nonzero, the sequence is called linearly dependent.

Remark. For any sequence of vectors $v_1, \dots, v_n$ the zero vector can always be written as
$$(\ast)\qquad 0 = 0v_1 + \dots + 0v_n.$$
The definition of linear independence states that the sequence $v_1, \dots, v_n$ is linearly independent if $(\ast)$ is the one and only way the zero vector can be written as a linear combination of the vectors $v_1, \dots, v_n$. Thus the sequence of vectors $v_1, \dots, v_n$ is linearly independent if and only if the zero vector can be written in a unique way (namely $(\ast)$) as a linear combination of the sequence $v_1, \dots, v_n$.

The following question seems natural: Suppose that the sequence $v_1, \dots, v_n$ is linearly independent. We know then that the zero vector can be written in a unique way as a linear combination of $v_1, \dots, v_n$. What about other linear combinations? Can other vectors in $\operatorname{span}\{v_1, \dots, v_n\}$ also be written as a linear combination of $v_1, \dots, v_n$ with unique coefficients? The answer is yes, and even more can be shown:

Theorem 1.2. Let $v_1, \dots, v_n$ be a sequence of vectors in a vector space $V$. The following are equivalent:
(1) $v_1, \dots, v_n$ are linearly independent.
(2) Every vector in $\operatorname{span}\{v_1, \dots, v_n\}$ has a unique expression as a linear combination of the vectors $v_1, \dots, v_n$.
(3) No vector $v_i$ is a linear combination of the other vectors $v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n$.
(Compare with Theorem 3.5 of the book.)

Proof. (1) $\Rightarrow$ (2): Let $w \in \operatorname{span}\{v_1, \dots, v_n\}$ and assume that
$$w = r_1 v_1 + \dots + r_n v_n = s_1 v_1 + \dots + s_n v_n,$$
where $r_i, s_i \in \mathbb{R}$. We have to show that $r_i = s_i$ for all $1 \le i \le n$. By subtracting $w$ from itself we obtain:
$$0 = w - w = (r_1 - s_1)v_1 + \dots + (r_n - s_n)v_n.$$

Since the vectors $v_1, \dots, v_n$ are linearly independent,
$$r_1 - s_1 = r_2 - s_2 = \dots = r_n - s_n = 0 \quad\Longrightarrow\quad r_1 = s_1,\ r_2 = s_2,\ \dots,\ r_n = s_n.$$

(2) $\Rightarrow$ (3): Proof by contrapositive: We show that if a vector $v_i$ is a linear combination of the vectors $v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n$, then there is a vector in $\operatorname{span}\{v_1, \dots, v_n\}$ which can be written in at least two different ways as a linear combination of $v_1, \dots, v_n$. Suppose that
$$v_i \in \operatorname{span}\{v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n\}.$$
Then there are scalars $r_j \in \mathbb{R}$ so that
$$v_i = r_1 v_1 + \dots + r_{i-1} v_{i-1} + r_{i+1} v_{i+1} + \dots + r_n v_n = r_1 v_1 + \dots + r_{i-1} v_{i-1} + 0 v_i + r_{i+1} v_{i+1} + \dots + r_n v_n.$$
On the other hand,
$$v_i = 0 v_1 + \dots + 0 v_{i-1} + 1 v_i + 0 v_{i+1} + \dots + 0 v_n,$$
which shows that the vector $v_i \in \operatorname{span}\{v_1, \dots, v_n\}$ can be written in two different ways as a linear combination of $v_1, \dots, v_n$. Thus if (3) is false, then (2) is false.

(3) $\Rightarrow$ (1): Proof by contrapositive: Suppose that the vectors $v_1, \dots, v_n$ are linearly dependent and that
$$0 = r_1 v_1 + \dots + r_n v_n = \sum_{i=1}^{n} r_i v_i$$
with $r_i \ne 0$ for some $1 \le i \le n$. Then
$$r_i v_i = (-r_1)v_1 + \dots + (-r_{i-1})v_{i-1} + (-r_{i+1})v_{i+1} + \dots + (-r_n)v_n,$$
and therefore
$$v_i = (-r_1/r_i)v_1 + \dots + (-r_{i-1}/r_i)v_{i-1} + (-r_{i+1}/r_i)v_{i+1} + \dots + (-r_n/r_i)v_n,$$
so $v_i \in \operatorname{span}\{v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n\}$. This shows that if (1) is false, then so is (3).

Theorem 1.4. Let $V$ be a vector space and $v_1, \dots, v_n \in V$ a sequence of linearly independent vectors. If $w \notin \operatorname{span}\{v_1, \dots, v_n\}$, then the sequence $v_1, \dots, v_n, w$ is linearly independent. (This is the Expansion Lemma 3.12 of the book.)

Proof. Let $r_i, t \in \mathbb{R}$ be scalars with
$$0 = r_1 v_1 + \dots + r_n v_n + t w.$$
Then
$$t w = (-r_1)v_1 + \dots + (-r_n)v_n,$$
and if $t \ne 0$ then
$$w = (-r_1/t)v_1 + \dots + (-r_n/t)v_n \in \operatorname{span}\{v_1, \dots, v_n\},$$

a contradiction. Thus we must have that $t = 0$. But then
$$0 = r_1 v_1 + \dots + r_n v_n,$$
and also $r_1 = r_2 = \dots = r_n = 0$ since the vectors $v_1, \dots, v_n$ are linearly independent.

Example. The set of unit vectors $\{e_1, e_2, e_3\}$ in $\mathbb{R}^3$ is linearly independent, since
$$0 = (0,0,0) = a e_1 + b e_2 + c e_3 = (a, b, c)$$
implies that $a = b = c = 0$. On the other hand, the set
$$\{e_1, e_2, e_3, u = (2,0,1)\}$$
is linearly dependent since
$$u = (2,0,1) \in \operatorname{span}\{e_1, e_2, e_3\} = \mathbb{R}^3.$$
A similar argument shows that the sequence of matrices $E_{11}, E_{12}, E_{21}, E_{22}, D = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$ from Section 1 is linearly dependent, while $E_{11}, E_{12}, E_{21}, E_{22}$ is linearly independent.

The following lemma will become useful in the next section when we discuss bases of vector spaces.

Linear Dependence Lemma 1.5. If $v_1, \dots, v_n \in V$ is a sequence of linearly dependent vectors in a vector space $V$ with $v_1 \ne 0$, then there is a $j \in \{2, \dots, n\}$ so that
(a) $v_j \in \operatorname{span}\{v_1, \dots, v_{j-1}\}$;
(b) $\operatorname{span}\{v_1, \dots, v_n\} = \operatorname{span}\{v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_n\}$.

Proof. (a) By assumption the sequence $v_1, \dots, v_n$ is linearly dependent. This implies that there are scalars $r_1, \dots, r_n \in \mathbb{R}$, not all $0$, so that
$$0 = r_1 v_1 + \dots + r_n v_n.$$
Since $v_1 \ne 0$, not all of $r_2, \dots, r_n$ are $0$ (if $r_2 = \dots = r_n = 0$, then $r_1 v_1 = 0$ with $r_1 \ne 0$, which would force $v_1 = 0$). Let $j \ge 2$ be the largest integer with $r_j \ne 0$; then $r_{j+1} = \dots = r_n = 0$, and
$$v_j = (-r_1/r_j)v_1 + \dots + (-r_{j-1}/r_j)v_{j-1},$$
which shows that $v_j \in \operatorname{span}\{v_1, \dots, v_{j-1}\}$.

(b) By (a) we know that there is an integer $j \in \{2, \dots, n\}$ so that $v_j \in \operatorname{span}\{v_1, \dots, v_{j-1}\}$. Obviously,
$$\operatorname{span}\{v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_n\} \subseteq \operatorname{span}\{v_1, \dots, v_n\}.$$
In order to show the other inclusion we use the fact that $v_j \in \operatorname{span}\{v_1, \dots, v_{j-1}\}$ and write
$$v_j = r_1 v_1 + \dots + r_{j-1} v_{j-1}$$
with $r_i \in \mathbb{R}$. Let $v \in \operatorname{span}\{v_1, \dots, v_n\}$; then
$$v = t_1 v_1 + \dots + t_n v_n$$
for some $t_i \in \mathbb{R}$. Substituting the first equation for $v_j$ into the second equation yields
$$v = (t_1 + t_j r_1)v_1 + \dots + (t_{j-1} + t_j r_{j-1})v_{j-1} + t_{j+1} v_{j+1} + \dots + t_n v_n,$$
so $v \in \operatorname{span}\{v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_n\}$. This shows (b).

Corollary 1.6. If $\{v_1, \dots, v_n\}$ is a set of vectors in $V$ with $v_1 \ne 0$ and $v_j \notin \operatorname{span}\{v_1, \dots, v_{j-1}\}$ for all $2 \le j \le n$, then $\{v_1, \dots, v_n\}$ is a set of linearly independent vectors in $V$.
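As a quick illustration of the lemma, consider $v_1 = (1,0)$, $v_2 = (0,1)$, $v_3 = (1,1)$ in $\mathbb{R}^2$. The sequence is linearly dependent because
$$1v_1 + 1v_2 + (-1)v_3 = 0,$$
and $v_1 \ne 0$. The largest index with nonzero coefficient is $j = 3$, and indeed
$$v_3 = (-1/(-1))v_1 + (-1/(-1))v_2 = v_1 + v_2 \in \operatorname{span}\{v_1, v_2\},$$
while $\operatorname{span}\{v_1, v_2, v_3\} = \operatorname{span}\{v_1, v_2\} = \mathbb{R}^2$, as claimed in part (b).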

Bases

Definition. Let $V$ be a vector space and $B = \{v_1, \dots, v_n\} \subseteq V$ a finite set of vectors in $V$. $B$ is called a basis of $V$ if $B$ is linearly independent and spans $V$, i.e., $B$ is linearly independent and
$$V = \operatorname{span}(B) = \operatorname{span}\{v_1, \dots, v_n\}.$$

Definition. A vector space $V$ is called finite dimensional if there is a finite sequence of vectors $v_1, \dots, v_n \in V$ so that $V = \operatorname{span}\{v_1, \dots, v_n\}$.

In this course most of our focus will be on the study of finite dimensional vector spaces, like the vector spaces $\mathbb{R}^n$, $P_n$ and $M(m,n)$. We also know a few examples of vector spaces which are not finite dimensional, for example $P$, the vector space of all polynomials, or $C([0,1])$, the vector space of all continuous functions from the interval $[0,1]$ into $\mathbb{R}$.

This first theorem on finite dimensional vector spaces is the foundation for the rest of the course:

Theorem 1.7. Every finite dimensional vector space has a basis.

Proof. Let $V$ be a vector space and $v_1, \dots, v_n \in V$ a sequence of vectors of $V$ with $V = \operatorname{span}\{v_1, \dots, v_n\}$. If the sequence $v_1, \dots, v_n$ is linearly independent, then the set $B = \{v_1, \dots, v_n\}$ is a basis of $V$ and we are done. If the vectors $v_1, \dots, v_n$ are linearly dependent, we need to distinguish two more cases. In the first case we have that $v_1 = 0$; then
$$V = \operatorname{span}\{v_1, \dots, v_n\} = \operatorname{span}\{v_2, \dots, v_n\}.$$
Thus we may replace the spanning sequence $v_1, \dots, v_n$ by the shorter spanning sequence $v_2, \dots, v_n$. In the second case, where $v_1, \dots, v_n$ are linearly dependent and the first vector $v_1 \ne 0$, we apply the Linear Dependence Lemma 1.5. According to Lemma 1.5 there is an integer $2 \le j \le n$ so that
$$V = \operatorname{span}\{v_1, \dots, v_n\} = \operatorname{span}\{v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_n\}.$$
Again we can replace the spanning sequence $v_1, \dots, v_n$ by the shorter spanning sequence $v_1, \dots, v_{j-1}, v_{j+1}, \dots, v_n$.

In summary, we have just proven that if the spanning sequence of vectors $v_1, \dots, v_n$ is linearly dependent, then there is a shorter spanning sequence, say $v_1, \dots, v_{n-1}$ (after renumbering the vectors). We now apply the same arguments as before to the new and shorter spanning sequence $v_1, \dots, v_{n-1}$ and obtain (as before): If the sequence $v_1, \dots, v_{n-1}$ is linearly independent, then $B = \{v_1, \dots, v_{n-1}\}$ is a basis of $V$. If the spanning sequence $v_1, \dots, v_{n-1}$ is linearly dependent, we use the same arguments as before to show that it can be shortened to a spanning sequence of $n-2$ vectors, say (after renumbering) $v_1, \dots, v_{n-2}$.

This process has to stop since there are only finitely many vectors in any spanning sequence. It stops when we have found a spanning sequence $v_1, \dots, v_r$ of linearly independent vectors. This means that the set $B = \{v_1, \dots, v_r\}$ is a basis of the vector space $V$.
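The proof describes an actual procedure; here is how it runs on a small example. The sequence $v_1 = (1,0)$, $v_2 = (2,0)$, $v_3 = (0,1)$ spans $\mathbb{R}^2$ but is linearly dependent, since $2v_1 + (-1)v_2 + 0v_3 = 0$. The first vector $v_1$ is nonzero, so the Linear Dependence Lemma applies with $j = 2$: we have $v_2 = 2v_1 \in \operatorname{span}\{v_1\}$, and
$$\mathbb{R}^2 = \operatorname{span}\{v_1, v_2, v_3\} = \operatorname{span}\{v_1, v_3\}.$$
The shorter sequence $v_1 = (1,0)$, $v_3 = (0,1)$ is linearly independent, so the process stops and $B = \{(1,0), (0,1)\}$ is a basis of $\mathbb{R}^2$.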

Corollary 1.8. (Contraction Theorem 3.11 of the book) Suppose that the sequence of vectors $v_1, \dots, v_n$ spans the vector space $V$. Then some subset of $\{v_1, \dots, v_n\}$ is a basis of $V$.

Proof. The proof is exactly the same as the proof of Theorem 1.7.

The set of unit vectors $B = \{e_1, e_2, e_3\}$ is a basis of $\mathbb{R}^3$, while the sets $C = \{e_1, e_2, e_3, u = (2,0,1)\}$ and $D = \{e_1, e_2\}$ are not bases of $\mathbb{R}^3$. Similarly, the set $B = \{E_{11}, E_{12}, E_{21}, E_{22}\}$ is a basis of the vector space $M(2,2)$.

In the following we call the set of unit vectors $\{e_1, e_2, e_3\}$ the standard basis of $\mathbb{R}^3$. More generally, if $n$ is any positive integer and $1 \le i \le n$, then the $i$th standard (basis) vector of $\mathbb{R}^n$ is the vector $e_i$ that has $0$'s in all coordinate positions except the $i$th, where it has a $1$. Thus
$$e_1 = (1, 0, \dots, 0),\ e_2 = (0, 1, 0, \dots, 0),\ \dots,\ e_n = (0, \dots, 0, 1).$$
The set $\{e_1, \dots, e_n\}$ is called the standard basis of $\mathbb{R}^n$. (Note that $\{e_1, \dots, e_n\}$ is indeed a basis of $\mathbb{R}^n$.)

We call a subset $A$ of a set $B$ a proper subset if $A$ is a subset of $B$ (i.e., $A \subseteq B$) and $A \ne B$.

Theorem 1.9. Let $S = \{v_1, \dots, v_n\}$ be a set of vectors in $V$. The following are equivalent:
(1) $S$ is linearly independent and spans $V$.
(2) For every $v \in V$ there are unique scalars $r_1, \dots, r_n \in \mathbb{R}$ so that $v = r_1 v_1 + \dots + r_n v_n$.
(3) $S$ is a minimal spanning set; that is, $S$ spans $V$ and no proper subset of $S$ spans $V$.
(4) $S$ is a maximal linearly independent set; that is, $S$ is linearly independent and any subset $T$ of $V$ that properly contains $S$ is linearly dependent.

Proof. (1) $\Rightarrow$ (2): Let $v \in V$. Since $S$ spans $V$, there are scalars $r_1, \dots, r_n \in \mathbb{R}$ so that $v = r_1 v_1 + \dots + r_n v_n$. If there is another list of scalars $s_1, \dots, s_n \in \mathbb{R}$ with $v = s_1 v_1 + \dots + s_n v_n$, then
$$0 = v - v = (r_1 - s_1)v_1 + \dots + (r_n - s_n)v_n.$$
Since the sequence $v_1, \dots, v_n$ is linearly independent, we have that $r_1 = s_1,\ r_2 = s_2,\ \dots,\ r_n = s_n$.

(2) $\Rightarrow$ (3): By contradiction: Suppose that $A \subseteq S$ is a subset with $A \ne S$ and suppose that $V = \operatorname{span}(A)$. Since $A$ is properly contained in $S$, there is at least one $v_i \in S$ with $v_i \notin A$. After renumbering the $v$'s, if necessary, we may assume that $v_1 \notin A$. Since $A$ spans $V$, any set containing $A$ spans $V$. Thus we may assume that $A = \{v_2, v_3, \dots, v_n\}$. Then $V = \operatorname{span}(A)$, $v_1 \in \operatorname{span}(A)$, and there are scalars $r_2, \dots, r_n \in \mathbb{R}$ so that
$$v_1 = r_2 v_2 + \dots + r_n v_n = 0 v_1 + r_2 v_2 + \dots + r_n v_n.$$

This is one way to write $v_1$ as a linear combination of the vectors $v_1, \dots, v_n$. Another is
$$v_1 = 1 v_1 + 0 v_2 + \dots + 0 v_n,$$
a contradiction to assumption (2).

(3) $\Rightarrow$ (4): First note that $S$ itself is linearly independent: if it were not, then by Theorem 1.2 some $v_i$ would be a linear combination of the remaining vectors of $S$, and the proper subset of $S$ obtained by removing $v_i$ would still span $V$, contradicting the minimality in (3). Now let $T \subseteq V$ be a subset with $S \subseteq T$ and $S \ne T$, and let $v \in T \setminus S$. We know by (3) that $V = \operatorname{span}(S)$. Thus there are scalars $r_1, \dots, r_n \in \mathbb{R}$ so that $v = r_1 v_1 + \dots + r_n v_n$. This gives a nontrivial linear combination of $0$:
$$0 = r_1 v_1 + \dots + r_n v_n + (-1)v,$$
and the sequence of vectors $v_1, \dots, v_n, v$ is linearly dependent. Thus $T$ is linearly dependent.

(4) $\Rightarrow$ (1): By assumption (4) the set $S$ is linearly independent. We have to show that $S$ spans $V$. Let $v \in V$. If $v \in S$, then $v = v_i$ for some $i = 1, 2, \dots, n$ and, in particular, $v = v_i \in \operatorname{span}(S)$. Now let $v \notin S$. By assumption the set $S \cup \{v\}$ is linearly dependent, so there are scalars $t, r_1, \dots, r_n \in \mathbb{R}$, not all $0$, so that
$$0 = r_1 v_1 + \dots + r_n v_n + t v.$$
If $t = 0$, then not all of the $r_i$ are $0$ and $0 = r_1 v_1 + \dots + r_n v_n$, a contradiction to $S$ being a linearly independent set. Thus $t \ne 0$ and
$$v = (-r_1/t)v_1 + \dots + (-r_n/t)v_n \in \operatorname{span}(S).$$

Dimension

It is not hard to show that every nonzero vector space which has a basis must have infinitely many different bases. Here are some examples of bases in $\mathbb{R}^3$:
$$B_1 = \{(1,1,0),\ (1,0,1),\ (0,1,1)\}$$
$$B_2 = \{(1,2,3),\ (4,5,6),\ (7,8,0)\}$$
$$B_3 = \{(1,2,4),\ (1,3,9),\ (1,4,16)\}$$
It turns out that whichever basis of $\mathbb{R}^3$ we choose to construct, every such basis has exactly 3 elements. This is part of a general theorem which states that whenever a vector space has a basis, all the bases of this vector space have the same number of vectors. Before we can prove this theorem we need:

Theorem 1.10. (see the Comparison Theorem 3.11 of the book) Let $V$ be a finite dimensional vector space with $V = \operatorname{span}\{v_1, \dots, v_n\}$. If $u_1, \dots, u_m$ is a sequence of linearly independent vectors, then $m \le n$.

Proof. The idea of the proof is to show that we can replace $j$ vectors in the set $\{v_1, \dots, v_n\}$ by the vectors $u_1, \dots, u_j$, so that the new set of $n$ vectors $\{u_1, \dots, u_j, v_{j+1}, \dots, v_n\}$ is again a spanning set of $V$. Then we show that we must exhaust the set of the $u$'s before we have exhausted the set of the $v$'s; that is, $m \le n$. One difficulty in the proof is that at every stage we need to renumber the vectors $v_1, \dots, v_n$ in order to avoid multiple index sets or renaming the remaining vectors from $\{v_1, \dots, v_n\}$. We start by distinguishing two cases:

Case 1: $u_1 \in \{v_1, \dots, v_n\}$. Then $u_1 = v_i$ for some $1 \le i \le n$. We renumber the vectors $v_1, \dots, v_n$ so that $u_1 = v_1$ to obtain that $\{u_1, v_2, \dots, v_n\} = \{v_1, \dots, v_n\}$ is a spanning set with $n$ elements.

Case 2: $u_1 \notin \{v_1, \dots, v_n\}$. Since $\{v_1, \dots, v_n\}$ is a spanning set of $V$, by Theorem 1.2 the set of vectors $\{u_1, v_1, \dots, v_n\}$ is linearly dependent. Moreover, since $u_1, \dots, u_m$ are linearly independent, $u_1 \ne 0$, and by the Linear Dependence Lemma we can remove one of the $v$'s, say $v_i$, from the spanning set $\{u_1, v_1, \dots, v_n\}$ of $V$. After renumbering the vectors $v_1, \dots, v_n$ so that $v_i$ becomes the first vector on the list, we obtain a new spanning set with $n$ vectors, namely $\{u_1, v_2, \dots, v_n\}$.

Now we suppose that we have added $u$'s and removed $v$'s so that the set
$$\{u_1, \dots, u_{j-1}, v_j, \dots, v_n\}$$
is a spanning set of $V$ of length $n$. In order to add $u_j$ and remove one of the vectors $v_j, \dots, v_n$ we again need to distinguish two cases:

Case 1: $u_j \in \{v_j, \dots, v_n\}$. If $u_j = v_i$ for some $j \le i \le n$, we again renumber the vectors $v_j, \dots, v_n$ so that $u_j = v_j$ and obtain a spanning set $\{u_1, \dots, u_j, v_{j+1}, \dots, v_n\}$ of $n$ vectors.

Case 2: $u_j \notin \{v_j, \dots, v_n\}$. In this case we repeat the argument from above. The set $\{u_1, \dots, u_{j-1}, u_j, v_j, \dots, v_n\}$ is a spanning set of $V$ with $n+1$ elements, where $u_j$ is a linear combination of the vectors $u_1, \dots, u_{j-1}, v_j, \dots, v_n$. By Theorem 1.2 the set of $n+1$ vectors $\{u_1, \dots, u_{j-1}, u_j, v_j, \dots, v_n\}$ is linearly dependent, and $u_1 \ne 0$. Since the set $\{u_1, \dots, u_m\}$ is linearly independent, by the Linear Dependence Lemma there is a vector $v_i$, where $j \le i \le n$, with
$$v_i \in \operatorname{span}\{u_1, \dots, u_j, v_j, \dots, v_{i-1}\}.$$
Removing $v_i$ and renumbering the remaining vectors $v_j, \dots, v_n$, the second part of the Linear Dependence Lemma yields that the set of $n$ vectors $\{u_1, \dots, u_j, v_{j+1}, \dots, v_n\}$ spans $V$.

The process stops when there are no $u$'s or no $v$'s left. If there are no $u$'s left, then $m \le n$, as desired. If there are no $v$'s left, then $\{u_1, \dots, u_n\}$ is a linearly independent spanning set of $V$. If $m > n$, then $u_{n+1} \in \operatorname{span}\{u_1, \dots, u_n\}$, a contradiction to the linear independence of the $u$'s. Thus in this case $n = m$.

Corollary 1.11. (see Theorem 3.14 of the book) Every subspace of a finite dimensional vector space $V$ is finite dimensional.

Proof. Let $U \subseteq V$ be a subspace of $V$. If $U = \{0\}$ we are done. Suppose $U \ne \{0\}$ and take $u_1 \in U$ with $u_1 \ne 0$. If $U = \operatorname{span}\{u_1\}$ we are done. If $U \ne \operatorname{span}\{u_1\}$, take $u_2 \in U \setminus \operatorname{span}\{u_1\}$. Again, if $U = \operatorname{span}\{u_1, u_2\}$ we are done. If $U \ne \operatorname{span}\{u_1, u_2\}$, take $u_3 \in U \setminus \operatorname{span}\{u_1, u_2\}$, etc. This way we obtain a set of vectors $\{u_1, \dots, u_n\} \subseteq U$ with $u_j \notin \operatorname{span}\{u_1, \dots, u_{j-1}\}$. Since $u_1 \ne 0$, by Corollary 1.6 the sequence $u_1, \dots, u_n$ is linearly independent. Theorem 1.10 tells us that the process must stop after finitely many steps. Thus $U$ has a finite spanning set.

Theorem 1.12. Let $V$ be a finite dimensional nonzero vector space. Then:
(a) Any linearly independent set in $V$ is contained in a basis of $V$.
(b) Any spanning set of $V$ contains a basis of $V$.
(See the Contraction Theorem 3.11 and the Expansion Theorem 3.13 of the book.)

Proof. (a) Let $\{u_1, \dots, u_m\} \subseteq V$ be a set of linearly independent vectors in $V$. If $V = \operatorname{span}\{u_1, \dots, u_m\}$ we are done. If not, we expand the list by a vector

SUPPLEMENT TO CHAPTER 3 11 u m+1 V span{u 1,..., u m }. Again if V = span{u 1,..., u m+1 } we are done. If not take u m+2 V span{u 1,..., u m+1 } etc. This way we create a set of vectors {u 1,..., u n } with u j / span{u 1,..., u j 1 } for 2 j n. By Theorem 1.3 this set of vectors is linearly independent. The process must stop after finitely many steps since any finite spanning set of V provides an upper bound to the length of a linearly independent set of vectors of V. (b) Let V = span{v 1,..., v n }. We may remove any v i = 0 from the spanning set and still have a spanning set. Thus we may assume that v 1 0. If {v 1,..., v n } is a set of linearly independent vectors we are done. If not apply the Linear Dependence Lemma and remove one of the v j where 2 j n so that V = span{v 1,..., v j 1, v j+1,..., v n }. Apply the same argument to the spanning set of n 1 vectors {v 1,..., v j 1, v j+1,..., v n } and so on. The process stops when the reduced spanning set of vectors is linearly independent. Theorem 1.13. (see Theorem 3.10 of the book) Any two bases of a finite dimensional vector space contain the same number of vectors. Proof. Let B 1 = {v 1,..., v n } and B 2 = {u 1,..., u m } be two bases of V. Then B 1 is a linearly independent set of V while B 2 is a spanning set of V. Thus by Theorem 1.11 n m. Since B 1 and B 2 are each linearly independent spanning sets of V we can switch the role of B 1 and B 2 in the above argument, that is, B 2 is a linearly independent set and B 1 is a spanning set of V. Thus again by Theorem 1.11 m n. Hence we have n m and m n which implies that n = m. Definition. Let V be a finite dimensional vector space. If a basis of V consists of n vectors we say that V is a vector space of dimension n which is denoted by dimv = n. Corollary 1.14. (see Theorem 3.15 of the book) Let V be a vector space of dimension n. Then: (a) Any linearly independent set of n vectors is a basis of V. (b) Any spanning set of V with exactly n vectors is a basis of V. Proof. (a) Let S = S = {v 1,..., v n } V be a linearly independent subset of V. By Theorem 1.12(a) this set can be extended to a basis of V. On the other hand any basis of V contains exactly n vectors. Thus S must be a basis of V (b) If T = {u 1,..., u n } V is a spanning set of V, by Theorem 1.12(b) a subset of T is a basis of V. Any proper subset of T has fewer than n vectors. Thus T is a basis of V.