Vector Spaces

1. Vector Spaces

A (real) vector space V is a set which has two operations:
1. An association of x, y ∈ V to an element x + y ∈ V. This operation is called vector addition.
2. An association of c ∈ R and x ∈ V to an element cx ∈ V. This operation is called scalar multiplication.

These operations must satisfy certain properties which generalize the familiar properties of vector addition and scalar multiplication in R^n. In particular,

(1) Every vector space V has a zero vector 0_V ∈ V, and every element (vector) x ∈ V has an additive inverse −x ∈ V which satisfies x + (−x) = (−x) + x = 0_V.

The complete list of properties satisfied by the addition and scalar multiplication on a vector space V is:
1. x + y = y + x for all x, y ∈ V.
2. (x + y) + z = x + (y + z) for all x, y, z ∈ V.
3. There exists an element 0_V ∈ V such that x + 0_V = x for all x ∈ V.
4. For every x ∈ V, there exists an element −x ∈ V such that x + (−x) = 0_V.
5. α(x + y) = αx + αy for all α ∈ R and x, y ∈ V.
6. (α + β)x = αx + βx for all α, β ∈ R and x ∈ V.
7. (αβ)x = α(βx) for all α, β ∈ R and x ∈ V.
8. 1x = x for all x ∈ V.

Theorem 1.1. Suppose that V is a vector space. Then
1. If e ∈ V satisfies x + e = x for all x ∈ V, then e = 0_V. (The additive identity is unique.)
2. Suppose x, y ∈ V and x + y = 0_V. Then y = −x. (The additive inverse of x is unique.)
3. Suppose x ∈ V. Then 0x = 0_V.
4. Suppose x ∈ V. Then (−1)x = −x.

We will often write the zero vector of V as 0 or 0_V. Notice that statement 4 is a theorem and is not a definition. You are encouraged to write the proof (using the vector space properties) of this theorem, especially parts 3 and 4.

2. Examples

1) The real numbers R, with the usual addition for vector addition and scalar multiplication being the usual multiplication.
2) The set of all column vectors R^n with the usual vector addition and scalar multiplication.
3) The set of all row vectors R^m with the usual vector addition and scalar multiplication.
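The eight properties can be explored numerically. The sketch below is an illustration of ours (not part of the notes): it models vectors in R^3 as numpy arrays and checks each property on sample vectors and scalars.

```python
import numpy as np

# Sample vectors in R^3 and sample scalars (arbitrary choices, for illustration).
x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 4.0, -1.0])
z = np.array([2.0, 0.0, 1.0])
alpha, beta = 2.0, -3.0

assert np.allclose(x + y, y + x)                              # property 1
assert np.allclose((x + y) + z, x + (y + z))                  # property 2
assert np.allclose(x + np.zeros(3), x)                        # property 3
assert np.allclose(x + (-x), np.zeros(3))                     # property 4
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)    # property 5
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)  # property 6
assert np.allclose((alpha * beta) * x, alpha * (beta * x))    # property 7
assert np.allclose(1.0 * x, x)                                # property 8
print("all eight properties hold on these samples")
```

Passing these checks on samples does not prove the axioms, of course; the proof is the algebra of R^n.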
4) The set of all m × n matrices
   R^(m×n) = { (a_ij), a matrix with rows a_i1, ..., a_in for 1 ≤ i ≤ m, such that a_11, a_12, ..., a_mn ∈ R }
is a vector space with matrix addition for vector addition and scalar multiplication of matrices as scalar multiplication.

5) A more sophisticated example is the set of polynomials P. Let x be an indeterminate.
   P = {a_0 + a_1 x + ⋯ + a_n x^n : n is a positive integer and a_0, a_1, ..., a_n ∈ R}.
Suppose that f = a_0 + a_1 x + ⋯ + a_n x^n ∈ P and g = b_0 + b_1 x + ⋯ + b_m x^m ∈ P. When comparing f and g, it is often convenient to assume that m = n. If, say, m < n, we can write
   g = b_0 + b_1 x + ⋯ + b_m x^m + 0x^(m+1) + ⋯ + 0x^n = b_0 + b_1 x + ⋯ + b_n x^n,
where we set b_i = 0 for m < i ≤ n. The meaning of the statement that x is an indeterminate is that f = g if and only if a_i = b_i for 0 ≤ i ≤ n. This statement is essential for doing calculations with polynomials.

P is a vector space with the following vector addition and scalar multiplication:
   f + g = (a_0 + b_0) + (a_1 + b_1)x + ⋯ + (a_n + b_n)x^n,
and, for c ∈ R,
   cf = ca_0 + ca_1 x + ⋯ + ca_n x^n.
The zero vector is
   0_P = 0 (= 0 + 0x + ⋯ + 0x^n).
Sometimes we will view f ∈ P as a function on R. In this case we write f = f(x). We have that f(α) = a_0 + a_1 α + ⋯ + a_n α^n ∈ R for α ∈ R.

6) A final example is F[a, b], the set of functions on the closed interval [a, b] (F(R) is the set of functions on R). A function f on [a, b] is a rule which associates to every α with a ≤ α ≤ b an element f(α) ∈ R. If g is also a function on [a, b], then f = g if and only if f(α) = g(α) for all α with a ≤ α ≤ b. For f, g ∈ F[a, b], f + g ∈ F[a, b] is defined by the rule (f + g)(α) = f(α) + g(α) for all α ∈ [a, b] (the addition here is the addition in R). For c ∈ R, cf ∈ F[a, b] is defined by (cf)(α) = cf(α) for all α ∈ [a, b] (the multiplication here is the multiplication in R). The zero vector 0_F[a,b] is defined by 0_F[a,b](α) = 0 for all α ∈ [a, b].
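The coefficient calculations in Example 5 can be sketched in code. In the illustration below (function names are ours), a polynomial a_0 + a_1 x + ⋯ + a_n x^n is modeled as its coefficient list [a_0, a_1, ..., a_n], and the shorter list is padded with zeros exactly as in the b_i = 0 convention above.

```python
def poly_add(f, g):
    # Pad the shorter coefficient list with zeros (the b_i = 0 convention),
    # then add coefficientwise, as in the definition of f + g.
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_scale(c, f):
    # cf = c a_0 + c a_1 x + ... + c a_n x^n
    return [c * a for a in f]

# (1 + 2x) + (3 + x^2) = 4 + 2x + x^2
print(poly_add([1, 2], [3, 0, 1]))   # [4, 2, 1]
print(poly_scale(3, [1, 0, 2]))      # [3, 0, 6]
```

The "indeterminate" point of view corresponds precisely to comparing coefficient lists entrywise, which is what list equality does here.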
3. Subspaces

A subset S of a vector space V is a subspace of V if S is a vector space, with the vector operations of V.

Suppose that f ∈ P is a polynomial. The degree of f is degree(f) = −∞ if f = 0, and degree(f) = d if f ≠ 0 and
   f = a_0 + a_1 x + ⋯ + a_(d−1) x^(d−1) + a_d x^d
with a_0, ..., a_d ∈ R and a_d ≠ 0. For d ≥ 0, define
   P_d = {f ∈ P : degree(f) < d} = {a_0 + a_1 x + ⋯ + a_(d−1) x^(d−1) : a_0, a_1, ..., a_(d−1) ∈ R}.
P_d is a subspace of P. This is most easily checked by the following Subspace Theorem.

Theorem 3.1 (Subspace Theorem). A subset S of a vector space V is a subspace of V if and only if the following three conditions hold:
1. The zero vector 0_V ∈ S.
2. If v ∈ S and w ∈ S, then v + w ∈ S.
3. If c ∈ R and v ∈ S, then cv ∈ S.

Condition 1 of the theorem is necessary to exclude the empty set; by (1) of the definition of a vector space (property 3 of the list), every vector space must contain at least the zero vector.

To determine if a subset S of a vector space V is a subspace of V, first determine if 0_V ∈ S. If 0_V ∉ S, then S is not a subspace. If 0_V ∈ S, then you must determine if conditions 2 and 3 hold. The method of establishing this is completely different, depending on whether S is a subspace or not. To show that S is a subspace, you must verify that conditions 2 and 3 hold for all v, w ∈ S and c ∈ R. To show that S is not a subspace, you must give specific vectors v, w ∈ S and show that their sum v + w is not in S, or give a specific vector v ∈ S and a specific c ∈ R and show that cv ∉ S.

Example 1. Let S = {(x, y) ∈ R^2 : x + y = 1}. Determine if S is a subspace of R^2 (the 1 × 2 row vectors).

Solution: 0_(R^2) = (0, 0) ∉ S since 0 + 0 = 0 ≠ 1. Thus S is not a subspace of R^2 by the Subspace Theorem.

Example 2. Let V = {f = a_0 + a_1 x + a_2 x^2 ∈ P_3 : a_0 + a_1 + a_2 = 0}. Determine if V is a subspace of P_3 (the vector space of polynomials in x of degree ≤ 2).

Solution: The zero polynomial 0_(P_3) = 0 = 0 + 0x + 0x^2 satisfies 0 + 0 + 0 = 0, so that 0_(P_3) ∈ V. Thus condition 1 of the Subspace Theorem holds.
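The first step of the subspace test, checking whether the zero vector lies in the set, can be illustrated directly for Examples 1 and 2. In the sketch below the membership functions in_S and in_V are our own illustrative names; polynomials in P_3 are modeled by their coefficient triples (a_0, a_1, a_2).

```python
def in_S(v):
    # S = {(x, y) in R^2 : x + y = 1}
    x, y = v
    return x + y == 1

def in_V(f):
    # V = {a_0 + a_1 x + a_2 x^2 in P_3 : a_0 + a_1 + a_2 = 0},
    # with f given as the coefficient triple (a_0, a_1, a_2).
    return sum(f) == 0

# Condition 1 of the Subspace Theorem: is the zero vector in the set?
print(in_S((0, 0)))     # False: S fails condition 1, so S is not a subspace
print(in_V((0, 0, 0)))  # True: V passes condition 1; conditions 2 and 3 remain
```

Note that passing condition 1, as V does, settles nothing by itself; conditions 2 and 3 still have to be verified for arbitrary elements, which is done in the text that follows.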
Suppose that f = a_0 + a_1 x + a_2 x^2 ∈ V and g = b_0 + b_1 x + b_2 x^2 ∈ V. Since f and g are in V, we have that a_0 + a_1 + a_2 = 0 and b_0 + b_1 + b_2 = 0. Now
   f + g = (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2.
When we sum the coefficients, we get
   (a_0 + b_0) + (a_1 + b_1) + (a_2 + b_2) = (a_0 + a_1 + a_2) + (b_0 + b_1 + b_2) = 0 + 0 = 0.
Thus f + g ∈ V, and condition 2 of the Subspace Theorem holds.

Suppose that f = a_0 + a_1 x + a_2 x^2 ∈ V and c ∈ R. Since f ∈ V, we have that a_0 + a_1 + a_2 = 0. Now
   cf = ca_0 + ca_1 x + ca_2 x^2.
The sum of the coefficients is
   ca_0 + ca_1 + ca_2 = c(a_0 + a_1 + a_2) = c · 0 = 0.
Thus cf ∈ V, and condition 3 of the Subspace Theorem holds.

Since all three conditions of the Subspace Theorem hold, V is a subspace of P_3.

Example 3. Let
   U = { [a_11 a_12; a_21 a_22] ∈ R^(2×2) : a_11 a_12 a_21 a_22 = 0 }.
Determine if U is a subspace of R^(2×2) (the 2 × 2 matrices).

Solution:
   A = [1 0]
       [0 1]
is in U since 1 · 0 · 0 · 1 = 0. Also,
   B = [0 1]
       [1 0]
is in U since 0 · 1 · 1 · 0 = 0. However,
   A + B = [1 1]
           [1 1]
is not in U, since 1 · 1 · 1 · 1 = 1 ≠ 0. Thus condition 2 of the Subspace Theorem does not hold, and so U is not a subspace of R^(2×2).

Let C^i[a, b] be the i-times continuously differentiable functions on the closed interval [a, b], and C^∞[a, b] be the infinitely many times continuously differentiable functions on the closed interval [a, b]. By the Subspace Theorem, and formulas from calculus, these are all subspaces of F[a, b] (Example 6). Similarly, we have subspaces C^i(R) and C^∞(R) of F(R).

4. Span and Linear Independence

Definition 4.1. Suppose that V is a vector space and v_1, ..., v_n ∈ V. The span of v_1, ..., v_n is the set
   Span(v_1, ..., v_n) = {c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n : c_1, ..., c_n ∈ R}.
An expression c_1 v_1 + ⋯ + c_n v_n is called a linear combination of v_1, ..., v_n.

Theorem 4.2. Suppose that V is a vector space and v_1, ..., v_n ∈ V. Then Span(v_1, ..., v_n) is a subspace of V.
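The counterexample in Example 3 can be checked numerically. In this illustrative sketch (the helper name in_U is ours), membership in U is the condition that the product of the four entries is zero, computed with numpy.

```python
import numpy as np

def in_U(M):
    # U = 2x2 matrices with a_11 * a_12 * a_21 * a_22 = 0
    return np.prod(M) == 0

A = np.array([[1, 0], [0, 1]])
B = np.array([[0, 1], [1, 0]])

# A and B are in U, but their sum is not: closure under addition fails.
print(in_U(A), in_U(B), in_U(A + B))  # True True False
```

This is exactly the logic of the solution: one concrete pair of elements whose sum escapes the set is enough to rule out condition 2.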
To prove this, use the Subspace Theorem.

Using the above theorem, we can obtain a geometric understanding of the subspaces of R^n. The zero vector space {0} is a subspace of R^n. The span of a nonzero vector v_1 ∈ R^n is the line in R^n containing 0 and v_1,
   Span(v_1) = {t v_1 : t ∈ R}.
The span of two independent vectors v_1 and v_2 is the plane in R^n containing 0, v_1 and v_2 (independent means that v_1 ≠ 0 and v_2 is not on the line containing 0 and v_1),
   Span(v_1, v_2) = {s v_1 + t v_2 : s, t ∈ R}.
The span of three independent vectors v_1, v_2 and v_3 in R^n is the linear 3-space containing 0, v_1, v_2 and v_3 (independent means that v_1 ≠ 0, v_2 is not on the line containing 0 and v_1, and further, v_3 is not on the plane containing 0, v_1 and v_2). In particular, the span of three independent vectors in R^3 is R^3. More generally, the subspaces of R^n are the linear r-spaces for r ≤ n, containing 0 and r independent vectors v_1, ..., v_r.

Definition 4.3. Suppose that V is a vector space and v_1, ..., v_n ∈ V. v_1, ..., v_n are linearly dependent if there exist c_1, ..., c_n ∈ R, not all zero, such that
   c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n = 0.
v_1, ..., v_n are linearly independent if they are not linearly dependent.

Vectors v_1, ..., v_n are linearly independent precisely when the equation c_1 v_1 + ⋯ + c_n v_n = 0 has only the trivial solution, c_1 = c_2 = ⋯ = c_n = 0. A relation c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n = 0 with c_1, ..., c_n ∈ R not all zero is called a dependence relation on v_1, ..., v_n. We will abbreviate linearly dependent as LD and linearly independent as LI.

Example 4.4. Determine if the vectors
   v_1 = (1, 2, 3, 2), v_2 = (2, 3, 1, 2), v_3 = (3, 1, 5, 2) ∈ R^4
are linearly independent.

Solution: We must solve
(2)   c_1 v_1 + c_2 v_2 + c_3 v_3 = 0
for c_1, c_2, c_3. We must solve the homogeneous system of equations
   c_1 v_1 + c_2 v_2 + c_3 v_3 = (c_1 + 2c_2 + 3c_3, 2c_1 + 3c_2 + c_3, 3c_1 + c_2 + 5c_3, 2c_1 + 2c_2 + 2c_3) = (0, 0, 0, 0).
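Before row reducing by hand, the independence question in Example 4.4 can be settled numerically: v_1, v_2, v_3 are linearly independent exactly when the matrix with these vectors as columns has rank 3. The sketch below is an illustrative check using numpy's rank computation.

```python
import numpy as np

# The vectors of Example 4.4, placed as the columns of a 4x3 matrix A,
# so that A c = 0 is the homogeneous system (2).
v1 = np.array([1, 2, 3, 2])
v2 = np.array([2, 3, 1, 2])
v3 = np.array([3, 1, 5, 2])
A = np.column_stack([v1, v2, v3])

# Rank 3 means the only solution of A c = 0 is c = 0, i.e. the vectors are LI.
print(np.linalg.matrix_rank(A))  # 3
```

A floating-point rank computation is a quick check, not a proof; the exact row reduction carried out next is the argument the notes rely on.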
The coefficient matrix of this system is
   A = [1 2 3]
       [2 3 1]
       [3 1 5]
       [2 2 2]
We compute that the RRE form of A is
   [1 0 0]
   [0 1 0]
   [0 0 1]
   [0 0 0]
Thus
   c_1 = 0, c_2 = 0, c_3 = 0
is the only solution to (2), and so v_1, v_2, v_3 are linearly independent.

Example 4.5. Determine if the polynomials
   f_1 = 1 + x, f_2 = 2 + 3x^2, f_3 = 3 + x + 3x^2 ∈ P_3
are linearly independent.

Solution: We must solve
(3)   c_1 f_1 + c_2 f_2 + c_3 f_3 = 0
for c_1, c_2, c_3. We have
   c_1 f_1 + c_2 f_2 + c_3 f_3 = (c_1 + 2c_2 + 3c_3) + (c_1 + c_3)x + (3c_2 + 3c_3)x^2 = 0 = 0 + 0x + 0x^2.
That is, we seek solutions c_1, c_2, c_3 to the homogeneous system of equations
   c_1 + 2c_2 + 3c_3 = 0
   c_1 + c_3 = 0
   3c_2 + 3c_3 = 0.
The coefficient matrix of this system is
   A = [1 2 3]
       [1 0 1]
       [0 3 3]
We compute that the RRE form of A is
   [1 0 1]
   [0 1 1]
   [0 0 0]
Thus the Standard Form Solution to (3) is:
   c_1 = −t, c_2 = −t, c_3 = t, for t ∈ R.
Thus there are infinitely many solutions to (3). Taking t = 1, we have that
   −f_1 − f_2 + f_3 = 0
is a dependence relation, so f_1, f_2, f_3 are linearly dependent.
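Both row reductions above can be confirmed with exact arithmetic. The sketch below (an illustrative check of ours, using sympy's rref) reproduces the RRE forms for Examples 4.4 and 4.5, and verifies the dependence relation on the coefficient vectors of f_1, f_2, f_3.

```python
from sympy import Matrix

# Example 4.4: three pivot columns, so v_1, v_2, v_3 are LI.
A1 = Matrix([[1, 2, 3], [2, 3, 1], [3, 1, 5], [2, 2, 2]])
R1, pivots1 = A1.rref()
print(pivots1)  # (0, 1, 2)

# Example 4.5: only two pivots, so there is a nontrivial solution.
A2 = Matrix([[1, 2, 3], [1, 0, 1], [0, 3, 3]])
R2, pivots2 = A2.rref()
print(R2)  # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])

# The dependence relation -f_1 - f_2 + f_3 = 0, on coefficient vectors
# (a_0, a_1, a_2): f_1 = 1 + x, f_2 = 2 + 3x^2, f_3 = 3 + x + 3x^2.
f1, f2, f3 = Matrix([1, 1, 0]), Matrix([2, 0, 3]), Matrix([3, 1, 3])
print(-f1 - f2 + f3)  # Matrix([[0], [0], [0]])
```

Working over exact rationals, rref returns precisely the matrices computed by hand above, so this is a faithful check rather than a floating-point approximation.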
Definition 4.6. Suppose that V is a vector space and v_1, ..., v_n ∈ V. {v_1, ..., v_n} is an (ordered) basis of V if
1. Span(v_1, ..., v_n) = V.
2. v_1, ..., v_n are linearly independent.

Theorem 4.7. Suppose that V is a vector space and {v_1, ..., v_n} is a basis of V. Then every element v ∈ V has a unique expansion
   v = c_1 v_1 + ⋯ + c_n v_n
with c_1, ..., c_n ∈ R.

There is a more general definition of a basis, which allows for vector spaces with infinite bases. We will not consider this more general definition here, beyond stating the following theorem.

Theorem 4.8. Suppose that V is a vector space. Then V has a basis.

If a vector space V has a finite basis (Definition 4.6), then V is called a finite dimensional vector space. The vector space {0} is a special case. It has the empty set (the set with no elements) as its basis. The infinite dimensional vector spaces we have looked at in this note are P, F[a, b], F(R), C^i[a, b] and C^i(R).

Dimension

Definition 4.9. The dimension of a vector space V is the number of elements in a basis of V. This number is independent of the choice of basis of V.

Theorem 4.10. Suppose that W is a subspace of a vector space V. Then
1) dim(W) ≤ dim(V).
2) If V is finite dimensional and dim(W) = dim(V), then W = V.

Since the vector space {0} has the empty set as its basis, and the empty set has no elements, we have that dim({0}) = 0.

Theorem 4.11. Suppose that V is an n-dimensional vector space and v_1, ..., v_n ∈ V are linearly independent. Then {v_1, ..., v_n} is a basis of V.

Theorem 4.11 follows from 2) of Theorem 4.10: {v_1, ..., v_n} is a basis of the subspace W = Span(v_1, ..., v_n) of V, so W has dimension n, which is the dimension of V; thus W = V.
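Theorems 4.7 and 4.11 can be illustrated concretely in R^3. In the sketch below (our own example vectors, chosen for illustration), three independent vectors automatically form a basis, and the unique expansion coefficients of any v are found by solving one linear system.

```python
import numpy as np

# Three vectors in R^3; rank 3 confirms they are linearly independent,
# so by Theorem 4.11 they form a basis of R^3.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])
assert np.linalg.matrix_rank(B) == 3

# Theorem 4.7: the expansion v = c_1 b_1 + c_2 b_2 + c_3 b_3 is the unique
# solution of B c = v.
v = np.array([2.0, 3.0, 4.0])
c = np.linalg.solve(B, v)
print(c)  # [-1. -1.  4.]
```

Uniqueness of the expansion corresponds to B being invertible, which is exactly the rank condition checked first.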