Vector Space Basics

(Remark: these notes are highly formal and may be a useful reference to some students; however, I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational approach. Chapter 3 of Braun also covers most of this material. I will assume basic knowledge about matrices, matrix multiplication, matrix inversion, row reduction, etc.)

1 Abstract Vector Spaces

Definition 1. A (real) vector space is a set V with two binary operations + : V × V → V and · : R × V → V, called vector addition and scalar multiplication respectively, such that all the following properties hold:

1. (commutativity of vector addition) ∀u, v ∈ V, u + v = v + u

2. (associativity of vector addition) ∀u, v, w ∈ V, u + (v + w) = (u + v) + w

3. (existence of additive identity) ∃z ∈ V such that ∀y ∈ V, z + y = y + z = y

(a) (corollary to properties (1)-(3): there is only one such z; call it 0_V)

4. (existence of additive inverse) ∀u ∈ V ∃w ∈ V such that u + w = w + u = 0_V

(a) (we say that w is an additive inverse to u if w + u = u + w = 0_V; property (4) says that every u ∈ V has at least one additive inverse)

5. (scalar multiplication distributes over vector addition) ∀u, v ∈ V, ∀c ∈ R, c · (u + v) = (c · u) + (c · v)

6. (scalar multiplication distributes over addition in R) ∀u ∈ V, ∀a, b ∈ R, (a + b) · u = (a · u) + (b · u)

7. (compatibility of scalar multiplication with multiplication in R) ∀u ∈ V, ∀a, b ∈ R, (ab) · u = a · (b · u)

8. (identity law for scalar multiplication) ∀u ∈ V, 1 · u = u
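Before deriving consequences of these axioms, here is a minimal numerical sketch (assuming Python with NumPy, which is outside the scope of these notes) that spot-checks all eight properties for R^n with its usual componentwise operations; floating-point arithmetic is only approximately associative and distributive, hence allclose rather than exact equality.

```python
import numpy as np

# Spot-check of the eight vector space axioms for R^n with componentwise
# operations. The vectors and scalars are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n = 5
u, v, w = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
a, b = 2.0, -3.0

assert np.allclose(u + v, v + u)                 # (1) commutativity
assert np.allclose(u + (v + w), (u + v) + w)     # (2) associativity
assert np.allclose(np.zeros(n) + u, u)           # (3) additive identity
assert np.allclose(u + (-u), np.zeros(n))        # (4) additive inverse
assert np.allclose(a * (u + v), a * u + a * v)   # (5) distributes over vector +
assert np.allclose((a + b) * u, a * u + b * u)   # (6) distributes over + in R
assert np.allclose((a * b) * u, a * (b * u))     # (7) compatibility
assert np.allclose(1.0 * u, u)                   # (8) identity law
```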

The above list of assumptions is essentially minimal to obtain the full power of linear algebra; the price for this generality is that several "obvious" results are not actually completely obvious. We will now show that all the usual behavior we desire does follow from the above assumptions.

Lemma 2. Let u ∈ V; then 0 · u = 0_V. Also, if c ∈ R then c · 0_V = 0_V.

Proof. By property (6) and the fact that 0 + 0 = 0 holds in R, we have:

0 · u = (0 + 0) · u = (0 · u) + (0 · u)

Let w ∈ V be an additive inverse of 0 · u; such an element of V does exist by property (4). In particular we have

(0 · u) + w = w + (0 · u) = 0_V

Hence

0_V = w + (0 · u) = w + [(0 · u) + (0 · u)]

Using the associativity of vector addition, this implies

0_V = [w + (0 · u)] + (0 · u)

But w is an additive inverse of 0 · u, so this implies (using property (3))

0_V = 0_V + (0 · u) = 0 · u

In particular we have 0 · u = 0_V. Taking u = 0_V yields 0 · 0_V = 0_V; therefore, using property (7), we have

c · 0_V = c · (0 · 0_V) = (c0) · 0_V = 0 · 0_V = 0_V

Lemma 3. Let u ∈ V and let v be an additive inverse of u; then v = (−1) · u.

Proof. First, by the previous lemma, (−1) · u is itself an additive inverse of u, because for instance we have:

u + ((−1) · u) = (1 · u) + ((−1) · u) = (1 + (−1)) · u = 0 · u = 0_V

(which properties have been used?) On the other hand, since v is an additive inverse of u we have

v + u = u + v = 0_V

therefore

v = v + 0_V = v + [u + ((−1) · u)] = (v + u) + ((−1) · u) = 0_V + ((−1) · u) = (−1) · u

so v = (−1) · u, as desired.

We can conclude from the above lemmas that each u ∈ V has a unique additive inverse, and moreover it is equal to (−1) · u. This allows us to define vector subtraction in the following way: if u, v ∈ V,

u − v = u + ((−1) · v)

In particular, vector subtraction − : V × V → V is now its own binary operation, distinct from vector addition, and we can check that the usual rules of subtraction are obeyed.

Here are a number of examples of vector spaces, ranging from very concrete to highly abstract:

Example 4. R^n is a vector space if, for every x = (x_1, x_2, ..., x_n) ∈ R^n and y = (y_1, y_2, ..., y_n) ∈ R^n, and c ∈ R, we define

x + y = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)
c · x = (cx_1, cx_2, ..., cx_n)

Example 5. The set of all m × n real matrices, M_mn, is a vector space under entrywise addition and entrywise scalar multiplication (the usual addition and scalar multiplication operations for matrices).

Example 6. Let A ∈ M_mn be a fixed matrix. Then the set of all x ∈ R^n such that Ax = 0_{R^m} is a vector space under the usual vector addition and scalar multiplication operations of R^n.

Example 7. Let A ∈ M_mn be a fixed matrix and let b ∈ R^m, b ≠ 0_{R^m}, be a nonzero vector such that the inhomogeneous equation Ax = b has at least one solution, say x_0. Then the set of all x ∈ R^n such that Ax = b is not a vector space under the usual operations of R^n. However, it is a vector space under the following operations, call them ⊕ and ⊙, where x, y ∈ R^n and c ∈ R:

x ⊕ y = (x − x_0) + (y − x_0) + x_0
c ⊙ x = c(x − x_0) + x_0

(It is not at all obvious that these operations define a vector space; it has to be checked carefully.) Note that a different choice of x_0 would result in different vector space operations on the set of solutions of the inhomogeneous system, because for instance x ⊕ x = x if and only if x = x_0.

Example 8. The set of all polynomials with degree at most 5, call it P_5, is a vector space under the following operations: for any two polynomials p, q, and c ∈ R,

(p + q)(t) = p(t) + q(t)
(c · p)(t) = cp(t)

Note that P_5 is in some sense "the same" as R^6, because the addition and scalar multiplication of fifth-degree polynomials is the same as addition and scalar multiplication of their coefficients, which are six in number.

Example 9. The set of all polynomials (in one variable) with degree at most n, written P_n, is a vector space under the same operations as in Example 8. (Note that P_n is somehow "the same" as R^{n+1}.)

Example 10. The set P of all polynomials (in one variable) is a vector space under the same operations as in Example 8. These can be expressed as p(t) = ∑_{n=0}^∞ a_n t^n where all but finitely many a_n's are zero.

Example 11. The set of all functions f : R → R is a vector space under the following operations: for two functions f, g : R → R and c ∈ R,

(f + g)(t) = f(t) + g(t)
(c · f)(t) = cf(t)
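Example 7 is a good candidate for a machine check, since its shifted operations are so unusual. Below is a minimal sketch (assuming NumPy; the matrix A, the particular solution x_0, and the null-space vector n_1 are illustrative choices, not data from these notes) verifying closure and the fact that x_0 plays the role of the zero vector.

```python
import numpy as np

# Sketch of Example 7: the solution set of Ax = b with the shifted operations
# x (+) y = (x - x0) + (y - x0) + x0 and c (.) x = c(x - x0) + x0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x0 = np.array([1.0, 2.0, 3.0])   # one particular solution
b = A @ x0                        # so Ax0 = b by construction

def oplus(x, y):
    return (x - x0) + (y - x0) + x0

def odot(c, x):
    return c * (x - x0) + x0

# Two more solutions of Ax = b, obtained by adding a null-space vector of A.
n1 = np.array([1.0, -1.0, 1.0])   # A @ n1 = 0
x, y = x0 + n1, x0 - 2.0 * n1
assert np.allclose(A @ x, b) and np.allclose(A @ y, b)

# Closure: the shifted operations stay inside the solution set...
assert np.allclose(A @ oplus(x, y), b)
assert np.allclose(A @ odot(5.0, x), b)
# ...and x0 acts as the zero vector: x (+) x0 = x.
assert np.allclose(oplus(x, x0), x)
```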

Example 12. The set of all power series with non-zero radius of convergence is a vector space under the usual addition and scalar multiplication. (This requires us to prove that the sum of two power series with non-zero radius of convergence again has non-zero radius of convergence.)

Example 13. The set of all formal power series (which may have radius of convergence equal to zero) is a vector space under the following operations:

∑_{n=0}^∞ a_n X^n + ∑_{n=0}^∞ b_n X^n = ∑_{n=0}^∞ (a_n + b_n) X^n

c · (∑_{n=0}^∞ a_n X^n) = ∑_{n=0}^∞ (ca_n) X^n

Note that in this definition we cannot generally substitute any real value for X (except perhaps X = 0) because we are not guaranteed any convergence. For example, ∑_{n=0}^∞ n! X^n is a valid formal power series but it does not converge for any real X ≠ 0. The formal power series is a useful construction when you want to prove abstract results which would hold for any convergent series, without actually considering the detailed convergence process in your proof.
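Formal power series are easy to manipulate on a computer precisely because no convergence is involved: a series is just its coefficient sequence. The sketch below (plain Python; the function names add and scale are mine, purely illustrative) implements the two operations above on truncated coefficient lists.

```python
from itertools import zip_longest
import math

# Formal power series as (truncated) coefficient lists: coeffs[n] is the
# coefficient of X^n. Only the coefficients matter; X is never evaluated.

def add(p, q):
    """(sum a_n X^n) + (sum b_n X^n) = sum (a_n + b_n) X^n."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def scale(c, p):
    """c * (sum a_n X^n) = sum (c * a_n) X^n."""
    return [c * a for a in p]

# The series sum n! X^n has radius of convergence zero, yet its coefficients
# are perfectly good data: no convergence is needed to add or scale it.
p = [math.factorial(n) for n in range(6)]   # 1 + X + 2X^2 + 6X^3 + ...
q = [1] * 6                                 # 1 + X + X^2 + ...
print(add(p, q))      # [2, 2, 3, 7, 25, 121]
print(scale(3, p))    # [3, 3, 6, 18, 72, 360]
```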

1.1 Subspaces

Checking that you have defined a vector space is usually a two-step process. First you have to show that the vector addition and scalar multiplication operations actually map into V (and not into some bigger set, or nowhere at all), and second you have to verify all eight properties stated in the definition of a vector space. This is extremely painful and tedious in general, which is why we introduce the following definition:

Definition 14. Let V be a vector space, with vector addition + and scalar multiplication ·, and let S be a subset of V such that 0_V ∈ S. Suppose that the following two properties hold:

∀x, y ∈ S, x + y ∈ S
∀x ∈ S, ∀c ∈ R, c · x ∈ S

Then it follows from the definition of vector spaces that S is itself a vector space with operations + and · inherited from V. We say that the set S, equipped with the operations + and · from V, is a subspace of V.

Remark 15. Note that usually we only know that x + y ∈ V and c · x ∈ V; thus only very special subsets of V can be subspaces. The convenience of this definition is the following: we already have a number of mathematical objects which we know are vector spaces (such as R^n, or the set of all polynomials in one variable). Hence if we have a set S which is embedded in some vector space V, and we can show that S is closed under the operations of V, then S is automatically a vector space under those same operations. (If we want to put different operations on S then this trick does not work!)

Remark 16. Clearly an equivalent definition is obtained if we let S be a nonempty subset of V which is closed under addition and scalar multiplication (then it automatically follows that 0_V ∈ S). The empty set is never a vector space, since it does not contain a zero element (because it does not contain any elements!). On the other hand, if V is any vector space then the singleton set {0_V} is actually a subspace of V, and a vector space in its own right. Most theorems we will prove will either hold trivially or fail trivially for the trivial vector space {0_V}. When theorems fail for the trivial vector space we will try to say "for any non-trivial vector space V..."; when theorems hold trivially for the trivial vector space, we will not provide a separate proof for this case (since it is trivial).

From now on (unless stated otherwise) we will write cx instead of c · x for scalar multiplication, and + will be understood as vector addition as above; additionally, 0_V will simply be written 0.

Example 17. Let A ∈ M_mn; then the set

S = {x ∈ R^n such that Ax = 0}

is a subspace of R^n (under the usual addition and scalar multiplication in R^n). To prove this, note that if Ax = 0 and Ay = 0 and c ∈ R, then A(x + y) = 0 and A(cx) = 0; and clearly 0 ∈ S.

2 Linear Transformations

Definition 18. Let V, W be vector spaces and suppose T : V → W is a map. (This means that for every v ∈ V, there is an assignment T(v) ∈ W; the word "map" is interchangeable with the word "function".) We say that T is a linear map, or a linear transformation, if both of the following properties hold:

∀u, v ∈ V, T(u + v) = T(u) + T(v)
∀u ∈ V, ∀c ∈ R, T(cu) = cT(u)

(Note that in either formula, the operations on the left-hand side occur in V whereas the operations on the right-hand side occur in W.) We sometimes abbreviate T u in place of T(u) when T is a linear transformation.

Proposition 19. Let U, V, W be vector spaces and suppose T : U → V and S : V → W are linear transformations. Then the composition S ∘ T : U → W is also a linear transformation.

Definition 20. Let V be a vector space; then we define the identity transformation Id_V : V → V by Id_V(v) = v.

Lemma 21. The identity transformation on any vector space V is a linear transformation.
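Looking back at Example 17, the closure properties are easy to check numerically. A minimal sketch, assuming NumPy (the matrix A is an illustrative choice, and the null-space vectors are extracted via the singular value decomposition):

```python
import numpy as np

# Example 17, concretely: S = {x in R^n : Ax = 0} is closed under + and
# under scalar multiplication. A is an illustrative choice.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so the null space has dimension 2

# Null-space vectors from the SVD: the rows of Vt beyond rank(A).
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
x, y = Vt[rank], Vt[rank + 1]

assert np.allclose(A @ x, 0) and np.allclose(A @ y, 0)  # x, y lie in S
assert np.allclose(A @ (x + y), 0)                      # closure under +
assert np.allclose(A @ (7.0 * x), 0)                    # closure under scalars
assert np.allclose(A @ np.zeros(3), 0)                  # 0 lies in S
```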

Lemma 22. Let V, W be vector spaces and let T : V → W be a linear transformation. Then T ∘ Id_V = T and Id_W ∘ T = T.

Example 23. Let A ∈ M_mn be any m × n real matrix, and define the map T : R^n → R^m by

T x = Ax

In other words, T x is what we get if we view x ∈ R^n as a column vector and multiply on the left by A. That this defines a linear transformation follows from the properties of matrix multiplication.

Example 24. Let P_n denote the space of polynomials with degree at most n. Define the map T : P_n → P_{n−1} by

T p = p′

where p′(t) = ∑_{k=0}^n k a_k t^{k−1} is the derivative of p(t) = ∑_{k=0}^n a_k t^k. By the properties of differentiation, this defines a linear transformation.

Example 25. Define the map T : M_mn → M_nm by

T A = A^T

that is, T takes A to the transpose of A. Then T is a linear transformation by the properties of the transpose.

Example 26. Fix any nonsingular matrix B ∈ M_nn and define the map T : M_nn → M_nn by

T A = B^{−1} A B

Then T is a linear transformation by the properties of matrix multiplication.

Sometimes we are interested in linear transformations that completely identify two spaces, so that (at least as far as vector space structure is concerned) the two spaces are "the same".

Definition 27. Let V, W be vector spaces and let T : V → W be a linear map. We say that T is one-to-one if the following property holds:

∀u, v ∈ V, T(u) = T(v) =⇒ u = v

Equivalently (by contrapositive), T is one-to-one if u ≠ v implies T(u) ≠ T(v); that is, distinct points map to distinct points.

Definition 28. Let V, W be vector spaces and let T : V → W be a linear map. We say that T is onto if

∀w ∈ W ∃v ∈ V such that T(v) = w

In other words, T is onto if its range is all of W.

Definition 29. Let V, W be vector spaces and let T : V → W be a linear map. We say that T is a linear isomorphism if it is one-to-one and onto. If there exists a linear isomorphism T : V → W then we say that V and W are linearly isomorphic.

These definitions are highly abstract, so we will try to make them more concrete with some examples.

Example 30. Let P_5 denote the space of polynomials with degree at most five. Define the map T : R^6 → P_5 by

T((a_0, ..., a_5)) = p_{(a_0,...,a_5)}, where p_{(a_0,...,a_5)}(t) = ∑_{k=0}^5 a_k t^k

Then T : R^6 → P_5 is a linear isomorphism.

Definition 31. Let V, W be vector spaces and let T : V → W be a linear transformation. We say that T is invertible if there exists a linear transformation S : W → V such that S ∘ T = Id_V and T ∘ S = Id_W. Such a transformation S is called an inverse of T.

Remark 32. If T : V → W is a linear transformation of vector spaces V, W and there exists a map S : W → V (not assumed linear) such that S ∘ T = Id_V and T ∘ S = Id_W, then it automatically follows that S is a linear transformation. The proof is two lines:

Sw_1 + Sw_2 = S(T(Sw_1 + Sw_2)) = S(T Sw_1 + T Sw_2) = S(w_1 + w_2)
cS(w) = S(T(cS(w))) = S(cT Sw) = S(cw)

Proposition 33. A linear transformation has at most one inverse.

Proof. Let T : V → W be a linear transformation of vector spaces V, W; furthermore, suppose that T has two inverses, S_1 : W → V and S_2 : W → V. Then we have

S_1 = S_1 ∘ Id_W = S_1 ∘ (T ∘ S_2) = (S_1 ∘ T) ∘ S_2 = Id_V ∘ S_2 = S_2

hence S_1 = S_2.

Since a linear transformation T can have at most one inverse, when it has one we call it the inverse of T and we write it T^{−1}.

Example 34. Let A ∈ M_nn be a nonsingular matrix with inverse matrix A^{−1}. Define the linear transformations S, T : R^n → R^n by

Sx = Ax
T x = A^{−1} x

Then S, T are both invertible linear transformations; furthermore, S^{−1} = T and T^{−1} = S.

Theorem 35. Let V, W be vector spaces and let T : V → W be a linear transformation. Then T is invertible if and only if T is a linear isomorphism.

Proof. Assume T is invertible, with inverse T^{−1} : W → V. We see that T is one-to-one because

v_1 − v_2 = T^{−1}(T(v_1 − v_2)) = T^{−1}(T v_1 − T v_2)

so if T v_1 = T v_2 then v_1 − v_2 = T^{−1}(0_W) = 0_V, i.e. v_1 = v_2. Additionally, for any w ∈ W we have w = T(T^{−1} w), so w = T v where v = T^{−1} w; hence T is onto. Since T is one-to-one and onto, T is a linear isomorphism.

Now suppose instead that T is a linear isomorphism. Since T is onto, for any w ∈ W there is some v ∈ V such that T v = w; moreover, since T is one-to-one, there can be at most one such v. Therefore we can define a map S : W → V so that Sw is the unique vector v ∈ V such that T v = w. We have

∀w ∈ W, T(Sw) = w

by definition of S, and therefore T ∘ S = Id_W. Additionally,

∀v ∈ V, S(T v) = v

and again this follows from the definition of S, so S ∘ T = Id_V. By Remark 32, S is a linear transformation; altogether we can conclude that S is an inverse of T, and in particular T is invertible.

Remark 36. If T : V → W is a linear isomorphism then T^{−1} : W → V is also a linear isomorphism.

3 Linear Independence, Bases, Dimension

Definition 37. Let V be a vector space. A subset E ⊆ V is said to be linearly dependent if there exists a finite collection of distinct elements v_1, v_2, ..., v_N ∈ E, and scalars c_1, c_2, ..., c_N ∈ R, such that at least one c_i ≠ 0 and

c_1 v_1 + c_2 v_2 + ... + c_N v_N = 0

Definition 38. Let V be a vector space. A subset E ⊆ V is said to be linearly independent if it is not linearly dependent.

Remark 39. The empty set is linearly independent.

Definition 40. Let V be a vector space; we say that v ∈ V is a linear combination (or finite linear combination) of the vectors v_1, v_2, ..., v_N ∈ V if there exist scalars c_1, c_2, ..., c_N ∈ R such that

v = c_1 v_1 + c_2 v_2 + ... + c_N v_N

Lemma 41. If V is a vector space and E ⊆ V is a subset, then E is linearly dependent if and only if there exists a vector v ∈ E which is a linear combination of other elements v_1, v_2, ..., v_N ∈ E.

Definition 42. Let V be a vector space and let E ⊆ V be a subset. Then we define span E to be the set of all (finite) linear combinations of elements of E. We also define span ∅ = {0_V}.

Lemma 43. If V is a vector space and E ⊆ V, then span E is a subspace of V. Moreover, if E ⊆ W ⊆ V and W is a subspace of V, then span E ⊆ W. Thus span E is the smallest subspace of V containing every element of E.

Definition 44. Let V be a vector space and let B ⊆ V be a subset. We say that B is a basis of V if B is linearly independent and span B = V.
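For V = R^n, Definition 44 can be tested mechanically: a finite set of vectors is a basis exactly when the matrix having those vectors as columns is square of full rank (full rank gives independence, and n independent vectors span R^n). A minimal sketch, assuming NumPy (the helper is_basis_of_Rn and the sample vectors are illustrative, not part of these notes):

```python
import numpy as np

# In R^n, n vectors form a basis iff the matrix with those vectors as
# columns has rank n: they are then independent and span R^n.
def is_basis_of_Rn(vectors):
    M = np.column_stack(vectors)
    n = M.shape[0]
    return M.shape[1] == n and np.linalg.matrix_rank(M) == n

b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 1.0])
print(is_basis_of_Rn([b1, b2, b3]))        # True: independent and spanning
print(is_basis_of_Rn([b1, b2, b1 + b2]))   # False: third vector is dependent
```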

Definition 45. Let V be a vector space; if there exists a subset B ⊆ V such that B is a basis of V and B is a finite set, then we say that V is finite dimensional. If V is not finite dimensional then we say that V is infinite dimensional.

Example 46. The space P_n of polynomials (in one variable) with degree at most n is finite dimensional, because the set {1, t, t^2, ..., t^n} is a basis of P_n. The space P of all polynomials (in one variable) is infinite dimensional.

Lemma 47. Let V, W be vector spaces and let T : V → W be a linear transformation. If T is a linear isomorphism and V is finite dimensional, then W is also finite dimensional.

Lemma 48. Let V be an infinite dimensional vector space; then, for every n ∈ N there exists a linearly independent subset E ⊆ V such that E has exactly n elements.

Proof. Use induction. For n = 1 this is trivial: let E = {x_0} for any x_0 ∈ V with x_0 ≠ 0 (such an x_0 exists because the trivial vector space is finite dimensional). Suppose now that for some n ∈ N there exists a linearly independent subset E ⊆ V such that E has exactly n elements. We claim that span E ≠ V; indeed, if this were not the case then V would be finite dimensional. Therefore, there exists a vector z ∈ V such that z ∉ span E. Then E ∪ {z} is a linearly independent subset of V having exactly n + 1 elements.

Lemma 49. Let V be a finite dimensional vector space, with a finite basis B having exactly N elements. Then every linearly independent subset of V has at most N elements.

Proof. Let E ⊆ V be a linearly independent subset; we will assume E has at least N + 1 elements to reach a contradiction, hence proving the lemma. Let v_1, v_2, ..., v_{N+1} ∈ E be N + 1 distinct elements of E. Since B is a basis of V, each v ∈ V is a linear combination of elements of B. Denoting the elements of B as w_1, w_2, ..., w_N, we have numbers c_{i,j} ∈ R such that

v_1 = c_{1,1} w_1 + c_{2,1} w_2 + ... + c_{N,1} w_N
v_2 = c_{1,2} w_1 + c_{2,2} w_2 + ... + c_{N,2} w_N
...
v_{N+1} = c_{1,N+1} w_1 + c_{2,N+1} w_2 + ... + c_{N,N+1} w_N

Arrange the numbers c_{i,j} as the following N × (N + 1) matrix:

C = [ c_{1,1}  c_{1,2}  ...  c_{1,N+1}
      c_{2,1}  c_{2,2}  ...  c_{2,N+1}
      ...
      c_{N,1}  c_{N,2}  ...  c_{N,N+1} ]

We can solve the equation Cx = 0 (with x ∈ R^{N+1}) by row reduction. In reduced row echelon form (RREF) each row can have at most one pivot; since there are N rows, there can be at most N pivots in the RREF. Hence there is at least one free variable, which can take on any real value. Therefore there are infinitely many solutions to the equation Cx = 0, and this certainly implies that there exists some x ≠ 0 such that Cx = 0. Call this vector x̃ = (x̃_1, x̃_2, ..., x̃_{N+1}) ≠ 0. Now consider the following vector:

ṽ = x̃_1 v_1 + x̃_2 v_2 + ... + x̃_{N+1} v_{N+1}

Using the more compact summation notation, this can be written

ṽ = ∑_{k=1}^{N+1} x̃_k v_k

But v_k = ∑_{j=1}^N c_{j,k} w_j, therefore

ṽ = ∑_{k=1}^{N+1} ∑_{j=1}^N c_{j,k} x̃_k w_j

Re-arranging, this says

ṽ = ∑_{j=1}^N ( ∑_{k=1}^{N+1} c_{j,k} x̃_k ) w_j

But ∑_{k=1}^{N+1} c_{j,k} x̃_k is just the jth entry of the vector Cx̃, and by construction Cx̃ = 0. Therefore ∑_{k=1}^{N+1} c_{j,k} x̃_k = 0 for each j ∈ {1, 2, ..., N} and we have

ṽ = ∑_{j=1}^N 0 · w_j = 0

Hence ṽ = 0. Then again we have ṽ = ∑_{k=1}^{N+1} x̃_k v_k, hence

x̃_1 v_1 + x̃_2 v_2 + ... + x̃_{N+1} v_{N+1} = 0

Since v_1, v_2, ..., v_{N+1} are distinct elements of E, and the numbers x̃_k are not all zero, this implies that the set E is not linearly independent, so we have a contradiction.

Theorem 50. Let V be a vector space and let W ⊆ V be a subspace. If V is finite dimensional then W is finite dimensional.

Proof. Suppose W is infinite dimensional. By Lemma 48, for each natural number n there is a linearly independent subset E of W having exactly n elements. But a linearly independent subset of W is also a linearly independent subset of V. Therefore, for each natural number n there is a linearly independent subset E of V having exactly n elements. On the other hand, V is finite dimensional, so it has a finite basis B. Let N be the number of elements of B. Then by Lemma 49, any linearly independent subset of V has at most N elements. But we just said that, for every n ∈ N, V has a linearly independent subset E having exactly n elements; choosing n = N + 1 yields the contradiction.
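The pivot-counting step in the proof of Lemma 49 is also an algorithm: given N + 1 coefficient columns in R^N, row reduction (or, numerically, the SVD used below) produces a nonzero x̃ with Cx̃ = 0, i.e. an explicit linear dependency. A minimal sketch assuming NumPy, with an illustrative random matrix:

```python
import numpy as np

# Lemma 49 in action: N + 1 = 4 coefficient columns in R^3 (N = 3) must
# admit a nonzero dependency, because C has more columns than rows.
rng = np.random.default_rng(1)
C = rng.standard_normal((3, 4))   # N x (N+1)

# A nonzero solution of Cx = 0: the last right-singular vector spans the
# (at least one-dimensional) null space, the "free variable" of the proof.
_, _, Vt = np.linalg.svd(C)
x = Vt[-1]

assert np.linalg.norm(x) > 0      # x is nonzero...
assert np.allclose(C @ x, 0)      # ...and encodes x1 v1 + ... + x4 v4 = 0
print(x)
```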

Remark 51. Note that in Theorem 50 we have proven that if W is a subspace of the finite dimensional space V then W is finite dimensional; in particular, W has a finite basis. However, we did not actually construct any particular basis for W; indeed, given a basis B of V, it is entirely possible that B ∩ W = ∅. Due to Theorem 50, we do not always have to exhibit a finite basis to show that a vector space W is finite dimensional; it is sufficient to show that W is linearly isomorphic to a subspace of a finite dimensional space.

Example 52. Let V be the set of all smooth functions f : R → R such that

∀t ∈ R, f″(t) − f(t) = 0

Now V is a vector space, and it is a subspace of the space of all smooth functions on R, but that larger vector space is not finite dimensional. To show that V is finite dimensional, we can define the following map T : R^2 → V:

T(c_1, c_2) = c_1 f_1 + c_2 f_2

where f_1(t) = e^t and f_2(t) = e^{−t}. Then T is one-to-one because the Wronskian W[f_1, f_2] = −2 ≠ 0; also, T is onto because all solutions of the ODE are of the form c_1 f_1 + c_2 f_2 for some constants c_1, c_2. Hence V is linearly isomorphic to the finite dimensional space R^2, so V is itself finite dimensional. (Note that we could equally well observe that {f_1, f_2} is a basis of V in order to conclude that V is finite dimensional.)
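Example 52 can be sanity-checked symbolically; the sketch below assumes SymPy (not otherwise used in these notes) and verifies that f_1, f_2 solve the ODE and that their Wronskian is the nonzero constant −2.

```python
import sympy as sp

# Example 52, checked symbolically: f1 = e^t and f2 = e^{-t} solve
# f'' - f = 0, and their Wronskian f1*f2' - f1'*f2 is the constant -2.
t = sp.symbols('t')
f1, f2 = sp.exp(t), sp.exp(-t)

assert sp.simplify(sp.diff(f1, t, 2) - f1) == 0
assert sp.simplify(sp.diff(f2, t, 2) - f2) == 0

wronskian = sp.simplify(f1 * sp.diff(f2, t) - sp.diff(f1, t) * f2)
print(wronskian)   # -2, nonzero, so {f1, f2} is linearly independent
```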

Theorem 53. Let V be a finite dimensional vector space; furthermore, let B_1 be a basis of V, and suppose B_2 is also a basis of V. Then B_1 and B_2 are both finite sets and they have the same number of elements.

Proof. Since V is finite dimensional, there is a finite basis of V; call it B_0. Let N_0 ∈ N ∪ {0} be the number of elements of B_0. By Lemma 49, since B_0 is a finite basis of V and B_1 is a linearly independent subset of V, we find that B_1 has at most N_0 elements. Moreover, again by Lemma 49, since B_0 is a finite basis of V and B_2 is a linearly independent subset of V, we find that B_2 has at most N_0 elements. In particular, both B_1 and B_2 are finite bases. Now since B_1, B_2 are finite sets, let N_1, N_2 ∈ N ∪ {0} denote (respectively) the sizes of B_1, B_2. By Lemma 49, since B_1 is a finite basis of V and B_2 is a linearly independent subset of V, we have that N_2 ≤ N_1. Then again, since B_2 is a finite basis of V and B_1 is a linearly independent subset of V, we have that N_1 ≤ N_2. Therefore N_1 = N_2.

Definition 54. Let V be a finite dimensional vector space (then V has a finite basis, because that is what it means to be finite dimensional). The dimension of V is defined to be the number of elements in a basis of V; by Theorem 53, it does not matter which basis we choose. We write the dimension of V as dim V. If V is infinite dimensional we may write dim V = ∞ as a convenient (but nonrigorous) shorthand.

Theorem 55. Let V, W be vector spaces and let T : V → W be a linear transformation. If T is a linear isomorphism, and either V or W is finite dimensional, then both V and W are finite dimensional and dim V = dim W.

Proof. Simply observe, if V is finite dimensional, that the image of any basis of V under T is a basis of W. Similarly, if W is finite dimensional, then the image of any basis of W under T^{−1} is a basis of V.

Example 56. (Euclidean space) dim R^n = n

Example 57. (polynomials of degree at most n) dim P_n = n + 1

Example 58. (all m × n real matrices) dim M_mn = mn

Example 59. (all polynomials) dim P = ∞

Theorem 60. Let V be a finite-dimensional vector space with dim V = n. Then V is linearly isomorphic to R^n.

Proof. Let B = {v_1, v_2, ..., v_n} be a finite basis of V. Define a map T : R^n → V as follows:

T(c_1, c_2, ..., c_n) = c_1 v_1 + c_2 v_2 + ... + c_n v_n

It is trivial to check that T is a linear map. Clearly T is onto, since any v ∈ V can be written as a linear combination of the vectors v_1, v_2, ..., v_n (since B is a basis). So it only remains to show that T is one-to-one. Suppose there are numbers c_1, c_2, ..., c_n and c′_1, c′_2, ..., c′_n such that

T(c_1, c_2, ..., c_n) = T(c′_1, c′_2, ..., c′_n)

Then by the definition of T we have

c_1 v_1 + c_2 v_2 + ... + c_n v_n = c′_1 v_1 + c′_2 v_2 + ... + c′_n v_n

Therefore

(c_1 − c′_1) v_1 + (c_2 − c′_2) v_2 + ... + (c_n − c′_n) v_n = 0

But B is a basis, hence linearly independent, so we conclude c_1 − c′_1 = 0, c_2 − c′_2 = 0, ..., c_n − c′_n = 0. Hence

(c_1, c_2, ..., c_n) = (c′_1, c′_2, ..., c′_n)

so T is one-to-one.

4 Matrix Representation of Linear Transformations

We have seen in Theorem 60 that, just by choosing a basis, any finite-dimensional vector space can be regarded as equivalent (in the sense of linear isomorphism) to a copy of R^n. (Note carefully that extra structures, such as dot products, are not necessarily preserved even for linear isomorphisms from R^n to itself.) We have also seen that if A ∈ M_mn is a matrix, then A defines a linear transformation R^n → R^m by left-multiplication of any (column) vector x ∈ R^n. What we are going to show is that any linear transformation R^n → R^m arises as left-multiplication by some m × n matrix. Though we will not go into all the details (which you can find in any linear algebra textbook), by combining this result with Theorem 60, any linear transformation of finite-dimensional vector spaces V and W can be represented by a matrix. Of course, the matrix will depend on your choice of bases for V and W; there is a standard rule (written in any linear algebra textbook) for transforming the matrix of a linear transformation from one pair of bases to another pair. We will not discuss those details.

Theorem 61. Let T : R^n → R^m be a linear transformation (where elements of R^n and R^m are regarded as column vectors). Then there exists a unique m × n matrix A ∈ M_mn such that

∀x ∈ R^n, T x = Ax

where Ax is the usual matrix-vector product.

Proof. Let us first prove the uniqueness. Suppose that A, B ∈ M_mn are two matrices that both coincide with T; in that case, we clearly have

∀x ∈ R^n, Ax = Bx

Therefore, taking x = e_j (with 1 ≤ j ≤ n) and dotting both sides against e_i (with 1 ≤ i ≤ m), we have

∀j, 1 ≤ j ≤ n, ∀i, 1 ≤ i ≤ m: e_i^T A e_j = e_i^T B e_j

But this is equivalent to the following statement:

∀j, 1 ≤ j ≤ n, ∀i, 1 ≤ i ≤ m: a_ij = b_ij

in particular A = B.

Now we turn to the existence. Let us write y_j = T e_j ∈ R^m for 1 ≤ j ≤ n; furthermore, let us define the numbers a_ij, with 1 ≤ i ≤ m and 1 ≤ j ≤ n, by the following formula:

a_ij = e_i^T y_j

Define the matrix A as follows:

A = [ a_11 a_12 ... a_1n
      a_21 a_22 ... a_2n
      ...
      a_m1 a_m2 ... a_mn ]

Clearly A e_j = y_j. Let x ∈ R^n; then we can write x = c_1 e_1 + c_2 e_2 + ... + c_n e_n = ∑_{k=1}^n c_k e_k. Therefore,

T x = ∑_{k=1}^n c_k T e_k = ∑_{k=1}^n c_k y_k

On the other hand,

Ax = ∑_{k=1}^n c_k A e_k = ∑_{k=1}^n c_k y_k

Since both T x and Ax are equal to ∑_{k=1}^n c_k y_k, it follows that T x = Ax. But x ∈ R^n was arbitrary, so we conclude that

∀x ∈ R^n, T x = Ax