ENGINEERING MATH 1 Fall 2009 VECTOR SPACES


A vector space, more specifically a real vector space (as opposed to a complex one, or some even stranger ones), is any set that is closed under an operation of addition and under multiplication by real numbers. To be a bit more precise, if a set V is to be a vector space, then:

1. One has to be able to add any two elements of V to get another element of V. Specifically, an operation + has to be defined so that if x, y ∈ V, one can form x + y, and x + y ∈ V. This operation should have the usual properties: it has to be associative and commutative; there should be a zero element (denoted by 0) that has no effect when added to another element (if x ∈ V, then x + 0 = x); and one should also be able to subtract in V, that is, if x ∈ V then x has an additive inverse -x which, when added to x, results in 0. Subtraction is then defined by x - y = x + (-y).

2. If x ∈ V and c is a real number, it should make sense to multiply x by c to get cx ∈ V. The usual properties should hold; specifically,

   c(x + y) = cx + cy   if c ∈ R, x, y ∈ V,
   (c + d)x = cx + dx   if c, d ∈ R, x ∈ V,
   c(dx) = (cd)x = d(cx)   if c, d ∈ R, x ∈ V,
   1x = x   if x ∈ V.

The last property may seem a bit strange, but one possibly could define some weird product in which 1 times x is not x, and one wants to be sure nobody goes around saying he or she has a vector space in that case.

The elements of V are then called vectors; the real numbers are then called scalars. For us the main examples are:

1. For every pair of positive integers m, n, the set of m × n matrices, with the usual operations, is a vector space. The 0 element of this space is the 0 matrix. Of particular interest are the cases

   m = 1: we then have row matrices or row vectors; this space can be identified with R^n, the n-tuples of real numbers.
   n = 1: we then have column matrices or column vectors; this space can be identified with R^m, the m-tuples of real numbers.
Actually, the most important case is the set R^n of all n-tuples of real numbers, for every positive integer n, with the usual operations. The 0 element is the vector with all components equal to 0.

2. Let I be an interval in R; that is, I is one of the following sets:

   I = (a, b) for some a, b with a < b (a = -∞ or b = ∞ being allowed), or
   I = [a, b) for some a, b with -∞ < a < b (b = ∞ being allowed), or
   I = (a, b] for some a, b with a < b < ∞ (a = -∞ a possibility), or
   I = [a, b] for some a, b with -∞ < a < b < ∞.

Then the set V of all real-valued functions on I is a vector space. If f(x), g(x) are defined for x ∈ I and c ∈ R, we define (f + g)(x) and (cf)(x) in the usual and obvious way:

   (f + g)(x) = f(x) + g(x),   (cf)(x) = c f(x),   (x ∈ I).

The 0 element of this vector space is the identically 0 function: the function f defined by f(x) = 0 for all x ∈ I.
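The pointwise operations of the second example can be written out in a few lines of code; this is a minimal sketch, and the helper names f_add, f_scale, zero are ours, not from the notes.

```python
# Sketch: the vector-space operations on real-valued functions, defined
# pointwise as in the text: (f + g)(x) = f(x) + g(x), (cf)(x) = c*f(x).

def f_add(f, g):
    """Return the function f + g."""
    return lambda x: f(x) + g(x)

def f_scale(c, f):
    """Return the function c*f."""
    return lambda x: c * f(x)

zero = lambda x: 0   # the 0 element: the identically zero function

f = lambda x: x * x
g = lambda x: 2 * x

h = f_add(f, g)           # h(x) = x^2 + 2x
print(h(3))               # 15
print(f_add(f, zero)(3))  # adding the 0 function changes nothing: 9
print(f_scale(5, g)(3))   # 30
```

Note that the "vectors" here are functions; the sum of two functions is again a function on the same interval, which is exactly the closure property required of a vector space.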

Linear Combinations. In every vector space one can combine vectors with scalars to get linear combinations. The precise definition is: Suppose v1, v2, ..., vn are vectors. A linear combination of these vectors is any vector of the form

   v = c1 v1 + c2 v2 + ... + cn vn,

where c1, c2, ..., cn are scalars. The scalars c1, ..., cn are the coefficients of the linear combination.

Examples.

1. In R^3 consider the vectors v1 = (1, 1, 0), v2 = (2, 3, 0) and v3 = (0, 5, 0). Here are a few linear combinations of these vectors:

   2v1 - 7v2 + 4v3 = (2, 2, 0) + (-14, -21, 0) + (0, 20, 0) = (-12, 1, 0),
   3v1 + 2v2 - v3 = (3, 3, 0) + (4, 6, 0) + (0, -5, 0) = (7, 4, 0),
   -v1 + v2 + 0v3 = (-1, -1, 0) + (2, 3, 0) = (1, 2, 0)

(if in a combination a vector appears with coefficient 0, we just omit it; for example, we write -v1 + v2 instead of -v1 + v2 + 0v3),

   2v2 = 0v1 + 2v2 + 0v3 = (4, 6, 0),
   -10v1 + 5v2 = (-10, -10, 0) + (10, 15, 0) = (0, 5, 0) = v3,
   v2 = 0v1 + 1v2 + 0v3.

We can notice a few facts of general interest. The last computation shows v2 is itself a linear combination of v1, v2, v3. This is of course generally true; every vector of a set of vectors is a linear combination of that set: just take all coefficients equal to 0, except the one corresponding to the vector in question, which you take equal to 1. More interestingly, the fifth example shows that v3 is a linear combination of the other vectors of the bunch; that is, a combination of v1 and v2. This means we don't need it for the purpose of forming linear combinations; in every linear combination we can just replace v3 by -10v1 + 5v2 and collect terms, to get a linear combination of v1, v2 alone. For example, in the first computation we get (-12, 1, 0) as a linear combination of just v1, v2 by

   (-12, 1, 0) = 2v1 - 7v2 + 4v3 = 2v1 - 7v2 + 4(-10v1 + 5v2) = 2v1 - 7v2 - 40v1 + 20v2 = -38v1 + 13v2.
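The arithmetic above is easy to check by machine; here is a quick numerical check, taking v1 = (1, 1, 0), v2 = (2, 3, 0), v3 = (0, 5, 0) as in the example (the helper `lincomb` is ours, not from the notes).

```python
# Check the linear-combination computations of Example 1 componentwise.

def lincomb(coeffs, vectors):
    """Return c1*w1 + ... + cn*wn, computed componentwise."""
    return tuple(sum(c * w[i] for c, w in zip(coeffs, vectors))
                 for i in range(len(vectors[0])))

v1, v2, v3 = (1, 1, 0), (2, 3, 0), (0, 5, 0)

print(lincomb((2, -7, 4), (v1, v2, v3)))    # (-12, 1, 0)
print(lincomb((3, 2, -1), (v1, v2, v3)))    # (7, 4, 0)
print(lincomb((-10, 5, 0), (v1, v2, v3)))   # (0, 5, 0), which is v3
# eliminating v3 = -10*v1 + 5*v2 from the first combination:
print(lincomb((-38, 13, 0), (v1, v2, v3)))  # (-12, 1, 0) again
```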
Here is an important definition: A set v1, ..., vn of vectors is linearly independent if no vector from the set is a linear combination of the other vectors from the set. We just saw that v1, v2, v3 as given is not linearly independent. Is the set consisting of v1, v2 linearly independent? This is the same as asking whether one of v1, v2 is a linear combination of the other one; meaning, is v1 = c v2 for some constant c, or is v2 = c v1 for some constant c? It is easy to see that the answer is NO. If v1 = c v2, then c must satisfy (1, 1, 0) = (2c, 3c, 0), thus 2c = 1 and 3c = 1, which is only possible if 2 = 3, and there is good evidence to conclude that 2 ≠ 3. Similarly one sees that v2 = c v1 is impossible. Usually, when one vector of a bunch is a linear combination of others in the bunch, there are other vectors in the bunch with the same property. For example, in our case, we can ask ourselves whether v2 is a linear combination of v1, v3. This is the same as asking whether there are scalars a, b such that v2 = a v1 + b v3, or

   (2, 3, 0) = (a, a, 0) + (0, 5b, 0) = (a, a + 5b, 0).

This leads to the system of equations a = 2, a + 5b = 3, with the immediate solution a = 2, b = 1/5. Thus v2 = 2v1 + (1/5)v3. We can then write the vector of the first computation above without using v2:

   (-12, 1, 0) = 2v1 - 7v2 + 4v3 = 2v1 - 7(2v1 + (1/5)v3) + 4v3 = -12v1 + (13/5)v3.
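These two dependences can be confirmed with exact rational arithmetic; a small check, with v1 = (1, 1, 0), v2 = (2, 3, 0), v3 = (0, 5, 0) as above:

```python
from fractions import Fraction as F

# Check that v2 = 2*v1 + (1/5)*v3, and that eliminating v2 from
# 2*v1 - 7*v2 + 4*v3 gives -12*v1 + (13/5)*v3 = (-12, 1, 0).
v1, v2, v3 = (1, 1, 0), (2, 3, 0), (0, 5, 0)

lhs = tuple(2 * x + F(1, 5) * z for x, z in zip(v1, v3))
assert lhs == v2                 # v2 is a combination of v1 and v3

w = tuple(-12 * x + F(13, 5) * z for x, z in zip(v1, v3))
assert w == (-12, 1, 0)          # same vector as 2*v1 - 7*v2 + 4*v3
print("v2 = 2*v1 + (1/5)*v3 and (-12, 1, 0) = -12*v1 + (13/5)*v3 confirmed")
```

Using Fraction avoids floating-point round-off, so the equality tests are exact.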

Exercise. Show that v1 is a linear combination of v2, v3 and write (-12, 1, 0) as a linear combination of v2, v3.

We'll do one more thing with these vectors: describe the set of all their linear combinations. We don't need all three; since v3 depends on v1, v2, it suffices to describe all the linear combinations of v1, v2. This set consists of all vectors (a, b, c) for which one can find coefficients c1, c2 such that c1 v1 + c2 v2 = (a, b, c). If we equate components, we are asking for all vectors (a, b, c) such that

   c1 + 2c2 = a
   c1 + 3c2 = b
   0c1 + 0c2 = c.

Obviously, c = 0. The other two equations are easy to solve without any problem. By Gauss, Gauss-Jordan, or otherwise, c1 = 3a - 2b, c2 = b - a. In other words, no matter what a, b are, every vector of the form (a, b, 0) is a linear combination of v1, v2, hence also of v1, v2, v3. For later reference we remark that the set of vectors that are combinations of v1, v2, v3 forms a set closed under addition and scalar multiplication containing the 0 vector of the space.

2. Here is a similar example in R^4. Consider the vectors v1 = (1, 2, -1, 3), v2 = (1, 0, 2, 4) and v3 = (1, 1, -1, -1). We want to answer two questions:

(a) Are they linearly independent?
(b) What is the set of all linear combinations of these vectors; i.e., when is a vector (a, b, c, d) of R^4 a linear combination of these vectors?

It would be very tedious to answer the first one by checking first whether v1 depends on v2, v3, then whether v2 depends on v1, v3, and finally whether v3 depends on v1, v2. There is a better way to check for independence of vectors v1, ..., vn in a vector space. One considers the equation

   (1)   c1 v1 + ... + cn vn = 0   (the 0 element of the space).

If one can show there is a solution with at least one ci ≠ 0, then one has one's answer. For example, just as an example, suppose that there is a solution of (1) with c2 ≠ 0.
Then we can solve for v2:

   v2 = -(c1/c2)v1 - (c3/c2)v3 - ... - (cn/c2)vn,

and the set is not linearly independent. But if no such solution exists, then the set is linearly independent. Let us try it with our vectors. The vector equation c1 v1 + c2 v2 + c3 v3 = 0 results in a system of 4 linear equations in the unknowns c1, c2, c3:

   c1 + c2 + c3 = 0
   2c1 + c3 = 0
   -c1 + 2c2 - c3 = 0
   3c1 + 4c2 - c3 = 0.

To solve it we row reduce the augmented matrix of the system:

   [  1  1  1 | 0 ]
   [  2  0  1 | 0 ]
   [ -1  2 -1 | 0 ]
   [  3  4 -1 | 0 ].

The canonical row reduced form is

   [ 1  0  0 | 0 ]
   [ 0  1  0 | 0 ]
   [ 0  0  1 | 0 ]
   [ 0  0  0 | 0 ],

showing that the only solution is c1 = 0, c2 = 0, c3 = 0. The set is linearly independent.

Exercise. Verify that the stated row reduced form is the right one.

To answer the second question, we need to determine all vectors (a, b, c, d) for which the system

   c1 + c2 + c3 = a
   2c1 + c3 = b
   -c1 + 2c2 - c3 = c
   3c1 + 4c2 - c3 = d

has a solution. The augmented matrix is now

   [  1  1  1 | a ]
   [  2  0  1 | b ]
   [ -1  2 -1 | c ]
   [  3  4 -1 | d ]

and the exact same row operations as before take it to the canonical form

   [ 1  0  0 | (-2a + 3b + c)/3  ]
   [ 0  1  0 | (a + c)/3         ]
   [ 0  0  1 | (4a - 3b - 2c)/3  ]
   [ 0  0  0 | 2a - 4b - 3c + d  ].

The conclusion is that a vector v = (a, b, c, d) is a linear combination of the given three vectors v1, v2, v3 if and only if 2a - 4b - 3c + d = 0; that is, d = -2a + 4b + 3c, in which case v = c1 v1 + c2 v2 + c3 v3 with

   c1 = (-2a + 3b + c)/3,   c2 = (a + c)/3,   c3 = (4a - 3b - 2c)/3.

For example, consider the vector (4, -3, 2, -14). Here a = 4, b = -3, c = 2, d = -14. We see that the equation d = -2a + 4b + 3c holds (-8 - 12 + 6 = -14), so this vector is a linear combination of the given v1, v2, v3. Taking

   c1 = (-2a + 3b + c)/3 = -5,   c2 = (a + c)/3 = 2,   c3 = (4a - 3b - 2c)/3 = 7,

we verify that indeed

   (4, -3, 2, -14) = -5(1, 2, -1, 3) + 2(1, 0, 2, 4) + 7(1, 1, -1, -1).

Here is something we learned while going through this example: A set v1, ..., vn of vectors is linearly independent if and only if the equation c1 v1 + ... + cn vn = 0 (the 0 here being the 0 element of the vector space) has the only solution c1 = c2 = ... = cn = 0. (This is frequently given as the definition in most textbooks.)

3. Let V be the set of all functions defined on (-∞, ∞). We consider the functions as vectors, the 0 vector being the identically zero function. Consider f1, f2, f3, f4, where

   f1(x) = x,   f2(x) = sin x,   f3(x) = cos x,   f4(x) = sin(x + π/6).

Are these functions linearly independent? If we remember our trigonometry well, it is easy to see that the answer is no, because

   sin(x + π/6) = sin x cos(π/6) + cos x sin(π/6) = (√3/2) sin x + (1/2) cos x;

that is, f4 = (√3/2)f2 + (1/2)f3. What if we remove f4; are f1, f2, f3 linearly independent? We may later on see an easier way of deciding this; for now we do a brute force attack. We suppose that c1 f1 + c2 f2 + c3 f3 = 0 and try to either solve this equation for some triple c1, c2, c3 not all zero, or show that the only solution is c1 = c2 = c3 = 0. Because the 0 element is the identically zero function, the equation c1 f1 + c2 f2 + c3 f3 = 0 is equivalent to

   c1 x + c2 sin x + c3 cos x = 0 for all values of x.

Let's give values to x and see what happens. For example, if x = 0 we should have c1·0 + c2 sin 0 + c3 cos 0 = 0; in other words, c3 = 0. With c3 = 0, we look for c1, c2 so that c1 x + c2 sin x = 0 for all x. Take x = π and we get c1 π = 0, thus c1 = 0. Finally, if c2 sin x = 0 for all x, we must have c2 = 0. The answer to the last question is yes: f1, f2, f3 are linearly independent.

Subspaces. If V is a vector space, a subspace is any subset that contains the 0 element and is closed under addition and scalar multiplication. A subspace of a vector space is a vector space in its own right, with the operations of V.

Examples. In each case, verify that the set is, or isn't, a subspace of the indicated vector space.

1. V = R^3 and W the set of all vectors of the form (a, b, 0) (triples of real numbers with third component 0). W is a subspace of V. Why? Because

(a) The 0 vector of V is (0, 0, 0), of the form (a, b, 0) with a = b = 0.
(b) If v, w are vectors in W, then (say) v = (a, b, 0), w = (c, d, 0), so that v + w = (a + c, b + d, 0) is again a vector with third component 0, hence in W.
(c) If v = (a, b, 0) is in W and c is a scalar, then cv = (ca, cb, 0) ∈ W.

2. Let V = R^2 and consider the set W consisting of all vectors (a, b) with a = b. It is a subspace.
3. Let V = R^2 and consider the set W consisting of all vectors (a, b) with a^2 = b^2. It is NOT a subspace. Why? Because, for example, the vectors (1, 1) and (1, -1) are in W, but (1, 1) + (1, -1) = (2, 0) is not in W.

4. Here is what could be our main example: Let V be any vector space and let v1, ..., vn be vectors in V. The set W of ALL linear combinations of the vectors v1, ..., vn is a subspace of V. In fact,

(a) 0 = 0v1 + ... + 0vn, so 0 is a linear combination of the vectors and hence in W. (Note: In the equation 0 = 0v1 + ... + 0vn, the zero on the left hand side is the zero element of the vector space; the zeros on the right hand side are the real number 0. One usually knows, or should know, from the context which is which.)
(b) Suppose v, w are in W. Then we can write v = c1 v1 + ... + cn vn and w = d1 v1 + ... + dn vn for scalars (real numbers) c1, ..., cn, d1, ..., dn. Then v + w = (c1 + d1)v1 + ... + (cn + dn)vn is also a linear combination of v1, ..., vn, hence in W.
(c) Suppose v is in W and c is a scalar. We can write v = c1 v1 + ... + cn vn for scalars (real numbers) c1, ..., cn; then cv = (cc1)v1 + ... + (ccn)vn is also a linear combination of v1, ..., vn, hence in W.

5. Two silly examples: If V is a vector space, then V itself is a subspace of itself. Also, the set consisting only of 0, the 0 element of V, by itself is a subspace of V.

6. Suppose that I = (a, b) is an open interval of real numbers and let V be the set of ALL real valued functions with domain I, W the set of all continuous functions on I, and X the set of all differentiable functions on I. It should be clear that W is a subset of V, and X of W. It is also easy to see that W is a subspace of the vector space V, and X is a subspace of the vector space W. It is sort of evident that a subspace X of a subspace W of V is also a subspace of V.

7. An important example: Let A = [aij] be an m × n matrix. Consider the homogeneous system of equations Ax = 0; that is, the system written in less abbreviated form as

   a11 x1 + a12 x2 + ... + a1n xn = 0
   a21 x1 + a22 x2 + ... + a2n xn = 0
   ...
   am1 x1 + am2 x2 + ... + amn xn = 0.

We will identify here an n-tuple (b1, ..., bn) with the column vector having entries b1, ..., bn. In this way we can talk of a solution of the system as being an n-tuple (x1, ..., xn) satisfying the equations, or a column vector x such that Ax = 0. For example, for the system

   2x1 + x2 - 3x3 = 0
   x1 + 2x2 - 3x3 = 0,

the vector (1, 1, 1) is a solution. We identify this vector with the column matrix with entries 1, 1, 1; it is a solution because if we replace x1 by 1, x2 by 1 and x3 by 1 in the equations, the equations are satisfied. Or we can say that it is a solution because

   [ 2  1  -3 ] [ 1 ]   [ 0 ]
   [ 1  2  -3 ] [ 1 ] = [ 0 ]
                [ 1 ]

(the 2 × 1 zero matrix). The solutions of a homogeneous system of m linear equations in n unknowns are n-tuples, so the set of ALL solutions is a subset of R^n. Here is the important fact: The set of all solutions of a homogeneous system of linear equations is a subspace of R^n, n = number of unknowns. In fact, given such a system, the 0 vector of R^n, that is, the vector (0, ..., 0) (n zeroes), is ALWAYS a solution.
If vectors (column vectors) x, y are solutions, then Ax = 0 and Ay = 0, so that A(x + y) = Ax + Ay = 0 + 0 = 0, and x + y is also a solution. Finally, if x is a solution and c is a scalar, then A(cx) = cAx = c0 = 0, so cx is also a solution. This shows that the solution space of a system of m homogeneous linear equations in n unknowns is a subspace of R^n.
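The closure argument just given can be checked concretely on the small system of example 7; here is a minimal sketch (the helper `is_solution` is ours), taking the system as 2x1 + x2 - 3x3 = 0, x1 + 2x2 - 3x3 = 0:

```python
# Check that solutions of a homogeneous system form a subspace:
# (1, 1, 1) solves the system, and so do sums and scalar multiples
# of solutions.
A = [[2, 1, -3], [1, 2, -3]]

def is_solution(x):
    """True if A x = 0, checked row by row."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) == 0
               for row in A)

x = (1, 1, 1)
y = (2, 2, 2)   # another solution (a scalar multiple of x)
assert is_solution(x) and is_solution(y)
assert is_solution(tuple(xi + yi for xi, yi in zip(x, y)))  # x + y
assert is_solution(tuple(5 * xi for xi in x))               # 5x
print("the solution set is closed under + and scalar multiplication")
```

Of course this only samples a few solutions; the general statement is the algebraic argument A(x + y) = Ax + Ay = 0 and A(cx) = cAx = 0 above.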

Bases and Dimension. Given a vector space we can ask for a minimal set of vectors that spans the space. We say the space is spanned by vectors v1, ..., vn if and only if every vector of the space is a linear combination of v1, ..., vn.

Examples.

1. The vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R^3. How do we verify that? To say these vectors span R^3 is equivalent to saying that, given any vector of R^3, in other words any triple of numbers (a, b, c), we can find coefficients c1, c2, c3, c4 such that the equation

   (a, b, c) = c1(1, 1, 2) + c2(1, 2, 3) + c3(1, 0, 4) + c4(2, 2, 2)

is solvable. In terms of components, this means that the system of equations

   c1 + c2 + c3 + 2c4 = a
   c1 + 2c2 + 2c4 = b
   2c1 + 3c2 + 4c3 + 2c4 = c

can be solved for all choices of a, b, c. To see whether this is or isn't so, we consider the augmented matrix and row reduce:

   [ 1  1  1  2 | a ]      [ 1  1  1  2 | a      ]      [ 1  0  2  2 | 2a - b ]
   [ 1  2  0  2 | b ]  →   [ 0  1 -1  0 | b - a  ]  →   [ 0  1 -1  0 | b - a  ]
   [ 2  3  4  2 | c ]      [ 0  1  2 -2 | c - 2a ]      [ 0  0  3 -2 | c - a - b ]

   [ 1  0  2  2    | 2a - b        ]      [ 1  0  0  10/3 | (8a - b - 2c)/3  ]
   [ 0  1 -1  0    | b - a         ]  →   [ 0  1  0  -2/3 | (-4a + 2b + c)/3 ]
   [ 0  0  1 -2/3  | (c - a - b)/3 ]      [ 0  0  1  -2/3 | (-a - b + c)/3   ]

And we are done. This shows that we can select c4 at will; any value will do. And then

   c1 = -(10/3)c4 + (8a - b - 2c)/3
   c2 = (2/3)c4 + (-4a + 2b + c)/3
   c3 = (2/3)c4 + (-a - b + c)/3.

We get several conclusions from these computations. The first one is that the vectors (1, 1, 2), (1, 2, 3), (1, 0, 4), (2, 2, 2) span R^3. But they are not a minimal set. Since c4 can be chosen any way we wish, one choice is c4 = 0, and that means that the fourth vector is superfluous.

2. In the previous example we saw a set of vectors spanning R^3. It is not the most obvious set. For any n the vectors

   e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, ..., 0, 1)

span R^n. The vector ej, for any j in the range 1 to n, is the n-tuple having a 1 in the j-th place and all other entries equal to 0. It is clear (is it?) that if x = (x1, ..., xn) ∈ R^n, then x = x1 e1 + x2 e2 + ... + xn en is a linear combination of these vectors.

3. Consider the set of all linear combinations of the functions e^x, e^(-x); this is a vector space, and a subspace of the vector space of all real-valued functions defined on the interval (-∞, ∞). Show that cosh x, sinh x are in the span of these functions; in fact, show that the spans of e^x, e^(-x) and of cosh x, sinh x are identical.

Solution. By definition

   cosh x = (1/2)e^x + (1/2)e^(-x),   sinh x = (1/2)e^x - (1/2)e^(-x),

so cosh x, sinh x are linear combinations of e^x, e^(-x), and every linear combination of cosh x, sinh x is also one of e^x, e^(-x). In fact, we will have

   c1 cosh x + c2 sinh x = c1((1/2)e^x + (1/2)e^(-x)) + c2((1/2)e^x - (1/2)e^(-x))
                         = ((c1 + c2)/2)e^x + ((c1 - c2)/2)e^(-x).

The span of cosh x, sinh x is thus contained in that of e^x, e^(-x). But, conversely,

   e^x = cosh x + sinh x,   e^(-x) = cosh x - sinh x,

so that e^x, e^(-x) are linear combinations of cosh x, sinh x, and the span of e^x, e^(-x) is included in that of cosh x, sinh x; in other words, the two spans are equal.

Here are some important facts and definitions.

Definition. A set of vectors v1, ..., vn of a vector space V is a basis of the vector space if (and only if) it is a minimal spanning set. That is:

1. Every vector of V is a linear combination of the vectors v1, ..., vn.
2. No proper subset of the set v1, ..., vn spans all of V; in particular, if we remove the vector v1, then v1 cannot be obtained as a linear combination of v2, ..., vn; same for v2, etc. In other words, no vector from the set can be a linear combination of the remaining vectors.

An equivalent definition, the one usually found in textbooks, is: A set of vectors v1, ..., vn is a basis of the vector space V if (and only if) the set is linearly independent and spans V. A property of a basis, which can also be used as a definition, is: A set v1, ..., vn is a basis of V if and only if every vector of V can be written uniquely as a linear combination of the vectors v1, ..., vn.
For example, in our first example above, with v1 = (1, 1, 2), v2 = (1, 2, 3), v3 = (1, 0, 4), v4 = (2, 2, 2), the set v1, v2, v3, v4 is not a basis of R^3 because we have the freedom of choosing c4 any way we wish. However, once we put c4 = 0 (i.e., exclude v4), all freedom of choice is gone; the vector (a, b, c) can be written in the form c1 v1 + c2 v2 + c3 v3 in only one way; the only choice of c1, c2, c3 that works is given by what we found above, namely (to repeat):

   c1 = (8a - b - 2c)/3
   c2 = (-4a + 2b + c)/3
   c3 = (-a - b + c)/3.

For example, (1, 3, 5) can be written as a linear combination of v1, v2, v3 in the form

   (1, 3, 5) = -(5/3)v1 + (7/3)v2 + (1/3)v3.
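The coefficient formulas and the worked example can be verified with exact rational arithmetic; a quick check, taking v1 = (1, 1, 2), v2 = (1, 2, 3), v3 = (1, 0, 4):

```python
from fractions import Fraction as F

# Check the unique representation of (1, 3, 5) in the basis v1, v2, v3,
# using c1 = (8a - b - 2c)/3, c2 = (-4a + 2b + c)/3, c3 = (-a - b + c)/3.
v1, v2, v3 = (1, 1, 2), (1, 2, 3), (1, 0, 4)
a, b, c = 1, 3, 5

c1 = F(8*a - b - 2*c, 3)
c2 = F(-4*a + 2*b + c, 3)
c3 = F(-a - b + c, 3)
print(c1, c2, c3)   # -5/3 7/3 1/3

w = tuple(c1*x + c2*y + c3*z for x, y, z in zip(v1, v2, v3))
assert w == (a, b, c)   # the combination reproduces (1, 3, 5) exactly
```

The same three formulas work for any (a, b, c), which is exactly the "unique representation" property of a basis.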

Exercise. Write the vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) as linear combinations of v1, v2, v3.

Definition. A vector space is finite dimensional if it has a finite basis. We come to a very important result: Suppose V is a finite dimensional vector space. Then any two bases have the same number of elements. That number is called the dimension of the vector space.

Examples and more.

1. We saw that (1, 1, 2), (1, 2, 3), (1, 0, 4) is a basis of R^3. But the obvious basis is (1, 0, 0), (0, 1, 0), (0, 0, 1). In general, for every n, the set of n vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, ..., 0, 1) is easily seen to be linearly independent and (as mentioned above) spans R^n, showing that R^n has dimension n.

To be continued in class. Complete notes may eventually appear.
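The standard basis and the expansion x = x1 e1 + ... + xn en can be sketched in a few lines (the helper `e` is ours, not from the notes):

```python
# Illustration of the standard basis of R^n: e_j has a 1 in the j-th
# place and 0 elsewhere, and x = x1*e1 + ... + xn*en recovers x.
n = 4

def e(j):
    """Standard basis vector e_j of R^n (1-indexed)."""
    return tuple(1 if i == j - 1 else 0 for i in range(n))

x = (7, -2, 0, 5)
recombined = tuple(sum(x[j] * e(j + 1)[i] for j in range(n))
                   for i in range(n))
print(recombined)   # (7, -2, 0, 5)
assert recombined == x
```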