General Vector Spaces


CHAPTER 4  General Vector Spaces

CHAPTER CONTENTS
4.1 Real Vector Spaces
4.2 Subspaces
4.3 Linear Independence
4.4 Coordinates and Basis
4.5 Dimension
4.6 Change of Basis
4.7 Row Space, Column Space, and Null Space
4.8 Rank, Nullity, and the Fundamental Matrix Spaces
4.9 Basic Matrix Transformations in R^2 and R^3
4.10 Properties of Matrix Transformations
4.11 Geometry of Matrix Operators on R^2

INTRODUCTION  Recall that we began our study of vectors by viewing them as directed line segments (arrows). We then extended this idea by introducing rectangular coordinate systems, which enabled us to view vectors as ordered pairs and ordered triples of real numbers. As we developed properties of these vectors we noticed patterns in various formulas that enabled us to extend the notion of a vector to an n-tuple of real numbers. Although n-tuples took us outside the realm of our visual experience, they gave us a valuable tool for understanding and studying systems of linear equations. In this chapter we will extend the concept of a vector yet again by using the most important algebraic properties of vectors in R^n as axioms. These axioms, if satisfied by a set of objects, will enable us to think of those objects as vectors.

4.1 Real Vector Spaces

In this section we will extend the concept of a vector by using the basic properties of vectors in R^n as axioms, which if satisfied by a set of objects, guarantee that those objects behave like familiar vectors.

Vector Space Axioms

The following definition consists of ten axioms, eight of which are properties of vectors in R^n that were stated in Theorem 3.1.1. It is important to keep in mind that one does not prove axioms; rather, they are assumptions that serve as the starting point for proving theorems.

In this text scalars will be either real numbers or complex numbers. Vector spaces with real scalars will be called real vector spaces and those with complex scalars will be called complex vector spaces. There is a more general notion of a vector space in which scalars can come from a mathematical structure known as a field, but we will not be concerned with that level of generality. For now, we will focus exclusively on real vector spaces, which we will refer to simply as vector spaces. We will consider complex vector spaces later.

DEFINITION 1  Let V be an arbitrary nonempty set of objects on which two operations are defined: addition, and multiplication by numbers called scalars. By addition we mean a rule for associating with each pair of objects u and v in V an object u + v, called the sum of u and v; by scalar multiplication we mean a rule for associating with each scalar k and each object u in V an object ku, called the scalar multiple of u by k. If the following axioms are satisfied by all objects u, v, w in V and all scalars k and m, then we call V a vector space and we call the objects in V vectors.

1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V.
5. For each u in V, there is an object -u in V, called a negative of u, such that u + (-u) = (-u) + u = 0.
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + m)u = ku + mu
9. k(mu) = (km)u
10. 1u = u

Observe that the definition of a vector space does not specify the nature of the vectors or the operations. Any kind of object can be a vector, and the operations of addition and scalar multiplication need not have any relationship to those on R^n. The only requirement is that the ten vector space axioms be satisfied. In the examples that follow we will use four basic steps to show that a set with two operations is a vector space.
To Show That a Set with Two Operations Is a Vector Space

Step 1. Identify the set V of objects that will become vectors.
Step 2. Identify the addition and scalar multiplication operations on V.
Step 3. Verify Axioms 1 and 6; that is, adding two vectors in V produces a vector in V, and multiplying a vector in V by a scalar also produces a vector in V. Axiom 1 is called closure under addition, and Axiom 6 is called closure under scalar multiplication.
Step 4. Confirm that Axioms 2, 3, 4, 5, 7, 8, 9, and 10 hold.

Historical Note  The notion of an abstract vector space evolved over many years and had many contributors. The idea crystallized with the work of the German mathematician Hermann Günther Grassmann (1809-1877), who published a paper in 1862 in which he considered abstract systems of unspecified elements on which he defined formal operations of addition and scalar multiplication. Grassmann's work was controversial, and others, including Augustin Cauchy, laid reasonable claim to the idea. [Image: Sueddeutsche Zeitung Photo/The Image Works]
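The ten axioms can also be spot-checked numerically. The sketch below samples random vectors in R^n with the usual componentwise operations and tests the eight identity axioms; this does not prove anything (only Theorem 3.1.1 does), and all helper names are our own.

```python
import random

def add(u, v):
    # Componentwise addition of n-tuples.
    return tuple(x + y for x, y in zip(u, v))

def scale(k, u):
    # Componentwise scalar multiplication.
    return tuple(k * x for x in u)

def close(u, v, tol=1e-9):
    # Floating-point-tolerant comparison of n-tuples.
    return all(abs(x - y) <= tol for x, y in zip(u, v))

def check_axioms(n, trials=100):
    """Spot-check Axioms 2-5 and 7-10 for R^n on random samples."""
    for _ in range(trials):
        u, v, w = (tuple(random.uniform(-10, 10) for _ in range(n))
                   for _ in range(3))
        k, m = random.uniform(-10, 10), random.uniform(-10, 10)
        zero = (0.0,) * n
        assert close(add(u, v), add(v, u))                              # Axiom 2
        assert close(add(u, add(v, w)), add(add(u, v), w))              # Axiom 3
        assert close(add(zero, u), u)                                   # Axiom 4
        assert close(add(u, scale(-1, u)), zero)                        # Axiom 5
        assert close(scale(k, add(u, v)), add(scale(k, u), scale(k, v)))  # Axiom 7
        assert close(scale(k + m, u), add(scale(k, u), scale(m, u)))    # Axiom 8
        assert close(scale(k, scale(m, u)), scale(k * m, u))            # Axiom 9
        assert close(scale(1, u), u)                                    # Axiom 10
    return True

print(check_axioms(4))
```

Axioms 1 and 6 (the closure axioms) hold here by construction, since `add` and `scale` always return n-tuples.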

Our first example is the simplest of all vector spaces in that it contains only one object. Since Axiom 4 requires that every vector space contain a zero vector, the object will have to be that vector.

EXAMPLE 1  The Zero Vector Space
Let V consist of a single object, which we denote by 0, and define 0 + 0 = 0 and k0 = 0 for all scalars k. It is easy to check that all the vector space axioms are satisfied. We call this the zero vector space.

Our second example is one of the most important of all vector spaces: the familiar space R^n. It should not be surprising that the operations on R^n satisfy the vector space axioms because those axioms were based on known properties of operations on R^n.

EXAMPLE 2  R^n Is a Vector Space
Let V = R^n, and define the vector space operations on V to be the usual operations of addition and scalar multiplication of n-tuples; that is,

u + v = (u_1, u_2, ..., u_n) + (v_1, v_2, ..., v_n) = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n)
ku = (ku_1, ku_2, ..., ku_n)

The set V = R^n is closed under addition and scalar multiplication because the foregoing operations produce n-tuples as their end result, and these operations satisfy Axioms 2, 3, 4, 5, 7, 8, 9, and 10 by virtue of Theorem 3.1.1.

Our next example is a generalization of R^n in which we allow vectors to have infinitely many components.

EXAMPLE 3  The Vector Space of Infinite Sequences of Real Numbers
Let V consist of objects of the form

u = (u_1, u_2, ..., u_n, ...)

in which u_1, u_2, ..., u_n, ... is an infinite sequence of real numbers. We define two infinite sequences to be equal if their corresponding components are equal, and we define addition and scalar multiplication componentwise by

u + v = (u_1, u_2, ..., u_n, ...) + (v_1, v_2, ..., v_n, ...) = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n, ...)
ku = (ku_1, ku_2, ..., ku_n, ...)

In the exercises we ask you to confirm that V with these operations is a vector space. We will denote this vector space by the symbol R^∞.

[Figure 4.1.1: a voltage signal E(t) plotted against time t.]
Vector spaces of the type in Example 3 arise when a transmitted signal of indefinite duration is digitized by sampling its values at discrete time intervals (Figure 4.1.1). In the next example our vectors will be matrices. This may be a little confusing at first because matrices are composed of rows and columns, which are themselves vectors (row vectors and column vectors). However, from the vector space viewpoint we are not

concerned with the individual rows and columns but rather with the properties of the matrix operations as they relate to the matrix as a whole.

EXAMPLE 4  The Vector Space of 2 × 2 Matrices
Let V be the set of 2 × 2 matrices with real entries, and take the vector space operations on V to be the usual operations of matrix addition and scalar multiplication; that is,

u + v = [u_11  u_12; u_21  u_22] + [v_11  v_12; v_21  v_22] = [u_11 + v_11  u_12 + v_12; u_21 + v_21  u_22 + v_22]    (1)

ku = k[u_11  u_12; u_21  u_22] = [ku_11  ku_12; ku_21  ku_22]

Note that Equation (1) involves three different addition operations: the addition operation on vectors, the addition operation on matrices, and the addition operation on real numbers.

The set V is closed under addition and scalar multiplication because the foregoing operations produce 2 × 2 matrices as the end result. Thus, it remains to confirm that Axioms 2, 3, 4, 5, 7, 8, 9, and 10 hold. Some of these are standard properties of matrix operations. For example, Axiom 2 follows from Theorem 1.4.1(a) since

u + v = [u_11  u_12; u_21  u_22] + [v_11  v_12; v_21  v_22] = [v_11  v_12; v_21  v_22] + [u_11  u_12; u_21  u_22] = v + u

Similarly, Axioms 3, 7, 8, and 9 follow from parts (b), (h), (j), and (e), respectively, of that theorem (verify). This leaves Axioms 4, 5, and 10 that remain to be verified.

To confirm that Axiom 4 is satisfied, we must find a 2 × 2 matrix 0 in V for which u + 0 = 0 + u for all 2 × 2 matrices in V. We can do this by taking

0 = [0  0; 0  0]

With this definition,

u + 0 = [u_11  u_12; u_21  u_22] + [0  0; 0  0] = [u_11  u_12; u_21  u_22] = u

and similarly 0 + u = u. To verify that Axiom 5 holds we must show that each object u in V has a negative -u in V such that u + (-u) = 0 and (-u) + u = 0. This can be done by defining the negative of u to be

-u = [-u_11  -u_12; -u_21  -u_22]

With this definition,

u + (-u) = [u_11  u_12; u_21  u_22] + [-u_11  -u_12; -u_21  -u_22] = [0  0; 0  0] = 0

and similarly (-u) + u = 0. Finally, Axiom 10 holds because

1u = 1[u_11  u_12; u_21  u_22] = [u_11  u_12; u_21  u_22] = u

EXAMPLE 5  The Vector Space of m × n Matrices
Example 4 is a special case of a more general class of vector spaces.
You should have no trouble adapting the argument used in that example to show that the set V of all m × n matrices with the usual matrix operations of addition and scalar multiplication is a vector space. We will denote this vector space by the symbol M_mn. Thus, for example, the vector space in Example 4 is denoted as M_22.
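Example 4 can be played out in code. The sketch below represents 2 × 2 matrices as nested tuples, defines entrywise operations, and spot-checks the Axioms 2, 4, 5, and 10 verified in the example; the helper names are our own.

```python
def m_add(A, B):
    # Entrywise matrix addition, as in Equation (1).
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def m_scale(k, A):
    # Entrywise scalar multiplication.
    return tuple(tuple(k * a for a in row) for row in A)

ZERO = ((0, 0), (0, 0))       # the zero vector of this space (Axiom 4)
U = ((1, 2), (3, 4))
neg_U = m_scale(-1, U)        # the negative of U (Axiom 5)

assert m_add(U, ZERO) == U                                    # Axiom 4
assert m_add(U, neg_U) == ZERO                                # Axiom 5
assert m_scale(1, U) == U                                     # Axiom 10
assert m_add(U, ((5, 6), (7, 8))) == m_add(((5, 6), (7, 8)), U)  # Axiom 2
print("Axioms 2, 4, 5, 10 hold for this sample")
```

The same two functions work unchanged for m × n matrices, mirroring how Example 4 generalizes to M_mn.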

EXAMPLE 6  The Vector Space of Real-Valued Functions
Let V be the set of real-valued functions that are defined at each x in the interval (-∞, ∞). If f = f(x) and g = g(x) are two functions in V and if k is any scalar, then define the operations of addition and scalar multiplication by

(f + g)(x) = f(x) + g(x)    (2)
(kf)(x) = kf(x)    (3)

One way to think about these operations is to view the numbers f(x) and g(x) as components of f and g at the point x, in which case Equations (2) and (3) state that two functions are added by adding corresponding components, and a function is multiplied by a scalar by multiplying each component by that scalar, exactly as in R^n and R^∞. This idea is illustrated in parts (a) and (b) of Figure 4.1.2. The set V with these operations is denoted by the symbol F(-∞, ∞). We can prove that this is a vector space as follows:

Axioms 1 and 6: These closure axioms require that if we add two functions that are defined at each x in the interval (-∞, ∞), then sums and scalar multiples of those functions must also be defined at each x in the interval (-∞, ∞). This follows from Formulas (2) and (3).

Axiom 4: This axiom requires that there exists a function 0 in F(-∞, ∞), which when added to any other function f in F(-∞, ∞) produces f back again as the result. The function whose value at every point x in the interval (-∞, ∞) is zero has this property. Geometrically, the graph of the function 0 is the line that coincides with the x-axis.

Axiom 5: This axiom requires that for each function f in F(-∞, ∞) there exists a function -f in F(-∞, ∞), which when added to f produces the function 0. The function defined by (-f)(x) = -f(x) has this property.

Remark  In Example 6 the functions were defined on the entire interval (-∞, ∞). However, the arguments used in that example apply as well on all subintervals of (-∞, ∞), such as a closed interval [a, b] or an open interval (a, b). We will denote the vector spaces of functions on these intervals by F[a, b] and F(a, b), respectively.
The graph of -f can be obtained by reflecting the graph of f about the x-axis (Figure 4.1.2c).

Axioms 2, 3, 7, 8, 9, 10: The validity of each of these axioms follows from properties of real numbers. For example, if f and g are functions in F(-∞, ∞), then Axiom 2 requires that f + g = g + f. This follows from the computation

(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)

in which the first and last equalities follow from (2), and the middle equality is a property of real numbers. We will leave the proofs of the remaining parts as exercises.

[Figure 4.1.2: (a) the sum f + g adds the components f(x) and g(x); (b) the scalar multiple kf scales the component f(x); (c) the graph of -f is the reflection of the graph of f about the x-axis.]

It is important to recognize that you cannot impose any two operations on any set V and expect the vector space axioms to hold. For example, if V is the set of n-tuples with positive components, and if the standard operations from R^n are used, then V is not closed under scalar multiplication, because if u is a nonzero n-tuple in V, then (-1)u has

at least one negative component and hence is not in V. The following is a less obvious example in which only one of the ten vector space axioms fails to hold.

EXAMPLE 7  A Set That Is Not a Vector Space
Let V = R^2 and define addition and scalar multiplication operations as follows: If u = (u_1, u_2) and v = (v_1, v_2), then define

u + v = (u_1 + v_1, u_2 + v_2)

and if k is any real number, then define

ku = (ku_1, 0)

For example, if u = (2, 4), v = (-3, 5), and k = 7, then

u + v = (2 + (-3), 4 + 5) = (-1, 9)
ku = 7u = (7(2), 0) = (14, 0)

The addition operation is the standard one from R^2, but the scalar multiplication is not. In the exercises we will ask you to show that the first nine vector space axioms are satisfied. However, Axiom 10 fails to hold for certain vectors. For example, if u = (u_1, u_2) is such that u_2 ≠ 0, then

1u = 1(u_1, u_2) = (1u_1, 0) = (u_1, 0) ≠ u

Thus, V is not a vector space with the stated operations.

Our final example will be an unusual vector space that we have included to illustrate how varied vector spaces can be. Since the vectors in this space will be real numbers, it will be important for you to keep track of which operations are intended as vector operations and which ones as ordinary operations on real numbers.

EXAMPLE 8  An Unusual Vector Space
Let V be the set of positive real numbers, let u = u and v = v be any vectors (i.e., positive real numbers) in V, and let k be any scalar. Define the operations on V to be

u + v = uv    [Vector addition is numerical multiplication.]
ku = u^k    [Scalar multiplication is numerical exponentiation.]

Thus, for example, 1 + 1 = 1 and (2)(1) = 1^2 = 1, strange indeed, but nevertheless the set V with these operations satisfies the ten vector space axioms and hence is a vector space. We will confirm Axioms 4, 5, and 7, and leave the others as exercises.
Axiom 4: The zero vector in this space is the number 1 (i.e., 0 = 1) since

u + 0 = u · 1 = u

Axiom 5: The negative of a vector u is its reciprocal (i.e., -u = 1/u) since

u + (-u) = u(1/u) = 1 (= 0)

Axiom 7: k(u + v) = (uv)^k = u^k v^k = (ku) + (kv)

Some Properties of Vectors

The following is our first theorem about vector spaces. The proof is very formal with each step being justified by a vector space axiom or a known property of real numbers. There will not be many rigidly formal proofs of this type in the text, but we have included this one to reinforce the idea that the familiar properties of vectors can all be derived from the vector space axioms.

THEOREM 4.1.1  Let V be a vector space, u a vector in V, and k a scalar; then:
(a) 0u = 0
(b) k0 = 0
(c) (-1)u = -u
(d) If ku = 0, then k = 0 or u = 0.

We will prove parts (a) and (c) and leave proofs of the remaining parts as exercises.

Proof (a)  We can write

0u + 0u = (0 + 0)u    [Axiom 8]
        = 0u    [Property of the number 0]

By Axiom 5 the vector 0u has a negative, -0u. Adding this negative to both sides above yields

[0u + 0u] + (-0u) = 0u + (-0u)

or

0u + [0u + (-0u)] = 0u + (-0u)    [Axiom 3]
0u + 0 = 0    [Axiom 5]
0u = 0    [Axiom 4]

Proof (c)  To prove that (-1)u = -u, we must show that u + (-1)u = 0. The proof is as follows:

u + (-1)u = 1u + (-1)u    [Axiom 10]
          = (1 + (-1))u    [Axiom 8]
          = 0u    [Property of numbers]
          = 0    [Part (a) of this theorem]

A Closing Observation

This section of the text is important to the overall plan of linear algebra in that it establishes a common thread among such diverse mathematical objects as geometric vectors, vectors in R^n, infinite sequences, matrices, and real-valued functions, to name a few. As a result, whenever we discover a new theorem about general vector spaces, we will at the same time be discovering a theorem about geometric vectors, vectors in R^n, sequences, matrices, real-valued functions, and about any new kinds of vectors that we might discover. To illustrate this idea, consider what the rather innocent-looking result in part (a) of Theorem 4.1.1 says about the vector space in Example 8. Keeping in mind that the vectors in that space are positive real numbers, that scalar multiplication means numerical exponentiation, and that the zero vector is the number 1, the equation 0u = 0 is really a statement of the familiar fact that if u is a positive real number, then u^0 = 1.
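The unusual space of Example 8 is compact enough to implement directly. The sketch below encodes its two operations, confirms Axioms 4, 5, and 7 for a few sample vectors, and checks that Theorem 4.1.1(a), 0u = 0, really does reduce to u^0 = 1; the function names are our own.

```python
# Vectors are positive reals; "addition" is multiplication and
# "scalar multiplication" is exponentiation, as in Example 8.

def v_add(u, v):
    return u * v          # u + v = uv

def v_scale(k, u):
    return u ** k         # ku = u^k

ZERO = 1.0                # the zero vector is the number 1 (Axiom 4)

def v_neg(u):
    return 1.0 / u        # the negative of u is its reciprocal (Axiom 5)

u, v, k = 2.0, 5.0, 3.0
assert v_add(u, ZERO) == u                                        # Axiom 4
assert v_add(u, v_neg(u)) == ZERO                                 # Axiom 5
assert v_scale(k, v_add(u, v)) == v_add(v_scale(k, u), v_scale(k, v))  # Axiom 7
assert v_scale(0, u) == ZERO   # Theorem 4.1.1(a): 0u = 0, i.e., u^0 = 1
print("Example 8 checks pass")
```

Note that the last assertion is exactly the closing observation above, written in code: the abstract identity 0u = 0 becomes the numerical fact u^0 = 1.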

Exercise Set 4.1

1. Let V be the set of all ordered pairs of real numbers, and consider the following addition and scalar multiplication operations on u = (u_1, u_2) and v = (v_1, v_2):

u + v = (u_1 + v_1, u_2 + v_2),  ku = (0, ku_2)

(a) Compute u + v and ku for u = (, ), v = (3, 4), and k = 3.
(b) In words, explain why V is closed under addition and scalar multiplication.
(c) Since addition on V is the standard addition operation on R^2, certain vector space axioms hold for V because they are known to hold for R^2. Which axioms are they?
(d) Show that Axioms 7, 8, and 9 hold.
(e) Show that Axiom 10 fails and hence that V is not a vector space under the given operations.

2. Let V be the set of all ordered pairs of real numbers, and consider the following addition and scalar multiplication operations on u = (u_1, u_2) and v = (v_1, v_2):

u + v = (u_1 + v_1 + 1, u_2 + v_2 + 1),  ku = (ku_1, ku_2)

(a) Compute u + v and ku for u = (, 4), v = (, 3), and k = .
(b) Show that (0, 0) ≠ 0.
(c) Show that (-1, -1) = 0.
(d) Show that Axiom 5 holds by producing an ordered pair -u such that u + (-u) = 0 for u = (u_1, u_2).
(e) Find two vector space axioms that fail to hold.

In Exercises 3-12, determine whether each set equipped with the given operations is a vector space. For those that are not vector spaces identify the vector space axioms that fail.

3. The set of all real numbers with the standard operations of addition and multiplication.
4. The set of all pairs of real numbers of the form (x, 0) with the standard operations on R^2.
5. The set of all pairs of real numbers of the form (x, y), where x ≥ 0, with the standard operations on R^2.
6. The set of all n-tuples of real numbers that have the form (x, x, ..., x) with the standard operations on R^n.
7. The set of all triples of real numbers with the standard vector addition but with scalar multiplication defined by k(x, y, z) = (k^2 x, k^2 y, k^2 z).
8. The set of all invertible 2 × 2 matrices with the standard matrix addition and scalar multiplication.
9.
The set of all 2 × 2 matrices of the form [a 0; 0 b] with the standard matrix addition and scalar multiplication.

10. The set of all real-valued functions f defined everywhere on the real line and such that f(1) = 0, with the operations used in Example 6.

11. The set of all pairs of real numbers of the form (1, x) with the operations

(1, y) + (1, y') = (1, y + y')  and  k(1, y) = (1, ky)

12. The set of polynomials of the form a_0 + a_1 x with the operations

(a_0 + a_1 x) + (b_0 + b_1 x) = (a_0 + b_0) + (a_1 + b_1)x

and

k(a_0 + a_1 x) = (ka_0) + (ka_1)x

13. Verify Axioms 3, 7, 8, and 9 for the vector space given in Example 4.
14. Verify Axioms 1, 2, 3, 7, 8, 9, and 10 for the vector space given in Example 6.
15. With the addition and scalar multiplication operations defined in Example 7, show that V = R^2 satisfies Axioms 1-9.
16. Verify Axioms 1, 2, 3, 6, 8, 9, and 10 for the vector space given in Example 8.
17. Show that the set of all points in R^2 lying on a line is a vector space with respect to the standard operations of vector addition and scalar multiplication if and only if the line passes through the origin.
18. Show that the set of all points in R^3 lying in a plane is a vector space with respect to the standard operations of vector addition and scalar multiplication if and only if the plane passes through the origin.

In Exercises 19-20, let V be the vector space of positive real numbers with the vector space operations given in Example 8. Let u = u be any vector in V, and rewrite the vector statement as a statement about real numbers.

19. -u = (-1)u
20. ku = 0 if and only if k = 0 or u = 0.

Working with Proofs

21. The argument that follows proves that if u, v, and w are vectors in a vector space V such that u + w = v + w, then u = v (the cancellation law for vector addition). As illustrated, justify the steps by filling in the blanks.

u + w = v + w    [Hypothesis]
(u + w) + (-w) = (v + w) + (-w)    [Add -w to both sides.]
u + [w + (-w)] = v + [w + (-w)]    [_____]
u + 0 = v + 0    [_____]
u = v    [_____]

22. Below is a seven-step proof of part (b) of Theorem 4.1.1. Justify each step either by stating that it is true by hypothesis or by specifying which of the ten vector space axioms applies.

Hypothesis: Let u be any vector in a vector space V, let 0 be the zero vector in V, and let k be a scalar.
Conclusion: Then k0 = 0.

Proof: (1) k0 + ku = k(0 + u)
(2) = ku
(3) Since ku is in V, -ku is in V.
(4) Therefore, (k0 + ku) + (-ku) = ku + (-ku).
(5) k0 + (ku + (-ku)) = ku + (-ku)
(6) k0 + 0 = 0
(7) k0 = 0

In Exercises 23-24, let u be any vector in a vector space V. Give a step-by-step proof of the stated result using Exercises 21 and 22 as models for your presentation.

23. 0u = 0
24. -u = (-1)u

In Exercises 25-27, prove that the given set with the stated operations is a vector space.

25. The set V = {0} with the operations of addition and scalar multiplication given in Example 1.
26. The set R^∞ of all infinite sequences of real numbers with the operations of addition and scalar multiplication given in Example 3.
27. The set M_mn of all m × n matrices with the usual operations of addition and scalar multiplication.

28. Prove: If u is a vector in a vector space V and k a scalar such that ku = 0, then either k = 0 or u = 0. [Suggestion: Show that if ku = 0 and k ≠ 0, then u = 0. The result then follows as a logical consequence of this.]

True-False Exercises

TF. In parts (a)-(f) determine whether the statement is true or false, and justify your answer.

(a) A vector is any element of a vector space.
(b) A vector space must contain at least two vectors.
(c) If u is a vector and k is a scalar such that ku = 0, then it must be true that k = 0.
(d) The set of positive real numbers is a vector space if vector addition and scalar multiplication are the usual operations of addition and multiplication of real numbers.
(e) In every vector space the vectors (-1)u and -u are the same.
(f) In the vector space F(-∞, ∞) any function whose graph passes through the origin is a zero vector.

4.2 Subspaces

It is often the case that some vector space of interest is contained within a larger vector space whose properties are known. In this section we will show how to recognize when this is the case, we will explain how the properties of the larger vector space can be used to obtain properties of the smaller vector space, and we will give a variety of important examples. We begin with some terminology.

DEFINITION 1  A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.

In general, to show that a nonempty set W with two operations is a vector space one must verify the ten vector space axioms. However, if W is a subspace of a known vector space V, then certain axioms need not be verified because they are inherited from V. For example, it is not necessary to verify that u + v = v + u holds in W because it holds for all vectors in V including those in W. On the other hand, it is necessary to verify

that W is closed under addition and scalar multiplication since it is possible that adding two vectors in W or multiplying a vector in W by a scalar produces a vector in V that is outside of W (Figure 4.2.1). Those axioms that are not inherited by W are

Axiom 1: Closure of W under addition
Axiom 4: Existence of a zero vector in W
Axiom 5: Existence of a negative in W for every vector in W
Axiom 6: Closure of W under scalar multiplication

so these must be verified to prove that it is a subspace of V. However, the next theorem shows that if Axiom 1 and Axiom 6 hold in W, then Axioms 4 and 5 hold in W as a consequence and hence need not be verified.

[Figure 4.2.1: The vectors u and v are in W, but the vectors u + v and ku are not.]

THEOREM 4.2.1  If W is a set of one or more vectors in a vector space V, then W is a subspace of V if and only if the following conditions are satisfied.
(a) If u and v are vectors in W, then u + v is in W.
(b) If k is any scalar and u is any vector in W, then ku is in W.

In words, Theorem 4.2.1 states that W is a subspace of V if and only if it is closed under addition and scalar multiplication.

Proof  If W is a subspace of V, then all the vector space axioms hold in W, including Axioms 1 and 6, which are precisely conditions (a) and (b). Conversely, assume that conditions (a) and (b) hold. Since these are Axioms 1 and 6, and since Axioms 2, 3, 7, 8, 9, and 10 are inherited from V, we only need to show that Axioms 4 and 5 hold in W. For this purpose, let u be any vector in W. It follows from condition (b) that ku is a vector in W for every scalar k. In particular, 0u = 0 and (-1)u = -u are in W, which shows that Axioms 4 and 5 hold in W.

Note that every vector space has at least two subspaces, itself and its zero subspace.

EXAMPLE 1  The Zero Subspace
If V is any vector space, and if W = {0} is the subset of V that consists of the zero vector only, then W is closed under addition and scalar multiplication since 0 + 0 = 0 and k0 = 0 for any scalar k.
We call W the zero subspace of V.

EXAMPLE 2  Lines Through the Origin Are Subspaces of R^2 and of R^3
If W is a line through the origin of either R^2 or R^3, then adding two vectors on the line or multiplying a vector on the line by a scalar produces another vector on the line, so W is closed under addition and scalar multiplication (see Figure 4.2.2 for an illustration in R^3).
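Theorem 4.2.1's two closure conditions can be spot-checked numerically for the line in Example 2. The sketch below samples points on a particular line through the origin of R^3 (direction (1, 2, 3), our own choice) and tests conditions (a) and (b); the helper names are our own.

```python
import random

def sample_line():
    # A random point on the line through the origin with direction (1, 2, 3).
    s = random.uniform(-10, 10)
    return (s, 2 * s, 3 * s)

def on_line(p, tol=1e-9):
    # Membership test for that line: y = 2x and z = 3x, up to rounding.
    return abs(p[1] - 2 * p[0]) <= tol and abs(p[2] - 3 * p[0]) <= tol

def closed_under_operations(trials=200):
    """Spot-check conditions (a) and (b) of Theorem 4.2.1."""
    for _ in range(trials):
        u, v = sample_line(), sample_line()
        k = random.uniform(-5, 5)
        s = tuple(a + b for a, b in zip(u, v))   # condition (a): u + v
        m = tuple(k * a for a in u)              # condition (b): ku
        if not (on_line(s) and on_line(m)):
            return False
    return True

print(closed_under_operations())
```

Replacing `sample_line` and `on_line` with a line that misses the origin (say z = 3x + 1) makes the check fail, mirroring Exercise 17 of the previous section.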

[Figure 4.2.2: (a) W is closed under addition. (b) W is closed under scalar multiplication.]

[Figure 4.2.3: The vectors u + v and ku both lie in the same plane as u and v.]

EXAMPLE 3  Planes Through the Origin Are Subspaces of R^3
If u and v are vectors in a plane W through the origin of R^3, then it is evident geometrically that u + v and ku also lie in the same plane W for any scalar k (Figure 4.2.3). Thus W is closed under addition and scalar multiplication.

Table 1 below gives a list of subspaces of R^2 and of R^3 that we have encountered thus far. We will see later that these are the only subspaces of R^2 and of R^3.

Table 1
Subspaces of R^2: {0}; lines through the origin; R^2
Subspaces of R^3: {0}; lines through the origin; planes through the origin; R^3

EXAMPLE 4  A Subset of R^2 That Is Not a Subspace
Let W be the set of all points (x, y) in R^2 for which x ≥ 0 and y ≥ 0 (the shaded region in Figure 4.2.4). This set is not a subspace of R^2 because it is not closed under scalar multiplication. For example, v = (1, 1) is a vector in W, but (-1)v = (-1, -1) is not.

[Figure 4.2.4: W is not closed under scalar multiplication.]

EXAMPLE 5  Subspaces of M_nn
We know from Theorem 1.7.2 that the sum of two symmetric n × n matrices is symmetric and that a scalar multiple of a symmetric n × n matrix is symmetric. Thus, the set of symmetric n × n matrices is closed under addition and scalar multiplication and hence is a subspace of M_nn. Similarly, the sets of upper triangular matrices, lower triangular matrices, and diagonal matrices are subspaces of M_nn.

EXAMPLE 6  A Subset of M_nn That Is Not a Subspace
The set W of invertible n × n matrices is not a subspace of M_nn, failing on two counts: it is not closed under addition and not closed under scalar multiplication. We will illustrate this with an example in M_22 that you can readily adapt to M_nn.
Consider the 2 × 2 matrices

U = [1  2; 2  5]  and  V = [-1  2; -2  5]

The matrix 0U is the zero matrix and hence is not invertible, and the matrix U + V has a column of zeros, so it also is not invertible.
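Example 6 can be checked in a few lines using the determinant as the invertibility test. The particular matrices below are illustrative choices consistent with the example (invertible summands whose sum has a column of zeros), and the helper name is our own.

```python
def det2(m):
    # Determinant of a 2x2 matrix given as ((a, b), (c, d)).
    (a, b), (c, d) = m
    return a * d - b * c

U = ((1, 2), (2, 5))      # det = 1,  so U is invertible
V = ((-1, 2), (-2, 5))    # det = -1, so V is invertible

zero_U = tuple(tuple(0 * x for x in row) for row in U)                # 0U
U_plus_V = tuple(tuple(x + y for x, y in zip(r, s))
                 for r, s in zip(U, V))                               # U + V

assert det2(U) != 0 and det2(V) != 0      # both summands invertible...
assert det2(zero_U) == 0                  # ...but 0U is the zero matrix
assert det2(U_plus_V) == 0                # ...and U + V has a column of zeros
print(U_plus_V)
```

A zero determinant certifies non-invertibility, so both closure conditions of Theorem 4.2.1 fail for the invertible matrices.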

CALCULUS REQUIRED  EXAMPLE 7  The Subspace C(-∞, ∞)
There is a theorem in calculus which states that a sum of continuous functions is continuous and that a constant times a continuous function is continuous. Rephrased in vector language, the set of continuous functions on (-∞, ∞) is a subspace of F(-∞, ∞). We will denote this subspace by C(-∞, ∞).

CALCULUS REQUIRED  EXAMPLE 8  Functions with Continuous Derivatives
A function with a continuous derivative is said to be continuously differentiable. There is a theorem in calculus which states that the sum of two continuously differentiable functions is continuously differentiable and that a constant times a continuously differentiable function is continuously differentiable. Thus, the functions that are continuously differentiable on (-∞, ∞) form a subspace of F(-∞, ∞). We will denote this subspace by C^1(-∞, ∞), where the superscript 1 emphasizes that the first derivatives are continuous. To take this a step further, the set of functions with m continuous derivatives on (-∞, ∞) is a subspace of F(-∞, ∞), as is the set of functions with derivatives of all orders on (-∞, ∞). We will denote these subspaces by C^m(-∞, ∞) and C^∞(-∞, ∞), respectively.

In this text we regard all constants to be polynomials of degree zero. Be aware, however, that some authors do not assign a degree to the constant 0.

EXAMPLE 9  The Subspace of All Polynomials
Recall that a polynomial is a function that can be expressed in the form

p(x) = a_0 + a_1 x + ... + a_n x^n    (1)

where a_0, a_1, ..., a_n are constants. It is evident that the sum of two polynomials is a polynomial and that a constant times a polynomial is a polynomial. Thus, the set W of all polynomials is closed under addition and scalar multiplication and hence is a subspace of F(-∞, ∞). We will denote this space by P_∞.

EXAMPLE 10  The Subspace of Polynomials of Degree n or Less
Recall that the degree of a polynomial is the highest power of the variable that occurs with a nonzero coefficient.
Thus, for example, if a_n ≠ 0 in Formula (1), then that polynomial has degree n. It is not true that the set W of polynomials with positive degree n is a subspace of F(-∞, ∞), because that set is not closed under addition. For example, the polynomials x + x^2 and x - x^2 both have degree 2, but their sum has degree 1. What is true, however, is that for each nonnegative integer n the polynomials of degree n or less form a subspace of F(-∞, ∞). We will denote this space by P_n.

The Hierarchy of Function Spaces

It is proved in calculus that polynomials are continuous functions and have continuous derivatives of all orders on (-∞, ∞). Thus, it follows that P_∞ is not only a subspace of F(-∞, ∞), as previously observed, but is also a subspace of C^∞(-∞, ∞). We leave it for you to convince yourself that the vector spaces discussed in Examples 7 to 10 are nested one inside the other as illustrated in Figure 4.2.5.

Remark  In our previous examples we considered functions that were defined at all points of the interval (-∞, ∞). Sometimes we will want to consider functions that are only defined on some subinterval of (-∞, ∞), say the closed interval [a, b] or the open interval (a, b). In such cases we will make an appropriate notation change. For example, C[a, b] is the space of continuous functions on [a, b] and C(a, b) is the space of continuous functions on (a, b).
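The degree argument above is easy to verify concretely. The sketch below represents polynomials by coefficient lists [a_0, a_1, a_2, ...] and shows that adding two degree-2 polynomials can drop the degree, while the result still lies in P_2; the helper names are our own.

```python
def poly_add(p, q):
    # Coefficientwise addition of polynomials given as [a0, a1, a2, ...].
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def degree(p):
    # Index of the highest nonzero coefficient; the zero polynomial gets None.
    nz = [i for i, a in enumerate(p) if a != 0]
    return nz[-1] if nz else None

p = [0, 1, 1]    # x + x^2, degree 2
q = [0, 1, -1]   # x - x^2, degree 2
s = poly_add(p, q)

assert degree(p) == 2 and degree(q) == 2
assert degree(s) == 1        # the sum escapes "degree exactly 2"...
assert degree(s) <= 2        # ...but stays inside P_2
print(s)
```

This is why P_n is defined as "degree n or less": allowing every lower degree (including the zero polynomial) is exactly what restores closure under addition.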

[Figure 4.2.5: the nested hierarchy P_∞ ⊂ C^∞(-∞, ∞) ⊂ C^m(-∞, ∞) ⊂ C^1(-∞, ∞) ⊂ C(-∞, ∞) ⊂ F(-∞, ∞).]

Building Subspaces

The following theorem provides a useful way of creating a new subspace from known subspaces.

THEOREM 4.2.2  If W_1, W_2, ..., W_r are subspaces of a vector space V, then the intersection of these subspaces is also a subspace of V.

Proof  Let W be the intersection of the subspaces W_1, W_2, ..., W_r. This set is not empty because each of these subspaces contains the zero vector of V, and hence so does their intersection. Thus, it remains to show that W is closed under addition and scalar multiplication. To prove closure under addition, let u and v be vectors in W. Since W is the intersection of W_1, W_2, ..., W_r, it follows that u and v also lie in each of these subspaces. Moreover, since these subspaces are closed under addition and scalar multiplication, they also all contain the vectors u + v and ku for every scalar k, and hence so does their intersection W. This proves that W is closed under addition and scalar multiplication.

Note that the first step in proving Theorem 4.2.2 was to establish that W contained at least one vector. This is important, for otherwise the subsequent argument might be logically correct but meaningless.

Sometimes we will want to find the smallest subspace of a vector space V that contains all of the vectors in some set of interest. The following definition, which generalizes Definition 4 of Section 3.1, will help us to do that.

DEFINITION 2  If w is a vector in a vector space V, then w is said to be a linear combination of the vectors v_1, v_2, ..., v_r in V if w can be expressed in the form

w = k_1 v_1 + k_2 v_2 + ... + k_r v_r    (2)

where k_1, k_2, ..., k_r are scalars. These scalars are called the coefficients of the linear combination.

If r = 1, then Equation (2) has the form w = k_1 v_1, in which case the linear combination is just a scalar multiple of v_1.
THEOREM 4.2.3  If S = {w1, w2, ..., wr} is a nonempty set of vectors in a vector space V, then:
(a) The set W of all possible linear combinations of the vectors in S is a subspace of V.
(b) The set W in part (a) is the smallest subspace of V that contains all of the vectors in S in the sense that any other subspace that contains those vectors contains W.

Proof (a)  Let W be the set of all possible linear combinations of the vectors in S. We must show that W is closed under addition and scalar multiplication. To prove closure under addition, let

u = c1w1 + c2w2 + ··· + crwr  and  v = k1w1 + k2w2 + ··· + krwr

be two vectors in W. It follows that their sum can be written as

u + v = (c1 + k1)w1 + (c2 + k2)w2 + ··· + (cr + kr)wr

which is a linear combination of the vectors in S. Thus, W is closed under addition. We leave it for you to prove that W is also closed under scalar multiplication and hence is a subspace of V.

Proof (b)  Let W′ be any subspace of V that contains all of the vectors in S. Since W′ is closed under addition and scalar multiplication, it contains all linear combinations of the vectors in S and hence contains W.

The following definition gives some important notation and terminology related to Theorem 4.2.3.

In the case where S is the empty set, it will be convenient to agree that span(Ø) = {0}.

DEFINITION 3  If S = {w1, w2, ..., wr} is a nonempty set of vectors in a vector space V, then the subspace W of V that consists of all possible linear combinations of the vectors in S is called the subspace of V generated by S, and we say that the vectors w1, w2, ..., wr span W. We denote this subspace as

W = span{w1, w2, ..., wr}  or  W = span(S)

EXAMPLE 11  The Standard Unit Vectors Span R^n
Recall that the standard unit vectors in R^n are

e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)

These vectors span R^n since every vector v = (v1, v2, ..., vn) in R^n can be expressed as

v = v1e1 + v2e2 + ··· + vnen

which is a linear combination of e1, e2, ..., en. Thus, for example, the vectors

i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)

span R^3 since every vector v = (a, b, c) in this space can be expressed as

v = (a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) = ai + bj + ck

EXAMPLE 12  A Geometric View of Spanning in R^2 and R^3
(a) If v is a nonzero vector in R^2 or R^3 that has its initial point at the origin, then span{v}, which is the set of all scalar multiples of v, is the line through the origin determined by v. You should be able to visualize this from Figure 4.2.6a by observing that the tip of the vector kv can be made to fall at any point on the line by choosing the value of k to lengthen, shorten, or reverse the direction of v appropriately.
Historical Note  The term linear combination is due to the American mathematician George William Hill (1838–1914), who introduced it in a research paper on planetary motion published in 1900. Hill was a loner who preferred to work out of his home in West Nyack, New York, rather than in academia, though he did try lecturing at Columbia University for a few years. Interestingly, he apparently returned the teaching salary, indicating that he did not need the money and did not want to be bothered looking after it. Although technically a mathematician, Hill had little interest in modern developments of mathematics and worked almost entirely on the theory of planetary orbits. [Image: Courtesy of the American Mathematical Society]

(b) If v1 and v2 are nonzero vectors in R^3 that have their initial points at the origin, then span{v1, v2}, which consists of all linear combinations of v1 and v2, is the plane through the origin determined by these two vectors. You should be able to visualize this from Figure 4.2.6b by observing that the tip of the vector k1v1 + k2v2 can be made to fall at any point in the plane by adjusting the scalars k1 and k2 to lengthen, shorten, or reverse the directions of the vectors k1v1 and k2v2 appropriately.

Figure 4.2.6  (a) Span{v} is the line through the origin determined by v. (b) Span{v1, v2} is the plane through the origin determined by v1 and v2.

EXAMPLE 13  A Spanning Set for Pn
The polynomials 1, x, x^2, ..., x^n span the vector space Pn defined earlier since each polynomial p in Pn can be written as

p = a0 + a1x + ··· + anx^n

which is a linear combination of 1, x, x^2, ..., x^n. We can denote this by writing

Pn = span{1, x, x^2, ..., x^n}

The next two examples are concerned with two important types of problems:
• Given a nonempty set S of vectors in R^n and a vector v in R^n, determine whether v is a linear combination of the vectors in S.
• Given a nonempty set S of vectors in R^n, determine whether the vectors span R^n.

EXAMPLE 14  Linear Combinations
Consider the vectors u = (1, 2, −1) and v = (6, 4, 2) in R^3. Show that w = (9, 2, 7) is a linear combination of u and v and that w′ = (4, −1, 8) is not a linear combination of u and v.

Solution  In order for w to be a linear combination of u and v, there must be scalars k1 and k2 such that w = k1u + k2v; that is,

(9, 2, 7) = k1(1, 2, −1) + k2(6, 4, 2) = (k1 + 6k2, 2k1 + 4k2, −k1 + 2k2)

Equating corresponding components gives

k1 + 6k2 = 9
2k1 + 4k2 = 2
−k1 + 2k2 = 7

Solving this system using Gaussian elimination yields k1 = −3, k2 = 2, so

w = −3u + 2v

Similarly, for w′ to be a linear combination of u and v, there must be scalars k1 and k2 such that w′ = k1u + k2v; that is,

(4, −1, 8) = k1(1, 2, −1) + k2(6, 4, 2) = (k1 + 6k2, 2k1 + 4k2, −k1 + 2k2)

Equating corresponding components gives

k1 + 6k2 = 4
2k1 + 4k2 = −1
−k1 + 2k2 = 8

This system of equations is inconsistent (verify), so no such scalars k1 and k2 exist. Consequently, w′ is not a linear combination of u and v.

EXAMPLE 15  Testing for Spanning
Determine whether the vectors v1 = (1, 1, 2), v2 = (1, 0, 1), and v3 = (2, 1, 3) span the vector space R^3.

Solution  We must determine whether an arbitrary vector b = (b1, b2, b3) in R^3 can be expressed as a linear combination

b = k1v1 + k2v2 + k3v3

of the vectors v1, v2, and v3. Expressing this equation in terms of components gives

(b1, b2, b3) = k1(1, 1, 2) + k2(1, 0, 1) + k3(2, 1, 3)

or

(b1, b2, b3) = (k1 + k2 + 2k3, k1 + k3, 2k1 + k2 + 3k3)

or

k1 + k2 + 2k3 = b1
k1 + k3 = b2
2k1 + k2 + 3k3 = b3

Thus, our problem reduces to ascertaining whether this system is consistent for all values of b1, b2, and b3. One way of doing this is to use parts (e) and (g) of Theorem 2.3.8, which state that the system is consistent for all such values if and only if its coefficient matrix

A =
[1 1 2]
[1 0 1]
[2 1 3]

has a nonzero determinant. But this is not the case here since det(A) = 0 (verify), so v1, v2, and v3 do not span R^3.

Solution Spaces of Homogeneous Systems  The solutions of a homogeneous linear system Ax = 0 of m equations in n unknowns can be viewed as vectors in R^n. The following theorem provides a useful insight into the geometric structure of the solution set.

THEOREM 4.2.4  The solution set of a homogeneous linear system Ax = 0 of m equations in n unknowns is a subspace of R^n.

Proof  Let W be the solution set of the system. The set W is not empty because it contains at least the trivial solution x = 0.
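The determinant criterion used in Example 15 is easy to check numerically. The following is a minimal sketch (our own Python, not part of the text) with a hand-rolled 3 × 3 cofactor expansion:

```python
def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Coefficient matrix from Example 15; its columns are v1, v2, v3.
A = [[1, 1, 2],
     [1, 0, 1],
     [2, 1, 3]]
```

Here det3(A) evaluates to 0, confirming that v1, v2, and v3 do not span R^3.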

To show that W is a subspace of R^n, we must show that it is closed under addition and scalar multiplication. To do this, let x1 and x2 be vectors in W. Since these vectors are solutions of Ax = 0, we have

Ax1 = 0 and Ax2 = 0

It follows from these equations and the distributive property of matrix multiplication that

A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0

so W is closed under addition. Similarly, if k is any scalar then

A(kx1) = kAx1 = k0 = 0

so W is also closed under scalar multiplication.

Because the solution set of a homogeneous system in n unknowns is actually a subspace of R^n, we will generally refer to it as the solution space of the system.

EXAMPLE 16  Solution Spaces of Homogeneous Systems
In each part, solve the system by any method and then give a geometric description of the solution set.

(a) [1 −2 3; 2 −4 6; 3 −6 9][x; y; z] = [0; 0; 0]
(b) [1 −2 3; −3 7 −8; −2 4 −6][x; y; z] = [0; 0; 0]
(c) [1 −2 3; −3 7 −8; 4 1 2][x; y; z] = [0; 0; 0]
(d) [0 0 0; 0 0 0; 0 0 0][x; y; z] = [0; 0; 0]

Solution (a)  The solutions are

x = 2s − 3t, y = s, z = t

from which it follows that

x = 2y − 3z or x − 2y + 3z = 0

This is the equation of a plane through the origin that has n = (1, −2, 3) as a normal.

Solution (b)  The solutions are

x = −5t, y = −t, z = t

which are parametric equations for the line through the origin that is parallel to the vector v = (−5, −1, 1).

Solution (c)  The only solution is x = 0, y = 0, z = 0, so the solution space consists of the single point {0}.

Solution (d)  This linear system is satisfied by all real values of x, y, and z, so the solution space is all of R^3.

Remark  Whereas the solution set of every homogeneous system of m equations in n unknowns is a subspace of R^n, it is never true that the solution set of a nonhomogeneous system of m equations in n unknowns is a subspace of R^n. There are two possible scenarios: first, the system may not have any solutions at all, and second, if there are solutions, then the solution set will not be closed either under addition or under scalar multiplication (Exercise 18).
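The closure argument in the proof of Theorem 4.2.4 can be spot-checked on part (a) of Example 16. The sketch below (illustrative Python of our own, not from the text) verifies that two particular solutions, their sum, and a scalar multiple all satisfy Ax = 0:

```python
def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a*xi for a, xi in zip(row, x)) for row in A]

# Coefficient matrix of part (a) of Example 16.
A = [[1, -2, 3],
     [2, -4, 6],
     [3, -6, 9]]

# Two particular solutions, from s = 1, t = 0 and s = 0, t = 1
# in the parametric solution x = 2s - 3t, y = s, z = t.
x1 = [2, 1, 0]
x2 = [-3, 0, 1]
x_sum = [a + b for a, b in zip(x1, x2)]   # closed under addition
x_scaled = [7*a for a in x1]              # closed under scalar multiplication
```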

The Linear Transformation Viewpoint  Theorem 4.2.4 can be viewed as a statement about matrix transformations by letting TA: R^n → R^m be multiplication by the coefficient matrix A. From this point of view the solution space of Ax = 0 is the set of vectors in R^n that TA maps into the zero vector in R^m. This set is sometimes called the kernel of the transformation, so with this terminology Theorem 4.2.4 can be rephrased as follows.

THEOREM 4.2.5  If A is an m × n matrix, then the kernel of the matrix transformation TA: R^n → R^m is a subspace of R^n.

A Concluding Observation  It is important to recognize that spanning sets are not unique. For example, any nonzero vector on the line in Figure 4.2.6a will span that line, and any two noncollinear vectors in the plane in Figure 4.2.6b will span that plane. The following theorem, whose proof is left as an exercise, states conditions under which two sets of vectors will span the same space.

THEOREM 4.2.6  If S = {v1, v2, ..., vr} and S′ = {w1, w2, ..., wk} are nonempty sets of vectors in a vector space V, then

span{v1, v2, ..., vr} = span{w1, w2, ..., wk}

if and only if each vector in S is a linear combination of those in S′, and each vector in S′ is a linear combination of those in S.

Exercise Set 4.2
1. Use Theorem 4.2.1 to determine which of the following are subspaces of R^3.
(a) All vectors of the form (a, 0, 0).
(b) All vectors of the form (a, 1, 1).
(c) All vectors of the form (a, b, c), where b = a + c.
(d) All vectors of the form (a, b, c), where b = a + c + 1.
(e) All vectors of the form (a, b, 0).
2. Use Theorem 4.2.1 to determine which of the following are subspaces of Mnn.
(a) The set of all diagonal n × n matrices.
(b) The set of all n × n matrices A such that det(A) = 0.
(c) The set of all n × n matrices A such that tr(A) = 0.
(d) The set of all symmetric n × n matrices.
(e) The set of all n × n matrices A such that A^T = −A.
(f) The set of all n × n matrices A for which Ax = 0 has only the trivial solution.
(g) The set of all n × n matrices A such that AB = BA for some fixed n × n matrix B.
3. Use Theorem 4.2.1 to determine which of the following are subspaces of P3.
(a) All polynomials a0 + a1x + a2x^2 + a3x^3 for which a0 = 0.
(b) All polynomials a0 + a1x + a2x^2 + a3x^3 for which a0 + a1 + a2 + a3 = 0.
(c) All polynomials of the form a0 + a1x + a2x^2 + a3x^3 in which a0, a1, a2, and a3 are rational numbers.
(d) All polynomials of the form a0 + a1x, where a0 and a1 are real numbers.
4. Which of the following are subspaces of F(−∞, ∞)?
(a) All functions f in F(−∞, ∞) for which f(0) = 0.
(b) All functions f in F(−∞, ∞) for which f(0) = 1.
(c) All functions f in F(−∞, ∞) for which f(−x) = f(x).
(d) All polynomials of degree 2.
5. Which of the following are subspaces of R^∞?
(a) All sequences v in R^∞ of the form v = (v, 0, v, 0, v, 0, ...).

(b) All sequences v in R^∞ of the form v = (v, 1, v, 1, v, 1, ...).
(c) All sequences v in R^∞ of the form v = (v, 2v, 4v, 8v, 16v, ...).
(d) All sequences in R^∞ whose components are 0 from some point on.
6. A line L through the origin in R^3 can be represented by parametric equations of the form x = at, y = bt, and z = ct. Use these equations to show that L is a subspace of R^3 by showing that if v1 = (x1, y1, z1) and v2 = (x2, y2, z2) are points on L and k is any real number, then kv1 and v1 + v2 are also points on L.
7. Which of the following are linear combinations of u = (0, −2, 2) and v = (1, 3, −1)?
(a) (2, 2, 2) (b) (0, 4, 5) (c) (0, 0, 0)
8. Express the following as linear combinations of u = (2, 1, 4), v = (1, −1, 3), and w = (3, 2, 5).
(a) (−9, −7, −15) (b) (6, 11, 6) (c) (0, 0, 0)
9. Which of the following are linear combinations of
A = [4 0; −2 −2], B = [1 −1; 2 3], C = [0 2; 1 4]?
(a) [6 −8; −1 −8] (b) [0 0; 0 0] (c) [−1 5; 7 1]
10. In each part express the vector as a linear combination of p1 = 2 + x + 4x^2, p2 = 1 − x + 3x^2, and p3 = 3 + 2x + 5x^2.
(a) −9 − 7x − 15x^2 (b) 6 + 11x + 6x^2 (c) 0 (d) 7 + 8x + 9x^2
11. In each part, determine whether the vectors span R^3.
(a) v1 = (2, 2, 2), v2 = (0, 0, 3), v3 = (0, 1, 1)
(b) v1 = (2, −1, 3), v2 = (4, 1, 2), v3 = (8, −1, 8)
12. Suppose that v1 = (2, 1, 0, 3), v2 = (3, −1, 5, 2), and v3 = (−1, 0, 2, 1). Which of the following vectors are in span{v1, v2, v3}?
(a) (2, 3, −7, 3) (b) (0, 0, 0, 0) (c) (1, 1, 1, 1) (d) (−4, 6, −13, 4)
13. Determine whether the following polynomials span P2.
p1 = 1 − x + 2x^2, p2 = 3 + x, p3 = 5 − x + 4x^2, p4 = −2 − 2x + 2x^2
14. Let f = cos^2 x and g = sin^2 x. Which of the following lie in the space spanned by f and g?
(a) cos 2x (b) 3 + x^2 (c) 1 (d) sin x (e) 0
15. Determine whether the solution space of the system Ax = 0 is a line through the origin, a plane through the origin, or the origin only. If it is a plane, find an equation for it. If it is a line, find parametric equations for it.
(a) A = [−1 1 1; 3 −1 0; 2 −4 −5]
(b) A = [1 2 3; 2 5 3; 1 0 8]
(c) A = [1 −3 1; 2 −6 2; 3 −9 3]
(d) A = [1 −1 1; 2 −1 4; 3 1 11]
16. (Calculus required) Show that the following sets of functions are subspaces of F(−∞, ∞).
(a) All continuous functions on (−∞, ∞).
(b) All differentiable functions on (−∞, ∞).
(c) All differentiable functions on (−∞, ∞) that satisfy f′ + 2f = 0.
17.
(Calculus required) Show that the set of continuous functions f = f(x) on [a, b] such that

∫ab f(x) dx = 0

is a subspace of C[a, b].
18. Show that the solution vectors of a consistent nonhomogeneous system of m linear equations in n unknowns do not form a subspace of R^n.
19. In each part, let TA: R^2 → R^2 be multiplication by A, and let u1 = (1, 2) and u2 = (−1, 1). Determine whether the set {TA(u1), TA(u2)} spans R^2.
(a) A = [1 −1; 0 2] (b) A = [1 −1; −2 2]
20. In each part, let TA: R^3 → R^2 be multiplication by A, and let u1 = (0, 1, 1), u2 = (2, −1, 1), and u3 = (1, 1, −2). Determine whether the set {TA(u1), TA(u2), TA(u3)} spans R^2.
(a) A = [1 1 0; 0 1 −1] (b) A = [1 1 0; 2 2 0]
21. If TA is multiplication by a matrix A with three columns, then the kernel of TA is one of four possible geometric objects. What are they? Explain how you reached your conclusion.
22. Let v1 = (1, 6, 4), v2 = (2, 4, −1), v3 = (−1, 2, 5), and w1 = (1, −2, −5), w2 = (0, 8, 9). Use Theorem 4.2.6 to show that span{v1, v2, v3} = span{w1, w2}.
23. The accompanying figure shows a mass-spring system in which a block of mass m is set into vibratory motion by pulling the block beyond its natural position at x = 0 and releasing it at time t = 0. If friction and air resistance are ignored, then the x-coordinate x(t) of the block at time t is given by a function of the form

x(t) = c1 cos ωt + c2 sin ωt

where ω is a fixed constant that depends on the mass of the block and the stiffness of the spring and c1 and c2 are arbitrary. Show that this set of functions forms a subspace of C^∞(−∞, ∞).

Figure Ex-23  [The block of mass m in its natural, stretched, and released positions]

Working with Proofs
24. Prove Theorem 4.2.6.

True-False Exercises
TF. In parts (a)–(k) determine whether the statement is true or false, and justify your answer.
(a) Every subspace of a vector space is itself a vector space.
(b) Every vector space is a subspace of itself.
(c) Every subset of a vector space V that contains the zero vector in V is a subspace of V.
(d) The kernel of a matrix transformation TA: R^n → R^m is a subspace of R^m.
(e) The solution set of a consistent linear system Ax = b of m equations in n unknowns is a subspace of R^n.
(f) The span of any finite set of vectors in a vector space is closed under addition and scalar multiplication.
(g) The intersection of any two subspaces of a vector space V is a subspace of V.
(h) The union of any two subspaces of a vector space V is a subspace of V.
(i) Two subsets of a vector space V that span the same subspace of V must be equal.
(j) The set of upper triangular n × n matrices is a subspace of the vector space of all n × n matrices.
(k) The polynomials x − 1, (x − 1)^2, and (x − 1)^3 span P3.

Working with Technology
T1. Recall from Theorem 1.3.1 that a product Ax can be expressed as a linear combination of the column vectors of the matrix A in which the coefficients are the entries of x. Use matrix multiplication to compute

v = 6(8, −2, 1, −4) + 17(−3, 9, 11, 6) − 9(13, −1, 2, 4)

T2. Use the idea in Exercise T1 and matrix multiplication to determine whether the polynomial p is in the span of p1, p2, and p3.
T3.
For the vectors that follow, determine whether

span{v1, v2, v3} = span{w1, w2, w3}

v1 = (,,,, 3), v2 = (7, 4, 6, 3, ), v3 = (−5, 3,,, 4)
w1 = (−6, 5,, 3, 7), w2 = (6, 6, 6,, 4), w3 = (, 7, 7,, 5)

4.3 Linear Independence
In this section we will consider the question of whether the vectors in a given set are interrelated in the sense that one or more of them can be expressed as a linear combination of the others. This is important to know in applications because the existence of such relationships often signals that some kind of complication is likely to occur.

Linear Independence and Dependence  In a rectangular xy-coordinate system every vector in the plane can be expressed in exactly one way as a linear combination of the standard unit vectors. For example, the only way to express the vector (3, 2) as a linear combination of i = (1, 0) and j = (0, 1) is

(3, 2) = 3(1, 0) + 2(0, 1) = 3i + 2j    (1)

Figure 4.3.1  [The vector (3, 2) = 3i + 2j]  Figure 4.3.2  [The unit vector w along a 45° axis]

(Figure 4.3.1). Suppose, however, that we were to introduce a third coordinate axis that makes an angle of 45° with the x-axis. Call it the w-axis. As illustrated in Figure 4.3.2, the unit vector along the w-axis is

w = (√2/2, √2/2)

Whereas Formula (1) shows the only way to express the vector (3, 2) as a linear combination of i and j, there are infinitely many ways to express this vector as a linear combination of i, j, and w. Three possibilities are

(3, 2) = 3(1, 0) + 2(0, 1) + 0(√2/2, √2/2) = 3i + 2j + 0w
(3, 2) = 2(1, 0) + 1(0, 1) + √2(√2/2, √2/2) = 2i + j + √2 w
(3, 2) = 4(1, 0) + 3(0, 1) − √2(√2/2, √2/2) = 4i + 3j − √2 w

In short, by introducing a superfluous axis we created the complication of having multiple ways of assigning coordinates to points in the plane. What makes the vector w superfluous is the fact that it can be expressed as a linear combination of the vectors i and j, namely,

w = (√2/2, √2/2) = (√2/2)i + (√2/2)j

This leads to the following definition.

DEFINITION 1  If S = {v1, v2, ..., vr} is a set of two or more vectors in a vector space V, then S is said to be a linearly independent set if no vector in S can be expressed as a linear combination of the others. A set that is not linearly independent is said to be linearly dependent.

In the case where the set S in Definition 1 has only one vector, we will agree that S is linearly independent if and only if that vector is nonzero.

In general, the most efficient way to determine whether a set is linearly independent or not is to use the following theorem whose proof is given at the end of this section.

THEOREM 4.3.1  A nonempty set S = {v1, v2, ..., vr} in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation

k1v1 + k2v2 + ··· + krvr = 0

are k1 = 0, k2 = 0, ..., kr = 0.

EXAMPLE 1  Linear Independence of the Standard Unit Vectors in R^n
The most basic linearly independent set in R^n is the set of standard unit vectors

e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)

To illustrate this in R^3, consider the standard unit vectors

i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)

To prove linear independence we must show that the only coefficients satisfying the vector equation

k1i + k2j + k3k = 0

are k1 = 0, k2 = 0, k3 = 0. But this becomes evident by writing this equation in its component form

(k1, k2, k3) = (0, 0, 0)

You should have no trouble adapting this argument to establish the linear independence of the standard unit vectors in R^n.

EXAMPLE 2  Linear Independence in R^3
Determine whether the vectors

v1 = (1, −2, 3), v2 = (5, 6, −1), v3 = (3, 2, 1)    (2)

are linearly independent or linearly dependent in R^3.

Solution  The linear independence or dependence of these vectors is determined by whether the vector equation

k1v1 + k2v2 + k3v3 = 0    (3)

can be satisfied with coefficients that are not all zero. To see whether this is so, let us rewrite (3) in the component form

k1(1, −2, 3) + k2(5, 6, −1) + k3(3, 2, 1) = (0, 0, 0)

Equating corresponding components on the two sides yields the homogeneous linear system

k1 + 5k2 + 3k3 = 0
−2k1 + 6k2 + 2k3 = 0    (4)
3k1 − k2 + k3 = 0

Thus, our problem reduces to determining whether this system has nontrivial solutions. There are various ways to do this; one possibility is to simply solve the system, which yields

k1 = −(1/2)t, k2 = −(1/2)t, k3 = t

(we omit the details). This shows that the system has nontrivial solutions and hence that the vectors are linearly dependent. A second method for establishing the linear dependence is to take advantage of the fact that the coefficient matrix

A =
[1 5 3]
[−2 6 2]
[3 −1 1]

is square and compute its determinant. We leave it for you to show that det(A) = 0, from which it follows that (4) has nontrivial solutions by parts (b) and (g) of Theorem 2.3.8.

Because we have established that the vectors v1, v2, and v3 in (2) are linearly dependent, we know that at least one of them is a linear combination of the others. We leave it for you to confirm, for example, that

v3 = (1/2)v1 + (1/2)v2
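Both conclusions of Example 2 can be confirmed computationally. The following is an illustrative Python sketch of our own (not part of the text): the determinant of the coefficient matrix vanishes, and v3 = (1/2)v1 + (1/2)v2 holds exactly when Fraction arithmetic is used.

```python
from fractions import Fraction

v1 = (1, -2, 3)
v2 = (5, 6, -1)
v3 = (3, 2, 1)

def det3(A):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# The columns of the coefficient matrix of system (4) are v1, v2, v3.
A = [list(row) for row in zip(v1, v2, v3)]

half = Fraction(1, 2)
combo = tuple(half*a + half*b for a, b in zip(v1, v2))  # (1/2)v1 + (1/2)v2
```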

EXAMPLE 3  Linear Independence in R^4
Determine whether the vectors

v1 = (1, 2, 2, −1), v2 = (4, 9, 9, −4), v3 = (5, 8, 9, −5)

in R^4 are linearly dependent or linearly independent.

Solution  The linear independence or linear dependence of these vectors is determined by whether there exist nontrivial solutions of the vector equation

k1v1 + k2v2 + k3v3 = 0

or, equivalently, of

k1(1, 2, 2, −1) + k2(4, 9, 9, −4) + k3(5, 8, 9, −5) = (0, 0, 0, 0)

Equating corresponding components on the two sides yields the homogeneous linear system

k1 + 4k2 + 5k3 = 0
2k1 + 9k2 + 8k3 = 0
2k1 + 9k2 + 9k3 = 0
−k1 − 4k2 − 5k3 = 0

We leave it for you to show that this system has only the trivial solution

k1 = 0, k2 = 0, k3 = 0

from which you can conclude that v1, v2, and v3 are linearly independent.

EXAMPLE 4  An Important Linearly Independent Set in Pn
Show that the polynomials

1, x, x^2, ..., x^n

form a linearly independent set in Pn.

Solution  For convenience, let us denote the polynomials as

p0 = 1, p1 = x, p2 = x^2, ..., pn = x^n

We must show that the only coefficients satisfying the vector equation

a0p0 + a1p1 + a2p2 + ··· + anpn = 0    (5)

are a0 = a1 = a2 = ··· = an = 0. But (5) is equivalent to the statement that

a0 + a1x + a2x^2 + ··· + anx^n = 0    (6)

for all x in (−∞, ∞), so we must show that this is true if and only if each coefficient in (6) is zero. To see that this is so, recall from algebra that a nonzero polynomial of degree n has at most n distinct roots. That being the case, each coefficient in (6) must be zero, for otherwise the left side of the equation would be a nonzero polynomial with infinitely many roots. Thus, (5) has only the trivial solution.

The following example shows that the problem of determining whether a given set of vectors in Pn is linearly independent or linearly dependent can be reduced to determining whether a certain set of vectors in R^n is linearly dependent or independent.

In Example 5, what relationship do you see between the coefficients of the given polynomials and the column vectors of the coefficient matrix of system (9)?

EXAMPLE 5  Linear Independence of Polynomials
Determine whether the polynomials

p1 = 1 − x, p2 = 5 + 3x − 2x^2, p3 = 1 + 3x − x^2

are linearly dependent or linearly independent in P2.

Solution  The linear independence or dependence of these vectors is determined by whether the vector equation

k1p1 + k2p2 + k3p3 = 0    (7)

can be satisfied with coefficients that are not all zero. To see whether this is so, let us rewrite (7) in its polynomial form

k1(1 − x) + k2(5 + 3x − 2x^2) + k3(1 + 3x − x^2) = 0    (8)

or, equivalently, as

(k1 + 5k2 + k3) + (−k1 + 3k2 + 3k3)x + (−2k2 − k3)x^2 = 0

Since this equation must be satisfied by all x in (−∞, ∞), each coefficient must be zero (as explained in the previous example). Thus, the linear dependence or independence of the given polynomials hinges on whether the following linear system has a nontrivial solution:

k1 + 5k2 + k3 = 0
−k1 + 3k2 + 3k3 = 0    (9)
−2k2 − k3 = 0

We leave it for you to show that this linear system has nontrivial solutions either by solving it directly or by showing that the coefficient matrix has determinant zero. Thus, the set {p1, p2, p3} is linearly dependent.

Sets with One or Two Vectors  The following useful theorem is concerned with the linear independence and linear dependence of sets with one or two vectors and sets that contain the zero vector.

THEOREM 4.3.2
(a) A finite set that contains 0 is linearly dependent.
(b) A set with exactly one vector is linearly independent if and only if that vector is not 0.
(c) A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other.

We will prove part (a) and leave the rest as exercises.

Proof (a)  For any vectors v1, v2, ..., vr, the set S = {v1, v2, ..., vr, 0} is linearly dependent since the equation

0v1 + 0v2 + ··· + 0vr + 1(0) = 0

expresses 0 as a linear combination of the vectors in S with coefficients that are not all zero.
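The dependence found in Example 5 can be exhibited concretely: working with coordinate vectors relative to 1, x, x^2, the coefficients k1 = −3, k2 = 1, k3 = −2 form one nontrivial solution of system (9). The sketch below (illustrative Python, not part of the text) verifies that this combination of p1, p2, p3 is the zero polynomial:

```python
# Coordinate vectors of p1, p2, p3 relative to 1, x, x^2
# (constant term, x coefficient, x^2 coefficient).
p1 = (1, -1, 0)   # 1 - x
p2 = (5, 3, -2)   # 5 + 3x - 2x^2
p3 = (1, 3, -1)   # 1 + 3x - x^2

# One nontrivial solution of system (9); any scalar multiple also works.
k1, k2, k3 = -3, 1, -2
combo = tuple(k1*a + k2*b + k3*c for a, b, c in zip(p1, p2, p3))
```

Since combo is the zero vector while (k1, k2, k3) is not, the set {p1, p2, p3} is linearly dependent, exactly as Theorem 4.3.1 predicts.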
EXAMPLE 6  Linear Independence of Two Functions
The functions f1 = x and f2 = sin x are linearly independent vectors in F(−∞, ∞) since neither function is a scalar multiple of the other. On the other hand, the two functions g1 = sin 2x and g2 = sin x cos x are linearly dependent because the trigonometric identity sin 2x = 2 sin x cos x reveals that g1 and g2 are scalar multiples of each other.

A Geometric Interpretation of Linear Independence  Linear independence has the following useful geometric interpretations in R^2 and R^3:
• Two vectors in R^2 or R^3 are linearly independent if and only if they do not lie on the same line when they have their initial points at the origin. Otherwise one would be a scalar multiple of the other (Figure 4.3.3).

Figure 4.3.3  (a) Linearly dependent  (b) Linearly dependent  (c) Linearly independent

• Three vectors in R^3 are linearly independent if and only if they do not lie in the same plane when they have their initial points at the origin. Otherwise at least one would be a linear combination of the other two (Figure 4.3.4).

Figure 4.3.4  (a) Linearly dependent  (b) Linearly dependent  (c) Linearly independent

At the beginning of this section we observed that a third coordinate axis in R^2 is superfluous by showing that a unit vector along such an axis would have to be expressible as a linear combination of unit vectors along the positive x- and y-axes. That result is a consequence of the next theorem, which shows that there can be at most n vectors in any linearly independent set in R^n.

THEOREM 4.3.3  Let S = {v1, v2, ..., vr} be a set of vectors in R^n. If r > n, then S is linearly dependent.

Proof  Suppose that

v1 = (v11, v12, ..., v1n)
v2 = (v21, v22, ..., v2n)
...
vr = (vr1, vr2, ..., vrn)

and consider the equation

k1v1 + k2v2 + ··· + krvr = 0

If we express both sides of this equation in terms of components and then equate the corresponding components, we obtain the system

v11k1 + v21k2 + ··· + vr1kr = 0
v12k1 + v22k2 + ··· + vr2kr = 0
...
v1nk1 + v2nk2 + ··· + vrnkr = 0

This is a homogeneous system of n equations in the r unknowns k1, ..., kr. Since r > n, it follows from Theorem 1.2.2 that the system has nontrivial solutions. Therefore, S = {v1, v2, ..., vr} is a linearly dependent set.

It follows from Theorem 4.3.3 that a set in R^2 with more than two vectors is linearly dependent and a set in R^3 with more than three vectors is linearly dependent.

Linear Independence of Functions (Calculus Required)  Sometimes linear dependence of functions can be deduced from known identities. For example, the functions

f1 = sin^2 x, f2 = cos^2 x, f3 = 5

form a linearly dependent set in F(−∞, ∞), since the equation

5f1 + 5f2 − f3 = 5 sin^2 x + 5 cos^2 x − 5 = 5(sin^2 x + cos^2 x) − 5 = 0

expresses 0 as a linear combination of f1, f2, and f3 with coefficients that are not all zero. However, it is relatively rare that linear independence or dependence of functions can be ascertained by algebraic or trigonometric methods. To make matters worse, there is no general method for doing that either. That said, there does exist a theorem that can be useful for that purpose in certain cases. The following definition is needed for that theorem.

DEFINITION 2  If f1 = f1(x), f2 = f2(x), ..., fn = fn(x) are functions that are n − 1 times differentiable on the interval (−∞, ∞), then the determinant

W(x) = det
[ f1(x)        f2(x)        ···  fn(x)        ]
[ f1′(x)       f2′(x)       ···  fn′(x)       ]
[ ...                                          ]
[ f1^(n−1)(x)  f2^(n−1)(x)  ···  fn^(n−1)(x)  ]

is called the Wronskian of f1, f2, ..., fn.

Historical Note  The Polish-French mathematician Józef Hoëné de Wroński (1776–1853) was born Józef Hoëné and adopted the name Wroński after he married. Wroński's life was fraught with controversy and conflict, which some say was due to psychopathic tendencies and his exaggeration of the importance of his own work.
Although Wroński's work was dismissed as rubbish for many years, and much of it was indeed erroneous, some of his ideas contained hidden brilliance and have survived. Among other things, Wroński designed a caterpillar vehicle to compete with trains (though it was never manufactured) and did research on the famous problem of determining the longitude of a ship at sea. His final years were spent in poverty. [Image: TopFoto/The Image Works]

Suppose for the moment that f1 = f1(x), f2 = f2(x), ..., fn = fn(x) are linearly dependent vectors in C^(n−1)(−∞, ∞). This implies that the vector equation

k1f1 + k2f2 + ··· + knfn = 0

is satisfied by values of the coefficients k1, k2, ..., kn that are not all zero, and for these coefficients the equation

k1f1(x) + k2f2(x) + ··· + knfn(x) = 0

is satisfied for all x in (−∞, ∞). Using this equation together with those that result by differentiating it n − 1 times we obtain the linear system

k1f1(x) + k2f2(x) + ··· + knfn(x) = 0
k1f1′(x) + k2f2′(x) + ··· + knfn′(x) = 0
...
k1f1^(n−1)(x) + k2f2^(n−1)(x) + ··· + knfn^(n−1)(x) = 0

Thus, the linear dependence of f1, f2, ..., fn implies that the linear system

[ f1(x)        f2(x)        ···  fn(x)        ] [k1]   [0]
[ f1′(x)       f2′(x)       ···  fn′(x)       ] [k2] = [0]    (10)
[ ...                                          ] [...]  [...]
[ f1^(n−1)(x)  f2^(n−1)(x)  ···  fn^(n−1)(x)  ] [kn]   [0]

has a nontrivial solution for every x in the interval (−∞, ∞), and this in turn implies that the determinant of the coefficient matrix of (10) is zero for every such x. Since this determinant is the Wronskian of f1, f2, ..., fn, we have established the following result.

THEOREM 4.3.4  If the functions f1, f2, ..., fn have n − 1 continuous derivatives on the interval (−∞, ∞), and if the Wronskian of these functions is not identically zero on (−∞, ∞), then these functions form a linearly independent set of vectors in C^(n−1)(−∞, ∞).

WARNING  The converse of Theorem 4.3.4 is false. If the Wronskian of f1, f2, ..., fn is identically zero on (−∞, ∞), then no conclusion can be reached about the linear independence of {f1, f2, ..., fn}; this set of vectors may be linearly independent or linearly dependent.

In Example 6 we showed that x and sin x are linearly independent functions by observing that neither is a scalar multiple of the other. The following example illustrates how to obtain the same result using the Wronskian (though it is a more complicated procedure in this particular case).

EXAMPLE 7  Linear Independence Using the Wronskian
Use the Wronskian to show that f1 = x and f2 = sin x are linearly independent vectors in C^∞(−∞, ∞).
Solution  The Wronskian is

W(x) = det
[ x   sin x ]
[ 1   cos x ]  = x cos x − sin x

This function is not identically zero on the interval (−∞, ∞) since, for example,

W(π) = π cos π − sin π = −π

Thus, the functions are linearly independent.
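The Wronskian in Example 7 is easy to probe numerically. A minimal sketch (our own Python, not part of the text); the function name wronskian is an assumption:

```python
import math

def wronskian(x):
    """W(x) = det [[x, sin x], [1, cos x]] = x*cos(x) - sin(x)."""
    return x * math.cos(x) - math.sin(x)
```

For instance, wronskian(math.pi) is approximately −π, so W is not identically zero and Theorem 4.3.4 gives independence. Note that wronskian(0.0) is 0; a Wronskian may vanish at individual points without being identically zero.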

EXAMPLE 8  Linear Independence Using the Wronskian
Use the Wronskian to show that f1 = 1, f2 = e^x, and f3 = e^(2x) are linearly independent vectors in C^∞(−∞, ∞).

Solution  The Wronskian is

W(x) = det
[ 1   e^x   e^(2x)  ]
[ 0   e^x   2e^(2x) ]
[ 0   e^x   4e^(2x) ]  = 2e^(3x)

This function is obviously not identically zero on (−∞, ∞), so f1, f2, and f3 form a linearly independent set.

OPTIONAL  We will close this section by proving Theorem 4.3.1.

Proof of Theorem 4.3.1  We will prove this theorem in the case where the set S has two or more vectors, and leave the case where S has only one vector as an exercise. Assume first that S is linearly independent. We will show that if the equation

k1v1 + k2v2 + ··· + krvr = 0

can be satisfied with coefficients that are not all zero, then at least one of the vectors in S must be expressible as a linear combination of the others, thereby contradicting the assumption of linear independence. To be specific, suppose that k1 ≠ 0. Then we can rewrite this equation as

v1 = (−k2/k1)v2 + ··· + (−kr/k1)vr

which expresses v1 as a linear combination of the other vectors in S. Conversely, we must show that if the only coefficients satisfying that equation are

k1 = 0, k2 = 0, ..., kr = 0

then the vectors in S must be linearly independent. But if this were true of the coefficients and the vectors were not linearly independent, then at least one of them would be expressible as a linear combination of the others, say

v1 = c2v2 + ··· + crvr

which we can rewrite as

v1 + (−c2)v2 + ··· + (−cr)vr = 0

But this contradicts our assumption that the equation can only be satisfied by coefficients that are all zero. Thus, the vectors in S must be linearly independent.

Exercise Set 4.3
1. Explain why the following form linearly dependent sets of vectors. (Solve this problem by inspection.)
(a) u1 = (−1, 2, 4) and u2 = (5, −10, −20) in R^3
(b) u1 = (3, −1), u2 = (4, 5), u3 = (−4, 7) in R^2
(c) p1 = 3 − 2x + x^2 and p2 = 6 − 4x + 2x^2 in P2
(d) A = [−3 4; 2 0] and B = [3 −4; −2 0] in M22
2. In each part, determine whether the vectors are linearly independent or are linearly dependent in R^3.
(a) (−3, 0, 4), (5, −1, 2), (1, 1, 3)
(b) (−2, 0, 1), (3, 2, 5), (6, −1, 1), (7, 0, −2)
3.
In each part, determine whether the vectors are linearly independent or are linearly dependent in R^4.
(a) (3, 8, 7, 3), (, 5, 3, ), (,,, 6), (4,, 6, 4)
(b) (3,, 3, 6), (,, 3, ), (,,, ), (,,, )

4. In each part, determine whether the vectors are linearly independent or are linearly dependent in P2.
(a) 2 − x + 4x^2, 3 + 6x + 2x^2, 2 + 10x − 4x^2
(b) , + 4, , 7+

5. In each part, determine whether the matrices are linearly independent or dependent.
(a) , ,  in M22
(b) , ,  in M23

6. Determine all values of k for which the following matrices are linearly independent in M22. , , k k 3

7. In each part, determine whether the three vectors lie in a plane in R^3.
(a) v1 = (2, −2, 0), v2 = (6, 1, 4), v3 = (2, 0, −4)
(b) v1 = (−6, 7, 2), v2 = (3, 2, 4), v3 = (4, −1, 2)

8. In each part, determine whether the three vectors lie on the same line in R^3.
(a) v1 = (,, 3), v2 = (, 4, 6), v3 = ( 3, 6, )
(b) v1 = (,, 4), v2 = (4,, 3), v3 = (, 7, 6)
(c) v1 = (4, 6, 8), v2 = (2, 3, 4), v3 = (−2, −3, −4)

9. (a) Show that the three vectors v1 = (0, 3, 1, −1), v2 = (6, 0, 5, 1), and v3 = (4, −7, 1, 3) form a linearly dependent set in R^4.
(b) Express each vector in part (a) as a linear combination of the other two.

10. (a) Show that the vectors v1 = (1, 2, 3, 4), v2 = (0, 1, 0, −1), and v3 = (1, 3, 3, 3) form a linearly dependent set in R^4.
(b) Express each vector in part (a) as a linear combination of the other two.

11. For which real values of λ do the following vectors form a linearly dependent set in R^3?
v1 = (λ, −1/2, −1/2), v2 = (−1/2, λ, −1/2), v3 = (−1/2, −1/2, λ)

12. Under what conditions is a set with one vector linearly independent?

13. In each part, let T_A: R^2 → R^2 be multiplication by A, and let u1 = (, ) and u2 = (, ). Determine whether the set {T_A(u1), T_A(u2)} is linearly independent in R^2.
(a) A =  (b) A =

14. In each part, let T_A: R^3 → R^3 be multiplication by A, and let u1 = (,, ), u2 = (,, ), and u3 = (,, ). Determine whether the set {T_A(u1), T_A(u2), T_A(u3)} is linearly independent in R^3.
(a) A = 3 (b) A = 3

15. Are the vectors v1, v2, and v3 in part (a) of the accompanying figure linearly independent? What about those in part (b)? Explain.
[Figure Ex-15: two xyz-coordinate sketches, parts (a) and (b), each showing vectors v1, v2, v3.]

16.
By using appropriate identities, where required, determine which of the following sets of vectors in F(−∞, ∞) are linearly dependent.
(a) 6, 3 sin^2 x, 2 cos^2 x
(b) x, cos x
(c) 1, sin x, sin 2x
(d) cos 2x, sin^2 x, cos^2 x
(e) (3 − x)^2, x^2 − 6x, 5
(f) 0, cos^3 πx, sin^5 3πx

17. (Calculus required) The functions f1(x) = x and f2(x) = cos x are linearly independent in F(−∞, ∞) because neither function is a scalar multiple of the other. Confirm the linear independence using the Wronskian.

18. (Calculus required) The functions f1(x) = sin x and f2(x) = cos x are linearly independent in F(−∞, ∞) because neither function is a scalar multiple of the other. Confirm the linear independence using the Wronskian.

19. (Calculus required) Use the Wronskian to show that the following sets of vectors are linearly independent.
(a) 1, x, e^x
(b) 1, x, x^2

20. (Calculus required) Use the Wronskian to show that the functions f1(x) = e^x, f2(x) = xe^x, and f3(x) = x^2 e^x are linearly independent vectors in C^2(−∞, ∞).

21. (Calculus required) Use the Wronskian to show that the functions f1(x) = sin x, f2(x) = cos x, and f3(x) = x cos x are linearly independent vectors in C^2(−∞, ∞).

22. Show that for any vectors u, v, and w in a vector space V, the vectors u − v, v − w, and w − u form a linearly dependent set.

23. (a) In Example 1 we showed that the mutually orthogonal vectors i, j, and k form a linearly independent set of vectors in R^3. Do you think that every set of three nonzero mutually orthogonal vectors in R^3 is linearly independent? Justify your conclusion with a geometric argument.
(b) Justify your conclusion with an algebraic argument. [Hint: Use dot products.]

Working with Proofs

24. Prove that if {v1, v2, v3} is a linearly independent set of vectors, then so are {v1, v2}, {v1, v3}, {v2, v3}, {v1}, {v2}, and {v3}.

25. Prove that if S = {v1, v2, ..., vr} is a linearly independent set of vectors, then so is every nonempty subset of S.

26. Prove that if S = {v1, v2, v3} is a linearly dependent set of vectors in a vector space V, and v4 is any vector in V that is not in S, then {v1, v2, v3, v4} is also linearly dependent.

27. Prove that if S = {v1, v2, ..., vr} is a linearly dependent set of vectors in a vector space V, and if vr+1, ..., vn are any vectors in V that are not in S, then {v1, v2, ..., vr, vr+1, ..., vn} is also linearly dependent.

28. Prove that in P2 every set with more than three vectors is linearly dependent.

29. Prove that if {v1, v2} is linearly independent and v3 does not lie in span{v1, v2}, then {v1, v2, v3} is linearly independent.

30. Use part (a) of Theorem 4.3.1 to prove part (b).

31. Prove part (b) of Theorem 4.3.2.

32. Prove part (c) of Theorem 4.3.2.

True-False Exercises

TF. In parts (a)–(h) determine whether the statement is true or false, and justify your answer.
(a) A set containing a single vector is linearly independent.
(b) The set of vectors {v, kv} is linearly dependent for every scalar k.
(c) Every linearly dependent set contains the zero vector.
(d) If the set of vectors {v1, v2, v3} is linearly independent, then {kv1, kv2, kv3} is also linearly independent for every nonzero scalar k.
(e) If v1, ..., vn are linearly dependent nonzero vectors, then at least one vector vk is a unique linear combination of v1, ..., vk−1.
(f) The set of 2 × 2 matrices that contain exactly two 1's and two 0's is a linearly independent set in M22.
(g) The three polynomials (x − 1)(x + 2), x(x + 2), and x(x − 1) are linearly independent.
(h) The functions f1 and f2 are linearly dependent if there is a real number x such that k1 f1(x) + k2 f2(x) = 0 for some scalars k1 and k2.

Working with Technology

T1. Devise three different methods for using your technology utility to determine whether a set of vectors in R^n is linearly independent, and then use each of those methods to determine whether the following vectors are linearly independent.
v1 = (4, −5, 2, 6), v2 = (2, −2, 1, 3), v3 = (6, −3, 3, 9), v4 = (4, −1, 5, 6)

T2. Show that S = {cos t, sin t, cos 2t, sin 2t} is a linearly independent set in C(−∞, ∞) by evaluating the left side of the equation
c1 cos t + c2 sin t + c3 cos 2t + c4 sin 2t = 0
at sufficiently many values of t to obtain a linear system whose only solution is c1 = c2 = c3 = c4 = 0.

4.4 Coordinates and Basis

We usually think of a line as being one-dimensional, a plane as two-dimensional, and the space around us as three-dimensional. It is the primary goal of this section and the next to make this intuitive notion of dimension precise. In this section we will discuss coordinate systems in general vector spaces and lay the groundwork for a precise definition of dimension in the next section.

Coordinate Systems in Linear Algebra

In analytic geometry one uses rectangular coordinate systems to create a one-to-one correspondence between points in 2-space and ordered pairs of real numbers and between points in 3-space and ordered triples of real numbers (Figure 4.4.1). Although rectangular coordinate systems are common, they are not essential. For example, Figure 4.4.2 shows coordinate systems in 2-space and 3-space in which the coordinate axes are not mutually perpendicular.

[Figure 4.4.1: Coordinates of P(a, b) in a rectangular coordinate system in 2-space, and of P(a, b, c) in a rectangular coordinate system in 3-space.]

[Figure 4.4.2: Coordinates of P in a nonrectangular coordinate system in 2-space and in 3-space.]

In linear algebra coordinate systems are commonly specified using vectors rather than coordinate axes. For example, in Figure 4.4.3 we have re-created the coordinate systems in Figure 4.4.2 by using unit vectors to identify the positive directions and then attaching coordinates to a point P using the scalar coefficients in the equations
OP = a u1 + b u2  and  OP = a u1 + b u2 + c u3

[Figure 4.4.3: The point P(a, b) with OP = a u1 + b u2, and the point P(a, b, c) with OP = a u1 + b u2 + c u3.]

Units of measurement are essential ingredients of any coordinate system. In geometry problems one tries to use the same unit of measurement on all axes to avoid distorting the shapes of figures. This is less important in applications where coordinates represent physical quantities with diverse units (for example, time in seconds on one axis and temperature in degrees Celsius on another axis). To allow for this level of generality, we will relax the requirement that unit vectors be used to identify the positive directions and require only that those vectors be linearly independent. We will refer to these as the basis vectors for the coordinate system. In summary, it is the directions of the basis vectors that establish the positive directions, and it is the lengths of the basis vectors that establish the spacing between the integer points on the axes (Figure 4.4.4).
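As a small numeric illustration of attaching coordinates to a point via basis vectors (the basis vectors and the point below are hypothetical, chosen only for this sketch), finding the coordinates (a, b) with OP = a u1 + b u2 amounts to solving a linear system whose coefficient matrix has the basis vectors as columns:

```python
import numpy as np

# Hypothetical basis vectors for a skewed 2D coordinate system:
# u1 along the x-axis, u2 a unit vector 45 degrees from it
u1 = np.array([1.0, 0.0])
u2 = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Coordinates (a, b) of P satisfy OP = a*u1 + b*u2, i.e. [u1 u2][a b]^T = P
P = np.array([3.0, 2.0])
a, b = np.linalg.solve(np.column_stack([u1, u2]), P)

# Reconstructing P from its skewed coordinates recovers the original point
assert np.allclose(a*u1 + b*u2, P)
```

Because u1 and u2 are linearly independent, the matrix with them as columns is invertible, so the coordinates (a, b) exist and are unique for every point P.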

[Figure 4.4.4: Four coordinate grids: equal spacing with perpendicular axes, unequal spacing with perpendicular axes, equal spacing with skew axes, and unequal spacing with skew axes.]

Basis for a Vector Space

Our next goal is to extend the concepts of basis vectors and coordinate systems to general vector spaces, and for that purpose we will need some definitions. Vector spaces fall into two categories: A vector space V is said to be finite-dimensional if there is a finite set of vectors in V that spans V and is said to be infinite-dimensional if no such set exists.

DEFINITION If S = {v1, v2, ..., vn} is a set of vectors in a finite-dimensional vector space V, then S is called a basis for V if:
(a) S spans V.
(b) S is linearly independent.

If you think of a basis as describing a coordinate system for a finite-dimensional vector space V, then part (a) of this definition guarantees that there are enough basis vectors to provide coordinates for all vectors in V, and part (b) guarantees that there is no interrelationship between the basis vectors. Here are some examples.

EXAMPLE 1 The Standard Basis for R^n
Recall from Section 4.2 that the standard unit vectors
e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)
span R^n and from Section 4.3 that they are linearly independent. Thus, they form a basis for R^n that we call the standard basis for R^n. In particular,
i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
is the standard basis for R^3.

EXAMPLE 2 The Standard Basis for P_n
Show that S = {1, x, x^2, ..., x^n} is a basis for the vector space P_n of polynomials of degree n or less.

Solution We must show that the polynomials in S are linearly independent and span P_n. Let us denote these polynomials by
p0 = 1, p1 = x, p2 = x^2, ..., pn = x^n
We showed in Section 4.2 that these vectors span P_n and in Section 4.3 that they are linearly independent. Thus, they form a basis for P_n that we call the standard basis for P_n.

From Examples 1 and 3 you can see that a vector space can have more than one basis.

EXAMPLE 3 Another Basis for R^3
Show that the vectors v1 = (1, 2, 1), v2 = (2, 9, 0), and v3 = (3, 3, 4) form a basis for R^3.

Solution We must show that these vectors are linearly independent and span R^3. To prove linear independence we must show that the vector equation
c1v1 + c2v2 + c3v3 = 0    (1)
has only the trivial solution; and to prove that the vectors span R^3 we must show that every vector b = (b1, b2, b3) in R^3 can be expressed as
c1v1 + c2v2 + c3v3 = b    (2)
By equating corresponding components on the two sides, these two equations can be expressed as the linear systems
c1 + 2c2 + 3c3 = 0        c1 + 2c2 + 3c3 = b1
2c1 + 9c2 + 3c3 = 0  and  2c1 + 9c2 + 3c3 = b2    (3)
c1 + 4c3 = 0              c1 + 4c3 = b3
(verify). Thus, we have reduced the problem to showing that in (3) the homogeneous system has only the trivial solution and that the nonhomogeneous system is consistent for all values of b1, b2, and b3. But the two systems have the same coefficient matrix
A = [ 1, 2, 3 ; 2, 9, 3 ; 1, 0, 4 ]
so it follows from parts (b), (e), and (g) of Theorem 2.3.8 that we can prove both results at the same time by showing that det(A) ≠ 0. We leave it for you to confirm that det(A) = −1, which proves that the vectors v1, v2, and v3 form a basis for R^3.

EXAMPLE 4 The Standard Basis for M_mn
Show that the matrices
M1 = [ 1, 0 ; 0, 0 ], M2 = [ 0, 1 ; 0, 0 ], M3 = [ 0, 0 ; 1, 0 ], M4 = [ 0, 0 ; 0, 1 ]
form a basis for the vector space M22 of 2 × 2 matrices.

Solution We must show that the matrices are linearly independent and span M22. To prove linear independence we must show that the equation
c1M1 + c2M2 + c3M3 + c4M4 = 0    (4)
has only the trivial solution, where 0 is the 2 × 2 zero matrix; and to prove that the matrices span M22 we must show that every 2 × 2 matrix
B = [ a, b ; c, d ]
can be expressed as
c1M1 + c2M2 + c3M3 + c4M4 = B    (5)
The matrix forms of Equations (4) and (5) are
c1M1 + c2M2 + c3M3 + c4M4 = [ 0, 0 ; 0, 0 ]  and  c1M1 + c2M2 + c3M3 + c4M4 = [ a, b ; c, d ]

which can be rewritten as
[ c1, c2 ; c3, c4 ] = [ 0, 0 ; 0, 0 ]  and  [ c1, c2 ; c3, c4 ] = [ a, b ; c, d ]
Since the first equation has only the trivial solution
c1 = c2 = c3 = c4 = 0
the matrices are linearly independent, and since the second equation has the solution
c1 = a, c2 = b, c3 = c, c4 = d
the matrices span M22. This proves that the matrices M1, M2, M3, M4 form a basis for M22. More generally, the mn different matrices whose entries are zero except for a single entry of 1 form a basis for M_mn called the standard basis for M_mn.

The simplest of all vector spaces is the zero vector space V = {0}. This space is finite-dimensional because it is spanned by the vector 0. However, it has no basis in the sense of the definition above because {0} is not a linearly independent set (why?). However, we will find it useful to define the empty set Ø to be a basis for this vector space.

EXAMPLE 5 An Infinite-Dimensional Vector Space
Show that the vector space P∞ of all polynomials with real coefficients is infinite-dimensional by showing that it has no finite spanning set.

Solution If there were a finite spanning set, say S = {p1, p2, ..., pr}, then the degrees of the polynomials in S would have a maximum value, say n; and this in turn would imply that any linear combination of the polynomials in S would have degree at most n. Thus, there would be no way to express the polynomial x^(n+1) as a linear combination of the polynomials in S, contradicting the fact that the vectors in S span P∞.

EXAMPLE 6 Some Finite- and Infinite-Dimensional Spaces
In Examples 1, 2, and 4 we found bases for R^n, P_n, and M_mn, so these vector spaces are finite-dimensional. We showed in Example 5 that the vector space P∞ is not spanned by finitely many vectors and hence is infinite-dimensional. Some other examples of infinite-dimensional vector spaces are R∞, F(−∞, ∞), C(−∞, ∞), C^m(−∞, ∞), and C∞(−∞, ∞).

Coordinates Relative to a Basis

Earlier in this section we drew an informal analogy between basis vectors and coordinate systems.
Our next goal is to make this informal idea precise by defining the notion of a coordinate system in a general vector space. The following theorem will be our first step in that direction.

THEOREM 4.4.1 Uniqueness of Basis Representation
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector v in V can be expressed in the form
v = c1v1 + c2v2 + ··· + cnvn
in exactly one way.

Proof Since S spans V, it follows from the definition of a spanning set that every vector in V is expressible as a linear combination of the vectors in S. To see that there is only one way to express a vector as a linear combination of the vectors in S, suppose that some vector v can be written as
v = c1v1 + c2v2 + ··· + cnvn

and also as
v = k1v1 + k2v2 + ··· + knvn
Subtracting the second equation from the first gives
0 = (c1 − k1)v1 + (c2 − k2)v2 + ··· + (cn − kn)vn
Since the right side of this equation is a linear combination of vectors in S, the linear independence of S implies that
c1 − k1 = 0, c2 − k2 = 0, ..., cn − kn = 0

[Figure 4.4.5: The vector (a, b, c) expressed as ai + bj + ck, where i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1).]

that is,
c1 = k1, c2 = k2, ..., cn = kn
Thus, the two expressions for v are the same.

We now have all of the ingredients required to define the notion of coordinates in a general vector space V. For motivation, observe that in R^3, for example, the coordinates (a, b, c) of a vector v are precisely the coefficients in the formula
v = ai + bj + ck
that expresses v as a linear combination of the standard basis vectors for R^3 (see Figure 4.4.5). The following definition generalizes this idea.

Sometimes it will be desirable to write a coordinate vector as a column matrix or row matrix, in which case we will denote it with square brackets as [v]_S. We will refer to this as the matrix form of the coordinate vector and (6) as the comma-delimited form.

DEFINITION If S = {v1, v2, ..., vn} is a basis for a vector space V, and
v = c1v1 + c2v2 + ··· + cnvn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, ..., cn are called the coordinates of v relative to the basis S. The vector (c1, c2, ..., cn) in R^n constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
(v)_S = (c1, c2, ..., cn)    (6)

Remark It is standard to regard two sets to be the same if they have the same members, even if those members are written in a different order. In particular, in a basis for a vector space V, which is a set of linearly independent vectors that span V, the order in which those vectors are listed does not generally matter.
However, the order in which they are listed is critical for coordinate vectors, since changing the order of the basis vectors changes the coordinate vectors [for example, in R^2 the coordinate pair (1, 2) is not the same as the coordinate pair (2, 1)]. To deal with this complication, many authors define an ordered basis to be one in which the listing order of the basis vectors remains fixed. In all discussions involving coordinate vectors we will assume that the underlying basis is ordered, even though we may not say so explicitly.

Observe that (v)_S is a vector in R^n, so that once an ordered basis S is given for a vector space V, Theorem 4.4.1 establishes a one-to-one correspondence between vectors in V and vectors in R^n (Figure 4.4.6).

[Figure 4.4.6: A one-to-one correspondence v ↔ (v)_S between V and R^n.]
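The one-to-one correspondence v ↔ (v)_S can also be seen computationally: stacking the basis vectors as columns of a matrix A, the coordinate vector of v is the unique solution of Ac = v, and multiplying A by that solution reconstructs v. The sketch below is an illustrative aid (not part of the text), using the basis for R^3 from Example 3 as reconstructed in this transcription:

```python
import numpy as np

# Basis for R^3 (Example 3): columns of A are the basis vectors
v1 = np.array([1.0, 2.0, 1.0])
v2 = np.array([2.0, 9.0, 0.0])
v3 = np.array([3.0, 3.0, 4.0])
A = np.column_stack([v1, v2, v3])
assert not np.isclose(np.linalg.det(A), 0.0)   # a basis: det(A) != 0

# v -> (v)_S : solve A c = v, unique by Theorem 4.4.1
v = np.array([5.0, -1.0, 9.0])
coords = np.linalg.solve(A, v)
assert np.allclose(coords, [1.0, -1.0, 2.0])

# (v)_S -> v : the correspondence inverts by matrix multiplication
assert np.allclose(A @ coords, v)
```

The round trip v → (v)_S → v is exactly the correspondence pictured in Figure 4.4.6, and it exists precisely because the columns of A are linearly independent.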

EXAMPLE 7 Coordinates Relative to the Standard Basis for R^n
In the special case where V = R^n and S is the standard basis, the coordinate vector (v)_S and the vector v are the same; that is,
v = (v)_S
For example, in R^3 the representation of a vector v = (a, b, c) as a linear combination of the vectors in the standard basis S = {i, j, k} is
v = ai + bj + ck
so the coordinate vector relative to this basis is (v)_S = (a, b, c), which is the same as the vector v.

EXAMPLE 8 Coordinate Vectors Relative to Standard Bases
(a) Find the coordinate vector for the polynomial
p(x) = c0 + c1x + c2x^2 + ··· + cnx^n
relative to the standard basis for the vector space P_n.
(b) Find the coordinate vector of
B = [ a, b ; c, d ]
relative to the standard basis for M22.

Solution (a) The given formula for p(x) expresses this polynomial as a linear combination of the standard basis vectors S = {1, x, x^2, ..., x^n}. Thus, the coordinate vector for p relative to S is
(p)_S = (c0, c1, c2, ..., cn)

Solution (b) We showed in Example 4 that the representation of a vector
B = [ a, b ; c, d ]
as a linear combination of the standard basis vectors is
B = aM1 + bM2 + cM3 + dM4
so the coordinate vector of B relative to S is
(B)_S = (a, b, c, d)

EXAMPLE 9 Coordinates in R^3
(a) We showed in Example 3 that the vectors
v1 = (1, 2, 1), v2 = (2, 9, 0), v3 = (3, 3, 4)
form a basis for R^3. Find the coordinate vector of v = (5, −1, 9) relative to the basis S = {v1, v2, v3}.
(b) Find the vector v in R^3 whose coordinate vector relative to S is (v)_S = (−1, 3, 2).

Solution (a) To find (v)_S we must first express v as a linear combination of the vectors in S; that is, we must find values of c1, c2, and c3 such that
v = c1v1 + c2v2 + c3v3
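Example 8(a) says that, relative to the standard basis of P_n, the coordinate vector of a polynomial is simply its list of coefficients. A quick SymPy sketch (the particular polynomial below is hypothetical, chosen only for illustration):

```python
import sympy as sp

x = sp.symbols('x')
p = 4 - 3*x + 5*x**2     # hypothetical polynomial in P2, for illustration
# Coordinates relative to the standard basis {1, x, x^2} are the
# coefficients c0, c1, c2 in p = c0*1 + c1*x + c2*x^2
coords = [p.coeff(x, k) for k in range(3)]
assert coords == [4, -3, 5]
```

This is why the standard basis is so convenient: no linear system needs to be solved, because the representation p = c0 + c1x + c2x^2 is already in basis form.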

or, in terms of components,
(5, −1, 9) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4)
Equating corresponding components gives
c1 + 2c2 + 3c3 = 5
2c1 + 9c2 + 3c3 = −1
c1 + 4c3 = 9
Solving this system we obtain c1 = 1, c2 = −1, c3 = 2 (verify). Therefore,
(v)_S = (1, −1, 2)

Solution (b) Using the definition of (v)_S, we obtain
v = (−1)v1 + 3v2 + 2v3 = (−1)(1, 2, 1) + 3(2, 9, 0) + 2(3, 3, 4) = (11, 31, 7)

Exercise Set 4.4

1. Use the method of Example 3 to show that the following set of vectors forms a basis for R^2. {(2, 1), (3, 0)}

2. Use the method of Example 3 to show that the following set of vectors forms a basis for R^3. {(3, 1, −4), (2, 5, 6), (1, 4, 8)}

3. Show that the following polynomials form a basis for P2. +,,

4. Show that the following polynomials form a basis for P3. +,,, 3

5. Show that the following matrices form a basis for M22. 3 6 3 6, , ,

6. Show that the following matrices form a basis for M22. , , ,

7. In each part, show that the set of vectors is not a basis for R^3.
(a) {(2, −3, 1), (4, 1, 1), (0, −7, 1)}
(b) {(1, 6, 4), (2, 4, −1), (−1, 2, 5)}

8. Show that the following polynomials do not form a basis for P2. 1 − 3x + 2x^2, 1 + x + 4x^2, 1 − 7x

9. Show that the following matrices do not form a basis for M22. , , ,

10. Let V be the space spanned by v1 = cos^2 x, v2 = sin^2 x, v3 = cos 2x.
(a) Show that S = {v1, v2, v3} is not a basis for V.
(b) Find a basis for V.

11. Find the coordinate vector of w relative to the basis S = {u1, u2} for R^2.
(a) u1 = (2, −4), u2 = (3, 8); w = (1, 1)
(b) u1 = (1, 1), u2 = (0, 2); w = (a, b)

12. Find the coordinate vector of w relative to the basis S = {u1, u2} for R^2.
(a) u1 = (1, −1), u2 = (1, 1); w = (1, 0)
(b) u1 = (1, −1), u2 = (1, 1); w = (0, 1)

13. Find the coordinate vector of v relative to the basis S = {v1, v2, v3} for R^3.
(a) v = (2, −1, 3); v1 = (1, 0, 0), v2 = (2, 2, 0), v3 = (3, 3, 3)
(b) v = (5, −12, 3); v1 = (1, 2, 3), v2 = (−4, 5, 6), v3 = (7, −8, 9)

14. Find the coordinate vector of p relative to the basis S = {p1, p2, p3} for P2.
(a) p = ; p1 =, p2 =, p3 =
(b) p = 2 − x + x^2; p1 = 1 + x, p2 = 1 + x^2, p3 = x + x^2

In Exercises 15–16, first show that the set S = {A1, A2, A3, A4} is a basis for M22, then express A as a linear combination of the vectors in S, and then find the coordinate vector of A relative to S.

15. A1 =, A2 =, A3 =, A4 = ; A =

16. A1 =, A2 =, A3 =, 6 A4 = ; A = 5 3

In Exercises 17–18, first show that the set S = {p1, p2, p3} is a basis for P2, then express p as a linear combination of the vectors in S, and then find the coordinate vector of p relative to S.

17. p1 = 1 + x + x^2, p2 = x + x^2, p3 = x^2; p = 7 − x + 2x^2

18. p1 = + +, p2 = + 9, p3 = ; p =

19. In words, explain why the sets of vectors in parts (a) to (d) are not bases for the indicated vector spaces.
(a) u1 = (, ), u2 = (, 3), u3 = (, 5) for R^2
(b) u1 = (, 3, ), u2 = (6,, ) for R^3
(c) p1 = + +, p2 = for P2
(d) A =, B =, C =, D = for M22

20. In any vector space a set that contains the zero vector must be linearly dependent. Explain why this is so.

21. In each part, let T_A: R^3 → R^3 be multiplication by A, and let {e1, e2, e3} be the standard basis for R^3. Determine whether the set {T_A(e1), T_A(e2), T_A(e3)} is linearly independent in R^3.
(a) A = 3 (b) A =

22. In each part, let T_A: R^3 → R^3 be multiplication by A, and let u = (,, ). Find the coordinate vector of T_A(u) relative to the basis S = {(,, ), (,, ), (,, )} for R^3.
(a) A = (b) A =

23. The accompanying figure shows a rectangular xy-coordinate system determined by the unit basis vectors i and j and an x′y′-coordinate system determined by unit basis vectors u1 and u2. Find the x′y′-coordinates of the points whose xy-coordinates are given.
(a) (√3, 1) (b) (1, 0) (c) (0, 1) (d) (a, b)
[Figure Ex-23: the xy-axes with basis vectors i, j, and the skewed x′y′-axes with basis vectors u1, u2.]

24. The accompanying figure shows a rectangular xy-coordinate system and an x′y′-coordinate system with skewed axes. Assuming that 1-unit scales are used on all the axes, find the x′y′-coordinates of the points whose xy-coordinates are given.
(a) (1, 1) (b) (1, 0) (c) (0, 1) (d) (a, b)
[Figure Ex-24: the y′-axis makes a 45° angle with the x-axis, which coincides with the x′-axis.]

25.
The first four Hermite polynomials [named for the French mathematician Charles Hermite (1822–1901)] are
1, 2t, −2 + 4t^2, −12t + 8t^3
These polynomials have a wide variety of applications in physics and engineering.
(a) Show that the first four Hermite polynomials form a basis for P3.
(b) Let B be the basis in part (a). Find the coordinate vector of the polynomial
p(t) = −1 − 4t + 8t^2 + 8t^3
relative to B.

26. The first four Laguerre polynomials [named for the French mathematician Edmond Laguerre (1834–1886)] are
1, 1 − t, 2 − 4t + t^2, 6 − 18t + 9t^2 − t^3
(a) Show that the first four Laguerre polynomials form a basis for P3.
(b) Let B be the basis in part (a). Find the coordinate vector of the polynomial
p(t) = −10t + 9t^2 − t^3
relative to B.

27. Consider the coordinate vectors
[w]_S =, [q]_S =, [B]_S =
(a) Find w if S is the basis in Exercise 2.
(b) Find q if S is the basis in Exercise 3.
(c) Find B if S is the basis in Exercise 6.

28. The basis that we gave for M22 in Example 4 consisted of noninvertible matrices. Do you think that there is a basis for M22 consisting of invertible matrices? Justify your answer.

Working with Proofs

29. Prove that R∞ is an infinite-dimensional vector space.

30. Let T_A: R^n → R^n be multiplication by an invertible matrix A, and let {u1, u2, ..., un} be a basis for R^n. Prove that {T_A(u1), T_A(u2), ..., T_A(un)} is also a basis for R^n.

31. Prove that if V is a subspace of a vector space W and if V is infinite-dimensional, then so is W.

True-False Exercises

TF. In parts (a)–(e) determine whether the statement is true or false, and justify your answer.
(a) If V = span{v1, ..., vn}, then {v1, ..., vn} is a basis for V.
(b) Every linearly independent subset of a vector space V is a basis for V.
(c) If {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be expressed as a linear combination of v1, v2, ..., vn.
(d) The coordinate vector of a vector x in R^n relative to the standard basis for R^n is x.
(e) Every basis of P4 contains at least one polynomial of degree 3 or less.

Working with Technology

T1. Let V be the subspace of P3 spanned by the vectors
p1 =, p2 =, p3 =, p4 =
(a) Find a basis S for V.
(b) Find the coordinate vector of p = relative to the basis S you obtained in part (a).

T2. Let V be the subspace of C∞(−∞, ∞) spanned by the vectors in the set
B = {1, cos x, cos^2 x, cos^3 x, cos^4 x, cos^5 x}
and accept without proof that B is a basis for V. Confirm that the following vectors are in V, and find their coordinate vectors relative to B.
f0 = 1, f1 = cos x, f2 = cos 2x, f3 = cos 3x, f4 = cos 4x, f5 = cos 5x

4.5 Dimension

We showed in the previous section that the standard basis for R^n has n vectors and hence that the standard basis for R^3 has three vectors, the standard basis for R^2 has two vectors, and the standard basis for R^1 (= R) has one vector. Since we think of space as three-dimensional, a plane as two-dimensional, and a line as one-dimensional, there seems to be a link between the number of vectors in a basis and the dimension of a vector space. We will develop this idea in this section.

Number of Vectors in a Basis

Our first goal in this section is to establish the following fundamental theorem.

THEOREM 4.5.1 All bases for a finite-dimensional vector space have the same number of vectors.

To prove this theorem we will need the following preliminary result, whose proof is deferred to the end of the section.

THEOREM 4.5.2 Let V be an n-dimensional vector space, and let {v1, v2, ..., vn} be any basis.
(a) If a set in V has more than n vectors, then it is linearly dependent.
(b) If a set in V has fewer than n vectors, then it does not span V.

We can now see rather easily why Theorem 4.5.1 is true; for if S = {v1, v2, ..., vn} is an arbitrary basis for V, then the linear independence of S implies that any set in V with more than n vectors is linearly dependent and any set in V with fewer than n vectors does not span V. Thus, unless a set in V has exactly n vectors it cannot be a basis.

We noted in the introduction to this section that for certain familiar vector spaces the intuitive notion of dimension coincides with the number of vectors in a basis. The following definition makes this idea precise.

DEFINITION The dimension of a finite-dimensional vector space V is denoted by dim(V) and is defined to be the number of vectors in a basis for V. In addition, the zero vector space is defined to have dimension zero.

Engineers often use the term degrees of freedom as a synonym for dimension.

EXAMPLE 1 Dimensions of Some Familiar Vector Spaces
dim(R^n) = n [The standard basis has n vectors.]
dim(P_n) = n + 1 [The standard basis has n + 1 vectors.]
dim(M_mn) = mn [The standard basis has mn vectors.]

EXAMPLE 2 Dimension of Span(S)
If S = {v1, v2, ..., vr} then every vector in span(S) is expressible as a linear combination of the vectors in S. Thus, if the vectors in S are linearly independent, they automatically form a basis for span(S), from which we can conclude that
dim[span{v1, v2, ..., vr}] = r
In words, the dimension of the space spanned by a linearly independent set of vectors is equal to the number of vectors in that set.

EXAMPLE 3 Dimension of a Solution Space
Find a basis for and the dimension of the solution space of the homogeneous system
x1 + 3x2 − 2x3 + 2x5 = 0
2x1 + 6x2 − 5x3 − 2x4 + 4x5 − 3x6 = 0
5x3 + 10x4 + 15x6 = 0
2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 0

Solution In Example 6 of Section 1.2
we found the solution of this system to be
x1 = −3r − 4s − 2t, x2 = r, x3 = −2s, x4 = s, x5 = t, x6 = 0
which can be written in vector form as
(x1, x2, x3, x4, x5, x6) = (−3r − 4s − 2t, r, −2s, s, t, 0)

or, alternatively, as
(x1, x2, x3, x4, x5, x6) = r(−3, 1, 0, 0, 0, 0) + s(−4, 0, −2, 1, 0, 0) + t(−2, 0, 0, 0, 1, 0)
This shows that the vectors
v1 = (−3, 1, 0, 0, 0, 0), v2 = (−4, 0, −2, 1, 0, 0), v3 = (−2, 0, 0, 0, 1, 0)
span the solution space. We leave it for you to check that these vectors are linearly independent by showing that none of them is a linear combination of the other two (but see the remark that follows). Thus, the solution space has dimension 3.

Remark It can be shown that for any homogeneous linear system, the method of the last example always produces a basis for the solution space of the system. We omit the formal proof.

Some Fundamental Theorems

We will devote the remainder of this section to a series of theorems that reveal the subtle interrelationships among the concepts of linear independence, spanning sets, basis, and dimension. These theorems are not simply exercises in mathematical theory; they are essential to the understanding of vector spaces and the applications that build on them. We will start with a theorem (proved at the end of this section) that is concerned with the effect on linear independence and spanning if a vector is added to or removed from a nonempty set of vectors. Informally stated, if you start with a linearly independent set S and adjoin to it a vector that is not a linear combination of those already in S, then the enlarged set will still be linearly independent. Also, if you start with a set S of two or more vectors in which one of the vectors is a linear combination of the others, then that vector can be removed from S without affecting span(S) (Figure 4.5.1).

[Figure 4.5.1: The vector outside the plane can be adjoined to the other two without affecting their linear independence. Any of the vectors can be removed, and the remaining two will still span the plane. Either of the collinear vectors can be removed, and the remaining two will still span the plane.]

THEOREM 4.5.3 Plus/Minus Theorem
Let S be a nonempty set of vectors in a vector space V.
(a) If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
(b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same space; that is,
span(S) = span(S − {v})
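Both parts of the Plus/Minus Theorem can be illustrated numerically by comparing matrix ranks, since the rank of the matrix whose columns are the vectors of S equals the dimension of span(S). The vectors below are hypothetical, chosen only so the hypotheses of the theorem are easy to verify by eye:

```python
import numpy as np

def dim_span(vectors):
    # dim(span S) = rank of the matrix with the vectors of S as columns
    return np.linalg.matrix_rank(np.column_stack(vectors))

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2                     # a linear combination of v1 and v2
w  = np.array([0.0, 0.0, 1.0])   # lies outside span{v1, v2} (the xy-plane)

# Part (b): removing the dependent vector v3 does not change the span
assert dim_span([v1, v2, v3]) == dim_span([v1, v2]) == 2
# Part (a): adjoining w, which is outside span{v1, v2}, preserves independence
assert dim_span([v1, v2, w]) == 3
```

The rank comparison mirrors Figure 4.5.1: dropping a vector that is already a combination of the others leaves the span fixed, while adjoining a vector from outside the span raises the dimension by exactly one.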

EXAMPLE 4 Applying the Plus/Minus Theorem
Show that p1 = 1 − x^2, p2 = 2 − x^2, and p3 = x^3 are linearly independent vectors.

Solution The set S = {p1, p2} is linearly independent since neither vector in S is a scalar multiple of the other. Since the vector p3 cannot be expressed as a linear combination of the vectors in S (why?), it can be adjoined to S to produce a linearly independent set S ∪ {p3} = {p1, p2, p3}.

In general, to show that a set of vectors {v1, v2, ..., vn} is a basis for a vector space V, one must show that the vectors are linearly independent and span V. However, if we happen to know that V has dimension n (so that {v1, v2, ..., vn} contains the right number of vectors for a basis), then it suffices to check either linear independence or spanning; the remaining condition will hold automatically. This is the content of the following theorem.

THEOREM 4.5.4 Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors. Then S is a basis for V if and only if S spans V or S is linearly independent.

Proof Assume that S has exactly n vectors and spans V. To prove that S is a basis, we must show that S is a linearly independent set. But if this is not so, then some vector v in S is a linear combination of the remaining vectors. If we remove this vector from S, then it follows from Theorem 4.5.3(b) that the remaining set of n − 1 vectors still spans V. But this is impossible since Theorem 4.5.2(b) states that no set with fewer than n vectors can span an n-dimensional vector space. Thus S is linearly independent.

Assume that S has exactly n vectors and is a linearly independent set. To prove that S is a basis, we must show that S spans V. But if this is not so, then there is some vector v in V that is not in span(S). If we insert this vector into S, then it follows from Theorem 4.5.3(a) that this set of n + 1 vectors is still linearly independent.
But this is impossible, since Theorem 4.5.2(a) states that no set with more than n vectors in an n-dimensional vector space can be linearly independent. Thus S spans V.

EXAMPLE 5 Bases by Inspection
(a) Explain why the vectors v1 = (−3, 7) and v2 = (5, 5) form a basis for R^2.
(b) Explain why the vectors v1 = (2, 0, −1), v2 = (4, 0, 7), and v3 = (−1, 1, 4) form a basis for R^3.
Solution (a) Since neither vector is a scalar multiple of the other, the two vectors form a linearly independent set in the two-dimensional space R^2, and hence they form a basis by Theorem 4.5.4.
Solution (b) The vectors v1 and v2 form a linearly independent set in the xz-plane (why?). The vector v3 is outside of the xz-plane, so the set {v1, v2, v3} is also linearly independent. Since R^3 is three-dimensional, Theorem 4.5.4 implies that {v1, v2, v3} is a basis for the vector space R^3.

The next theorem (whose proof is deferred to the end of this section) reveals two important facts about the vectors in a finite-dimensional vector space V:

4.5 Dimension

1. Every spanning set for a subspace is either a basis for that subspace or has a basis as a subset.
2. Every linearly independent set in a subspace is either a basis for that subspace or can be extended to a basis for it.

THEOREM 4.5.5 Let S be a finite set of vectors in a finite-dimensional vector space V.
(a) If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S.
(b) If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S.

We conclude this section with a theorem that relates the dimension of a vector space to the dimensions of its subspaces.

THEOREM 4.5.6 If W is a subspace of a finite-dimensional vector space V, then:
(a) W is finite-dimensional.
(b) dim(W) ≤ dim(V).
(c) W = V if and only if dim(W) = dim(V).

Proof (a) We will leave the proof of this part as an exercise.
Proof (b) Part (a) shows that W is finite-dimensional, so it has a basis S = {w1, w2, ..., wm}. Either S is also a basis for V or it is not. If so, then dim(V) = m, which means that dim(V) = dim(W). If not, then because S is a linearly independent set it can be enlarged to a basis for V by part (b) of Theorem 4.5.5. But this implies that dim(W) < dim(V), so we have shown that dim(W) ≤ dim(V) in all cases.
Proof (c) Assume that dim(W) = dim(V) and that S = {w1, w2, ..., wm} is a basis for W. If S is not also a basis for V, then being linearly independent S can be extended to a basis for V by part (b) of Theorem 4.5.5. But this would mean that dim(V) > dim(W), which contradicts our hypothesis. Thus S must also be a basis for V, which means that W = V. The converse is obvious.

Figure 4.5.1 illustrates the geometric relationship between the subspaces of R^3 in order of increasing dimension: the origin (0-dimensional), a line through the origin (1-dimensional), a plane through the origin (2-dimensional), and R^3 itself (3-dimensional).
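The dimension counts in these theorems are easy to check numerically. The following sketch (our own illustration, not from the text; the helper names `rref` and `dim_span` are hypothetical) uses exact rational arithmetic so that row reduction introduces no rounding error. Since row operations preserve the row space, the number of pivots equals the dimension of the span of the given vectors.

```python
from fractions import Fraction

def rref(M):
    """Return (R, pivot_cols): the reduced row echelon form of M and its pivot columns."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]        # scale pivot row to a leading 1
        for i in range(len(R)):
            if i != r and R[i][c] != 0:           # clear the rest of the column
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def dim_span(vectors):
    """Dimension of span(vectors): row operations preserve the row space,
    so the pivot count of the RREF is the dimension."""
    return len(rref(vectors)[1])

# A subspace W of R^3 satisfies dim(W) <= dim(R^3) = 3.
print(dim_span([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # the xy-plane: 2
```

Three vectors are given, but the third is a linear combination of the first two, so the span is only two-dimensional, consistent with the Plus/Minus Theorem.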

OPTIONAL We conclude this section with optional proofs of Theorems 4.5.2, 4.5.3, and 4.5.5.

Proof of Theorem 4.5.2(a) Let S′ = {w1, w2, ..., wm} be any set of m vectors in V, where m > n. We want to show that S′ is linearly dependent. Since S = {v1, v2, ..., vn} is a basis, each wi can be expressed as a linear combination of the vectors in S, say
w1 = a11 v1 + a21 v2 + ... + an1 vn
w2 = a12 v1 + a22 v2 + ... + an2 vn
...
wm = a1m v1 + a2m v2 + ... + anm vn    (1)
To show that S′ is linearly dependent, we must find scalars k1, k2, ..., km, not all zero, such that
k1 w1 + k2 w2 + ... + km wm = 0    (2)
We leave it for you to verify that the equations in (1) can be rewritten in the partitioned form
[w1 w2 ... wm] = [v1 v2 ... vn] [a11 a12 ... a1m; a21 a22 ... a2m; ...; an1 an2 ... anm]    (3)
Since m > n, the linear system
[a11 a12 ... a1m; a21 a22 ... a2m; ...; an1 an2 ... anm][x1; x2; ...; xm] = [0; 0; ...; 0]
has more unknowns than equations and hence has a nontrivial solution
x1 = k1, x2 = k2, ..., xm = km    (4)
Creating a column vector from this solution and multiplying both sides of (3) on the right by this vector yields
[w1 w2 ... wm][k1; k2; ...; km] = [v1 v2 ... vn] [a11 a12 ... a1m; ...; an1 an2 ... anm][k1; k2; ...; km]
By (4), the scalars k1, ..., km solve the homogeneous system, so this simplifies to
[w1 w2 ... wm][k1; k2; ...; km] = 0
which we can rewrite as
k1 w1 + k2 w2 + ... + km wm = 0
Since the scalar coefficients in this equation are not all zero, we have proved that S′ = {w1, w2, ..., wm} is linearly dependent.

The proof of Theorem 4.5.2(b) closely parallels that of Theorem 4.5.2(a) and will be omitted.

Proof of Theorem 4.5.3(a) Assume that S = {v1, v2, ..., vr} is a linearly independent set of vectors in V, and v is a vector in V that is outside of span(S). To show that S′ = {v1, v2, ..., vr, v} is a linearly independent set, we must show that the only scalars that satisfy
k1 v1 + k2 v2 + ... + kr vr + k_{r+1} v = 0    (5)
are k1 = k2 = ... = kr = k_{r+1} = 0. But it must be true that k_{r+1} = 0, for otherwise we could solve (5) for v as a linear combination of v1, v2, ..., vr, contradicting the assumption that v is outside of span(S). Thus, (5) simplifies to
k1 v1 + k2 v2 + ... + kr vr = 0    (6)
which, by the linear independence of {v1, v2, ..., vr}, implies that
k1 = k2 = ... = kr = 0

Proof of Theorem 4.5.3(b) Assume that S = {v1, v2, ..., vr} is a set of vectors in V, and (to be specific) suppose that vr is a linear combination of v1, v2, ..., v_{r−1}, say
vr = c1 v1 + c2 v2 + ... + c_{r−1} v_{r−1}    (7)
We want to show that if vr is removed from S, then the remaining set of vectors {v1, v2, ..., v_{r−1}} still spans span(S); that is, we must show that every vector w in span(S) is expressible as a linear combination of v1, v2, ..., v_{r−1}. But if w is in span(S), then w is expressible in the form
w = k1 v1 + k2 v2 + ... + k_{r−1} v_{r−1} + kr vr
or, on substituting (7),
w = k1 v1 + k2 v2 + ... + k_{r−1} v_{r−1} + kr (c1 v1 + c2 v2 + ... + c_{r−1} v_{r−1})
which expresses w as a linear combination of v1, v2, ..., v_{r−1}.

Proof of Theorem 4.5.5(a) If S is a set of vectors that spans V but is not a basis for V, then S is a linearly dependent set. Thus some vector v in S is expressible as a linear combination of the other vectors in S. By the Plus/Minus Theorem (4.5.3(b)), we can remove v from S, and the resulting set S′ will still span V. If S′ is linearly independent, then S′ is a basis for V, and we are done. If S′ is linearly dependent, then we can remove some appropriate vector from S′ to produce a set S″ that still spans V.
We can continue removing vectors in this way until we finally arrive at a set of vectors in S that is linearly independent and spans V. This subset of S is a basis for V.

Proof of Theorem 4.5.5(b) Suppose that dim(V) = n. If S is a linearly independent set that is not already a basis for V, then S fails to span V, so there is some vector v in V that is not in span(S). By the Plus/Minus Theorem (4.5.3(a)), we can insert v into S, and the resulting set S′ will still be linearly independent. If S′ spans V, then S′ is a basis for V, and we are finished. If S′ does not span V, then we can insert an appropriate vector into S′ to produce a set S″ that is still linearly independent. We can continue inserting vectors in this way until we reach a set with n linearly independent vectors in V. This set will be a basis for V by Theorem 4.5.4.
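The insertion procedure in this proof can be mechanized: repeatedly adjoin a standard basis vector whenever it lies outside the current span, since the Plus/Minus Theorem guarantees that each insertion preserves linear independence. The sketch below is our own illustration (the helper names `rank` and `extend_to_basis` are hypothetical) and matches the style of Exercises 12-16 of this section.

```python
from fractions import Fraction

def rank(M):
    """Number of pivots produced by Gaussian elimination on the rows of M."""
    R = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        for i in range(r + 1, len(R)):
            if R[i][c] != 0:
                f = R[i][c] / R[r][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        r += 1
    return r

def extend_to_basis(vectors, n):
    """Enlarge a linearly independent set in R^n to a basis by inserting
    standard basis vectors e_j that lie outside the current span."""
    basis = [list(v) for v in vectors]
    for j in range(n):
        if len(basis) == n:
            break
        e = [1 if k == j else 0 for k in range(n)]
        if rank(basis + [e]) > rank(basis):   # e_j is outside span(basis)
            basis.append(e)
    return basis

B = extend_to_basis([[1, 0, 0], [1, 1, 0]], 3)
print(len(B), rank(B))   # 3 vectors of rank 3, so B is a basis for R^3
```

Here only e3 increases the rank, so the algorithm adjoins exactly one vector, just as the proof predicts for a 2-element independent set in a 3-dimensional space.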

46 8 Chapter 4 General Vector Spaces Eercise Set 4.5 In Eercises 6, find a basis for the solution space of the homogeneous linear sstem, and find the dimension of that space = + 3 = + 3 = = = + 3 = = = = = = = = z = 3 + z = z = z = 7. In each part, find a basis for the given subspace of R 3, and state its dimension. (a) The plane 3 + 5z =. (b) The plane =. (c) The line = t, = t,z = 4t. (d) All vectors of the form (a,b,c), where b = a + c. 8. In each part, find a basis for the given subspace of R 4, and state its dimension. (a) All vectors of the form (a,b,c,). (b) All vectors of the form (a,b,c,d), where d = a + b and c = a b. (c) All vectors of the form (a,b,c,d), where a = b = c = d. 9. Find the dimension of each of the following vector spaces. (a) The vector space of all diagonal n n matrices. (b) The vector space of all smmetric n n matrices. (c) The vector space of all upper triangular n n matrices.. Find the dimension of the subspace of P 3 consisting of all polnomials a + a + a + a 3 3 for which a =.. (a) Show that the set W of all polnomials in P such that p() = is a subspace of P. (b) Make a conjecture about the dimension of W. (c) Confirm our conjecture b finding a basis for W.. Find a standard basis vector for R 3 that can be added to the set {v, v } to produce a basis for R 3. (a) v = (,, 3), v = (,, ) (b) v = (,, ), v = (3,, ) 3. Find standard basis vectors for R 4 that can be added to the set {v, v } to produce a basis for R 4. v = (, 4,, 3), v = ( 3, 8, 4, 6) 4. Let {v, v, v 3 } be a basis for a vector space V. Show that {u, u, u 3 } is also a basis, where u = v, u = v + v, and u 3 = v + v + v The vectors v = (,, 3) and v = (, 5, 3) are linearl independent. Enlarge {v, v } to a basis for R The vectors v = (,,, ) and v = (,,, ) are linearl independent. Enlarge {v, v } to a basis for R Find a basis for the subspace of R 3 that is spanned b the vectors v = (,, ), v = (,, ), v 3 = (,, ), v 4 = (,, ) 8. 
Find a basis for the subspace of R 4 that is spanned b the vectors v = (,,, ), v = (,,, ), v 3 = (,,, 3), v 4 = (3, 3, 3, 4) 9. In each part, let T A : R 3 R 3 be multiplication b A and find the dimension of the subspace of R 3 consisting of all vectors for which T A () =. (a) A = (b) A = (c) A =. In each part, let T A be multiplication b A and find the dimension of the subspace R 4 consisting of all vectors for which T A () =. [ ] (a) A = (b) A = 4 Working with Proofs. (a) Prove that for ever positive integer n, one can find n + linearl independent vectors in F(, ). [Hint: Look for polnomials.] (b) Use the result in part (a) to prove that F(, ) is infinitedimensional. (c) Prove that C(, ), C m (, ), and C (, ) are infinite-dimensional.. Let S be a basis for an n-dimensional vector space V. Prove that if v, v,...,v r form a linearl independent set of vectors in V, then the coordinate vectors (v ) S,(v ) S,...,(v r ) S form a linearl independent set in R n, and conversel.

47 4.6 Change of Basis 9 3. Let S ={v, v,...,v r } be a nonempt set of vectors in an n-dimensional vector space V. Prove that if the vectors in S span V, then the coordinate vectors (v ) S,(v ) S,...,(v r ) S span R n, and conversel. 4. Prove part (a) of Theorem Prove: A subspace of a finite-dimensional vector space is finite-dimensional. 6. State the two parts of Theorem 4.5. in contrapositive form. 7. In each part, let S be the standard basis for P. Use the results proved in Eercises and 3 to find a basis for the subspace of P spanned b the given vectors. (a) +, ,9 (b) +,,+ + 3 (c) + 3,+ 6, True-False Eercises TF. In parts (a) ( k) determine whether the statement is true or false, and justif our answer. (a) The zero vector space has dimension zero. (b) There is a set of 7 linearl independent vectors in R 7. (c) There is a set of vectors that span R 7. (d) Ever linearl independent set of five vectors in R 5 is a basis for R 5. (e) Ever set of five vectors that spans R 5 is a basis for R 5. (f ) Ever set of vectors that spans R n contains a basis for R n. (g) Ever linearl independent set of vectors in R n is contained in some basis for R n. (h) There is a basis for M consisting of invertible matrices. (i) If A has size n n and I n,a,a,...,a n are distinct matrices, then {I n,a,a,...,a n } is a linearl dependent set. ( j) There are at least two distinct three-dimensional subspaces of P. (k) There are onl three distinct two-dimensional subspaces of P. Working withtechnolog T. Devise three different procedures for using our technolog utilit to determine the dimension of the subspace spanned b a set of vectors in R n, and then use each of those procedures to determine the dimension of the subspace of R 5 spanned b the vectors v = (,,,, ), v = (,,, 3, ), v 3 = (,,,, ), v 4 = (,,,, ) T. 
Find a basis for the row space of A by starting at the top and successively removing each row that is a linear combination of its predecessors.

4.6 Change of Basis
A basis that is suitable for one problem may not be suitable for another, so it is a common process in the study of vector spaces to change from one basis to another. Because a basis is the vector space generalization of a coordinate system, changing bases is akin to changing coordinate axes in R^2 and R^3. In this section we will study problems related to changing bases.

Coordinate Maps
If S = {v1, v2, ..., vn} is a basis for a finite-dimensional vector space V, and if (v)_S = (c1, c2, ..., cn) is the coordinate vector of v relative to S, then, as illustrated in Figure 4.4.6, the mapping
v → (v)_S    (1)
creates a connection (a one-to-one correspondence) between vectors in the general vector space V and vectors in the Euclidean vector space R^n. We call (1) the coordinate map relative to S from V to R^n. In this section we will find it convenient to express coordinate

vectors in the matrix form
[v]_S = [c1; c2; ...; cn]    (2)
where the square brackets emphasize the matrix notation (Figure 4.6.1 illustrates the coordinate map from V to R^n).

Change of Basis
There are many applications in which it is necessary to work with more than one coordinate system. In such cases it becomes important to know how the coordinates of a fixed vector relative to each coordinate system are related. This leads to the following problem.

The Change-of-Basis Problem If v is a vector in a finite-dimensional vector space V, and if we change the basis for V from a basis B to a basis B′, how are the coordinate vectors [v]_B and [v]_B′ related?

Remark To solve this problem, it will be convenient to refer to B as the old basis and B′ as the new basis. Thus, our objective is to find a relationship between the old and new coordinates of a fixed vector v in V.

For simplicity, we will solve this problem for two-dimensional spaces. The solution for n-dimensional spaces is similar. Let B = {u1, u2} and B′ = {u1′, u2′} be the old and new bases, respectively. We will need the coordinate vectors for the new basis vectors relative to the old basis. Suppose they are
[u1′]_B = [a; b] and [u2′]_B = [c; d]    (3)
That is,
u1′ = a u1 + b u2
u2′ = c u1 + d u2    (4)
Now let v be any vector in V, and let
[v]_B′ = [k1; k2]    (5)
be the new coordinate vector, so that
v = k1 u1′ + k2 u2′    (6)
In order to find the old coordinates of v, we must express v in terms of the old basis B. To do this, we substitute (4) into (6). This yields
v = k1 (a u1 + b u2) + k2 (c u1 + d u2)
or
v = (k1 a + k2 c) u1 + (k1 b + k2 d) u2
Thus, the old coordinate vector for v is
[v]_B = [k1 a + k2 c; k1 b + k2 d]

which, by using (5), can be written as
[v]_B = [a c; b d][k1; k2] = [a c; b d][v]_B′
This equation states that the old coordinate vector [v]_B results when we multiply the new coordinate vector [v]_B′ on the left by the matrix
P = [a c; b d]
Since the columns of this matrix are the coordinates of the new basis vectors relative to the old basis [see (3)], we have the following solution of the change-of-basis problem.

Solution of the Change-of-Basis Problem If we change the basis for a vector space V from an old basis B = {u1, u2, ..., un} to a new basis B′ = {u1′, u2′, ..., un′}, then for each vector v in V, the old coordinate vector [v]_B is related to the new coordinate vector [v]_B′ by the equation
[v]_B = P[v]_B′    (7)
where the columns of P are the coordinate vectors of the new basis vectors relative to the old basis; that is, the column vectors of P are
[u1′]_B, [u2′]_B, ..., [un′]_B    (8)

Transition Matrices
The matrix P in Equation (7) is called the transition matrix from B′ to B. For emphasis, we will often denote it by P_{B′→B}. It follows from (8) that this matrix can be expressed in terms of its column vectors as
P_{B′→B} = [ [u1′]_B  [u2′]_B  ...  [un′]_B ]    (9)
Similarly, the transition matrix from B to B′ can be expressed in terms of its column vectors as
P_{B→B′} = [ [u1]_B′  [u2]_B′  ...  [un]_B′ ]    (10)

Remark There is a simple way to remember both of these formulas using the terms old basis and new basis defined earlier in this section: In Formula (9) the old basis is B′ and the new basis is B, whereas in Formula (10) the old basis is B and the new basis is B′. Thus, both formulas can be restated as follows: The columns of the transition matrix from an old basis to a new basis are the coordinate vectors of the old basis relative to the new basis.

EXAMPLE 1 Finding Transition Matrices
Consider the bases B = {u1, u2} and B′ = {u1′, u2′} for R^2, where
u1 = (1, 0), u2 = (0, 1), u1′ = (1, 1), u2′ = (2, 1)
(a) Find the transition matrix P_{B′→B} from B′ to B.
(b) Find the transition matrix P_{B→B′} from B to B′.

Solution (a) Here the old basis vectors are u1′ and u2′ and the new basis vectors are u1 and u2. We want to find the coordinate vectors of the old basis vectors u1′ and u2′ relative to the new basis vectors u1 and u2. To do this, observe that
u1′ = u1 + u2
u2′ = 2u1 + u2
from which it follows that
[u1′]_B = [1; 1] and [u2′]_B = [2; 1]
and hence that
P_{B′→B} = [1 2; 1 1]
Solution (b) Here the old basis vectors are u1 and u2 and the new basis vectors are u1′ and u2′. As in part (a), we want to find the coordinate vectors of the old basis vectors u1 and u2 relative to the new basis vectors u1′ and u2′. To do this, observe that
u1 = −u1′ + u2′
u2 = 2u1′ − u2′
from which it follows that
[u1]_B′ = [−1; 1] and [u2]_B′ = [2; −1]
and hence that
P_{B→B′} = [−1 2; 1 −1]

Suppose now that B and B′ are bases for a finite-dimensional vector space V. Since multiplication by P_{B→B′} maps coordinate vectors relative to the basis B into coordinate vectors relative to the basis B′, and P_{B′→B} maps coordinate vectors relative to B′ into coordinate vectors relative to B, it follows that for every vector v in V we have
[v]_B′ = P_{B→B′}[v]_B    (11)
[v]_B = P_{B′→B}[v]_B′    (12)

EXAMPLE 2 Computing Coordinate Vectors
Let B and B′ be the bases in Example 1. Use an appropriate formula to find [v]_B given that
[v]_B′ = [−3; 5]
Solution To find [v]_B we need to make the transition from B′ to B. It follows from Formula (12) and part (a) of Example 1 that
[v]_B = P_{B′→B}[v]_B′ = [1 2; 1 1][−3; 5] = [7; 2]

Invertibility of Transition Matrices
If B and B′ are bases for a finite-dimensional vector space V, then
(P_{B′→B})(P_{B→B′}) = P_{B→B}

because multiplication by the product (P_{B′→B})(P_{B→B′}) first maps the B-coordinates of a vector into its B′-coordinates, and then maps those B′-coordinates back into the original B-coordinates. Since the net effect of the two operations is to leave each coordinate vector unchanged, we are led to conclude that P_{B→B} must be the identity matrix, that is,
(P_{B′→B})(P_{B→B′}) = I    (13)
(we omit the formal proof). For example, for the transition matrices obtained in Example 1 we have
(P_{B′→B})(P_{B→B′}) = [1 2; 1 1][−1 2; 1 −1] = [1 0; 0 1] = I
It follows from (13) that P_{B→B′} is invertible and that its inverse is P_{B′→B}. Thus, we have the following theorem.

THEOREM 4.6.1 If P is the transition matrix from a basis B′ to a basis B for a finite-dimensional vector space V, then P is invertible and P⁻¹ is the transition matrix from B to B′.

An Efficient Method for Computing Transition Matrices for R^n
Our next objective is to develop an efficient procedure for computing transition matrices between bases for R^n. As illustrated in Example 1, the first step in computing a transition matrix is to express each new basis vector as a linear combination of the old basis vectors. For R^n this involves solving n linear systems of n equations in n unknowns, each of which has the same coefficient matrix (why?). An efficient way to do this is by the method illustrated in Section 1.6, which is as follows:

A Procedure for Computing P_{B→B′}
Step 1. Form the matrix [B′ | B].
Step 2. Use elementary row operations to reduce the matrix in Step 1 to reduced row echelon form.
Step 3. The resulting matrix will be [I | P_{B→B′}].
Step 4. Extract the matrix P_{B→B′} from the right side of the matrix in Step 3.
This procedure is captured in the following diagram.
[new basis | old basis] → (row operations) → [I | transition from old to new]    (14)

EXAMPLE 3 Example 1 Revisited
In Example 1 we considered the bases B = {u1, u2} and B′ = {u1′, u2′} for R^2, where
u1 = (1, 0), u2 = (0, 1), u1′ = (1, 1), u2′ = (2, 1)
(a) Use Formula (14) to find the transition matrix from B′ to B.
(b) Use Formula (14) to find the transition matrix from B to B′.
Solution (a) Here B′ is the old basis and B is the new basis, so
[new basis | old basis] = [1 0 | 1 2; 0 1 | 1 1]

Since the left side is already the identity matrix, no reduction is needed. We see by inspection that the transition matrix is
P_{B′→B} = [1 2; 1 1]
which agrees with the result in Example 1.
Solution (b) Here B is the old basis and B′ is the new basis, so
[new basis | old basis] = [1 2 | 1 0; 1 1 | 0 1]
By reducing this matrix, so that the left side becomes the identity, we obtain (verify)
[I | transition from old to new] = [1 0 | −1 2; 0 1 | 1 −1]
so the transition matrix is
P_{B→B′} = [−1 2; 1 −1]
which also agrees with the result in Example 1.

Transition to the Standard Basis for R^n
Note that in part (a) of the last example the column vectors of the matrix that made the transition from the basis B′ to the standard basis turned out to be the vectors in B′ written in column form. This illustrates the following general result.

THEOREM 4.6.2 Let B′ = {u1, u2, ..., un} be any basis for the vector space R^n and let S = {e1, e2, ..., en} be the standard basis for R^n. If the vectors in these bases are written in column form, then
P_{B′→S} = [u1  u2  ...  un]    (15)

It follows from this theorem that if A = [u1  u2  ...  un] is an invertible n × n matrix, then A can be viewed as the transition matrix from the basis {u1, u2, ..., un} for R^n to the standard basis for R^n. Thus, for example, the matrix
A = [1 2 3; 2 5 3; 1 0 8]
which was shown to be invertible in Example 4 of Section 1.5, is the transition matrix from the basis
u1 = (1, 2, 1), u2 = (2, 5, 0), u3 = (3, 3, 8)
to the basis
e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
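The four-step procedure above is easy to mechanize. The sketch below is our own illustration (the helper names `rref` and `transition_matrix` are hypothetical): it row-reduces [new basis | old basis] with exact rational arithmetic and reads the transition matrix off the right-hand block, using the bases of Example 1.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M (list of rows), computed exactly."""
    R = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        r += 1
    return R

def transition_matrix(new_basis, old_basis):
    """Row-reduce [new basis | old basis] to [I | P]; P maps coordinates
    relative to the old basis to coordinates relative to the new basis."""
    n = len(new_basis)
    aug = [[new_basis[j][i] for j in range(n)] +
           [old_basis[j][i] for j in range(n)] for i in range(n)]  # vectors as columns
    return [row[n:] for row in rref(aug)]

B_old = [[1, 0], [0, 1]]          # standard basis B
B_new = [[1, 1], [2, 1]]          # B' with u1' = (1, 1), u2' = (2, 1)
P = transition_matrix(B_new, B_old)
print(P)                          # P_{B->B'} from Example 1
```

For these bases the result is [[−1, 2], [1, −1]], matching P_{B→B′} computed by hand in Example 1; the exact Fraction arithmetic is a design choice that avoids the rounding that floating point would introduce during elimination.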

53 4.6 Change of Basis 35 Eercise Set 4.6. Consider the bases B ={u, u } and B ={u, u } for R, where [ 4 u =, u = ], u =, u 3 = (a) Find the transition matri from B to B. (b) Find the transition matri from B to B. (c) Compute the coordinate vector [w] B, where 3 w = 5 and use () to compute [w] B. (d) Check our work b computing [w] B directl.. Repeat the directions of Eercise with the same vector w but with [ u =, u = ], u = 3, u = 4 3. Consider the bases B ={u, u, u 3 } and B ={u, u, u } 3 for R 3, where u =, u =, u 3 = 3 u =, u =, u = (a) Find the transition matri B to B. (b) Compute the coordinate vector [w] B, where 5 w = 8 5 and use () to compute [w] B. (c) Check our work b computing [w] B directl. 4. Repeat the directions of Eercise 3 with the same vector w,but with 3 3 u =, u =, u 3 = u = 6, u = 6, u = Let V be the space spanned b f = sin and f = cos. (a) Show that g = sin + cos and g = 3 cos form a basis for V. (b) Find the transition matri from B ={g, g } to B ={f, f }. (c) Find the transition matri from B to B. (d) Compute the coordinate vector [h] B, where h = sin 5 cos, and use () to obtain [h] B. (e) Check our work b computing [h] B directl. 6. Consider the bases B ={p, p } and B ={q, q } for P, where p = 6 + 3, p = +, q =, q = 3 + (a) Find the transition matri from B to B. (b) Find the transition matri from B to B. (c) Compute the coordinate vector [p] B, where p = 4 +, and use () to compute [p] B. (d) Check our work b computing [p] B directl. 7. Let B ={u, u } and B ={v, v } be the bases for R in which u = (, ), u = (, 3), v = (, 3), and v = (, 4). (a) Use Formula (4) to find the transition matri P B B. (b) Use Formula (4) to find the transition matri P B B. (c) Confirm that P B B and P B B are inverses of one another. (d) Let w = (, ). Find [w] B and then use the matri P B B to compute [w] B from [w] B. (e) Let w = (, 5). Find [w] B and then use the matri P B B to compute [w] B from [w] B. 8. 
Let S be the standard basis for R, and let B ={v, v } be the basis in which v = (, ) and v = ( 3, 4). (a) Find the transition matri P B S b inspection. (b) Use Formula (4) to find the transition matri P S B. (c) Confirm that P B S and P S B are inverses of one another. (d) Let w = (5, 3). Find [w] B and then use Formula () to compute [w] S. (e) Let w = (3, 5). Find [w] S and then use Formula () to compute [w] B. 9. Let S be the standard basis for R 3, and let B ={v, v, v 3 } be the basis in which v = (,, ), v = (, 5, ), and v 3 = (3, 3, 8). (a) Find the transition matri P B S b inspection. (b) Use Formula (4) to find the transition matri P S B. (c) Confirm that P B S and P S B are inverses of one another. (d) Let w = (5, 3, ). Find [w] B and then use Formula () to compute [w] S. (e) Let w = (3, 5, ). Find [w] S and then use Formula () to compute [w] B.

54 36 Chapter 4 General Vector Spaces. Let S ={e, e } be the standard basis for R, and let B ={v, v } be the basis that results when the vectors in S are reflected about the line =. (a) Find the transition matri P B S. (b) Let P = P B S and show that P T = P S B.. Let S ={e, e } be the standard basis for R, and let B ={v, v } be the basis that results when the vectors in S are reflected about the line that makes an angle θ with the positive -ais. (a) Find the transition matri P B S. (b) Let P = P B S and show that P T = P S B.. If B, B, and B 3 are bases for R, and if 3 7 P B B = and P B B 5 3 = 4 then P B3 B =. 3. If P is the transition matri from a basis B to a basis B, and Q is the transition matri from B to a basis C, what is the transition matri from B to C? What is the transition matri from C to B? 4. To write the coordinate vector for a vector, it is necessar to specif an order for the vectors in the basis. If P is the transition matri from a basis B to a basis B, what is the effect on P if we reverse the order of vectors in B from v,...,v n to v n,...,v? What is the effect on P if we reverse the order of vectors in both B and B? 5. Consider the matri P = (a) P is the transition matri from what basis B to the standard basis S ={e, e, e 3 } for R 3? (b) P is the transition matri from the standard basis S ={e, e, e 3 } to what basis B for R 3? 6. The matri P = 3 is { the transition matri from } what basis B to the basis (,, ), (,, ), (,, ) for R 3? 7. Let S ={e, e } be the standard basis for R, and let B ={v, v } be the basis that results when the linear transformation defined b T(, ) = ( + 3, 5 ) is applied to each vector in S. Find the transition matri P B S. 8. Let S ={e, e, e 3 } be the standard basis for R 3, and let B ={v, v, v 3 } be the basis that results when the linear transformation defined b T(,, 3 ) = ( +, + 4 3, ) is applied to each vector in S. Find the transition matri P B S. 9. 
If [w] B = w holds for all vectors w in R n, what can ou sa about the basis B? Working with Proofs. Let B be a basis for R n. Prove that the vectors v, v,...,v k span R n if and onl if the vectors [v ] B, [v ] B,...,[v k ] B span R n.. Let B be a basis for R n. Prove that the vectors v, v,...,v k form a linearl independent set in R n if and onl if the vectors [v ] B, [v ] B,...,[v k ] B form a linearl independent set in R n. True-False Eercises TF. In parts (a) (f ) determine whether the statement is true or false, and justif our answer. (a) If B and B are bases for a vector space V, then there eists a transition matri from B to B. (b) Transition matrices are invertible. (c) If B is a basis for a vector space R n, then P B B is the identit matri. (d) If P B B is a diagonal matri, then each vector in B is a scalar multiple of some vector in B. (e) If each vector in B is a scalar multiple of some vector in B, then P B B is a diagonal matri. (f ) If A is a square matri, then A = P B B for some bases B and B for R n. Working withtechnolog T. Let P = and v = (, 4, 3, 5), v = (,,, ), v 3 = (3,,, 9), v 4 = (5, 8, 6, 3) Find a basis B ={u, u, u 3, u 4 } for R 4 for which P is the transition matri from B to B ={v, v, v 3, v 4 }. T. Given that the matri for a linear transformation T : R 4 R 4 relative to the standard basis B ={e, e, e 3, e 4 } for R 4 is find the matri for T relative to the basis B ={e, e + e, e + e + e 3, e + e + e 3 + e 4 }

4.7 Row Space, Column Space, and Null Space
In this section we will study some important vector spaces that are associated with matrices. Our work here will provide us with a deeper understanding of the relationships between the solutions of a linear system and properties of its coefficient matrix.

Row Space, Column Space, and Null Space
Recall that vectors can be written in comma-delimited form or in matrix form as either row vectors or column vectors. In this section we will use the latter two.

DEFINITION For an m × n matrix
A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn]
the vectors
r1 = [a11 a12 ... a1n]
r2 = [a21 a22 ... a2n]
...
rm = [am1 am2 ... amn]
in R^n that are formed from the rows of A are called the row vectors of A, and the vectors
c1 = [a11; a21; ...; am1], c2 = [a12; a22; ...; am2], ..., cn = [a1n; a2n; ...; amn]
in R^m formed from the columns of A are called the column vectors of A.

EXAMPLE 1 Row and Column Vectors of a 2 × 3 Matrix
Let
A = [2 1 0; 3 −1 4]
The row vectors of A are
r1 = [2 1 0] and r2 = [3 −1 4]
and the column vectors of A are
c1 = [2; 3], c2 = [1; −1], and c3 = [0; 4]

The following definition defines three important vector spaces associated with a matrix. We will sometimes denote the row space of A, the column space of A, and the null space of A by row(A), col(A), and null(A), respectively.

DEFINITION If A is an m × n matrix, then the subspace of R^n spanned by the row vectors of A is called the row space of A, and the subspace of R^m spanned by the column vectors of A is called the column space of A. The solution space of the homogeneous system of equations Ax = 0, which is a subspace of R^n, is called the null space of A.
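As a quick illustration of these three definitions (our own sketch, not from the text; the matrix and the helper name `in_null_space` are our own choices), the code below extracts the row and column vectors of a small matrix and verifies that a given vector lies in its null space:

```python
# A 2 x 3 matrix: its row vectors live in R^3, its column vectors in R^2.
A = [[2, 1, 0],
     [3, -1, 4]]

rows = [list(r) for r in A]            # row vectors of A
cols = [list(c) for c in zip(*A)]      # column vectors of A (transpose trick)
print(rows)                            # [[2, 1, 0], [3, -1, 4]]
print(cols)                            # [[2, 3], [1, -1], [0, 4]]

def in_null_space(A, x):
    """x belongs to null(A) exactly when Ax = 0."""
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

# (-4, 8, 5) satisfies 2(-4) + 8 = 0 and 3(-4) - 8 + 4(5) = 0, so Ax = 0.
print(in_null_space(A, [-4, 8, 5]))    # True
```

The null space check is just the membership test from the definition; finding a basis for the whole null space requires solving Ax = 0, which Example 4 below does by Gaussian elimination.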

In this section and the next we will be concerned with two general questions:
Question 1. What relationships exist among the solutions of a linear system Ax = b and the row space, column space, and null space of the coefficient matrix A?
Question 2. What relationships exist among the row space, column space, and null space of a matrix?

Starting with the first question, suppose that
A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn] and x = [x1; x2; ...; xn]
It follows from the column formula for Ax in Section 1.3 that if c1, c2, ..., cn denote the column vectors of A, then the product Ax can be expressed as a linear combination of these vectors with coefficients from x; that is,
Ax = x1 c1 + x2 c2 + ... + xn cn    (1)
Thus, a linear system, Ax = b, of m equations in n unknowns can be written as
x1 c1 + x2 c2 + ... + xn cn = b    (2)
from which we conclude that Ax = b is consistent if and only if b is expressible as a linear combination of the column vectors of A. This yields the following theorem.

THEOREM 4.7.1 A system of linear equations Ax = b is consistent if and only if b is in the column space of A.

EXAMPLE 2 A Vector b in the Column Space of A
Let Ax = b be the linear system
[−1 3 2; 1 2 −3; 2 1 −2][x1; x2; x3] = [1; −9; −3]
Show that b is in the column space of A by expressing it as a linear combination of the column vectors of A.
Solution Solving the system by Gaussian elimination yields (verify)
x1 = 2, x2 = −1, x3 = 3
It follows from this and Formula (2) that
2[−1; 1; 2] − [3; 2; 1] + 3[2; −3; −2] = [1; −9; −3]

Recall from Section 3.4 that the general solution of a consistent linear system Ax = b can be obtained by adding any specific solution of the system to the general solution of the corresponding homogeneous system Ax = 0. Keeping in mind that the null space of A is the same as the solution space of Ax = 0, we can rephrase that theorem in the following vector form.

THEOREM 4.7.2 If x0 is any solution of a consistent linear system Ax = b, and if S = {v1, v2, ..., vk} is a basis for the null space of A, then every solution of Ax = b can be expressed in the form
x = x0 + c1 v1 + c2 v2 + ... + ck vk    (3)
Conversely, for all choices of scalars c1, c2, ..., ck, the vector x in this formula is a solution of Ax = b.

The vector x0 in Formula (3) is called a particular solution of Ax = b, and the remaining part of the formula is called the general solution of Ax = 0. With this terminology Theorem 4.7.2 can be rephrased as:
The general solution of a consistent linear system can be expressed as the sum of a particular solution of that system and the general solution of the corresponding homogeneous system.
Geometrically, the solution set of Ax = b can be viewed as the translation by x0 of the solution space of Ax = 0 (Figure 4.7.1).

EXAMPLE 3 General Solution of a Linear System Ax = b
In the concluding subsection of Section 3.4 we compared the solutions of a nonhomogeneous linear system and its corresponding homogeneous system, and deduced that the general solution x of the nonhomogeneous system and the general solution xh of the corresponding homogeneous system (when written in column-vector form) are related by

x = [-3r - 4s - 2t; r; -2s; s; t; 1/3]
  = [0; 0; 0; 0; 0; 1/3] + r[-3; 1; 0; 0; 0; 0] + s[-4; 0; -2; 1; 0; 0] + t[-2; 0; 0; 0; 1; 0]

where the first vector on the right side is x0 and the sum of the r, s, and t terms is xh. Recall from the Remark following Example 3 of Section 4.5 that the vectors in xh form a basis for the solution space of Ax = 0.

Bases for Row Spaces, Column Spaces, and Null Spaces

We know that performing elementary row operations on the augmented matrix [A | b] of a linear system does not change the solution set of that system. This is true, in particular, if the system is homogeneous, in which case the augmented matrix is [A | 0]. But elementary row operations have no effect on the column of zeros, so it follows that the solution set of Ax = 0 is unaffected by performing elementary row operations on A itself. Thus, we have the following theorem.

THEOREM 4.7.3 Elementary row operations do not change the null space of a matrix.

The following theorem, whose proof is left as an exercise, is a companion to Theorem 4.7.3.

THEOREM 4.7.4 Elementary row operations do not change the row space of a matrix.

Theorems 4.7.3 and 4.7.4 might tempt you into incorrectly believing that elementary row operations do not change the column space of a matrix. To see why this is not true, compare the matrices

A = [1 3; 2 6]   and   B = [1 3; 0 0]

The matrix B can be obtained from A by adding -2 times the first row to the second. However, this operation has changed the column space of A, since that column space consists of all scalar multiples of [1; 2], whereas the column space of B consists of all scalar multiples of [1; 0], and the two are different spaces.

EXAMPLE 4 Finding a Basis for the Null Space of a Matrix
Find a basis for the null space of the matrix

A = [1 3 -2 0 2 0; 2 6 -5 -2 4 -3; 0 0 5 10 0 15; 2 6 0 8 4 18]

Solution The null space of A is the solution space of the homogeneous linear system Ax = 0, which, as shown in Example 3, has the basis

v1 = [-3; 1; 0; 0; 0; 0], v2 = [-4; 0; -2; 1; 0; 0], v3 = [-2; 0; 0; 0; 1; 0]

Remark Observe that the basis vectors v1, v2, and v3 in the last example are the vectors that result by successively setting one of the parameters in the general solution equal to 1 and the others equal to 0.

The following theorem makes it possible to find bases for the row and column spaces of a matrix in row echelon form by inspection.

THEOREM 4.7.5 If a matrix R is in row echelon form, then the row vectors with the leading 1's (the nonzero row vectors) form a basis for the row space of R, and the column vectors with the leading 1's of the row vectors form a basis for the column space of R.

The proof essentially involves an analysis of the positions of the 0's and 1's of R. We omit the details.

EXAMPLE 5 Bases for the Row and Column Spaces of a Matrix in Row Echelon Form
Find bases for the row and column spaces of the matrix

R = [1 -2 5 0 3; 0 1 3 0 0; 0 0 0 1 0; 0 0 0 0 0]

Solution Since the matrix R is in row echelon form, it follows from Theorem 4.7.5 that the vectors

r1 = [1 -2 5 0 3]
r2 = [0  1 3 0 0]
r3 = [0  0 0 1 0]

form a basis for the row space of R, and the vectors

c1 = [1; 0; 0; 0], c2 = [-2; 1; 0; 0], c4 = [0; 0; 1; 0]

form a basis for the column space of R.
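The two readings in Example 5 can be checked mechanically. The sketch below (a hypothetical matrix, not the book's, with the sympy library assumed available) row-reduces a matrix and confirms that the number of nonzero rows, the number of pivot columns, and the rank all agree, as Theorem 4.7.5 predicts.

```python
import sympy as sp

# Hypothetical matrix (not the book's R); sympy assumed available.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])
R, pivots = A.rref()   # reduced row echelon form and pivot-column indices

# Theorem 4.7.5: the nonzero rows of R form a basis for its row space.
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
assert len(row_basis) == len(pivots) == A.rank()
print(len(row_basis))  # 2
```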

EXAMPLE 6 Basis for a Row Space by Row Reduction
Find a basis for the row space of the matrix

A = [1 -3 4 -2 5 4; 2 -6 9 -1 8 2; 2 -6 9 -1 9 7; -1 3 -4 2 -5 -4]

Solution Since elementary row operations do not change the row space of a matrix, we can find a basis for the row space of A by finding a basis for the row space of any row echelon form of A. Reducing A to row echelon form, we obtain (verify)

R = [1 -3 4 -2 5 4; 0 0 1 3 -2 -6; 0 0 0 0 1 5; 0 0 0 0 0 0]

By Theorem 4.7.5, the nonzero row vectors of R form a basis for the row space of R and hence form a basis for the row space of A. These basis vectors are

r1 = [1 -3 4 -2  5  4]
r2 = [0  0 1  3 -2 -6]
r3 = [0  0 0  0  1  5]

Basis for the Column Space of a Matrix

The problem of finding a basis for the column space of a matrix A in Example 6 is complicated by the fact that an elementary row operation can alter its column space. However, the good news is that elementary row operations do not alter dependence relationships among the column vectors. To make this more precise, suppose that w1, w2, ..., wk are linearly dependent column vectors of A, so there are scalars c1, c2, ..., ck that are not all zero and such that

c1 w1 + c2 w2 + ··· + ck wk = 0    (4)

If we perform an elementary row operation on A, then these vectors will be changed into new column vectors w1', w2', ..., wk'. At first glance it would seem possible that the transformed vectors might be linearly independent. However, this is not so, since it can be proved that these new column vectors are linearly dependent and, in fact, related by an equation

c1 w1' + c2 w2' + ··· + ck wk' = 0

that has exactly the same coefficients as (4). It can also be proved that elementary row operations do not alter the linear independence of a set of column vectors. All of these results are summarized in the following theorem.

Although elementary row operations can change the column space of a matrix, it follows from Theorem 4.7.6(b) that they do not change the dimension of its column space.
THEOREM 4.7.6 If A and B are row equivalent matrices, then:
(a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
(b) A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
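Theorem 4.7.6(b) underlies the standard machine procedure used in Example 7 below: row-reduce, locate the pivot columns, then take those columns from the original matrix. A minimal sketch with a small hypothetical matrix, assuming sympy is available:

```python
import sympy as sp

# Hypothetical matrix (not the book's); sympy assumed available.
A = sp.Matrix([[1, 3, 2],
               [2, 6, 5],
               [3, 9, 7]])
R, pivots = A.rref()

# The basis columns are taken from the ORIGINAL A, not from R: row
# operations can change the column space even though, by Theorem 4.7.6,
# they preserve dependence relationships among the columns.
col_basis = [A.col(j) for j in pivots]
print(pivots)                      # (0, 2): columns 1 and 3 of A
assert len(col_basis) == A.rank()
```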

EXAMPLE 7 Basis for a Column Space by Row Reduction
Find a basis for the column space of the matrix

A = [1 -3 4 -2 5 4; 2 -6 9 -1 8 2; 2 -6 9 -1 9 7; -1 3 -4 2 -5 -4]

that consists of column vectors of A.

Solution We observed in Example 6 that the matrix

R = [1 -3 4 -2 5 4; 0 0 1 3 -2 -6; 0 0 0 0 1 5; 0 0 0 0 0 0]

is a row echelon form of A. Keeping in mind that A and R can have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R. However, it follows from Theorem 4.7.6(b) that if we can find a set of column vectors of R that forms a basis for the column space of R, then the corresponding column vectors of A will form a basis for the column space of A. Since the first, third, and fifth columns of R contain the leading 1's of the row vectors, the vectors

c1 = [1; 0; 0; 0], c3 = [4; 1; 0; 0], c5 = [5; -2; 1; 0]

form a basis for the column space of R. Thus, the corresponding column vectors of A, which are

c1 = [1; 2; 2; -1], c3 = [4; 9; 9; -4], c5 = [5; 8; 9; -5]

form a basis for the column space of A.

Up to now we have focused on methods for finding bases associated with matrices. Those methods can readily be adapted to the more general problem of finding a basis for the subspace spanned by a set of vectors in R^n.

EXAMPLE 8 Basis for the Space Spanned by a Set of Vectors
The following vectors span a subspace of R^4. Find a subset of these vectors that forms a basis of this subspace.

v1 = (1, 2, 2, -1), v2 = (-3, -6, -6, 3), v3 = (4, 9, 9, -4), v4 = (-2, -1, -1, 2), v5 = (5, 8, 9, -5), v6 = (4, 2, 7, -4)

Solution If we rewrite these vectors in column form and construct the matrix that has those vectors as its successive columns, then we obtain the matrix A in Example 7 (verify). Thus,

span{v1, v2, v3, v4, v5, v6} = col(A)

Proceeding as in that example (and adjusting the notation appropriately), we see that the vectors v1, v3, and v5 form a basis for

span{v1, v2, v3, v4, v5, v6}

Bases Formed from Row and Column Vectors of a Matrix

In Example 6, we found a basis for the row space of a matrix by reducing that matrix to row echelon form. However, the basis vectors produced by that method were not all row vectors of the original matrix. The following adaptation of the technique used in Example 7 shows how to find a basis for the row space of a matrix that consists entirely of row vectors of that matrix.

EXAMPLE 9 Basis for the Row Space of a Matrix
Find a basis for the row space of

A = [1 -2 0 0 3; 2 -5 -3 -2 6; 0 5 15 10 0; 2 6 18 8 6]

consisting entirely of row vectors from A.

Solution We will transpose A, thereby converting the row space of A into the column space of A^T; then we will use the method of Example 7 to find a basis for the column space of A^T; and then we will transpose again to convert column vectors back to row vectors. Transposing A yields

A^T = [1 2 0 2; -2 -5 5 6; 0 -3 15 18; 0 -2 10 8; 3 6 0 6]

and then reducing this matrix to row echelon form we obtain

[1 2 0 2; 0 1 -5 -10; 0 0 0 1; 0 0 0 0; 0 0 0 0]

The first, second, and fourth columns contain the leading 1's, so the corresponding column vectors in A^T form a basis for the column space of A^T; these are

c1 = [1; -2; 0; 0; 3], c2 = [2; -5; -3; -2; 6], and c4 = [2; 6; 18; 8; 6]

Transposing again and adjusting the notation appropriately yields the basis vectors

r1 = [1 -2 0 0 3], r2 = [2 -5 -3 -2 6], r4 = [2 6 18 8 6]

for the row space of A.
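Example 9's transpose technique is easy to automate: the rows of A that form a basis for row(A) are exactly the rows indexed by the pivot columns of A^T. A sketch with a hypothetical matrix, assuming sympy is available:

```python
import sympy as sp

# Hypothetical matrix (not the book's); sympy assumed available.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [0, 1, 1]])

# Pivot columns of A^T correspond to rows of A, as in Example 9.
_, pivots = A.T.rref()
row_basis = [A.row(i) for i in pivots]
print(pivots)  # (0, 2): rows 1 and 3 of A form a basis for row(A)
```

Here row 2 is twice row 1, so the method keeps rows 1 and 3 and discards the redundant row.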

Next we will give an example that adapts the method of Example 7 to solve the following general problem in R^n:

Problem Given a set of vectors S = {v1, v2, ..., vk} in R^n, find a subset of these vectors that forms a basis for span(S), and express each vector that is not in that basis as a linear combination of the basis vectors.

EXAMPLE 10 Basis and Linear Combinations
(a) Find a subset of the vectors

v1 = (1, -2, 0, 3), v2 = (2, -5, -3, 6), v3 = (0, 1, 3, 0), v4 = (2, -1, 4, -7), v5 = (5, -8, 1, 2)

that forms a basis for the subspace of R^4 spanned by these vectors.
(b) Express each vector not in the basis as a linear combination of the basis vectors.

Had we only been interested in part (a) of this example, it would have sufficed to reduce the matrix to row echelon form. It is for part (b) that the reduced row echelon form is most useful.

Solution (a) We begin by constructing a matrix that has v1, v2, ..., v5 as its column vectors:

[1 2 0 2 5; -2 -5 1 -1 -8; 0 -3 3 4 1; 3 6 0 -7 2]    (5)
 v1  v2  v3  v4  v5

The first part of our problem can be solved by finding a basis for the column space of this matrix. Reducing the matrix to reduced row echelon form and denoting the column vectors of the resulting matrix by w1, w2, w3, w4, and w5 yields

[1 0 2 0 1; 0 1 -1 0 1; 0 0 0 1 1; 0 0 0 0 0]    (6)
 w1  w2  w3  w4  w5

The leading 1's occur in columns 1, 2, and 4, so by Theorem 4.7.5,

{w1, w2, w4}

is a basis for the column space of (6), and consequently,

{v1, v2, v4}

is a basis for the column space of (5).

Solution (b) We will start by expressing w3 and w5 as linear combinations of the basis vectors w1, w2, w4. The simplest way of doing this is to express w3 and w5 in terms of basis vectors with smaller subscripts. Accordingly, we will express w3 as a linear combination of w1 and w2, and we will express w5 as a linear combination of w1, w2, and w4. By inspection of (6), these linear combinations are

w3 = 2w1 - w2
w5 = w1 + w2 + w4

We call these the dependency equations. The corresponding relationships in (5) are

v3 = 2v1 - v2
v5 = v1 + v2 + v4

The following is a summary of the steps that we followed in our last example to solve the problem posed above.

Basis for the Space Spanned by a Set of Vectors

Step 1. Form the matrix A whose columns are the vectors in the set S = {v1, v2, ..., vk}.
Step 2. Reduce the matrix A to reduced row echelon form R.
Step 3. Denote the column vectors of R by w1, w2, ..., wk.
Step 4. Identify the columns of R that contain the leading 1's. The corresponding column vectors of A form a basis for span(S). This completes the first part of the problem.
Step 5. Obtain a set of dependency equations for the column vectors w1, w2, ..., wk of R by successively expressing each wi that does not contain a leading 1 of R as a linear combination of predecessors that do.
Step 6. In each dependency equation obtained in Step 5, replace the vector wi by the vector vi for i = 1, 2, ..., k. This completes the second part of the problem.

Exercise Set 4.7

In Exercises 1–2, express the product Ax as a linear combination of the column vectors of A.
1. (a) (b)   2. (a) (b)

In Exercises 3–4, determine whether b is in the column space of A, and if so, express b as a linear combination of the column vectors of A.
3. (a) A = ; b =   (b) A = 9 3 ; b =
4. (a) A = ; b = 4   (b) A = 3 ; b =

5. Suppose that x1 = 3, x2 = , x3 = , x4 = 5 is a solution of a nonhomogeneous linear system Ax = b and that the solution set of the homogeneous system Ax = 0 is given by the formulas

x1 = 5r - s, x2 = s, x3 = s + t, x4 = t

(a) Find a vector form of the general solution of Ax = 0.
(b) Find a vector form of the general solution of Ax = b.

6. Suppose that x1 = , x2 = , x3 = 4, x4 = 3 is a solution of a nonhomogeneous linear system Ax = b and that the solution set of the homogeneous system Ax = 0 is given by the formulas

x1 = 3r + 4s, x2 = r - s, x3 = r, x4 = s

(a) Find a vector form of the general solution of Ax = 0.
(b) Find a vector form of the general solution of Ax = b.
In Exercises 7–8, find the vector form of the general solution of the linear system Ax = b, and then use that result to find the vector form of the general solution of Ax = 0.

7. (a) (b)   8. (a) (b)

In Exercises 9–10, find bases for the null space and row space of A.
9. (a) A = (b) A =   10. (a) A = (b) A =

In Exercises 11–12, a matrix in row echelon form is given. By inspection, find a basis for the row space and for the column space of that matrix.
11. (a) (b)   12. (a) 3 (b) 7

13. (a) Use the methods of Examples 6 and 7 to find bases for the row space and column space of the matrix

A = [entries as in the source: 5 3]

(b) Use the method of Example 9 to find a basis for the row space of A that consists entirely of row vectors of A.

In Exercises 14–15, find a basis for the subspace of R^4 that is spanned by the given vectors.
14. (1, 1, -4, -3), (2, 0, 2, -2), (2, -1, 3, 2)
15. (1, 1, 0, 0), (0, 0, 1, 1), (-2, 0, 2, 2), (0, -3, 0, 3)

In Exercises 16–17, find a subset of the given vectors that forms a basis for the space spanned by those vectors, and then express each vector that is not in the basis as a linear combination of the basis vectors.
16. v1 = (1, 0, 1, 1), v2 = (-3, 3, 7, 1), v3 = (-1, 3, 9, 3), v4 = (-5, 3, 5, -1)
17. v1 = (1, -1, 5, 2), v2 = (-2, 3, 1, 0), v3 = (4, -5, 9, 4), v4 = (0, 4, 2, -3), v5 = (-7, 18, 2, -8)
(b) Find a 3 × 3 matrix whose null space is the x-axis and whose column space is the yz-plane.

[Figure Ex-23: the null space of A shown as the z-axis, the column space of A as the xy-plane.]

24. Find a 3 × 3 matrix whose null space is (a) a point. (b) a line. (c) a plane.

25. (a) Find all 2 × 2 matrices whose null space is the line 3x - 5y = 0.
(b) Describe the null spaces of the following matrices:
A = [4 6 ...], B = , C = , D =   [entries as in the source]

Working with Proofs

26. Prove Theorem 4.7.4.
27. Prove that the row vectors of an n × n invertible matrix A form a basis for R^n.
28. Suppose that A and B are n × n matrices and A is invertible. Invent and prove a theorem that describes how the row spaces of AB and B are related.

True-False Exercises

TF. In parts (a)–(j) determine whether the statement is true or false, and justify your answer.
(a) The span of v1, ..., vn is the column space of the matrix whose column vectors are v1, ..., vn.
(b) The column space of a matrix A is the set of solutions of Ax = b.
(c) If R is the reduced row echelon form of A, then those column vectors of R that contain the leading 1's form a basis for the column space of A.
(d) The set of nonzero row vectors of a matrix A is a basis for the row space of A.
(e) If A and B are n × n matrices that have the same row space, then A and B have the same column space.
(f) If E is an m × m elementary matrix and A is an m × n matrix, then the null space of EA is the same as the null space of A.
(g) If E is an m × m elementary matrix and A is an m × n matrix, then the row space of EA is the same as the row space of A.
(h) If E is an m × m elementary matrix and A is an m × n matrix, then the column space of EA is the same as the column space of A.
(i) The system Ax = b is inconsistent if and only if b is not in the column space of A.
(j) There is an invertible matrix A and a singular matrix B such that the row spaces of A and B are the same.

Working with Technology

T1. Find a basis for the column space of A = [entries as in the source] that consists of column vectors of A.
T2. Find a basis for the row space of the matrix A in Exercise T1 that consists of row vectors of A.
4.8 Rank, Nullity, and the Fundamental Matrix Spaces

In the last section we investigated relationships between a system of linear equations and the row space, column space, and null space of its coefficient matrix. In this section we will be concerned with the dimensions of those spaces. The results we obtain will provide a deeper insight into the relationship between a linear system and its coefficient matrix.

Row and Column Spaces Have Equal Dimensions

In Examples 6 and 7 of Section 4.7 we found that the row and column spaces of the matrix

A = [1 -3 4 -2 5 4; 2 -6 9 -1 8 2; 2 -6 9 -1 9 7; -1 3 -4 2 -5 -4]

both have three basis vectors and hence are both three-dimensional. The fact that these spaces have the same dimension is not accidental, but rather a consequence of the following theorem.

THEOREM 4.8.1 The row space and the column space of a matrix A have the same dimension.

Proof It follows from Theorems 4.7.4 and 4.7.6(b) that elementary row operations do not change the dimension of the row space or of the column space of a matrix. Thus, if R is any row echelon form of A, it must be true that

dim(row space of A) = dim(row space of R)
dim(column space of A) = dim(column space of R)

so it suffices to show that the row and column spaces of R have the same dimension. But the dimension of the row space of R is the number of nonzero rows, and by Theorem 4.7.5 the dimension of the column space of R is the number of leading 1's. Since these two numbers are the same, the row and column spaces have the same dimension.

The proof of Theorem 4.8.1 shows that the rank of A can be interpreted as the number of leading 1's in any row echelon form of A.

Rank and Nullity

The dimensions of the row space, column space, and null space of a matrix are such important numbers that there is some notation and terminology associated with them.

DEFINITION 1 The common dimension of the row space and column space of a matrix A is called the rank of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of A and is denoted by nullity(A).

EXAMPLE 1 Rank and Nullity of a 4 × 6 Matrix
Find the rank and nullity of the matrix

A = [-1 2 0 4 5 -3; 3 -7 2 0 1 4; 2 -5 2 4 6 1; 4 -9 2 -4 -4 7]

Solution The reduced row echelon form of A is

[1 0 -4 -28 -37 13; 0 1 -2 -12 -16 5; 0 0 0 0 0 0; 0 0 0 0 0 0]    (1)

(verify). Since this matrix has two leading 1's, its row and column spaces are two-dimensional and rank(A) = 2. To find the nullity of A, we must find the dimension of the solution space of the linear system Ax = 0. This system can be solved by reducing its augmented matrix to reduced row echelon form. The resulting matrix will be identical to (1), except that it will have an additional last column of zeros, and hence the corresponding system of equations will be

x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
x2 - 2x3 - 12x4 - 16x5 +  5x6 = 0

Solving these equations for the leading variables yields

x1 = 4x3 + 28x4 + 37x5 - 13x6
x2 = 2x3 + 12x4 + 16x5 -  5x6    (2)

from which we obtain the general solution

x1 = 4r + 28s + 37t - 13u
x2 = 2r + 12s + 16t - 5u
x3 = r
x4 = s
x5 = t
x6 = u

or in column vector form

x = r[4; 2; 1; 0; 0; 0] + s[28; 12; 0; 1; 0; 0] + t[37; 16; 0; 0; 1; 0] + u[-13; -5; 0; 0; 0; 1]    (3)

Because the four vectors on the right side of (3) form a basis for the solution space,

nullity(A) = 4

EXAMPLE 2 Maximum Value for Rank
What is the maximum possible rank of an m × n matrix A that is not square?

Solution Since the row vectors of A lie in R^n and the column vectors in R^m, the row space of A is at most n-dimensional and the column space is at most m-dimensional. Since the rank of A is the common dimension of its row and column space, it follows that the rank is at most the smaller of m and n. We denote this by writing

rank(A) ≤ min(m, n)

in which min(m, n) is the minimum of m and n.

The following theorem establishes a fundamental relationship between the rank and nullity of a matrix.

THEOREM 4.8.2 Dimension Theorem for Matrices
If A is a matrix with n columns, then

rank(A) + nullity(A) = n    (4)

Proof Since A has n columns, the homogeneous linear system Ax = 0 has n unknowns (variables). These fall into two distinct categories: the leading variables and the free variables. Thus,

[number of leading variables] + [number of free variables] = n

But the number of leading variables is the same as the number of leading 1's in any row echelon form of A, which is the same as the dimension of the row space of A, which is the same as the rank of A. Also, the number of free variables in the general solution of Ax = 0 is the same as the number of parameters in that solution, which is the same as the dimension of the solution space of Ax = 0, which is the same as the nullity of A. This yields Formula (4).
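Formula (4) is easy to verify numerically: a computer algebra system reports the rank directly, and the length of a null-space basis gives the nullity; the two always sum to the number of columns. A sketch with a hypothetical matrix, assuming sympy is available:

```python
import sympy as sp

# Hypothetical 2 x 4 matrix (not the book's); sympy assumed available.
A = sp.Matrix([[1, 2, 0, 4],
               [2, 4, 1, 9]])
rank = A.rank()
nullity = len(A.nullspace())       # number of null-space basis vectors

# Dimension Theorem: rank(A) + nullity(A) = number of columns.
assert rank + nullity == A.cols
print(rank, nullity)  # 2 2
```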

EXAMPLE 3 The Sum of Rank and Nullity
The matrix

A = [-1 2 0 4 5 -3; 3 -7 2 0 1 4; 2 -5 2 4 6 1; 4 -9 2 -4 -4 7]

has 6 columns, so

rank(A) + nullity(A) = 6

This is consistent with Example 1, where we showed that rank(A) = 2 and nullity(A) = 4.

The following theorem, which summarizes results already obtained, interprets rank and nullity in the context of a homogeneous linear system.

THEOREM 4.8.3 If A is an m × n matrix, then
(a) rank(A) = the number of leading variables in the general solution of Ax = 0.
(b) nullity(A) = the number of parameters in the general solution of Ax = 0.

EXAMPLE 4 Rank, Nullity, and Linear Systems
(a) Find the number of parameters in the general solution of Ax = 0 if A is a 5 × 7 matrix of rank 3.
(b) Find the rank of a 5 × 7 matrix A for which Ax = 0 has a two-dimensional solution space.

Solution (a) From (4),

nullity(A) = n - rank(A) = 7 - 3 = 4

Thus, there are four parameters.

Solution (b) The matrix A has nullity 2, so

rank(A) = n - nullity(A) = 7 - 2 = 5

Recall from Section 4.7 that if Ax = b is a consistent linear system, then its general solution can be expressed as the sum of a particular solution of this system and the general solution of Ax = 0. We leave it as an exercise for you to use this fact and Theorem 4.8.2 to prove the following result.

THEOREM 4.8.4 If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n - r parameters.

The Fundamental Spaces of a Matrix

There are six important vector spaces associated with a matrix A and its transpose A^T:

row space of A        row space of A^T
column space of A     column space of A^T
null space of A       null space of A^T

If A is an m × n matrix, then the row space and null space of A are subspaces of R^n, and the column space of A and the null space of A^T are subspaces of R^m. However, transposing a matrix converts row vectors into column vectors and conversely, so except for a difference in notation, the row space of A^T is the same as the column space of A, and the column space of A^T is the same as the row space of A. Thus, of the six spaces listed above, only the following four are distinct:

row space of A        column space of A
null space of A       null space of A^T

These are called the fundamental spaces of a matrix A. We will now consider how these four subspaces are related. Let us focus for a moment on the matrix A^T. Since the row space and column space of a matrix have the same dimension, and since transposing a matrix converts its columns to rows and its rows to columns, the following result should not be surprising.

THEOREM 4.8.5 If A is any matrix, then rank(A) = rank(A^T).

Proof rank(A) = dim(row space of A) = dim(column space of A^T) = rank(A^T).

This result has some important implications. For example, if A is an m × n matrix, then applying Formula (4) to the matrix A^T and using the fact that this matrix has m columns yields

rank(A^T) + nullity(A^T) = m

which, by virtue of Theorem 4.8.5, can be rewritten as

rank(A) + nullity(A^T) = m    (5)

This alternative form of Formula (4) makes it possible to express the dimensions of all four fundamental spaces in terms of the size and rank of A. Specifically, if rank(A) = r, then

dim[row(A)] = r         dim[col(A)] = r
dim[null(A)] = n - r    dim[null(A^T)] = m - r    (6)

A Geometric Link Between the Fundamental Spaces

The four formulas in (6) provide an algebraic relationship between the size of a matrix and the dimensions of its fundamental spaces. Our next objective is to find a geometric relationship between the fundamental spaces themselves.
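The four formulas in (6) can be checked for any particular matrix by computing null-space bases for both A and A^T. A sketch with a hypothetical 3 × 3 matrix of rank 2, assuming sympy is available:

```python
import sympy as sp

# Hypothetical matrix (not the book's): 3 x 3 with rank 2 (row3 = row1 + row2).
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3],
               [1, 1, 5]])
m, n, r = A.rows, A.cols, A.rank()

# Formulas (6): the dimensions of the four fundamental spaces.
assert len(A.nullspace()) == n - r      # dim null(A)
assert len(A.T.nullspace()) == m - r    # dim null(A^T)
print(r, n - r, m - r)  # 2 1 1
```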
For this purpose recall from Theorem 3.4.3 that if A is an m × n matrix, then the null space of A consists of those vectors that are orthogonal to each of the row vectors of A. To develop that idea in more detail, we make the following definition.

DEFINITION 2 If W is a subspace of R^n, then the set of all vectors in R^n that are orthogonal to every vector in W is called the orthogonal complement of W and is denoted by the symbol W⊥.

The following theorem lists three basic properties of orthogonal complements. We will omit the formal proof because a more general version of this theorem will be proved later in the text.

Part (b) of Theorem 4.8.6 can be expressed as

W ∩ W⊥ = {0}

and part (c) as

(W⊥)⊥ = W

THEOREM 4.8.6 If W is a subspace of R^n, then:
(a) W⊥ is a subspace of R^n.
(b) The only vector common to W and W⊥ is 0.
(c) The orthogonal complement of W⊥ is W.

EXAMPLE 5 Orthogonal Complements
In R^2 the orthogonal complement of a line W through the origin is the line through the origin that is perpendicular to W (Figure 4.8.1a); and in R^3 the orthogonal complement of a plane W through the origin is the line through the origin that is perpendicular to that plane (Figure 4.8.1b).

[Figure 4.8.1: (a) perpendicular lines W and W⊥ in R^2; (b) a plane W and its perpendicular line W⊥ in R^3.]

Explain why {0} and R^n are orthogonal complements.

The next theorem will provide a geometric link between the fundamental spaces of a matrix. In the exercises we will ask you to prove that if a vector in R^n is orthogonal to each vector in a basis for a subspace of R^n, then it is orthogonal to every vector in that subspace. Thus, part (a) of the following theorem is essentially a restatement of Theorem 3.4.3 in the language of orthogonal complements; it is illustrated in Example 6 of Section 3.4. The proof of part (b), which is left as an exercise, follows from part (a). The essential idea of the theorem is illustrated in Figure 4.8.2.

[Figure 4.8.2: Row A and Null A as orthogonal complements in R^n; Col A and Null A^T as orthogonal complements in R^m.]

THEOREM 4.8.7 If A is an m × n matrix, then:
(a) The null space of A and the row space of A are orthogonal complements in R^n.
(b) The null space of A^T and the column space of A are orthogonal complements in R^m.
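Part (a) of Theorem 4.8.7 says that every solution of Ax = 0 has zero dot product with every row of A, which is easy to confirm numerically for a hypothetical matrix (sympy assumed available):

```python
import sympy as sp

# Hypothetical matrix (not the book's); sympy assumed available.
A = sp.Matrix([[1, 2, 3],
               [0, 1, 4]])

# Every null-space basis vector is orthogonal to every row of A,
# so null(A) sits inside row(A)-perp (and, by dimensions, equals it).
for v in A.nullspace():
    for i in range(A.rows):
        assert (A.row(i) * v)[0] == 0   # row_i . v = 0
print(len(A.nullspace()))  # 1
```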

More on the Equivalence Theorem

In Theorem 2.3.8 we listed six results that are equivalent to the invertibility of a square matrix A. We are now in a position to add ten more statements to that list to produce a single theorem that summarizes and links together all of the topics that we have covered thus far. We will prove some of the equivalences and leave others as exercises.

THEOREM 4.8.8 Equivalent Statements
If A is an n × n matrix, then the following statements are equivalent.
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is In.
(d) A is expressible as a product of elementary matrices.
(e) Ax = b is consistent for every n × 1 matrix b.
(f) Ax = b has exactly one solution for every n × 1 matrix b.
(g) det(A) ≠ 0.
(h) The column vectors of A are linearly independent.
(i) The row vectors of A are linearly independent.
(j) The column vectors of A span R^n.
(k) The row vectors of A span R^n.
(l) The column vectors of A form a basis for R^n.
(m) The row vectors of A form a basis for R^n.
(n) A has rank n.
(o) A has nullity 0.
(p) The orthogonal complement of the null space of A is R^n.
(q) The orthogonal complement of the row space of A is {0}.

Proof The equivalence of (h) through (m) follows from Theorem 4.5.4 (we omit the details). To complete the proof we will show that (b), (n), and (o) are equivalent by proving the chain of implications (b) ⇒ (o) ⇒ (n) ⇒ (b).

(b) ⇒ (o) If Ax = 0 has only the trivial solution, then there are no parameters in that solution, so nullity(A) = 0 by Theorem 4.8.3(b).

(o) ⇒ (n) Theorem 4.8.2.

(n) ⇒ (b) If A has rank n, then Theorem 4.8.3(a) implies that there are n leading variables (hence no free variables) in the general solution of Ax = 0. This leaves the trivial solution as the only possibility.

Applications of Rank

The advent of the Internet has stimulated research on finding efficient methods for transmitting large amounts of digital data over communications lines with limited bandwidths.
Digital data are commonly stored in matrix form, and many techniques for improving transmission speed use the rank of a matrix in some way. Rank plays a role because it measures the redundancy in a matrix in the sense that if A is an m × n matrix of rank k, then n - k of the column vectors and m - k of the row vectors can be expressed in terms of k linearly independent column or row vectors. The essential idea in many data compression schemes is to approximate the original data set by a data set with smaller rank that conveys nearly the same information, then eliminate redundant vectors in the approximating set to speed up the transmission time.
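As an illustrative sketch of the compression idea (not a method from the text): the singular value decomposition, taken up in a later chapter, yields for each k a best rank-k approximation of A, and if A already has rank at most k the approximation reproduces A exactly. Storing the rank-k factors takes k(m + n) numbers instead of mn. The example below, assuming numpy is available, builds a 50 × 50 matrix of rank at most 8 and recovers it from its rank-8 factors.

```python
import numpy as np

# Build a 50 x 50 matrix with rank at most 8 (product of 50x8 and 8x50 factors).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))

# Truncated SVD: keep only the k largest singular values and vectors.
U, s, Vt = np.linalg.svd(A)
k = 8
A_k = U[:, :k] * s[:k] @ Vt[:k, :]   # rank-k approximation of A

print(np.allclose(A, A_k))  # True: A itself has rank at most 8
```

Here the rank-8 factors hold 8 · (50 + 50) = 800 numbers versus 2500 for A itself, the kind of redundancy elimination described above.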

OPTIONAL Overdetermined and Underdetermined Systems

In engineering and physics, the occurrence of an overdetermined or underdetermined linear system often signals that one or more variables were omitted in formulating the problem or that extraneous variables were included. This often leads to some kind of complication.

In many applications the equations in a linear system correspond to physical constraints or conditions that must be satisfied. In general, the most desirable systems are those that have the same number of constraints as unknowns since such systems often have a unique solution. Unfortunately, it is not always possible to match the number of constraints and unknowns, so researchers are often faced with linear systems that have more constraints than unknowns, called overdetermined systems, or with fewer constraints than unknowns, called underdetermined systems. The following theorem will help us to analyze both overdetermined and underdetermined systems.

THEOREM 4.8.9 Let A be an m × n matrix.
(a) (Overdetermined Case) If m > n, then the linear system Ax = b is inconsistent for at least one vector b in R^m.
(b) (Underdetermined Case) If m < n, then for each vector b in R^m the linear system Ax = b is either inconsistent or has infinitely many solutions.

Proof (a) Assume that m > n, in which case the column vectors of A cannot span R^m (fewer vectors than the dimension of R^m). Thus, there is at least one vector b in R^m that is not in the column space of A, and for any such b the system Ax = b is inconsistent by Theorem 4.7.1.

Proof (b) Assume that m < n. For each vector b in R^m there are two possibilities: either the system Ax = b is consistent or it is inconsistent. If it is inconsistent, then the proof is complete. If it is consistent, then Theorem 4.8.4 implies that the general solution has n - r parameters, where r = rank(A).
But we know from Example 2 that rank(A) is at most the smaller of m and n (which is m), so

n - r ≥ n - m > 0

This means that the general solution has at least one parameter and hence there are infinitely many solutions.

EXAMPLE 6 Overdetermined and Underdetermined Systems
(a) What can you say about the solutions of an overdetermined system Ax = b of 7 equations in 5 unknowns in which A has rank r = 4?
(b) What can you say about the solutions of an underdetermined system Ax = b of 5 equations in 7 unknowns in which A has rank r = 4?

Solution (a) The system is consistent for some vector b in R^7, and for any such b the number of parameters in the general solution is n - r = 5 - 4 = 1.

Solution (b) The system may be consistent or inconsistent, but if it is consistent for the vector b in R^5, then the general solution has n - r = 7 - 4 = 3 parameters.

EXAMPLE 7 An Overdetermined System
The linear system

x1 - 2x2 = b1
x1 -  x2 = b2
x1 +  x2 = b3
x1 + 2x2 = b4
x1 + 3x2 = b5

is overdetermined, so it cannot be consistent for all possible values of b1, b2, b3, b4, and b5. Conditions under which the system is consistent can be obtained by solving the linear

system by Gauss–Jordan elimination. We leave it for you to show that the augmented matrix is row equivalent to

[1 -2 | b1; 0 1 | b2 - b1; 0 0 | b3 - 3b2 + 2b1; 0 0 | b4 - 4b2 + 3b1; 0 0 | b5 - 5b2 + 4b1]    (7)

Thus, the system is consistent if and only if b1, b2, b3, b4, and b5 satisfy the conditions

2b1 - 3b2 + b3 = 0
3b1 - 4b2 + b4 = 0
4b1 - 5b2 + b5 = 0

Solving this homogeneous linear system yields

b1 = 5r - 4s, b2 = 4r - 3s, b3 = 2r - s, b4 = r, b5 = s

where r and s are arbitrary.

Remark The coefficient matrix for the given linear system in the last example has n = 2 columns, and it has rank r = 2 because there are two nonzero rows in its reduced row echelon form. This implies that when the system is consistent its general solution will contain n - r = 0 parameters; that is, the solution will be unique. With a moment's thought, you should be able to see that this is so from (7).

Exercise Set 4.8

In Exercises 1–2, find the rank and nullity of the matrix A by reducing it to row echelon form.
1. (a) A = (b) A =   2. (a) A = (b) A =

In Exercises 3–6, the matrix R is the reduced row echelon form of the matrix A. (a) By inspection of the matrix R, find the rank and nullity of A. (b) Confirm that the rank and nullity satisfy Formula (4). (c) Find the number of leading variables and the number of parameters in the general solution of Ax = 0 without solving the system.
3. A = ; R =   4. A = ; R =   5. A = ; R =   6. A = ; R =

4.8 Rank, Nullity, and the Fundamental Matrix Spaces

7. In each part, find the largest possible value for the rank of A and the smallest possible value for the nullity of A.
(a) A is 4 x 4   (b) A is 3 x 5   (c) A is

8. If A is an m x n matrix, what is the largest possible value for its rank and the smallest possible value for its nullity?

9. In each part, use the information in the table to:
(i) find the dimensions of the row space of A, column space of A, null space of A, and null space of A^T;
(ii) determine whether or not the linear system Ax = b is consistent;
(iii) find the number of parameters in the general solution of each system in (ii) that is consistent.

(a) (b) (c) (d) (e) (f) (g)
Size of A
Rank(A)
Rank[A | b]

10. Verify that rank(A) = rank(A^T) for
A =

11. (a) Find an equation relating nullity(A) and nullity(A^T) for the matrix in Exercise 10.
(b) Find an equation relating nullity(A) and nullity(A^T) for a general m x n matrix.

12. Let T: R^2 -> R^3 be the linear transformation defined by the formula
T(x1, x2) = ( + 3, , )
(a) Find the rank of the standard matrix for T.
(b) Find the nullity of the standard matrix for T.

13. Let T: R^5 -> R^3 be the linear transformation defined by the formula
T(x1, x2, x3, x4, x5) = ( + , , )
(a) Find the rank of the standard matrix for T.
(b) Find the nullity of the standard matrix for T.

14. Discuss how the rank of A varies with t.
(a) A =     (b) A =

15. Are there values of r and s for which
A =
has rank 1? Has rank 2? If so, find those values.

16. (a) Give an example of a 3 x 3 matrix whose column space is a plane through the origin in 3-space.
(b) What kind of geometric object is the null space of your matrix?
(c) What kind of geometric object is the row space of your matrix?

17. Suppose that A is a 3 x 3 matrix whose null space is a line through the origin in 3-space. Can the row or column space of A also be a line through the origin? Explain.

18. (a) If A is a 3 x 5 matrix, then the rank of A is at most ____. Why?
(b) If A is a 3 x 5 matrix, then the nullity of A is at most ____. Why?
(c) If A is a 3 x 5 matrix, then the rank of A^T is at most ____. Why?
(d) If A is a 3 x 5 matrix, then the nullity of A^T is at most ____. Why?

19. (a) If A is a 3 x 5 matrix, then the number of leading 1's in the reduced row echelon form of A is at most ____. Why?
(b) If A is a 3 x 5 matrix, then the number of parameters in the general solution of Ax = 0 is at most ____. Why?
(c) If A is a 5 x 3 matrix, then the number of leading 1's in the reduced row echelon form of A is at most ____. Why?
(d) If A is a 5 x 3 matrix, then the number of parameters in the general solution of Ax = 0 is at most ____. Why?

20. Let A be a 7 x 6 matrix such that Ax = 0 has only the trivial solution. Find the rank and nullity of A.

21. Let A be a 5 x 7 matrix with rank 4.
(a) What is the dimension of the solution space of Ax = 0?
(b) Is Ax = b consistent for all vectors b in R^5? Explain.

22. Let
A = [a11 a12 a13; a21 a22 a23]
Show that A has rank 2 if and only if one or more of the following determinants is nonzero:
det[a11 a12; a21 a22],  det[a11 a13; a21 a23],  det[a12 a13; a22 a23]

23. Use the result in Exercise 22 to show that the set of points (x, y, z) in R^3 for which the matrix

[1  x  y]
[x  y  z]

has rank 1 is the curve with parametric equations x = t, y = t^2, z = t^3.

24. Find matrices A and B for which rank(A) = rank(B), but rank(A^2) != rank(B^2).

25. In Example 6 of Section 3.4 we showed that the row space and the null space of the matrix
A =
are orthogonal complements in R^6, as guaranteed by part (a) of Theorem 4.8.7. Show that the null space of A^T and the column space of A are orthogonal complements in R^4, as guaranteed by part (b) of Theorem 4.8.7. [Suggestion: Show that each column vector of A is orthogonal to each vector in a basis for the null space of A^T.]

26. Confirm the results stated in Theorem 4.8.7 for the matrix
A =

27. In each part, state whether the system is overdetermined or underdetermined. If overdetermined, find all values of the b's for which it is inconsistent, and if underdetermined, find all values of the b's for which it is inconsistent and all values for which it has infinitely many solutions.
(a) Ax = b with b = (b1, b2, b3)
(b) Ax = b with b = (b1, b2), unknowns x, y, z
(c) Ax = b with b = (b1, b2), unknowns x, y, z

28. What conditions must be satisfied by b1, b2, b3, b4, and b5 for the overdetermined linear system (five equations with right-hand sides b1 through b5) to be consistent?

Working with Proofs

29. Prove: If k != 0, then A and kA have the same rank.

30. Prove: If a matrix A is not square, then either the row vectors or the column vectors of A are linearly dependent.

31. Use Theorem to prove Theorem .

32. Prove Theorem 4.8.7(b).

33. Prove: If a vector v in R^n is orthogonal to each vector in a basis for a subspace W of R^n, then v is orthogonal to every vector in W.

True-False Exercises

TF. In parts (a)-(j) determine whether the statement is true or false, and justify your answer.
(a) Either the row vectors or the column vectors of a square matrix are linearly independent.
(b) A matrix with linearly independent row vectors and linearly independent column vectors is square.
(c) The nullity of a nonzero m x n matrix is at most m.
(d) Adding one additional column to a matrix increases its rank by one.
(e) The nullity of a square matrix with linearly dependent rows is at least one.
(f) If A is square and Ax = b is inconsistent for some vector b, then the nullity of A is zero.
(g) If a matrix A has more rows than columns, then the dimension of the row space is greater than the dimension of the column space.
(h) If rank(A^T) = rank(A), then A is square.
(i) There is no 3 x 3 matrix whose row space and null space are both lines in 3-space.
(j) If V is a subspace of R^n and W is a subspace of V, then W is a subspace of V.

Working with Technology

T1. It can be proved that a nonzero matrix A has rank k if and only if some k x k submatrix has a nonzero determinant and all square submatrices of larger size have determinant zero. Use this fact to find the rank of
A =
Check your result by computing the rank of A in a different way.
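Exercise T1's closing instruction, checking the rank a different way, is easy to carry out with software. Below is a minimal NumPy sketch; the matrix is a hypothetical 5 x 7 example chosen for illustration, not the one printed in the exercise.

```python
import numpy as np

# Hypothetical 5x7 matrix for illustration: the fifth row is the sum of the
# first four, so the rank should come out to 4.
A = np.array([
    [1, 0, 0, 0, 2, 1, 0],
    [0, 1, 0, 0, 3, 0, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 2, 2],
    [1, 1, 1, 1, 6, 4, 4],
])

m, n = A.shape
rank = np.linalg.matrix_rank(A)   # SVD-based numerical rank
nullity = n - rank                # Dimension Theorem: rank + nullity = n

print(rank, nullity)              # 4 3
```

A second, independent check in the spirit of T1 would be to find the largest k for which some k x k submatrix has a nonzero determinant.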

77 4.9 Basic Matri Transformations in R and R 3 59 T. Slvester s inequalit states that if A and B are n n matrices with rank r A and r B, respectivel, then the rank r AB of AB satisfies the inequalit r A + r B n r AB min(r A,r B ) where min(r A,r B ) denotes the smaller of r A and r B or their common value if the two ranks are the same. Use our technolog utilit to confirm this result for some matrices of our choice. 4.9 Basic Matri Transformations in R and R 3 In this section we will continue our stud of linear transformations b considering some basic tpes of matri transformations in R and R 3 that have simple geometric interpretations. The transformations we will stud here are important in such fields as computer graphics, engineering, and phsics. There are man was to transform the vector spaces R and R 3, some of the most important of which can be accomplished b matri transformations using the methods introduced in Section.8. For eample, rotations about the origin, reflections about lines and planes through the origin, and projections onto lines and planes through the origin can all be accomplished using a linear operator T A in which A is an appropriate or3 3 matri. Reflection Operators Some of the most basic matri operators on R and R 3 are those that map each point into its smmetric image about a fied line or a fied plane that contains the origin; these are called reflection operators. Table shows the standard matrices for the reflections about the coordinate aes in R, and Table shows the standard matrices for the reflections about the coordinate planes in R 3. In each case the standard matri was obtained using the following procedure introduced in Section.8: Find the images of the standard basis vectors, convert those images to column vectors, and then use those column vectors as successive columns of the standard matri. 
Table 1 (matrices are written row by row, with rows separated by semicolons)

Reflection about the x-axis: T(x, y) = (x, -y)
  Images of the basis vectors: T(e1) = T(1, 0) = (1, 0), T(e2) = T(0, 1) = (0, -1)
  Standard matrix: [1 0; 0 -1]

Reflection about the y-axis: T(x, y) = (-x, y)
  Images: T(e1) = T(1, 0) = (-1, 0), T(e2) = T(0, 1) = (0, 1)
  Standard matrix: [-1 0; 0 1]

Reflection about the line y = x: T(x, y) = (y, x)
  Images: T(e1) = T(1, 0) = (0, 1), T(e2) = T(0, 1) = (1, 0)
  Standard matrix: [0 1; 1 0]
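The three matrices in Table 1 are easy to verify numerically. This NumPy sketch applies each reflection to an arbitrary sample point (2, 5):

```python
import numpy as np

# Standard matrices from Table 1; the columns are the images of e1 and e2.
reflect_x_axis = np.array([[1,  0],
                           [0, -1]])   # T(x, y) = (x, -y)
reflect_y_axis = np.array([[-1, 0],
                           [ 0, 1]])   # T(x, y) = (-x, y)
reflect_y_eq_x = np.array([[0, 1],
                           [1, 0]])    # T(x, y) = (y, x)

v = np.array([2, 5])
print(reflect_x_axis @ v)   # [ 2 -5]
print(reflect_y_axis @ v)   # [-2  5]
print(reflect_y_eq_x @ v)   # [5 2]
```

Applying any one of these matrices twice returns the original vector, as a reflection should.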

Table 2

Reflection about the xy-plane: T(x, y, z) = (x, y, -z)
  Images: T(e1) = (1, 0, 0), T(e2) = (0, 1, 0), T(e3) = (0, 0, -1)
  Standard matrix: [1 0 0; 0 1 0; 0 0 -1]

Reflection about the xz-plane: T(x, y, z) = (x, -y, z)
  Images: T(e1) = (1, 0, 0), T(e2) = (0, -1, 0), T(e3) = (0, 0, 1)
  Standard matrix: [1 0 0; 0 -1 0; 0 0 1]

Reflection about the yz-plane: T(x, y, z) = (-x, y, z)
  Images: T(e1) = (-1, 0, 0), T(e2) = (0, 1, 0), T(e3) = (0, 0, 1)
  Standard matrix: [-1 0 0; 0 1 0; 0 0 1]

Projection Operators

Matrix operators on R^2 and R^3 that map each point into its orthogonal projection onto a fixed line or plane through the origin are called projection operators (or, more precisely, orthogonal projection operators). Table 3 shows the standard matrices for the orthogonal projections onto the coordinate axes in R^2, and Table 4 shows the standard matrices for the orthogonal projections onto the coordinate planes in R^3.

Table 3

Orthogonal projection onto the x-axis: T(x, y) = (x, 0)
  Images: T(e1) = T(1, 0) = (1, 0), T(e2) = T(0, 1) = (0, 0)
  Standard matrix: [1 0; 0 0]

Orthogonal projection onto the y-axis: T(x, y) = (0, y)
  Images: T(e1) = T(1, 0) = (0, 0), T(e2) = T(0, 1) = (0, 1)
  Standard matrix: [0 0; 0 1]
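A quick numerical companion to Table 3: applying a projection matrix twice gives the same result as applying it once (P @ P equals P), which is the algebraic signature of a projection. (NumPy sketch; the sample vector is arbitrary.)

```python
import numpy as np

proj_x_axis = np.array([[1, 0],
                        [0, 0]])   # T(x, y) = (x, 0)
proj_y_axis = np.array([[0, 0],
                        [0, 1]])   # T(x, y) = (0, y)

v = np.array([2, 5])
print(proj_x_axis @ v)   # [2 0]
print(proj_y_axis @ v)   # [0 5]

# Idempotence: projecting an already-projected vector changes nothing.
assert np.array_equal(proj_x_axis @ proj_x_axis, proj_x_axis)
assert np.array_equal(proj_y_axis @ proj_y_axis, proj_y_axis)
```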

Table 4

Orthogonal projection onto the xy-plane: T(x, y, z) = (x, y, 0)
  Images: T(e1) = (1, 0, 0), T(e2) = (0, 1, 0), T(e3) = (0, 0, 0)
  Standard matrix: [1 0 0; 0 1 0; 0 0 0]

Orthogonal projection onto the xz-plane: T(x, y, z) = (x, 0, z)
  Images: T(e1) = (1, 0, 0), T(e2) = (0, 0, 0), T(e3) = (0, 0, 1)
  Standard matrix: [1 0 0; 0 0 0; 0 0 1]

Orthogonal projection onto the yz-plane: T(x, y, z) = (0, y, z)
  Images: T(e1) = (0, 0, 0), T(e2) = (0, 1, 0), T(e3) = (0, 0, 1)
  Standard matrix: [0 0 0; 0 1 0; 0 0 1]

Rotation Operators

Matrix operators on R^2 and R^3 that move points along arcs of circles centered at the origin are called rotation operators. Let us consider how to find the standard matrix for the rotation operator T: R^2 -> R^2 that moves points counterclockwise about the origin through a positive angle θ. As illustrated in Figure 4.9.1, the images of the standard basis vectors are

T(e1) = T(1, 0) = (cos θ, sin θ)  and  T(e2) = T(0, 1) = (-sin θ, cos θ)

so it follows from Formula (4) of Section 1.8 that the standard matrix for T is

A = [T(e1) | T(e2)] = [cos θ  -sin θ; sin θ  cos θ]

[Figure 4.9.1: e1 rotates to (cos θ, sin θ) and e2 rotates to (-sin θ, cos θ).]

In keeping with common usage we will denote this operator by R_θ and call

R_θ = [cos θ  -sin θ; sin θ  cos θ]   (1)

the rotation matrix for R^2. In the plane, counterclockwise angles are positive and clockwise angles are negative. The rotation matrix for a clockwise rotation of θ radians can be obtained by replacing θ by -θ in (1). After simplification this yields

R_(-θ) = [cos θ  sin θ; -sin θ  cos θ]

If x = (x, y) is a vector in R^2, and if w = (w1, w2) is its image under the rotation, then the relationship w = R_θ x can be written in component form as

w1 = x cos θ - y sin θ
w2 = x sin θ + y cos θ    (2)

These are called the rotation equations for R^2. These ideas are summarized in Table 5.

Table 5

Counterclockwise rotation about the origin through an angle θ:
  Rotation equations: w1 = x cos θ - y sin θ, w2 = x sin θ + y cos θ
  Standard matrix: [cos θ  -sin θ; sin θ  cos θ]

EXAMPLE 1 A Rotation Operator

Find the image of x = (1, 1) under a rotation of π/6 radians (= 30°) about the origin.

Solution It follows from (1) with θ = π/6 that

R_(π/6) x = [√3/2  -1/2; 1/2  √3/2][1; 1] = [(√3 - 1)/2; (1 + √3)/2] ≈ [0.37; 1.37]

or, in comma-delimited notation, R_(π/6)(1, 1) ≈ (0.37, 1.37).

Rotations in R^3

A rotation of vectors in R^3 is commonly described in relation to a line through the origin called the axis of rotation and a unit vector u along that line (Figure 4.9.2a). The unit vector and what is called the right-hand rule can be used to establish a sign for the angle of rotation by cupping the fingers of your right hand so they curl in the direction of rotation and observing the direction of your thumb. If your thumb points in the direction of u, then the angle of rotation is regarded to be positive relative to u, and if it points in the direction opposite to u, then it is regarded to be negative relative to u (Figure 4.9.2b).
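The rotation of Example 1 (here taken with the vector (1, 1)) can be replayed numerically with the standard matrix from Formula (1). A NumPy sketch:

```python
import numpy as np

def rotation_matrix(theta):
    """Standard matrix R_theta for a counterclockwise rotation of R^2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Rotate x = (1, 1) through pi/6 radians (30 degrees), as in Example 1.
w = rotation_matrix(np.pi / 6) @ np.array([1.0, 1.0])
print(np.round(w, 2))   # [0.37 1.37]

# Rotating through -theta (a clockwise rotation) undoes the rotation.
back = rotation_matrix(-np.pi / 6) @ w
assert np.allclose(back, [1.0, 1.0])
```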
[Figure 4.9.2: (a) Angle of rotation  (b) Right-hand rule]

For rotations about the coordinate axes in R^3, we will take the unit vectors to be i, j, and k, in which case an angle of rotation will be positive if it is counterclockwise looking toward the origin along the positive coordinate axis and will be negative if it is clockwise. Table 6 shows the standard matrices for the rotation operators on R^3 that rotate each vector about one of the coordinate axes through an angle θ. You will find it instructive to compare these matrices to that in Table 5.

Table 6

Counterclockwise rotation about the positive x-axis through an angle θ:
  Rotation equations: w1 = x, w2 = y cos θ - z sin θ, w3 = y sin θ + z cos θ
  Standard matrix: [1 0 0; 0 cos θ -sin θ; 0 sin θ cos θ]

Counterclockwise rotation about the positive y-axis through an angle θ:
  Rotation equations: w1 = x cos θ + z sin θ, w2 = y, w3 = -x sin θ + z cos θ
  Standard matrix: [cos θ 0 sin θ; 0 1 0; -sin θ 0 cos θ]

Counterclockwise rotation about the positive z-axis through an angle θ:
  Rotation equations: w1 = x cos θ - y sin θ, w2 = x sin θ + y cos θ, w3 = z
  Standard matrix: [cos θ -sin θ 0; sin θ cos θ 0; 0 0 1]

Yaw, Pitch, and Roll

In aeronautics and astronautics, the orientation of an aircraft or space shuttle relative to an xyz-coordinate system is often described in terms of angles called yaw, pitch, and roll. If, for example, an aircraft is flying along the y-axis and the xy-plane defines the horizontal, then the aircraft's angle of rotation about the z-axis is called the yaw, its angle of rotation about the x-axis is called the pitch, and its angle of rotation about the y-axis is called the roll. A combination of yaw, pitch, and roll can be achieved by a single rotation about some axis through the origin. This is, in fact, how a space shuttle makes attitude adjustments: it doesn't perform each rotation separately; it calculates one axis and rotates about that axis to get the correct orientation. Such rotation maneuvers are used to align an antenna, point the nose toward a celestial object, or position a payload bay for docking.

[Figure: yaw (rotation about the z-axis), pitch (rotation about the x-axis), and roll (rotation about the y-axis) of an aircraft]

For completeness, we note that the standard matrix for a counterclockwise rotation through an angle θ about an axis in R^3, which is determined by an arbitrary unit vector u = (a, b, c) that has its initial point at the origin, is

[ a^2(1 - cos θ) + cos θ     ab(1 - cos θ) - c sin θ    ac(1 - cos θ) + b sin θ ]
[ ab(1 - cos θ) + c sin θ    b^2(1 - cos θ) + cos θ     bc(1 - cos θ) - a sin θ ]   (3)
[ ac(1 - cos θ) - b sin θ    bc(1 - cos θ) + a sin θ    c^2(1 - cos θ) + cos θ  ]

The derivation can be found in the book Principles of Interactive Computer Graphics, by W. M. Newman and R. F. Sproull (New York: McGraw-Hill, 1979). You may find it instructive to derive the results in Table 6 as special cases of this more general result.
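Formula (3) is easy to check in software. The sketch below codes the matrix verbatim and confirms the text's suggestion that the z-axis rotation of Table 6 is the special case u = k = (0, 0, 1):

```python
import numpy as np

def rotation_about_axis(u, theta):
    """Formula (3): counterclockwise rotation through theta about the axis
    spanned by the unit vector u = (a, b, c)."""
    a, b, c = u
    C, s = np.cos(theta), np.sin(theta)
    t = 1 - C
    return np.array([
        [a*a*t + C,   a*b*t - c*s, a*c*t + b*s],
        [a*b*t + c*s, b*b*t + C,   b*c*t - a*s],
        [a*c*t - b*s, b*c*t + a*s, c*c*t + C],
    ])

# Special case u = k = (0, 0, 1): should reduce to the z-axis rotation.
theta = 0.7
R = rotation_about_axis((0.0, 0.0, 1.0), theta)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
assert np.allclose(R, Rz)

# Rotation matrices are orthogonal: R R^T = I.
assert np.allclose(R @ R.T, np.eye(3))
```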

Dilations and Contractions

If k is a nonnegative scalar, then the operator T(x) = kx on R^2 or R^3 has the effect of increasing or decreasing the length of each vector by a factor of k. If 0 <= k < 1 the operator is called a contraction with factor k, and if k > 1 it is called a dilation with factor k (Figure 4.9.3). Tables 7 and 8 illustrate these operators. If k = 1, then T is the identity operator.

[Figure 4.9.3: T(x) = kx for (a) 0 <= k < 1 and (b) k > 1]

Table 7

Operator: T(x, y) = (kx, ky)

Contraction with factor k in R^2 (0 <= k < 1):
  Effect on the unit square: (1, 0) -> (k, 0), (0, 1) -> (0, k)
  Standard matrix: [k 0; 0 k]

Dilation with factor k in R^2 (k > 1):
  Effect on the unit square: (1, 0) -> (k, 0), (0, 1) -> (0, k)
  Standard matrix: [k 0; 0 k]

Table 8

Operator: T(x, y, z) = (kx, ky, kz)

Contraction with factor k in R^3 (0 <= k < 1) and dilation with factor k in R^3 (k > 1):
  Standard matrix: [k 0 0; 0 k 0; 0 0 k]

Expansions and Compressions

In a dilation or contraction of R^2 or R^3, all coordinates are multiplied by a nonnegative factor k. If only one coordinate is multiplied by k, then, depending on the value of k, the resulting operator is called a compression or expansion with factor k in the direction of a coordinate axis. This is illustrated in Table 9 for R^2. The extension to R^3 is left as an exercise.

Table 9

Operator: T(x, y) = (kx, y)

Compression in the x-direction with factor k in R^2 (0 <= k < 1) and expansion in the x-direction with factor k in R^2 (k > 1):
  Effect on the unit square: (1, 0) -> (k, 0), (0, 1) -> (0, 1)
  Standard matrix: [k 0; 0 1]

Operator: T(x, y) = (x, ky)

Compression in the y-direction with factor k in R^2 (0 <= k < 1) and expansion in the y-direction with factor k in R^2 (k > 1):
  Effect on the unit square: (1, 0) -> (1, 0), (0, 1) -> (0, k)
  Standard matrix: [1 0; 0 k]

Shears

A matrix operator of the form T(x, y) = (x + ky, y) translates a point (x, y) in the xy-plane parallel to the x-axis by an amount ky that is proportional to the y-coordinate of the point. This operator leaves the points on the x-axis fixed (since y = 0), but as we progress away from the x-axis, the translation distance increases. We call this operator the shear in the x-direction by a factor k. Similarly, a matrix operator of the form T(x, y) = (x, y + kx) is called the shear in the y-direction by a factor k. Table 10, which illustrates the basic information about shears in R^2, shows that a shear is in the positive direction if k > 0 and the negative direction if k < 0.
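The geometric description of a shear (points on the x-axis fixed, translation distance growing with y) can be seen by applying the standard matrix to the corners of the unit square. A NumPy sketch; the factor k = 2 is an arbitrary choice:

```python
import numpy as np

k = 2                          # hypothetical shear factor for illustration
shear_x = np.array([[1, k],
                    [0, 1]])   # T(x, y) = (x + k*y, y)

# Corners of the unit square as columns: (0,0), (1,0), (1,1), (0,1).
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])
print(shear_x @ square)        # corners on the x-axis stay put; (1,1) -> (1+k, 1)
```

Transposing the matrix gives the shear in the y-direction instead.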

Table 10

Shear in the x-direction by a factor k in R^2: T(x, y) = (x + ky, y)
  Effect on the unit square: the corner (1, 1) moves to (1 + k, 1); the shear is to the right if k > 0 and to the left if k < 0.
  Standard matrix: [1 k; 0 1]

Shear in the y-direction by a factor k in R^2: T(x, y) = (x, y + kx)
  Effect on the unit square: the corner (1, 1) moves to (1, 1 + k); the shear is upward if k > 0 and downward if k < 0.
  Standard matrix: [1 0; k 1]

EXAMPLE 2 Effect of Matrix Operators on the Unit Square

In each part, describe the matrix operator whose standard matrix is shown, and show its effect on the unit square.

(a) A1 =   (b) A2 =   (c) A3 =   (d) A4 =

Solution By comparing the forms of these matrices to those in Tables 7, 9, and 10, we see that the matrix A1 corresponds to a shear in the x-direction, the matrix A2 corresponds to a shear in the y-direction, the matrix A3 corresponds to a dilation, and the matrix A4 corresponds to an expansion in the x-direction, with the factor in each case determined by the entries of the matrix. The effects of these operators on the unit square are shown in Figure 4.9.4.

Orthogonal Projections onto Lines Through the Origin

In Table 3 we listed the standard matrices for the orthogonal projections onto the coordinate axes in R^2. These are special cases of the more general matrix operator T_A: R^2 -> R^2 that maps each point into its orthogonal projection onto a line L through the origin that makes an angle θ with the positive x-axis (Figure 4.9.5). In Example 4 of Section 3.3 we used the projection formula of that section to find the orthogonal projections of the standard basis vectors for R^2 onto that line. Expressed in matrix form, we found those projections to be

T(e1) = [cos^2 θ; sin θ cos θ]  and  T(e2) = [sin θ cos θ; sin^2 θ]

Thus, the standard matrix for T_A is

A = [T(e1) | T(e2)] = [cos^2 θ  sin θ cos θ; sin θ cos θ  sin^2 θ]

We have included two versions of Formula (4) because both are commonly used. Whereas the first version involves only the angle θ, the second involves both θ and 2θ.

In keeping with common usage, we will denote this operator by

P_θ = [cos^2 θ  sin θ cos θ; sin θ cos θ  sin^2 θ] = (1/2)[1 + cos 2θ  sin 2θ; sin 2θ  1 - cos 2θ]   (4)

EXAMPLE 3 Orthogonal Projection onto a Line Through the Origin

Use Formula (4) to find the orthogonal projection of the vector x = (1, 5) onto the line through the origin that makes an angle of π/6 (= 30°) with the positive x-axis.

Solution Since sin(π/6) = 1/2 and cos(π/6) = √3/2, it follows from (4) that the standard matrix for this projection is

P_(π/6) = [cos^2(π/6)  sin(π/6) cos(π/6); sin(π/6) cos(π/6)  sin^2(π/6)] = [3/4  √3/4; √3/4  1/4]

Thus,

P_(π/6) x = [3/4  √3/4; √3/4  1/4][1; 5] = [(3 + 5√3)/4; (√3 + 5)/4] ≈ [2.92; 1.68]

or, in comma-delimited notation, P_(π/6)(1, 5) ≈ (2.92, 1.68).

Reflections About Lines Through the Origin

In Table 1 we listed the reflections about the coordinate axes in R^2. These are special cases of the more general operator H_θ: R^2 -> R^2 that maps each point into its reflection about a line L through the origin that makes an angle θ with the positive x-axis (Figure 4.9.6). We could find the standard matrix for H_θ by finding the images of the standard basis vectors, but instead we will take advantage of our work on orthogonal projections by using Formula (4) for P_θ to find a formula for H_θ.

You should be able to see from Figure 4.9.7 that for every vector x in R^2

P_θ x = (1/2)(x + H_θ x)  or equivalently  H_θ x = 2P_θ x - x

Thus, it follows from Theorem 1.8.4 that

H_θ = 2P_θ - I   (5)

and hence from (4) that

H_θ = [cos 2θ  sin 2θ; sin 2θ  -cos 2θ]   (6)

EXAMPLE 4 Reflection About a Line Through the Origin

Find the reflection of the vector x = (1, 5) about the line through the origin that makes an angle of π/6 (= 30°) with the x-axis.

Solution Since sin(π/3) = √3/2 and cos(π/3) = 1/2, it follows from (6) that the standard matrix for this reflection is

H_(π/6) = [cos(π/3)  sin(π/3); sin(π/3)  -cos(π/3)] = [1/2  √3/2; √3/2  -1/2]

Thus,

H_(π/6) x = [1/2  √3/2; √3/2  -1/2][1; 5] = [(1 + 5√3)/2; (√3 - 5)/2] ≈ [4.83; -1.63]

or, in comma-delimited notation, H_(π/6)(1, 5) ≈ (4.83, -1.63).
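Formulas (4), (5), and (6), together with the two worked examples, can be checked numerically. A NumPy sketch:

```python
import numpy as np

def P(theta):
    """Formula (4): orthogonal projection onto the line at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c*c, s*c],
                     [s*c, s*s]])

def H(theta):
    """Formula (5): reflection about the same line, H = 2P - I."""
    return 2 * P(theta) - np.eye(2)

x = np.array([1.0, 5.0])
print(np.round(P(np.pi / 6) @ x, 2))   # projection of (1, 5), Example 3
print(np.round(H(np.pi / 6) @ x, 2))   # reflection of (1, 5), Example 4

# H also matches Formula (6), built directly from the double angle.
theta = np.pi / 6
H6 = np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
               [np.sin(2 * theta), -np.cos(2 * theta)]])
assert np.allclose(H(theta), H6)
```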

Exercise Set 4.9

1. Use matrix multiplication to find the reflection of ( , ) about the (a) x-axis. (b) y-axis. (c) line y = x.
2. Use matrix multiplication to find the reflection of (a, b) about the (a) x-axis. (b) y-axis. (c) line y = x.
3. Use matrix multiplication to find the reflection of ( , 5, 3) about the (a) xy-plane. (b) xz-plane. (c) yz-plane.
4. Use matrix multiplication to find the reflection of (a, b, c) about the (a) xy-plane. (b) xz-plane. (c) yz-plane.
5. Use matrix multiplication to find the orthogonal projection of ( , 5) onto the (a) x-axis. (b) y-axis.
6. Use matrix multiplication to find the orthogonal projection of (a, b) onto the (a) x-axis. (b) y-axis.
7. Use matrix multiplication to find the orthogonal projection of ( , , 3) onto the (a) xy-plane. (b) xz-plane. (c) yz-plane.
8. Use matrix multiplication to find the orthogonal projection of (a, b, c) onto the (a) xy-plane. (b) xz-plane. (c) yz-plane.
9. Use matrix multiplication to find the image of the vector (3, 4) when it is rotated about the origin through an angle of (a) θ = 30°. (b) θ = 60°. (c) θ = 45°. (d) θ = 90°.
10. Use matrix multiplication to find the image of the nonzero vector v = (v1, v2) when it is rotated about the origin through (a) a positive angle α. (b) a negative angle α.
11. Use matrix multiplication to find the image of the vector ( , , ) if it is rotated (a) 30° clockwise about the positive x-axis. (b) 30° counterclockwise about the positive y-axis. (c) 45° clockwise about the positive y-axis. (d) 90° counterclockwise about the positive z-axis.
12. Use matrix multiplication to find the image of the vector ( , , ) if it is rotated (a) 30° counterclockwise about the positive x-axis. (b) 30° clockwise about the positive y-axis. (c) 45° counterclockwise about the positive y-axis. (d) 90° clockwise about the positive z-axis.
13. (a) Use matrix multiplication to find the contraction of ( , ) with factor k = .
(b) Use matrix multiplication to find the dilation of ( , ) with factor k = .
14. (a) Use matrix multiplication to find the contraction of (a, b) with factor k = 1/α, where α > 1.
(b) Use matrix multiplication to find the dilation of (a, b) with factor k = α, where α > 1.
15. (a) Use matrix multiplication to find the contraction of ( , , 3) with factor k = .
(b) Use matrix multiplication to find the dilation of ( , , 3) with factor k = .
16. (a) Use matrix multiplication to find the contraction of (a, b, c) with factor k = 1/α, where α > 1.
(b) Use matrix multiplication to find the dilation of (a, b, c) with factor k = α, where α > 1.
17. (a) Use matrix multiplication to find the compression of ( , ) in the x-direction with factor k = .
(b) Use matrix multiplication to find the compression of ( , ) in the y-direction with factor k = .
18. (a) Use matrix multiplication to find the expansion of ( , ) in the x-direction with factor k = 3.
(b) Use matrix multiplication to find the expansion of ( , ) in the y-direction with factor k = 3.
19. (a) Use matrix multiplication to find the compression of (a, b) in the x-direction with factor k = 1/α, where α > 1.
(b) Use matrix multiplication to find the expansion of (a, b) in the y-direction with factor k = α, where α > 1.
20. Based on Table 9, make a conjecture about the standard matrices for the compressions with factor k in the directions of the coordinate axes in R^3.

In Exercises 21-22, use Example 2 as a model to describe the matrix operator whose standard matrix is given, and then show in a coordinate system its effect on the unit square.

21. (a) A1 =   (b) A2 =   (c) A3 =   (d) A4 =

22. (a) A1 =   (b) A2 =   (c) A3 =   (d) A4 =

In each part of Exercises 23-24, the effect of some matrix operator on the unit square is shown. Find the standard matrix for an operator with that effect.

23. (a)   (b)
24. (a)   (b)

In Exercises 25-26, find the standard matrix for the orthogonal projection of R^2 onto the stated line, and then use that matrix to find the orthogonal projection of the given point onto that line.

25. The orthogonal projection of (3, 4) onto the line that makes an angle of π/3 (= 60°) with the positive x-axis.
26. The orthogonal projection of ( , ) onto the line that makes an angle of π/4 (= 45°) with the positive x-axis.

In Exercises 27-28, find the standard matrix for the reflection of R^2 about the stated line, and then use that matrix to find the reflection of the given point about that line.

27. The reflection of (3, 4) about the line that makes an angle of π/3 (= 60°) with the positive x-axis.
28. The reflection of ( , ) about the line that makes an angle of π/4 (= 45°) with the positive x-axis.

29. For each reflection operator in Table 2 use the standard matrix to compute T( , , 3), and convince yourself that your result makes sense geometrically.
30. For each orthogonal projection operator in Table 4 use the standard matrix to compute T( , , 3), and convince yourself that your result makes sense geometrically.

31. Find the standard matrix for the operator T: R^3 -> R^3 that
(a) rotates each vector 30° counterclockwise about the z-axis (looking along the positive z-axis toward the origin).
(b) rotates each vector 45° counterclockwise about the x-axis (looking along the positive x-axis toward the origin).
(c) rotates each vector 90° counterclockwise about the y-axis (looking along the positive y-axis toward the origin).

32. In each part of the accompanying figure, find the standard matrix for the pictured operator.

[Figure Ex-32: three panels, each showing (x, y, z) mapped to a permutation of its coordinates, such as (x, z, y) and (z, y, x).]

33. Use Formula (3) to find the standard matrix for a rotation of 180° about the axis determined by the vector v = ( , , ). [Note: Formula (3) requires that the vector defining the axis of rotation have length 1.]

34. Use Formula (3) to find the standard matrix for a rotation of π/2 radians about the axis determined by v = ( , , ). [Note: Formula (3) requires that the vector defining the axis of rotation have length 1.]

35. Use Formula (3) to derive the standard matrices for the rotations about the x-axis, the y-axis, and the z-axis through an angle of 90° in R^3.

36. Show that the standard matrices listed in Tables 1 and 3 are special cases of Formulas (4) and (6).

37. In a sentence, describe the geometric effect of multiplying a vector by the matrix
A = [cos^2 θ - sin^2 θ   -2 sin θ cos θ; 2 sin θ cos θ   cos^2 θ - sin^2 θ]

38. If multiplication by A rotates a vector x in the xy-plane through an angle θ, what is the effect of multiplying x by A^T? Explain your reasoning.

39. Let x0 be a nonzero column vector in R^2, and suppose that T: R^2 -> R^2 is the transformation defined by the formula T(x) = x0 + R_θ x, where R_θ is the standard matrix of the rotation of R^2 about the origin through the angle θ. Give a geometric description of this transformation. Is it a matrix transformation? Explain.

40. In R^3 the orthogonal projections onto the x-axis, y-axis, and z-axis are
T1(x, y, z) = (x, 0, 0),  T2(x, y, z) = (0, y, 0),  T3(x, y, z) = (0, 0, z)
respectively.
(a) Show that the orthogonal projections onto the coordinate axes are matrix operators, and then find their standard matrices.

(b) Show that if T: R^3 -> R^3 is an orthogonal projection onto one of the coordinate axes, then for every vector x in R^3, the vectors T(x) and x - T(x) are orthogonal.
(c) Make a sketch showing x and T(x) in the case where T is the orthogonal projection onto the x-axis.

4.10 Properties of Matrix Transformations

In this section we will discuss properties of matrix transformations. We will show, for example, that if several matrix transformations are performed in succession, then the same result can be obtained by a single matrix transformation that is chosen appropriately. We will also explore the relationship between the invertibility of a matrix and properties of the corresponding transformation.

Compositions of Matrix Transformations

Suppose that T_A is a matrix transformation from R^n to R^k and T_B is a matrix transformation from R^k to R^m. If x is a vector in R^n, then T_A maps this vector into a vector T_A(x) in R^k, and T_B, in turn, maps that vector into the vector T_B(T_A(x)) in R^m. This process creates a transformation from R^n to R^m that we call the composition of T_B with T_A and denote by the symbol T_B ∘ T_A, which is read "T_B circle T_A." As illustrated in Figure 4.10.1, the transformation T_A in the formula is performed first; that is,

(T_B ∘ T_A)(x) = T_B(T_A(x))   (1)

This composition is itself a matrix transformation since

(T_B ∘ T_A)(x) = T_B(T_A(x)) = B(T_A(x)) = B(Ax) = (BA)x

which shows that it is multiplication by BA. This is expressed by the formula

T_B ∘ T_A = T_BA   (2)

[Figure 4.10.1: x in R^n maps under T_A to T_A(x) in R^k, which maps under T_B to T_B(T_A(x)) in R^m.]

Compositions can be defined for any finite succession of matrix transformations whose domains and ranges have the appropriate dimensions.
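Formula (2) says that a chain of transformations collapses to multiplication by a single product matrix. A small NumPy sketch; the two operators here (a shear and a 90° rotation) are arbitrary choices for illustration:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1]])    # T_A: shear in the x-direction by a factor 2
B = np.array([[0, -1],
              [1,  0]])   # T_B: counterclockwise rotation through 90 degrees

x = np.array([1.0, 1.0])
step_by_step = B @ (A @ x)   # T_B(T_A(x)): shear first, then rotate
one_matrix   = (B @ A) @ x   # a single multiplication by the product BA
assert np.allclose(step_by_step, one_matrix)
print(one_matrix)            # [-1.  3.]
```

Note the right-to-left order: BA applies A first, matching (T_B ∘ T_A)(x) = T_B(T_A(x)).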
For example, to extend Formula (2) to three factors, consider the matrix transformations

T_A: R^n -> R^k,  T_B: R^k -> R^l,  T_C: R^l -> R^m

We define the composition (T_C ∘ T_B ∘ T_A): R^n -> R^m by

(T_C ∘ T_B ∘ T_A)(x) = T_C(T_B(T_A(x)))

As above, it can be shown that this is a matrix transformation whose standard matrix is CBA and that

T_C ∘ T_B ∘ T_A = T_CBA   (3)

Sometimes we will want to refer to the standard matrix for a matrix transformation T: R^n -> R^m without giving a name to the matrix itself. In such cases we will denote the standard matrix for T by the symbol [T]. Thus, the equation

T(x) = [T]x

states that T(x) is the product of the standard matrix [T] and the column vector x. For example, if T1: R^n -> R^k and if T2: R^k -> R^m, then Formula (2) can be restated as

[T2 ∘ T1] = [T2][T1]   (4)

Similarly, Formula (3) can be restated as

[T3 ∘ T2 ∘ T1] = [T3][T2][T1]   (5)

WARNING Just as it is not generally true for matrices that AB = BA, so it is not generally true that

T_B ∘ T_A = T_A ∘ T_B

That is, order matters when matrix transformations are composed. In those special cases where the order does not matter we say that the linear transformations commute.

EXAMPLE 1 Composition Is Not Commutative

Let T1: R^2 -> R^2 be the reflection about the line y = x, and let T2: R^2 -> R^2 be the orthogonal projection onto the x-axis. Figure 4.10.2 illustrates graphically that T1 ∘ T2 and T2 ∘ T1 have different effects on a vector x. This same conclusion can be reached by showing that the standard matrices for T1 and T2 do not commute:

[T2 ∘ T1] = [T2][T1] = [1 0; 0 0][0 1; 1 0] = [0 1; 0 0]
[T1 ∘ T2] = [T1][T2] = [0 1; 1 0][1 0; 0 0] = [0 0; 1 0]

so [T2 ∘ T1] != [T1 ∘ T2].

EXAMPLE 2 Composition of Rotations Is Commutative

Let T1: R^2 -> R^2 and T2: R^2 -> R^2 be the matrix operators that rotate vectors about the origin through the angles θ1 and θ2, respectively. Thus the operation

(T2 ∘ T1)(x) = T2(T1(x))

first rotates x through the angle θ1, then rotates T1(x) through the angle θ2. It follows that the net effect of T2 ∘ T1 is to rotate each vector in R^2 through the angle θ1 + θ2 (Figure 4.10.3). The standard matrices for these matrix operators, which are

[T1] = [cos θ1  -sin θ1; sin θ1  cos θ1],  [T2] = [cos θ2  -sin θ2; sin θ2  cos θ2],
[T2 ∘ T1] = [cos(θ1 + θ2)  -sin(θ1 + θ2); sin(θ1 + θ2)  cos(θ1 + θ2)]

should satisfy (4). With the help of some basic trigonometric identities, we can confirm that this is so as follows:

[T_2][T_1] = [cos θ_2  −sin θ_2; sin θ_2  cos θ_2][cos θ_1  −sin θ_1; sin θ_1  cos θ_1]
           = [cos θ_2 cos θ_1 − sin θ_2 sin θ_1   −(cos θ_2 sin θ_1 + sin θ_2 cos θ_1);
              sin θ_2 cos θ_1 + cos θ_2 sin θ_1   −sin θ_2 sin θ_1 + cos θ_2 cos θ_1]
           = [cos(θ_1 + θ_2)  −sin(θ_1 + θ_2); sin(θ_1 + θ_2)  cos(θ_1 + θ_2)]
           = [T_2 ∘ T_1]

Remark: Using the notation R_θ for a rotation of R^2 about the origin through an angle θ, the computation in Example 2 shows that R_{θ_1} ∘ R_{θ_2} = R_{θ_1 + θ_2}. This makes sense since rotating a vector through an angle θ_2 and then rotating the resulting vector through an angle θ_1 is the same as rotating the original vector through the angle θ_1 + θ_2.

EXAMPLE 3  Composition of Two Reflections

Let T_1: R^2 → R^2 be the reflection about the y-axis, and let T_2: R^2 → R^2 be the reflection about the x-axis. In this case T_1 ∘ T_2 and T_2 ∘ T_1 are the same; both map every vector x = (x, y) into its negative −x = (−x, −y) (Figure 4.10.4):

(T_2 ∘ T_1)(x, y) = T_2(−x, y) = (−x, −y)
(T_1 ∘ T_2)(x, y) = T_1(x, −y) = (−x, −y)

The equality of T_2 ∘ T_1 and T_1 ∘ T_2 can also be deduced by showing that the standard matrices for T_1 and T_2 commute:

[T_2 ∘ T_1] = [T_2][T_1] = [1 0; 0 −1][−1 0; 0 1] = [−1 0; 0 −1]
[T_1 ∘ T_2] = [T_1][T_2] = [−1 0; 0 1][1 0; 0 −1] = [−1 0; 0 −1]

The operator T(x) = −x on R^2 or R^3 is called the reflection about the origin. As the foregoing computations show, the standard matrix for this operator on R^2 is

[T] = [−1 0; 0 −1]

[Figure 4.10.4  T_1 ∘ T_2 and T_2 ∘ T_1 both map (x, y) into (−x, −y).]

EXAMPLE 4  Composition of Three Transformations

Find the standard matrix for the operator T: R^3 → R^3 that first rotates a vector counterclockwise about the z-axis through an angle θ, then reflects the resulting vector about the yz-plane, and then projects that vector orthogonally onto the xy-plane.

Solution  The operator T can be expressed as the composition

T = T_3 ∘ T_2 ∘ T_1

where T_1 is the rotation about the z-axis, T_2 is the reflection about the yz-plane, and T_3 is the orthogonal projection onto the xy-plane. From Tables 6, 2, and 4 of Section 4.9, the standard matrices for these operators are

[T_1] = [cos θ  −sin θ  0; sin θ  cos θ  0; 0  0  1],   [T_2] = [−1 0 0; 0 1 0; 0 0 1],   [T_3] = [1 0 0; 0 1 0; 0 0 0]

Thus, it follows from (5) that the standard matrix for T is

[T] = [T_3][T_2][T_1] = [1 0 0; 0 1 0; 0 0 0][−1 0 0; 0 1 0; 0 0 1][cos θ  −sin θ  0; sin θ  cos θ  0; 0  0  1]
    = [−cos θ  sin θ  0; sin θ  cos θ  0; 0  0  0]

One-to-One Matrix Transformations

Our next objective is to establish a link between the invertibility of a matrix A and properties of the corresponding matrix transformation T_A.

DEFINITION  A matrix transformation T_A: R^n → R^m is said to be one-to-one if T_A maps distinct vectors (points) in R^n into distinct vectors (points) in R^m. (See Figure 4.10.5.)

This idea can be expressed in various ways. For example, you should be able to see that the following are just restatements of this definition:

1. T_A is one-to-one if for each vector b in the range of T_A there is exactly one vector x in R^n such that T_A(x) = b.
2. T_A is one-to-one if the equality T_A(u) = T_A(v) implies that u = v.

[Figure 4.10.5  A one-to-one mapping and a mapping that is not one-to-one from R^n to R^m.]

Rotation operators on R^2 are one-to-one since distinct vectors that are rotated through the same angle have distinct images (Figure 4.10.6). In contrast, the orthogonal projection of R^2 onto the x-axis is not one-to-one because it maps distinct points on the same vertical line into the same point (Figure 4.10.7).

[Figure 4.10.6  Distinct vectors u and v are rotated into distinct vectors T(u) and T(v).  Figure 4.10.7  The distinct points P and Q are mapped into the same point M.]
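The matrix arithmetic in Example 4 is easy to check numerically. The sketch below (NumPy, with variable names of our choosing) builds the three standard matrices for an arbitrary test angle, confirms that the product of the three factors matches the composite matrix computed in the example, and verifies that reversing the order of two of the factors changes the result:

```python
import numpy as np

theta = 0.7  # an arbitrary test angle (radians)

# T1: counterclockwise rotation about the z-axis through theta
T1 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
# T2: reflection about the yz-plane
T2 = np.diag([-1.0, 1.0, 1.0])
# T3: orthogonal projection onto the xy-plane
T3 = np.diag([1.0, 1.0, 0.0])

# Standard matrix of T = T3 ∘ T2 ∘ T1 is the product [T3][T2][T1]
T = T3 @ T2 @ T1

expected = np.array([[-np.cos(theta), np.sin(theta), 0.0],
                     [ np.sin(theta), np.cos(theta), 0.0],
                     [ 0.0,           0.0,           0.0]])
assert np.allclose(T, expected)

# Order matters: rotating then reflecting differs from reflecting then rotating
assert not np.allclose(T2 @ T1, T1 @ T2)
```

Swapping the rotation and the reflection flips the sign of the off-diagonal sine entries, which is the algebraic face of the warning above that composition is generally not commutative.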

Kernel and Range

In the discussion leading up to Theorem 4.2.5 we introduced the notion of the kernel of a matrix transformation. The following definition formalizes this idea and defines the companion notion of range.

DEFINITION  If T_A: R^n → R^m is a matrix transformation, then the set of all vectors in R^n that T_A maps into 0 is called the kernel of T_A and is denoted by ker(T_A). The set of all vectors in R^m that are images under this transformation of at least one vector in R^n is called the range of T_A and is denoted by R(T_A).

In brief:

ker(T_A) = null space of A   (6)
R(T_A) = column space of A   (7)

The key to solving a mathematical problem is often adopting the right point of view; and this is why, in linear algebra, we develop different ways of thinking about the same vector space. For example, if A is an m × n matrix, here are three ways of viewing the same subspace of R^n:

• Matrix view: the null space of A
• System view: the solution space of Ax = 0
• Transformation view: the kernel of T_A

and here are three ways of viewing the same subspace of R^m:

• Matrix view: the column space of A
• System view: all b in R^m for which Ax = b is consistent
• Transformation view: the range of T_A

In the special case of a linear operator T_A: R^n → R^n, the following theorem establishes fundamental relationships between the invertibility of A and properties of T_A.

THEOREM 4.10.1  If A is an n × n matrix and T_A: R^n → R^n is the corresponding matrix operator, then the following statements are equivalent.
(a) A is invertible.
(b) The kernel of T_A is {0}.
(c) The range of T_A is R^n.
(d) T_A is one-to-one.

Proof  We can prove this theorem by establishing the chain of implications (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a). We will prove the first two implications and leave the rest as exercises.

(a) ⇒ (b)  Assume that A is invertible. It follows from our earlier equivalence theorem for invertible matrices that the system Ax = 0 has only the trivial solution and hence that the null space of A is {0}.
Formula (6) now implies that the kernel of T_A is {0}.

(b) ⇒ (c)  Assume that the kernel of T_A is {0}. It follows from Formula (6) that the null space of A is {0} and hence that A has nullity 0. This in turn implies that the rank of A is n and hence that the column space of A is all of R^n. Formula (7) now implies that the range of T_A is R^n.
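The equivalences in Theorem 4.10.1 can be illustrated numerically. Below is a small NumPy sketch (the helper name operator_properties and the test matrices are ours) that computes invertibility, nullity, and rank for a square matrix and checks that the conditions stand or fall together:

```python
import numpy as np

def operator_properties(A):
    """Return (invertible, kernel_trivial, range_all_of_Rn) for the operator T_A(x) = Ax."""
    n = A.shape[0]
    r = np.linalg.matrix_rank(A)
    invertible = abs(np.linalg.det(A)) > 1e-12
    kernel_trivial = (n - r) == 0   # nullity 0  <=>  ker(T_A) = {0}
    range_all = (r == n)            # column space = R^n  <=>  R(T_A) = R^n
    return invertible, kernel_trivial, range_all

A = np.array([[2.0, 1.0], [3.0, 4.0]])   # an invertible matrix
P = np.array([[1.0, 0.0], [0.0, 0.0]])   # projection onto the x-axis, not invertible

assert operator_properties(A) == (True, True, True)
assert operator_properties(P) == (False, False, False)
```

For the projection P, the entire y-axis lies in the kernel and the range is only the x-axis, so all three properties fail at once, exactly as the theorem predicts.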

EXAMPLE 5  The Rotation Operator on R^2 Is One-to-One

As was illustrated in Figure 4.10.6, the operator T: R^2 → R^2 that rotates vectors through an angle θ is one-to-one. In accordance with parts (a) and (d) of Theorem 4.10.1, show that the standard matrix for T is invertible.

Solution  We will show that the standard matrix for T is invertible by showing that its determinant is nonzero. From Table 5 of Section 4.9 the standard matrix for T is

[T] = [cos θ  −sin θ; sin θ  cos θ]

This matrix is invertible because

det[T] = cos²θ + sin²θ = 1 ≠ 0

EXAMPLE 6  Projection Operators Are Not One-to-One

As illustrated in Figure 4.10.7, the operator T: R^2 → R^2 that projects onto the x-axis in the xy-plane is not one-to-one. In accordance with parts (a) and (d) of Theorem 4.10.1, show that the standard matrix for T is not invertible.

Solution  We will show that the standard matrix for T is not invertible by showing that its determinant is zero. From Table 3 of Section 4.9 the standard matrix for T is

[T] = [1 0; 0 0]

Since det[T] = 0, the operator T is not one-to-one.

Inverse of a One-to-One Matrix Operator

If T_A: R^n → R^n is a one-to-one matrix operator, then it follows from Theorem 4.10.1 that A is invertible. The matrix operator

T_{A⁻¹}: R^n → R^n

that corresponds to A⁻¹ is called the inverse operator or (more simply) the inverse of T_A. This terminology is appropriate because T_A and T_{A⁻¹} cancel the effect of each other in the sense that if x is any vector in R^n, then

T_A(T_{A⁻¹}(x)) = AA⁻¹x = Ix = x
T_{A⁻¹}(T_A(x)) = A⁻¹Ax = Ix = x

or, equivalently,

T_A ∘ T_{A⁻¹} = T_{AA⁻¹} = T_I
T_{A⁻¹} ∘ T_A = T_{A⁻¹A} = T_I

From a more geometric viewpoint, if w is the image of x under T_A, then T_{A⁻¹} maps w back into x, since

T_{A⁻¹}(w) = T_{A⁻¹}(T_A(x)) = x

This is illustrated in Figure 4.10.8 for R^2.

[Figure 4.10.8  T_A maps x to w; T_{A⁻¹} maps w back to x.]
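The cancellation equations above are easy to verify numerically for a rotation operator. In this NumPy sketch (the angle and test vector are arbitrary choices of ours), inverting the standard matrix of a rotation through θ yields the rotation through −θ, and applying the inverse to the image w recovers the original vector x:

```python
import numpy as np

theta = 0.4
# Standard matrix of the rotation through theta
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

R_inv = np.linalg.inv(R)

# The inverse of a rotation through theta is the rotation through -theta
R_minus = np.array([[np.cos(-theta), -np.sin(-theta)],
                    [np.sin(-theta),  np.cos(-theta)]])
assert np.allclose(R_inv, R_minus)

# T_{A^{-1}} undoes T_A: mapping x to w = Rx and back recovers x
x = np.array([3.0, -1.0])
w = R @ x
assert np.allclose(R_inv @ w, x)
```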

Before considering examples, it will be helpful to touch on some notational matters. If T_A: R^n → R^n is a one-to-one matrix operator, and if T_{A⁻¹}: R^n → R^n is its inverse, then the standard matrices for these operators are related by the equation

T_{A⁻¹} = T_A⁻¹   (8)

In cases where it is preferable not to assign a name to the matrix, we can express this equation as

[T⁻¹] = [T]⁻¹   (9)

EXAMPLE 7  Standard Matrix for T⁻¹

Let T: R^2 → R^2 be the operator that rotates each vector in R^2 through the angle θ, so from Table 5 of Section 4.9,

[T] = [cos θ  −sin θ; sin θ  cos θ]   (10)

It is evident geometrically that to undo the effect of T, one must rotate each vector in R^2 through the angle −θ. But this is exactly what the operator T⁻¹ does, since the standard matrix for T⁻¹ is

[T⁻¹] = [T]⁻¹ = [cos θ  sin θ; −sin θ  cos θ] = [cos(−θ)  −sin(−θ); sin(−θ)  cos(−θ)]

(verify), which is the standard matrix for a rotation through the angle −θ.

EXAMPLE 8  Finding T⁻¹

Show that the operator T: R^2 → R^2 defined by the equations

w_1 = 2x_1 + x_2
w_2 = 3x_1 + 4x_2

is one-to-one, and find T⁻¹(w_1, w_2).

Solution  The matrix form of these equations is

[w_1; w_2] = [2 1; 3 4][x_1; x_2]

so the standard matrix for T is

[T] = [2 1; 3 4]

This matrix is invertible (so T is one-to-one) and the standard matrix for T⁻¹ is

[T⁻¹] = [T]⁻¹ = [4/5  −1/5; −3/5  2/5]

Thus

[T⁻¹][w_1; w_2] = [4/5  −1/5; −3/5  2/5][w_1; w_2] = [(4/5)w_1 − (1/5)w_2; −(3/5)w_1 + (2/5)w_2]

from which we conclude that

T⁻¹(w_1, w_2) = ((4/5)w_1 − (1/5)w_2, −(3/5)w_1 + (2/5)w_2)
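The computation in Example 8 can be replayed with NumPy. Assuming the operator w_1 = 2x_1 + x_2, w_2 = 3x_1 + 4x_2 from the example, the sketch below confirms that det[T] = 5, that the inverse has the entries with denominator 5 found above, and that the inverse operator undoes T:

```python
import numpy as np

# Standard matrix of T(x1, x2) = (2*x1 + x2, 3*x1 + 4*x2), as in Example 8
T = np.array([[2.0, 1.0],
              [3.0, 4.0]])

# det(T) = 5 != 0, so T is one-to-one and the inverse operator exists
assert np.isclose(np.linalg.det(T), 5.0)

T_inv = np.linalg.inv(T)
# [T]^{-1} = [4/5 -1/5; -3/5 2/5]
assert np.allclose(T_inv, np.array([[ 0.8, -0.2],
                                    [-0.6,  0.4]]))

# T^{-1}(w1, w2) = (4w1/5 - w2/5, -3w1/5 + 2w2/5); check it undoes T
x = np.array([1.0, 2.0])
w = T @ x
assert np.allclose(T_inv @ w, x)
```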

More on the Equivalence Theorem

As our final result in this section, we will add parts (b), (c), and (d) of Theorem 4.10.1 to our running list of statements that are equivalent to the invertibility of a square matrix.

THEOREM 4.10.2  Equivalent Statements

If A is an n × n matrix, then the following statements are equivalent.
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is I_n.
(d) A is expressible as a product of elementary matrices.
(e) Ax = b is consistent for every n × 1 matrix b.
(f) Ax = b has exactly one solution for every n × 1 matrix b.
(g) det(A) ≠ 0.
(h) The column vectors of A are linearly independent.
(i) The row vectors of A are linearly independent.
(j) The column vectors of A span R^n.
(k) The row vectors of A span R^n.
(l) The column vectors of A form a basis for R^n.
(m) The row vectors of A form a basis for R^n.
(n) A has rank n.
(o) A has nullity 0.
(p) The orthogonal complement of the null space of A is R^n.
(q) The orthogonal complement of the row space of A is {0}.
(r) The kernel of T_A is {0}.
(s) The range of T_A is R^n.
(t) T_A is one-to-one.

Exercise Set 4.10

In Exercises 1–4, determine whether the operators T_1 and T_2 commute; that is, whether T_1 ∘ T_2 = T_2 ∘ T_1.

1. (a) T_1: R^2 → R^2 is the reflection about the line y = x, and T_2: R^2 → R^2 is the orthogonal projection onto the x-axis.
(b) T_1: R^2 → R^2 is the reflection about the x-axis, and T_2: R^2 → R^2 is the reflection about the line y = x.
2. (a) T_1: R^2 → R^2 is the orthogonal projection onto the x-axis, and T_2: R^2 → R^2 is the orthogonal projection onto the y-axis.
(b) T_1: R^2 → R^2 is the rotation about the origin through an angle of π/4, and T_2: R^2 → R^2 is the reflection about the y-axis.
3. T_1: R^3 → R^3 is a dilation with factor k, and T_2: R^3 → R^3 is a contraction with factor 1/k.
4. T_1: R^3 → R^3 is the rotation about the x-axis through an angle θ_1, and T_2: R^3 → R^3 is the rotation about the z-axis through an angle θ_2.

In Exercises 5–6, let T_A and T_B be the operators whose standard matrices are given.
Find the standard matrices for T_B ∘ T_A and T_A ∘ T_B.

5. A = …,  B = …
6. A = …,  B = …

7. Find the standard matrix for the stated composition in R^2.
(a) A rotation of 90°, followed by a reflection about the line y = x.
(b) An orthogonal projection onto the y-axis, followed by a contraction with factor k = 1/2.
(c) A reflection about the x-axis, followed by a dilation with factor k = 3, followed by a rotation about the origin of 60°.

8. Find the standard matrix for the stated composition in R^2.
(a) A rotation about the origin of 60°, followed by an orthogonal projection onto the x-axis, followed by a reflection about the line y = x.
(b) A dilation with factor k = …, followed by a rotation about the origin of 45°, followed by a reflection about the y-axis.
(c) A rotation about the origin of 15°, followed by a rotation about the origin of 105°, followed by a rotation about the origin of …°.

9. Find the standard matrix for the stated composition in R^3.
(a) A reflection about the xz-plane, followed by an orthogonal projection onto the xz-plane.
(b) A rotation of 45° about the y-axis, followed by a dilation with factor k = ….
(c) An orthogonal projection onto the xy-plane, followed by a reflection about the xz-plane.

10. Find the standard matrix for the stated composition in R^3.
(a) A rotation of 30° about the x-axis, followed by a rotation of 30° about the z-axis, followed by a contraction with factor k = 1/4.
(b) A reflection about the xy-plane, followed by a reflection about the xz-plane, followed by an orthogonal projection onto the xz-plane.
(c) A rotation of 270° about the x-axis, followed by a rotation of 90° about the y-axis, followed by a rotation of 180° about the z-axis.

11. Let T_1(x_1, x_2) = … and T_2(x_1, x_2) = ….
(a) Find the standard matrices for T_1 and T_2.
(b) Find the standard matrices for T_2 ∘ T_1 and T_1 ∘ T_2.
(c) Use the matrices obtained in part (b) to find formulas for T_1(T_2(x_1, x_2)) and T_2(T_1(x_1, x_2)).

12. Let T_1(x_1, x_2, x_3) = … and T_2(x_1, x_2, x_3) = ….
(a) Find the standard matrices for T_1 and T_2.
(b) Find the standard matrices for T_2 ∘ T_1 and T_1 ∘ T_2.
(c) Use the matrices obtained in part (b) to find formulas for T_1(T_2(x_1, x_2, x_3)) and T_2(T_1(x_1, x_2, x_3)).

In Exercises 13–14, determine by inspection whether the stated matrix operator is one-to-one.

13. (a) The orthogonal projection onto the x-axis in R^2.
(b) The reflection about the y-axis in R^2.
(c) The reflection about the line y = x in R^2.
(d) A contraction with factor k > 0 in R^2.
14.
(a) A rotation about the z-axis in R^3.
(b) A reflection about the xy-plane in R^3.
(c) A dilation with factor k > 0 in R^3.
(d) An orthogonal projection onto the xz-plane in R^3.

In Exercises 15–16, describe in words the inverse of the given one-to-one operator.

15. (a) The reflection about the x-axis on R^2.
(b) The rotation about the origin through an angle of π/4 on R^2.
(c) The dilation with factor of 3 on R^2.
16. (a) The reflection about the xz-plane in R^3.
(b) The contraction with factor 1/5 in R^3.
(c) The rotation through an angle of 180° about the z-axis in R^3.

In Exercises 17–18, express the equations in matrix form, and then use parts (g) and (s) of Theorem 4.10.2 to determine whether the operator defined by the equations is one-to-one.

17. (a) w_1 = …, w_2 = …
(b) w_1 = …, w_2 = …, w_3 = …
18. (a) w_1 = …, w_2 = …
(b) w_1 = …, w_2 = …, w_3 = …

19. Determine whether the matrix operator T: R^2 → R^2 defined by the equations is one-to-one; if so, find the standard matrix for the inverse operator, and find T⁻¹(w_1, w_2).
(a) w_1 = …, w_2 = …
(b) w_1 = 4x_1 − 6x_2, w_2 = −2x_1 + 3x_2

20. Determine whether the matrix operator T: R^3 → R^3 defined by the equations is one-to-one; if so, find the standard matrix for the inverse operator, and find T⁻¹(w_1, w_2, w_3).
(a) w_1 = …, w_2 = …, w_3 = …
(b) w_1 = …, w_2 = …, w_3 = …

In Exercises 21–22, determine whether multiplication by A is a one-to-one matrix transformation.

21. (a) A = …  (b) A = …
22. (a) A = …  (b) A = …

In Exercises 23–24, let T be multiplication by the matrix A. Find
(a) a basis for the range of T.
(b) a basis for the kernel of T.
(c) the rank and nullity of T.
(d) the rank and nullity of A.

23. A = …
24. A = …

In Exercises 25–26, let T_A: R^4 → R^3 be multiplication by A. Find a basis for the kernel of T_A, and then find a basis for the range of T_A that consists of column vectors of A.

25. A = …
26. A = …

27. Let A be an n × n matrix such that det(A) = 0, and let T: R^n → R^n be multiplication by A.
(a) What can you say about the range of the matrix operator T? Give an example that illustrates your conclusion.
(b) What can you say about the number of vectors that T maps into 0?

28. Answer the questions in Exercise 27 in the case where det(A) ≠ 0.

29. (a) Is a composition of one-to-one matrix transformations one-to-one? Justify your conclusion.
(b) Can the composition of a one-to-one matrix transformation and a matrix transformation that is not one-to-one be one-to-one? Account for both possible orders of composition and justify your conclusion.

30. Let T_A: R^2 → R^2 be multiplication by

A = [cos²θ − sin²θ   −2 sin θ cos θ; 2 sin θ cos θ   cos²θ − sin²θ]

(a) What is the geometric effect of applying this transformation to a vector x in R^2?
(b) Express the operator T_A as a composition of two linear operators on R^2.

In Exercises 31–32, use matrix inversion to confirm the stated result in R^2.

31. (a) The inverse transformation for a reflection about y = x is a reflection about y = x.
(b) The inverse transformation for a compression along an axis is an expansion along that axis.
32. (a) The inverse transformation for a reflection about a coordinate axis is a reflection about that axis.
(b) The inverse transformation for a shear along a coordinate axis is a shear along that axis.

Working with Proofs

33. Prove that the matrix transformations T_A and T_B commute if and only if the matrices A and B commute.
34. Prove the implication (c) ⇒ (d) in Theorem 4.10.1.
35. Prove the implication (d) ⇒ (a) in Theorem 4.10.1.

True-False Exercises

TF.
In parts (a)–(g) determine whether the statement is true or false, and justify your answer.

(a) If T_A and T_B are matrix operators on R^n, then T_A(T_B(x)) = T_B(T_A(x)) for every vector x in R^n.
(b) If T_1 and T_2 are matrix operators on R^n, then [T_2 ∘ T_1] = [T_2][T_1].
(c) A composition of two rotation operators about the origin of R^2 is another rotation about the origin.
(d) A composition of two reflection operators in R^2 is another reflection operator.
(e) The kernel of a matrix transformation T_A: R^n → R^m is the same as the null space of A.
(f) If there is a nonzero vector in the kernel of the matrix operator T_A: R^n → R^n, then this operator is not one-to-one.
(g) If A is an n × n matrix and if the linear system Ax = 0 has a nontrivial solution, then the range of the matrix operator T_A is not R^n.

Working with Technology

T1. (a) Find the standard matrix for the linear operator on R^3 that performs a counterclockwise rotation of 47° about the x-axis, followed by a counterclockwise rotation of 68° about the y-axis, followed by a counterclockwise rotation of 33° about the z-axis.
(b) Find the image of the point (…, …, …) under the operator in part (a).

T2. Find the standard matrix for the linear operator on R^2 that first reflects each point in the plane about the line through the origin that makes an angle of …° with the positive x-axis and then projects the resulting point orthogonally onto the line through the origin that makes an angle of …° with the positive x-axis.

4.11 Geometry of Matrix Operators on R^2

In applications such as computer graphics it is important to understand not only how linear operators on R^2 and R^3 affect individual vectors but also how they affect two-dimensional or three-dimensional regions. That is the focus of this section.

Transformations of Regions

Figure 4.11.1 shows a famous picture of Albert Einstein that has been transformed in various ways using matrix operators on R^2. The original image was scanned and then digitized to decompose it into a rectangular array of pixels. Those pixels were then transformed as follows:

• The program MATLAB was used to assign coordinates and a gray level to each pixel.
• The coordinates of the pixels were transformed by matrix multiplication.
• The pixels were then assigned their original gray levels to produce the transformed picture.

In computer games a perception of motion is created by using matrices to rapidly and repeatedly transform the arrays of pixels that form the visual images.

[Figure 4.11.1  Digitized scan; Rotated; Sheared horizontally; Compressed horizontally. Image: ARTHUR SASSE/AFP/Getty Images]

Images of Lines Under Matrix Operators

The effect of a matrix operator on R^2 can often be deduced by studying how it transforms the points that form the unit square. The following theorem, which we state without proof, shows that if the operator is invertible, then it maps each line segment in the unit square into the line segment connecting the images of its endpoints. In particular, the edges of the unit square get mapped into edges of the image (see Figure 4.11.2, in which the edges of a unit square and the corresponding edges of its image have been numbered).

[Figure 4.11.2  Unit square; unit square rotated; unit square reflected about the y-axis; unit square reflected about the line y = x.]
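The unit-square pictures in Figure 4.11.2 can be reproduced numerically: store the square's corners as the columns of a matrix and left-multiply by the operator's standard matrix, so every corner is transformed at once. A NumPy sketch (the operators chosen here, a 90° rotation and a reflection about y = x, are ours for illustration):

```python
import numpy as np

# Corners of the unit square as columns: (0,0), (1,0), (1,1), (0,1)
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])

# Rotation through 90 degrees counterclockwise
R90 = np.array([[0.0, -1.0],
                [1.0,  0.0]])
# Reflection about the line y = x
refl = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

rotated = R90 @ square     # each column is the image of a corner
reflected = refl @ square

# The corner (1, 1) rotates to (-1, 1) and reflects to (1, 1)
assert np.allclose(rotated[:, 2], [-1.0, 1.0])
assert np.allclose(reflected[:, 2], [1.0, 1.0])
```

Because these operators are invertible, the image of each edge is the segment joining the images of its endpoints, so plotting the transformed columns in order traces the image of the square.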


MATRIX TRANSFORMATIONS CHAPTER 5. MATRIX TRANSFORMATIONS INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRIX TRANSFORMATIONS Matri Transformations Definition Let A and B be sets. A function f : A B

More information

Answer Explanations. The SAT Subject Tests. Mathematics Level 1 & 2 TO PRACTICE QUESTIONS FROM THE SAT SUBJECT TESTS STUDENT GUIDE

Answer Explanations. The SAT Subject Tests. Mathematics Level 1 & 2 TO PRACTICE QUESTIONS FROM THE SAT SUBJECT TESTS STUDENT GUIDE The SAT Subject Tests Answer Eplanations TO PRACTICE QUESTIONS FROM THE SAT SUBJECT TESTS STUDENT GUIDE Mathematics Level & Visit sat.org/stpractice to get more practice and stud tips for the Subject Test

More information

17. C M 2 (C), the set of all 2 2 matrices with complex entries. 19. Is C 3 a real vector space? Explain.

17. C M 2 (C), the set of all 2 2 matrices with complex entries. 19. Is C 3 a real vector space? Explain. 250 CHAPTER 4 Vector Spaces 14. On R 2, define the operation of addition by (x 1,y 1 ) + (x 2,y 2 ) = (x 1 x 2,y 1 y 2 ). Do axioms A5 and A6 in the definition of a vector space hold? Justify your answer.

More information

0.1. Linear transformations

0.1. Linear transformations Suggestions for midterm review #3 The repetitoria are usually not complete; I am merely bringing up the points that many people didn t now on the recitations Linear transformations The following mostly

More information

Linear Algebra (Math-324) Lecture Notes

Linear Algebra (Math-324) Lecture Notes Linear Algebra (Math-324) Lecture Notes Dr. Ali Koam and Dr. Azeem Haider September 24, 2017 c 2017,, Jazan All Rights Reserved 1 Contents 1 Real Vector Spaces 6 2 Subspaces 11 3 Linear Combination and

More information

Roberto s Notes on Integral Calculus Chapter 3: Basics of differential equations Section 3. Separable ODE s

Roberto s Notes on Integral Calculus Chapter 3: Basics of differential equations Section 3. Separable ODE s Roberto s Notes on Integral Calculus Chapter 3: Basics of differential equations Section 3 Separable ODE s What ou need to know alread: What an ODE is and how to solve an eponential ODE. What ou can learn

More information

Introduction to Differential Equations

Introduction to Differential Equations Introduction to Differential Equations. Definitions and Terminolog.2 Initial-Value Problems.3 Differential Equations as Mathematical Models Chapter in Review The words differential and equations certainl

More information

4Cubic. polynomials UNCORRECTED PAGE PROOFS

4Cubic. polynomials UNCORRECTED PAGE PROOFS 4Cubic polnomials 4.1 Kick off with CAS 4. Polnomials 4.3 The remainder and factor theorems 4.4 Graphs of cubic polnomials 4.5 Equations of cubic polnomials 4.6 Cubic models and applications 4.7 Review

More information

Course 15 Numbers and Their Properties

Course 15 Numbers and Their Properties Course Numbers and Their Properties KEY Module: Objective: Rules for Eponents and Radicals To practice appling rules for eponents when the eponents are rational numbers Name: Date: Fill in the blanks.

More information

EXERCISES FOR SECTION 3.1

EXERCISES FOR SECTION 3.1 174 CHAPTER 3 LINEAR SYSTEMS EXERCISES FOR SECTION 31 1 Since a > 0, Paul s making a pro t > 0 has a bene cial effect on Paul s pro ts in the future because the a term makes a positive contribution to

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

4.7. Newton s Method. Procedure for Newton s Method HISTORICAL BIOGRAPHY

4.7. Newton s Method. Procedure for Newton s Method HISTORICAL BIOGRAPHY 4. Newton s Method 99 4. Newton s Method HISTORICAL BIOGRAPHY Niels Henrik Abel (18 189) One of the basic problems of mathematics is solving equations. Using the quadratic root formula, we know how to

More information

Additional Topics in Differential Equations

Additional Topics in Differential Equations 6 Additional Topics in Differential Equations 6. Eact First-Order Equations 6. Second-Order Homogeneous Linear Equations 6.3 Second-Order Nonhomogeneous Linear Equations 6.4 Series Solutions of Differential

More information

Ordinary Differential Equations

Ordinary Differential Equations 58229_CH0_00_03.indd Page 6/6/6 2:48 PM F-007 /202/JB0027/work/indd & Bartlett Learning LLC, an Ascend Learning Compan.. PART Ordinar Differential Equations. Introduction to Differential Equations 2. First-Order

More information

INTRODUCTION TO DIFFERENTIAL EQUATIONS

INTRODUCTION TO DIFFERENTIAL EQUATIONS INTRODUCTION TO DIFFERENTIAL EQUATIONS. Definitions and Terminolog. Initial-Value Problems.3 Differential Equations as Mathematical Models CHAPTER IN REVIEW The words differential and equations certainl

More information

MA123, Chapter 1: Equations, functions and graphs (pp. 1-15)

MA123, Chapter 1: Equations, functions and graphs (pp. 1-15) MA123, Chapter 1: Equations, functions and graphs (pp. 1-15) Date: Chapter Goals: Identif solutions to an equation. Solve an equation for one variable in terms of another. What is a function? Understand

More information

Determinants. We said in Section 3.3 that a 2 2 matrix a b. Determinant of an n n Matrix

Determinants. We said in Section 3.3 that a 2 2 matrix a b. Determinant of an n n Matrix 3.6 Determinants We said in Section 3.3 that a 2 2 matri a b is invertible if and onl if its c d erminant, ad bc, is nonzero, and we saw the erminant used in the formula for the inverse of a 2 2 matri.

More information

ES.1803 Topic 16 Notes Jeremy Orloff

ES.1803 Topic 16 Notes Jeremy Orloff ES803 Topic 6 Notes Jerem Orloff 6 Eigenalues, diagonalization, decoupling This note coers topics that will take us seeral classes to get through We will look almost eclusiel at 2 2 matrices These hae

More information

Review Topics for MATH 1400 Elements of Calculus Table of Contents

Review Topics for MATH 1400 Elements of Calculus Table of Contents Math 1400 - Mano Table of Contents - Review - page 1 of 2 Review Topics for MATH 1400 Elements of Calculus Table of Contents MATH 1400 Elements of Calculus is one of the Marquette Core Courses for Mathematical

More information

TABLE OF CONTENTS - UNIT 1 CHARACTERISTICS OF FUNCTIONS

TABLE OF CONTENTS - UNIT 1 CHARACTERISTICS OF FUNCTIONS TABLE OF CONTENTS - UNIT CHARACTERISTICS OF FUNCTIONS TABLE OF CONTENTS - UNIT CHARACTERISTICS OF FUNCTIONS INTRODUCTION TO FUNCTIONS RELATIONS AND FUNCTIONS EXAMPLES OF FUNCTIONS 4 VIEWING RELATIONS AND

More information

Ch 3 Alg 2 Note Sheet.doc 3.1 Graphing Systems of Equations

Ch 3 Alg 2 Note Sheet.doc 3.1 Graphing Systems of Equations Ch 3 Alg Note Sheet.doc 3.1 Graphing Sstems of Equations Sstems of Linear Equations A sstem of equations is a set of two or more equations that use the same variables. If the graph of each equation =.4

More information

Mathematics 309 Conic sections and their applicationsn. Chapter 2. Quadric figures. ai,j x i x j + b i x i + c =0. 1. Coordinate changes

Mathematics 309 Conic sections and their applicationsn. Chapter 2. Quadric figures. ai,j x i x j + b i x i + c =0. 1. Coordinate changes Mathematics 309 Conic sections and their applicationsn Chapter 2. Quadric figures In this chapter want to outline quickl how to decide what figure associated in 2D and 3D to quadratic equations look like.

More information

Gauss and Gauss Jordan Elimination

Gauss and Gauss Jordan Elimination Gauss and Gauss Jordan Elimination Row-echelon form: (,, ) A matri is said to be in row echelon form if it has the following three properties. () All row consisting entirel of zeros occur at the bottom

More information

Unit 12 Study Notes 1 Systems of Equations

Unit 12 Study Notes 1 Systems of Equations You should learn to: Unit Stud Notes Sstems of Equations. Solve sstems of equations b substitution.. Solve sstems of equations b graphing (calculator). 3. Solve sstems of equations b elimination. 4. Solve

More information

A Tutorial on Euler Angles and Quaternions

A Tutorial on Euler Angles and Quaternions A Tutorial on Euler Angles and Quaternions Moti Ben-Ari Department of Science Teaching Weimann Institute of Science http://www.weimann.ac.il/sci-tea/benari/ Version.0.1 c 01 17 b Moti Ben-Ari. This work

More information

D u f f x h f y k. Applying this theorem a second time, we have. f xx h f yx k h f xy h f yy k k. f xx h 2 2 f xy hk f yy k 2

D u f f x h f y k. Applying this theorem a second time, we have. f xx h f yx k h f xy h f yy k k. f xx h 2 2 f xy hk f yy k 2 93 CHAPTER 4 PARTIAL DERIVATIVES We close this section b giving a proof of the first part of the Second Derivatives Test. Part (b) has a similar proof. PROOF OF THEOREM 3, PART (A) We compute the second-order

More information

Analytic Trigonometry

Analytic Trigonometry CHAPTER 5 Analtic Trigonometr 5. Fundamental Identities 5. Proving Trigonometric Identities 5.3 Sum and Difference Identities 5.4 Multiple-Angle Identities 5.5 The Law of Sines 5.6 The Law of Cosines It

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Additional Material On Recursive Sequences

Additional Material On Recursive Sequences Penn State Altoona MATH 141 Additional Material On Recursive Sequences 1. Graphical Analsis Cobweb Diagrams Consider a generic recursive sequence { an+1 = f(a n ), n = 1,, 3,..., = Given initial value.

More information

Handout #3 SUBSPACE OF A VECTOR SPACE Professor Moseley

Handout #3 SUBSPACE OF A VECTOR SPACE Professor Moseley Handout #3 SUBSPACE OF A VECTOR SPACE Professor Mosele An important concept in abstract linear algebra is that of a subspace. After we have established a number of important eamples of vector spaces, we

More information

LESSON #48 - INTEGER EXPONENTS COMMON CORE ALGEBRA II

LESSON #48 - INTEGER EXPONENTS COMMON CORE ALGEBRA II LESSON #8 - INTEGER EXPONENTS COMMON CORE ALGEBRA II We just finished our review of linear functions. Linear functions are those that grow b equal differences for equal intervals. In this unit we will

More information

Elementary Linear Algebra

Elementary Linear Algebra Elementary Linear Algebra Anton & Rorres, 10 th Edition Lecture Set 05 Chapter 4: General Vector Spaces 1006003 คณ ตศาสตร ว ศวกรรม 3 สาขาว ชาว ศวกรรมคอมพ วเตอร ป การศ กษา 1/2554 1006003 คณตศาสตรวศวกรรม

More information

Section 1.5 Formal definitions of limits

Section 1.5 Formal definitions of limits Section.5 Formal definitions of limits (3/908) Overview: The definitions of the various tpes of limits in previous sections involve phrases such as arbitraril close, sufficientl close, arbitraril large,

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors CHAPTER Eigenvalues and Eigenvectors CHAPTER CONTENTS. Eigenvalues and Eigenvectors 9. Diagonalization. Complex Vector Spaces.4 Differential Equations 6. Dynamical Systems and Markov Chains INTRODUCTION

More information

2.1 Rates of Change and Limits AP Calculus

2.1 Rates of Change and Limits AP Calculus . Rates of Change and Limits AP Calculus. RATES OF CHANGE AND LIMITS Limits Limits are what separate Calculus from pre calculus. Using a it is also the foundational principle behind the two most important

More information

m x n matrix with m rows and n columns is called an array of m.n real numbers

m x n matrix with m rows and n columns is called an array of m.n real numbers LINEAR ALGEBRA Matrices Linear Algebra Definitions m n matri with m rows and n columns is called an arra of mn real numbers The entr a a an A = a a an = ( a ij ) am am amn a ij denotes the element in the

More information

0.24 adults 2. (c) Prove that, regardless of the possible values of and, the covariance between X and Y is equal to zero. Show all work.

0.24 adults 2. (c) Prove that, regardless of the possible values of and, the covariance between X and Y is equal to zero. Show all work. 1 A socioeconomic stud analzes two discrete random variables in a certain population of households = number of adult residents and = number of child residents It is found that their joint probabilit mass

More information

Week 3 September 5-7.

Week 3 September 5-7. MA322 Weekl topics and quiz preparations Week 3 September 5-7. Topics These are alread partl covered in lectures. We collect the details for convenience.. Solutions of homogeneous equations AX =. 2. Using

More information

Functions. Introduction

Functions. Introduction Functions,00 P,000 00 0 70 7 80 8 0 000 00 00 Figure Standard and Poor s Inde with dividends reinvested (credit "bull": modification of work b Praitno Hadinata; credit "graph": modification of work b MeasuringWorth)

More information

10.5 Graphs of the Trigonometric Functions

10.5 Graphs of the Trigonometric Functions 790 Foundations of Trigonometr 0.5 Graphs of the Trigonometric Functions In this section, we return to our discussion of the circular (trigonometric functions as functions of real numbers and pick up where

More information

Cubic and quartic functions

Cubic and quartic functions 3 Cubic and quartic functions 3A Epanding 3B Long division of polnomials 3C Polnomial values 3D The remainder and factor theorems 3E Factorising polnomials 3F Sum and difference of two cubes 3G Solving

More information

Glossary. Also available at BigIdeasMath.com: multi-language glossary vocabulary flash cards. An equation that contains an absolute value expression

Glossary. Also available at BigIdeasMath.com: multi-language glossary vocabulary flash cards. An equation that contains an absolute value expression Glossar This student friendl glossar is designed to be a reference for ke vocabular, properties, and mathematical terms. Several of the entries include a short eample to aid our understanding of important

More information

Chapter Contents. A 1.6 Further Results on Systems of Equations and Invertibility 1.7 Diagonal, Triangular, and Symmetric Matrices

Chapter Contents. A 1.6 Further Results on Systems of Equations and Invertibility 1.7 Diagonal, Triangular, and Symmetric Matrices Chapter Contents. Introduction to System of Linear Equations. Gaussian Elimination.3 Matrices and Matri Operations.4 Inverses; Rules of Matri Arithmetic.5 Elementary Matrices and a Method for Finding A.6

More information

Lecture 5. Equations of Lines and Planes. Dan Nichols MATH 233, Spring 2018 University of Massachusetts.

Lecture 5. Equations of Lines and Planes. Dan Nichols MATH 233, Spring 2018 University of Massachusetts. Lecture 5 Equations of Lines and Planes Dan Nichols nichols@math.umass.edu MATH 233, Spring 2018 Universit of Massachusetts Februar 6, 2018 (2) Upcoming midterm eam First midterm: Wednesda Feb. 21, 7:00-9:00

More information

MATH Line integrals III Fall The fundamental theorem of line integrals. In general C

MATH Line integrals III Fall The fundamental theorem of line integrals. In general C MATH 255 Line integrals III Fall 216 In general 1. The fundamental theorem of line integrals v T ds depends on the curve between the starting point and the ending point. onsider two was to get from (1,

More information

LESSON #1 - BASIC ALGEBRAIC PROPERTIES COMMON CORE ALGEBRA II

LESSON #1 - BASIC ALGEBRAIC PROPERTIES COMMON CORE ALGEBRA II 1 LESSON #1 - BASIC ALGEBRAIC PROPERTIES COMMON CORE ALGEBRA II Mathematics has developed a language all to itself in order to clarif concepts and remove ambiguit from the analsis of problems. To achieve

More information

VECTORS IN THREE DIMENSIONS

VECTORS IN THREE DIMENSIONS 1 CHAPTER 2. BASIC TRIGONOMETRY 1 INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW VECTORS IN THREE DIMENSIONS 1 Vectors in Two Dimensions A vector is an object which has magnitude

More information

LESSON #11 - FORMS OF A LINE COMMON CORE ALGEBRA II

LESSON #11 - FORMS OF A LINE COMMON CORE ALGEBRA II LESSON # - FORMS OF A LINE COMMON CORE ALGEBRA II Linear functions come in a variet of forms. The two shown below have been introduced in Common Core Algebra I and Common Core Geometr. TWO COMMON FORMS

More information

6 = 1 2. The right endpoints of the subintervals are then 2 5, 3, 7 2, 4, 2 9, 5, while the left endpoints are 2, 5 2, 3, 7 2, 4, 9 2.

6 = 1 2. The right endpoints of the subintervals are then 2 5, 3, 7 2, 4, 2 9, 5, while the left endpoints are 2, 5 2, 3, 7 2, 4, 9 2. 5 THE ITEGRAL 5. Approimating and Computing Area Preliminar Questions. What are the right and left endpoints if [, 5] is divided into si subintervals? If the interval [, 5] is divided into si subintervals,

More information

Review of Prerequisite Skills, p. 350 C( 2, 0, 1) B( 3, 2, 0) y A(0, 1, 0) D(0, 2, 3) j! k! 2k! Section 7.1, pp

Review of Prerequisite Skills, p. 350 C( 2, 0, 1) B( 3, 2, 0) y A(0, 1, 0) D(0, 2, 3) j! k! 2k! Section 7.1, pp . 5. a. a a b a a b. Case If and are collinear, then b is also collinear with both and. But is perpendicular to and c c c b 9 b c, so a a b b is perpendicular to. Case If b and c b c are not collinear,

More information

Systems of Linear and Quadratic Equations. Check Skills You ll Need. y x. Solve by Graphing. Solve the following system by graphing.

Systems of Linear and Quadratic Equations. Check Skills You ll Need. y x. Solve by Graphing. Solve the following system by graphing. NY- Learning Standards for Mathematics A.A. Solve a sstem of one linear and one quadratic equation in two variables, where onl factoring is required. A.G.9 Solve sstems of linear and quadratic equations

More information

MAT 1275: Introduction to Mathematical Analysis. Graphs and Simplest Equations for Basic Trigonometric Functions. y=sin( x) Function

MAT 1275: Introduction to Mathematical Analysis. Graphs and Simplest Equations for Basic Trigonometric Functions. y=sin( x) Function MAT 275: Introduction to Mathematical Analsis Dr. A. Rozenblum Graphs and Simplest Equations for Basic Trigonometric Functions We consider here three basic functions: sine, cosine and tangent. For them,

More information

Mathematics of Cryptography Part I

Mathematics of Cryptography Part I CHAPTER 2 Mathematics of Crptograph Part I (Solution to Practice Set) Review Questions 1. The set of integers is Z. It contains all integral numbers from negative infinit to positive infinit. The set of

More information

Chapter 5: Systems of Equations

Chapter 5: Systems of Equations Chapter : Sstems of Equations Section.: Sstems in Two Variables... 0 Section. Eercises... 9 Section.: Sstems in Three Variables... Section. Eercises... Section.: Linear Inequalities... Section.: Eercises.

More information

SEPARABLE EQUATIONS 2.2

SEPARABLE EQUATIONS 2.2 46 CHAPTER FIRST-ORDER DIFFERENTIAL EQUATIONS 4. Chemical Reactions When certain kinds of chemicals are combined, the rate at which the new compound is formed is modeled b the autonomous differential equation

More information

2.2 SEPARABLE VARIABLES

2.2 SEPARABLE VARIABLES 44 CHAPTER FIRST-ORDER DIFFERENTIAL EQUATIONS 6 Consider the autonomous DE 6 Use our ideas from Problem 5 to find intervals on the -ais for which solution curves are concave up and intervals for which

More information

4 Strain true strain engineering strain plane strain strain transformation formulae

4 Strain true strain engineering strain plane strain strain transformation formulae 4 Strain The concept of strain is introduced in this Chapter. The approimation to the true strain of the engineering strain is discussed. The practical case of two dimensional plane strain is discussed,

More information