4.3 - Linear Combinations and Independence of Vectors


Definitions, Theorems, and Examples

Definition 1. A vector v in a vector space V is called a linear combination of the vectors u_1, u_2, ..., u_k in V if v can be written in the form

    v = c_1 u_1 + c_2 u_2 + ... + c_k u_k,

where c_1, c_2, ..., c_k are scalars.

Example 2. For a set of vectors S = {v_1, v_2, v_3} in R^3, one may check directly that v_1 = c_2 v_2 + c_3 v_3 for suitable scalars c_2 and c_3, so that v_1 is a linear combination of v_2 and v_3.

Example 3. The same idea applies in M_{2,2} (the set of 2 x 2 matrices): a matrix v_1 in a set S = {v_1, v_2, v_3, v_4} of 2 x 2 matrices is a linear combination of v_2, v_3, and v_4 whenever there are scalars expressing v_1 in terms of the other three.

Example 4 (How to find a linear combination). Given a set S = {v_1, v_2, v_3} in R^3, write the vector (1, 1, 1) as a linear combination of vectors in the set S.

Solution 5. We need to find scalars c_1, c_2, and c_3 such that

    (1, 1, 1) = c_1 v_1 + c_2 v_2 + c_3 v_3,

which leads to a linear system in the unknowns c_1, c_2, c_3 with augmented coefficient matrix [v_1 v_2 v_3 | (1, 1, 1)]. Row reduction to reduced row-echelon form (rref) produces a free variable here, say c_3 = t, with c_1 and c_2 then expressed in terms of t. Since c_3 is free, there are infinitely many such linear combinations; one of them is obtained at t = 0, which gives c_1 = 1, c_2 = -1, c_3 = 0, so that

    (1, 1, 1) = (1) v_1 + (-1) v_2 + (0) v_3.

Solution 6. If instead we try to write a second vector w from R^3 as a linear combination of vectors from the same set S, the same setup can lead to a reduced row-echelon form from which we conclude that the system is inconsistent. In that case we cannot write w as a linear combination of the vectors in S (meaning, as we'll see below, that w is linearly independent of the vectors in S), or that w is not in the span of S, or in symbols, w ∉ span(S) (see the definition below).
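The rref procedure of Solution 5 can be sketched in code for the unique-solution case. The vectors below are hypothetical stand-ins (the numeric entries of the original example were lost in transcription); exact rational arithmetic avoids floating-point round-off during elimination.

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 system A c = b by Gauss-Jordan elimination over
    exact rationals (assumes a unique solution exists)."""
    M = [[Fraction(A[i][j]) for j in range(3)] + [Fraction(b[i])]
         for i in range(3)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Hypothetical vectors standing in for the set S; by construction the
# target equals 1*v1 + 2*v2 - 1*v3.
v1, v2, v3 = (1, 0, 2), (0, 1, 3), (1, 1, 0)
target = (0, 1, 8)
A = [[v1[i], v2[i], v3[i]] for i in range(3)]   # columns are v1, v2, v3
c = solve3(A, target)
print(c)   # [Fraction(1, 1), Fraction(2, 1), Fraction(-1, 1)]
```

An inconsistent system (Solution 6) would surface here as elimination producing a row 0 = nonzero; this sketch omits that branch.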

Definition 7. Let S = {v_1, v_2, ..., v_k} be a subset of the vector space V. The set S is called a spanning set of V if every vector in V can be written as a linear combination of vectors in S. In such cases it is said that "S spans V."

Example 8 (Examples of Spanning Sets).

1. The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} = {e_1, e_2, e_3} spans R^3 because any vector u = (u_1, u_2, u_3) can be written as u = u_1 e_1 + u_2 e_2 + u_3 e_3.

2. The set S = {1, x, x^2} spans the set of all second-degree (or less) polynomials (denoted P_2) because any polynomial p(x) = a + bx + cx^2 can be written as p(x) = a(1) + b(x) + c(x^2).

3. A set S' = {w_1, w_2, w_3} of three vectors can span R^3 even when it is not the standard basis: writing u = c_1 w_1 + c_2 w_2 + c_3 w_3 for an arbitrary u = (u_1, u_2, u_3) in R^3 gives a linear system whose coefficient matrix row-reduces to the identity, and hence every system Ax = b has a unique solution x, regardless of b.

4. The set S = {v_1, v_2, v_3} from our previous examples does not span R^3. Although this set is hardly any different from the set S', the vectors in S lead to a row-reduced augmented coefficient matrix containing a row of the form [0 0 0 | *], where * is some combination of the entries of b. The presence of a free variable will guarantee infinitely many solutions if * turns out to equal 0 upon row reduction, but the system will have no solutions whenever * ≠ 0. Therefore, given the system Ax = b, not all vectors b will be expressible as a linear combination of the vectors in S, i.e., the set S does not span all of R^3.

Definition 9. If S = {v_1, v_2, ..., v_k} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S:

    span(S) = {c_1 v_1 + c_2 v_2 + ... + c_k v_k : the c_i are all real numbers}.

The span of S is denoted by span(S) or span({v_1, v_2, ..., v_k}). If span(S) = V, it is said that V is spanned by {v_1, v_2, ..., v_k}, or that S spans V.

Theorem 10. If S = {v_1, v_2, ..., v_k} is a set of vectors in a vector space V, then span(S) is a subspace of V. Moreover, span(S) is the smallest subspace of V that contains S, in the sense that every other subspace of V that contains S must contain span(S).

Proof. We have to show that the set of all linear combinations of v_1, v_2, ..., v_k is closed under addition and scalar multiplication. Consider any two vectors in span(S):

    u = c_1 v_1 + c_2 v_2 + ... + c_k v_k
    v = d_1 v_1 + d_2 v_2 + ... + d_k v_k

where the c_i and d_i are scalars for all i. Then

    u + v = (c_1 v_1 + ... + c_k v_k) + (d_1 v_1 + ... + d_k v_k)
          = (c_1 + d_1) v_1 + (c_2 + d_2) v_2 + ... + (c_k + d_k) v_k

and

    au = a (c_1 v_1 + c_2 v_2 + ... + c_k v_k) = (a c_1) v_1 + (a c_2) v_2 + ... + (a c_k) v_k,

hence u + v and au are also in span(S) because they too can be written as linear combinations of v_1, v_2, ..., v_k. Therefore, span(S) is a subspace of V.

Definition 11. A set of vectors S = {v_1, v_2, ..., v_k} in a vector space V is called linearly independent if the vector equation

    c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0

has only the trivial solution, c_1 = c_2 = ... = c_k = 0. If there are any nontrivial solutions, then S is called linearly dependent.

Remark 12. It is important to note that the coefficients in a linear combination of linearly independent vectors v_1, v_2, ..., v_k are unique.

Proof. If a vector w can be written both as

    w = c_1 v_1 + c_2 v_2 + ... + c_k v_k   and   w = d_1 v_1 + d_2 v_2 + ... + d_k v_k,

then

    c_1 v_1 + c_2 v_2 + ... + c_k v_k = d_1 v_1 + d_2 v_2 + ... + d_k v_k,
    (c_1 - d_1) v_1 + (c_2 - d_2) v_2 + ... + (c_k - d_k) v_k = 0.

Then, because the vectors v_1, v_2, ..., v_k are linearly independent, each of the coefficients must be zero, so that c_i = d_i for all i = 1, ..., k.

Theorem 13. A set S = {v_1, v_2, ..., v_k}, k ≥ 2, is linearly dependent if and only if at least one of the vectors v_j can be written as a linear combination of the other vectors in S.

Proof. (⇒) Assume that S is a linearly dependent set. Then there exist scalars c_1, c_2, ..., c_k (not all zero) such that

    c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0.

Because one of the coefficients must be nonzero, we can assume without loss of generality that c_1 ≠ 0. Then we can solve for v_1 in terms of the other vectors, producing

    v_1 = -(c_2/c_1) v_2 - (c_3/c_1) v_3 - ... - (c_k/c_1) v_k,

hence v_1 is a linear combination of the other vectors.

(⇐) Suppose the vector v_1 in S is a linear combination of the other vectors; in other words, v_1 = c_2 v_2 + c_3 v_3 + ... + c_k v_k. Then the equation

    v_1 - c_2 v_2 - c_3 v_3 - ... - c_k v_k = 0

has at least one coefficient, 1 (on v_1), that is nonzero, and you can conclude that S is linearly dependent.
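Theorem 13 is easy to check numerically. In the sketch below (hypothetical vectors, not from the notes), v_1 is built as 2 v_2 + 3 v_3, so the coefficients (-1, 2, 3) give a nontrivial solution of c_1 v_1 + c_2 v_2 + c_3 v_3 = 0, witnessing dependence.

```python
# Hypothetical vectors: v1 is constructed as 2*v2 + 3*v3, so the set
# S = {v1, v2, v3} is linearly dependent by Theorem 13.
v2 = (1, 0, 2)
v3 = (0, 1, 1)
v1 = tuple(2 * a + 3 * b for a, b in zip(v2, v3))

# The nontrivial coefficients (-1, 2, 3) witness the dependence:
# -1*v1 + 2*v2 + 3*v3 = 0.
combo = tuple(-1 * x + 2 * y + 3 * z for x, y, z in zip(v1, v2, v3))
print(v1, combo)   # (2, 3, 7) (0, 0, 0)
```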

Linear Independence in R^n

Theorem 14. The n vectors v_1, v_2, ..., v_n in R^n are linearly independent if and only if the n x n matrix A = [v_1 v_2 ... v_n] having the v_i as column vectors has nonzero determinant.

Remark 15. Any set of more than n vectors in R^n is always linearly dependent.

Proof. If k > n, then the system c_1 v_1 + c_2 v_2 + ... + c_k v_k = 0 is a homogeneous system that has more variables than equations and therefore has infinitely many nontrivial solutions.

4.4 - Bases and Dimension for Vector Spaces

Definitions, Theorems, and Examples

Definition 16. A finite set S of vectors in a vector space V is called a basis for V provided that (a) the vectors in S are linearly independent, and (b) the vectors in S span V. In other words, a basis for a vector space V is a linearly independent spanning set of vectors in V.

Definition 17. The standard basis for R^n consists of the unit vectors e_1 = (1, 0, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_n = (0, 0, 0, ..., 1).

Claim 18. Any set of more than n vectors in R^n is linearly dependent.

Claim 19. Any set of n linearly independent vectors in R^n is a basis for R^n.

Claim 20. A basis for any vector space V contains the largest possible number of linearly independent vectors in V.

Definition 21. Any two bases for a vector space consist of the same number of vectors, and this number is called the dimension of the vector space. We then call the vector space an "n-dimensional vector space."

Example 22. The zero vector space {0} has no basis because it contains no linearly independent set of vectors.

Example 23. The space P of all polynomials has no finite basis (e.g., we cannot combine two 2nd-degree polynomials to obtain all other degrees of polynomials). We therefore say that P is infinite dimensional.

Claim 24. An n-dimensional vector space contains proper subspaces of each dimension k = 1, 2, ..., n - 1. (Recall that a proper subspace is one that is not "trivial," i.e., is not just the {0} space and not the entire space itself.)

Theorem 25. Let V be an n-dimensional vector space and let S be a set of vectors in V.
(a) If S is linearly independent and consists of n vectors, then S is a basis for V.
(b) If S spans V and consists of n vectors, then S is a basis for V.
(c) If S is linearly independent, then S is contained in a basis for V.
(d) If S spans V, then S contains a basis for V.

Bases for Solution Spaces

Consider a homogeneous linear system Ax = 0, in which A is an m x n matrix, so the system consists of m equations in the n variables x_1, x_2, ..., x_n. Its "solution space" W, as discussed previously, is a subspace of R^n. We typically want to describe this solution space in terms of an explicit basis for W. In the case of a homogeneous system, if the row-echelon form of the matrix has no free variables, the system has only the trivial solution, and W = {0}. If there are free variables, we parameterize to obtain a basis for the solution vectors, as shown in the following example:

Example 26. Find a basis for the solution space of a homogeneous linear system Ax = 0 of three equations in the five unknowns x_1, ..., x_5.

Suppose that the reduced row-echelon form of the system has leading variables x_1 and x_3 and free variables x_2, x_4, and x_5. Setting

    x_2 = r, x_4 = s, x_5 = t,    (1)

we read off from the rref expressions of the form

    x_1 = a r + b s + c t  and  x_3 = d s + e t    (2)

for constants a, b, c, d, e taken from the rref entries. Now the equations in (1) and (2) give a typical solution vector of the form

    x = (x_1, x_2, x_3, x_4, x_5) = r (a, 1, 0, 0, 0) + s (b, 0, d, 1, 0) + t (c, 0, e, 0, 1),

all in terms of the parameters r, s, and t. So far, there is nothing new in this approach. However, we now want to generate linearly independent solution vectors that span the solution space. We do so by in turn selecting one of the parameters to be 1 while all the others are 0. So for r = 1 and s = t = 0, we have the vector v_1 = (a, 1, 0, 0, 0); for s = 1 and r = t = 0, we have v_2 = (b, 0, d, 1, 0); and for r = s = 0 and t = 1, we have v_3 = (c, 0, e, 0, 1). Therefore the solution space of the system is a 3-dimensional subspace of R^5 with a basis given by {v_1, v_2, v_3}.

Claim 27. When homogeneous systems are solved from the reduced row-echelon form, the resulting spanning set is always linearly independent.

Claim 28. The dimension of the solution space for a homogeneous linear system is the same as the number of free variables in the reduced row-echelon form.

4.5 - Row and Column Spaces

Row Space and Row Rank

In the course of solving systems of linear equations, we have often produced entire rows of zeros in our reduction of matrices to echelon form. What this really means is that some of the equations in the original system did not give us any additional information about the system; in other words, those equations were redundant or unnecessary. It turns out that it is relatively easy to determine how many equations within a system are not redundant. First, we address some definitions.

Definition 29. Given an m x n matrix A, the row vectors of A are the m vectors

    r_1 = (a_11, a_12, ..., a_1n), r_2 = (a_21, a_22, ..., a_2n), ..., r_m = (a_m1, a_m2, ..., a_mn)

in R^n. The subspace of R^n spanned by these m row vectors is called the row space of the matrix A, denoted Row(A). The dimension of the row space Row(A) is called the row rank of the matrix A.

Note that if the first column of an echelon matrix E is not all zero, then its nonzero row vectors have the forms

    r_1 = (e_11, ..., e_1p, ..., e_1q, ...),
    r_2 = (0, ..., e_2p, ..., e_2q, ...),
    r_3 = (0, ..., 0, ..., e_3q, ...),

and so on, with leading nonzero entries e_11, e_2p, e_3q, .... Then the equation

    c_1 r_1 + c_2 r_2 + ... + c_k r_k = 0

implies that c_1 e_11 = 0, c_1 e_1p + c_2 e_2p = 0, c_1 e_1q + c_2 e_2q + c_3 e_3q = 0, and so forth. We can therefore conclude, one coefficient at a time, that c_1 = c_2 = ... = c_k = 0, and thus the row vectors r_1 through r_k are linearly independent. This result leads to the following theorem.

Theorem 30. The nonzero row vectors of an echelon matrix are linearly independent and therefore form a basis for its row space. Furthermore, if two matrices are row equivalent, they have the same row space.

Algorithm 31. To find a basis for the row space of a matrix A, use elementary row operations to reduce A to an echelon matrix E. Then the nonzero row vectors of E form a basis for the row space of A, Row(A).

Remark 32. The nonzero row vectors of E are not necessarily the same rows that would have been linearly independent in the original matrix A.

Column Space and Column Rank

Definition 33. Given an m x n matrix A, the column vectors of A are the n vectors

    c_1 = (a_11, a_21, ..., a_m1)^T, c_2 = (a_12, a_22, ..., a_m2)^T, ..., c_n = (a_1n, a_2n, ..., a_mn)^T

in R^m. The subspace of R^m spanned by these n column vectors is called the column space of the matrix A, denoted Col(A). The dimension of the column space Col(A) is called the column rank of the matrix A.

Realize that an echelon matrix E has a general staircase form whose k nonzero rows begin with nonzero leading entries d_1, d_2, ..., d_k.

Since the d_1 to d_k are all nonzero leading entries, the k pivot columns of E look like

    (d_1, 0, 0, ..., 0)^T, (*, d_2, 0, ..., 0)^T, (*, *, d_3, ..., 0)^T, ...,

each pivot column having its leading entry one row lower than the previous one. The same argument we used to determine the linear independence of the row vectors can be used here: we would find that

    a_1 c_1 + a_2 c_2 + ... + a_k c_k = 0  implies  a_1 = a_2 = ... = a_k = 0.

Therefore, the column rank of the echelon matrix E is equal to the number of its nonzero rows, and hence is equal to its row rank.

Now, what about the column rank of the original matrix A? Note that the homogeneous linear systems

    Ax = 0  and  Ex = 0    (3)

have the same solution set because the two matrices are row equivalent. If x = (x_1, x_2, ..., x_n), then the left-hand sides in Eq. (3) are linear combinations of the column vectors:

    Ax = x_1 c_1 + x_2 c_2 + ... + x_n c_n,   Ex = x_1 c_1' + x_2 c_2' + ... + x_n c_n',

where the c_i are the columns of A and the c_i' are the columns of E. Because x satisfies either both or neither of the above systems, it follows that

    x_1 c_1 + x_2 c_2 + ... + x_n c_n = 0  if and only if  x_1 c_1' + x_2 c_2' + ... + x_n c_n' = 0.

Therefore, every linear dependence between column vectors of E is mirrored in a linear dependence with the same coefficients between the corresponding column vectors of A. Because the k pivot columns of E are linearly independent, we can conclude that the corresponding k columns of A are also linearly independent. In addition, because the pivot columns of E span Col(E), the corresponding columns in A span Col(A). Therefore, the latter k vectors form a basis for the column space of A, and the column rank of A is k. This leads to the following algorithm.

Algorithm 34. To find a basis for the column space of a matrix A, use elementary row operations to reduce A to an echelon matrix E. Then the column vectors of A that correspond to the pivot columns of E form a basis for the column space of A, Col(A).

Theorem 35. The row rank and column rank of any matrix are equal. This common value is simply called the rank of the matrix.

Definition 36. The solution space of the homogeneous system Ax = 0 is often called the null space of A and is denoted Null(A). The dimension of the null space is sometimes referred to as the nullity of A, or as dim Null(A).

Note that if A is an m x n matrix, then Row(A) and Null(A) are subspaces of R^n, while Col(A) is a subspace of R^m. In addition, we have the important identity

    rank(A) + dim Null(A) = n.

Why? The rank of A is the number of leading variables in the reduced echelon matrix E, while the dimension of the null space is the number of free variables in E. Therefore, their sum is the total number of variables, which is n.

It is an (almost) immediate consequence of the definition of Col(A) that the nonhomogeneous linear system Ax = b is consistent if and only if the vector b is in the column space of A.
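The parameterization trick of Example 26 and the identity rank(A) + dim Null(A) = n can both be checked concretely. The sketch below row-reduces a small hypothetical matrix (its third row is the sum of the first two, so the rank is 2) with exact rational arithmetic and builds one null-space basis vector per free variable.

```python
from fractions import Fraction

def nullspace_basis(A):
    """Basis for the solution space of Ax = 0: row-reduce to rref, then
    set one free variable to 1 at a time (as in Example 26)."""
    m, n = len(A), len(A[0])
    M = [[Fraction(x) for x in row] for row in A]
    pivots = []                      # columns of the leading variables
    r = 0                            # current pivot row
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:                  # one basis vector per free variable
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for row, pc in zip(M, pivots):
            v[pc] = -row[fc]         # leading variable in terms of free one
        basis.append(v)
    return basis

# Hypothetical 3x5 matrix; rank 2, so the nullity should be 5 - 2 = 3.
A = [[1, 2, 0, 1, 1],
     [0, 0, 1, 1, 0],
     [1, 2, 1, 2, 1]]
B = nullspace_basis(A)
print(len(B))   # 3
```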

Problems, first group. The directions state that we are given a set S of vectors in R^n, then "find a subset of S that forms a basis for the subspace of R^n spanned by S." This amounts to determining which vectors in the given list are linearly independent. We can do this by setting up a matrix with the given vectors as its columns and reducing the matrix to echelon form. Doing so identifies a basis for the column space of the matrix, which in turn tells us which of the original vectors were linearly independent.

Problems, second group. Here we are given a basis for some subspace W of R^n. We are then supposed to "extend" the basis so that it spans R^n. We do so by assuming that whatever basis vectors are lacking in S and would be needed to span R^n can be taken from the standard unit vectors e_1, e_2, ..., e_n. Realizing this, we set these problems up in exactly the same way as in the first group, with v_1, ..., v_k as column vectors, along with e_1, ..., e_n appended as columns as well. Reducing the resulting matrix to reduced row-echelon form identifies which of the columns are linearly independent, and hence identifies the basis vectors for R^n.

Problems, third group. Here we are trying to identify which equations within a system are actually "redundant." Recall that finding a basis for the row space tells us how many independent rows there are in a system, but not which rows those independent rows are. However, the basis for the column space corresponds exactly with the linearly independent columns of the original matrix. Therefore, if we take the transpose of the original system and find a basis for the column space, we have essentially identified which equations (after re-transposition) in the original system are "irredundant."

4.6 - Orthogonal Vectors in R^n

Definitions and Theorems

Definition 37. The length, or magnitude, of a vector v = (v_1, v_2, ..., v_n) in R^n is given by

    ||v|| = sqrt(v_1^2 + v_2^2 + ... + v_n^2).

This length is also called the norm of v. If ||v|| = 1, then v is called a unit vector. Note that by definition, the length cannot be negative, i.e., ||v|| ≥ 0. Also, ||v|| = 0 if and only if v is the zero vector.

Theorem 38. The length of a scalar multiple of a vector is given as follows. Let v be a vector in R^n and c a scalar. Then ||cv|| = |c| ||v||, where |c| is the absolute value of c.

Theorem 39. We often want to find a unit vector in the same direction as a given vector. If v is a nonzero vector in R^n, then the vector u = v / ||v|| has length 1 and has the same direction as v. This vector is called the unit vector in the direction of v.

Proof. Because v is nonzero, we know that ||v|| ≠ 0. Therefore ||v|| > 0, and we can write u = (1/||v||) v, so u is a positive scalar multiple of v and hence in the same direction as v. Additionally, u has length 1 because

    ||u|| = ||(1/||v||) v|| = (1/||v||) ||v|| = 1.

The process of finding the unit vector in the direction of v is called normalizing the vector v.
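Theorem 39's normalization can be sketched in a few lines (the sample vector is hypothetical; (3, 4) is chosen so that the length comes out to exactly 5):

```python
from math import sqrt, isclose

def norm(v):
    """Euclidean length ||v|| = sqrt(v_1^2 + ... + v_n^2)."""
    return sqrt(sum(x * x for x in v))

def normalize(v):
    """Unit vector v / ||v|| in the direction of a nonzero v."""
    length = norm(v)
    return tuple(x / length for x in v)

v = (3.0, 4.0)
u = normalize(v)
print(norm(v), u)        # 5.0 (0.6, 0.8)
```

Note that `normalize` divides by the length, so it assumes v is nonzero, mirroring the hypothesis of the theorem.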

Definition 40. The distance between two vectors u and v in R^n is given by

    ||u - v|| = sqrt((u_1 - v_1)^2 + (u_2 - v_2)^2 + ... + (u_n - v_n)^2).

Note the following properties of this distance:
1) ||u - v|| ≥ 0;
2) ||u - v|| = 0 if and only if u = v;
3) ||u - v|| = ||v - u||.

Dot Product and the Angle Between Two Vectors

To find the angle θ between two vectors u and v, use the law of cosines (shown below in R^2):

    ||v - u||^2 = ||u||^2 + ||v||^2 - 2 ||u|| ||v|| cos θ
    (v_1 - u_1)^2 + (v_2 - u_2)^2 = u_1^2 + u_2^2 + v_1^2 + v_2^2 - 2 ||u|| ||v|| cos θ
    v_1^2 - 2 u_1 v_1 + u_1^2 + v_2^2 - 2 u_2 v_2 + u_2^2 = u_1^2 + u_2^2 + v_1^2 + v_2^2 - 2 ||u|| ||v|| cos θ
    u_1 v_1 + u_2 v_2 = ||u|| ||v|| cos θ.    (4)

The left-hand side of the last line is the R^2 version of the dot product, which is defined in R^n as

    u · v = u_1 v_1 + u_2 v_2 + ... + u_n v_n.

This dot product, also called a scalar product, produces a scalar, not a vector! The product exhibits the following properties:
1. u · v = v · u (symmetry)
2. u · (v + w) = u · v + u · w (distributivity)
3. (cu) · v = u · (cv) = c (u · v) (homogeneity)
4. v · v ≥ 0, and v · v = 0 if and only if v = 0 (positivity)

Given the derivation of cos θ in Eq. (4), we would like to define the angle between two vectors by cos θ = (u · v) / (||u|| ||v||), but how do we know that -1 ≤ (u · v)/(||u|| ||v||) ≤ 1? We know this because of the Cauchy-Schwarz Inequality, often considered one of the most important inequalities in all of mathematics:

Theorem 41 (Cauchy-Schwarz Inequality). If u and v are vectors in R^n, then |u · v| ≤ ||u|| ||v||.

Proof. If u = 0, then |u · v| = |0 · v| = 0 and ||u|| ||v|| = 0 · ||v|| = 0, so the inequality holds. If u ≠ 0, we consider tu + v for any real t. We then know (by property 4 above) that

    0 ≤ (tu + v) · (tu + v) = t^2 (u · u) + 2t (u · v) + (v · v).

Now let a = (u · u), b = 2 (u · v), and c = (v · v). Then we have at^2 + bt + c ≥ 0, which represents a parabola lying on or above the t-axis. Therefore it has either two non-real roots or one repeated real root, and its discriminant is non-positive. We thus have

    b^2 - 4ac ≤ 0, i.e., b^2 ≤ 4ac,

and substituting the values for a, b, and c, this leads to

    4 (u · v)^2 ≤ 4 (u · u)(v · v),

or, dividing by 4,

    (u · v)^2 ≤ (u · u)(v · v).

Recall that (u · u) = ||u||^2 and (v · v) = ||v||^2. Then we have (u · v)^2 ≤ ||u||^2 ||v||^2, and because both sides are nonnegative, taking square roots gives

    |u · v| ≤ ||u|| ||v||.    (5)

Now, dividing both sides of Eq. (5) by the right-hand side (which is nonzero for nonzero vectors u and v) leads to

    |u · v| / (||u|| ||v||) ≤ 1,

which in turn is equivalent to

    -1 ≤ (u · v) / (||u|| ||v||) ≤ 1.

We therefore define the angle θ between two nonzero vectors u and v by

    cos θ = (u · v) / (||u|| ||v||),  for 0 ≤ θ ≤ π.

Furthermore, because ||u|| ||v|| > 0, the sign of cos θ is determined by the dot product u · v. Therefore, we have the following relationships between u and v:

    Opposite direction:  cos θ = -1,  θ = π
    u · v < 0:           cos θ < 0,   π/2 < θ < π  (quadrant II)
    u · v = 0:           cos θ = 0,   θ = π/2
    u · v > 0:           cos θ > 0,   0 < θ < π/2  (quadrant I)
    Same direction:      cos θ = 1,   θ = 0

Of particular interest is the case in which u · v = 0. The angle between the two vectors is then π/2, and the vectors are considered perpendicular (or orthogonal) to one another. This leads to the following definition.

Definition 42. Two vectors u and v in R^n are orthogonal if their dot product is zero (i.e., if u · v = 0). By this definition, the zero vector is orthogonal to every other vector (even though cos θ = (u · v)/(||u|| ||v||) would be undefined).

Another very important theorem in mathematics is the Triangle Inequality. In two dimensions, this says that the longest side of a "triangle" is no longer than the sum of the two shorter sides (what happens if it's not?). This theorem is widely used in proofs of important mathematical results, and it generalizes to higher dimensions. Here are its formal statement and proof.

Theorem 43 (Triangle Inequality). If u and v are vectors in R^n, then ||u + v|| ≤ ||u|| + ||v||.

Proof. We have

    ||u + v||^2 = (u + v) · (u + v)
                = u · (u + v) + v · (u + v)
                = u · u + 2 (u · v) + v · v
                = ||u||^2 + 2 (u · v) + ||v||^2    (6)
                ≤ ||u||^2 + 2 |u · v| + ||v||^2.    (7)

Now, by the Cauchy-Schwarz Inequality, we know that |u · v| ≤ ||u|| ||v||, so from Eq. (7) we have

    ||u + v||^2 ≤ ||u||^2 + 2 |u · v| + ||v||^2 ≤ ||u||^2 + 2 ||u|| ||v|| + ||v||^2 = (||u|| + ||v||)^2,

and because both sides are nonnegative, we finally have ||u + v|| ≤ ||u|| + ||v||. Note that equality occurs precisely when u = cv.

In the midst of the proof of the Triangle Inequality (Eq. (6)) we notice the following statement:

    ||u + v||^2 = ||u||^2 + 2 (u · v) + ||v||^2.

If the vectors u and v are orthogonal, then u · v = 0, and we end up with the following generalization of the Pythagorean Theorem to R^n.

Theorem 44. If u and v are vectors in R^n, then u and v are orthogonal if and only if ||u + v||^2 = ||u||^2 + ||v||^2.

Dot Products and Matrix Multiplication

We quite often represent a vector u = (u_1, u_2, ..., u_n) as an n x 1 matrix (a column vector). With this in mind, we can represent the dot product of two column vectors u and v as the matrix product

    u · v = u^T v = [u_1 u_2 ... u_n] [v_1; v_2; ...; v_n] = u_1 v_1 + u_2 v_2 + ... + u_n v_n.

Orthogonality, Linear Independence, and Matrix Subspaces

As one might expect, there is a connection between orthogonality and linear independence.

Theorem 45. If the nonzero vectors v_1, v_2, ..., v_k are mutually orthogonal (that is, each two of them are orthogonal), then they are linearly independent (see the proof in the textbook).

An important implication of this theorem is that any set of n mutually orthogonal nonzero vectors in R^n constitutes a basis for R^n. This particular kind of basis is called an orthogonal basis. One simple example is the set of standard unit vectors e_1, e_2, ..., e_n in R^n. Of course, we would now have to ask exactly how we can find an orthogonal basis for a given space. Furthermore, we often would like the basis vectors to have length 1. This leads to what is called an orthonormal basis, and of course, we would like to know how to find a basis of this type as well. More on this later.
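The dot product, the angle formula cos θ = (u · v)/(||u|| ||v||), and the Cauchy-Schwarz bound can all be exercised together (hypothetical vectors; the pair below is chosen to be orthogonal):

```python
from math import acos, pi, sqrt, isclose

def dot(u, v):
    """u . v = u_1 v_1 + ... + u_n v_n."""
    return sum(a * b for a, b in zip(u, v))

def angle(u, v):
    """Angle theta with cos(theta) = u.v / (||u|| ||v||), for nonzero u, v."""
    return acos(dot(u, v) / (sqrt(dot(u, u)) * sqrt(dot(v, v))))

u, v = (1.0, 0.0, 1.0), (1.0, 0.0, -1.0)
print(dot(u, v))   # 0.0, so u and v are orthogonal and angle(u, v) is pi/2
```

Cauchy-Schwarz guarantees the argument to `acos` stays in [-1, 1] mathematically; in floating point a tiny overshoot is possible, so a production version might clamp the ratio first.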

Orthogonal Complements

Consider the homogeneous linear system Ax = 0 of m equations in n unknowns. If v_1, v_2, ..., v_m are the row vectors of the m x n coefficient matrix A, then the system can be represented as

    Ax = (v_1 · x, v_2 · x, ..., v_m · x) = (0, 0, ..., 0).    (8)

From this, it should be clear that x is a solution vector of Ax = 0 if and only if x is orthogonal to each row vector of A. But then x is orthogonal to every linear combination of row vectors, because

    x · (c_1 v_1 + c_2 v_2 + ... + c_k v_k) = c_1 (x · v_1) + c_2 (x · v_2) + ... + c_k (x · v_k)
                                            = c_1 (0) + c_2 (0) + ... + c_k (0) = 0.

Therefore, the vector x in R^n is a solution vector of Ax = 0 if and only if x is orthogonal to each vector in the row space of the matrix A. This leads us to the following definition:

Definition 46 (Orthogonal Complement of a Subspace). Given a subspace V of R^n, the space of all vectors orthogonal to V is called the orthogonal complement of V. It is denoted by V⊥ and read "V perp."

It is easy to show that V⊥ is also a subspace of R^n. This statement is part of the following theorem.

Theorem 47. Let V be a subspace of R^n. Then:
1. Its orthogonal complement V⊥ is also a subspace of R^n;
2. The only vector that lies in both V and V⊥ is the zero vector;
3. The orthogonal complement of V⊥ is V, or in other words, (V⊥)⊥ = V;
4. If S is a spanning set for V, then the vector u is in V⊥ if and only if u is orthogonal to every vector in S.

The importance of our discussion of the system in Eq. (8) cannot be overstated. That discussion leads to the following theorem.

Theorem 48. Let A be an m x n matrix. Then the row space of A and the null space of A are orthogonal complements in R^n.

We then have at least one way to find a basis for the orthogonal complement of a given space, for example, if we seek a basis for the orthogonal complement V⊥ of a subspace V of R^n:

Algorithm 49. Let A be the m x n matrix with row vectors v_1, v_2, ..., v_m spanning V. Reduce A to echelon form and find a basis for the solution space Null(A) of Ax = 0. Because V⊥ = Null(A), this latter basis will be a basis for the orthogonal complement of V.

4.7 - General Vector Spaces

Examples of General Vector Spaces

The concept of a "vector space" can be (and is) easily extended to spaces whose elements are not necessarily "vectors" in the way we typically view or define vectors. Here are some examples.

Example 50 (Function Space). Let F denote the set of all functions f : R -> R. This notation means that f is defined for all real numbers on the real line. Therefore, functions such as any polynomial f_1(x) and f_2(x) = sin x are in F, but f_3(t) = 1/t and f_4(t) = tan t are not in F because they are not defined for all real t. Addition and scalar multiplication of functions are defined as (f + g)(x) = f(x) + g(x) and (cf)(x) = c f(x), respectively, and the zero element is the constant function f(x) = 0 for all real x. (One can also consider functions defined on some interval I ⊆ R.)
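Theorem 48 has a direct numerical reading: any solution of Ax = 0 is orthogonal to every row of A, and hence to every linear combination of rows. A tiny check with a hypothetical matrix:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Rows of A span a subspace V of R^4 (hypothetical entries).
A = [(1, 0, 2, 1),
     (0, 1, 1, 3)]

# x solves Ax = 0 (check: 1*2 + 0 + 2*(-1) + 0 = 0 and 0 + 1 - 1 + 0 = 0),
# so x lies in Null(A), which Theorem 48 identifies with V-perp.
x = (2, 1, -1, 0)
print([dot(row, x) for row in A])   # [0, 0]

# x is then orthogonal to any combination of the rows as well:
combo = tuple(3 * a - 2 * b for a, b in zip(*A))
print(dot(combo, x))                # 0
```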

Example 51 (Polynomial Space). Let P be the set of all polynomials in F (i.e., polynomials are defined for all real values of their independent variable). Sums of polynomials and scalar multiples of polynomials are still polynomials, so P is a subspace of F, and hence a vector space in its own right. However, a basis for P would have to be infinite (why?). We therefore say that P is infinite dimensional.

Example 52 (Space of polynomials of degree at most n). Let P_n be the set of all polynomials of degree at most n (for n ≥ 0). Again, this set is closed under addition and scalar multiplication and therefore is a subspace of both P and F. Whereas P is infinite-dimensional, P_n is (n + 1)-dimensional (why n + 1 and not just n?), and has basis {1, x, x^2, ..., x^n}. NOTE: The fact that the basis "vectors" of this polynomial space are linearly independent leads to the result that the coefficients of a polynomial are unique, which in turn is what allows us to use the method of "partial fractions" that we learned in calculus class.

Example 53 (Space of Continuously Differentiable Functions). Let C^1 be the set of all continuously differentiable functions u : R -> R. A function u is in this space if u and u' exist and are continuous for all real t. Functions in C^1 include all polynomials, exponentials such as u(t) = e^{kt}, and trigonometric functions such as v(t) = sin t and w(t) = cos t. Functions not in this space include those such as g(t) = |t| (not differentiable at 0) or h(t) = csc t (not defined for all real t). This space is a subspace of F (hence a vector space) because

    d/dt (x(t) + y(t)) = dx/dt + dy/dt  and  d/dt (r x(t)) = r dx/dt.

Example 54 (Space of m x n Matrices). It is easy to verify that for given m and n, the set of m x n matrices possesses all the properties of a vector space. We label this space M_{m,n}.

Example 55. A specific instance of the above space is the set of all 2 x 2 matrices, M_{2,2}. This is a vector space in its own right (it is closed under scalar multiplication and matrix addition). Furthermore, it is a 4-dimensional space with basis "vectors"

    [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], and [0 0; 0 1].

Example 56. A subset of M_{2,2} consists of all matrices of the form [a -b; b a], which is easily verifiable as a subspace of M_{2,2}. This is a 2-dimensional subspace because its basis is given by

    A = [1 0; 0 1]  and  B = [0 -1; 1 0].

Note that

    B^2 = [0 -1; 1 0][0 -1; 1 0] = [-1 0; 0 -1] = -A,

and for any linear combinations aA + bB and cA + dB, given by [a -b; b a] and [c -d; d c], we also have

    (aA + bB)(cA + dB) = [a -b; b a][c -d; d c] = [ac - bd, -(ad + bc); ad + bc, ac - bd].

This last result shows that this particular space is also closed under matrix multiplication (in addition to scalar multiplication and matrix addition); we know by now that this is not necessarily the case for a subspace of matrices. Furthermore, it should not take you long to recognize the above as a matrix representation of the complex numbers! The matrix

    [a -b; b a] = a [1 0; 0 1] + b [0 -1; 1 0]

corresponds to the complex number a + bi, and the matrix multiplication above is none other than the familiar

    (a + bi)(c + di) = (ac - bd) + (ad + bc) i.

The primary convenience of such a representation is that matrices are much easier to program on a computer than are the rules of complex number arithmetic.

Solution Spaces of Differential Equations

The most applicable "vector space" in our class (apart from R^n) is the space of solutions of differential equations. The concepts regarding vector spaces, such as linear independence, rank, dimension, bases, and so on, come into play in our solution of differential equations, particularly when we get into systems of differential equations. Here are some more examples of vector spaces in this context.

Example 57. The first-order differential equation y' = ky, where k is a constant, has the general solution y = C e^{kx}. Therefore the solution space of the differential equation consists of all constant multiples of the single function e^{kx}, and is therefore the 1-dimensional function space with basis {e^{kx}}.

We will soon study this in more detail, but for now, it is probably not difficult to trust that the set of all solutions of a linear second-order differential equation of the form

    a y'' + b y' + c y = 0    (9)

with constant coefficients a, b, and c is a 2-dimensional function subspace S. The following serve as examples of this concept, and the details behind the derivation of the solutions can be found in the textbook. Do not be concerned (for now) with how to solve these differential equations; just be sure to understand how the solutions can be generated from basis "vectors" in a subspace of functions.

Example 58. Let a = 1 and b = c = 0 in Eq. (9). Then we have

    y'' = 0,    (10)

which after two integrations (do this for review!) has the solution y(x) = Ax + B. Every function of this form is a solution to the differential equation in (10), and every solution of the differential equation has this form. Therefore the solution space of this differential equation is the 2-dimensional subspace P_1 of linear polynomials, with basis {1, x}.

Example 59. Let a = 1, b = -2, and c = 0 in Eq. (9). Then we have

    y'' - 2y' = 0,    (11)

which has solution (verify this!) y(x) = (1/2) C e^{2x} + A. Therefore every solution of (11) is of the form y(x) = A + B e^{2x} (where B = C/2), and every function of this form (verify this via substitution) is a solution of the differential equation. The solution space therefore is 2-dimensional with basis {1, e^{2x}}.
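The "verify this via substitution" step of Example 59 can be done mechanically. The sketch below writes out the derivatives of y(x) = A + B e^{2x} by hand and checks that the residual y'' - 2y' vanishes for several choices of A, B, and x:

```python
from math import exp, isclose

# Check Example 59: y(x) = A + B*exp(2x) solves y'' - 2y' = 0.
# Derivatives computed analytically: y' = 2B e^{2x}, y'' = 4B e^{2x}.
def residual(A, B, x):
    y1 = 2 * B * exp(2 * x)      # y'
    y2 = 4 * B * exp(2 * x)      # y''
    return y2 - 2 * y1

checks = [residual(A, B, x)
          for A in (0.0, 1.0, -3.0)
          for B in (0.0, 2.0, 0.5)
          for x in (-1.0, 0.0, 1.0)]
print(all(isclose(r, 0.0, abs_tol=1e-12) for r in checks))   # True
```

The constant A drops out of the residual entirely, which is exactly the statement that 1 is a basis "vector" of the solution space.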

Example 60 Let a = 1, b = 0, and c = -1 in Eq. (9). Then we have

    y'' - y = 0,    (12)

which after quite a bit of algebraic manipulation leads to the solution

    y(x) = A cosh x + B sinh x,    (13)

where A = a sinh b and B = a cosh b. Therefore every solution of the differential equation in (12) is of the form shown in (13), and every function of this form is a solution of the differential equation. Therefore, the solution space of the differential equation is the 2-dimensional function space generated by the basis {cosh x, sinh x}. Furthermore, because cosh x and sinh x are defined as linear combinations of e^x and e^(-x), i.e.,

    cosh x = (e^x + e^(-x))/2  and  sinh x = (e^x - e^(-x))/2,

we also see that the solution can be written in the form y(x) = Ce^x + De^(-x), and so a second basis for the solution space would be the set {e^x, e^(-x)}.

Homework Hints

If you are trying to determine if a set is a subspace (or vector space), often an easy check is to determine whether the set contains the zero element (i.e., the zero matrix, the zero function, etc.). If it does not, it cannot be a vector space, and hence cannot be a subspace. Other than checking for the zero element, make sure the set is closed under addition and scalar multiplication. In other words, make sure that the sums and scalar multiples have the same characteristics as those "vectors" with which you started. To determine linear independence, make sure that one "vector" cannot be written as a multiple (or linear combination) of the others. This can often be done by inspection (i.e., with little or no work).
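The equivalence of the two bases {cosh x, sinh x} and {e^x, e^(-x)} above can be made concrete. Substituting the definitions of cosh and sinh gives C = (A + B)/2 and D = (A - B)/2, and the sketch below (an added illustration, not from the notes) checks that both representations give the same function pointwise:

```python
import math

A, B = 2.0, 5.0
C, D = (A + B) / 2, (A - B) / 2   # change of basis from the cosh/sinh identities

for x in (-1.0, 0.0, 0.5, 2.0):
    hyperbolic = A * math.cosh(x) + B * math.sinh(x)
    exponential = C * math.exp(x) + D * math.exp(-x)
    assert math.isclose(hyperbolic, exponential)

print("both bases generate the same solution")
```

Since (A, B) -> (C, D) is an invertible linear map, either pair of functions spans the same 2-dimensional solution space.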


More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

1. Determine by inspection which of the following sets of vectors is linearly independent. 3 3.

1. Determine by inspection which of the following sets of vectors is linearly independent. 3 3. 1. Determine by inspection which of the following sets of vectors is linearly independent. (a) (d) 1, 3 4, 1 { [ [,, 1 1] 3]} (b) 1, 4 5, (c) 3 6 (e) 1, 3, 4 4 3 1 4 Solution. The answer is (a): v 1 is

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

Fall 2016 MATH*1160 Final Exam

Fall 2016 MATH*1160 Final Exam Fall 2016 MATH*1160 Final Exam Last name: (PRINT) First name: Student #: Instructor: M. R. Garvie Dec 16, 2016 INSTRUCTIONS: 1. The exam is 2 hours long. Do NOT start until instructed. You may use blank

More information