7. Dimension and Structure.



7.1. Basis and Dimension: Bases for Subspaces

Example 2. The standard unit vectors e_1, e_2, …, e_n are linearly independent, for if we write (2) in component form, then we obtain (c_1, c_2, …, c_n) = (0, 0, …, 0), which implies that all of the coefficients in (2) are 0. Furthermore, these vectors span R^n because an arbitrary vector x = (x_1, x_2, …, x_n) in R^n can be expressed as x = x_1 e_1 + x_2 e_2 + ⋯ + x_n e_n. We call {e_1, e_2, …, e_n} the standard basis for R^n.

7.1. Basis and Dimension: Bases for Subspaces

Example 4. The nonzero row vectors of a matrix in row echelon form are linearly independent. To visualize why this is true, consider typical matrices in row echelon form, where the *'s denote arbitrary real numbers: each nonzero row has a leading 1 in a column where all later rows have zeros, so no nonzero row can be a linear combination of the rows below it.

7.1. Basis and Dimension: Bases for Subspaces

Let V be a nonzero subspace of R^n.
- Let v_1 be any nonzero vector in V. If V = span{v_1}, then we have our linearly independent spanning set.
- If V ≠ span{v_1}, then choose any vector v_2 in V that is not a linear combination of v_1. If V = span{v_1, v_2}, then we have our linearly independent spanning set.
- If V ≠ span{v_1, v_2}, then choose any vector v_3 in V that is not a linear combination of v_1 and v_2. If V = span{v_1, v_2, v_3}, then we have our linearly independent spanning set.
- Repeat the process in the preceding steps.
- If we continue this construction process, then there are two logical possibilities: at some stage we will produce a linearly independent set that spans V or, if not, we will encounter a linearly independent set of n + 1 vectors.
- But the latter is impossible, since a linearly independent set in R^n can contain at most n vectors (Theorem 3.4.8).

7.1. Basis and Dimension: Bases for Subspaces

Let V be a nonzero subspace of R^n, and suppose that the sets B_1 = {v_1, v_2, …, v_k} and B_2 = {w_1, w_2, …, w_m} are bases for V. Suppose that k < m. Since B_1 spans V, and since the vectors in B_2 are in V, each w_i in B_2 can be expressed as a linear combination of the vectors in B_1, say as in (4). Now consider the homogeneous linear system of k equations in the m unknowns c_1, c_2, …, c_m. Since k < m, this system has more unknowns than equations and hence has a nontrivial solution (Theorem 2.2.3). This implies that there exist numbers c_1, c_2, …, c_m, not all zero, such that (5) holds. But then (5) gives a nontrivial linear combination of w_1, w_2, …, w_m that equals 0, which contradicts the linear independence of w_1, w_2, …, w_m.

7.1. Basis and Dimension: Bases for Subspaces

Example 5. Every basis for a line through the origin of R^n has one vector, every basis for a plane through the origin of R^n has two vectors, and every basis for R^n has n vectors (since the standard basis has n vectors).

Example 6. A line through the origin of R^n has dimension 1, a plane through the origin of R^n has dimension 2, and R^n has dimension n.

7.1. Basis and Dimension: Dimension of a Solution Space

At the end of Section 3.5 we stated that the general solution of a homogeneous linear system Ax = 0 that results from Gauss-Jordan elimination is of the form x = t_1 v_1 + t_2 v_2 + ⋯ + t_s v_s, in which the vectors v_1, v_2, …, v_s are linearly independent. We call these vectors the canonical solutions of Ax = 0. Since the canonical solutions span the solution space and are linearly independent, they form a basis for the solution space; we call that basis the canonical basis for the solution space.

7.1. Basis and Dimension: Dimension of a Solution Space

Example 7. In Example 7 of Section 2.2, the general solution produced by Gauss-Jordan elimination has three parameters. The corresponding canonical basis has three vectors, and the solution space is a three-dimensional subspace of R^6.
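The canonical basis can be computed mechanically. Below is a minimal sketch in Python using sympy; the coefficient matrix is hypothetical, since the matrix of Example 7 is not reproduced in this transcription. sympy's nullspace() returns exactly the canonical solutions read off from the reduced row echelon form.

```python
from sympy import Matrix

# Hypothetical coefficient matrix, for illustration only.
A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 3]])

basis = A.nullspace()   # canonical basis for the solution space of Ax = 0
for v in basis:
    print(v.T)

print("dimension of the solution space:", len(basis))  # = number of free variables
```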

7.1. Basis and Dimension: Dimension of a Hyperplane

Recall from Section 3.5 that if a = (a_1, a_2, …, a_n) is a nonzero vector in R^n, then the hyperplane a⊥ through the origin of R^n is given by the equation a_1x_1 + a_2x_2 + ⋯ + a_nx_n = 0. Let us view this as a linear system of one equation in n unknowns. Since this system has one leading variable and n − 1 free variables, its solution space has dimension n − 1, and this implies that dim(a⊥) = n − 1. If we exclude R^n itself, then the hyperplanes in R^n are the subspaces of maximal dimension.

7.2. Properties of Bases: Properties of Bases

Solving this system yields a one-parameter family of representations; taking t = 1 and t = −1 yields two different ones. It was the linear dependence of v_1, v_2, v_3, and v_4 that made it possible to express v as a linear combination of these vectors in more than one way.

7.2. Properties of Bases: Properties of Bases

Let v be any vector in V. Since S spans V, there is at least one way to express v as a linear combination of the vectors in S. Suppose there are two such expressions, as in (2). Subtracting the second equation from the first yields a linear combination of the vectors in S that equals 0. Since these vectors are linearly independent, each of the coefficients in that linear combination must be zero. Thus, the two linear combinations in (2) are the same.

7.2. Properties of Bases: Properties of Bases

(a) If S spans V but is not a basis for V, then S must be a linearly dependent set. This means some vector v in S is a linear combination of its predecessors. Remove this vector from S to obtain a set S′. The set S′ must still span V, since any linear combination of the vectors in S can be rewritten as a linear combination of the vectors in S′ by expressing v in terms of its predecessors.

7.2. Properties of Bases: Properties of Bases

This theorem reveals two important facts about bases:
1. Every spanning set for a subspace is either a basis for that subspace or has a basis as a subset.
2. Every linearly independent set in a subspace is either a basis for the subspace or can be extended to a basis for the subspace.

7.2. Properties of Bases: Properties of Bases

REMARK. Engineers use the term degree of freedom as a synonym for dimension, the idea being that a space with k degrees of freedom allows freedom of motion or variation in at most k independent directions.

7.2. Properties of Bases: Subspaces of Subspaces

If V and W are subspaces of R^n, and if V is a subset of W, then we also say that V is a subspace of W. For example, the space {0} is a subspace of a line through the origin, which in turn is a subspace of a plane through the origin, which in turn is a subspace of R^3.

7.2. Properties of Bases: Subspaces of Subspaces

7.2. Properties of Bases: Sometimes Spanning Implies Linear Independence, and Conversely

In general, to show that a set of vectors is a basis for a subspace V of R^n, you must show that the set is linearly independent and spans V. However, if you know a priori that the number of vectors in the set is the same as the dimension of V, then to show that the set is a basis it suffices to show either that it is linearly independent or that it spans V; the other condition will follow automatically.

7.2. Properties of Bases: Sometimes Spanning Implies Linear Independence, and Conversely

Example 2. (a) Show that the vectors v_1 = (1, 2, 1), v_2 = (1, −1, 3), and v_3 = (1, 1, 4) form a basis for R^3. We have three vectors in a three-dimensional space, so it suffices to show that the vectors are linearly independent. One way to do this is to form the matrix A that has v_1, v_2, and v_3 as its column vectors and test it for invertibility. The determinant of A is nonzero, det(A) = −7, so the column vectors are linearly independent.
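The determinant test is easy to run numerically. A minimal check of Example 2(a) in Python (numpy's det is a floating-point computation, so the result is −7 up to roundoff):

```python
import numpy as np

# Columns are v1, v2, v3 from Example 2(a).
A = np.column_stack([(1, 2, 1), (1, -1, 3), (1, 1, 4)])

print(np.linalg.det(A))   # -7.0: nonzero, so the columns are linearly independent
```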

7.2. Properties of Bases: A Unifying Theorem

7.3. The Fundamental Spaces of a Matrix: The Fundamental Spaces of a Matrix

If A is an m × n matrix, then there are three important spaces associated with A:
1. The row space of A, denoted by row(A), is the subspace of R^n that is spanned by the row vectors of A.
2. The column space of A, denoted by col(A), is the subspace of R^m that is spanned by the column vectors of A.
3. The null space of A, denoted by null(A), is the solution space of Ax = 0. This is a subspace of R^n.

If we consider A and A^T together, then there appear to be six such subspaces. But transposing a matrix converts rows to columns and columns to rows, so row(A^T) = col(A) and col(A^T) = row(A). Thus, of the six subspaces only the following four are distinct: row(A), col(A), null(A), and null(A^T). These are called the fundamental spaces of A.

7.3. The Fundamental Spaces of a Matrix: The Fundamental Spaces of a Matrix

REMARK. Later in this chapter we will show that the row space and column space of a matrix always have the same dimension, so you can also think of the rank of A as the dimension of the column space.

7.3. The Fundamental Spaces of a Matrix: Orthogonal Complements

Recall from Section 3.5 that if a is a nonzero vector in R^n, then a⊥ is the set of all vectors in R^n that are orthogonal to a. We call this set the orthogonal complement of a (it is the hyperplane through the origin with normal a). The following definition extends the idea of an orthogonal complement to sets with more than one vector.

7.3. The Fundamental Spaces of a Matrix: Orthogonal Complements

Example 1. If L is a line through the origin of R^3, then L⊥ is the plane through the origin that is perpendicular to L, and if W is a plane through the origin of R^3, then W⊥ is the line through the origin that is perpendicular to W.

Example 2. If S is the set of row vectors of an m × n matrix A, then S⊥ is the solution space of Ax = 0.

7.3. The Fundamental Spaces of a Matrix: Orthogonal Complements

The set S⊥ contains the vector 0, so we can be assured that it is nonempty. Let u and v be vectors in S⊥ and let c be a scalar. To show that cu and u + v are vectors in S⊥, we must show that (cu)·x = 0 and (u + v)·x = 0 for every vector x in S. But u and v are vectors in S⊥, so u·x = 0 and v·x = 0. Thus, using properties of the dot product we obtain (cu)·x = c(u·x) = 0 and (u + v)·x = u·x + v·x = 0. Thus, S⊥ is closed under scalar multiplication and addition.

7.3. The Fundamental Spaces of a Matrix: Orthogonal Complements

Example 3. Find the orthogonal complement in an xyz-coordinate system of the set S = {v_1, v_2}.

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

(b) Let v be any vector in span(S)⊥. This vector is orthogonal to every vector in span(S), so it is orthogonal to every vector in S, since S is contained in span(S). Thus, v is in S⊥. Conversely, let v be any vector in S⊥, where S = {v_1, v_2, …, v_k}. Every vector w in span(S) has the form w = c_1v_1 + c_2v_2 + ⋯ + c_kv_k, so v·w = c_1(v·v_1) + ⋯ + c_k(v·v_k) = 0, which shows that v is orthogonal to every vector in span(S).

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

(b) The orthogonal complement of a nonempty set and the orthogonal complement of the subspace spanned by that set are the same.
(c) Note that it is required that W be a subspace of R^n (not just a subset) for this to be true.

Theorem (Double Perp Theorem).

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

(a) When you multiply a row of a matrix A by a nonzero scalar or add a scalar multiple of one row to another, you are computing a linear combination of row vectors of A. Thus, if B is obtained from A by a succession of elementary row operations, then every vector in row(B) must be in row(A). But A can also be obtained from B by performing the inverse operations in reverse order, so every vector in row(A) must also be in row(B), from which we conclude that row(A) = row(B).

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

(b) By part (a), performing an elementary row operation on a matrix does not change the row space of the matrix and hence does not change the orthogonal complement of the row space. But the orthogonal complement of the row space of A is the null space of A (Theorem 7.3.5), so performing an elementary row operation on a matrix A does not change the null space of A.

7.3. The Fundamental Spaces of a Matrix: Properties of Orthogonal Complements

7.3. The Fundamental Spaces of a Matrix: Finding Bases by Row Reduction

Problem: find a basis for the subspace W of R^n that is spanned by a given set of vectors v_1, v_2, …, v_s. We can start by forming a matrix A that has v_1, v_2, …, v_s as row vectors. This makes W the row space of A, so a basis can be found by reducing A to row echelon form and extracting the nonzero rows.

7.3. The Fundamental Spaces of a Matrix: Finding Bases by Row Reduction

Example 4. (a) Find a basis for the subspace W of R^5 that is spanned by the given vectors v_1, v_2, v_3, v_4. Extracting the nonzero rows of a row echelon form of A yields the basis vectors. Since there are three basis vectors, we have shown that dim(W) = 3.
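A sketch of the row-reduction method in Python. The vectors of Example 4 are not reproduced in this transcription, so a hypothetical spanning set stands in; the nonzero rows of the reduced row echelon form are a basis for W.

```python
from sympy import Matrix

# Hypothetical vectors v1..v4 stacked as rows of A, so W = row(A).
A = Matrix([[1, 2, 0, 1, 3],
            [2, 4, 1, 3, 7],
            [1, 2, 1, 2, 4],
            [0, 0, 1, 1, 1]])

R, pivots = A.rref()                            # reduced row echelon form
basis = [R.row(i) for i in range(len(pivots))]  # the nonzero rows of R
print(basis)
print("dim(W) =", len(pivots))
```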

7.3. The Fundamental Spaces of a Matrix: Finding Bases by Row Reduction

Example 4 (continued). Alternatively, we can take the matrix A all the way to its reduced row echelon form, which yields a different set of basis vectors.

7.3. The Fundamental Spaces of a Matrix: Finding Bases by Row Reduction

Example 4 (continued). (b) Find a basis for W⊥. Since W = row(A), it follows from Theorem 7.3.5 that W⊥ is the null space of A, so our problem reduces to finding a basis for the solution space of the homogeneous system Ax = 0. The resulting vectors u_1 and u_2 form a basis for W⊥.

7.3. The Fundamental Spaces of a Matrix: Finding Bases by Row Reduction

Example 5. Find a homogeneous linear system Bx = 0 whose solution space is the space W spanned by the vectors v_1, v_2, v_3, and v_4 in Example 4. In part (b) of Example 4 we found basis vectors u_1 and u_2 for W⊥. Use these as row vectors to form the matrix B. The row space of B is W⊥, so the null space of B is (W⊥)⊥ = W. Thus, the linear system Bx = 0 has W as its solution space.

7.3. The Fundamental Spaces of a Matrix: Determining Whether a Vector Is in a Given Subspace

Consider the following three problems. Although they look different on the surface, they are just different formulations of the same problem.

7.3. The Fundamental Spaces of a Matrix: Determining Whether a Vector Is in a Given Subspace

Example 6. What conditions must a vector b = (b_1, b_2, b_3, b_4, b_5) satisfy in order to lie in the subspace of R^5 spanned by the vectors v_1, v_2, v_3, and v_4 of Example 4?

Solution 1. The most direct way to solve this problem is to look for conditions under which the vector equation x_1v_1 + x_2v_2 + x_3v_3 + x_4v_4 = b has a solution x_1, x_2, x_3, x_4; that is, to treat it as a consistency problem.

7.3. The Fundamental Spaces of a Matrix: Determining Whether a Vector Is in a Given Subspace

Example 6, Solution 2. Let us focus on rows rather than columns: b will lie in span{v_1, v_2, v_3, v_4} if and only if this space has the same dimension as span{v_1, v_2, v_3, v_4, b}, that is, if and only if the matrix A with row vectors v_1, v_2, v_3, v_4 has the same rank as the matrix that results when b is adjoined to A as an additional row vector.

7.3. The Fundamental Spaces of a Matrix: Determining Whether a Vector Is in a Given Subspace

Example 6, Solution 3. To say that b lies in the subspace W spanned by the vectors v_1, v_2, v_3, and v_4 is the same as saying that b is orthogonal to every vector in W⊥. We showed in part (b) of Example 4 that the vectors u_1 and u_2 form a basis for W⊥. Thus, b will be orthogonal to every vector in W⊥ if and only if u_1·b = 0 and u_2·b = 0.

7.4. The Dimension Theorem and Its Implications: The Dimension Theorem for Matrices

We can restate the dimension theorem for homogeneous systems as: number of free variables = n − rank(A), or, alternatively, as rank(A) + number of free variables = number of columns of A (1). But each free variable produces a parameter in a general solution of the system Ax = 0, so the number of free variables is the same as the dimension of the solution space of the system (which is the nullity of A). Thus, we can rewrite (1) as rank(A) + nullity(A) = number of columns of A (2).

7.4. The Dimension Theorem and Its Implications: The Dimension Theorem for Matrices

Example 1. In Example 4 of Section 7.3, a row echelon form of A had three nonzero rows and the system Ax = 0 had two free variables, so rank(A) + nullity(A) = 3 + 2 = 5, which is consistent with Formula (2) and the fact that A has five columns.
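The dimension theorem is easy to verify numerically. A sketch with a hypothetical 3 × 5 matrix (assumed for illustration):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1., 3.],
              [2., 4., 1., 3., 7.],
              [1., 2., 1., 2., 4.]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]       # number of basis vectors for null(A)
print(rank, nullity, rank + nullity)   # rank + nullity == 5 == number of columns
```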

7.4. The Dimension Theorem and Its Implications: Extending a Linearly Independent Set to a Basis

Every linearly independent set {v_1, v_2, …, v_k} in R^n can be enlarged to a basis for R^n by adding appropriate linearly independent vectors to it. One way to find such vectors is to form the matrix A that has v_1, v_2, …, v_k as row vectors, thereby making the subspace spanned by these vectors the row space of A. By solving the homogeneous linear system Ax = 0, we can find a basis for the null space of A. This basis has n − k vectors, say w_{k+1}, …, w_n, by the dimension theorem for matrices, and each of the w's is orthogonal to all of the v's, since null(A) and row(A) are orthogonal. This orthogonality implies that the set {v_1, v_2, …, v_k, w_{k+1}, …, w_n} is linearly independent and hence forms a basis for R^n.

7.4. The Dimension Theorem and Its Implications: Extending a Linearly Independent Set to a Basis

Example 2. The vectors v_1 = (1, 3, −1, 1) and v_2 = (0, 1, 1, 6) are linearly independent, since neither vector is a scalar multiple of the other. Enlarge the set {v_1, v_2} to a basis for R^4. Solving Ax = 0 yields two basis vectors for null(A); together with v_1 and v_2 they form a basis for R^4.
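Example 2 can be carried out in code. The sketch below finds the two null-space vectors and confirms that, together with v_1 and v_2, they form a basis for R^4:

```python
from sympy import Matrix

A = Matrix([[1, 3, -1, 1],
            [0, 1,  1, 6]])          # rows are v1 and v2

W = A.nullspace()                    # two vectors, each orthogonal to row(A)
B = Matrix.vstack(A, *[w.T for w in W])
print([w.T for w in W])
print(B.rank())                      # 4, so the four vectors form a basis for R^4
```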

7.4. The Dimension Theorem and Its Implications: Some Consequences of the Dimension Theorem for Matrices

7.4. The Dimension Theorem and Its Implications: Some Consequences of the Dimension Theorem for Matrices

Example 3. Consider a 5 × 7 matrix A with nullity 3. Then rank(A) = 7 − 3 = 4, every row echelon form of A has 5 − 4 = 1 zero row, and the homogeneous system Ax = 0 has 4 pivot variables and 7 − 4 = 3 free variables.

Example 4. Can a 5 × 7 matrix A have a one-dimensional null space? No: this would mean rank(A) = 7 − nullity(A) = 7 − 1 = 6, which is impossible, since the five row vectors of A cannot span a six-dimensional space.

7.4. The Dimension Theorem and Its Implications: The Dimension Theorem for Subspaces

7.4. The Dimension Theorem and Its Implications: A Unifying Theorem

7.4. The Dimension Theorem and Its Implications: More on Hyperplanes

Theorem. If a is a nonzero vector in R^n, then the hyperplane a⊥ is a subspace of dimension n − 1. The following theorem shows that the converse is also true.

7.4. The Dimension Theorem and Its Implications: Rank 1 Matrices

If rank(A) = 1, then nullity(A) = n − 1, so the row space of A is a line through the origin of R^n and the null space is a hyperplane through the origin of R^n. Conversely, if the row space of A is a line through the origin of R^n or, equivalently, if the null space of A is a hyperplane through the origin of R^n, then A has rank 1. Indeed, if rank(A) = 1, then the row space of A is spanned by some nonzero vector a, so all row vectors of A are scalar multiples of a and the null space of A is a⊥. Conversely, if the row vectors of A are all scalar multiples of some nonzero vector a, then A has rank 1 and the null space of A is the hyperplane a⊥.

7.4. The Dimension Theorem and Its Implications: Rank 1 Matrices

Rank 1 matrices arise when outer products uv^T of nonzero column vectors u and v are computed. Such a matrix has rank 1, since all of its row vectors are scalar multiples of the nonzero vector v^T and at least one of the components of u is nonzero.

7.4. The Dimension Theorem and Its Implications: Rank 1 Matrices

Let A be any m × n matrix of rank 1. The row vectors of A are all scalar multiples of some nonzero row vector v^T, so we can express A in the form A = uv^T, where u is the column vector with components u_1, u_2, …, u_m. These components cannot all be zero, for otherwise A would have rank 0.
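A quick numerical illustration of the outer-product description, with hypothetical u and v:

```python
import numpy as np

u = np.array([[1.], [2.], [3.]])   # nonzero column vector in R^3
v = np.array([[4.], [5.]])         # nonzero column vector in R^2

A = u @ v.T                        # 3 x 2 outer product
print(A)                           # every row is a scalar multiple of v^T
print(np.linalg.matrix_rank(A))    # 1
```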

7.4. The Dimension Theorem and Its Implications: Rank 1 Matrices

Example 8.

7.4. The Dimension Theorem and Its Implications: Symmetric Rank 1 Matrices

If u is a nonzero column vector, then the outer product uu^T, in addition to having rank 1, is symmetric.

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Assume that A has rank k, which implies that its reduced row echelon form R has exactly k nonzero row vectors, say r_1, r_2, …, r_k. Since A and R have the same row space by Theorem 7.3.7, it follows that the row vectors a_1, a_2, …, a_m of A can be expressed as linear combinations of the row vectors of R, say as in (10).

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Let a_{ij} be the jth component of a_i, and let r_{ij} be the jth component of r_i. Equating the jth components on the two sides of (10) and rewriting the result in matrix form shows that the jth column vector of A is a linear combination of k fixed column vectors, one for each of r_1, …, r_k. We have thus shown that those k column vectors span the column space of A.

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Thus, the dimension of the column space of A is at most k; that is, dim(col(A)) ≤ dim(row(A)) (11). Applying this result to A^T yields dim(col(A^T)) ≤ dim(row(A^T)), or, equivalently, dim(row(A)) ≤ dim(col(A)) (12). We can conclude from (11) and (12) that dim(row(A)) = dim(col(A)).

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Example 1. For the matrix of Example 4 of Section 7.3, a row echelon form has three nonzero rows, so the row space is three-dimensional, and hence the column space is also three-dimensional.

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Recall that the rank of a matrix A is defined to be the dimension of its row space. If A is an m × n matrix, then rank(A^T) = dim(row(A^T)) = dim(col(A)), which we can rewrite using (3) as rank(A^T) = rank(A). If A is an m × n matrix with rank k, then nullity(A) = n − k and nullity(A^T) = m − k.

7.5. The Rank Theorem and Its Implications: The Rank Theorem

Example 2.

7.5. The Rank Theorem and Its Implications: Relationship Between Consistency and Rank

Example 3. The bad third row in this matrix makes it evident that the system is inconsistent.

7.5. The Rank Theorem and Its Implications: Relationship Between Consistency and Rank

7.5. The Rank Theorem and Its Implications: Relationship Between Consistency and Rank

Since rank(A) + nullity(A) = n, we have nullity(A) = 0 if and only if rank(A) = n.

Example 5.

7.5. The Rank Theorem and Its Implications: Overdetermined and Underdetermined Linear Systems

(a) If m > n, then the column vectors of A cannot span R^m. Thus, there is at least one vector b in R^m that is not a linear combination of the column vectors of A, and for such a b the system Ax = b has no solution.

(b) If m < n, then the column vectors of A must be linearly dependent (n vectors in R^m). This implies that Ax = 0 has infinitely many solutions, and the result follows.

7.5. The Rank Theorem and Its Implications: Matrices of the Form A^T A and AA^T

From Formula (9) of Section 3.6, if A is an m × n matrix with column vectors a_1, a_2, …, a_n, then the (i, j) entry of A^T A is a_i·a_j; and if r_1, r_2, …, r_m are the row vectors of A, then the (i, j) entry of AA^T is r_i·r_j.

7.5. The Rank Theorem and Its Implications: Matrices of the Form A^T A and AA^T

(a) If x_0 is any solution of Ax = 0, then x_0 is also a solution of A^T Ax = 0, since A^T Ax_0 = A^T(Ax_0) = A^T 0 = 0. Conversely, if x_0 is any solution of A^T Ax = 0, then x_0 is in the null space of A^T A and hence is orthogonal to every vector in the row space of A^T A (Theorem 7.3.5). However, A^T A is symmetric, so x_0 is also orthogonal to every vector in the column space of A^T A. In particular, x_0 must be orthogonal to the vector A^T Ax_0, so 0 = x_0·(A^T Ax_0) = (Ax_0)·(Ax_0) = ‖Ax_0‖², and hence x_0 is a solution of Ax = 0.

7.5. The Rank Theorem and Its Implications: Matrices of the Form A^T A and AA^T

(b) Since null(A^T A) = null(A), we have row(A) = null(A)⊥ = null(A^T A)⊥ = row(A^T A).
(c) Since A^T A is symmetric, col(A^T A) = row(A^T A) = row(A) = col(A^T).
(d) rank(A^T A) = n − nullity(A^T A) = n − nullity(A) = rank(A).
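Parts (b) and (d) can be spot-checked numerically; the matrix below is hypothetical and deliberately lacks full column rank:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))  # equal ranks
print(null_space(A).shape[1], null_space(A.T @ A).shape[1])      # equal nullities
```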

7.5. The Rank Theorem and Its Implications: Matrices of the Form A^T A and AA^T

7.5. The Rank Theorem and Its Implications: Some Unifying Theorems

(c) ⟺ (d): Since A^T A is an n × n matrix, A^T A is invertible if and only if A^T A has rank n. As A^T A has the same rank as A, A^T A is invertible if and only if rank(A) = n, that is, if and only if A has full column rank.

7.5. The Rank Theorem and Its Implications: Some Unifying Theorems

Example 7. Since det(A^T A) = 27 ≠ 0, the matrix A has full column rank. Since det(AA^T) = 0, the matrix A does not have full row rank.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

We know from Theorem 7.3.7 that elementary row operations do not change the row space or the null space of a matrix. However, elementary row operations can change the column space of a matrix. Even though a row operation may not preserve the column space, it does preserve the dependency relationships among the column vectors: elementary row operations do not change the linear independence or dependence of column vectors, and in the case of linear dependence they do not change the dependency relationships among column vectors.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

Example 1. Since U has three nonzero rows, A has rank 3, and hence the column space of A is three-dimensional. If we can find three linearly independent column vectors in A, then those vectors will form a basis for the column space of A. For this purpose, focus on the column vectors of U that contain the leading 1's.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

Example 1 (continued). These column vectors are linearly independent, and hence so are the corresponding column vectors of A. Thus, those column vectors of A form a basis for its column space.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

REMARK. We need only observe that the number of pivot columns in a nonzero matrix A is the same as the number of leading 1's in a row echelon form, which is the same as the number of nonzero rows in that row echelon form. It follows that the column space and row space have the same dimension.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

Example 2. Let W be the subspace of R^4 that is spanned by the vectors v_1, v_2, v_3, v_4, v_5. (a) Find a subset of these vectors that forms a basis for W. Row reduction shows that the pivot columns correspond to v_1, v_2, and v_4, so these are the basis vectors for W.

7.6. The Pivot Theorem and Its Implications: Basis Problems Revisited

Example 2 (continued). (b) Express those vectors of S = {v_1, v_2, v_3, v_4, v_5} that are not in the basis as linear combinations of those vectors that are. If we can express v_3 and v_5 as linear combinations of v_1, v_2, and v_4 in the reduced matrix, then those same linear combinations will apply to the corresponding column vectors of A.

7.6. The Pivot Theorem and Its Implications: Bases for the Fundamental Spaces of a Matrix

We have already seen how to find bases for three of the four fundamental spaces of a matrix A by reducing it to a row echelon form U or its reduced row echelon form R:
1. The nonzero rows of U form a basis for row(A).
2. The columns of U with leading 1's identify the pivot columns of A, and those pivot columns form a basis for col(A).
3. The canonical solutions of Ax = 0 form a basis for null(A), and these are readily obtained from the system Rx = 0.

7.6. The Pivot Theorem and Its Implications: Bases for the Fundamental Spaces of a Matrix

An algorithm for finding a basis for null(A^T) by row reduction of A.

7.6. The Pivot Theorem and Its Implications: A Column-Row Factorization

Here E is the product of the elementary matrices that perform the row operations, so (3) holds.

7.6. The Pivot Theorem and Its Implications: A Column-Row Factorization

Suppose that the pivot columns of A (and hence of R_0) have column numbers j_1, j_2, …, j_k. The column vectors of R in those positions are the standard unit vectors in R^k. Thus, (3) implies that the jth pivot column of A is Ce_j, which is the jth column of C. Thus, the successive columns of C are the successive pivot columns of A.

7.6. The Pivot Theorem and Its Implications: A Column-Row Factorization

Example 4.

7.6. The Pivot Theorem and Its Implications: A Column-Row Factorization

Example 5.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines in R^2

From Formula (21) of Section 6.1, the standard matrix P_θ for the orthogonal projection of R^2 onto the line through the origin making an angle θ with the positive x-axis of a rectangular xy-coordinate system can be expressed as

P_θ = [ cos²θ       sin θ cos θ ]
      [ sin θ cos θ sin²θ       ]

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines in R^2

Suppose that we are given a nonzero vector a in R^2, and let us consider how we might compute the orthogonal projection of a vector x onto the line W = span{a} without explicitly calculating θ.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines in R^2

Write x = x_1 + x_2, where x_1 is the orthogonal projection of x onto W, so that x_1 = ka for some scalar k (3). The vector x_2 = x − ka is orthogonal to a, so we must have a·(x − ka) = a·x − k(a·a) = 0. Solving for k and substituting in (3) yields x_1 = (a·x / a·a) a.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines in R^2

Example 1. The vector u = (cos θ, sin θ) is a unit vector along W.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines through the Origin of R^n

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto Lines through the Origin of R^n

Example 2. Let x = (2, −1, 3) and a = (4, −1, 2). Find the vector components of x along a and orthogonal to a.
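The projection formula is a one-liner in code. Applied to Example 2:

```python
import numpy as np

def proj_onto_line(x, a):
    """Orthogonal projection of x onto span{a}, for a nonzero vector a."""
    return (np.dot(a, x) / np.dot(a, a)) * a

x = np.array([2., -1., 3.])
a = np.array([4., -1., 2.])

x1 = proj_onto_line(x, a)   # component of x along a: (20/7, -5/7, 10/7)
x2 = x - x1                 # component of x orthogonal to a
print(x1, x2)
print(np.dot(x2, a))        # 0 up to roundoff, confirming x2 is orthogonal to a
```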

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

Since the vector x in the definition is arbitrary, we can use Formula (11) to define an operator T: R^n → R^n by T(x) = proj_a x = (a·x / a·a) a. This is called the orthogonal projection of R^n onto span{a}.

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

The column vectors of the standard matrix for a linear transformation are the images of the standard basis vectors under the transformation. Thus, if we denote the jth entry of a by a_j, then the jth column of P is T(e_j) = (a·e_j / a·a) a = (a_j / a^T a) a.

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

Accordingly, partitioning P into column vectors yields P = (1 / a^T a) aa^T, which proves (16). Finally, the matrix aa^T is symmetric and has rank 1 (Theorem 7.4.8), so P, being a nonzero scalar multiple of aa^T, must also be symmetric and have rank 1.

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

In particular, we can use a unit vector u along the line, in which case u^T u = ‖u‖² = 1 and the formula for P simplifies to P = uu^T.

Example 4. For u = (cos θ, sin θ), P = uu^T recovers the matrix P_θ above.

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

Example 5. (a) Find the standard matrix P for the orthogonal projection of R^3 onto the line spanned by the vector a = (1, −4, 2).

7.7. The Projection Theorem and Its Implications: Projection Operators on R^n

Example 5 (continued). (b) Use the matrix to find the orthogonal projection of the vector x = (2, −1, 3) onto the line spanned by a. (c) Show that P has rank 1, and interpret this result geometrically. The matrix P has rank 1, since the second and third columns are scalar multiples of the first. This tells us that the column space of P is one-dimensional, which makes sense because the column space is the range of the linear operator represented by P, and we know that this range is a line through the origin.
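Example 5 carried out numerically with the formula P = aa^T / (a^T a):

```python
import numpy as np

a = np.array([[1.], [-4.], [2.]])   # column vector; a^T a = 21
P = (a @ a.T) / (a.T @ a).item()

x = np.array([2., -1., 3.])
print(P @ x)                                        # orthogonal projection of x
print(np.linalg.matrix_rank(P))                     # 1, as part (c) asserts
print(np.allclose(P, P.T), np.allclose(P, P @ P))   # symmetric and idempotent
```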

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

Assume that W ≠ {0}, so that W has a basis. Let {w_1, w_2, …, w_k} be a basis for W, and form the matrix M that has these basis vectors as successive columns, so that W = col(M) and W⊥ = null(M^T). Thus, the proof will be complete if we can show that every vector x in R^n can be expressed in exactly one way as x = x_1 + x_2, where x_1 is in the column space of M and M^T x_2 = 0.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

To say that x_1 is in the column space of M is equivalent to saying that x_1 = Mv for some vector v in R^k, and to say that M^T x_2 = 0 is equivalent to saying that M^T(x − x_1) = 0. Thus, if we can show that the equation M^T(x − Mv) = 0 (21) has a unique solution for v, then x_1 = Mv and x_2 = x − x_1 will be uniquely determined vectors with the required properties. To do this, let us rewrite (21) as M^T M v = M^T x (22). The matrix M in this equation has full column rank, since its column vectors are linearly independent. Thus, M^T M is invertible, so (22) has the unique solution v = (M^T M)⁻¹ M^T x.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

In the special case where W is a line through the origin of R^n, the vectors x_1 and x_2 in this theorem are those given earlier: x_1 is the orthogonal projection of x onto W, and x_2 is the orthogonal projection of x onto W⊥ (24).

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

Formula (25) can be used to define the linear operator on R^n whose standard matrix is P = M(M^T M)⁻¹ M^T. We call this operator the orthogonal projection of R^n onto W.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

Example 6. (a) Find the standard matrix P for the orthogonal projection of R^3 onto the plane x − 4y + 2z = 0. (b) Use the matrix P to find the orthogonal projection of the vector x = (1, 0, 4) onto the plane.

Solution of (a): the two column vectors obtained from the general solution of the plane's equation form a basis for the solution space, so we take them as the columns of the matrix M.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto General Subspaces

Example 6 (continued), part (b).
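Example 6 in code. Setting y = 1, z = 0 and y = 0, z = 1 in x = 4y − 2z gives the basis vectors (4, 1, 0) and (−2, 0, 1) for the plane, which are taken as the columns of M:

```python
import numpy as np

M = np.array([[4., -2.],
              [1.,  0.],
              [0.,  1.]])              # basis for the plane x - 4y + 2z = 0

P = M @ np.linalg.inv(M.T @ M) @ M.T   # standard matrix of the projection

x = np.array([1., 0., 4.])
print(P @ x)                           # orthogonal projection of x onto the plane
print(np.dot(P @ x, [1., -4., 2.]))    # 0 up to roundoff: the image lies in the plane
```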

7.7. The Projection Theorem and Its Implications: When Does a Matrix Represent an Orthogonal Projection?

Since W is k-dimensional, the column space of P must be k-dimensional, and P must have rank k. If M is any n × k matrix whose column vectors form a basis for W, then P^T = (M(M^T M)⁻¹M^T)^T = M(M^T M)⁻¹M^T = P, so P must be symmetric. Moreover, P² = M(M^T M)⁻¹(M^T M)(M^T M)⁻¹M^T = M(M^T M)⁻¹M^T = P, so P must be the same as its square; that is, P is idempotent. This makes sense intuitively, since the orthogonal projection of R^n onto W leaves vectors in W unchanged.

7.7. The Projection Theorem and Its Implications: When Does a Matrix Represent an Orthogonal Projection?

Example 8. Show that the given matrix A is the standard matrix for an orthogonal projection of R^3 onto a line through the origin, and find the line. The matrix A is symmetric, idempotent, and has rank 1. The line is the column space of A, and since the second and third column vectors are scalar multiples of the first, we can take the first column vector of A as a basis for the line. Thus, the line can be expressed parametrically in xyz-coordinates.
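The symmetric-and-idempotent test is easy to automate. The matrix below is a hypothetical reconstruction consistent with the description of Example 8 (symmetric, idempotent, rank 1, with the second and third columns scalar multiples of the first); the matrix actually printed in the book is not reproduced in this transcription.

```python
import numpy as np

def is_orthogonal_projection(P, tol=1e-12):
    """Symmetric and idempotent is the stated test for an orthogonal projection."""
    return np.allclose(P, P.T, atol=tol) and np.allclose(P, P @ P, atol=tol)

# Hypothetical stand-in: the projection onto the line spanned by u = (1, 2, 2).
A = np.array([[1., 2., 2.],
              [2., 4., 4.],
              [2., 4., 4.]]) / 9.0   # uu^T / (u^T u), with u^T u = 9

print(is_orthogonal_projection(A))   # True
print(np.linalg.matrix_rank(A))      # 1: a projection onto a line
```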

7.7. The Projection Theorem and Its Implications: Strang Diagrams

Suppose that A is an m × n matrix, so that Ax = b is a linear system of m equations in n unknowns. Since x is a vector in R^n, we can apply Formula (24) with W = row(A) and W⊥ = null(A) to express x as a sum of two orthogonal terms. Similarly, since b is a vector in R^m, we can apply Formula (24) to b with W = col(A) and W⊥ = null(A^T) to express b as a sum of two orthogonal terms.

7.7. The Projection Theorem and Its Implications: Strang Diagrams

The decompositions can be pictured as in the accompanying figure, in which we have represented the fundamental spaces of A as perpendicular lines. This is called a Strang diagram.

7.7. The Projection Theorem and Its Implications: Strang Diagrams

From Theorem 3.5.5, the system Ax = b is consistent if and only if b is in the column space of A, that is, if and only if proj_{null(A^T)} b = 0.

7.7. The Projection Theorem and Its Implications: Full Column Rank and Consistency of a Linear System

If A has full column rank, then the system Ax = b is either inconsistent or has a unique solution. But if b is in the column space of A, the system must be consistent (Theorem 3.5.5) and hence has a unique solution. If A does not have full column rank, then Ax = 0 has infinitely many solutions, and hence so does Ax = b whenever it is consistent.

7.7. The Projection Theorem and Its Implications: Full Column Rank and Consistency of a Linear System

In either case, if x is a solution, then x can be split uniquely into a sum of two orthogonal terms, x = x_{row(A)} + x_{null(A)} (33), where x_{row(A)} is in row(A) and x_{null(A)} is in null(A). Thus, b = Ax = Ax_{row(A)} + Ax_{null(A)} = Ax_{row(A)}, which shows that x_{row(A)} is a solution of the system Ax = b. In the case where A has full column rank, this is the only solution of the system [which proves part (a)]; in the case where A does not have full column rank, it shows that there exists at least one solution in the row space of A.

7.7. The Projection Theorem and Its Implications: Full Column Rank and Consistency of a Linear System

To see in the latter case that there is only one solution in the row space of A, suppose that x_r and x_r′ are two such solutions. Then A(x_r − x_r′) = Ax_r − Ax_r′ = b − b = 0, which implies that x_r − x_r′ is in null(A). However, x_r − x_r′ also lies in row(A) = null(A)⊥, and the only vector in both a subspace and its orthogonal complement is 0, so x_r − x_r′ = 0 and hence x_r = x_r′. Thus, there is a unique solution of Ax = b in the row space of A.

7.7. The Projection Theorem and Its Implications: Full Column Rank and Consistency of a Linear System

Finally, if (33) is any solution of the system, then the theorem of Pythagoras implies that ‖x‖² = ‖x_{row(A)}‖² + ‖x_{null(A)}‖² ≥ ‖x_{row(A)}‖², which shows that the solution x_{row(A)} in the row space of A has minimum norm.

7.7. The Projection Theorem and Its Implications: Full Column Rank and Consistency of a Linear System

A has full column rank if and only if null(A) = {0}.

7.7. The Projection Theorem and Its Implications: The Double Perp Theorem

Let w be any vector in W. Every vector in W⊥ is orthogonal to w, and this implies that w is in (W⊥)⊥. Conversely, let w be any vector in (W⊥)⊥. It can be expressed uniquely as w = w_1 + w_2, where w_1 is a vector in W and w_2 is a vector in W⊥. Since w belongs to (W⊥)⊥ and w_2 belongs to W⊥, we have w_2·w = 0, so 0 = w_2·w = w_2·(w_1 + w_2) = w_2·w_1 + w_2·w_2 = ‖w_2‖². Thus w_2 = 0, and w = w_1 lies in W.

7.7. The Projection Theorem and Its Implications: Orthogonal Projections onto W⊥

Given a subspace W of R^n, the standard matrix for the orthogonal projection proj_W is P = M(M^T M)⁻¹M^T, where the column vectors of M form a basis for W; the standard matrix for the orthogonal projection onto W⊥ is then I − P.

7.8. Best Approximation and Least Squares: Minimum Distance Problems

We will be concerned here with the following problem.

7.8. Best Approximation and Least Squares: Minimum Distance Problems

The point ŵ in W that is closest to b is obtained by dropping a perpendicular from b to W; that is, ŵ = proj_W b. It follows from this that the distance from b to W is ‖b − proj_W b‖ or, equivalently, ‖proj_{W⊥} b‖, where W⊥ is the line through the origin that is perpendicular to W.

7.8. Best Approximation and Least Squares: Minimum Distance Problems

Example 1. Use an appropriate orthogonal projection to find a formula for the distance d from the point (x_0, y_0, z_0) to the plane ax + by + cz = 0. Let b = (x_0, y_0, z_0), let W be the given plane, and let l be the line through the origin that is perpendicular to W (i.e., l is W⊥). The line l is spanned by the normal n = (a, b, c), and hence d = ‖proj_n b‖ = |ax_0 + by_0 + cz_0| / √(a² + b² + c²).

7.8. Best Approximation and Least Squares: Minimum Distance Problems

The distance from a point b to a subspace W of R^n is defined as d(b, W) = ‖b − proj_W b‖ or, equivalently, d(b, W) = ‖proj_{W⊥} b‖.

Example 2. The distance d from a point b = (b_1, b_2, …, b_n) in R^n to the hyperplane a_1x_1 + a_2x_2 + ⋯ + a_nx_n = 0 is d = |a_1b_1 + a_2b_2 + ⋯ + a_nb_n| / √(a_1² + a_2² + ⋯ + a_n²).

7.8. Best Approximation and Least Squares: Least Squares Solutions of Linear Systems

There are many applications in which a linear system Ax = b should be consistent on theoretical grounds but fails to be so because of measurement errors in the entries of A or b.

REMARK. Let b − Ax = (e_1, e_2, …, e_m). A best approximation solution minimizes ‖b − Ax‖, and therefore also minimizes ‖b − Ax‖² = e_1² + e_2² + ⋯ + e_m², which is the sum of the squares of the errors in the components; hence the term least squares solution.

7.8. Best Approximation and Least Squares: Finding Least Squares Solutions of Linear Systems

Let us find least squares solutions of a linear system Ax = b of m equations in n unknowns. To start, observe that Ax is in the column space of A for all x in R^n, so ‖b − Ax‖ is minimized when Ax = proj_{col(A)} b (7). Since proj_{col(A)} b is a vector in the column space of A, system (7) is consistent and its solutions are the least squares solutions of Ax = b. Thus, we are guaranteed that every linear system Ax = b has at least one least squares solution.

7.8. Best Approximation and Least Squares: Finding Least Squares Solutions of Linear Systems

Equation (7) says that b − Ax lies in the orthogonal complement of col(A) (9). Since the orthogonal complement of col(A) is null(A^T), (9) can be rewritten as A^T(b − Ax) = 0 or, alternatively, as A^T Ax = A^T b. This is called the normal equation or normal system associated with Ax = b. Using this terminology, the problem of finding least squares solutions of Ax = b has been reduced to solving the associated normal system exactly.
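Solving the normal system in code, on hypothetical data (the systems of the book's examples are not reproduced in this transcription):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([6., 0., 0.])

x = np.linalg.solve(A.T @ A, A.T @ b)        # normal system A^T A x = A^T b
print(x)                                     # unique, since A has full column rank
print(np.linalg.lstsq(A, b, rcond=None)[0])  # the library routine agrees
```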

7.8. Best Approximation and Least Squares: Finding Least Squares Solutions of Linear Systems

7.8. Best Approximation and Least Squares: Finding Least Squares Solutions of Linear Systems

(b) If A has full column rank, then A^T A is invertible, so (12) gives the unique solution of (11). (c) If A does not have full column rank, then A^T A is not invertible, so (11) is a consistent linear system whose coefficient matrix does not have full column rank. This being the case, (11) has infinitely many solutions but has a unique solution in the row space of A^T A; moreover, the solution in the row space is the solution with smallest norm. However, the row space of A^T A is the same as the row space of A (Theorem 7.5.8).

7.8. Best Approximation and Least Squares: Finding Least Squares Solutions of Linear Systems

Example 3. Find the least squares solutions of the given linear system.

7.8. Best Approximation and Least Squares: Orthogonality Property of Least Squares Error Vectors

Consider a linear system Ax = b. From Formula (30) of Section 7.7, b = proj_{col(A)} b + proj_{null(A^T)} b (13). We know from (7) that x̂ is a least squares solution of Ax = b if and only if Ax̂ = proj_{col(A)} b which, together with (13), implies that x̂ is a least squares solution of Ax = b if and only if the least squares error vector is b − Ax̂ = proj_{null(A^T)} b, in which case the least squares error is ‖b − Ax̂‖ = ‖proj_{null(A^T)} b‖.

7.8. Best Approximation and Least Squares: Orthogonality Property of Least Squares Error Vectors

7.8. Best Approximation and Least Squares: Orthogonality Property of Least Squares Error Vectors

Example 4. Find the least squares solutions and the least squares error for the given linear system. Since it is not evident by inspection whether A has full column rank, we simply proceed by solving the associated normal system A^T Ax = A^T b.

7.8. Best Approximation and Least Squares: Orthogonality Property of Least Squares Error Vectors

Example 4 (continued). Thus, there are infinitely many least squares solutions, and they are given by a one-parameter family in t.

7.8. Best Approximation and Least Squares: Orthogonality Property of Least Squares Error Vectors

Example 4 (continued). Since b − Ax does not depend on t, all least squares solutions produce the same error vector, and the least squares error is the norm of that vector.

7.8. Best Approximation and Least Squares: Strang Diagrams for Least Squares Problems

The vector b can be split into a sum of orthogonal terms as b = proj_{col(A)} b + proj_{null(A^T)} b. Each least squares solution x̂ satisfies Ax̂ = proj_{col(A)} b, so the error vector is b − Ax̂ = proj_{null(A^T)} b. One diagram covers the case when A does not have full column rank, the other the case when A has full column rank.

7.8. Best Approximation and Least Squares: Fitting a Curve to Experimental Data

Let x and y be given variables, and assume that there is evidence to suggest that the variables are related by a linear equation y = a + bx, where a and b are to be determined from two or more data points (x_1, y_1), (x_2, y_2), …, (x_n, y_n). If the x-coordinates of the data points are not all the same, then M will have rank 2, and the normal system will have a unique least squares solution. The resulting line y = a + bx is called the least squares line of best fit to the data or, alternatively, the regression line.

7.8. Best Approximation and Least Squares: Fitting a Curve to Experimental Data

The normal system can be expressed in terms of the coordinates of the data points as

n a + (x_1 + ⋯ + x_n) b = y_1 + ⋯ + y_n
(x_1 + ⋯ + x_n) a + (x_1² + ⋯ + x_n²) b = x_1y_1 + ⋯ + x_ny_n
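A least squares line fit via the normal system, on hypothetical data points:

```python
import numpy as np

xs = np.array([0., 1., 2., 3.])
ys = np.array([1., 3., 4., 4.])

M = np.column_stack([np.ones_like(xs), xs])   # columns correspond to a and b in y = a + bx
a, b = np.linalg.solve(M.T @ M, M.T @ ys)     # normal system M^T M v = M^T y
print(a, b)                                   # intercept and slope of the regression line
```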

7.8. Best Approximation and Least Squares: Least Squares Fits by Higher-Degree Polynomials

Suppose that we want to find a polynomial of the form y = a_0 + a_1x + ⋯ + a_mx^m whose graph comes as close as possible to passing through n known data points (x_1, y_1), (x_2, y_2), …, (x_n, y_n).

7.8. Best Approximation and Least Squares: Least Squares Fits by Higher-Degree Polynomials

Example 7.

7.9. Orthonormal Bases and the Gram-Schmidt Process: Orthogonal and Orthonormal Bases

A basis for a subspace of R^n consisting of orthogonal vectors is called an orthogonal basis; one consisting of orthonormal vectors is called an orthonormal basis.

Example 1. The given vectors are linearly independent, so they form a basis for R^3; since they are orthonormal, {q_1, q_2, q_3} is an orthonormal basis.

7.9. Orthonormal Bases and the Gram-Schmidt Process: Orthogonal and Orthonormal Bases

Let S = {v_1, v_2, …, v_k} be an orthogonal set of nonzero vectors in R^n. We show that the only scalars satisfying the vector equation t_1v_1 + t_2v_2 + ⋯ + t_kv_k = 0 are t_1 = 0, t_2 = 0, …, t_k = 0. Let v_j be any vector in S, and take the dot product of both sides with v_j; orthogonality leaves t_j(v_j·v_j) = 0. Since v_j·v_j ≠ 0, it follows that t_j = 0.

7.9. Orthonormal Bases and the Gram-Schmidt Process: Orthogonal Projections Using Orthonormal Bases

If W is a nonzero subspace of R^n, and if x is a vector in R^n expressed in column form, then proj_W x = M(M^T M)⁻¹M^T x for any matrix M whose column vectors form a basis for W. In particular, if the column vectors of M are orthonormal, then M^T M = I, so proj_W x = MM^T x, and Formula (27) of Section 7.7 for the standard matrix of this orthogonal projection simplifies to P = MM^T. Thus, using an orthonormal basis for W eliminates the matrix inversion in the projection formulas and reduces the calculation of an orthogonal projection to matrix multiplication.

7.9. Orthonormal Bases and the Gram-Schmidt Process: Orthogonal Projections Using Orthonormal Bases

7.9. Orthonormal Bases and the Gram-Schmidt Process: Orthogonal Projections Using Orthonormal Bases

Example 5. Find the orthogonal projection of x = (1, 1, 1) onto the plane W in R^3 that is spanned by the orthonormal vectors v_1 = (0, 1, 0) and v_2 = (−4/5, 0, 3/5).

7.9. Orthonormal Bases and the Gram-Schmidt Process: Trace and Orthogonal Projections

Suppose that P is the standard matrix for an orthogonal projection of R^n onto a k-dimensional subspace W. If we let {v_1, v_2, …, v_k} be an orthonormal basis for W, then it follows from Formula (6) that tr(P) = k. But the range of a matrix transformation is the column space of the matrix, so it follows from this computation that tr(P) = dim(col(P)) = rank(P).

7.9. Orthonormal Bases and the Gram-Schmidt Process: Linear Combinations of Orthonormal Basis Vectors

7.9. Orthonormal Bases and the Gram-Schmidt Process: Linear Combinations of Orthonormal Basis Vectors

Example 8. Express the vector w = (1, 1, 1) in R^3 as a linear combination of the given orthonormal basis vectors.

7.9. Orthonormal Bases and the Gram-Schmidt Process: Finding Orthogonal and Orthonormal Bases

Let W be a nonzero subspace of R^n, and let {w_1, w_2, …, w_k} be any basis for W. To prove that W has an orthonormal basis, it suffices to show that W has an orthogonal basis, since such a basis can then be converted into an orthonormal basis by normalizing the vectors. The following sequence of steps will produce an orthogonal basis {v_1, v_2, …, v_k} for W:

Step 1. Let v_1 = w_1.
Step 2. As illustrated in Figure 7.9.2, we can obtain a vector v_2 that is orthogonal to v_1 by computing the component of w_2 that is orthogonal to the subspace W_1 spanned by v_1. Because of the linear independence of the basis vectors w_1, w_2, …, w_k, we have v_2 ≠ 0. (13)

7.9. Orthonormal Bases and the Gram-Schmidt Process: Finding Orthogonal and Orthonormal Bases

Step 3. To obtain a vector v_3 that is orthogonal to v_1 and v_2, we compute the component of w_3 that is orthogonal to the subspace W_2 spanned by v_1 and v_2.
Steps 4 to k. Continuing in this way produces an orthogonal set {v_1, v_2, …, v_k} after k steps.

The proof of this theorem provides an algorithm, called the Gram-Schmidt orthogonalization process, for converting an arbitrary basis for a subspace of R^n into an orthogonal basis for the subspace. If the resulting orthogonal vectors are normalized to produce an orthonormal basis for the subspace, then the algorithm is called the Gram-Schmidt process. A sketch of the algorithm in code follows.
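This is a direct transcription of the steps above into Python; each v_i is w_i minus its components along the previously constructed vectors, and the vectors are normalized as they are produced:

```python
import numpy as np

def gram_schmidt(ws):
    """Orthonormal basis for span(ws); ws must be linearly independent."""
    qs = []
    for w in ws:
        v = np.asarray(w, dtype=float)
        for q in qs:
            v = v - np.dot(v, q) * q       # remove the component along q
        qs.append(v / np.linalg.norm(v))   # v != 0 by linear independence
    return qs

# Example 10: vectors in the plane x + y + z = 0
q1, q2 = gram_schmidt([(-1., 1., 0.), (-1., 0., 1.)])
print(q1, q2)
print(np.dot(q1, q2))                      # 0 up to roundoff: an orthonormal basis
```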

7.9. Orthonormal Bases and the Gram-Schmidt Process: Finding Orthogonal and Orthonormal Bases

Example 10. Use the Gram-Schmidt process to construct an orthonormal basis for the plane x + y + z = 0 in R^3. Writing the plane in parametric form as x = −t_1 − t_2, y = t_1, z = t_2, the parameter values t_1 = 1, t_2 = 0 and t_1 = 0, t_2 = 1 produce the vectors w_1 = (−1, 1, 0) and w_2 = (−1, 0, 1).

7.9. Orthonormal Bases and the Gram-Schmidt Process: A Property of the Gram-Schmidt Process

7.9. Orthonormal Bases and the Gram-Schmidt Process: Extending Orthonormal Sets to Orthonormal Bases

7.10. QR-Decomposition; Householder Transformations: QR-Decomposition

It follows from the preceding theorem that the column vectors of A are expressible in terms of the column vectors of Q.

7.10. QR-Decomposition; Householder Transformations: QR-Decomposition

Let us now form the upper triangular matrix R whose entries are the coefficients in those expressions, and consider the product QR.

7.10. QR-Decomposition; Householder Transformations: QR-Decomposition

The theorem above guarantees that every matrix A with full column rank has a QR-decomposition, and this is true, in particular, if A is invertible. The fact that Q has orthonormal columns implies that Q^T Q = I, so multiplying both sides of (4) by Q^T on the left yields R = Q^T A (5). Thus, one method for finding a QR-decomposition of a matrix A with full column rank is to apply the Gram-Schmidt process to the column vectors of A, form the matrix Q from the resulting orthonormal basis vectors, and then find R from (5).

7.10. QR-Decomposition; Householder Transformations: QR-Decomposition

Example 1. Find a QR-decomposition of the given matrix A. The matrix A has full column rank, so it is guaranteed to have a QR-decomposition. Applying the Gram-Schmidt process to the column vectors of A and forming the matrix Q that has the resulting orthonormal basis vectors as columns yields Q.

7.10. QR-Decomposition; Householder Transformations: QR-Decomposition

Example 1 (continued). It follows from (5) that R = Q^T A, from which we obtain the QR-decomposition A = QR.
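For comparison, the library QR routine (the matrix of Example 1 is not reproduced in this transcription, so a hypothetical full-column-rank matrix stands in; the signs of the columns of Q may differ from the Gram-Schmidt construction):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 1.]])      # hypothetical matrix with full column rank

Q, R = np.linalg.qr(A)        # reduced QR: Q has orthonormal columns, R is upper triangular
print(Q, R, sep="\n")
print(np.allclose(A, Q @ R))  # True
```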

7.10. QR-Decomposition; Householder Transformations: The Role of QR-Decomposition in Least Squares Problems

Although fine in theory, slight roundoff errors in the entries of A are often magnified in computing the entries of A^T A. Thus, most computer algorithms for finding least squares solutions use methods that avoid computing the matrix A^T A. One way to do this when A has full column rank is to use a QR-decomposition A = QR to rewrite the normal equation A^T Ax = A^T b as R^T Q^T QRx = R^T Q^T b and use the fact that Q^T Q = I to rewrite this as R^T Rx = R^T Q^T b (8). It follows from the definition of QR-decomposition that R, and hence R^T, is invertible, so we can multiply both sides of (8) on the left by (R^T)⁻¹ to obtain the following result: Rx = Q^T b.
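The resulting recipe Rx = Q^T b in code, reusing the hypothetical data from the normal-equations sketch above; back substitution on the triangular system replaces any computation with A^T A:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([6., 0., 0.])

Q, R = np.linalg.qr(A)
x = solve_triangular(R, Q.T @ b)   # back substitution on R x = Q^T b
print(x)                           # matches the normal-equations solution
```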

7.10. QR-Decomposition; Householder Transformations: The Role of QR-Decomposition in Least Squares Problems

7.10. QR-Decomposition; Householder Transformations: Householder Reflections

There are also numerical difficulties that arise when the Gram-Schmidt process is used to construct a QR-decomposition, the problem being that slight roundoff errors in the entries of A can produce a severe loss of orthogonality in the computed vectors of Q. The more common approach is to compute the QR-decomposition without using the Gram-Schmidt process at all. If a is a nonzero vector in R^2 or R^3, then there is a simple relationship between the orthogonal projection onto the line span{a} and the reflection about the hyperplane a⊥, as illustrated by the figure for R^3.


More information

Chapter SSM: Linear Algebra. 5. Find all x such that A x = , so that x 1 = x 2 = 0.

Chapter SSM: Linear Algebra. 5. Find all x such that A x = , so that x 1 = x 2 = 0. Chapter Find all x such that A x : Chapter, so that x x ker(a) { } Find all x such that A x ; note that all x in R satisfy the equation, so that ker(a) R span( e, e ) 5 Find all x such that A x 5 ; x x

More information

Linear equations in linear algebra

Linear equations in linear algebra Linear equations in linear algebra Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra Pearson Collections Samy T. Linear

More information

Chapter 3. Vector spaces

Chapter 3. Vector spaces Chapter 3. Vector spaces Lecture notes for MA1111 P. Karageorgis pete@maths.tcd.ie 1/22 Linear combinations Suppose that v 1,v 2,...,v n and v are vectors in R m. Definition 3.1 Linear combination We say

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

Lecture 21: 5.6 Rank and Nullity

Lecture 21: 5.6 Rank and Nullity Lecture 21: 5.6 Rank and Nullity Wei-Ta Chu 2008/12/5 Rank and Nullity Definition The common dimension of the row and column space of a matrix A is called the rank ( 秩 ) of A and is denoted by rank(a);

More information

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.

MATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible. MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:

More information

Chapter 1: Systems of Linear Equations

Chapter 1: Systems of Linear Equations Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Orthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6

Orthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6 Chapter 6 Orthogonality 6.1 Orthogonal Vectors and Subspaces Recall that if nonzero vectors x, y R n are linearly independent then the subspace of all vectors αx + βy, α, β R (the space spanned by x and

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

Worksheet for Lecture 25 Section 6.4 Gram-Schmidt Process

Worksheet for Lecture 25 Section 6.4 Gram-Schmidt Process Worksheet for Lecture Name: Section.4 Gram-Schmidt Process Goal For a subspace W = Span{v,..., v n }, we want to find an orthonormal basis of W. Example Let W = Span{x, x } with x = and x =. Give an orthogonal

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

18.06SC Final Exam Solutions

18.06SC Final Exam Solutions 18.06SC Final Exam Solutions 1 (4+7=11 pts.) Suppose A is 3 by 4, and Ax = 0 has exactly 2 special solutions: 1 2 x 1 = 1 and x 2 = 1 1 0 0 1 (a) Remembering that A is 3 by 4, find its row reduced echelon

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

PRACTICE PROBLEMS FOR THE FINAL

PRACTICE PROBLEMS FOR THE FINAL PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

March 27 Math 3260 sec. 56 Spring 2018

March 27 Math 3260 sec. 56 Spring 2018 March 27 Math 3260 sec. 56 Spring 2018 Section 4.6: Rank Definition: The row space, denoted Row A, of an m n matrix A is the subspace of R n spanned by the rows of A. We now have three vector spaces associated

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Math 3C Lecture 25. John Douglas Moore

Math 3C Lecture 25. John Douglas Moore Math 3C Lecture 25 John Douglas Moore June 1, 2009 Let V be a vector space. A basis for V is a collection of vectors {v 1,..., v k } such that 1. V = Span{v 1,..., v k }, and 2. {v 1,..., v k } are linearly

More information

GEOMETRY OF MATRICES x 1

GEOMETRY OF MATRICES x 1 GEOMETRY OF MATRICES. SPACES OF VECTORS.. Definition of R n. The space R n consists of all column vectors with n components. The components are real numbers... Representation of Vectors in R n.... R. The

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Solutions to Math 51 First Exam April 21, 2011

Solutions to Math 51 First Exam April 21, 2011 Solutions to Math 5 First Exam April,. ( points) (a) Give the precise definition of a (linear) subspace V of R n. (4 points) A linear subspace V of R n is a subset V R n which satisfies V. If x, y V then

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work. Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Practice Final Exam. Solutions.

Practice Final Exam. Solutions. MATH Applied Linear Algebra December 6, 8 Practice Final Exam Solutions Find the standard matrix f the linear transfmation T : R R such that T, T, T Solution: Easy to see that the transfmation T can be

More information

Rank and Nullity. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics

Rank and Nullity. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics Rank and Nullity MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Objectives We have defined and studied the important vector spaces associated with matrices (row space,

More information

MATH 2331 Linear Algebra. Section 1.1 Systems of Linear Equations. Finding the solution to a set of two equations in two variables: Example 1: Solve:

MATH 2331 Linear Algebra. Section 1.1 Systems of Linear Equations. Finding the solution to a set of two equations in two variables: Example 1: Solve: MATH 2331 Linear Algebra Section 1.1 Systems of Linear Equations Finding the solution to a set of two equations in two variables: Example 1: Solve: x x = 3 1 2 2x + 4x = 12 1 2 Geometric meaning: Do these

More information

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental

More information

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left

More information

1 Systems of equations

1 Systems of equations Highlights from linear algebra David Milovich, Math 2 TA for sections -6 November, 28 Systems of equations A leading entry in a matrix is the first (leftmost) nonzero entry of a row. For example, the leading

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Vector Spaces, Orthogonality, and Linear Least Squares

Vector Spaces, Orthogonality, and Linear Least Squares Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ

More information

4. Linear Subspaces Addition and scaling

4. Linear Subspaces Addition and scaling 71 4 Linear Subspaces There are many subsets of R n which mimic R n For example, a plane L passing through the origin in R 3 actually mimics R 2 in many ways First, L contains zero vector O as R 2 does

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

Spring 2014 Math 272 Final Exam Review Sheet

Spring 2014 Math 272 Final Exam Review Sheet Spring 2014 Math 272 Final Exam Review Sheet You will not be allowed use of a calculator or any other device other than your pencil or pen and some scratch paper. Notes are also not allowed. In kindness

More information

Department of Aerospace Engineering AE602 Mathematics for Aerospace Engineers Assignment No. 4

Department of Aerospace Engineering AE602 Mathematics for Aerospace Engineers Assignment No. 4 Department of Aerospace Engineering AE6 Mathematics for Aerospace Engineers Assignment No.. Decide whether or not the following vectors are linearly independent, by solving c v + c v + c 3 v 3 + c v :

More information

MAT 242 CHAPTER 4: SUBSPACES OF R n

MAT 242 CHAPTER 4: SUBSPACES OF R n MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)

More information

MA 265 FINAL EXAM Fall 2012

MA 265 FINAL EXAM Fall 2012 MA 265 FINAL EXAM Fall 22 NAME: INSTRUCTOR S NAME:. There are a total of 25 problems. You should show work on the exam sheet, and pencil in the correct answer on the scantron. 2. No books, notes, or calculators

More information

Exam in TMA4110 Calculus 3, June 2013 Solution

Exam in TMA4110 Calculus 3, June 2013 Solution Norwegian University of Science and Technology Department of Mathematical Sciences Page of 8 Exam in TMA4 Calculus 3, June 3 Solution Problem Let T : R 3 R 3 be a linear transformation such that T = 4,

More information

MATH 2030: ASSIGNMENT 4 SOLUTIONS

MATH 2030: ASSIGNMENT 4 SOLUTIONS MATH 23: ASSIGNMENT 4 SOLUTIONS More on the LU factorization Q.: pg 96, q 24. Find the P t LU factorization of the matrix 2 A = 3 2 2 A.. By interchanging row and row 4 we get a matrix that may be easily

More information

Math 314/814 Topics for first exam

Math 314/814 Topics for first exam Chapter 2: Systems of linear equations Math 314/814 Topics for first exam Some examples Systems of linear equations: 2x 3y z = 6 3x + 2y + z = 7 Goal: find simultaneous solutions: all x, y, z satisfying

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Math 54 HW 4 solutions

Math 54 HW 4 solutions Math 54 HW 4 solutions 2.2. Section 2.2 (a) False: Recall that performing a series of elementary row operations A is equivalent to multiplying A by a series of elementary matrices. Suppose that E,...,

More information

Chapter 2. General Vector Spaces. 2.1 Real Vector Spaces

Chapter 2. General Vector Spaces. 2.1 Real Vector Spaces Chapter 2 General Vector Spaces Outline : Real vector spaces Subspaces Linear independence Basis and dimension Row Space, Column Space, and Nullspace 2 Real Vector Spaces 2 Example () Let u and v be vectors

More information

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element

More information

DS-GA 1002 Lecture notes 10 November 23, Linear models

DS-GA 1002 Lecture notes 10 November 23, Linear models DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.

More information

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x

More information

6. Orthogonality and Least-Squares

6. Orthogonality and Least-Squares Linear Algebra 6. Orthogonality and Least-Squares CSIE NCU 1 6. Orthogonality and Least-Squares 6.1 Inner product, length, and orthogonality. 2 6.2 Orthogonal sets... 8 6.3 Orthogonal projections... 13

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2

σ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2 HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all203 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v,

More information

LECTURE 6: VECTOR SPACES II (CHAPTER 3 IN THE BOOK)

LECTURE 6: VECTOR SPACES II (CHAPTER 3 IN THE BOOK) LECTURE 6: VECTOR SPACES II (CHAPTER 3 IN THE BOOK) In this lecture, F is a fixed field. One can assume F = R or C. 1. More about the spanning set 1.1. Let S = { v 1, v n } be n vectors in V, we have defined

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Linear Algebra Final Exam Study Guide Solutions Fall 2012

Linear Algebra Final Exam Study Guide Solutions Fall 2012 . Let A = Given that v = 7 7 67 5 75 78 Linear Algebra Final Exam Study Guide Solutions Fall 5 explain why it is not possible to diagonalize A. is an eigenvector for A and λ = is an eigenvalue for A diagonalize

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

Matrix Algebra for Engineers Jeffrey R. Chasnov

Matrix Algebra for Engineers Jeffrey R. Chasnov Matrix Algebra for Engineers Jeffrey R. Chasnov The Hong Kong University of Science and Technology The Hong Kong University of Science and Technology Department of Mathematics Clear Water Bay, Kowloon

More information