Exam #, Math 6, Fall, Morning Class, Nov. 9. ANSWERS
5 pts. Problem 1. In each part you are given the augmented matrix of a system of linear equations, with the coefficient matrix in reduced row echelon form. Determine if the system is consistent and, if it is consistent, find all solutions.

A. The system is inconsistent because of the last row, which has all zeros in the coefficient matrix and a nonzero entry on the right-hand side.

B. The system is consistent. Call the variables x, y and z. There are no free variables, and the three rows correspond to the equations z = 5, y = __ and x = __. Thus, the system has a unique solution, which is (x, y, z) = (__, __, 5).

C. The system is consistent. Call the variables x_1, ..., x_5. Then x_1, x_3 and x_5 are leading variables and x_2 and x_4 are free variables, say x_2 = α and x_4 = β. Reading the rows of the matrix from the bottom up, we have the equations

x_5 = 5
x_3 - x_4 = __ =⇒ x_3 = __ + x_4 = __ + β
x_1 + x_2 + x_4 = __ =⇒ x_1 = __ - x_2 - x_4 = __ - α - β
Thus, the system has a two-parameter family of solutions given by

(x_1, x_2, x_3, x_4, x_5) = (__ - α - β, α, __ + β, β, 5).

4 pts. Problem 2. Find the following determinant by the method of elimination, i.e., by using row operations and keeping track of the effect of the row operations on the determinant. Sorry, no credit for finding it by another method. The point is to keep track of the signs and factors introduced by the row operations.

Recall that if the matrix A is transformed into B by a row operation of type I, i.e., a swap R_i <-> R_j, then det(A) = -det(B). If A is transformed into B by the type II operation R_i -> mR_i, then det(A) = (1/m) det(B). Finally, if A is transformed into B by an operation of type III (R_i -> R_i + mR_j), then det(A) = det(B). Thus, to compute the determinant in this problem, we apply a sequence of type I and type III operations to reduce the matrix to upper triangular form, adjust the determinant by the sign changes introduced along the way, and multiply the diagonal elements of the triangular matrix. Of course, other sequences of row operations would yield the same result.
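The bookkeeping described above can be sketched in pure Python. Here `det_by_elimination` is a hypothetical helper (not part of the exam), using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Determinant by elimination: reduce to upper triangular form while
    tracking how each row operation changes the determinant.

    Type I (swap R_i <-> R_j) flips the sign; type III (R_i -> R_i + m*R_j)
    changes nothing, so only a sign needs to be tracked here.
    """
    A = [[Fraction(x) for x in row] for row in rows]
    n = len(A)
    sign = 1
    for col in range(n):
        # Find a pivot row at or below the diagonal; swap if needed (type I).
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot in this column: determinant is 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign                # a type I operation flips the sign
        # Type III operations clear the entries below the pivot (no effect on det).
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    # The matrix is now upper triangular: multiply the diagonal elements.
    d = Fraction(sign)
    for i in range(n):
        d *= A[i][i]
    return d
```

For instance, `det_by_elimination([[1, 2], [3, 4]])` reproduces the familiar value -2.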
6 pts. Problem 3. Consider the matrix A, whose RREF is the matrix R.

A. Find a basis for the nullspace of A.

The nullspace is the space of solutions of the system Ax = 0. Since R is row equivalent to A, the system Rx = 0 has the same solutions. So, we read off the solutions from R, using zero as the right-hand side of the system. Looking at R, call the variables x_1 through x_6. Then x_1, x_2 and x_5 are leading variables and x_3, x_4 and x_6 are free variables, say x_3 = α, x_4 = β and x_6 = γ. Reading the rows of R from the bottom up gives the equations

x_5 - x_6 = 0 =⇒ x_5 = x_6 = γ
x_2 + x_3 + x_4 + x_6 = 0 =⇒ x_2 = -x_3 - x_4 - x_6 = -α - β - γ
x_1 - x_3 - x_4 + x_6 = 0 =⇒ x_1 = x_3 + x_4 - x_6 = α + β - γ.

Putting these equations together gives the family of solutions

(x_1, x_2, x_3, x_4, x_5, x_6) = (α + β - γ, -α - β - γ, α, β, γ, γ)
                               = α(1, -1, 1, 0, 0, 0) + β(1, -1, 0, 1, 0, 0) + γ(-1, -1, 0, 0, 1, 1).
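The read-off procedure in part A is exactly what sympy's `nullspace()` carries out. A sketch with a hypothetical RREF matrix R whose pivots sit in columns 1, 2 and 5 (its entries are assumed for illustration, not taken from the exam):

```python
from sympy import Matrix

# Hypothetical RREF matrix with pivots in columns 1, 2 and 5 (1-based);
# the entries are assumed for illustration, not the exam's.
R = Matrix([
    [1, 0, -1, -1, 0,  1],
    [0, 1,  1,  1, 0,  1],
    [0, 0,  0,  0, 1, -1],
])

# nullspace() solves Rx = 0 by letting each free variable in turn be 1
# and the others 0, yielding one basis vector per free variable.
basis = R.nullspace()
print(basis)
```

With three free variables, the nullspace basis has three vectors, one per parameter.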
Thus, as we discussed, the three vectors appearing in this decomposition form a basis for the nullspace of A.

B. Find a basis for the rowspace of A.

A basis for the rowspace is given by the nonzero rows in the RREF of A. Thus, the three nonzero rows of R form a basis of the rowspace of A.

C. Find a basis for the columnspace of A.

To find a basis for the columnspace of A, we find the columns in R that contain the leading entries (columns 1, 2 and 5) and take the corresponding columns from the original matrix A. Thus, the first, second and fifth columns of A form a basis for the columnspace of A.

4 pts. Problem 4. The following vectors v_1, ..., v_5 span R^3. Pare down this set of vectors to a basis for R^3. Express the vectors that are not in your basis as linear combinations of the basis vectors.
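Both the columnspace recipe from part C and the paring-down that Problem 4 asks for amount to reading pivot information from the RREF. A sketch in sympy, with hypothetical vectors (the entries are assumed, not the exam's):

```python
from sympy import Matrix

# Five hypothetical vectors in R^3 (entries assumed for illustration).
v = [Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1]),
     Matrix([1, 1, 1]), Matrix([2, -1, 1])]

A = Matrix.hstack(*v)
R, pivots = A.rref()

# The pivot columns of A form a basis for the span...
basis = [v[j] for j in pivots]

# ...and each non-pivot column of R lists the coefficients that express
# the corresponding vector in terms of the basis vectors.
combos = {}
for j in range(A.cols):
    if j not in pivots:
        combos[j] = [R[i, j] for i in range(len(pivots))]
```

Here `combos` records, for each non-basis vector, the coefficients of its expansion in the pivot vectors, mirroring the "same linear relationships" argument used in the solution.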
Finding a basis for the span of these five vectors (the span is R^3) is the same as finding a basis for the columnspace of the matrix A = [v_1 v_2 v_3 v_4 v_5] whose columns are the given vectors. The leading entries of the RREF R of A are in columns 1, 2 and 3. The corresponding columns of A are a basis for the columnspace of A. Thus, we conclude that v_1, v_2 and v_3 form a basis of R^3.

To express v_4 and v_5 as linear combinations of the basis vectors, we read off the linear relationships among the columns of R. The columns of A (i.e., the v_j's) have the same linear relationships. Column 4 of R expresses Col_4(R) as a combination of Col_1(R), Col_2(R) and Col_3(R), so v_4 is the same combination of v_1, v_2 and v_3. In the same way, column 5 of R expresses Col_5(R) in terms of Col_1(R), Col_2(R) and Col_3(R), and hence v_5 in terms of v_1, v_2 and v_3.

5 pts. Problem 5. In each part, determine if the given vectors are linearly independent. Justify your answer. If the vectors are linearly dependent, find scalars c_1, c_2 and c_3, not all zero, so that c_1 v_1 + c_2 v_2 + c_3 v_3 = 0.

A. In R^3, we are given the vectors v_1, v_2 and v_3. To determine if the vectors are independent, put them in the columns of a matrix A = [v_1 v_2 v_3].
The RREF of A is the matrix R. The matrix R has linearly dependent columns, so the same is true of A. Thus, the vectors v_1, v_2 and v_3 are linearly dependent. There are two approaches we could take to find a linear relation

c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 (1)

where not all the coefficients are zero.

In the first approach, we read off from R that Col_3(R) = -Col_1(R) + Col_2(R), so the columns of A have the same relationship. Thus v_3 = -v_1 + v_2, and so

v_1 - v_2 + v_3 = 0, (2)

i.e., we can take c_1 = 1, c_2 = -1 and c_3 = 1.

For the second approach, we note that equation (1) is equivalent to the matrix equation Ac = 0, where c is the column vector with entries c_1, c_2 and c_3. Solving this in the usual way, we find the RREF R, and the system of equations Rc = 0 is equivalent. Looking at R, we see that c_1 and c_2 are leading variables and c_3 is a free variable, say c_3 = α. From the second row of R we get c_2 + c_3 = 0, so c_2 = -c_3 = -α. From the first row we get c_1 - c_3 = 0, so c_1 = c_3 = α. Thus, we have a one-parameter family of solutions

(c_1, c_2, c_3) = (α, -α, α).

Thus, the vectors v_1, v_2 and v_3 satisfy the equation αv_1 - αv_2 + αv_3 = 0 for any real number α. The coefficients are nonzero if we choose α ≠ 0. Of course, this is essentially the same result as equation (2), since we can multiply both sides of (2) by α.
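The second approach, solving Ac = 0, can be sketched with sympy. The vectors here are hypothetical stand-ins (entries assumed), chosen to satisfy the same kind of relation v_1 - v_2 + v_3 = 0:

```python
from sympy import Matrix

# Hypothetical vectors (entries assumed) built so that v1 - v2 + v3 = 0.
v1, v2, v3 = Matrix([1, 0, 1]), Matrix([1, 1, 2]), Matrix([0, 1, 1])

A = Matrix.hstack(v1, v2, v3)

# The vectors are dependent iff Ac = 0 has a nonzero solution, i.e., iff
# the nullspace of A is nontrivial; each nullspace vector gives (c1, c2, c3).
null = A.nullspace()
dependent = len(null) > 0
```

Any vector in `null` supplies coefficients c_1, c_2, c_3, not all zero, for equation (1).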
B. In R^4, we are given the vectors v_1, v_2 and v_3. Put the vectors into the columns of a matrix A = [v_1 v_2 v_3]. The RREF of A is the matrix R. The columns of R are linearly independent, so the same is true of the columns of A. Thus, v_1, v_2 and v_3 are linearly independent.

4 pts. Problem 6. Consider the space P_3 of polynomials of degree less than 3. Two ordered bases of this space are

P = [ 1  x  x^2 ]    and    Q = [ q_1(x)  q_2(x)  q_3(x) ],

where each q_j(x) is a polynomial of degree less than 3.

A. Find the change of basis matrices S_PQ and S_QP.

Reading off the coefficients of the polynomials in Q, we have

[ q_1(x)  q_2(x)  q_3(x) ] = [ 1  x  x^2 ] C, (3)

where column j of the matrix C lists the coefficients of q_j(x) with respect to 1, x and x^2.
(If you're wondering whether the coefficients should go down the columns or across the rows, remember that the right-hand side of (3) is a matrix multiplication.) Comparing (3) with the defining equation Q = PS_PQ for the transition matrix S_PQ, we see that S_PQ is exactly the coefficient matrix in (3). We then have S_QP = (S_PQ)^(-1), using a calculator to find the inverse matrix.

B. Let f(x) = __ + __x. Find [f(x)]_Q, the coordinate vector of f(x) with respect to Q. Use this information to write f(x) as a linear combination of the entries of Q.

The definition of the coordinate vector [f(x)]_P is f(x) = P[f(x)]_P. Since

f(x) = __ + __x = [ 1  x  x^2 ] [f(x)]_P,

the entries of [f(x)]_P are just the coefficients of f(x) (with last entry 0, since f(x) has no x^2 term). The change of coordinates equation is

[f(x)]_Q = S_QP [f(x)]_P.

Thus, we compute [f(x)]_Q by multiplying [f(x)]_P by S_QP.
Finally, from the last calculation, we have

f(x) = Q[f(x)]_Q = [ q_1(x)  q_2(x)  q_3(x) ] [f(x)]_Q,

which writes f(x) as the linear combination of q_1(x), q_2(x) and q_3(x) whose coefficients are the entries of [f(x)]_Q.
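The whole of Problem 6 can be sketched in sympy with a hypothetical basis Q; the polynomials and the f(x) below are assumed stand-ins, not the exam's:

```python
from sympy import Matrix, Poly, expand, symbols

x = symbols('x')

# Hypothetical basis Q of polynomials of degree < 3 (entries assumed).
Q = [1 + x, 1 + x + x**2, 1 + 2*x + x**2]

def coords_in_P(p):
    """Coordinate vector of p with respect to P = [1, x, x^2]."""
    c = Poly(p, x).all_coeffs()[::-1]        # ascending: constant term first
    return Matrix(c + [0] * (3 - len(c)))    # pad with zeros up to the x^2 slot

# Defining equation Q = P * S_PQ: column j of S_PQ holds [q_j]_P.
S_PQ = Matrix.hstack(*[coords_in_P(q) for q in Q])
S_QP = S_PQ.inv()                            # change of basis the other way

f = 2 + 3*x                                  # a hypothetical f(x)
f_Q = S_QP * coords_in_P(f)                  # [f]_Q = S_QP [f]_P
```

Summing the entries of `f_Q` against the polynomials of Q reconstructs f(x), which is the check one would do by hand at the end of part B.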