MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION


Problem 1. (a) For each matrix below, (i) find a basis for its column space, (ii) find a basis for its row space, (iii) determine whether it is invertible, and (iv) compute its inverse if possible.

A =        B =        C =

Solution: We row-reduce A to reduced row echelon form. To compute the inverse of A (if it exists), we augment A by the identity matrix I on the right and row-reduce the augmented matrix [A | I]. The left block reduces all the way to the identity matrix, which shows that A is invertible, and the right block of the reduced matrix is then A^(-1).

The process for finding bases for the row and column spaces is explained in the textbook. In this case A has a pivot position in each column, and the pivot columns of A form a basis of Col A, so the columns of A themselves are a basis for Col A.

(In fact, the column space of A is all of R^3, so any basis of R^3 is a possible answer to this part of the problem.) The nonzero rows of the matrix we get by reducing A to row echelon form are a basis for Row A. (The row space is also all of R^3, so any basis of R^3 would work here.)

For the matrix B we do the same thing, first augmenting B by the 4 x 4 identity matrix and then row-reducing [B | I]: the first equivalence swaps two rows, the second is a sequence of row-replacement operations, the third rescales two of the rows, and the final equivalence is another sequence of row replacements. So B is also invertible, and B^(-1) is the right block of the reduced matrix. As before, B has pivot positions in every row and column, so the columns of B are a basis for its column space and the nonzero rows of an echelon form of B are a basis for its row space.
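
If you want to check this kind of computation on a computer, here is a short SymPy sketch with a placeholder invertible 3 x 3 matrix standing in for A (the entries below are illustrative, not the matrices from the exam). It row-reduces [A | I], reads off the inverse, and produces bases for the column space and row space.

    from sympy import Matrix, eye

    A = Matrix([[2, 1, 0],
                [1, 2, 1],
                [0, 1, 2]])          # placeholder entries, not the exam's A

    # Row-reduce [A | I]; if the left block becomes I, the right block is A^(-1).
    reduced, pivots = A.row_join(eye(3)).rref()
    A_inv = reduced[:, 3:]
    assert A_inv == A.inv()          # sanity check against SymPy's built-in inverse

    # Pivot columns of A (of A itself, not of the reduced matrix) span Col A;
    # nonzero rows of an echelon form span Row A.
    print(A.columnspace())
    print(A.rowspace())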

(In fact, the row space and column space of B both equal R^4, so any bases of R^4 would be acceptable answers to this part of the problem.)

The matrix C is not invertible because it is not square (it has 5 columns but fewer rows). To find bases for the row and column spaces of C, we row-reduce C to echelon form. As before, a basis for Col C consists of the pivot columns of C itself (not of the row-reduced matrix); the echelon form has two pivots, so the two corresponding columns of C are one basis for Col C. For a basis of Row C we can use the two nonzero rows of the row echelon matrix we found above.

(b) Give an example of a matrix D which is not row-equivalent to the matrix A above.

Solution: We can pick any matrix D whose reduced row echelon form is different from that of A; recall that the reduced row echelon form of a matrix is unique, so this will work. Any 3 x 3 matrix with fewer than three pivot positions will do, for instance any matrix with a row of zeros. (But there are infinitely many other possible correct answers to this problem!)
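
Row-equivalence can be tested the same way: two matrices of the same size are row-equivalent exactly when they share the same reduced row echelon form. The sketch below uses illustrative matrices only (neither the exam's A nor a particular D), and also shows the pivot-column idea for a non-square matrix like C.

    from sympy import Matrix

    A = Matrix([[2, 1, 0], [1, 2, 1], [0, 1, 2]])   # illustrative; row-reduces to I
    D = Matrix([[1, 2, 3], [2, 4, 6], [0, 0, 7]])   # illustrative; only two pivots

    # Same size but different reduced row echelon forms => not row-equivalent.
    print(A.rref()[0] == D.rref()[0])               # False

    # For a non-square matrix, the pivot columns of the matrix itself give a
    # basis of its column space.
    C = Matrix([[1, 2, 0, 1, 3], [0, 0, 1, 4, 1]])  # placeholder 2 x 5 matrix
    _, pivots = C.rref()
    print(pivots, [C[:, j] for j in pivots])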

Problem 2. (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable, and (iii) if it is diagonalizable, find an invertible matrix P and a diagonal matrix D such that A = P D P^(-1). (HINT: the two eigenvalues of the matrix A are given in the problem.)

A =        B =

Solution: The matrix A is orthogonally diagonalizable since it is symmetric (a matrix is orthogonally diagonalizable if and only if it is symmetric, by a theorem in the textbook). To diagonalize it, we first find eigenvectors of A. (In the following solution I will find an orthonormal set of eigenvectors, to show you how to orthogonally diagonalize A; however, for the question as stated you could use any set of three linearly independent eigenvectors to get a correct answer.)

For the first eigenvalue, the associated eigenvectors are the solutions of the homogeneous linear system whose coefficient matrix is A minus that eigenvalue times I. The solution is given by two equations expressing x_1 and x_2 as multiples of the free variable x_3, so an eigenvector is obtained by choosing a nonzero value of x_3. We eventually want an orthonormal basis of R^3, so we divide this eigenvector by its norm to get the unit eigenvector u_1.

For the second eigenvalue, the associated eigenvectors are the solutions of the corresponding homogeneous system, and this time the solution is given by a single equation expressing x_1 in terms of x_2 and x_3, with x_2 and x_3 both free. Plugging in (1, 0) and (0, 1) for the values of (x_2, x_3) in this solution, we get two linearly independent eigenvectors of A for this eigenvalue.
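
For any particular eigenvalue, this computation amounts to finding the null space of A minus that eigenvalue times I and then normalizing. A small SymPy sketch of that step, using a placeholder symmetric matrix (not the exam's A):

    from sympy import Matrix, eye

    A = Matrix([[2, 1, 1],
                [1, 2, 1],
                [1, 1, 2]])          # placeholder symmetric matrix, not the exam's A
    lam = 1                          # one of its eigenvalues (they are 1, 1, 4 here)

    # Eigenvectors for lam are the nonzero solutions of (A - lam*I) x = 0.
    eigvecs = (A - lam * eye(3)).nullspace()
    unit_eigvecs = [v / v.norm() for v in eigvecs]
    print(unit_eigvecs)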

Since we want an orthonormal set of eigenvectors, we apply the Gram-Schmidt process to these two vectors to get an orthogonal set (see the solution to Problem 4 below for details on how to apply Gram-Schmidt), and then we make each of these vectors into a unit vector by dividing it by its norm, obtaining u_2 and u_3. Finally, as explained in the textbook, we can use the orthonormal set of eigenvectors {u_1, u_2, u_3} and the eigenvalues we have found above to construct matrices that orthogonally diagonalize A: P = [u_1 u_2 u_3] is the matrix whose columns are these eigenvectors, and D is the diagonal matrix whose diagonal entries are the corresponding eigenvalues.

For B we again start by finding its eigenvalues. B is upper triangular, so its eigenvalues are the entries on its main diagonal (as I explained in class). (Alternatively, it is easy to find the eigenvalues of B by solving the characteristic equation, which is already factored into linear terms because B is triangular.) To find an eigenvector corresponding to the first eigenvalue, we solve the homogeneous system with coefficient matrix B minus that eigenvalue times I; the general solution sets one variable equal to zero and leaves the other free, which gives one possible eigenvector. To find an eigenvector corresponding to the second eigenvalue, we solve the analogous system; its general solution expresses x_1 as a multiple of the free variable x_2, which gives a second possible eigenvector. So B is diagonalizable, since the two eigenvectors we have found (which correspond to distinct eigenvalues) form a basis of R^2. Using the technique in the textbook, we let P be the matrix whose columns are these two eigenvectors and D the diagonal matrix whose diagonal entries are the associated eigenvalues.
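
The whole orthogonal diagonalization is also easy to check numerically. The sketch below (again with an illustrative symmetric matrix rather than the exam's A) uses numpy.linalg.eigh, which returns real eigenvalues and an orthonormal set of eigenvectors for a symmetric matrix, and then verifies A = P D P^T.

    import numpy as np

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0],
                  [1.0, 1.0, 2.0]])        # illustrative symmetric matrix

    # eigh is designed for symmetric matrices: the eigenvalues are real and the
    # columns of P form an orthonormal set of eigenvectors.
    eigenvalues, P = np.linalg.eigh(A)
    D = np.diag(eigenvalues)

    print(np.allclose(P @ D @ P.T, A))      # True: A = P D P^T
    print(np.allclose(P.T @ P, np.eye(3)))  # True: P is an orthogonal matrix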

However, the matrix B is not orthogonally diagonalizable, since the eigenspaces corresponding to the two different eigenvalues are not orthogonal: the dot product of the two eigenvectors we found is not 0, so these eigenvectors are not perpendicular.

(b) Give an example of a matrix C which is not similar to the matrix A in part (a).

Solution: Any two similar matrices have the same set of eigenvalues (by a theorem in the textbook). So we just have to find a matrix C whose set of eigenvalues is different from the set of eigenvalues of A; there are many possible correct answers here, for example any 3 x 3 diagonal matrix whose diagonal entries (which are its eigenvalues) avoid both eigenvalues of A.
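
One quick way to certify that two matrices are not similar is to compare their characteristic polynomials, since similar matrices share them (different characteristic polynomials therefore rule out similarity). A sketch with illustrative matrices only:

    from sympy import Matrix, symbols

    lam = symbols('lambda')

    A = Matrix([[2, 1, 1], [1, 2, 1], [1, 1, 2]])   # illustrative only
    C = Matrix([[5, 0, 0], [0, 5, 0], [0, 0, 5]])   # diagonal, eigenvalue 5 only

    pA = A.charpoly(lam).as_expr()
    pC = C.charpoly(lam).as_expr()
    print(pA, pC, pA == pC)   # different characteristic polynomials => not similar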

Problem 3. For each of the following sets, determine whether or not it is a subspace of R^4. If it is a subspace, determine its dimension.

(a) The set of all x in R^4 such that B x = λ x, where B is the matrix in Problem 1(a) above and λ is the scalar given in the problem.

Solution: A vector x in R^4 is in this set if and only if it is in the eigenspace of B corresponding to the value λ (see the definition of eigenspace in the textbook), which is the null space of the matrix B - λ I_4. Since null spaces of 4 x 4 matrices are always subspaces of R^4, this set is a subspace of R^4.

To find the dimension of this subspace, note that since this space is the null space of B - λ I_4, we can find its dimension by row-reducing B - λ I_4. (For a review of this, see the textbook example in which that technique is used to generate a basis for Nul A.) Row-reducing B - λ I_4 (by a row replacement to create a pivot in the first column, then a row swap and a rescaling, then further row replacements and rescalings to create pivots in the remaining columns) gives an echelon form with four pivots. By the Invertible Matrix Theorem, since B - λ I_4 has 4 pivots, B - λ I_4 is invertible, and (B - λ I_4) x = 0 has only the trivial solution x = 0. In other words, the null space of B - λ I_4 is {0}, the set consisting of only the zero vector (the zero subspace). The dimension of this space is zero. (See the definition of dimension in the textbook: the zero subspace is defined to have dimension zero.)

(b) The row space of A, where A is the matrix in Problem 1(a) above.

Solution: The matrix A is 3 x 3, so its row space is a subspace of R^3 but not a subspace of R^4; in fact it is not even a subset of R^4, and any subspace of R^4 must be a subset of R^4. (See the definition of subspace in the textbook.) So this set is not a subspace of R^4.

(c) The set of all x in R^4 such that ||x|| <= 1.

Solution: Although this is a subset of R^4, and it contains the zero vector (0, 0, 0, 0) (which has length 0), it is not a subspace of R^4. You could show this by showing that it violates either of the two closure conditions necessary to be a subspace (see the definition in the textbook). For example, the set is not closed under addition: the vectors e_1 = (1, 0, 0, 0) and e_2 = (0, 1, 0, 0) are both in the set (they each have length 1), but their sum e_1 + e_2 = (1, 1, 0, 0) is not, because ||(1, 1, 0, 0)|| = sqrt(1 + 1 + 0 + 0) = sqrt(2) > 1. Alternatively, you could show that this set is not a subspace of R^4 by showing that it is not closed under scalar multiplication: for instance, e_1 = (1, 0, 0, 0) is in the set, but 7 e_1 = (7, 0, 0, 0) is not, because ||7 e_1|| = sqrt(49 + 0 + 0 + 0) = 7 > 1.
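
The closure failures in part (c) are easy to confirm numerically; the short sketch below simply evaluates the norms used in the argument.

    import numpy as np

    e1 = np.array([1.0, 0.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 0.0, 0.0])

    print(np.linalg.norm(e1), np.linalg.norm(e2))   # both 1.0, so e1 and e2 are in the set
    print(np.linalg.norm(e1 + e2))                  # sqrt(2) > 1: not closed under addition
    print(np.linalg.norm(7 * e1))                   # 7 > 1: not closed under scalar multiplication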

Problem 4. (a) Find an orthonormal basis for the subspace of R^3 spanned by the two given vectors.

Solution: Call this subspace S. First note that the given set is linearly independent (since neither vector is a multiple of the other), so it is a basis for S. So we can use the Gram-Schmidt process to find an orthogonal basis of S: let v_1 be the first vector, and let v_2 be the second vector minus its projection onto v_1. Then v_1 and v_2 are orthogonal (since v_1 . v_2 = 0), but to get an orthonormal basis we must divide each vector by its norm. Doing so gives the orthonormal basis {v_1 / ||v_1||, v_2 / ||v_2||} of S.
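
The Gram-Schmidt step can be reproduced directly: subtract from the second vector its projection onto the first, then normalize both. A SymPy sketch with placeholder spanning vectors (not the ones from the exam):

    from sympy import Matrix

    x1 = Matrix([1, 1, 0])       # placeholder spanning vectors, not the exam's
    x2 = Matrix([1, 0, 1])

    v1 = x1
    v2 = x2 - (x2.dot(v1) / v1.dot(v1)) * v1   # remove the component of x2 along v1

    assert v1.dot(v2) == 0                     # the pair is now orthogonal
    u1, u2 = v1 / v1.norm(), v2 / v2.norm()    # normalize to get an orthonormal basis
    print(u1, u2)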

(b) Extend the basis you found in part (a) to an orthonormal basis for R^3 (by adding a new vector or vectors).

Solution: This comes down to finding a vector in the orthogonal complement of S. By the technique in the textbook, the orthogonal complement of S is equal to the null space of a matrix whose rows span S (for instance, the matrix with rows v_1 and v_2). This matrix row-reduces to an echelon form whose associated homogeneous linear system has solutions given by two equations expressing x_1 and x_2 in terms of the free variable x_3. Plugging in a nonzero value for x_3, we get a particular solution, and we can normalize it to get a unit vector v_3. So {v_1, v_2, v_3} is an orthonormal basis of R^3.

(c) Is there a unique way to extend the basis you found in (a) to an orthonormal basis of R^3? Explain.

No: the vector v_3 found above is not the only way to extend {v_1, v_2} to an orthonormal basis of R^3. The vector -v_3 works too: it is also a unit vector orthogonal to v_1 and v_2. (But it turns out that v_3 and -v_3 are the only two possibilities.)
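
Finding the orthogonal complement this way is just a null-space computation: stack the spanning vectors as the rows of a matrix and solve the homogeneous system. A sketch with placeholder vectors (an orthogonal pair, not the exam's):

    from sympy import Matrix

    v1 = Matrix([1, 1, 0])                   # placeholder orthogonal basis of S
    v2 = Matrix([1, -1, 2])

    # S-perp is the null space of the matrix whose rows are v1 and v2.
    M = v1.T.col_join(v2.T)
    complement = M.nullspace()
    v3 = complement[0] / complement[0].norm()

    print(v3, v3.dot(v1), v3.dot(v2))        # a unit vector orthogonal to both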

Problem 5. (a) Write the quadratic form Q(x_1, x_2) = 2 x_1^2 + 2 x_2^2 + 10 x_1 x_2 in the form Q(x) = x^T A x for some symmetric matrix A.

Solution: Let

    A = [ 2  5 ]
        [ 5  2 ]

(This uses the method discussed in the textbook: the diagonal entries are the coefficients of the squared terms, and the coefficient of the cross term is split evenly between the two off-diagonal entries.) To check this answer, note that

    x^T A x = x_1 (2 x_1 + 5 x_2) + x_2 (5 x_1 + 2 x_2) = 2 x_1^2 + 2 x_2^2 + 10 x_1 x_2.

(b) Find an orthogonal matrix P such that, using the linear change of variables P y = x, we can eliminate the cross-terms of Q; that is, Q(x) = a y_1^2 + b y_2^2 for some constants a and b in R.

Solution: As explained in the textbook, the matrix P we want is a matrix which orthogonally diagonalizes the matrix A found above in part (a). We first compute the eigenvalues of A using the characteristic equation:

    0 = det(A - λI) = (2 - λ)^2 - 25 = λ^2 - 4λ - 21 = (λ - 7)(λ + 3),

so there are two eigenvalues, 7 and -3. Next we find an orthonormal set of eigenvectors of A. For the eigenvalue 7, the corresponding eigenvectors are the solutions of the system (A - 7I) x = 0; both equations of this system reduce to x_1 = x_2, so an eigenvector is any nonzero vector (x_1, x_2) with x_1 = x_2, for instance (1, 1). To get a unit eigenvector we divide (1, 1) by its length (which is sqrt(2)) to get the vector u_1 = (1/sqrt(2), 1/sqrt(2)). Similarly, for the eigenvalue -3 the corresponding eigenvectors are the solutions of the system (A + 3I) x = 0, which reduces to x_1 = -x_2, and as before one solution is the unit vector u_2 = (1/sqrt(2), -1/sqrt(2)). The set {u_1, u_2} is an orthonormal basis of R^2 (this is straightforward to check). So if P is the matrix whose columns are the vectors u_1 and u_2, then, just as in the textbook's example, the change of variables x = P y gives Q(x_1, x_2) = 7 y_1^2 - 3 y_2^2.
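
Numerically, the change of variables in part (b) is just the orthogonal diagonalization of A applied to the quadratic form. Here is a quick NumPy check of that identity (a sketch, using the symmetric matrix from part (a); the random test vector y is arbitrary):

    import numpy as np

    A = np.array([[2.0, 5.0],
                  [5.0, 2.0]])             # the symmetric matrix of the quadratic form

    eigenvalues, P = np.linalg.eigh(A)     # columns of P are orthonormal eigenvectors
    print(eigenvalues)                     # [-3.  7.]

    # With x = P y the cross term disappears: Q(x) = x^T A x = y^T (P^T A P) y,
    # and P^T A P is diagonal with the eigenvalues on its diagonal.
    rng = np.random.default_rng(0)
    y = rng.standard_normal(2)
    x = P @ y
    print(np.isclose(x @ A @ x, eigenvalues[0] * y[0]**2 + eigenvalues[1] * y[1]**2))  # True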

Problem 6. Let P_2 be the vector space of all polynomials (with real coefficients) of degree at most 2. Let T : P_2 -> R^2 be the linear transformation that sends a polynomial a_0 + a_1 t + a_2 t^2 to the vector in R^2 whose entries are the linear expressions in a_0, a_1, a_2 given in the problem. Let B = {1, t, t^2} be the standard basis for P_2, and let C be the given set of two vectors in R^2.

(a) Explain why C is a basis for R^2.

Solution: Notice that the first vector of C is nonzero and the second vector is not a scalar multiple of the first, so the set C is linearly independent. Since the vector space R^2 has dimension 2 and C is a linearly independent subset of R^2 containing 2 vectors, it follows from a theorem in the textbook that C is a basis for R^2. (In other words, for this problem you do not need to check that Span(C) = R^2. An alternate solution would be to row-reduce the matrix whose columns are the two vectors of C, check that it has a pivot in each column, and conclude that C does indeed span R^2.)

(b) Find the matrix for T relative to the bases B and C.

Solution: For this part we apply the formula from the textbook for the matrix of a linear transformation relative to a pair of bases: the matrix M we are looking for is

    M = [ [T(1)]_C   [T(t)]_C   [T(t^2)]_C ],

whose columns are the C-coordinate vectors of the images of the vectors in B. We first compute the values of T on the vectors of the basis B, namely T(1), T(t), and T(t^2) (using 1 = 1 + 0t + 0t^2, t = 0 + 1t + 0t^2, and t^2 = 0 + 0t + 1t^2 in the rule defining T).

Next, for each of these three vectors in R^2, we compute its coordinate vector with respect to the basis C. For T(1) this amounts to finding the (unique) solution of the linear system whose augmented matrix has the two vectors of C as its left two columns and T(1) as its rightmost column. Row-reducing this augmented matrix (by subtracting two times one row from the other) gives the coordinate vector [T(1)]_C. (If you are still having trouble computing coordinate vectors such as this one, you should review the relevant section of the textbook.)
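
Each coordinate vector [w]_C is obtained by solving the linear system whose coefficient matrix has the vectors of C as its columns. A SymPy sketch with placeholder basis vectors and placeholder images (not the exam's):

    from sympy import Matrix

    # Columns of PC are the two vectors of the basis C (placeholder values).
    PC = Matrix([[1, 1],
                 [2, 3]])

    # Placeholder images T(1), T(t), T(t^2), written as vectors in R^2.
    images = [Matrix([1, 0]), Matrix([0, 1]), Matrix([2, 5])]

    # [w]_C is the unique solution of PC * c = w; stacking the coordinate
    # vectors as columns gives the matrix of T relative to B and C.
    coords = [PC.solve(w) for w in images]
    M = Matrix.hstack(*coords)
    print(M)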

Similarly, [T(t)]_C and [T(t^2)]_C are found by solving the corresponding linear systems with T(t) and T(t^2), respectively, in the rightmost column of the augmented matrix. Putting the three coordinate vectors together as columns (using the equation for M above), we get the matrix for T relative to B and C.

(c) Is T one-to-one? Does T map P_2 onto R^2? Explain.

Solution: The linear transformation T is not one-to-one: among the images computed above, two different polynomials are mapped by T to the same vector in R^2.

The transformation T does map P_2 onto R^2. One way to see this is to row-reduce the matrix M for T relative to B and C (found in part (b)): M has a pivot position in each of its two rows, and therefore the matrix transformation x -> M x maps R^3 onto R^2. But this means that T also maps onto R^2: if v is any vector in R^2, then since M has a pivot in every row we can find some y = (y_1, y_2, y_3) in R^3 such that M y = [v]_C, and then the polynomial p(t) = y_1 + y_2 t + y_3 t^2 satisfies

    [T(p(t))]_C = M [p(t)]_B = M y = [v]_C,

and so T(p(t)) = v.
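
The one-to-one and onto questions in part (c) reduce to pivot counts in M: T is one-to-one exactly when M has a pivot in every column, and onto exactly when M has a pivot in every row. A sketch continuing with a placeholder 2 x 3 matrix standing in for M:

    from sympy import Matrix

    M = Matrix([[1, 0, 2],
                [0, 1, 5]])            # placeholder 2 x 3 matrix of T relative to B and C

    _, pivots = M.rref()
    rows, cols = M.shape
    print(len(pivots) == cols)         # False: a 2 x 3 matrix cannot have 3 pivots, so T is not one-to-one
    print(len(pivots) == rows)         # True: a pivot in every row, so T maps P_2 onto R^2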