MATH Final Exam Sample Problems — ANSWERS

1. (5 points) Let A be a 2×2 matrix such that A = [ ⋯ ]. Compute A^2.
Answer: A^2 = A·A = [ ⋯ ].

2. (5 points) State (1) the definition of norm, (2) the Cauchy-Schwarz inequality and (3) the triangle inequality, for vectors in R^n.
Answer: (1) The norm of v is ‖v‖ = √(v^T v); (2) |a · b| ≤ ‖a‖ ‖b‖; (3) ‖a + b‖ ≤ ‖a‖ + ‖b‖.

3. (5 points) Suppose A = B(C + D)E and all the matrices are n×n invertible. Find an equation for C.
Answer: A E^(-1) = BC + BD implies C = B^(-1)(A E^(-1) − BD).

4. (5 points) Find all solutions to the system of equations
    w + 3x + 4y + 5z = ⋯
    4w + 3x + 8y + 5z = ⋯
    6w + 3x + 8y + 5z = ⋯
Answer (infinite solution case): w = ⋯, x = (5/3)t, y = ⋯, z = t.

5. (5 points) Let A = [ ⋯ ]. Show the details of two different methods for finding the inverse of the matrix A.
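The two inversion methods named in problem 5 can be sketched numerically. The matrix [[1, 2], [3, 4]] below is a hypothetical stand-in (the exam's entries did not survive transcription); both routines use exact rational arithmetic so the agreement check is exact.

```python
from fractions import Fraction

def inverse_adjugate(a, b, c, d):
    """Method 1: A^(-1) = adj(A)/det(A) for a 2x2 matrix [[a, b], [c, d]]."""
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    # adj([[a, b], [c, d]]) = [[d, -b], [-c, a]]
    return [[Fraction(d, 1) / det, Fraction(-b, 1) / det],
            [Fraction(-c, 1) / det, Fraction(a, 1) / det]]

def inverse_rref(a, b, c, d):
    """Method 2: row-reduce the augmented matrix <A | I> to <I | A^(-1)>."""
    R1 = [Fraction(a), Fraction(b), Fraction(1), Fraction(0)]
    R2 = [Fraction(c), Fraction(d), Fraction(0), Fraction(1)]
    if R1[0] == 0:
        R1, R2 = R2, R1                               # swap
    R1 = [x / R1[0] for x in R1]                      # mult: leading 1 in row 1
    R2 = [x - R2[0] * y for x, y in zip(R2, R1)]      # combo: clear column 1
    R2 = [x / R2[1] for x in R2]                      # mult: leading 1 in row 2
    R1 = [x - R1[1] * y for x, y in zip(R1, R2)]      # combo: clear column 2
    return [R1[2:], R2[2:]]

# the two methods agree on the sample matrix
print(inverse_adjugate(1, 2, 3, 4))
print(inverse_rref(1, 2, 3, 4))
```

Both calls return [[-2, 1], [3/2, -1/2]] as Fractions; the adjugate route is one formula, while the rref route replays the combo/mult operations used throughout this exam.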

Answer: The two methods are (1) A^(-1) = adj(A)/det(A), and (2) form C = <A | I>; then rref(C) = <I | A^(-1)>. Details expected, but not supplied here.

6. (5 points) Find a factorization A = LU into lower and upper triangular matrices for the matrix A = [ ⋯ ].
Answer: Let E1 be the result of combo(1,2,−1/2) on I, and E2 the result of combo(2,3,−⋯/3) on I. Then E2 E1 A = U = [ ⋯ ]. Let L = E1^(-1) E2^(-1) = [ ⋯ ].

7. (5 points) Let Q be a 2×2 matrix with Q Q^T = I. Prove that Q has columns of unit length and its two columns are orthogonal.
Answer: First, AB = I with both A, B square implies BA = I. So Q^T Q = I. Write Q = <q1 | q2>; then Q^T Q is the 2×2 matrix of dot products qi · qj. The relation Q^T Q = I then implies orthogonality of the columns (q1 · q2 = 0) and that the columns have length one (q1 · q1 = q2 · q2 = 1).

8. (5 points) True or False? If the 3×3 matrices A and B are triangular, then AB is triangular.
Answer: False; consider the decomposition A = LU in a problem above. True if both matrices are upper triangular or both matrices are lower triangular.

9. (5 points) True or False? If a 3×3 matrix A has an inverse, then for all vectors b the equation Ax = b has a unique solution x.
Answer: True; x = A^(-1) b.

10. (5 points) Let A be a 3×4 matrix. Find the elimination matrix E which under left multiplication against A performs both (1) and (2) with one matrix multiply.

(1) Replace Row 2 of A with Row 2 minus Row 1.
(2) Replace Row 3 of A by Row 3 minus 5 times Row 2.
Answer: Perform combo(1,2,−1) on I, then combo(2,3,−5) on the result. The elimination matrix is
    E = [ 1 0 0 ; −1 1 0 ; 5 −5 1 ].

11. ( ⋯ points) Determinant problem, chapter 3.
(a) [10%] True or False? The value of a determinant is the product of the diagonal elements.
(b) [10%] True or False? The determinant of the negative of the n×n identity matrix is ⋯.
(c) [30%] Assume given 3×3 matrices A, B. Suppose E2 E1 A^2 = AB, where E1, E2 are elementary matrices representing respectively a combination and a multiply by 3. Assume det(B) = 27. Let C = −A. Find all possible values of det(C).
(d) [20%] Determine all values of x for which (I + C)^(-1) fails to exist, where I is the 3×3 identity and C = [ ⋯ x ⋯ 3x ⋯ ].
(e) [30%] Let symbols a, b, c denote constants and define A = [ ⋯ a b c ⋯ ]. Apply the adjugate [adjoint] formula for the inverse, A^(-1) = adj(A)/det(A), to find the value of the entry in row 4, column 2 of A^(-1).

(a) FALSE. True only if the matrix is triangular.
(b) FALSE. The determinant of −I is (−1)^n, which equals 1 when n is even.
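The two True/False facts in parts (a) and (b) can be checked with a small cofactor-expansion determinant; the matrices below are illustrative choices, not the exam's.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def neg_identity(n):
    return [[-1 if i == j else 0 for j in range(n)] for i in range(n)]

# (b): det(-I) = (-1)^n, so it is +1 exactly when n is even
print([det(neg_identity(n)) for n in (2, 3, 4)])   # prints: [1, -1, 1]

# (a): the diagonal-product rule fails for a non-triangular matrix
M = [[0, 1], [1, 0]]
print(det(M), M[0][0] * M[1][1])                   # prints: -1 0
```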

(c) Start with the determinant product theorem |FG| = |F| |G|. Apply it to obtain |E2| |E1| |A|^2 = |A| |B|. Let x = |A| in this equation and solve for x, using |E1| = 1 and |E2| = 3: 3x^2 = 27x gives x = 0 or x = 9. Then C = (−I)A implies |C| = |−I| |A| = (−1)^3 x. The answer is |C| = 0 or |C| = −9.
(d) Find I + C = [ ⋯ ], then evaluate its determinant, to eventually solve for x = 5/3 and x = ⋯. Used here: F^(-1) exists if and only if det(F) ≠ 0.
(e) Find the cross-out determinant in row 2, column 4 (no mistake: the transpose swaps rows and columns). Form the fraction: top = checkerboard sign times the cross-out determinant, bottom = det(A). The value is −b − a. A maple check:
    C4:=Matrix([[ ⋯ ],[ ⋯ ],[a,b, ⋯ ],[ ⋯ ,c, ⋯ ]]);
    1/C4;                                         # the inverse matrix
    C5:=linalg[minor](C4,2,4);
    (-1)^(2+4)*linalg[det](C5)/linalg[det](C4);   # ans = -b-a

12. (5 points) Define matrix A, vector b and vector variable x by the equations
    A = [ ⋯ ], b = [ ⋯ ], x = (x1, x2, x3).
For the system Ax = b, find x3 by Cramer's Rule, showing all details (details count 75%). To save time, do not compute x1, x2!
Answer: x3 = Δ3/Δ, where Δ = det(A) = 8 and Δ3 = det(A with column 3 replaced by b) = 59. Then x3 = 59/8.

13. (5 points) Define matrix A = [ ⋯ ]. Find a lower triangular matrix L and an upper triangular matrix U such that A = LU.
Answer: Let E1 be the result of combo(1,2,−1/2) on I, and E2 the result of combo(2,3,−⋯/3) on I. Then E2 E1 A = U = [ ⋯ ]. Let L = E1^(-1) E2^(-1) = [ ⋯ ].

14. (5 points) Determine which values of k correspond to a unique solution for the system Ax = b given by A = [ ⋯ k ⋯ ], b = [ ⋯ ].
Answer: There is a unique solution when det(A) ≠ 0, which implies k ≠ ⋯ and k ≠ 3. Alternative solution: elimination methods with swap, combo and mult give the frame [ ⋯ ]. Then (1) a unique solution for three lead variables, equivalent to a nonzero determinant for the frame, or (k − ⋯)(3 − k) ≠ 0; (2) no solution for k = 3 [signal equation]; (3) infinitely many solutions for k = ⋯.

15. ( ⋯ points) Let a, b and c denote constants and consider the system of equations
    [ ⋯ b c ; ⋯ c a ; ⋯ b + c a ] (x, y, z) = [ ⋯ a ⋯ ].
Use techniques learned in this course to briefly explain the following facts. Only write what is needed to justify a statement.
(a) The system has a unique solution for (b + c)(2a + c) ≠ 0.
(b) The system has no solution if 2a + c = 0 and a ≠ 0 (don't explain the other possibilities).
(c) The system has infinitely many solutions if a = c = 0 (don't explain the other possibilities).
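The LU constructions in problems 6 and 13 follow the same elimination pattern: record each combo multiplier in L while reducing A to U. A minimal sketch, with a hypothetical 3×3 matrix standing in for the exam's lost entries:

```python
from fractions import Fraction

def lu(A):
    """LU by Gaussian elimination with combo operations only (no swaps);
    L records the multipliers that undo the combos."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # multiplier of combo(k, i, -m)
            L[i][k] = m
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    return L, U

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# sample matrix (hypothetical entries)
A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
L, U = lu(A)
assert matmul(L, U) == [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
```

With these entries L = [[1,0,0],[1/2,1,0],[0,2/3,1]] and U = [[2,1,0],[0,3/2,1],[0,0,4/3]], mirroring the E2 E1 A = U bookkeeping in the answer above.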

Answer: Combo, swap and mult are used to obtain, in 3 combo steps, the matrix
    A3 = [ ⋯ −b c a ; 0 b+c −c+a −2a ; 0 0 −c−2a −a ].
(a) Uniqueness requires zero free variables. Then the diagonal entries of the last frame must be nonzero, written simply as (c + b)(2a + c) ≠ 0, which is equivalent to the determinant of A being nonzero.
(b) No solution: the last row of A3 is a signal equation if 2a + c = 0 and a ≠ 0.
(c) Infinitely many solutions: if a = c = 0, then A3 has last row zero. If a = c = 0 and b = 0, then there is one lead variable and two free variables, because the last two rows of A3 are zero. If a = c = 0 and b ≠ 0, then there are two lead variables and one free variable. The homogeneous problem has infinitely many solutions, because of at least one free variable and no signal equation.
The sequence of steps is documented below for maple.
    with(LinearAlgebra):
    combo:=(a,s,t,m)->LinearAlgebra[RowOperation](a,[t,s],m);
    mult:=(a,t,m)->LinearAlgebra[RowOperation](a,t,m);
    swap:=(a,s,t)->LinearAlgebra[RowOperation](a,[s,t]);
    A:=(a,b,c)->Matrix([[ ⋯ ,b,c,-a],[ ⋯ ,c,-a,a],[ ⋯ ,b+c,a,a]]);
    A0:=A(a,b,c);
    A1:=combo(A0, ⋯ );
    A2:=combo(A1, ⋯ ,3, ⋯ );
    A3:=combo(A2, ⋯ ,3, ⋯ );
    A4:=convert(A3,list,nested=true);
    # A4 = [[ ⋯, -b, c, a], [0, b+c, -c+a, -2*a], [0, 0, -c-2*a, -a]]

16. (5 points) Explain how the span theorem applies to show that the set S of all linear combinations of the functions cosh x, sinh x is a subspace of the vector space V of all continuous functions on −∞ < x < ∞.

Answer: The span theorem says span(v1, v2) is a subspace of V, for any two vectors in V. Choose the two vectors to be cosh x and sinh x.

17. (5 points) Write a proof that the subset S of all solutions x in R^n to a homogeneous matrix equation Ax = 0 is a subspace of R^n. This is called the kernel theorem.
Answer: (1) The zero vector is in S, because A·0 = 0. (2) If Av1 = 0 and Av2 = 0, then v = v1 + v2 satisfies Av = Av1 + Av2 = 0 + 0 = 0, so v is in S. (3) Let v1 be in S, that is, Av1 = 0. Let c be a constant and define v = c·v1. Then Av = A(c·v1) = c(Av1) = c·0 = 0, so v is in S. This completes the proof.

18. (5 points) Using the subspace criterion, write two hypotheses that imply that a set S in a vector space V is not a subspace of V. The full statement of three such hypotheses is called the Not a Subspace Theorem.
Answer: (1) If the zero vector is not in S, then S is not a subspace. (2) If two vectors in S fail to have their sum in S, then S is not a subspace. (3) If a vector is in S but its negative is not, then S is not a subspace.

19. (5 points) Report which columns of A are pivot columns: A = [ ⋯ ].
Answer: A zero column cannot be a pivot column (no leading one in rref(A)). The other two columns are not constant multiples of one another, therefore they are independent and will become pivot columns in rref(A). Then: pivot columns = ⋯, 3.

20. (5 points) Find the complete solution x = xh + xp for the nonhomogeneous system [ ⋯ ] x = [ ⋯ ]. The homogeneous solution xh is a linear combination of Strang's special solutions. Symbol xp denotes a particular solution.
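The three closure steps in the kernel-theorem proof can be spot-checked numerically; the matrix A and the vectors below are hypothetical choices, used only to exercise the three checkpoints.

```python
from fractions import Fraction

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# a sample matrix (hypothetical; the theorem holds for any A)
A = [[1, -1, 0], [0, 1, -1]]
zero = [0, 0]

# two solutions of Ax = 0 ...
v1 = [1, 1, 1]
v2 = [2, 2, 2]
assert matvec(A, v1) == zero and matvec(A, v2) == zero

# ... and closure under addition and scalar multiplication, as in the proof
c = Fraction(-3, 2)
v_sum = [a + b for a, b in zip(v1, v2)]
v_scaled = [c * a for a in v1]
assert matvec(A, v_sum) == zero
assert matvec(A, v_scaled) == zero
```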

Answer: The augmented matrix has reduced row echelon form (last frame) equal to the matrix [ ⋯ ]. Then x1 = t, x2 = ⋯, x3 = ⋯ is the general solution in scalar form. The partial derivative on t gives the homogeneous solution basis vector: xh = c1 [ ⋯ ]. Set t = 0 in the scalar solution to find a particular solution xp = [ ⋯ ].

21. (5 points) Find the vector general solution x to the equation Ax = b for A = [ ⋯ ], b = [ ⋯ ].
Answer: The augmented matrix for this system of equations is [ ⋯ ]. The reduced row echelon form is found as follows:
    combo(1,2,−3) gives [ ⋯ ]
    combo(1,3,−4) gives [ ⋯ ]
    mult(3,−1/6) gives [ ⋯ ]  (the last frame)
The last frame, or RREF, implies the system x1 = ⋯, x3 = 4, x4 = ⋯. The lead variables are x1, x3, x4 and the free variable is x2. The last frame algorithm introduces an invented symbol t: the free variable is set to this symbol, then back-substitution into the lead variable equations of the last frame gives the general solution x1 = ⋯, x2 = t, x3 = 4, x4 = ⋯. Strang's special solution s is the partial derivative of x on the invented symbol t. A particular solution xp is obtained by setting all invented symbols to zero. Then x = xp + t s = [ ⋯ ] + t [ ⋯ ].

22. (5 points) Find the reduced row echelon form of the matrix A = [ ⋯ ].
Answer: It is the matrix [ ⋯ ].

23. (5 points) A ⋯×13 matrix A is given and the homogeneous system Ax = 0 is transformed to reduced row echelon form. There are 7 lead variables. How many free variables?
Answer: Because x has 13 variables and the rank plus the nullity is 13, there are 6 free variables.

24. (5 points) The rank of a ⋯×13 matrix A is 7. Find the nullity of A.
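The rref computation and the rank/nullity bookkeeping of problems 22-24 can be sketched with a small exact-arithmetic routine; the sample matrix is hypothetical.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form over exact rationals; returns (R, pivot_cols)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pivot is None:
            continue
        R[r], R[pivot] = R[pivot], R[r]              # swap
        R[r] = [x / R[r][c] for x in R[r]]           # mult: leading 1
        for i in range(rows):                        # combo: clear the column
            if i != r and R[i][c] != 0:
                R[i] = [x - R[i][c] * y for x, y in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# rank + nullity = number of columns (problems 23-24 reduce to 13 = 7 + 6)
M = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]
R, pivots = rref(M)
rank, nullity = len(pivots), len(M[0]) - len(pivots)
print(rank, nullity)   # prints: 2 1
```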

Answer: There are 13 variables. The rank plus the nullity is 13. The nullity is 6.

25. (5 points) Given a basis v1 = [ ⋯ ], v2 = [ ⋯ ] of R^2, and v = [ ⋯ ], then v = c1 v1 + c2 v2 for a unique set of coefficients c1, c2, called the coordinates of v relative to the basis v1, v2. Compute c1 and c2.

26. (5 points) Determine independence or dependence for the list of vectors [ ⋯ ], [ ⋯ ], [ ⋯ ].
Answer: Possible tests are the rank test, determinant test, pivot theorem, orthogonality test. Let A denote the augmented matrix of the three column vectors. The determinant is 3, nonzero, so the vectors are independent. The pivot theorem also applies: rref(A) is the identity matrix, so all columns are pivot columns, hence the three columns are independent. The rank test applies because the rank is 3, equal to the number of columns, hence independence.

27. (5 points) Check the independence tests which apply to prove that 1, x, x^3 are independent in the vector space V of all functions on −∞ < x < ∞.

Answer:
    Wronskian test: a Wronskian of f1, f2, f3 nonzero at x = x0 implies independence of f1, f2, f3.
    Rank test: vectors v1, v2, v3 are independent if their augmented matrix has rank 3.
    Determinant test: vectors v1, v2, v3 are independent if their square augmented matrix has nonzero determinant.
    Euler atom test: any finite set of distinct atoms is independent.
    Sample test: functions f1, f2, f3 are independent if a sampling matrix has nonzero determinant.
    Pivot test: vectors v1, v2, v3 are independent if their augmented matrix A has 3 pivot columns.
    Orthogonality test: a set of nonzero pairwise orthogonal vectors is independent.
The first, fourth and fifth apply to the given functions, while the others apply only to fixed vectors.

28. (5 points) Define S to be the set of all vectors x in R^3 such that x1 + x3 = 0 and x3 + x2 = x1. Prove that S is a subspace of R^3.
Answer: Let A = [ ⋯ ]. Then the restriction equations can be written as Ax = 0. Apply the nullspace theorem (also called the kernel theorem), which says that the nullspace of a matrix is a subspace.
Another solution: the given restriction equations are linear homogeneous algebraic equations. Therefore, S is the nullspace of some matrix B, hence a subspace of R^3. This solution uses the fact that linear homogeneous algebraic equations can be written as a matrix equation Bx = 0.
Another solution: verify the three checkpoints for a subspace S in the Subspace Criterion. This is quite long, and certainly the last choice for a method of proof.

29. (5 points) The 5×6 matrix A below has some independent columns. Report the independent columns of A, according to the Pivot Theorem.
    A = [ ⋯ ]
Answer: Find rref(A) = [ ⋯ ]. The pivot columns are ⋯ and 4.

30. (5 points) Let S be the subspace of R^4 spanned by the vectors v1 = [ ⋯ ], v2 = [ ⋯ ]. Find a Gram-Schmidt orthonormal basis of S.
Answer: Let y1 = v1 and u1 = y1/‖y1‖; then u1 = (1/√3) [ ⋯ ]. Let y2 = v2 minus the shadow projection of v2 onto the span of v1: y2 = v2 − ((v2 · v1)/(v1 · v1)) v1. Finally, u2 = y2/‖y2‖. We report the Gram-Schmidt basis u1 = (1/√3) [ ⋯ ], u2 = (1/√6) [ ⋯ ].
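The Gram-Schmidt procedure of problem 30 can be sketched as follows; the two vectors in R^4 are hypothetical stand-ins for the exam's lost entries.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize independent vectors: subtract shadow projections
    onto the earlier orthonormal directions, then normalize."""
    basis = []
    for v in vectors:
        y = list(v)
        for u in basis:
            d = dot(v, u)                            # shadow projection coefficient
            y = [yi - d * ui for yi, ui in zip(y, u)]
        norm = math.sqrt(dot(y, y))
        basis.append([yi / norm for yi in y])
    return basis

# sample spanning vectors (hypothetical entries)
v1 = [1, 1, 1, 0]
v2 = [1, 0, 1, 1]
u1, u2 = gram_schmidt([v1, v2])

# orthonormality check
assert abs(dot(u1, u2)) < 1e-12
assert abs(dot(u1, u1) - 1) < 1e-12 and abs(dot(u2, u2) - 1) < 1e-12
```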

31. (5 points) Find the orthogonal projection vector v (the shadow projection vector) of v2 onto v1, given v1 = [ ⋯ ], v2 = [ ⋯ ].
Answer: Use the formula v = d v1, where d = (v2 · v1)/(v1 · v1).

32. (5 points) Let A be an m×n matrix with independent columns. Prove that A^T A is invertible.
Answer: The matrix B = A^T A has dimension n×n. We prove that the nullspace of B = A^T A is the zero vector. Let x belong to R^n and assume Bx = 0; then multiply this equation by x^T to obtain x^T A^T A x = x^T 0 = 0. Therefore ‖Ax‖^2 = 0, or Ax = 0. Because A has independent columns, the nullspace of A is the zero vector, so x = 0. We have proved that the nullspace of B = A^T A is the zero vector. An n×n matrix B is invertible if and only if its nullspace is the zero vector, so B = A^T A is invertible.

33. (5 points) Let A be an m×n matrix with A^T A invertible. Prove that the columns of A are independent.
Answer: The columns of A are independent if and only if the nullspace of A is the zero vector. (If you don't know this result, then find it in Strang's book, or prove it yourself.) Assume x is in the nullspace of A, Ax = 0, then multiply by A^T to get A^T A x = 0. Because A^T A is invertible, x = 0, which proves the nullspace of A is the zero vector. We conclude that the columns of A are independent.

34. (5 points) Let A be an m×n matrix and v a vector orthogonal to the nullspace of A. Prove that v must be in the row space of A.

Answer: The fundamental theorem of linear algebra is summarized by rowspace^⊥ = nullspace. This relation implies nullspace^⊥ = rowspace, because for subspaces S we have (S^⊥)^⊥ = S. The conclusion follows.

35. (5 points) Define matrix A and vector b by the equations A = [ ⋯ ], b = [ ⋯ ]. Find the value of x2 by Cramer's Rule in the system Ax = b.
Answer: x2 = Δ2/Δ, where Δ2 = det(A with column 2 replaced by b) = 36 and Δ = det(A) = 4. Then x2 = 9.

36. (5 points) Assume A = [ ⋯ ]. Find the inverse of the transpose of A.
Answer: (A^T)^(-1) = (A^(-1))^T = [ ⋯ ].

37. (5 points) This problem uses the identity A adj(A) = adj(A) A = det(A) I, where det(A) is the determinant of matrix A. Symbol adj(A) is the adjugate or adjoint of A. The identity is used to derive the adjugate inverse identity A^(-1) = adj(A)/det(A). Let B be the 4×4 matrix given below, where ? means the value of the entry does not affect the answer to this problem. The second matrix is C = adj(B). Report the value of the determinant of the matrix C^(-1) B^(-1).
    B = [ ⋯ ? ⋯ ], C = [ ⋯ 4 ⋯ ]

Answer: The determinant of C^(-1) B^(-1) is 1/(|C| |B|). The identity CB = adj(B) B = |B| I implies |C| |B| = det(|B| I) = |B|^4. Because |C| = |B|^3, the answer is 1/|B|^4. Return to CB = |B| I and do one dot product to find the value |B| = 8. We report det(C^(-1) B^(-1)) = 1/|B|^4 = 1/8^4.

38. (5 points) Display the entry in row 3, column 4 of the adjugate matrix [or adjoint matrix] of A = [ ⋯ ]. Report both the symbolic formula and the numerical value.
Answer: The entry is the cofactor of A in row 4, column 3 = (−1)^(4+3) times the minor of A in (4,3) = ⋯.

39. (5 points) Consider a 3×3 real matrix A with eigenpairs ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ). Display an invertible matrix P and a diagonal matrix D such that AP = PD.
Answer: The columns of P are the eigenvectors and the diagonal entries of D are the eigenvalues, taken in the same order.

40. (5 points) Find the eigenvalues of the matrix A = [ ⋯ ]. To save time, do not find eigenvectors!
Answer: The characteristic polynomial is det(A − rI) = ( ⋯ − r)(3 − r)(r − ⋯ ) ⋯; the eigenvalues are ⋯, ⋯, ⋯, 3. Determinant expansion of det(A − λI) is by the cofactor method along column ⋯; this reduces it to a 3×3 determinant, which can be expanded by the cofactor method along column 3.
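The relation AP = PD used in problem 39 can be verified on a small concrete example; the matrix A = [[2, 1], [1, 2]] with eigenpairs (1, (1, −1)) and (3, (1, 1)) is an illustrative choice, not the exam's garbled matrix.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# sample symmetric matrix with known eigenpairs (hypothetical stand-in)
A = [[2, 1], [1, 2]]
P = [[1, 1], [-1, 1]]        # eigenvector columns
D = [[1, 0], [0, 3]]         # eigenvalues in matching order

# AP = PD, hence D = P^(-1) A P
assert matmul(A, P) == matmul(P, D)
```

Both sides equal [[1, 3], [-1, 3]]: column j of AP is A applied to the j-th eigenvector, and column j of PD is the eigenvalue times that eigenvector.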

41. (5 points) The matrix A = [ ⋯ ] has eigenvalues ⋯, ⋯, but it is not diagonalizable, because λ = ⋯ has only one eigenpair. Find an eigenvector for λ = ⋯. To save time, don't find the eigenvector for the other eigenvalue.
Answer: Because A − ⋯I = [ ⋯ ] has last frame B = [ ⋯ ], there is only one eigenpair for λ = ⋯, with eigenvector v = [ ⋯ ].

42. (5 points) Find the two complex eigenvectors corresponding to the complex eigenvalues ⋯ ± i for the matrix A = [ ⋯ ].
Answer: The eigenpairs are ( ⋯ + i, [ ⋯ ] ) and ( ⋯ − i, [ ⋯ ] ).

43. (5 points) Let A = [ ⋯ ]. Circle the possible eigenpairs of A: ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ).
Answer: The first and the last, because the test Ax = λx passes in both cases.

44. (5 points) Let I denote the 3×3 identity matrix. Assume given two 3×3 matrices B, C, which satisfy CP = PB for some invertible matrix P. Let C have eigenvalues 1, 1, 5. Find the eigenvalues of A = 2I + 3B.
Answer: Both B and C have the same eigenvalues, because det(B − λI) = det(P^(-1)CP − λP^(-1)P) = det(P^(-1)(C − λI)P) = det(C − λI). Further, both B and C are diagonalizable. The answer is the same for all such matrices, so the computation can be done for the diagonal matrix B = diag(1, 1, 5). In this case, A = 2I + 3B = diag(2, 2, 2) + diag(3, 3, 15) = diag(5, 5, 17), and the eigenvalues of A are 5, 5, 17.

45. (5 points) Let A be a 3×3 matrix with eigenpairs (4, v1), (3, v2), ( ⋯, v3). Let P denote the augmented matrix of the eigenvectors v2, v3, v1, in exactly that order. Display the answer for P^(-1) A P. Justify the answer with a sentence.
Answer: Because AP = PD, D = P^(-1) A P is the diagonal matrix of eigenvalues, taken in the order determined by the eigenpairs (3, v2), ( ⋯, v3), (4, v1). Then D = diag(3, ⋯, 4).

46. (5 points) The matrix A = [ ⋯ ] has eigenvalues 3, 3 and 3. Test A to see if it is diagonalizable, and if it is, then display three eigenpairs of A.
Answer: Compute rref(A − 3I) = [ ⋯ ]. This has rank 2, nullity 1, so there is just one eigenvector [ ⋯ ] for λ = 3. Not diagonalizable, no Fourier's model; it is not possible to display three eigenpairs.

47. (5 points) Assume A is a given 4×4 matrix with eigenvalues ⋯, ⋯, 3 ± 2i. Find the eigenvalues of 4A − 3I, where I is the identity matrix.
Answer: Such a matrix is diagonalizable, because of four distinct eigenvalues. Then 4B − 3I has the same eigenvalues for all matrices B similar to A. In particular, 4A − 3I has the same eigenvalues as 4D − 3I, where D is the diagonal matrix with entries ⋯, ⋯, 3 + 2i, 3 − 2i. Compute 4D − 3I = diag( ⋯, ⋯, 9 + 8i, 9 − 8i). The answer is ⋯, ⋯, 9 + 8i, 9 − 8i.

48. (5 points) Find the eigenvalues of the 5×5 matrix A = [ ⋯ ]. To save time, do not find eigenvectors!
Answer: The characteristic polynomial is det(A − rI) = (r^2 + 6)(3 − r)(r − ⋯ ) ⋯; the eigenvalues are ⋯, ⋯, 3, ±√6 i. Determinant expansion is by the cofactor method along column ⋯; this reduces it to a 4×4 determinant, which can be expanded as a product of two quadratics. In detail, we first get det(A − rI) = (3 − r) det(B − rI), where B = [ ⋯ ] is 4×4. So we have one eigenvalue 3, and we find the eigenvalues of B. Matrix B is a block matrix B = [ B1 B2 ; 0 B3 ], where B1, B2, B3 are all 2×2 matrices. Then B − rI = [ B1 − rI  B2 ; 0  B3 − rI ]. Using the determinant product theorem for such special block matrices (zero in the lower left block) gives det(B − rI) = det(B1 − rI) det(B3 − rI). So the eigenvalues of A are 3 together with the eigenvalues of B1 and B3. We report 3, ±√6 i, ⋯, ⋯. It is also possible to find the eigenvalues of B directly by cofactor expansion of det(B − rI).

49. (5 points) Consider a 3×3 real matrix A with eigenpairs ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ), ( ⋯, [ ⋯ ] ).
(1) Display an invertible matrix P and a diagonal matrix D such that AP = PD.
(2) Display a matrix product formula for A, but do not evaluate the matrix products, in order to save time.
Answer: (1) P = [ ⋯ ] (eigenvector columns), D = [ ⋯ ] (the eigenvalues, in the same order). (2) AP = PD implies A = P D P^(-1).

50. (5 points) Assume two 3×3 matrices A, B have exactly the same characteristic equations. Let A have eigenvalues 2, 3, 4. Find the eigenvalues of (1/3)B − 2I, where I is the identity matrix.
Answer: Because the answer is the same for all matrices similar to A (that is, all B = P A P^(-1)), it suffices to answer the question for diagonal matrices. We know A is diagonalizable, because it has distinct eigenvalues. So choose D equal to the diagonal matrix with entries 2, 3, 4. Compute (1/3)D − 2I = diag(2/3 − 2, 1 − 2, 4/3 − 2). Then the eigenvalues are −4/3, −1, −2/3.

51. (5 points) Let 3×3 matrices A and B be related by AP = PB for some invertible matrix P. Prove that the roots of the characteristic equations of A and B are identical.
Answer: The proof depends on the identity det(A − rI) = det(PBP^(-1) − rI) = det(P(B − rI)P^(-1)) and the determinant product theorem det(CD) = det(C) det(D). We get det(A − rI) = det(P) det(B − rI) det(P^(-1)) = det(B − rI). Then A and B have exactly the same characteristic equation, hence exactly the same eigenvalues.

52. (5 points) Find the eigenvalues of the matrix B = [ ⋯ ].

The characteristic polynomial is detb ri) = r)5 r)5 r)3 r). The eigenvalues are, 3, 5, 5. It is possible to directly find the eigenvalues of B by cofactor expansion of B ri. An alternate method is described below, which depends upon a determinant product theorem for special block matrices, such as encountered in this example. ) B B Matrix B is a block matrix B =, where B, B, B 3 are all matrices. Then B ) 3 B ri B B ri =. Using the determinant product theorem for such special B 3 ri block matrices zero in the left lower block) gives B ri = B ri B 3 ri. So the answer is that B has eigenvalues equal to the eigenvalues of B and B 3. These are quickly found by Sarrus Rule applied to the two determinants B ri = r)5 r) and B 3 ri = r 8r + 5 = 5 r)3 r). 53. 5 points) Let W be the column space of A = ˆb be the near point to b in the subspace W. Find ˆb. and let b =. Let The columns of A are independent. The normal equation is A T A y = A T b, which in explicit ) ) ) 3 form is y =. The answer is y =. Then ˆb = Aỹ =. 54. 5 points) There are real matrices A such that A = 4I, where I is the identity matrix. Give an example of one such matrix A and then verify that A + 4I =. Choose any matrix whose characteristic equation is λ + 4 =. Then A + 4I = by the Cayley-Hamilton theorem. 55. 5 points) Let Q =< q q > be orthogonal and D a diagonal matrix with diagonal entries λ, λ. Prove that the matrix A = QDQ T satisfies A = λ q q T + λ q q T.

Answer: Let B = λ1 q1 q1^T + λ2 q2 q2^T. We prove A = B. First observe that both A and B are symmetric. Because the columns of Q form a basis of R^2, it suffices to prove that x^T A = x^T B for x a column of Q. For example, take x = q1. Then x^T A = (A^T q1)^T = (A q1)^T = (λ1 q1)^T = λ1 q1^T. Orthogonality of Q implies x^T B = (B q1)^T = (λ1 q1 q1^T q1 + λ2 q2 q2^T q1)^T = (λ1 q1)^T = λ1 q1^T. Repeat for subscript 2 to complete the proof.

56. (5 points) A 2×2 matrix A is defined to be positive definite if and only if x^T A x > 0 for nonzero x. Which of these matrices are positive definite?
    [ ⋯ ], [ ⋯ ], [ ⋯ 6 6 ⋯ ]
Answer: Only the second matrix. A useful test is positive eigenvalues. Another is principal determinants all positive.

57. (5 points) Let A be a real symmetric matrix. Prove that the eigenvalues of A are real numbers.
Answer: Begin with Ax = λx. Take the conjugate of both sides to get a new equation. Because the conjugate of a real matrix is itself, the new equation looks like Ay = λ̄y, where y is the conjugate of x (formally, replace i by −i in the components of x to obtain y; symbol λ̄ is the complex conjugate of λ). Transpose this new equation to get y^T A = λ̄ y^T, possible because A = A^T. Taking dot products two ways gives y^T A x = λ y^T x and y^T A x = λ̄ y^T x. Because y^T x = ‖x‖^2 > 0, we can cancel to get λ = λ̄, proving the eigenvalue λ is real.

58. (5 points) Let B be a real 3×4 matrix. Prove that the eigenvalues of B^T B are non-negative.
Answer: Let A = B^T B. An eigenpair (λ, v) of A satisfies Av = λv, v ≠ 0. Already known is that the eigenvalue λ and the eigenvector v are real, because A = B^T B is a symmetric matrix. Compute ‖Bv‖^2 = (Bv)^T (Bv) = v^T B^T B v = v^T A v = λ v^T v = λ ‖v‖^2. Therefore, λ is non-negative.

59. (5 points) The spectral theorem says that a symmetric matrix A can be factored into A = Q D Q^T, where Q is orthogonal and D is diagonal. Find Q and D for the symmetric matrix A = [ ⋯ ].
Answer: Start with the characteristic equation r^2 − 6r + 8 = 0, having roots r = 2, 4. Compute the eigenpairs (2, v1), (4, v2), where v1 = [ ⋯ ] and v2 = [ ⋯ ]. The two vectors are orthogonal but not of unit length. Unitize them to get u1 = v1/‖v1‖, u2 = v2/‖v2‖. Then Q = <u1 | u2> = [ ⋯ ] and D = diag(2, 4).

60. (5 points) Show that if B is an invertible matrix and A is similar to B, with A = P B P^(-1), then A is invertible.
Answer: The determinant product theorem applies to obtain det(A) = det(B) ≠ 0, hence A is invertible.

61. (5 points) Write out the singular value decomposition for the matrix A = [ ⋯ ].
Answer: A = U Σ V^T = [ ⋯ ] [ √8 ⋯ ] [ ⋯ ]^T.

62. (5 points) Strang's Four Fundamental Subspaces are the nullspace of A, the nullspace of A^T, the row space of A and the column space of A. Describe, using a figure or drawing, the locations in the matrices U, V of the singular value decomposition A = U Σ V^T which are consumed by the four fundamental subspaces of A.
Answer: A = < colspace(A) | nullspace(A^T) > Σ < rowspace(A) | nullspace(A) >^T. The dimensions of the spaces, left to right, are r, m − r, r, n − r, where A is m×n and r is the rank of A.

63. (5 points) Give examples for a vertical shear and a horizontal shear in the plane. Expected is a matrix A which represents the linear transformation.
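The shear and rotation matrices of problems 63 and 64 can be exercised on concrete vectors:

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rotation(theta):
    # for theta > 0 this matrix rotates the plane clockwise
    return [[math.cos(theta), math.sin(theta)],
            [-math.sin(theta), math.cos(theta)]]

def horizontal_shear(k):
    return [[1, k], [0, 1]]

# e1 = (1, 0) rotated clockwise by 90 degrees lands on (0, -1)
x, y = matvec(rotation(math.pi / 2), [1, 0])
assert abs(x) < 1e-12 and abs(y + 1) < 1e-12

# the horizontal shear moves (0, 1) sideways by k
assert matvec(horizontal_shear(3), [0, 1]) == [3, 1]
```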

k ) is a horizontal shear, k ) is a vertical shear 64. 5 points) Give examples for clockwise and counterclockwise rotations in the plane. Expected is a matrix A which represents the linear transformation. cos θ sin θ sin θ cos θ ) for θ > rotates clockwise and for θ < rotates counter clockwise. 65. 5 points) Let the linear transformation T from R 3 to R 3 be defined by its action on three independent vectors: 3 4 4 5 T = 4, T =, T =. Find the unique 3 3 matrix A such that T is defined by the matrix multiply equation T x) = A x. 3 A = 4 4 5 4 can be solved for matrix A. The answer is A = 3 5 3. 66. 5 points) Let A be an m n matrix. Denote by S the row space of of A and S the column space of A. Prove that T : S S defined by T x) = A x is one-to-one and onto. Suppose x is in the rowspace. The fundamental theorem of linear algebra says x is perpendicular to the nullspace. of A. So, if x, x are vectors in the rowspace of A and A x ) = A x then A x x ) =. This implies x = x x belongs to the nullspace of A. But x is a linear combination of vectors in S, so it is in S, which is perpendicular to the nullspace. The intersection of V and V is the zero vector, so x =, which says x = x, proving T is one-to-one. The proof for onto is done by solving the equation A x = y where y is any vector in the column space of A. We have to find x in S that solves the equation. Select any z such that y = A z. Because the rowspace is perpendicular to the nullspace, then there are unique 3

vectors x, u such that z = x + u, where u is in the nullspace and x is in the row space. Then y = Az = Ax + Au = Ax + 0 = Ax. We have solved the equation for x in the row space. The proof is complete.

Essay Questions

67. (5 points) Define an Elementary Matrix. Display the fundamental matrix multiply equation which summarizes a sequence of swap, combo, multiply operations, transforming a matrix A into a matrix B.

An elementary matrix is the matrix E resulting from one elementary row operation (swap, combination, multiply) performed on the identity matrix I. The fundamental equation looks like E_k ... E_2 E_1 A = B, but this is not the complete answer, because the elementary matrices have to be explained, relative to the elementary row operations which transformed A into B.

68. (5 points) Let V be a vector space and S a subset of V. Define what it means for S to be a subspace of V. The definition is sometimes called the Subspace Criterion, a theorem with three requirements, with the conclusion that S is a subspace of V.

The definition can be found in the textbook, although the naming convention might not be the same. In some books it is taken as the definition; in other books it is derived from a different definition, then recorded as a theorem called the Subspace Criterion: (1) Zero is in S; (2) Sums of vectors in S are in S; (3) Scalar multiples of vectors in S are in S. The important underlying assumption is that addition and scalar multiplication are inherited from V.

69. (5 points) The null space S of an m x n matrix M is a subspace of R^n. This is called the Kernel Theorem, and it is proved from the Subspace Criterion. Both theorems conclude that some subset is a subspace, but they have different hypotheses. Distinguish the Kernel Theorem from the Subspace Criterion, as viewed from hypotheses.
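The elementary-matrix equation E_k ... E_2 E_1 A = B described above can be illustrated numerically. A minimal sketch with a hypothetical 2 x 2 matrix (the entries and the particular row operations are assumptions, chosen for illustration):

```python
import numpy as np

# Hypothetical starting matrix.
A = np.array([[2.0, 4.0],
              [1.0, 3.0]])

# multiply(1, 1/2): scale row 1 by 1/2, written as an elementary matrix.
E1 = np.array([[0.5, 0.0],
               [0.0, 1.0]])

# combo(1, 2, -1): add -1 times row 1 to row 2, as an elementary matrix.
E2 = np.array([[1.0, 0.0],
               [-1.0, 1.0]])

# The product of the elementary matrices applies both row operations.
B = E2 @ E1 @ A
print(np.allclose(B, [[1.0, 2.0], [0.0, 1.0]]))  # row-reduced result
```

Each E_i is the identity matrix after one row operation, and left-multiplying by E_i performs that same operation on A.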

The distinction is that the Kernel Theorem applies only to fixed vectors, that is, the vector space R^n, whereas the Subspace Criterion applies to any vector space.

70. (5 points) Least squares can be used to find the best fit line for the points ( , ), ( , ), (3, ). Without finding the line equation, describe how to do it, in a few sentences.

Find a matrix equation Av = b using the line equation y = v1 x + v2, where v = (v1, v2)^T is the vector of unknowns. Then solve the normal equation A^T A v = A^T b. A full solution is expected, with a formula for A. But don't solve the normal equation.

71. (5 points) State the Fundamental Theorem of Linear Algebra. Include Part 1: the dimensions of the four subspaces, and Part 2: the orthogonality equations for the four subspaces.

Part 1. The dimensions are n - r, r, r, m - r for nullspace(A), colspace(A), rowspace(A), nullspace(A^T). Part 2. The orthogonality relation is rowspace ⊥ nullspace, for both A and A^T. A full statement is expected, not the brief one given here.

72. (5 points) Display the equation for the Singular Value Decomposition (SVD), then cite the conditions for each matrix. Finish with a written description of how to construct the matrices in the SVD.

Let r be the rank of the m x n matrix A. Then A = UΣV^T, where A v_i = σ_i u_i, the matrices U = < u_1 | ... | u_m > and V = < v_1 | ... | v_n > are orthogonal, and Σ = diag(σ_1, ..., σ_r, 0, ..., 0). The singular values are σ_i = sqrt(λ_i), where {λ_i} is the list of real nonnegative eigenvalues of A^T A. Only the positive values σ_i, i = 1, ..., r, where r is the rank of A, are entered into matrix Σ, and they must appear in decreasing order. Because there is a full set of n orthonormal eigenpairs (λ, v) for the n x n symmetric matrix A^T A, the matrix V is constructed from the list of orthonormal eigenvectors {v_i}, i = 1, ..., n. Matrix U is constructed from an orthonormal basis {u_i}, i = 1, ..., m, obtained from Gram-Schmidt, starting with the list of orthogonal vectors u_i = (1/σ_i) A v_i, i = 1, ..., r, after appending to the list m - r independent vectors to complete a basis of R^m.

73.
(5 points) State the Spectral Theorem for symmetric matrices. Include the important results included in the spectral theorem, about real eigenvalues and diagonalizability. Then discuss the spectral decomposition.

A real symmetric n x n matrix A has only real eigenvalues. Matrix A has n eigenpairs; in short, it is diagonalizable. To each eigenvalue of multiplicity k there correspond k independent eigenvectors. These eigenvectors span a subspace of dimension k which, by Gram-Schmidt, is spanned by k orthonormal vectors. Two such subspaces corresponding to different eigenvalues are orthogonal. The spectral decomposition of A is A = QDQ^T, where D is a diagonal matrix of eigenvalues and Q is an orthogonal matrix of corresponding eigenvectors.
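The SVD construction described earlier (eigenpairs of A^T A give V and the singular values, then u_i = (1/σ_i) A v_i gives the columns of U) can be verified numerically. A minimal sketch with a hypothetical full-rank 2 x 2 matrix, so the Gram-Schmidt completion step is not needed:

```python
import numpy as np

# Hypothetical 2 x 2 matrix of rank 2, chosen for illustration.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Eigenpairs of the symmetric matrix A^T A: columns of V are orthonormal
# eigenvectors, and the singular values are sigma_i = sqrt(lambda_i).
lam, V = np.linalg.eigh(A.T @ A)
order = np.argsort(lam)[::-1]      # reorder so singular values decrease
lam, V = lam[order], V[:, order]
sigma = np.sqrt(lam)

# Columns of U from u_i = (1/sigma_i) A v_i (full rank: no completion step).
U = (A @ V) / sigma
Sigma = np.diag(sigma)

print(np.allclose(U.T @ U, np.eye(2)))   # U is orthogonal
print(np.allclose(U @ Sigma @ V.T, A))   # A = U Sigma V^T
```

For a rank-deficient or rectangular A, only the first r columns of U come from u_i = (1/σ_i) A v_i; the remaining m - r columns must be appended by Gram-Schmidt, exactly as the written description says.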