LINEAR ALGEBRA, -I PARTIAL EXAM: SOLUTIONS TO PRACTICE PROBLEMS

Problem

(a) For each of the matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable, and (iii) if it is diagonalizable, find an invertible matrix P and a diagonal matrix D such that A = PDP^(-1). If the matrix is orthogonally diagonalizable, then try to find such a P which is an orthogonal matrix.

HINT: The eigenvalues for the matrix A below are and .

A = , B = 9 , E =

Solution: The matrix A is orthogonally diagonalizable, since it is symmetric (see section 5). To diagonalize it, we first find eigenvectors of A. (In the following solution, I will find an orthonormal set of eigenvectors, to show you how to orthogonally diagonalize A; however, for the question as stated, you could use any set of three linearly independent eigenvectors to get a correct answer.)

For the first eigenvalue, the associated eigenvectors are the solutions of the corresponding linear system, whose solution is given by equations expressing x1 and x2 in terms of the free variable x3. So (,, ) is an eigenvector. We eventually want an orthonormal basis of R^3, so we divide this eigenvector by its norm to get the unit eigenvector u1.
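As an aside, this kind of orthogonal diagonalization of a symmetric matrix is easy to check numerically. The sketch below uses a made-up symmetric matrix (not the A from this problem) and numpy's eigh, which returns real eigenvalues and an orthonormal set of eigenvectors for a symmetric matrix:

```python
import numpy as np

# A hypothetical symmetric matrix, used only to illustrate the technique;
# it is NOT the matrix A from this problem.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# eigh is intended for symmetric (Hermitian) matrices: the columns of P
# form an orthonormal set of eigenvectors, and eigvals are the eigenvalues.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

# For an orthogonal P, the inverse is the transpose, so A = P D P^T.
print(np.allclose(A, P @ D @ P.T))      # the diagonalization checks out
print(np.allclose(P.T @ P, np.eye(3)))  # P is orthogonal
```

The same check with np.linalg.eig on a non-symmetric matrix would still diagonalize it (when possible), but the eigenvector matrix would not in general be orthogonal.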
For the other eigenvalue, the associated eigenvectors are the solutions of the corresponding linear system, whose solution is given by one equation expressing x1 in terms of x2 and x3, with x2 and x3 both free. Choosing (, ) and (, ) for the values of x2 and x3 in the above solution, we get (,, ) and (,, ), two linearly independent eigenvectors of A for this eigenvalue. Since we want an orthonormal set of eigenvectors, we apply the Gram-Schmidt process to these two vectors to get an orthogonal pair (see the solution to Problem below for details on how to apply Gram-Schmidt), and then we make each of these vectors into a unit vector by dividing it by its norm, to get u2 and u3.

Finally, as explained in section 5, we can use the orthonormal set of eigenvectors {u1, u2, u3} and the eigenvalues we have found above to construct matrices P and D that orthogonally diagonalize A: P = [u1 u2 u3], and D is the diagonal matrix whose entries are the corresponding eigenvalues.

For B, we again start by finding its eigenvalues. B is upper triangular, so its eigenvalues are the entries on the main diagonal. (Alternatively, it is easy to find the eigenvalues of B by solving the characteristic equation, which is 0 = ( − λ)( − λ).) To find an eigenvector corresponding to the first eigenvalue, we solve the corresponding linear system. The general solution is that x1 = and x2 is free, so one possible eigenvector is (, ). To find an eigenvector corresponding to the other eigenvalue, we solve the corresponding linear system:
The general solution is that x1 = x2 and x2 is free, so one possible eigenvector is (, ). So B is diagonalizable, since the two eigenvectors we have found form a basis of R^2 (as explained in section 5). Using the technique in section 5, we can let

P = , D =

(The columns of P are the two eigenvectors we have found, and the diagonal entries of D are the associated eigenvalues.) However, since the matrix B is not symmetric, it is not orthogonally diagonalizable.

Finally, we consider the matrix E. For this matrix, the characteristic equation is 0 = det(E − λI), and expanding the determinant along the first column, this becomes 0 = ( − λ)^3. Therefore, E has only one eigenvalue, which is a triple root of the characteristic polynomial. Note that there do exist matrices with only one eigenvalue which are diagonalizable. (For practice, you could try to find an example of such a matrix.) So we are not yet finished with this problem!

The eigenspace of E for this eigenvalue is the null space of the matrix E − λI. This null space is the set of all (x1, x2, x3) in R^3 such that x1 = , which has as a basis the set {(,, ), (,, )}. So the dimension of this eigenspace is only 2. This means that the geometric multiplicity of the eigenvalue is 2, but its algebraic multiplicity is 3 (as we saw in the characteristic equation above). Therefore, E is not diagonalizable.

(b) Give an example of a matrix C which is not similar to the matrix A in part (a). (The definition from section 5 of the textbook says: C is similar to A if and only if there is an invertible matrix P such that C = P^(-1)AP.)
Solution: Any two similar matrices have the same set of eigenvalues. So we just have to find a matrix C whose set of eigenvalues is different from that of A; there are many possible correct answers here, such as C = (whose eigenvalues are , , and ).

Problem

(a) Find an orthonormal basis for the subspace W of R^3 spanned by the two vectors below.

,

Solution: Call this subspace S. First, note that the given set is linearly independent (since neither vector is a multiple of the other), so it is a basis for S. So we can use the Gram-Schmidt process (section 4, Theorem ) to find an orthogonal basis of S: let v1 be the first of the two vectors, and let v2 be the second vector minus its projection onto v1. Then v1 and v2 are orthogonal (since v1 · v2 = 0), but to get an orthonormal basis, we must divide each vector by its norm. Dividing each of v1 and v2 by its norm, we get an orthonormal basis {v1, v2} of S.

(b) Extend the basis you found in (a) to an orthonormal basis of R^3.

Solution: This comes down to finding a vector in the orthogonal complement of S. As explained in section , the orthogonal complement of S is
equal to the null space of the matrix whose rows are the two basis vectors of S. This matrix row-reduces so that the solution of the associated homogeneous linear system is given by equations expressing x1 and x2 in terms of the free variable x3. Choosing a value for x3, we get a solution, which we normalize to get a unit vector v3. So {v1, v2, v3} is an orthonormal basis of R^3.

(c) Is there a unique way to extend the basis you found in (a) to an orthonormal basis of R^3? Explain.

No: the vector v3 found above is not the only way to extend {v1, v2} to an orthonormal basis of R^3, because the vector −v3 works, too; it is also a unit vector orthogonal to v1 and v2. (But it turns out that v3 and −v3 are the only two possibilities.)

(d) Find the orthogonal projection of the vector (,, ) onto the subspace W.

Solution: We will use the orthonormal basis for W that we found in part (a). Call this basis {w1, w2}, and write x for the given vector. The orthogonal projection of x onto W is

proj_W x = (x · w1) w1 + (x · w2) w2

(note that we do not have to divide by w1 · w1 or w2 · w2, since both equal 1 because the basis is orthonormal). Computing the two dot products and adding the resulting scalar multiples of w1 and w2 gives the projection.
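As a supplement, the Gram-Schmidt process from part (a) and the projection formula from part (d) can be sketched numerically. The vectors below are hypothetical stand-ins (not the ones from this problem, whose entries are not shown above):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for u in basis:
            w = w - (w @ u) * u          # subtract the projection onto u
        basis.append(w / np.linalg.norm(w))  # normalize to a unit vector
    return basis

# Hypothetical spanning set for a 2-dimensional subspace W of R^3.
v1 = np.array([1, 1, 0])
v2 = np.array([1, 0, 1])
w1, w2 = gram_schmidt([v1, v2])

# Orthogonal projection of x onto W using the orthonormal basis:
# proj_W(x) = (x . w1) w1 + (x . w2) w2  -- no division needed,
# because w1 . w1 = w2 . w2 = 1.
x = np.array([1.0, 2.0, 3.0])
proj = (x @ w1) * w1 + (x @ w2) * w2

# Sanity checks: {w1, w2} is orthonormal, and the residual x - proj
# is orthogonal to W, as the projection theorem requires.
assert np.isclose(w1 @ w2, 0) and np.isclose(np.linalg.norm(w1), 1)
assert np.isclose((x - proj) @ w1, 0) and np.isclose((x - proj) @ w2, 0)
print(proj)
```

The same routine also illustrates part (b): applying it to {v1, v2, e} for any third vector e outside the span produces a unit vector orthogonal to W, which is exactly the v3 used to extend the basis.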