MATH 304 505, Spring 2011

Sample problems for Test 2: Solutions

Any problem may be altered or replaced by a different one!

Problem 1 (15 pts.) Let M_{2,2}(R) denote the vector space of 2x2 matrices with real entries. Consider a linear operator L : M_{2,2}(R) → M_{2,2}(R) given by

    L [ x y ] = [ x y ] [ 1 2 ]
      [ z w ]   [ z w ] [ 3 4 ].

Find the matrix of the operator L with respect to the basis

    E_1 = [ 1 0 ],   E_2 = [ 0 1 ],   E_3 = [ 0 0 ],   E_4 = [ 0 0 ]
          [ 0 0 ]          [ 0 0 ]          [ 1 0 ]          [ 0 1 ].

Let M_L denote the desired matrix. By definition, M_L is a 4x4 matrix whose columns are the coordinates of the matrices L(E_1), L(E_2), L(E_3), L(E_4) with respect to the basis E_1, E_2, E_3, E_4. We have that

    L(E_1) = [ 1 0 ] [ 1 2 ] = [ 1 2 ] = 1E_1 + 2E_2 + 0E_3 + 0E_4,
             [ 0 0 ] [ 3 4 ]   [ 0 0 ]

    L(E_2) = [ 0 1 ] [ 1 2 ] = [ 3 4 ] = 3E_1 + 4E_2 + 0E_3 + 0E_4,
             [ 0 0 ] [ 3 4 ]   [ 0 0 ]

    L(E_3) = [ 0 0 ] [ 1 2 ] = [ 0 0 ] = 0E_1 + 0E_2 + 1E_3 + 2E_4,
             [ 1 0 ] [ 3 4 ]   [ 1 2 ]

    L(E_4) = [ 0 0 ] [ 1 2 ] = [ 0 0 ] = 0E_1 + 0E_2 + 3E_3 + 4E_4.
             [ 0 1 ] [ 3 4 ]   [ 3 4 ]

It follows that

    M_L = [ 1 3 0 0 ]
          [ 2 4 0 0 ]
          [ 0 0 1 3 ]
          [ 0 0 2 4 ].

Problem 2 (20 pts.) Find a linear polynomial which is the best least squares fit to the following data:

      x   | -2 | -1 | 0 | 1 | 2
     f(x) | -3 | -2 | 1 | 2 | 5

We are looking for a function f(x) = c_1 + c_2 x, where c_1, c_2 are unknown coefficients. The data of the problem give rise to an overdetermined system of linear equations in the variables c_1 and c_2:

    c_1 - 2c_2 = -3,
    c_1 - c_2 = -2,
    c_1 = 1,
    c_1 + c_2 = 2,
    c_1 + 2c_2 = 5.
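As a quick sanity check (ours, not part of the original solutions), the least squares coefficients for this overdetermined system can be computed in plain Python from the normal equations, using exact rational arithmetic:

```python
from fractions import Fraction

# Data points from Problem 2.
xs = [-2, -1, 0, 1, 2]
ys = [-3, -2, 1, 2, 5]

# Normal equations A^T A c = A^T y for f(x) = c1 + c2*x,
# where the rows of A are (1, x_i).
n   = len(xs)
sx  = sum(xs)                              # sum of x_i
sxx = sum(x * x for x in xs)               # sum of x_i^2
sy  = sum(ys)                              # sum of y_i
sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i

# Solve the 2x2 system [[n, sx], [sx, sxx]] (c1, c2)^T = (sy, sxy)^T
# by Cramer's rule.
det = n * sxx - sx * sx
c1 = Fraction(sxx * sy - sx * sxy, det)
c2 = Fraction(n * sxy - sx * sy, det)

print(c1, c2)  # 3/5 2
```

The output matches the coefficients derived below: c_1 = 3/5 and c_2 = 2.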
This system is inconsistent. We can represent it as a matrix equation Ac = y, where

    A = [ 1 -2 ]                       [ -3 ]
        [ 1 -1 ]        [ c_1 ]        [ -2 ]
        [ 1  0 ],   c = [ c_2 ],   y = [  1 ]
        [ 1  1 ]                       [  2 ]
        [ 1  2 ]                       [  5 ].

The least squares solution c of the above system is a solution of the system A^T Ac = A^T y. Here

    A^T A = [ 5  0 ],   A^T y = [  3 ],
            [ 0 10 ]            [ 20 ]

so the normal system reads

    [ 5  0 ] [ c_1 ]   [  3 ]        c_1 = 3/5,
    [ 0 10 ] [ c_2 ] = [ 20 ]   ⇔    c_2 = 2.

Thus the function f(x) = 3/5 + 2x is the best least squares fit to the above data among linear polynomials.

Problem 3 (25 pts.) Let V be a subspace of R^4 spanned by the vectors x_1 = (1, 1, 1, 1) and x_2 = (1, 0, 3, 0).

(i) Find an orthonormal basis for V.

First we apply the Gram-Schmidt orthogonalization process to the vectors x_1, x_2 and obtain an orthogonal basis v_1, v_2 for the subspace V:

    v_1 = x_1 = (1, 1, 1, 1),
    v_2 = x_2 - ((x_2·v_1)/(v_1·v_1)) v_1 = (1, 0, 3, 0) - (4/4)(1, 1, 1, 1) = (0, -1, 2, -1).

Then we normalize the vectors v_1, v_2 to obtain an orthonormal basis w_1, w_2 for V:

    w_1 = v_1/‖v_1‖ = (1/2) v_1 = (1/2)(1, 1, 1, 1),
    w_2 = v_2/‖v_2‖ = (1/√6) v_2 = (1/√6)(0, -1, 2, -1).
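The Gram-Schmidt computation in part (i) can be replayed in a few lines of Python (a sketch, not part of the original solutions; the `dot` helper and variable names are ours):

```python
import math

def dot(u, v):
    """Standard dot product on R^4."""
    return sum(a * b for a, b in zip(u, v))

x1 = [1.0, 1.0, 1.0, 1.0]
x2 = [1.0, 0.0, 3.0, 0.0]

# Gram-Schmidt: v1 = x1, v2 = x2 minus the projection of x2 onto v1.
v1 = x1
t = dot(x2, v1) / dot(v1, v1)                  # = 4/4 = 1
v2 = [b - t * a for a, b in zip(v1, x2)]       # (0, -1, 2, -1)

# Normalize to obtain the orthonormal basis w1, w2.
w1 = [c / math.sqrt(dot(v1, v1)) for c in v1]  # (1/2)(1, 1, 1, 1)
w2 = [c / math.sqrt(dot(v2, v2)) for c in v2]  # (1/sqrt(6))(0, -1, 2, -1)
```

A final check that `w1` and `w2` are unit vectors with `dot(w1, w2) == 0` (up to floating-point error) confirms the hand computation.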
(ii) Find an orthonormal basis for the orthogonal complement V^⊥.

Since the subspace V is spanned by the vectors (1, 1, 1, 1) and (1, 0, 3, 0), it is the row space of the matrix

    A = [ 1 1 1 1 ]
        [ 1 0 3 0 ].

Then the orthogonal complement V^⊥ is the nullspace of A. To find the nullspace, we convert the matrix A to reduced row echelon form:

    [ 1 1 1 1 ] → [ 1 0 3 0 ] → [ 1 0  3 0 ]
    [ 1 0 3 0 ]   [ 1 1 1 1 ]   [ 0 1 -2 1 ].

Hence a vector (x_1, x_2, x_3, x_4) ∈ R^4 belongs to V^⊥ if and only if

    x_1 + 3x_3 = 0,           x_1 = -3x_3,
    x_2 - 2x_3 + x_4 = 0  ⇔   x_2 = 2x_3 - x_4.

The general solution of the system is

    (x_1, x_2, x_3, x_4) = (-3t, 2t - s, t, s) = t(-3, 2, 1, 0) + s(0, -1, 0, 1),

where t, s ∈ R. It follows that V^⊥ is spanned by the vectors x_3 = (0, -1, 0, 1) and x_4 = (-3, 2, 1, 0).

It remains to orthogonalize and normalize this basis for V^⊥:

    v_3 = x_3 = (0, -1, 0, 1),
    v_4 = x_4 - ((x_4·v_3)/(v_3·v_3)) v_3 = (-3, 2, 1, 0) - (-2/2)(0, -1, 0, 1) = (-3, 1, 1, 1),
    w_3 = v_3/‖v_3‖ = (1/√2)(0, -1, 0, 1),
    w_4 = v_4/‖v_4‖ = (1/(2√3)) v_4 = (1/(2√3))(-3, 1, 1, 1).

Thus the vectors w_3 = (1/√2)(0, -1, 0, 1) and w_4 = (1/(2√3))(-3, 1, 1, 1) form an orthonormal basis for V^⊥.

Alternative solution: Suppose that an orthonormal basis w_1, w_2 for the subspace V has been extended to an orthonormal basis w_1, w_2, w_3, w_4 for R^4. Then the vectors w_3, w_4 form an orthonormal basis for the orthogonal complement V^⊥.

We know that the vectors v_1 = (1, 1, 1, 1) and v_2 = (0, -1, 2, -1) form an orthogonal basis for V. This basis can be extended to a basis for R^4 by adding two vectors from the standard basis. For example, we can add the vectors e_3 = (0, 0, 1, 0) and e_4 = (0, 0, 0, 1). The vectors v_1, v_2, e_3, e_4 do form a basis for R^4 since the matrix whose rows are these vectors is nonsingular:

    | 1  1 1  1 |
    | 0 -1 2 -1 | = -1 ≠ 0.
    | 0  0 1  0 |
    | 0  0 0  1 |

To orthogonalize the basis v_1, v_2, e_3, e_4, we apply the Gram-Schmidt process (note that the vectors v_1 and v_2 are already orthogonal):

    v_3 = e_3 - ((e_3·v_1)/(v_1·v_1)) v_1 - ((e_3·v_2)/(v_2·v_2)) v_2
        = (0, 0, 1, 0) - (1/4)(1, 1, 1, 1) - (2/6)(0, -1, 2, -1) = (1/12)(-3, 1, 1, 1),

    v_4 = e_4 - ((e_4·v_1)/(v_1·v_1)) v_1 - ((e_4·v_2)/(v_2·v_2)) v_2 - ((e_4·v_3)/(v_3·v_3)) v_3
        = (0, 0, 0, 1) - (1/4)(1, 1, 1, 1) + (1/6)(0, -1, 2, -1) - (1/12)(-3, 1, 1, 1)
        = (1/2)(0, -1, 0, 1).
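Whichever route is taken, the answer can be verified numerically: the vectors w_3 = (1/√2)(0, -1, 0, 1) and w_4 = (1/(2√3))(-3, 1, 1, 1) should be unit vectors, mutually orthogonal, and orthogonal to the spanning vectors of V. A quick check (ours, not part of the original solutions):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Spanning vectors of V and the claimed orthonormal basis of V-perp.
x1 = [1, 1, 1, 1]
x2 = [1, 0, 3, 0]
s2  = math.sqrt(2)
s12 = 2 * math.sqrt(3)                 # = sqrt(12), the norm of (-3, 1, 1, 1)
w3 = [0, -1 / s2, 0, 1 / s2]
w4 = [-3 / s12, 1 / s12, 1 / s12, 1 / s12]

# All of these should vanish (up to floating-point rounding):
# orthogonality to V, mutual orthogonality, and unit length.
checks = [dot(w3, x1), dot(w3, x2), dot(w4, x1), dot(w4, x2),
          dot(w3, w4), dot(w3, w3) - 1, dot(w4, w4) - 1]
assert all(abs(c) < 1e-12 for c in checks)
```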
It remains to normalize the vectors v_1, v_2, v_3, v_4:

    w_1 = v_1/‖v_1‖ = (1/2)(1, 1, 1, 1),
    w_2 = v_2/‖v_2‖ = (1/√6)(0, -1, 2, -1),
    w_3 = v_3/‖v_3‖ = √12 v_3 = (1/(2√3))(-3, 1, 1, 1),
    w_4 = v_4/‖v_4‖ = √2 v_4 = (1/√2)(0, -1, 0, 1).

We have obtained an orthonormal basis w_1, w_2, w_3, w_4 for R^4 that extends an orthonormal basis w_1, w_2 for the subspace V. It follows that

    w_3 = (1/(2√3))(-3, 1, 1, 1),   w_4 = (1/√2)(0, -1, 0, 1)

is an orthonormal basis for V^⊥.

Problem 4 (30 pts.) Let

    A = [ 1 2 0 ]
        [ 1 1 1 ]
        [ 0 2 1 ].

(i) Find all eigenvalues of the matrix A.

The eigenvalues of A are roots of the characteristic equation det(A - λI) = 0. We obtain that

    det(A - λI) = | 1-λ  2    0   |
                  | 1    1-λ  1   |
                  | 0    2    1-λ |

                = (1 - λ)^3 - 2(1 - λ) - 2(1 - λ)
                = (1 - λ)((1 - λ)^2 - 4)
                = (1 - λ)((1 - λ) - 2)((1 - λ) + 2)
                = -(λ - 1)(λ + 1)(λ - 3).

Hence the matrix A has three eigenvalues: -1, 1, and 3.

(ii) For each eigenvalue of A, find an associated eigenvector.

An eigenvector v = (x, y, z) of A associated with an eigenvalue λ is a nonzero solution of the vector equation (A - λI)v = 0. To solve the equation, we apply row reduction to the matrix A - λI.

First consider the case λ = -1. The row reduction yields

    A + I = [ 2 2 0 ] → [ 1 1 0 ] → [ 1 1 0 ] → [ 1 0 -1 ]
            [ 1 2 1 ]   [ 1 2 1 ]   [ 0 1 1 ]   [ 0 1  1 ]
            [ 0 2 2 ]   [ 0 1 1 ]   [ 0 0 0 ]   [ 0 0  0 ].

Hence

    (A + I)v = 0  ⇔  [ 1 0 -1 ] [ x ]   [ 0 ]
                     [ 0 1  1 ] [ y ] = [ 0 ]  ⇔  x - z = 0, y + z = 0.
                     [ 0 0  0 ] [ z ]   [ 0 ]

The general solution is x = t, y = -t, z = t, where t ∈ R. In particular, v_1 = (1, -1, 1) is an eigenvector of A associated with the eigenvalue -1.

Secondly, consider the case λ = 1. The row reduction yields

    A - I = [ 0 2 0 ] → [ 1 0 1 ] → [ 1 0 1 ] → [ 1 0 1 ]
            [ 1 0 1 ]   [ 0 2 0 ]   [ 0 1 0 ]   [ 0 1 0 ]
            [ 0 2 0 ]   [ 0 2 0 ]   [ 0 2 0 ]   [ 0 0 0 ].

Hence

    (A - I)v = 0  ⇔  [ 1 0 1 ] [ x ]   [ 0 ]
                     [ 0 1 0 ] [ y ] = [ 0 ]  ⇔  x + z = 0, y = 0.
                     [ 0 0 0 ] [ z ]   [ 0 ]
The general solution is x = -t, y = 0, z = t, where t ∈ R. In particular, v_2 = (-1, 0, 1) is an eigenvector of A associated with the eigenvalue 1.

Finally, consider the case λ = 3. The row reduction yields

    A - 3I = [ -2  2  0 ] → [ 1 -1  0 ] → [ 1 -1  0 ] → [ 1 0 -1 ]
             [  1 -2  1 ]   [ 1 -2  1 ]   [ 0  1 -1 ]   [ 0 1 -1 ]
             [  0  2 -2 ]   [ 0  2 -2 ]   [ 0  2 -2 ]   [ 0 0  0 ].

Hence

    (A - 3I)v = 0  ⇔  [ 1 0 -1 ] [ x ]   [ 0 ]
                      [ 0 1 -1 ] [ y ] = [ 0 ]  ⇔  x - z = 0, y - z = 0.
                      [ 0 0  0 ] [ z ]   [ 0 ]

The general solution is x = t, y = t, z = t, where t ∈ R. In particular, v_3 = (1, 1, 1) is an eigenvector of A associated with the eigenvalue 3.

(iii) Is the matrix A diagonalizable? Explain.

The matrix A is diagonalizable, i.e., there exists a basis for R^3 formed by its eigenvectors. Namely, the vectors v_1 = (1, -1, 1), v_2 = (-1, 0, 1), and v_3 = (1, 1, 1) are eigenvectors of the matrix A belonging to distinct eigenvalues. Therefore these vectors are linearly independent. It follows that v_1, v_2, v_3 is a basis for R^3.

Alternatively, the existence of a basis for R^3 consisting of eigenvectors of A already follows from the fact that the matrix A has three distinct eigenvalues.

(iv) Find all eigenvalues of the matrix A^2.

Suppose that v is an eigenvector of the matrix A associated with an eigenvalue λ, that is, v ≠ 0 and Av = λv. Then

    A^2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ^2 v.

Therefore v is also an eigenvector of the matrix A^2 and the associated eigenvalue is λ^2. We already know that the matrix A has eigenvalues -1, 1, and 3. It follows that A^2 has eigenvalues 1 and 9.

Since a 3x3 matrix can have up to 3 eigenvalues, we need an additional argument to show that 1 and 9 are the only eigenvalues of A^2. The matrix A is diagonalizable. Namely, A = UBU^{-1}, where

    B = [ -1 0 0 ]
        [  0 1 0 ]
        [  0 0 3 ]

and U is the matrix whose columns are the eigenvectors v_1, v_2, v_3:

    U = [  1 -1 1 ]
        [ -1  0 1 ]
        [  1  1 1 ].

Then A^2 = UBU^{-1} UBU^{-1} = UB^2 U^{-1}. It follows that

    det(A^2 - λI) = det(UB^2 U^{-1} - λI) = det(UB^2 U^{-1} - U(λI)U^{-1})
= det ( U(B λi)u 1) = det(u)det(b λi)det(u 1 ) = det(b λi) Thus the matrix A has the same characteristic polynomial as the diagonal matrix 1 0 0 B = 0 1 0 0 0 9 Consequently, the matrices A and B have the same eigenvalues The latter has eigenvalues 1 and 9 Bonus Problem 5 (15 pts) Let L : V W be a linear mapping of a finite-dimensional vector space V to a vector space W Show that dim Range(L) + dim ker(l) = dimv The kernel ker(l) is a subspace of V Since the vector space V is finite-dimensional, so is ker(l) Take a basis v 1,v,,v k for the subspace ker(l), then extend it to a basis v 1,,v k,u 1,u,,u m for the entire space V We are going to prove that vectors L(u 1 ), L(u ),,L(u m ) form a basis for the range L(V ) Then dim Range(L) = m, dim ker(l) = k, and dimv = k + m Spanning: Any vector w Range(L) is represented as w = L(v), where v V We have for some α i, β j R It follows that v = α 1 v 1 + α v + + α k v k + β 1 u 1 + β u + + β m u m w = L(v) = α 1 L(v 1 ) + + α k L(v k ) + β 1 L(u 1 ) + + β m L(u m ) = β 1 L(u 1 ) + + β m L(u m ) (L(v i ) = 0 since v i ker(l)) Thus Range(L) is spanned by the vectors L(u 1 ), L(u ),,L(u m ) Linear independence: Suppose that t 1 L(u 1 ) + t L(u ) + + t m L(u m ) = 0 for some t i R Let u = t 1 u 1 + t u + + t m u m Since L(u) = t 1 L(u 1 ) + t L(u ) + + t m L(u m ) = 0, the vector u belongs to the kernel of L Therefore u = s 1 v 1 + s v + + s k v k for some s j R It follows that t 1 u 1 + t u + + t m u m s 1 v 1 s v s k v k = u u = 0 Linear independence of vectors v 1,,v k,u 1,,u m implies that t 1 = t = = t m = 0 (as well as s 1 = s = = s k = 0) Thus the vectors L(u 1 ), L(u ),,L(u m ) are linearly independent