MATH 304 Linear Algebra Lecture 34: Review for Test 2.
Topics for Test 2

Linear transformations (Leon 4.1-4.3)
  - Matrix transformations
  - Matrix of a linear mapping
  - Similar matrices

Orthogonality (Leon 5.1-5.6)
  - Inner products and norms
  - Orthogonal complement
  - Least squares problems
  - The Gram-Schmidt orthogonalization process

Eigenvalues and eigenvectors (Leon 6.1, 6.3)
  - Eigenvalues, eigenvectors, eigenspaces
  - Characteristic polynomial
  - Diagonalization
Sample problems for Test 2

Problem 1 (20 pts.) Let $M_{2,2}(\mathbb{R})$ denote the vector space of $2\times 2$ matrices with real entries. Consider a linear operator $L : M_{2,2}(\mathbb{R}) \to M_{2,2}(\mathbb{R})$ given by
$$L\begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$
Find the matrix of the operator $L$ with respect to the basis
$$E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\quad E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\quad E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
Problem 2 (30 pts.) Let $V$ be the subspace of $\mathbb{R}^4$ spanned by the vectors $x_1 = (1, 1, 1, 1)$ and $x_2 = (1, 0, 3, 0)$.
(i) Find an orthonormal basis for $V$.
(ii) Find an orthonormal basis for the orthogonal complement $V^\perp$.

Problem 3 (30 pts.) Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}.$$
(i) Find all eigenvalues of the matrix $A$.
(ii) For each eigenvalue of $A$, find an associated eigenvector.
(iii) Is the matrix $A$ diagonalizable? Explain.
(iv) Find all eigenvalues of the matrix $A^2$.
Bonus Problem 4 (20 pts.) Find a linear polynomial which is the best least squares fit to the following data:

  x     | -2  -1   0   1   2
  f(x)  | -3  -2   1   2   5

Bonus Problem 5 (20 pts.) Let $L : V \to W$ be a linear mapping of a finite-dimensional vector space $V$ to a vector space $W$. Show that
$$\dim \mathrm{Range}(L) + \dim \ker(L) = \dim V.$$
Problem 1 (20 pts.) Let $M_{2,2}(\mathbb{R})$ denote the vector space of $2\times 2$ matrices with real entries. Consider a linear operator $L : M_{2,2}(\mathbb{R}) \to M_{2,2}(\mathbb{R})$ given by
$$L\begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$
Find the matrix of the operator $L$ with respect to the basis
$$E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\quad E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\quad E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

Let $M_L$ denote the desired matrix. By definition, $M_L$ is the $4\times 4$ matrix whose columns are the coordinates of the matrices $L(E_1), L(E_2), L(E_3), L(E_4)$ with respect to the basis $E_1, E_2, E_3, E_4$.
$$L(E_1) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = 1E_1 + 2E_2 + 0E_3 + 0E_4,$$
$$L(E_2) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 0 & 0 \end{pmatrix} = 3E_1 + 4E_2 + 0E_3 + 0E_4,$$
$$L(E_3) = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 2 \end{pmatrix} = 0E_1 + 0E_2 + 1E_3 + 2E_4,$$
$$L(E_4) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 3 & 4 \end{pmatrix} = 0E_1 + 0E_2 + 3E_3 + 4E_4.$$
It follows that
$$M_L = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 2 & 4 \end{pmatrix}.$$
Thus the relation
$$\begin{pmatrix} x_1 & y_1 \\ z_1 & w_1 \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$$
is equivalent to the relation
$$\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 2 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.$$
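Remark (not part of the exam solution). The matrix $M_L$ can be double-checked numerically. The short NumPy sketch below applies $L$ to each basis matrix $E_i$ and records its coordinates, which are simply the entries of the resulting $2\times 2$ matrix read row by row, as the columns of $M_L$.

```python
import numpy as np

# Right multiplication by the fixed matrix [[1, 2], [3, 4]]
R = np.array([[1, 2], [3, 4]])
L = lambda X: X @ R

# Basis E1, E2, E3, E4 of M_{2,2}(R); the coordinates of [[x, y], [z, w]]
# with respect to this basis are just (x, y, z, w).
E = [np.array([[1, 0], [0, 0]]),
     np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]),
     np.array([[0, 0], [0, 1]])]

# Columns of M_L are the coordinate vectors of L(E1), ..., L(E4)
M_L = np.column_stack([L(Ei).flatten() for Ei in E])
print(M_L)
# [[1 3 0 0]
#  [2 4 0 0]
#  [0 0 1 3]
#  [0 0 2 4]]
```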
Problem 2 (30 pts.) Let $V$ be the subspace of $\mathbb{R}^4$ spanned by the vectors $x_1 = (1, 1, 1, 1)$ and $x_2 = (1, 0, 3, 0)$.
(i) Find an orthonormal basis for $V$.

First we apply the Gram-Schmidt orthogonalization process to the vectors $x_1, x_2$ and obtain an orthogonal basis $v_1, v_2$ for the subspace $V$:
$$v_1 = x_1 = (1, 1, 1, 1),$$
$$v_2 = x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1}\, v_1 = (1, 0, 3, 0) - \frac{4}{4}(1, 1, 1, 1) = (0, -1, 2, -1).$$
Then we normalize the vectors $v_1, v_2$ to obtain an orthonormal basis $w_1, w_2$ for $V$:
$$\|v_1\| = 2 \;\Longrightarrow\; w_1 = \frac{v_1}{\|v_1\|} = \tfrac{1}{2}(1, 1, 1, 1),$$
$$\|v_2\| = \sqrt{6} \;\Longrightarrow\; w_2 = \frac{v_2}{\|v_2\|} = \tfrac{1}{\sqrt{6}}(0, -1, 2, -1).$$
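Remark (not part of the exam solution). The Gram-Schmidt arithmetic above is easy to verify numerically; the following NumPy sketch repeats the same two steps.

```python
import numpy as np

x1 = np.array([1.0, 1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.0, 3.0, 0.0])

# Gram-Schmidt: subtract from x2 its projection onto v1
v1 = x1
v2 = x2 - (x2 @ v1) / (v1 @ v1) * v1
print(v2)                      # [ 0. -1.  2. -1.]

# Normalize to get an orthonormal basis w1, w2 for V
w1 = v1 / np.linalg.norm(v1)   # (1/2)(1, 1, 1, 1)
w2 = v2 / np.linalg.norm(v2)   # (1/sqrt(6))(0, -1, 2, -1)
print(round(w1 @ w2, 10))      # 0.0: orthogonal, up to rounding
```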
Problem 2 (30 pts.) Let $V$ be the subspace of $\mathbb{R}^4$ spanned by the vectors $x_1 = (1, 1, 1, 1)$ and $x_2 = (1, 0, 3, 0)$.
(ii) Find an orthonormal basis for the orthogonal complement $V^\perp$.

Since the subspace $V$ is spanned by the vectors $(1, 1, 1, 1)$ and $(1, 0, 3, 0)$, it is the row space of the matrix
$$A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 0 & 3 & 0 \end{pmatrix}.$$
Then the orthogonal complement $V^\perp$ is the nullspace of $A$. To find the nullspace, we convert the matrix $A$ to reduced row echelon form:
$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 0 & 3 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 3 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & -2 & 1 \end{pmatrix}.$$
Hence a vector $(x_1, x_2, x_3, x_4) \in \mathbb{R}^4$ belongs to $V^\perp$ if and only if
$$\begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & -2 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\Longleftrightarrow\;
\begin{cases} x_1 + 3x_3 = 0 \\ x_2 - 2x_3 + x_4 = 0 \end{cases}
\;\Longleftrightarrow\;
\begin{cases} x_1 = -3x_3 \\ x_2 = 2x_3 - x_4 \end{cases}$$
The general solution of the system is
$$(x_1, x_2, x_3, x_4) = (-3t,\; 2t - s,\; t,\; s) = t(-3, 2, 1, 0) + s(0, -1, 0, 1),$$
where $t, s \in \mathbb{R}$. It follows that $V^\perp$ is spanned by the vectors $x_3 = (0, -1, 0, 1)$ and $x_4 = (-3, 2, 1, 0)$.
The vectors $x_3 = (0, -1, 0, 1)$ and $x_4 = (-3, 2, 1, 0)$ form a basis for the subspace $V^\perp$. It remains to orthogonalize and normalize this basis:
$$v_3 = x_3 = (0, -1, 0, 1),$$
$$v_4 = x_4 - \frac{x_4 \cdot v_3}{v_3 \cdot v_3}\, v_3 = (-3, 2, 1, 0) - \frac{-2}{2}(0, -1, 0, 1) = (-3, 1, 1, 1),$$
$$\|v_3\| = \sqrt{2} \;\Longrightarrow\; w_3 = \frac{v_3}{\|v_3\|} = \tfrac{1}{\sqrt{2}}(0, -1, 0, 1),$$
$$\|v_4\| = \sqrt{12} = 2\sqrt{3} \;\Longrightarrow\; w_4 = \frac{v_4}{\|v_4\|} = \tfrac{1}{2\sqrt{3}}(-3, 1, 1, 1).$$
Thus the vectors $w_3 = \tfrac{1}{\sqrt{2}}(0, -1, 0, 1)$ and $w_4 = \tfrac{1}{2\sqrt{3}}(-3, 1, 1, 1)$ form an orthonormal basis for $V^\perp$.
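Remark (not part of the exam solution). As a numerical cross-check, one can recover $V^\perp$ as the nullspace of $A$ with the SVD and confirm that the hand-computed vectors are orthogonal to the rows of $A$. The tolerance and the use of the SVD here are illustrative choices, not part of the lecture.

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 3.0, 0.0]])

# A has rank 2, so the last 2 right singular vectors span its nullspace,
# i.e. the orthogonal complement V-perp of the row space V.
_, _, Vt = np.linalg.svd(A)
N = Vt[2:]                        # orthonormal basis of V-perp (4 - 2 = 2 vectors)
print(np.round(A @ N.T, 10))      # zero matrix: both vectors are orthogonal to V

# Check the hand-computed basis as well
w3 = np.array([0, -1, 0, 1]) / np.sqrt(2)
w4 = np.array([-3, 1, 1, 1]) / (2 * np.sqrt(3))
print(np.round(A @ w3, 10), np.round(A @ w4, 10))   # zero vectors
print(round(w3 @ w4, 10))                           # 0.0: orthonormal pair
```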
Problem 2 (30 pts.) Let $V$ be the subspace of $\mathbb{R}^4$ spanned by the vectors $x_1 = (1, 1, 1, 1)$ and $x_2 = (1, 0, 3, 0)$.
(i) Find an orthonormal basis for $V$.
(ii) Find an orthonormal basis for the orthogonal complement $V^\perp$.

Alternative solution: First we extend the set $x_1, x_2$ to a basis $x_1, x_2, x_3, x_4$ for $\mathbb{R}^4$. Then we orthogonalize and normalize the latter. This yields an orthonormal basis $w_1, w_2, w_3, w_4$ for $\mathbb{R}^4$. By construction, $w_1, w_2$ is an orthonormal basis for $V$. It follows that $w_3, w_4$ is an orthonormal basis for $V^\perp$.
The set $x_1 = (1, 1, 1, 1)$, $x_2 = (1, 0, 3, 0)$ can be extended to a basis for $\mathbb{R}^4$ by adding two vectors from the standard basis. For example, we can add the vectors $e_3 = (0, 0, 1, 0)$ and $e_4 = (0, 0, 0, 1)$. To show that $x_1, x_2, e_3, e_4$ is indeed a basis for $\mathbb{R}^4$, we check that the matrix whose rows are these vectors is nonsingular:
$$\begin{vmatrix} 1 & 1 & 1 & 1 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 1 & 1 \\ 1 & 0 & 3 \\ 0 & 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 1 \\ 1 & 0 \end{vmatrix} = -1 \neq 0.$$
To orthogonalize the basis $x_1, x_2, e_3, e_4$, we apply the Gram-Schmidt process:
$$v_1 = x_1 = (1, 1, 1, 1),$$
$$v_2 = x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1}\, v_1 = (1, 0, 3, 0) - \frac{4}{4}(1, 1, 1, 1) = (0, -1, 2, -1),$$
$$v_3 = e_3 - \frac{e_3 \cdot v_1}{v_1 \cdot v_1}\, v_1 - \frac{e_3 \cdot v_2}{v_2 \cdot v_2}\, v_2 = (0, 0, 1, 0) - \frac{1}{4}(1, 1, 1, 1) - \frac{2}{6}(0, -1, 2, -1) = \left(-\tfrac{1}{4}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}\right) = \tfrac{1}{12}(-3, 1, 1, 1),$$
$$v_4 = e_4 - \frac{e_4 \cdot v_1}{v_1 \cdot v_1}\, v_1 - \frac{e_4 \cdot v_2}{v_2 \cdot v_2}\, v_2 - \frac{e_4 \cdot v_3}{v_3 \cdot v_3}\, v_3 = (0, 0, 0, 1) - \frac{1}{4}(1, 1, 1, 1) + \frac{1}{6}(0, -1, 2, -1) - \frac{1/12}{1/12}\cdot\tfrac{1}{12}(-3, 1, 1, 1) = \left(0, -\tfrac{1}{2}, 0, \tfrac{1}{2}\right) = \tfrac{1}{2}(0, -1, 0, 1).$$
It remains to normalize the vectors $v_1 = (1, 1, 1, 1)$, $v_2 = (0, -1, 2, -1)$, $v_3 = \tfrac{1}{12}(-3, 1, 1, 1)$, $v_4 = \tfrac{1}{2}(0, -1, 0, 1)$:
$$\|v_1\| = 2 \;\Longrightarrow\; w_1 = \frac{v_1}{\|v_1\|} = \tfrac{1}{2}(1, 1, 1, 1),$$
$$\|v_2\| = \sqrt{6} \;\Longrightarrow\; w_2 = \frac{v_2}{\|v_2\|} = \tfrac{1}{\sqrt{6}}(0, -1, 2, -1),$$
$$\|v_3\| = \tfrac{1}{\sqrt{12}} = \tfrac{1}{2\sqrt{3}} \;\Longrightarrow\; w_3 = \frac{v_3}{\|v_3\|} = \tfrac{1}{2\sqrt{3}}(-3, 1, 1, 1),$$
$$\|v_4\| = \tfrac{1}{\sqrt{2}} \;\Longrightarrow\; w_4 = \frac{v_4}{\|v_4\|} = \tfrac{1}{\sqrt{2}}(0, -1, 0, 1).$$
Thus $w_1, w_2$ is an orthonormal basis for $V$ while $w_3, w_4$ is an orthonormal basis for $V^\perp$.
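Remark (not part of the exam solution). Stacking $w_1, w_2, w_3, w_4$ as the rows of a matrix $Q$ and checking $QQ^T = I$ confirms numerically that they form an orthonormal basis for $\mathbb{R}^4$.

```python
import numpy as np

w1 = np.array([1, 1, 1, 1]) / 2
w2 = np.array([0, -1, 2, -1]) / np.sqrt(6)
w3 = np.array([-3, 1, 1, 1]) / (2 * np.sqrt(3))
w4 = np.array([0, -1, 0, 1]) / np.sqrt(2)

Q = np.vstack([w1, w2, w3, w4])
print(np.round(Q @ Q.T, 10))   # identity matrix: w1, ..., w4 are orthonormal
```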
Problem 3 (30 pts.) Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}.$$
(i) Find all eigenvalues of the matrix $A$.

The eigenvalues of $A$ are the roots of the characteristic equation $\det(A - \lambda I) = 0$. We obtain that
$$\det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 2 & 0 \\ 1 & 1-\lambda & 1 \\ 0 & 2 & 1-\lambda \end{vmatrix} = (1-\lambda)^3 - 2(1-\lambda) - 2(1-\lambda)$$
$$= (1-\lambda)\bigl((1-\lambda)^2 - 4\bigr) = (1-\lambda)\bigl((1-\lambda) - 2\bigr)\bigl((1-\lambda) + 2\bigr) = -(\lambda - 1)(\lambda + 1)(\lambda - 3).$$
Hence the matrix $A$ has three eigenvalues: $-1$, $1$, and $3$.
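Remark (not part of the exam solution). A one-line NumPy check of the eigenvalues:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [1, 1, 1],
              [0, 2, 1]])

# Eigenvalues of A; should be -1, 1, 3 up to ordering and rounding
print(np.round(np.linalg.eigvals(A), 10))
```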
Problem 3 (30 pts.) Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}.$$
(ii) For each eigenvalue of $A$, find an associated eigenvector.

An eigenvector $v = (x, y, z)$ of the matrix $A$ associated with an eigenvalue $\lambda$ is a nonzero solution of the vector equation
$$(A - \lambda I)v = 0 \;\Longleftrightarrow\; \begin{pmatrix} 1-\lambda & 2 & 0 \\ 1 & 1-\lambda & 1 \\ 0 & 2 & 1-\lambda \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
To solve the equation, we convert the matrix $A - \lambda I$ to reduced row echelon form.
First consider the case $\lambda = -1$. The row reduction yields
$$A + I = \begin{pmatrix} 2 & 2 & 0 \\ 1 & 2 & 1 \\ 0 & 2 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 2 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 2 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Hence
$$(A + I)v = 0 \;\Longleftrightarrow\; \begin{cases} x - z = 0, \\ y + z = 0. \end{cases}$$
The general solution is $x = t$, $y = -t$, $z = t$, where $t \in \mathbb{R}$. In particular, $v_1 = (1, -1, 1)$ is an eigenvector of $A$ associated with the eigenvalue $-1$.
Secondly, consider the case $\lambda = 1$. The row reduction yields
$$A - I = \begin{pmatrix} 0 & 2 & 0 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 2 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Hence
$$(A - I)v = 0 \;\Longleftrightarrow\; \begin{cases} x + z = 0, \\ y = 0. \end{cases}$$
The general solution is $x = -t$, $y = 0$, $z = t$, where $t \in \mathbb{R}$. In particular, $v_2 = (-1, 0, 1)$ is an eigenvector of $A$ associated with the eigenvalue $1$.
Finally, consider the case $\lambda = 3$. The row reduction yields
$$A - 3I = \begin{pmatrix} -2 & 2 & 0 \\ 1 & -2 & 1 \\ 0 & 2 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 0 \\ 1 & -2 & 1 \\ 0 & 2 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & 2 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 2 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Hence
$$(A - 3I)v = 0 \;\Longleftrightarrow\; \begin{cases} x - z = 0, \\ y - z = 0. \end{cases}$$
The general solution is $x = t$, $y = t$, $z = t$, where $t \in \mathbb{R}$. In particular, $v_3 = (1, 1, 1)$ is an eigenvector of $A$ associated with the eigenvalue $3$.
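Remark (not part of the exam solution). Each hand-computed eigenvector can be verified against the definition $Av = \lambda v$:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [1, 1, 1],
              [0, 2, 1]])

eigenpairs = [(-1, np.array([1, -1, 1])),
              ( 1, np.array([-1, 0, 1])),
              ( 3, np.array([1, 1, 1]))]

# A v should equal lambda * v for each pair
for lam, v in eigenpairs:
    print(lam, np.array_equal(A @ v, lam * v))   # True for each pair
```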
Problem 3 (30 pts.) Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}.$$
(iii) Is the matrix $A$ diagonalizable? Explain.

The matrix $A$ is diagonalizable, i.e., there exists a basis for $\mathbb{R}^3$ formed by its eigenvectors. Namely, the vectors $v_1 = (1, -1, 1)$, $v_2 = (-1, 0, 1)$, and $v_3 = (1, 1, 1)$ are eigenvectors of the matrix $A$ belonging to distinct eigenvalues. Therefore these vectors are linearly independent. It follows that $v_1, v_2, v_3$ is a basis for $\mathbb{R}^3$.

Alternatively, the existence of a basis for $\mathbb{R}^3$ consisting of eigenvectors of $A$ already follows from the fact that the matrix $A$ has three distinct eigenvalues.
Problem 3 (30 pts.) Let
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}.$$
(iv) Find all eigenvalues of the matrix $A^2$.

We have that $A = UBU^{-1}$, where
$$B = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad U = \begin{pmatrix} 1 & -1 & 1 \\ -1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$
The columns of $U$ are eigenvectors of $A$ belonging to the eigenvalues $-1$, $1$, and $3$, respectively. Then
$$A^2 = UBU^{-1}UBU^{-1} = UB^2U^{-1},$$
that is, the matrix $A^2$ is similar to the diagonal matrix $B^2 = \mathrm{diag}(1, 1, 9)$. Similar matrices have the same characteristic polynomial, hence they have the same eigenvalues. Thus the eigenvalues of $A^2$ are the same as the eigenvalues of $B^2$: $1$ and $9$.
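Remark (not part of the exam solution). A NumPy check of the factorization $A = UBU^{-1}$ and of the eigenvalues of $A^2$:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [1, 1, 1],
              [0, 2, 1]])
U = np.array([[ 1, -1, 1],
              [-1,  0, 1],
              [ 1,  1, 1]])
B = np.diag([-1, 1, 3])

# A = U B U^{-1}, hence A^2 = U B^2 U^{-1} is similar to diag(1, 1, 9)
print(np.allclose(A, U @ B @ np.linalg.inv(U)))   # True
print(np.round(np.linalg.eigvals(A @ A), 10))     # eigenvalues 1, 1, 9
```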
Bonus Problem 4 (20 pts.) Find a linear polynomial which is the best least squares fit to the following data:

  x     | -2  -1   0   1   2
  f(x)  | -3  -2   1   2   5

We are looking for a function $f(x) = c_1 + c_2 x$, where $c_1, c_2$ are unknown coefficients. The data of the problem give rise to an overdetermined system of linear equations in the variables $c_1$ and $c_2$:
$$\begin{cases} c_1 - 2c_2 = -3, \\ c_1 - c_2 = -2, \\ c_1 = 1, \\ c_1 + c_2 = 2, \\ c_1 + 2c_2 = 5. \end{cases}$$
This system is inconsistent.
We can represent the system as a matrix equation $A\mathbf{c} = \mathbf{y}$, where
$$A = \begin{pmatrix} 1 & -2 \\ 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}, \quad \mathbf{y} = \begin{pmatrix} -3 \\ -2 \\ 1 \\ 2 \\ 5 \end{pmatrix}.$$
The least squares solution $\mathbf{c}$ of the above system is a solution of the normal system $A^TA\mathbf{c} = A^T\mathbf{y}$:
$$\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} 1 & -2 \\ 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} -3 \\ -2 \\ 1 \\ 2 \\ 5 \end{pmatrix}$$
$$\Longleftrightarrow\; \begin{pmatrix} 5 & 0 \\ 0 & 10 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 20 \end{pmatrix} \;\Longleftrightarrow\; \begin{cases} c_1 = 3/5 \\ c_2 = 2 \end{cases}$$
Thus the function $f(x) = \frac{3}{5} + 2x$ is the best least squares fit to the above data among linear polynomials.
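Remark (not part of the exam solution). The same least squares fit can be computed with NumPy, either through the normal equations or with the built-in solver np.linalg.lstsq:

```python
import numpy as np

x = np.array([-2, -1, 0, 1, 2], dtype=float)
y = np.array([-3, -2, 1, 2, 5], dtype=float)

# Design matrix for f(x) = c1 + c2*x
A = np.column_stack([np.ones_like(x), x])

# Normal equations: (A^T A) c = A^T y
c_normal = np.linalg.solve(A.T @ A, A.T @ y)
print(c_normal)                     # [0.6 2. ]  i.e. c1 = 3/5, c2 = 2

# Direct least squares solver gives the same result
c_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(c_lstsq)                      # [0.6 2. ]
```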
Bonus Problem 5 (20 pts.) Let $L : V \to W$ be a linear mapping of a finite-dimensional vector space $V$ to a vector space $W$. Show that
$$\dim \mathrm{Range}(L) + \dim \ker(L) = \dim V.$$

The kernel $\ker(L)$ is a subspace of $V$. It is finite-dimensional since the vector space $V$ is. Take a basis $v_1, v_2, \dots, v_k$ for the subspace $\ker(L)$, then extend it to a basis $v_1, v_2, \dots, v_k, u_1, u_2, \dots, u_m$ for the entire space $V$.

Claim. The vectors $L(u_1), L(u_2), \dots, L(u_m)$ form a basis for the range of $L$.

Assuming the claim is proved, we obtain
$$\dim \mathrm{Range}(L) = m, \quad \dim \ker(L) = k, \quad \dim V = k + m,$$
so that $\dim \mathrm{Range}(L) + \dim \ker(L) = \dim V$.
Claim. The vectors $L(u_1), L(u_2), \dots, L(u_m)$ form a basis for the range of $L$.

Proof (spanning): Any vector $w \in \mathrm{Range}(L)$ is represented as $w = L(v)$, where $v \in V$. Then
$$v = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k + \beta_1 u_1 + \beta_2 u_2 + \dots + \beta_m u_m$$
for some $\alpha_i, \beta_j \in \mathbb{R}$. It follows that
$$w = L(v) = \alpha_1 L(v_1) + \dots + \alpha_k L(v_k) + \beta_1 L(u_1) + \dots + \beta_m L(u_m) = \beta_1 L(u_1) + \dots + \beta_m L(u_m).$$
Note that $L(v_i) = 0$ since $v_i \in \ker(L)$. Thus $\mathrm{Range}(L)$ is spanned by the vectors $L(u_1), \dots, L(u_m)$.
Claim. The vectors $L(u_1), L(u_2), \dots, L(u_m)$ form a basis for the range of $L$.

Proof (linear independence): Suppose that
$$t_1 L(u_1) + t_2 L(u_2) + \dots + t_m L(u_m) = 0$$
for some $t_i \in \mathbb{R}$. Let $u = t_1 u_1 + t_2 u_2 + \dots + t_m u_m$. Since
$$L(u) = t_1 L(u_1) + t_2 L(u_2) + \dots + t_m L(u_m) = 0,$$
the vector $u$ belongs to the kernel of $L$. Therefore $u = s_1 v_1 + s_2 v_2 + \dots + s_k v_k$ for some $s_j \in \mathbb{R}$. It follows that
$$t_1 u_1 + t_2 u_2 + \dots + t_m u_m - s_1 v_1 - s_2 v_2 - \dots - s_k v_k = u - u = 0.$$
Linear independence of the vectors $v_1, \dots, v_k, u_1, \dots, u_m$ implies that $t_1 = \dots = t_m = 0$ (as well as $s_1 = \dots = s_k = 0$). Thus the vectors $L(u_1), L(u_2), \dots, L(u_m)$ are linearly independent.
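Remark (not part of the lecture). For a matrix transformation $L(x) = Ax$, the theorem says that $\mathrm{rank}(A) + \dim N(A)$ equals the number of columns of $A$. A small NumPy illustration using the matrix from Problem 2 (the tolerance 1e-12 is an illustrative choice):

```python
import numpy as np

# Matrix transformation L(x) = A x from R^4 to R^2; rank-nullity
# predicts dim Range(L) + dim ker(L) = 4.
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 3.0, 0.0]])

rank = np.linalg.matrix_rank(A)          # dim Range(L)

# Basis of ker(L): right singular vectors with zero singular value
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[np.sum(s > 1e-12):]
nullity = kernel_basis.shape[0]          # dim ker(L)

print(rank, nullity, rank + nullity)     # 2 2 4 = dim R^4
```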