
FALL 2011, SOLUTION SET 10
LAST REVISION: NOV 27, 9:45 AM
TRAVIS SCHEDLER

(1) Let V be the vector space of all continuous functions R → C. For all c ∈ R, let T_c ∈ L(V) be the shift operator, which sends a function f ∈ V to the function T_c f ∈ V defined by (T_c f)(x) = f(x - c). Let i be the imaginary number, i^2 = -1.
(a) Show that the function f(x) = e^{kx} is an eigenvector of T_c with eigenvalue e^{-kc}. In particular, e^{ikx} = cos(kx) + i sin(kx) is an eigenvector of T_c of eigenvalue e^{-ikc} = cos(kc) - i sin(kc).
(b) Now suppose that f(x) is a nonzero eigenvector under T_c for all c ∈ R. Our goal is to prove the converse: f(x) = a e^{kx} for some a, k ∈ C. Here, show the following: (b.1) f(x) is nonzero for all x; (b.2) if we rescale f so that f(0) = 1, then f(x + y) = f(x) f(y) for all x, y.
(c) Now suppose also that, in addition to f being an eigenvector under T_c for all c, f is differentiable at 0; continue to assume f(0) = 1 as in (b.2). Show that, by f(x + y) = f(x) f(y) from part (b.2), it follows that f is differentiable everywhere, and f'(x) = f'(0) f(x). Then, by existence and uniqueness of solutions to differential equations (18.03 material), it follows that f(x) = e^{f'(0) x}, which proves the converse. (Note: in fact one can show that f(x) = a e^{kx} without assuming that f is differentiable, but this is harder.)

Solution: (a) T_c(f)(x) = e^{k(x - c)} = e^{-kc} e^{kx} = e^{-kc} f(x). The second statement follows immediately.
(b) We need to assume f is a nonzero eigenvector: I forgot to write that in the problem. First (b.1): if f(x_0) = 0 for some x_0, then for all x_1, let λ_{x_0 - x_1} be the eigenvalue of f under T_{x_0 - x_1}; then f(x_1) = (T_{x_0 - x_1} f)(x_0) = λ_{x_0 - x_1} f(x_0) = 0. So f(x) = 0 for all x, i.e., f is zero.
(b.2) Rescale f as indicated. Then (T_{-x} f)(0) = f(x) = f(x) f(0). Since f is an eigenvector of T_{-x}, its eigenvalue is therefore f(x), and hence (T_{-x} f)(y) = f(x) f(y) for all y. But (T_{-x} f)(y) = f(x + y), which gives the statement.
(c) First, f is differentiable everywhere: (f(x + h) - f(x))/h = f(x) · (f(h) - f(0))/h → f(x) f'(0) as h → 0, using f(0) = 1. Equivalently, differentiating f(x + y) = f(x) f(y) with respect to y and plugging in y = 0, we get f'(x) = f'(0) f(x), as desired.
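A quick numerical sanity check of part (a), not part of the original solutions: a short Python/NumPy sketch, where the values of k, c, and the sample points are arbitrary choices.

    import numpy as np

    # Problem 1(a): f(x) = e^{kx} satisfies (T_c f)(x) = f(x - c) = e^{-kc} f(x),
    # so f is an eigenvector of the shift operator T_c with eigenvalue e^{-kc}.
    k = 2.0 + 1.0j            # any complex k works; k = i gives cos + i sin
    c = 0.7
    x = np.linspace(-3, 3, 101)

    f = np.exp(k * x)
    Tc_f = np.exp(k * (x - c))            # (T_c f)(x) = f(x - c)
    assert np.allclose(Tc_f, np.exp(-k * c) * f)
    print("T_c f = e^{-kc} f at all sample points")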

(2) Now let V be the vector space of functions {0, 1/n, ..., (n-1)/n} → C. Let us use the standard inner product on these functions:
⟨f, g⟩ = (1/n) Σ_{j=0}^{n-1} f(j/n) \overline{g(j/n)}.
Let us continue to let i denote the imaginary number, i^2 = -1. Recall the (normalized!) discrete Fourier transform F: given f ∈ V, the function F(f) ∈ V is given by
F(f)(k/n) = (1/√n) Σ_{j=0}^{n-1} e^{-2πikj/n} f(j/n).
This is slightly modified from PS 7, replacing the normalization 1/n by 1/√n: we need this for part (c) below (this is the proper normalization).
(a) Let δ_{m/n} be the Kronecker delta function at m/n, i.e., δ_{m/n}(k/n) = 0 if m ≠ k and δ_{m/n}(k/n) = 1 if m = k. Show that F(δ_{m/n}) is the function (1/√n) e^{-2πimx} of x ∈ {0, 1/n, ..., (n-1)/n}, i.e., F(δ_{m/n})(k/n) = (1/√n) e^{-2πimk/n}.
(b) Use PS 7, #9(a) (and show that (√n δ_{m/n}) is an orthonormal basis) to show that F is an isometry.
(c) Verify the Fourier inversion formula: F^2(f) is the function F^2(f)(x) = f(1 - x). I.e.,
f(1 - l/n) = F^2(f)(l/n) = (1/n) Σ_{k=0}^{n-1} Σ_{j=0}^{n-1} e^{-2πilk/n} e^{-2πikj/n} f(j/n).
Hint: it is enough to do this for the case f = δ_{m/n} for every m. Then the above amounts to showing that (1/n) Σ_{k=0}^{n-1} e^{-2πi(l+m)k/n} equals 1 if m = n - l (i.e., if e^{-2πimk/n} = e^{2πilk/n} for all k) and 0 otherwise.
(d) From part (c), conclude that F^4 = I. From part (b), conclude that F has an orthonormal eigenbasis with eigenvalues in {i, -i, 1, -1}, i.e., fourth roots of unity.

Solution: (a) F(δ_{m/n})(k/n) = (1/√n) Σ_j e^{-2πikj/n} δ_{m/n}(j/n) = (1/√n) e^{-2πimk/n}.
(b) Note first that ⟨√n δ_{l/n}, √n δ_{m/n}⟩ = Σ_{j=0}^{n-1} δ_{l/n}(j/n) δ_{m/n}(j/n), which is 1 if l = m and 0 otherwise. So (√n δ_{m/n} : m ∈ {0, 1, ..., n-1}) is indeed an orthonormal basis. Now, in PS 7 #9(a) we verified that the functions k/n ↦ e^{-2πimk/n}, for m ∈ {0, 1, ..., n-1}, form an orthonormal basis (there we had m in place of -m, but the two collections of functions are the same, since e^{-2πimk/n} = e^{2πi(n-m)k/n} for all k). These are exactly the functions F(√n δ_{m/n}) by part (a). So F takes an orthonormal basis to an orthonormal basis, and hence (by a result from class and the book) it is an isometry.

Remark 0.1. Finding a nice eigenbasis for the discrete Fourier transform is actually quite an interesting question! (These should resemble Gaussians, as happens for the continuous Fourier transform.) There has been recent research in this area, with the idea being analogous to the shift operator problem: rather than find an eigenbasis just for the Fourier transform, one can take an eigenbasis which is simultaneously an eigenbasis under natural k-th roots of the Fourier transform, which eliminates most of the choices of eigenbasis involved and defines a special choice of eigenbasis. (Note that, in general, there are many different k-th roots of a normal operator, obtained by taking arbitrary k-th roots of the eigenvalues in any eigenbasis; but it turns out that one can define special k-th roots, for certain values of k, which have close to distinct eigenvalues, i.e., whose eigenspaces are mostly one-dimensional.)
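The DFT claims above are easy to confirm numerically. The following NumPy sketch is not part of the original solutions (n = 8 is an arbitrary choice): it builds the normalized DFT matrix and checks that it is unitary (part (b)), that F^2 reverses the circle (part (c)), and that F^4 = I with eigenvalues among the fourth roots of unity (part (d)).

    import numpy as np

    n = 8
    j = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)

    # (b) F is an isometry: F* F = I
    assert np.allclose(F.conj().T @ F, np.eye(n))

    # (c) Fourier inversion: F^2 f is the function x -> f(1 - x)
    rng = np.random.default_rng(0)
    f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.allclose(F @ F @ f, f[(-j) % n])

    # (d) F^4 = I, so every eigenvalue is a fourth root of unity
    assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(n))
    assert np.allclose(np.linalg.eigvals(F) ** 4, 1)
    print("DFT checks pass for n =", n)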

(3) (i) Let V be a finite-dimensional complex inner product space. Show that the following are equivalent for T ∈ L(V): (a) T is an isometry; (b) T is normal and T satisfies a polynomial whose roots all have absolute value one.
(ii) Deduce from part (i) that, if T^m = I for some m ≥ 1, then T is an isometry if and only if it is normal.

Solution: (i), (a) ⇒ (b): If T is an isometry on a finite-dimensional inner product space, then T^{-1} = T* (as we saw in class), and hence T*T = TT* = I. So T is normal. Also, we proved that T has an orthonormal eigenbasis (e_1, ..., e_n) with eigenvalues (λ_1, ..., λ_n) of absolute value one. Let p(x) = ∏_{i=1}^n (x - λ_i). Then p(T)(e_i) = 0 for all i, and hence p(T) = 0.
(b) ⇒ (a): If T is normal then it has an orthonormal eigenbasis, say (e_1, ..., e_n), with eigenvalues (λ_1, ..., λ_n). If p(T) = 0, then p(T)(e_i) = p(λ_i) e_i = 0, so each λ_i must be a root of p(x). Hence, if p(x) has only roots of absolute value one, then all the eigenvalues of T are of absolute value one. But we saw in class (for F = C on finite-dimensional inner product spaces) that an isometry is the same thing as an operator which has an orthonormal eigenbasis with eigenvalues of absolute value one.
(ii) If T^m = I, then T satisfies the polynomial x^m - 1, whose roots (the m-th roots of unity) all have absolute value one. So by part (i), T is an isometry if and only if it is normal.
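For a concrete instance of (b) ⇒ (a), here is a NumPy sketch (not part of the original solutions): it manufactures a normal operator with unit-modulus eigenvalues as T = Q D Q* and confirms that T is an isometry and satisfies a polynomial whose roots all have absolute value one.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                        + 1j * rng.standard_normal((n, n)))
    lam = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # |lambda_i| = 1
    T = Q @ np.diag(lam) @ Q.conj().T

    assert np.allclose(T @ T.conj().T, T.conj().T @ T)   # T is normal
    assert np.allclose(T.conj().T @ T, np.eye(n))        # T is an isometry

    p = np.poly(lam)           # coefficients of prod (x - lambda_i)
    pT = sum(c * np.linalg.matrix_power(T, n - i) for i, c in enumerate(p))
    assert np.allclose(pT, 0)                            # p(T) = 0
    print("normal + unit-modulus spectrum gives an isometry with p(T) = 0")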

(4) Positive operators! Let V be a finite-dimensional inner product space and S, T ∈ L(V).
(a) Show that, if S and T are positive invertible operators, then S + T is invertible. (Hint: see the lecture notes from class where we discussed the null space of S + T in this case.)
(b) Give a counterexample to the statement if S and T are only self-adjoint, i.e., give invertible self-adjoint S and T such that S + T is not invertible.
(c) Give a counterexample to the statement that, if S and T admit eigenbases with positive (nonzero) eigenvalues, then S + T is invertible. Hint: Try two-by-two matrices: let S = T_A and T = T_B; without loss of generality A could be diagonal, and it had better not be a multiple of the identity, so let us take A = (1 0; 0 2) (rows separated by semicolons). We want to find a B such that (A + B)v = 0 for some nonzero v. Note that if v is an eigenvector of A, then it won't happen that (A + B)v = 0. So find a B such that (A + B)(1, 1)^t = 0. For this, set v = (1, 1)^t and w = Av = (1, 2)^t. So we want B such that Bv = -w. Show that there exists such a B with positive eigenvalues. Namely, M(T_B, (v, w)) can be any matrix with first column (0, -1)^t. Now fill in the second column so that both eigenvalues are positive and distinct, using the characteristic polynomial. For this choice of matrix C = M(T_B, (v, w)) (i.e., for B = (v w) C (v w)^{-1}, where (v w) is the matrix with columns v and w), it follows that (A + B)v = 0, i.e., T_A + T_B is noninvertible, even though T_A and T_B both admit eigenbases with positive eigenvalues.
In particular, part (c) shows that eigenbases do not behave the same as orthonormal eigenbases: if S and T both had orthonormal eigenbases, then they would be positive operators (since they have positive eigenvalues), and then part (a) would apply to show that S + T is invertible.

Solution: (a) In class, we showed that null(S + T) = null(S) ∩ null(T) if S and T are positive. Hence, if null(S) = {0} or if null(T) = {0}, then null(S + T) = {0}. Therefore, if even only one of S or T is injective, so is S + T. Since V is finite-dimensional, S + T must then be invertible.
(b) Let S = I and T = -I (and assume V ≠ {0}). Then S + T = 0 is not invertible, but S and T are both self-adjoint (since ⟨±v, w⟩ = ⟨v, ±w⟩ for all v, w ∈ V).
(c) We follow the hint. Let A = (1 0; 0 2) and let B be such that M(T_B, (v, w)) = (0 1; -1 3), where v = (1, 1)^t and w = (1, 2)^t. Note that the characteristic polynomial of this matrix is x^2 - 3x + 1, so its eigenvalues are (3 ± √5)/2, which are positive and distinct. Also (A + B)(v) = Av + Bv = w - w = 0. We can explicitly compute B:
B = (1 1; 1 2)(0 1; -1 3)(1 1; 1 2)^{-1} = (-6 5; -11 9).
Note that A + B = (-5 5; -11 11), which indeed is noninvertible.
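Since the matrix entries above had to be reconstructed (the source garbled most signs and digits), here is a NumPy check of the counterexample, not part of the original solutions; it verifies exactly the claims made in (c):

    import numpy as np

    A = np.array([[1.0, 0.0], [0.0, 2.0]])
    B = np.array([[-6.0, 5.0], [-11.0, 9.0]])
    v = np.array([1.0, 1.0])
    w = A @ v                                       # w = (1, 2)

    # B = (v w) C (v w)^{-1} with C = M(T_B, (v, w)) = (0 1; -1 3)
    P = np.column_stack([v, w])
    assert np.allclose(np.linalg.inv(P) @ B @ P, [[0.0, 1.0], [-1.0, 3.0]])

    # B has the positive, distinct eigenvalues (3 ± sqrt(5))/2 ...
    assert np.allclose(sorted(np.linalg.eigvals(B).real),
                       [(3 - 5 ** 0.5) / 2, (3 + 5 ** 0.5) / 2])
    # ... but (A + B)v = 0, so T_A + T_B is not invertible
    assert np.allclose((A + B) @ v, 0)
    print("4(c): A + B is singular although A and B have positive eigenbases")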

(5) Trace! For any matrix A ∈ Mat(n, n, F), with A = (a_{ij}), let its trace be defined as the sum of its diagonal entries, i.e., tr(A) = Σ_{i=1}^n a_{ii}.
(a) Prove that tr(AB) = tr(BA). [Note: this is in Axler, Ch. 10, if you want to read it there.]
(b) Conclude that tr(BAB^{-1}) = tr(A) for all invertible B and all A. Conclude from the change-of-basis formula that, if T ∈ L(V) for any finite-dimensional vector space V, then tr(M(T)) does not depend on the choice of basis. So tr(T) := tr(M(T)) makes sense!
(c) Now suppose that M(T) is upper-triangular. Then the diagonal entries are the eigenvalues of T, with some multiplicities. We are going to prove in class that the number of times each eigenvalue λ occurs is independent of the choice of basis, and equals the dimension of the generalized eigenspace V(λ). For now, conclude from (b) that at least the sum of all the diagonal entries of M(T) (i.e., the eigenvalues with the multiplicities with which they occur in the matrix) is tr(T), which is independent of the choice of basis. So for most choices of eigenvalues, there will be only one choice of multiplicities (the number of times each occurs on the diagonal of M(T)) that will add up to tr(T).

Solution: (a) This is an explicit verification: tr(AB) = Σ_{i=1}^n Σ_{j=1}^n a_{ij} b_{ji}, and this expression is visibly symmetric under exchanging A and B, so it also equals tr(BA).
(b) By (a), tr(BAB^{-1}) = tr((AB^{-1})B) = tr(A) for all invertible B. The change-of-basis formula says that, if (w_1 ··· w_n) = (v_1 ··· v_n)B for two bases (v_1, ..., v_n) and (w_1, ..., w_n) of V, then M(T, (w_1, ..., w_n)) = B M(T, (v_1, ..., v_n)) B^{-1}. So this means that tr(M(T)) is independent of the choice of basis.
(c) The trace of a matrix is the sum of its diagonal entries, so by (b) this sum makes sense for a linear transformation regardless of the choice of basis. In particular, this is true for all choices of basis for which the matrix is upper-triangular.

(6) Singular value decomposition and compression! Take a look at the linked slides: in particular, look at slides 5-12. These show you the approximation of an image by writing the image as a matrix, taking its singular value decomposition, and setting to zero all the singular values except for the largest, then all except the largest 10, then the largest 20, etc., as indicated. We are going to explain why this works. See also Muddy Responses 19, #8.
Suppose that T ∈ L(V), for dim V = n, is a transformation with SVD (e_i), (s_i), and (f_i). Now, let T' be the same transformation, except with SVD (e_i), (s'_i), and (f_i), where s'_i = s_i for the largest k singular values, and s'_i = 0 for the smallest n - k values. Suppose that the smallest n - k singular values are all bounded by ε ≥ 0. We are going to prove that T and T' are very close: precisely, that the sum of the squares of the entries of any matrix for T - T' in an orthonormal basis is at most ε^2 (n - k). On the other hand, the information required to store T' is much less than that for T, since one need only store the top k values s_i and the corresponding e_i and f_i.
For any matrix A = (a_{ij}), let ‖A‖ = √(Σ_{i,j} |a_{ij}|^2). This is the direct analogue of the norm of vectors, as we will see.
(a) Show that, in any orthonormal basis, ‖A‖^2 = tr(A*A), where A* denotes the conjugate transpose of A. Thus, by the problem on traces (5) above, this makes sense for transformations: ‖S‖ := ‖M(S)‖ is independent of the choice of orthonormal basis used.
(b) Show that ‖A‖^2 is the sum of the squares of the norms of the columns of A, and also the sum of the squares of the norms of the rows of A.
(c) Show that, if S is an isometry and T any transformation, then ‖ST‖ = ‖T‖ = ‖TS‖. Hint: explain why this is the same as showing that ‖UA‖ = ‖A‖ = ‖AU‖, where A is any matrix and U is a unitary matrix. Note that the columns of UA all have the same norms as the columns of A, and similarly that the rows of AU have the same norms as the rows of A. Conclude the result (using (b)).
(d) Conclude that, if T has singular value decomposition (e_i), (s_i), and (f_i), then ‖T‖^2 = Σ_i s_i^2.
(e) Deduce that, in the original situation of the problem, ‖T - T'‖ ≤ ε √(n - k). This says, informally, that T and T' are very close.

Solution: (a) The i-th diagonal entry of A*A is the dot product of the i-th column of A with itself (Σ_{j=1}^n \overline{a_{ji}} a_{ji} = Σ_j |a_{ji}|^2). The sum of all of these, over i, is Σ_{i,j} |a_{ij}|^2 = ‖A‖^2.
(b) We already explained the fact about the columns in (a). For the rows, the same proof works: the square of the norm of the i-th row is Σ_{j=1}^n |a_{ij}|^2, so the sum of this over i is ‖A‖^2.
(c) Since ‖Su‖ = ‖u‖ when S is an isometry and u any vector, taking orthonormal bases shows that ‖Uv‖ = ‖v‖ for all unitary matrices U (a unitary matrix is the same thing as the matrix of an isometry in an orthonormal basis). Now, ‖UA‖^2 is the sum of the squares of the norms of the columns of UA, and this is the same as the sum of the squares of the norms of the columns of A, i.e., ‖A‖^2. The same works for AU, by noting that the norms of the rows of AU are the same as the norms of the rows of A. Now, ‖T‖ = ‖M(T)‖ in any orthonormal basis (by definition), and since matrices of isometries in orthonormal bases are unitary, and M(ST) = M(S)M(T) for all S and T, we deduce the result.
(d) If T = SP where S is an isometry and P is positive (the polar decomposition), then by part (c), ‖T‖ = ‖P‖. Now, since P is positive, there is an orthonormal basis in which P is diagonal with nonnegative entries, and these entries are defined to be the singular values. Thus ‖T‖^2 = ‖P‖^2 = Σ_i s_i^2, where the s_i are the singular values.
(e) In the original situation, T - T' has singular values which are the smallest n - k singular values of T (all of absolute value at most ε), together with k zeros. So, by (d), ‖T - T'‖^2 ≤ (n - k) ε^2, and taking the square root gives the desired result.
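Problem 6 is easy to see in action. A NumPy sketch, not part of the original solutions (the matrix and the cutoff k are arbitrary): truncate the SVD to the top k singular values and compare ‖T - T'‖ with ε √(n - k).

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 20, 5
    T = rng.standard_normal((n, n))

    U, s, Vt = np.linalg.svd(T)               # s sorted, largest first
    T_prime = U @ np.diag(np.where(np.arange(n) < k, s, 0.0)) @ Vt

    err = np.linalg.norm(T - T_prime)         # Frobenius norm, the ||.|| above
    eps = s[k]                                # bounds all discarded values
    assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))   # part (d)
    assert err <= eps * np.sqrt(n - k) + 1e-12            # part (e)
    print(f"||T - T'|| = {err:.3f} <= eps sqrt(n-k) = {eps * np.sqrt(n - k):.3f}")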

(7) Block upper-triangular matrices!
Let F = R. We will show that, for the 4 × 4 matrix A below, T_A has no basis in which it is block diagonal:
A = (  0  1  1  0 )
    ( -1  0  0  1 )
    (  0  0  0  1 )
    (  0  0 -1  0 )
(a) Show that the complex eigenvalues of A are ±i.
(b) Show that the complex eigenspaces of each of the eigenvalues, i and -i, are one-dimensional.
(c) Now suppose that, in some (real) basis, M(T_A) were block diagonal. Conclude that, in this case, BAB^{-1} must be block diagonal with 2 × 2 blocks, each of which has complex eigenvalues ±i. Conclude that BAB^{-1}, and hence A, has eigenspaces for i and -i of dimension two, not one. This contradicts (b).

Solution: (a) Since this matrix is block upper-triangular, its eigenvalues are the eigenvalues of the diagonal blocks, which in this case we have already computed many times to be ±i (or one can apply the characteristic polynomial, which for each 2 × 2 diagonal block is x^2 + 1).
(b) This means showing that null(A - iI) and null(A + iI) are one-dimensional. Note that they are at least one-dimensional, since (1, i, 0, 0)^t is in the first null space and (1, -i, 0, 0)^t is in the second. We claim that all null vectors are multiples of these. Since the eigenspaces of the upper-left 2 × 2 block are one-dimensional, all null vectors ending in two zeros are of this form. For a contradiction, suppose we had a null vector whose last two entries were not both zero. The last two entries must be in the null space of the bottom-right 2 × 2 block, so up to scaling the last two entries are (1, i) (since they are not both zero). Now, for (a, b, 1, i)^t to be in null(A - iI), we need that (b - ia + 1, -a - ib + i) = (0, 0). This is impossible, since it would require that i(b - ia + 1) + (-a - ib + i) = 0, but the left-hand side equals 2i ≠ 0. So the null spaces are one-dimensional (the case of A + iI is identical, with i replaced by -i).
(c) If, in some real basis, A could be put in block diagonal form, then the two diagonal blocks would each have to have eigenvalues ±i, since these are all the eigenvalues of A, and any two-by-two real matrix with nonreal eigenvalues has complex eigenvalues which are conjugates of each other. But then, since the new matrix A' = BAB^{-1} is block diagonal, its eigenspaces for the eigenvalues i and -i would each be two-dimensional: if (a, b) were a nonzero eigenvector of eigenvalue i of the first block, and (c, d) a nonzero eigenvector of eigenvalue i of the second block, then (a, b, 0, 0) and (0, 0, c, d) would be linearly independent eigenvectors of A' of eigenvalue i (and similarly for -i). This can't happen, since the eigenspaces of A' must have the same dimension as those of A (these matrices can both be realized as matrices of the same linear transformation, and the dimensions of the eigenspaces of a linear transformation make sense independently of the choice of basis).
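The entries of A above were reconstructed from a garbled source (the diagonal blocks are the standard rotation block and the upper-right block is the identity), so here is a NumPy check of parts (a) and (b) for this matrix, not part of the original solutions:

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [-1, 0, 0, 1],
                  [0, 0, 0, 1],
                  [0, 0, -1, 0]], dtype=complex)

    # (a) the complex eigenvalues are ±i (each twice)
    assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                       [-1j, -1j, 1j, 1j])

    # (b) each eigenspace is one-dimensional: rank(A - lam*I) = 3,
    # and (1, ±i, 0, 0)^t spans the corresponding null space
    for lam in (1j, -1j):
        assert np.linalg.matrix_rank(A - lam * np.eye(4)) == 3
        e = np.array([1, lam, 0, 0])
        assert np.allclose(A @ e, lam * e)
    print("eigenvalues ±i, each with a one-dimensional eigenspace")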

(8) Minimal polynomial! Given a transformation T ∈ L(V) for V finite-dimensional, the minimal polynomial is defined to be the monic polynomial p(x) of minimal degree (monic means that the leading coefficient is one) such that p(T) = 0.
(a) Show that this definition makes sense: if p(x) and q(x) were two monic polynomials of the same minimal degree such that p(T) = q(T) = 0, show that p(x) = q(x). (Hint: consider h(x) := p(x) - q(x), and note that h(T) = 0.)
(b) Now, suppose that p(x) is the minimal polynomial of T, and q(T) = 0. Then show that p(x) | q(x), i.e., q(x) = h(x) p(x) for some polynomial h(x). (Hint: using long division of polynomials, write q(x) = h(x) p(x) + r(x) where the degree of r(x) is less than the degree of p(x), and then conclude that r(x) = 0 since q(T) = p(T) = 0.)
(c) Next, suppose that T has an eigenbasis and its (distinct) eigenvalues are λ_1, ..., λ_k. Show that p(x) = (x - λ_1) ··· (x - λ_k).
(d) In the case F = C, recall from class the decomposition theorem that V = ⊕_λ V(λ), where each V(λ) is the generalized eigenspace of λ, i.e., the span of vectors v such that (T - λI)^m v = 0 for some m ≥ 1. Now, for each λ, explain why there exists a minimal m_λ such that (T - λI)^{m_λ} v = 0 for all v ∈ V(λ) (hint: take a basis of V(λ) and take the maximum value of m among the basis vectors).
(e) Finally, explain why the minimal polynomial in the situation of (d) is p(x) = ∏_λ (x - λ)^{m_λ}.

Solution: (a) Indeed, following the hint, if p(x) and q(x) were monic polynomials of the same degree and p(T) = 0 = q(T), then h(x) := p(x) - q(x) would have strictly lower degree, and also h(T) = 0. So if p and q had the minimal degree among nonzero polynomials killing T, we would conclude that h(x) = 0 (otherwise we could rescale h to be monic and contradict minimality), i.e., p(x) = q(x), as desired.
(b) By long division, we can write q(x) = h(x) p(x) + r(x), for some polynomial h(x), and for r(x) a polynomial of degree less than the degree of p. Since q(T) = 0 and p(T) = 0, we conclude that r(T) = 0. Since p is the minimal polynomial, this forces r(x) = 0 (otherwise we could rescale r to be monic and get a contradiction to the definition of p). So q(x) = h(x) p(x), i.e., p(x) | q(x).
(c) Let (v_1, ..., v_n) be the eigenbasis mentioned, and let p(x) = (x - λ_1) ··· (x - λ_k). It is clear that p(T) v_i = 0 for all i, since p(T) v_i = p(λ) v_i = 0 where λ is the eigenvalue of v_i; hence p(T) = 0. Conversely, suppose q(x) is a polynomial such that q(T) = 0, so that q(T) v_i = 0 for all i. Then q(λ) v_i = 0 for the eigenvalue λ of v_i, so every eigenvalue λ must be a root of q. Hence p(x) | q(x), since p(x) is just the product of the distinct linear factors (x - λ_j), where λ_j ranges over all eigenvalues. This implies that the degree of p(x) is less than or equal to the degree of q(x), so p(x) is the minimal polynomial of T.
(d) As explained in class, take a basis of V(λ), and let m be large enough so that each of the basis vectors lies in null(T - λI)^m (such an m must exist, since there are finitely many basis vectors, and each of them is in the null space of some power of T - λI, being a generalized eigenvector of eigenvalue λ). Then (T - λI)^m kills all of V(λ), and we can let m_λ be the minimal possible value of m with this property.
(e) Now, (T - λI)^{m_λ} is zero on V(λ) for each λ, so the given polynomial p(x) indeed satisfies p(T) = 0, since p(T)|_{V(λ)} = 0 for all λ and V = ⊕_λ V(λ). For the converse, suppose that p(x) were not minimal. Since p(T) = 0, we have p(x) = h(x) q(x) by (b), for q(x) the minimal polynomial and h(x) some product of powers of the factors (x - λ). So q(x) = ∏_λ (x - λ)^{m'_λ}, where m'_λ ≤ m_λ for all λ, but they are not all equal. Let λ be an eigenvalue for which m'_λ < m_λ. Then (T - λI)^{m'_λ} is not zero on V(λ). On the other hand, as shown in class, for μ ≠ λ, (T - μI) is an isomorphism on V(λ), since V(λ) cannot have any nonzero eigenvectors of eigenvalue μ (otherwise, such an eigenvector v would satisfy (T - λI)^{m_λ} v = (μ - λ)^{m_λ} v ≠ 0, a contradiction). So, since q(x)/(x - λ)^{m'_λ} is a product of factors of the form (x - μ) for μ ≠ λ, we conclude that q(T) is nonzero on V(λ), contradicting q(T) = 0.
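To make parts (c) and (e) concrete, here is a small NumPy illustration, not part of the original solutions (the matrices are ad hoc examples):

    import numpy as np

    I = np.eye(3)

    # 8(c): T diagonalizable with eigenvalues 1, 1, 2, so the minimal
    # polynomial is (x - 1)(x - 2), of lower degree than the
    # characteristic polynomial (x - 1)^2 (x - 2).
    T = np.diag([1.0, 1.0, 2.0])
    assert np.allclose((T - I) @ (T - 2 * I), 0)

    # 8(e): for the 3x3 Jordan block N at 0, x^2 does not kill N but
    # x^3 does, so the minimal polynomial is x^3 (that is, m_0 = 3).
    N = np.diag([1.0, 1.0], k=1)
    assert not np.allclose(N @ N, 0)
    assert np.allclose(N @ N @ N, 0)
    print("minimal polynomials behave as in 8(c) and 8(e)")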

(9) Square roots! (cf. Axler, p. 177): We will characterize those linear transformations T ∈ L(V), for F = C and V finite-dimensional, such that T admits a square root S, i.e., an S with S^2 = T.
(a) By the decomposition theorem, V = ⊕_λ V(λ), where the V(λ) are generalized eigenspaces. Show that T admits a square root if and only if T|_{V(λ)} does for all λ.
Hint: If S_λ^2 = T|_{V(λ)} for all λ, then (⊕_λ S_λ)^2 = T, where ⊕_λ S_λ means the operator sending ⊕_λ v_λ to ⊕_λ S_λ(v_λ). Conversely, if S^2 = T, show that S(V(λ)) ⊆ V(λ) for all λ, since ST = TS and hence S(T - λI)^m = (T - λI)^m S for all λ. Deduce that, if S^2 = T, then (S|_{V(λ)})^2 = T|_{V(λ)}.
(b) Now we reduce to the case that V = V(λ), i.e., T - λI is nilpotent (in some basis it is upper-triangular with zeros on the diagonal, so raising it to the power of the size of the matrix gives zero). In other words, we may assume T = λI + N, where N is nilpotent, i.e., N^m = 0 for some m ≥ 1. Suppose λ ≠ 0. Let μ ∈ C be a complex square root of λ, i.e., μ^2 = λ. Then T = μ^2 I + N. Let N' = N/μ^2, so T = μ^2 (I + N'). Now, let f(x) be the Taylor polynomial of √(1 + x) up to degree m - 1, i.e.,
f(x) = 1 + (1/2) x - (1/8) x^2 + (1/16) x^3 - ··· + (-1)^{m-1} ((2(m-1))! / ((1 - 2(m-1)) ((m-1)!)^2 4^{m-1})) x^{m-1}
(cf. https://en.wikipedia.org/wiki/Square_root#Properties). In particular, f(x)^2 = 1 + x + terms of degree ≥ m (you don't have to prove this, but it follows from the fact that f(x) is the Taylor expansion of √(1 + x)). Show that the matrix S := μ f(N') satisfies S^2 = T, and don't forget that (N')^m = 0. Hint: use that f(x)^2 = 1 + x + a multiple of x^m.
(c) Finally, let us suppose that λ = 0, i.e., T itself is nilpotent. In this case, some but not all T admit a square root: give one nonzero example where T admits a square root, and one example where T does not admit a square root. (Hint: look for upper-triangular matrices of size 3 × 3 or smaller.)

Solution: (a) We follow the hint. Suppose that the operators S_λ ∈ L(V(λ)) satisfy S_λ^2 = T|_{V(λ)} for all λ. Let S = ⊕_λ S_λ ∈ L(V) denote the operator which takes ⊕_λ v_λ to ⊕_λ S_λ(v_λ) for all choices of v_λ ∈ V(λ). This makes sense by the decomposition theorem: V = ⊕_λ V(λ). Then S^2(v_λ) = T(v_λ) for all v_λ ∈ V(λ) and all λ, and hence, since V = ⊕_λ V(λ), we conclude that S^2 = T. Conversely, suppose that S^2 = T. Then S commutes with T, hence with each (T - λI)^m, so S(V(λ)) ⊆ V(λ); therefore S^2|_{V(λ)} = (S|_{V(λ)})^2 = T|_{V(λ)} for all λ. So each T|_{V(λ)} has a square root as well.
(b) Clearly, it suffices to show that f(N')^2 = I + N'. But f(N')^2 = g(N'), where g(x) = f(x)^2. And, as pointed out, g(x) = 1 + x + a multiple of x^m. Thus g(N') = I + N' + (N')^m (···), where (···) is some polynomial in N'. Since (N')^m = 0, this proves that g(N') = I + N', as desired; hence S^2 = μ^2 f(N')^2 = μ^2 (I + N') = T.
(c) The matrix N = (0 1; 0 0) has no square root: if it did, call it S; then S^2 = N ≠ 0, but S^4 = N^2 = 0. Since S^4 = 0, S could not be injective. Since S^2 ≠ 0, also S ≠ 0. So rk(S) = 1. But rk(S^2) = rk(N) = 1 as well. Since range(S^2) ⊆ range(S) and both are one-dimensional, this would imply that range(S) = range(S^2), and hence that S restricted to range(S) is an isomorphism. But then all (positive) powers of S would have the same range, range(S), which is nonzero. This contradicts S^4 = 0.
On the other hand, we can easily come up with a nilpotent matrix that does admit a square root, by taking the square of a nilpotent matrix which does not square to zero. For example,
( 0 1 0 )^2   ( 0 0 1 )
( 0 0 1 )   = ( 0 0 0 )
( 0 0 0 )     ( 0 0 0 ),
so the nilpotent matrix (0 0 1; 0 0 0; 0 0 0) does admit a square root.
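The Taylor-polynomial construction in part (b) runs verbatim as code. A NumPy sketch, not part of the original solutions (λ, the size n, and N are arbitrary choices; the coefficients binom(1/2, j) are generated by a simple recurrence):

    import numpy as np

    def sqrt_taylor_coeffs(m):
        """Coefficients binom(1/2, j) of sqrt(1 + x), for j = 0, ..., m - 1."""
        c, coeffs = 1.0, []
        for j in range(m):
            coeffs.append(c)
            c *= (0.5 - j) / (j + 1)
        return coeffs

    lam = 3.0 - 4.0j
    mu = np.sqrt(lam)                        # a complex square root of lambda
    n = 4
    N = np.diag(np.ones(n - 1), k=1)         # nilpotent, N^n = 0
    T = lam * np.eye(n) + N

    Np = N / mu ** 2                         # N' = N / mu^2
    S = mu * sum(c * np.linalg.matrix_power(Np, j)
                 for j, c in enumerate(sqrt_taylor_coeffs(n)))
    assert np.allclose(S @ S, T)             # S = mu f(N') squares to T
    print("S^2 = T for T = lambda I + N")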

(10) Determinants and characteristic polynomial!
For the following definition, see also p. 226 of Axler.
Definition 0.2. A permutation of {1, ..., n} is a bijective function σ : {1, ..., n} → {1, ..., n}. Let S_n be the set of all permutations of {1, ..., n}: so we say σ ∈ S_n if σ is a permutation of {1, ..., n}. Equivalently, a permutation is a list (σ(1), ..., σ(n)) of positive integers that contains each positive integer between 1 and n inclusive exactly once.
Caution: Axler uses the notation (m_1, ..., m_n) = (σ(1), ..., σ(n)). I am opting for the more standard notation, but the two notions are equivalent.
For the following definition, see also p. 228 of Axler.
Definition 0.3. The sign of a permutation σ ∈ S_n, denoted by sign(σ), is defined to be (-1)^{o(σ)}, where o(σ) is the number of pairs (i, j) with 1 ≤ i < j ≤ n such that σ(i) > σ(j). In other words, the sign of a permutation is +1 or -1 according to whether the number of pairs i < j that appear in reverse order in the list (σ(1), ..., σ(n)) is even or odd.
We will need the following proposition, which will be discussed in class:
Proposition 0.4. If σ, τ ∈ S_n, then sign(σ) sign(τ) = sign(σ ∘ τ).
Note that the proof of this proposition boils down to Lemma 10.23. Please use this proposition, without proving it, in the below. (You are, however, welcome to try to prove it if you like and/or to see Axler.)
For the following definition, see also Axler, 10.25:
Definition 0.5. Let us define the determinant of a matrix A = (a_{i,j}) by the formula
(0.6) det(A) = Σ_{σ ∈ S_n} sign(σ) a_{σ(1),1} a_{σ(2),2} ··· a_{σ(n),n}.
Note that the sum is over n! different terms, since S_n has n! elements.
Prove the following properties:
(a) If A has two identical columns, then det(A) = 0.
(b) Suppose A has columns (v_1 ··· v_n) and B has columns (v_1 ··· v_{k-1} w_k v_{k+1} ··· v_n), i.e., A and B have the same columns except for the k-th column. Finally, let C := (v_1 ··· v_{k-1} (v_k + w_k) v_{k+1} ··· v_n), i.e., the same as A and B except in the k-th column, where one takes the sum of the columns v_k and w_k. Prove: det(A) + det(B) = det(C).
(b') Note also that, if instead we had w_k = c v_k for some c ∈ F, then det(B) = c det(A).
(c) Deduce from (a) and (b) (and (b')) that, if A is not invertible, then det(A) = 0. Hint: write one of the columns, say the k-th column, as a linear combination of the other columns, and apply (b) iteratively as well as (a).
(d) Conclude that, if λ is an eigenvalue of A, then det(A - λI) = 0. In other words, λ must be a root of the characteristic polynomial of A, defined as det(A - xI). Equivalently, if λ is not a root of the characteristic polynomial, then A - λI is invertible.
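Formula (0.6) can be implemented verbatim and compared against a library determinant. A brute-force Python sketch, not part of the original problem set (it uses itertools and NumPy, with 0-indexed permutations):

    from itertools import permutations

    import numpy as np

    def sign(sigma):
        # Definition 0.3: (-1)^{o(sigma)}, o = number of inverted pairs
        o = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                if sigma[i] > sigma[j])
        return -1 if o % 2 else 1

    def det_by_permutations(A):
        # formula (0.6): sum of sign(sigma) a_{sigma(1),1} ... a_{sigma(n),n}
        n = len(A)
        return sum(sign(s) * np.prod([A[s[k], k] for k in range(n)])
                   for s in permutations(range(n)))

    A = np.random.default_rng(3).standard_normal((4, 4))
    assert np.isclose(det_by_permutations(A), np.linalg.det(A))
    print("permutation-sum determinant matches numpy.linalg.det")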

Solution:
(a) First, we deduce Lemma 10.23 from the proposition (or you could just cite the lemma). If τ is obtained from σ by swapping the values at i and i + 1, i.e.,
τ(j) = σ(i + 1) if j = i,  τ(j) = σ(i) if j = i + 1,  τ(j) = σ(j) if j ∉ {i, i + 1},
then τ = σ ∘ κ_{i,i+1}, where κ_{i,i+1} is the permutation which swaps i and i + 1 and leaves everything else fixed. Since o(κ_{i,i+1}) = 1, it follows that sign(κ_{i,i+1}) = -1, and by Proposition 0.4, sign(τ) = -sign(σ). More generally, if τ is obtained from σ by swapping the values at i and j, then τ = σ ∘ κ_{i,j}, where κ_{i,j} is the permutation which swaps i and j and leaves everything else fixed. Note that κ_{i,j} is conjugate to κ_{i,i+1}: for instance, κ_{i,j} = κ_{i+1,j} κ_{i,i+1} κ_{i+1,j}, where κ_{i+1,j} = κ_{i+1,j}^{-1} since its square is the identity. Hence sign(κ_{i,j}) = sign(κ_{i+1,j})^2 sign(κ_{i,i+1}) = -1 for all i and j. So the argument above shows that sign(τ) = -sign(σ), as desired.
Now, using Lemma 10.23, the result follows easily: if column i equals column j in the matrix A, then every term sign(σ) a_{σ(1),1} a_{σ(2),2} ··· a_{σ(n),n} in the expression for det(A) cancels with the term sign(τ) a_{τ(1),1} a_{τ(2),2} ··· a_{τ(n),n} for τ = σ ∘ κ_{i,j} (the two products are equal since columns i and j coincide, while the signs are opposite). So det(A) = 0.
(b) Let (b_{ij}) and (c_{ij}) be the entries of B and C. Then
det(C) = Σ_{σ ∈ S_n} sign(σ) c_{σ(1),1} c_{σ(2),2} ··· c_{σ(n),n}
       = Σ_{σ ∈ S_n} sign(σ) a_{σ(1),1} ··· a_{σ(k-1),k-1} (a_{σ(k),k} + b_{σ(k),k}) a_{σ(k+1),k+1} ··· a_{σ(n),n}
       = Σ_{σ ∈ S_n} sign(σ) a_{σ(1),1} ··· a_{σ(n),n} + Σ_{σ ∈ S_n} sign(σ) b_{σ(1),1} ··· b_{σ(n),n}
       = det(A) + det(B).
(b') This is similar: all of the summands of det(B) contain exactly one factor from the k-th column, so each summand is c times the corresponding summand of det(A). So det(B) = c det(A).
(c) If A is not invertible, then one of the columns must be in the span of the other columns. By parts (b) and (b'), det(A) is then a linear combination of determinants of matrices in which this column is replaced by one of the other columns, i.e., in which two columns are identical. By part (a), each of these determinants is zero. Hence det(A) = 0.
(d) If λ is an eigenvalue of A, then A - λI is noninvertible. By (c), this implies that det(A - λI) = 0. Hence λ must be a root of the polynomial det(A - xI).
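Both facts about sign used in (a), Proposition 0.4 and the swap lemma, can also be verified by brute force over a small S_n. A standard-library Python sketch, not part of the original solutions (permutations are 0-indexed tuples):

    from itertools import permutations

    def sign(sigma):
        o = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                if sigma[i] > sigma[j])
        return -1 if o % 2 else 1

    S4 = list(permutations(range(4)))
    compose = lambda s, t: tuple(s[t[k]] for k in range(4))  # s after t

    # Proposition 0.4: sign is multiplicative
    assert all(sign(compose(s, t)) == sign(s) * sign(t)
               for s in S4 for t in S4)

    # swap lemma: composing with a transposition kappa_{i,j} flips the sign
    for i, jj in [(0, 1), (0, 3), (1, 2)]:
        kappa = list(range(4)); kappa[i], kappa[jj] = jj, i
        assert all(sign(compose(s, tuple(kappa))) == -sign(s) for s in S4)
    print("sign is multiplicative and flips under transpositions")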


MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015 Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal

More information

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS 1. (5.5 points) Let T : R 2 R 4 be a linear mapping satisfying T (1, 1) = ( 1, 0, 2, 3), T (2, 3) = (2, 3, 0, 0). Determine T (x, y) for (x, y) R

More information

Math 113 Midterm Exam Solutions

Math 113 Midterm Exam Solutions Math 113 Midterm Exam Solutions Held Thursday, May 7, 2013, 7-9 pm. 1. (10 points) Let V be a vector space over F and T : V V be a linear operator. Suppose that there is a non-zero vector v V such that

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information