Honors Linear Algebra, Spring term. Homework 8 solutions by Yifei Chen

8.2.(?). W = {v ∈ R⁴ : (v|α) = 0 and (v|β) = 0}. Writing v = (x1, x2, x3, x4), the two conditions give a homogeneous linear system in x1, x2, x3, x4. Solving this linear system, the solution space is two-dimensional, and the two basic solutions v1, v2 form a basis of W.

8.2.(?). The formula for the Gram-Schmidt orthogonalization process can be found in the proof of the corresponding theorem in the textbook. With β1, β2, β3 the given vectors in R³:

α1 = β1,
α2 = β2 - ((β2|α1)/(α1|α1)) α1,
α3 = β3 - ((β3|α1)/(α1|α1)) α1 - ((β3|α2)/(α2|α2)) α2.

Dividing each αk by its norm, {α1/‖α1‖, α2/‖α2‖, α3/‖α3‖} is an orthonormal basis of R³.

8.2.(?). Apply the Gram-Schmidt orthogonalization process to β1 = (1, 0, i) and β2 = (2, 1, 1 + i). Notice that the inner product here is (x|y) = x1ȳ1 + x2ȳ2 + x3ȳ3 for x, y ∈ C³.

α1 = β1 = (1, 0, i),
α2 = β2 - ((β2|α1)/(α1|α1)) α1 = (2, 1, 1 + i) - ((3 - i)/2)(1, 0, i) = ((1 + i)/2, 1, (1 - i)/2).

Since ‖α1‖² = 2 and ‖α2‖² = 2, {α1/√2, α2/√2} is an orthonormal basis for the subspace spanned by β1 and β2.

8.2.7. Let α = (3, 4), where R² carries the inner product (x|y) = x1y1 - x2y1 - x1y2 + 4x2y2 (see the Example in the textbook). Then ‖α‖² = 9 - 12 - 12 + 64 = 49, so ‖α‖ = 7 and β = α/‖α‖ = (3/7, 4/7).

(a) The formula for the projection E : V → W is E(v) = (v|β)β. For v = (x1, x2),

(v|β) = (x1 - x2)(3/7) + (-x1 + 4x2)(4/7) = (-x1 + 13x2)/7,

so

E(x1, x2) = ((-x1 + 13x2)/7)(3/7, 4/7) = ((-3x1 + 39x2)/49, (-4x1 + 52x2)/49).

Here we used

(u|v) = (x1, x2) [[1, -1], [-1, 4]] (y1, y2)ᵗ = x1y1 - x2y1 - x1y2 + 4x2y2,

where u = (x1, x2), v = (y1, y2); in particular (v|v) = x1² - 2x1x2 + 4x2².

(b) Let B = {e1, e2} be the standard basis. Then E(e1) = (-3/49, -4/49) and E(e2) = (39/49, 52/49). So the matrix of E with respect to the standard basis is

(1/49) [[-3, 39], [-4, 52]].
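The Gram-Schmidt recursion used above is mechanical enough to check by machine. Below is a minimal pure-Python sketch (not part of the homework; the three test vectors are illustrative, not necessarily the exercise's) that orthonormalizes a list of vectors under the standard complex inner product (x|y) = Σ xj ȳj:

```python
def inner(x, y):
    # standard inner product (x|y) = sum_j x_j * conj(y_j)
    return sum(a * b.conjugate() for a, b in zip(x, y))

def gram_schmidt(vectors):
    # classical Gram-Schmidt: subtract projections onto the earlier
    # (already unit-length) vectors, then normalize
    ortho = []
    for b in vectors:
        a = list(b)
        for q in ortho:
            c = inner(b, q)  # coefficient (b|q), since q has norm 1
            a = [ai - c * qi for ai, qi in zip(a, q)]
        norm = abs(inner(a, a)) ** 0.5
        ortho.append([ai / norm for ai in a])
    return ortho

# illustrative input vectors (assumed for this sketch)
Q = gram_schmidt([(1, 0, 1), (1, 0, -1), (0, 3, 4)])
```

The output list is orthonormal whatever linearly independent input is supplied, which is the content of the theorem cited above.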
(c) Try to find a vector v such that (v|α) = 0. For v = (x1, x2),

(v|α) = 3x1 - 3x2 - 4x1 + 16x2 = -x1 + 13x2.

Let v = (13, 1). Then W⊥ = Span{v}.

(d) We have ‖v‖² = 169 - 13 - 13 + 4 = 147, so let γ = v/‖v‖ = (13, 1)/√147. Then {β, γ} is an orthonormal basis of R². From the geometric meaning of the orthogonal projection, E(β) = β and E(γ) = 0. So the matrix of E with respect to the orthonormal basis {β, γ} is

[[1, 0], [0, 0]].

8.3.(?). Let A be the matrix of T with respect to the standard basis. Then T* is represented by the conjugate transpose A* of A. Computing the products AA* and A*A directly, we find AA* ≠ A*A. Therefore T does not commute with T*.

8.3.8. The vector space V has a basis B = {1, t, t², t³}. Applying the Gram-Schmidt process to the basis B, we can find an orthonormal basis B′ = {f1, f2, f3, f4} (calculate it yourself). For a given real number t, let

g_t = ω(t) f1 + α(t) f2 + β(t) f3 + γ(t) f4, where ω(t), α(t), β(t), γ(t) ∈ R.

For f = a_f f1 + b_f f2 + c_f f3 + d_f f4 ∈ V with a_f, b_f, c_f, d_f ∈ R, we want

(f|g_t) = a_f ω(t) + b_f α(t) + c_f β(t) + d_f γ(t) = f(t).

For the fixed t, let f = 1, t, t², t³ respectively. Each choice of f has known coefficients a_f, b_f, c_f, d_f and a known value f(t), so we get 4 linear equations in the variables ω(t), α(t), β(t), γ(t). The linear system has a solution because f ranges over the basis {1, t, t², t³}. From the solution we get a g_t which satisfies (f|g_t) = f(t) for all f ∈ V, by the linearity of the inner product.

8.3.9. Use the above orthonormal basis B′, and find the matrix A of the differentiation operator D with respect to B′. Then take the conjugate transpose A* of A; A* is the matrix of D* with respect to the basis B′. (Notice that this follows from the corollary in the textbook; the corollary works for an orthonormal basis.)

8.4.6. (a) For any α1, α2 ∈ V, write α1 = β1 + γ1 and α2 = β2 + γ2, where βi ∈ W and γj ∈ W⊥. Then U(α1) = β1 - γ1, and

(U(α1)|α2) = (β1 - γ1|β2 + γ2) = (β1|β2) - (γ1|γ2) = (β1 + γ1|β2 - γ2) = (α1|U(α2)),
(U(α1)|U(α2)) = (β1 - γ1|β2 - γ2) = (β1|β2) + (γ1|γ2) = (β1 + γ1|β2 + γ2) = (α1|α2).

So U is self-adjoint and unitary.

(b) W is the subspace spanned by (1, 1, 1). Let w = (1/√3)(1, 1, 1); for any α ∈ V, let β = (α|w)w and γ = α - β. Then β ∈ W, γ ∈ W⊥, and α = β + γ. So U(α) = β - γ = 2(α|w)w - α, and

U(e1) = 2(e1|w)w - e1 = (2/3)(1, 1, 1) - e1 = (-1/3, 2/3, 2/3),
U(e2) = 2(e2|w)w - e2 = (2/3, -1/3, 2/3),
U(e3) = 2(e3|w)w - e3 = (2/3, 2/3, -1/3).
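The projection in exercise 8.2.7 can be sanity-checked numerically. The sketch below (pure Python; the Gram matrix G and the vector alpha = (3, 4) are taken as read from the exercise, so treat those specific values as assumptions) builds the projection onto Span{alpha} for the non-standard inner product and verifies that it is idempotent with trace 1:

```python
from fractions import Fraction as F

# Assumed data from the exercise: inner product (x|y) = x1*y1 - x2*y1
# - x1*y2 + 4*x2*y2, i.e. Gram matrix G, and alpha = (3, 4).
G = [[F(1), F(-1)], [F(-1), F(4)]]
alpha = [F(3), F(4)]

def ip(x, y):
    # bilinear form with Gram matrix G
    return sum(G[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

na2 = ip(alpha, alpha)  # should be 49 with the assumed data

def proj(v):
    # orthogonal projection of v onto Span{alpha}
    c = ip(v, alpha) / na2
    return [c * a for a in alpha]

# images of the standard basis vectors = columns of the matrix of E
E = [proj([F(1), F(0)]), proj([F(0), F(1)])]
```

Exact rational arithmetic (Fraction) makes the idempotency check E² = E an equality rather than a floating-point approximation.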
So the matrix of U with respect to the standard basis is

(1/3) [[-1, 2, 2], [2, -1, 2], [2, 2, -1]].

8.4.8. The matrix of the transformation with respect to the standard basis is

A = [[cos θ, -sin θ], [sin θ, cos θ]].

The eigenvalues of A are e^{±iθ}. We can easily check that AA* = A*A, i.e., A is normal. By the Spectral Theorem, A is unitarily equivalent to

[[e^{iθ}, 0], [0, e^{-iθ}]].
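The claims in 8.4.8 admit a quick numeric check (a sketch; θ = 0.7 is an arbitrary test angle): the rotation matrix commutes with its transpose, and its eigenvalues are e^{±iθ}.

```python
import cmath
import math

theta = 0.7  # arbitrary test angle (an assumption of this sketch)
c, s = math.cos(theta), math.sin(theta)
A = [[c, -s], [s, c]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AT = [[A[j][i] for j in range(2)] for i in range(2)]  # A is real, so A* = A^T
is_normal = all(abs(mul(A, AT)[i][j] - mul(AT, A)[i][j]) < 1e-12
                for i in range(2) for j in range(2))

# eigenvalues of a 2x2 matrix from x^2 - (tr A) x + det A = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr + disc) / 2, (tr - disc) / 2], key=lambda z: z.imag)
```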
8.5.7. Let λ be an eigenvalue of T, and α an eigenvector for λ (notice that α ≠ 0; we are going to use this condition). Since T is unitary, we get |λ| = 1, i.e., λ = e^{iθ}. Indeed, ‖α‖ = ‖Tα‖ = ‖λα‖ = |λ|‖α‖, so |λ| = 1 by α ≠ 0.

On the other hand, λ is a positive real number since T is positive. From the definition of a positive operator, T = T* and (Tv|v) > 0 for all v ≠ 0. Since T is self-adjoint, the eigenvalue λ is real. Actually,

λ(α|α) = (Tα|α) = (α|Tα) = (α|λα) = λ̄(α|α),

so λ = λ̄ since (α|α) ≠ 0, i.e., λ ∈ R. Moreover

0 < (Tα|α) = (λα|α) = λ(α|α),

so λ > 0.

All eigenvalues λ of T satisfy |λ| = 1 and λ > 0, which implies λ = 1. Applying the Spectral Theorem, T is unitarily equivalent to I, i.e., U*TU = I for some unitary matrix U. So T = UIU* = I.

8.5.8. (Tα|β) = (T1α|β) + i(T2α|β) = (α|T1β) + i(α|T2β) = (α|(T1 - iT2)β). By the uniqueness of the adjoint, we get T* = T1 - iT2.

If T1 and T2 are self-adjoint operators which commute, then Ti* = Ti, i = 1, 2, and T1T2 = T2T1. So with T = T1 + iT2 we get

TT* = (T1 + iT2)(T1 - iT2) = T1² + T2² - iT1T2 + iT2T1,
T*T = (T1 - iT2)(T1 + iT2) = T1² + T2² + iT1T2 - iT2T1.

Since T1T2 = T2T1, TT* = T*T, so T is normal.

On the other hand, if T is normal, let T1 = (T + T*)/2 and T2 = (T - T*)/2i (see the textbook). We can easily check that T = T1 + iT2 and that Ti* = Ti, i = 1, 2, i.e., T1 and T2 are self-adjoint. T1 and T2 commute since TT* = T*T:

T1T2 = (T + T*)(T - T*)/4i = (T² - TT* + T*T - (T*)²)/4i = (T² - T*T + TT* - (T*)²)/4i = (T - T*)(T + T*)/4i = T2T1.

So T is normal if and only if T1 and T2 commute.

8.5.(?). Suppose the operator T is normal and nilpotent. By the Spectral Theorem, T is unitarily equivalent to the diagonal matrix diag(λ1, …, λn), where the λi are the eigenvalues of T. On the other hand, T is nilpotent, so all eigenvalues λi = 0. So T is unitarily equivalent to the zero matrix. Hence T is the zero operator.

9.(?).(?). It is easy to calculate that the matrix A of Tθ is

[[cos θ, -sin θ], [sin θ, cos θ]],

so the eigenvalues of A are e^{±iθ} (we have seen this in exercise 8.4.8). By the proof of exercise 8.5.7, if Tθ is positive, then the eigenvalues e^{±iθ} of Tθ are positive real numbers. Therefore e^{±iθ} = 1, i.e., θ = 2kπ, k = 0, ±1, ±2, …

9.(?).5. (a) It is easy to verify that the principal minors of A are positive. So by the theorem on positive matrices in the textbook, A is positive.
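The decomposition T = T1 + iT2 in 8.5.8 and the equivalence "T normal ⟺ T1T2 = T2T1" can be illustrated concretely. A small sketch (the 2×2 matrix T below is an arbitrary non-normal example, not taken from the text):

```python
def adj(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(a, X, b, Y):
    # entrywise a*X + b*Y
    return [[a * X[i][j] + b * Y[i][j] for j in range(2)] for i in range(2)]

def close(X, Y, eps=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < eps for i in range(2) for j in range(2))

T = [[1 + 0j, 2 + 1j], [0j, 3 + 0j]]      # arbitrary example (not normal)
Ts = adj(T)
T1 = lin(0.5, T, 0.5, Ts)                 # T1 = (T + T*)/2, self-adjoint
T2 = lin(1 / 2j, T, -1 / 2j, Ts)          # T2 = (T - T*)/(2i), self-adjoint
```

Since this particular T is not normal, T1 and T2 fail to commute, exactly as the solution predicts.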
(b) Applying the Gram-Schmidt process to the basis {X1, X2}, with respect to the inner product defined by A, we get

α1 = X1,
α2 = X2 - ((X2|X1)/(X1|X1)) X1,

and then normalize: αk′ = αk/‖αk‖.

(c) Let Q = (α1′, α2′), the matrix with columns α1′ and α2′. Then Q*AQ = I, i.e., A = (Q*)⁻¹Q⁻¹ = P*P with P = Q⁻¹. If you wonder why P should be taken so, please read the proof of the corresponding theorem in the textbook.

9.5.(?). (a) Since A*A = AA*, i.e., A is normal, there exists a unitary matrix U such that

U*AU = diag(λ1, …, λn),

where λ1, …, λn are the eigenvalues of A. Then

U*BU = Σ_k (U*A^kU)/k! = Σ_k (U*AU)^k/k! = diag(Σ_k λ1^k/k!, …, Σ_k λn^k/k!) = diag(e^{λ1}, …, e^{λn}).

So det B = det(U*BU) = e^{λ1 + λ2 + ⋯ + λn} = e^{tr A}.

(b) B* = (Σ_k A^k/k!)* = Σ_k (A*)^k/k! = e^{A*}.

(c) If you know the properties of the exponential of matrices, then BB* = e^A e^{A*} = e^A e^{-A} = I. In detail: by the proof of (a), we have A = UΛU* with Λ = diag(λ1, …, λn). From A* = -A we get UΛ*U* = -UΛU*, so Λ* = -Λ, i.e., λ̄i = -λi; each λi = iθi is purely imaginary. So

BB* = U e^Λ U* U e^{Λ*} U* = U e^Λ e^{Λ*} U* = U diag(e^{iθ1}e^{-iθ1}, …, e^{iθn}e^{-iθn}) U* = UIU* = I.

So B is unitary.
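The series B = Σ A^k/k! and the two claims det B = e^{tr A} and "A* = -A implies B unitary" can be checked numerically. A sketch with an arbitrary skew-Hermitian 2×2 matrix (the entries of A are an assumption of this sketch; the series is truncated, which is harmless here since it converges rapidly):

```python
import cmath

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=40):
    # truncated exponential series B = sum_{k<terms} A^k / k!
    B = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]  # running sum, starts at I
    P = [row[:] for row in B]             # running power A^k
    fact = 1.0
    for k in range(1, terms):
        P = mul(P, A)
        fact *= k
        B = [[B[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return B

A = [[1j, 2 + 0j], [-2 + 0j, -3j]]  # skew-Hermitian: A* = -A (assumed example)
B = expm(A)
```

Here tr A = -2i, so det B should equal e^{-2i}, and B should satisfy BB* = I.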
9.5.6. (a) Since U is unitary, U is normal. By the Spectral Theorem, there exists a unitary matrix P such that P*UP = Λ, where Λ is a diagonal matrix whose main diagonal consists of the eigenvalues of U. From the condition that Uα = α implies α = 0, we get that no eigenvalue of U equals 1. Therefore I - Λ is invertible, and so is I - U = P(I - Λ)P*. Hence we can plug U into the formula for f(z) and get

f(U) = i(I + U)(I - U)⁻¹.

(b) We want to show f(U)* = f(U). From (a), we have f(U)(I - U) = i(I + U). So

[f(U)(I - U)]* = [i(I + U)]*, i.e., (I - U*)f(U)* = -i(I + U*), i.e., f(U)* = -i(I - U*)⁻¹(I + U*).

(We can use the same argument as in (a) to prove that I - U* is invertible; I omit the proof here.) To show f(U)* = f(U), it is equivalent to show

i(I + U)(I - U)⁻¹ = -i(I - U*)⁻¹(I + U*), i.e., i(I - U*)(I + U) = -i(I + U*)(I - U).

Expanding both sides and using U*U = I: the left-hand side is i(I + U - U* - U*U) = i(U - U*), and the right-hand side is -i(I - U + U* - U*U) = i(U - U*). They agree, so f(U) is self-adjoint.

(c) If T is self-adjoint, then T is normal, and all eigenvalues of T are real, so ±i are not eigenvalues of T (same method as in the proof of (a)). Therefore T ± iI are invertible, and we can define

U = (T - iI)(T + iI)⁻¹.

Then U is unitary:

U*U = [(T + iI)⁻¹]*(T - iI)*(T - iI)(T + iI)⁻¹ = (T - iI)⁻¹(T + iI)(T - iI)(T + iI)⁻¹ = (T - iI)⁻¹(T - iI)(T + iI)(T + iI)⁻¹ = I,

where we used (T + iI)(T - iI) = T² + I = (T - iI)(T + iI). This is why we need T ± iI to be invertible.

9.5.7. See the definition of a self-adjoint algebra in the textbook. Here V is the space of n × n matrices with the inner product (A|C) = tr(AC*), and L_B denotes left multiplication by B. Let F be the family of operators obtained by letting B vary over all diagonal matrices, i.e., F = {L_B : B is diagonal}. Since L_B L_{B′} = L_{BB′} = L_{B′B} = L_{B′} L_B (diagonal matrices commute), F is a commutative subalgebra of L(V, V). On the other hand, it is not difficult to verify that (L_B)* = L_{B*} (hint: verify that (L_B A|C) = (A|L_{B*} C), i.e., tr(BAC*) = tr(A(B*C)*) = tr(AC*B); the last equality comes from the cyclic property of the trace). Since B is diagonal, B* is diagonal as well; in other words, (L_B)* is in F. By the definition of a self-adjoint algebra, F is self-adjoint.

To find the spectral resolution of F, we need to thoroughly understand the proof of the corresponding theorem in the textbook. Let {e_ij} be the standard basis of V; then {T_i = L_{e_ii}}, i = 1, …, n, is a basis of F (verify it).
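The formula f(U) = i(I + U)(I - U)⁻¹ of 9.5.6 can be tested on a concrete unitary with no eigenvalue 1. A sketch (the particular U below is an arbitrary 2×2 unitary chosen for illustration; its eigenvalues are e^{±iπ/3} ≠ 1):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate formula
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

s = 1 / 2 ** 0.5
a = (1 + 1j) / 2
U = [[a, s + 0j], [-s + 0j, a.conjugate()]]  # assumed example unitary

I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
IpU = [[I2[i][j] + U[i][j] for j in range(2)] for i in range(2)]
ImU = [[I2[i][j] - U[i][j] for j in range(2)] for i in range(2)]
fU = [[1j * z for z in row] for row in mul(IpU, inv2(ImU))]
```

As part (b) predicts, fU comes out self-adjoint: real diagonal, conjugate-symmetric off-diagonal.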
Let E_i : V → W_i be the orthogonal projection of V on W_i, where W_i = Span{e_ij : j = 1, …, n}, and let E_i′ : V → W_i⊥ be the orthogonal projection of V on W_i⊥, where W_i⊥ = Span{e_kj : k ≠ i, j = 1, …, n}. Then we have

I = E_i + E_i′ for each i (a resolution of the identity), and L_{e_ii} = E_i (the resolution of L_{e_ii}; notice that the eigenvalues of L_{e_ii} are 0 and 1).

So

I = (E_1 + E_1′)(E_2 + E_2′) ⋯ (E_n + E_n′),

i.e., α = Σ_{(j1, …, jn)} E_1^{j1} E_2^{j2} ⋯ E_n^{jn} α for all α ∈ V, where each j_k ∈ {0, 1}, with E_i^1 = E_i and E_i^0 = E_i′. If E_1^{j1} ⋯ E_n^{jn} ≠ 0 for some sequence (j1, …, jn), then j_k = 1 for exactly one k and j_l = 0 for l ≠ k (verify it when n = 2).

Let α = e_11 and β = E_1 E_2′ ⋯ E_n′ α ≠ 0. Then

T_i β = L_{e_ii} β = c_i β, where c_i = 1 if i = 1 and c_i = 0 if i ≠ 1.

For any T = L_B ∈ F, we have L_B = Σ_{i=1}^n b_i L_{e_ii} = Σ_{i=1}^n b_i T_i, where b_i is the i-th entry on the main diagonal of B. So Tβ = Σ_i b_i T_i β = b_1 β.
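The key identity behind (L_B)* = L_{B*} above, namely tr(BAC*) = tr(AC*B), is easy to spot-check numerically (a sketch; the 2×2 matrices below are arbitrary, with B diagonal as in the family F):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adj(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

def ip(A, C):
    # inner product on matrix space: (A|C) = tr(A C*)
    return tr(mul(A, adj(C)))

B = [[2 + 1j, 0j], [0j, -1 + 3j]]   # diagonal, a member of the family F
A = [[1 + 0j, 2j], [3 + 0j, 4 - 1j]]
C = [[0j, 1 + 1j], [2 - 2j, 5 + 0j]]

lhs = ip(mul(B, A), C)              # (L_B A | C)
rhs = ip(A, mul(adj(B), C))         # (A | L_{B*} C)
```

Agreement of lhs and rhs is exactly the cyclic-trace argument in the solution, instantiated on one example.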
Let r_1 : F → C be the function defined by r_1(T) = b_1, where T = b_1T_1 + ⋯ + b_nT_n (i.e., take the first component), with corresponding subspace V_{r_1}. Similarly, we get r_i : F → C defined by r_i(T) = b_i, where T = b_1T_1 + ⋯ + b_nT_n, with corresponding subspace V_{r_i}. Let P_j be the orthogonal projection of V on V_{r_j}, 1 ≤ j ≤ n. Then

T = Σ_{j=1}^n r_j(T) P_j

is the spectral resolution of T in terms of F. The computation for the other two families is similar (work it out yourself).

(?). See the corresponding theorem and corollary in Functional Analysis, Kosaku Yosida. One correction of the problem: in the textbook (actually in most textbooks I have seen, including the reference by Yosida), the polarization identity is

4(x|y) = ‖x + y‖² - ‖x - y‖² + i‖x + iy‖² - i‖x - iy‖².

But the polarization identity given by Prof. Ha is supposed to be

4(x|y) = ‖x + y‖² - ‖x - y‖² - i‖x + iy‖² + i‖x - iy‖².

They are both correct! The difference comes from the fact that Prof. Ha and some textbooks use different definitions of the inner product:

(cx|y) = c(x|y) in the textbook, but (cx|y) = c̄(x|y) by Prof. Ha.

This is just a different convention. You will see this in problem 6(b) as well.

(?). If T is an operator on a complex finite-dimensional inner product space V, we want to show the following statements are equivalent:

(i) T = 0;
(ii) (Tx|y) = 0 for all x, y ∈ V;
(iii) (Tx|x) = 0 for all x ∈ V.

Proof. (i) ⇒ (ii): Since T = 0, we have (Tx|y) = (0|y) = 0 for all x, y ∈ V. (ii) ⇒ (iii): Trivial! (Let y = x.) (iii) ⇒ (i): By the properties of the inner product, (x|Tx) is the conjugate of (Tx|x); by the condition, (Tx|x) = 0 is real for every x, and on a complex inner product space an operator with (Tx|x) real for all x is self-adjoint. By the Spectral Theorem, T is unitarily equivalent to a diagonal matrix diag(λ1, …, λn). If we can show all eigenvalues of T are 0, then T is unitarily equivalent to the zero matrix, hence T = 0. Let λ be an eigenvalue of T and ξ an eigenvector for λ. Then

0 = (Tξ|ξ) = (λξ|ξ) = λ(ξ|ξ).

Since ξ ≠ 0, we have (ξ|ξ) > 0. Therefore λ = 0.

For a real vector space, (i) ⇒ (ii) ⇒ (iii) is still true, but (iii) ⇒ (i) is not! For instance, let T be the rotation by π/2; then (Tx|x) = 0 for all x ∈ V, but T ≠ 0.

4. Notice: the definition of a positive operator given by Prof. Ha is the same as the definition of a non-negative operator.
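Both polarization conventions above can be verified numerically. A sketch (the complex vectors x, y are fixed arbitrary examples; ip_first_linear is linear in the first slot, matching the textbook convention):

```python
def ip_first_linear(x, y):
    # (x|y) = sum_j x_j * conj(y_j): linear in the FIRST argument
    return sum(a * b.conjugate() for a, b in zip(x, y))

def n2(v):
    # squared norm ||v||^2
    return sum(abs(z) ** 2 for z in v)

def add(x, y, c=1):
    # componentwise x + c*y
    return [a + c * b for a, b in zip(x, y)]

x = [1 + 2j, -1j, 0.5 + 0j]
y = [2 - 1j, 3 + 0j, -1 + 1j]

textbook = (n2(add(x, y)) - n2(add(x, y, -1))
            + 1j * n2(add(x, y, 1j)) - 1j * n2(add(x, y, -1j))) / 4
ha = (n2(add(x, y)) - n2(add(x, y, -1))
      - 1j * n2(add(x, y, 1j)) + 1j * n2(add(x, y, -1j))) / 4
```

The textbook identity recovers the first-linear inner product, while Prof. Ha's version recovers its conjugate, i.e. the inner product that is conjugate-linear in the first slot, confirming that the two formulas match the two conventions.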
The definition of a strictly positive operator given by Prof. Ha is the same as the definition of a positive operator. See the definitions in the textbook and in hw 8.

(a) (T*T)* = T*(T*)* = T*T, so T*T is self-adjoint. Moreover

(T*Tx|x) = (Tx|Tx) = ‖Tx‖² ≥ 0.

The first equality comes from the definition of the adjoint operator and the last inequality comes from the properties of the inner product.

(b) If T is self-adjoint, then all eigenvalues of T are real. Indeed, suppose λ is an eigenvalue of T and ξ is an eigenvector for λ. Then

λ(ξ|ξ) = (λξ|ξ) = (Tξ|ξ) = (ξ|Tξ) = (ξ|λξ) = λ̄(ξ|ξ).

Since ξ ≠ 0, we have (ξ|ξ) > 0. Hence λ = λ̄, i.e., λ is real. If moreover (Tx|x) ≥ 0 for all x ∈ V, then all eigenvalues of T are non-negative. Indeed,

0 ≤ (Tξ|ξ) = λ(ξ|ξ), so λ ≥ 0.
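Parts (a) and (b) together say that T*T is self-adjoint with real, non-negative eigenvalues. A small sketch checking this on an arbitrary 2×2 complex matrix T (the entries are an assumption of this sketch):

```python
import cmath

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adj(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

T = [[1 + 2j, 0.5j], [3 + 0j, -1 + 1j]]  # arbitrary example
H = mul(adj(T), T)                        # H = T*T

# eigenvalues of the 2x2 matrix H from x^2 - (tr H) x + det H = 0
trH = H[0][0] + H[1][1]
detH = H[0][0] * H[1][1] - H[0][1] * H[1][0]
disc = cmath.sqrt(trH * trH - 4 * detH)
eigs = [(trH + disc) / 2, (trH - disc) / 2]
```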
(c) If (Tx|x) ≥ 0 for all x, then (Tx|x) ∈ R (since there is no order on the complex numbers, we couldn't say a non-real complex number is greater than or equal to 0). Therefore

(Tx|x) = \overline{(Tx|x)} = (x|Tx) for all x ∈ V.

The second equality comes from the properties of the inner product. On a complex inner product space, this equality for all x means T is self-adjoint.

(d) We only need to prove that T is positive if and only if T = R² for some self-adjoint operator R; the proof of this implies the middle statement, that T = S*S for some operator S.

In part (a), we proved that if T = R² = R*R for some self-adjoint operator R, then T is positive. On the other hand, if T is positive, then T is self-adjoint, so T is normal. By the Spectral Theorem, there exists a unitary matrix U such that U*TU = Λ, where Λ = diag(λ1, …, λn). Since T is positive, by part (b), λi ≥ 0 for i = 1, …, n. Then there exist real numbers ai ≥ 0 such that ai² = λi. Let A = diag(a1, …, an) and R = UAU*; then R² = UA²U* = UΛU* = T. Since A is a real diagonal matrix, A is self-adjoint, and so is R.

If V is a real inner product space, the condition (Tx|x) ≥ 0 no longer guarantees self-adjointness. For instance, let V = R² with the standard inner product and T = Rθ : V → V, where Rθ is the rotation by an acute angle θ counter-clockwise. Then (Tx|x) = cos θ ‖x‖² > 0 for all x ≠ 0. However, T is not self-adjoint, since the matrix of T with respect to the standard basis,

[[cos θ, -sin θ], [sin θ, cos θ]],

is not self-adjoint (symmetric, in the real case).

5. Let A be the given matrix. Compute the eigenvalues of A from the characteristic polynomial. For a repeated eigenvalue, the eigenspace E_λ = ker(A - λI) is spanned by two vectors v1, v2; apply the Gram-Schmidt method to v1, v2 to get an orthonormal basis {w1, w2} of that eigenspace. Then normalize the eigenvector v3 of the remaining eigenvalue to get w3. Let Q = (w1, w2, w3); then Q*AQ is diagonal. I omit the concrete computation here. (Actually I have seen this problem in hw 6; the same method can be applied to the other two matrices.)

6. (a) See the textbook for the definition of the orthogonal projection and for the formula giving the orthogonal projection of a vector β ∈ V.
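Part (d)'s construction R = U √Λ U* can be carried out in a concrete case. A sketch (T = [[2, 1], [1, 2]] is a hand-picked positive matrix with eigenvalues 3 and 1 and known eigenvectors; it is not from the problem set):

```python
import math

T = [[2.0, 1.0], [1.0, 2.0]]            # assumed example: positive, eigenvalues 3 and 1
v1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]    # unit eigenvector for eigenvalue 3
v2 = [1 / math.sqrt(2), -1 / math.sqrt(2)]   # unit eigenvector for eigenvalue 1

def proj_matrix(v):
    # rank-one orthogonal projection v v^T onto Span{v}
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

P1, P2 = proj_matrix(v1), proj_matrix(v2)
# R = sqrt(3) P1 + sqrt(1) P2, i.e. U diag(sqrt(lambda_i)) U*
R = [[math.sqrt(3) * P1[i][j] + math.sqrt(1) * P2[i][j] for j in range(2)]
     for i in range(2)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

By construction R is symmetric (self-adjoint) and R² reproduces T.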
First, apply the Gram-Schmidt method to the given vectors v1, v2, v3 ∈ R⁴ to get an orthonormal basis {α1, α2, α3} of Span{v1, v2, v3}. Let α be the orthogonal projection of the given vector β; then, by the formula in the textbook,

α = Σ_k ((β|αk)/(αk|αk)) αk = Σ_k (β|αk) αk,

since the αk are orthonormal. (You'd better practice this computation.)

(b) To verify that

(f|g) = (1/2π) ∫₀^{2π} f(θ) \overline{g(θ)} dθ

actually defines an inner product on V, we need to check that (f|g) satisfies the definition of an inner product in the textbook.

(i) (f + g|h) = (1/2π) ∫₀^{2π} (f + g)(θ) \overline{h(θ)} dθ = (1/2π) ∫₀^{2π} f(θ)\overline{h(θ)} dθ + (1/2π) ∫₀^{2π} g(θ)\overline{h(θ)} dθ = (f|h) + (g|h).
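The projection formula α = Σ_k (β|αk) αk for an orthonormal family leaves the residual β - α orthogonal to each αk; that is what makes α the orthogonal projection. A quick sketch in R⁴ with a hand-picked orthonormal pair (not the exercise's vectors):

```python
def dot(x, y):
    # standard real inner product on R^4
    return sum(a * b for a, b in zip(x, y))

a1 = [0.5, 0.5, 0.5, 0.5]      # assumed orthonormal pair
a2 = [0.5, -0.5, 0.5, -0.5]
beta = [1.0, 2.0, 3.0, 4.0]    # arbitrary vector to project

coeffs = [dot(beta, a) for a in (a1, a2)]
proj = [coeffs[0] * a1[i] + coeffs[1] * a2[i] for i in range(4)]
resid = [beta[i] - proj[i] for i in range(4)]
```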
(ii) (cf|g) = (1/2π) ∫₀^{2π} c f(θ) \overline{g(θ)} dθ = c (f|g). (I mentioned the difference in conventions in the polarization-identity problem above; with Prof. Ha's convention the conjugate falls on c instead.)

(iii) (f|g) = (1/2π) ∫₀^{2π} f(θ) \overline{g(θ)} dθ = \overline{(1/2π) ∫₀^{2π} g(θ) \overline{f(θ)} dθ} = \overline{(g|f)}.

(iv) If f ≠ 0, then f(x0) ≠ 0 for some x0 ∈ [0, 2π]. Since f is continuous, there exists an open neighborhood U ⊂ [0, 2π] of x0 on which |f| > 0 is bounded below. So

(f|f) = (1/2π) ∫₀^{2π} |f(θ)|² dθ ≥ (1/2π) ∫_U |f(θ)|² dθ > 0.

(Notice that we need the continuity of f here. Otherwise, let f(0) = 1 and f(x) = 0 for x ∈ (0, 2π]; then f ≠ 0 but (f|f) = (1/2π) ∫₀^{2π} |f|² dθ = 0.)

(c) Let f_k = sin kx and g_k = cos kx. Then, for k ± l ≠ 0,

(f_k|g_l) = (1/2π) ∫₀^{2π} sin kx cos lx dx = (1/4π) ∫₀^{2π} [sin(k + l)x + sin(k - l)x] dx = -(1/4π) [cos(k + l)x/(k + l) + cos(k - l)x/(k - l)]₀^{2π} = 0

(the degenerate cases k = ±l are handled similarly). In the same way, (f_k|f_l) = 0 and (g_k|g_l) = 0 for k ≠ l, by the formulas

sin kx sin lx = (1/2)[cos(k - l)x - cos(k + l)x],
cos kx cos lx = (1/2)[cos(k + l)x + cos(k - l)x].

Since the f_k and g_l are mutually orthogonal, we can normalize them to get an orthonormal basis of W. By the formula of part (a), we then get the projection g, and from it f - g and ‖f - g‖. The explicit computation is cumbersome since an orthonormal basis of W consists of 9 vectors; you can compute it yourself.

(d), (e) Let {1, t, t², t³, t⁴} be a basis of the polynomial space P. Use the Gram-Schmidt method to get an orthonormal basis B′ of P. Then by the formula in part (a), we can find the orthogonal projection of f(θ) = sin θ. To find the adjoint of T = d/dθ, find the matrix A of T with respect to the orthonormal basis B′ and take the conjugate transpose of A; that is the matrix of the adjoint of T. The method is routine, but I didn't carry out the calculation.

7, 8. I am not clear on the definition of A log A; please ask Prof. Ha about problems 7 and 8.

9. (a) Let ξ = (x1, …, xn), A = (a_ij), and λ = max_{i,j} |a_ij|. Then

‖Aξ‖² = Σ_i |Σ_j a_ij x_j|² ≤ Σ_i (Σ_j |a_ij| |x_j|)² ≤ Σ_i (λ Σ_j |x_j|)² = nλ² (Σ_j |x_j|)² ≤ nλ² · n Σ_j |x_j|² = n²λ² ‖ξ‖²,

where the last inequality is Cauchy-Schwarz. Let C = nλ. Then ‖Aξ‖ ≤ C‖ξ‖, so ‖A‖ ≤ C is finite.
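The orthogonality relations in part (c) can be confirmed by simple numerical integration. A sketch using the midpoint rule (k = 2 and l = 3 are arbitrary choices for this check):

```python
import math

def integrate(f, a, b, steps=100000):
    # midpoint-rule Riemann sum; very accurate for smooth periodic integrands
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

two_pi = 2 * math.pi
sc = integrate(lambda x: math.sin(2 * x) * math.cos(3 * x), 0, two_pi)
ss = integrate(lambda x: math.sin(2 * x) * math.sin(3 * x), 0, two_pi)
s2 = integrate(lambda x: math.sin(2 * x) ** 2, 0, two_pi)
```

The cross terms integrate to 0, while the square term integrates to π, which is why the normalization constant matters when building the orthonormal basis of W.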
(b) Following the hints, let

a = sup{|(Aξ|η)| : ‖ξ‖ = ‖η‖ = 1},
b = sup{‖Aξ‖ : ‖ξ‖ = 1},
c = sup{‖Aξ‖/‖ξ‖ : ξ ≠ 0},
d = sup{|(Aξ|η)|/(‖ξ‖ ‖η‖) : ξ, η ≠ 0}.

By the Cauchy-Schwarz inequality, |(Aξ|η)| ≤ ‖Aξ‖ ‖η‖ = ‖Aξ‖ for ‖ξ‖ = ‖η‖ = 1. So a ≤ b. Since

{‖Aξ‖ : ‖ξ‖ = 1} = {‖Aξ‖/‖ξ‖ : ‖ξ‖ = 1} ⊂ {‖Aξ‖/‖ξ‖ : ξ ≠ 0},

we get b ≤ c. For ξ, η ≠ 0,

|(Aξ|η)|/(‖ξ‖ ‖η‖) = |(A(ξ/‖ξ‖) | η/‖η‖)|,

and ξ/‖ξ‖, η/‖η‖ are unit vectors, so d ≤ a. Therefore d ≤ a ≤ b ≤ c. We still need to show that c ≤ d; to be continued…

The results of this problem can be found in standard functional analysis textbooks. I am exhausted, let me have a rest!
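For a concrete matrix, all four suprema above equal the operator norm, i.e. the largest singular value. A sketch for a fixed 2×2 real A (arbitrary example entries): we compare b = sup over the unit circle of ‖Aξ‖, approximated by dense sampling, against the largest singular value computed from the eigenvalues of AᵀA.

```python
import math

A = [[1.0, 2.0], [0.0, 3.0]]  # arbitrary example matrix

def norm(v):
    return math.hypot(v[0], v[1])

def apply(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# b = sup_{|xi| = 1} |A xi|, approximated on a fine grid of the unit circle
b = max(norm(apply(A, [math.cos(t), math.sin(t)]))
        for t in (2 * math.pi * k / 20000 for k in range(20000)))

# largest singular value: sqrt of the largest eigenvalue of M = A^T A
M = [[A[0][0] ** 2 + A[1][0] ** 2, A[0][0] * A[0][1] + A[1][0] * A[1][1]],
     [A[0][0] * A[0][1] + A[1][0] * A[1][1], A[0][1] ** 2 + A[1][1] ** 2]]
tr, det = M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]
sigma_max = math.sqrt((tr + math.sqrt(tr * tr - 4 * det)) / 2)
```

The sampled supremum never exceeds sigma_max and approaches it as the grid is refined, consistent with b = c = ‖A‖.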