Math 593: Problem Set 10


Feng Zhu, edited by Prof. Smith

(Collaborated, directly or indirectly, with Umang Varma, Daniel Irvine, Joe Kraisler, Alex Vargo, Punya Satpathy. Was also helped by discussions with Sachi Hashimoto.)

1 Hermitian inner-product spaces

(a) By conjugate-symmetry and linearity in the first argument,
$$f(v, \lambda w) = \overline{f(\lambda w, v)} = \overline{\lambda f(w, v)} = \bar\lambda\, \overline{f(w, v)} = \bar\lambda f(v, w).$$

(b) We may verify that, for any $x, y \in \mathbb{C}^n$ and $\lambda \in \mathbb{C}$,
$$\langle \lambda x, y \rangle = \sum_i \lambda x_i \bar y_i = \lambda \sum_i x_i \bar y_i = \lambda \langle x, y \rangle,$$
$$\langle x, y \rangle = \sum_i x_i \bar y_i = \overline{\sum_i y_i \bar x_i} = \overline{\langle y, x \rangle},$$
and
$$\langle x, x \rangle = \sum_i x_i \bar x_i = \sum_i |x_i|^2 \ge 0,$$
with equality iff $x = 0$, so that $\langle \cdot, \cdot \rangle$ as defined is indeed a Hermitian inner product on $\mathbb{C}^n$.

(c) Write $A := [f]_{\mathcal B}$. Then $A_{ii} = f(v_i, v_i) = \overline{f(v_i, v_i)}$ by conjugate-symmetry, so that $A_{ii} = f(v_i, v_i) \in \mathbb{R}$. If $i \ne j$, then $A_{ji} = f(v_j, v_i) = \overline{f(v_i, v_j)} = \bar A_{ij}$. Hence $A = A^*$, i.e. $A = [f]_{\mathcal B}$ is Hermitian.

(d) Write $v = \sum_{i=1}^n a_i v_i$ and $w = \sum_{j=1}^n b_j v_j$ (so that $[v]_{\mathcal B} = (a_1, \dots, a_n)$ and $[w]_{\mathcal B} = (b_1, \dots, b_n)$). Then by sesquilinearity,
$$f(v, w) = \sum_{1 \le i, j \le n} a_i f(v_i, v_j) \bar b_j = [v]_{\mathcal B}^T [f]_{\mathcal B} \overline{[w]_{\mathcal B}},$$
and we may re-arrange this as
$$f(v, w) = \sum_{i=1}^n a_i \left( \sum_{j=1}^n f(v_i, v_j) \bar b_j \right) = \langle [v]_{\mathcal B}, \bar A [w]_{\mathcal B} \rangle$$
or as
$$f(v, w) = \sum_{j=1}^n \left( \sum_{i=1}^n a_i f(v_i, v_j) \right) \bar b_j = \langle A^T [v]_{\mathcal B}, [w]_{\mathcal B} \rangle.$$
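As a quick sanity check of parts (c) and (d), the form $f(x, y) = \sum_{i,j} x_i A_{ij} \bar y_j$ attached to a Hermitian matrix $A$ is conjugate-symmetric and real on the diagonal. The following is a minimal plain-Python sketch; the matrix and vectors are arbitrary examples of ours, not from the problem set.

```python
# With A Hermitian, f(x, y) = sum_{i,j} x_i A_ij conj(y_j) satisfies
# f(y, x) = conj(f(x, y)) and f(x, x) in R, as proved in 1(c)-(d).

def f(x, y, A):
    """Sesquilinear form [x]^T A conj([y]) from part (d)."""
    n = len(x)
    return sum(x[i] * A[i][j] * y[j].conjugate()
               for i in range(n) for j in range(n))

# An example 2x2 Hermitian matrix: real diagonal, A[j][i] == conj(A[i][j]).
A = [[2.0 + 0j, 1.0 - 3j],
     [1.0 + 3j, 5.0 + 0j]]

x = [1 + 2j, -1j]
y = [3 - 1j, 2 + 2j]

# Conjugate-symmetry: f(y, x) = conj(f(x, y)).
assert abs(f(y, x, A) - f(x, y, A).conjugate()) < 1e-12
# f(x, x) is real.
assert abs(f(x, x, A).imag) < 1e-12
```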

(e) Note we have $[v]_{\mathcal B} = P [v]_{\mathcal B'}$ and $[w]_{\mathcal B} = P [w]_{\mathcal B'}$. Then
$$f(v, w) = [v]_{\mathcal B}^T [f]_{\mathcal B} \overline{[w]_{\mathcal B}} = [v]_{\mathcal B'}^T P^T [f]_{\mathcal B} \bar P\, \overline{[w]_{\mathcal B'}},$$
and so $[f]_{\mathcal B'} = P^T [f]_{\mathcal B} \bar P$.

(f) If $f(v, v) = 0$ for all $v$, then $f$ is the zero form; then the matrix of $f$ w.r.t. any basis $\mathcal B$ is diagonal (it is zero). So now suppose we can find $v$ s.t. $f(v, v) \ne 0$. We now induct on the dimension $n$ of our vector space $V$. For $n = 1$ the statement is clear. More generally, define the linear map $L_v : V \to \mathbb{C}$ by $L_v(w) = f(v, w)$. Note $L_v(v) \ne 0 \Rightarrow \operatorname{im} L_v = \mathbb{C}$, and so $W := \ker L_v$ has ($\mathbb{C}$-)dimension $n - 1$. Now $f|_{W \times W}$ is a Hermitian form on $W$, so inductively there is some basis $\mathcal B'$ for $W$ s.t. $f|_{W \times W}$ has a diagonal matrix w.r.t. $\mathcal B'$; then if we take $\mathcal B = \{v\} \cup \mathcal B'$, then $[f]_{\mathcal B}$ is a diagonal matrix.

If $f$ is positive-definite, then all of the entries $f(v_i, v_i)$ along the diagonal of $[f]_{\mathcal B}$ are positive, and so, if we normalize the basis $\mathcal B$ by $\mathcal B'' := \left( \frac{v_1}{\sqrt{f(v_1, v_1)}}, \dots, \frac{v_n}{\sqrt{f(v_n, v_n)}} \right)$, then $[f]_{\mathcal B''}$ is the $n \times n$ identity matrix.

(g) If $A$ is an $n \times n$ Hermitian matrix, it has $n$ eigenvalues counted with multiplicity (since its characteristic polynomial splits completely over $\mathbb{C}$); in particular, it has an eigenvalue $\lambda$ with associated eigenvector $v \ne 0$. Define the Hermitian form $f : \mathbb{C}^n \times \mathbb{C}^n \to \mathbb{C}$ by $f(x, y) = \langle x, \bar A y \rangle$. Since $f(v, v) = \langle v, \bar A v \rangle = \langle v, \lambda v \rangle = \bar\lambda \langle v, v \rangle \in \mathbb{R}$ and $0 \ne \langle v, v \rangle \in \mathbb{R}$, we have $\lambda \in \mathbb{R}$. Hence we may conclude that $A$ has a real eigenvalue.

Now let $v_1 = v$, the eigenvector above, and let $W = (\mathbb{C} v_1)^\perp$, where the $\perp$ indicates the vectors in $\mathbb{C}^n$ orthogonal to $v_1$ under the standard Hermitian inner product on $\mathbb{C}^n$. For any $w \in W$,
$$\langle Aw, v_1 \rangle = \langle w, A^* v_1 \rangle = \langle w, A v_1 \rangle = \bar\lambda \langle w, v_1 \rangle = 0,$$
and so $W$ is $A$-invariant. Hence we may induct on the dimension $n$ (the base case is supplied by $n = 1$, for which the desired result is trivially true).

2 Inner-product spaces and Gram-Schmidt

(a) Fix $v, w \in V$, and consider
$$f(t) = \langle v + tw, v + tw \rangle = \langle v, v \rangle + (\langle v, w \rangle + \langle w, v \rangle) t + \langle w, w \rangle t^2 = \langle v, v \rangle + 2\,\mathrm{Re}(\langle v, w \rangle)\, t + \langle w, w \rangle t^2$$
(here assuming $t \in \mathbb{R}$).
Note that all of these coefficients are real, and that $\langle v + tw, v + tw \rangle \ge 0$ with equality iff $v = -tw$, by the properties of the inner product. Considering the discriminant of this quadratic in $t$, we obtain
$$4 (\mathrm{Re}\langle v, w \rangle)^2 - 4 \langle v, v \rangle \langle w, w \rangle \le 0, \quad \text{i.e.} \quad (\mathrm{Re}\langle v, w \rangle)^2 \le \|v\|^2 \|w\|^2;$$
taking square roots, we obtain $|\mathrm{Re}\langle v, w \rangle| \le \|v\| \|w\|$, which suffices to show the Cauchy-Schwarz inequality in a real inner-product space.

In a Hermitian inner-product space, let $w' = e^{i\theta} w$, where $\theta \in (-\pi, \pi]$ is chosen s.t. $\langle v, w' \rangle \in \mathbb{R}$ (specifically, if $\langle v, w \rangle = \lambda$, we take $e^{i\theta} = \lambda / |\lambda|$). Then we have, from sesquilinearity and from the above,
$$|\langle v, w \rangle| = |\langle v, w' \rangle| \le \|v\| \|w'\| = \|v\| \|w\|.$$
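The inequality just proved can be checked numerically for the standard Hermitian inner product on $\mathbb{C}^n$. A minimal sketch using only the standard library, with randomly chosen example vectors (ours, not from the problem set):

```python
# Numerical check of Cauchy-Schwarz |<v, w>| <= ||v|| ||w|| for the standard
# Hermitian inner product <v, w> = sum_i v_i conj(w_i) on C^n, as in 2(a).
import math
import random

def inner(v, w):
    """Standard Hermitian inner product on C^n."""
    return sum(a * b.conjugate() for a, b in zip(v, w))

def norm(v):
    return math.sqrt(inner(v, v).real)

random.seed(0)
for _ in range(100):
    v = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    w = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    assert abs(inner(v, w)) <= norm(v) * norm(w) + 1e-12
```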

(b) Noting that $\|v\|^2 = \langle v, v \rangle$, we have
$$\|v + w\|^2 = \langle v + w, v + w \rangle = \langle v, v \rangle + \langle w, w \rangle + \langle v, w \rangle + \langle w, v \rangle \quad \text{by bilinearity / sesquilinearity}$$
$$= \langle v, v \rangle + \langle w, w \rangle + 2\,\mathrm{Re}(\langle v, w \rangle) \le \langle v, v \rangle + \langle w, w \rangle + 2 |\langle v, w \rangle| \le \|v\|^2 + \|w\|^2 + 2 \|v\| \|w\| = (\|v\| + \|w\|)^2$$
from Cauchy-Schwarz, and, taking square roots on both sides, we obtain the triangle inequality.

(c) $V$ is a 3-dimensional $\mathbb{R}$-vector subspace of $\mathbb{R}^4$; the dot product induces an inner product on $\mathbb{R}^4$ (i.e. a positive-definite symmetric bilinear form $\mathbb{R}^4 \times \mathbb{R}^4 \to \mathbb{R}$), and this descends to a positive-definite symmetric bilinear form $V \times V \to \mathbb{R}$, i.e. an inner product on the vector subspace $V \subset \mathbb{R}^4$. Let $e_1, e_2, e_3, e_4$ denote the unit vectors in the positive $x, y, z, w$ directions respectively. We note that $(1, 2, 3, 4)$ is a normal vector to $V$ in $\mathbb{R}^4$, so that we may take (e.g.) $\{(2, -1, 0, 0), (3, 0, -1, 0), (4, 0, 0, -1)\}$ to be a basis for $V$. Now we apply the Gram-Schmidt orthogonalisation process to obtain
$$v_1 = \frac{(2, -1, 0, 0)}{\|(2, -1, 0, 0)\|} = \frac{1}{\sqrt 5} (2, -1, 0, 0)$$
$$u_2 = (3, 0, -1, 0) - \langle v_1, (3, 0, -1, 0) \rangle v_1 = (3, 0, -1, 0) - \frac{6}{5} (2, -1, 0, 0) = (0.6, 1.2, -1, 0)$$
$$v_2 = \frac{u_2}{\|u_2\|} = \frac{1}{\sqrt{2.8}} (0.6, 1.2, -1, 0)$$
$$u_3 = (4, 0, 0, -1) - \langle v_1, (4, 0, 0, -1) \rangle v_1 - \langle v_2, (4, 0, 0, -1) \rangle v_2 = (4, 0, 0, -1) - \frac{8}{5} (2, -1, 0, 0) - \frac{2.4}{2.8} (0.6, 1.2, -1, 0)$$
$$= (0.8 - 3.6/7,\ 1.6 - 7.2/7,\ 6/7,\ -1) = \frac{1}{7} (2, 4, 6, -7)$$
$$v_3 = \frac{1}{\sqrt{105}} (2, 4, 6, -7),$$
and then $(v_1, v_2, v_3) = \left( \frac{1}{\sqrt 5}(2, -1, 0, 0),\ \frac{1}{\sqrt{2.8}}(0.6, 1.2, -1, 0),\ \frac{1}{\sqrt{105}}(2, 4, 6, -7) \right)$ is an orthonormal basis for $V$.

(d) It is clear that $V$ is an $\mathbb{R}$-vector subspace of $\mathbb{R}[x]$ (since it is closed under the vector addition and scalar multiplication on $\mathbb{R}[x]$). We may verify that
$$\langle f_1 + f_2, g \rangle = \int_0^1 (f_1 + f_2) g \, dx = \int_0^1 f_1 g + f_2 g \, dx = \int_0^1 f_1 g \, dx + \int_0^1 f_2 g \, dx = \langle f_1, g \rangle + \langle f_2, g \rangle$$
and similarly (by linearity of the integral) $\langle f, g_1 + g_2 \rangle = \langle f, g_1 \rangle + \langle f, g_2 \rangle$ and $\langle \lambda f, g \rangle = \langle f, \lambda g \rangle = \lambda \langle f, g \rangle$ for all $\lambda \in \mathbb{R}$ and $f, g, f_1, f_2, g_1, g_2 \in V$. Moreover $\langle g, f \rangle := \int_0^1 g f \, dx = \int_0^1 f g \, dx =: \langle f, g \rangle$; hence $\langle f, g \rangle$ as defined endows $V$ with an inner product structure. To find an orthonormal basis, we start with the basis $1, x, x^2$ and apply the Gram-Schmidt

procedure:
$$v_1 = 1$$
$$u_2 = x - \langle x, 1 \rangle \cdot 1 = x - \tfrac12$$
$$v_2 = \frac{u_2}{\|u_2\|} = \frac{x - \frac12}{\sqrt{\int_0^1 (x - \frac12)^2 \, dx}} = 2\sqrt 3 \left( x - \tfrac12 \right)$$
$$u_3 = x^2 - \langle x^2, 1 \rangle \cdot 1 - \langle x^2, v_2 \rangle v_2 = x^2 - \tfrac13 - \left( x - \tfrac12 \right) = x^2 - x + \tfrac16$$
$$v_3 = \frac{u_3}{\|u_3\|} = \frac{x^2 - x + \frac16}{\sqrt{\int_0^1 (x^2 - x + \frac16)^2 \, dx}} = 6\sqrt 5 \left( x^2 - x + \tfrac16 \right),$$
and so we have an orthonormal basis for $V$ given by $(v_1, v_2, v_3) = \left( 1,\ 2\sqrt 3 (x - \tfrac12),\ 6\sqrt 5 (x^2 - x + \tfrac16) \right)$.

3 Hessian

(a) We note that for a smooth function $f$ we have $\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x}$, and so the Hessian $H(a, b)$ at each $(a, b)$ is a symmetric matrix, and hence defines a symmetric bilinear form $\mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ by $(x, y) \mapsto x^T H(a, b)\, y$.

(b) The second derivative test states that if $(a, b)$ is a critical point of $f$, then $f$ has a local maximum at $(a, b)$ if the Hessian $H(a, b)$ is negative-definite, i.e. has signature $(0, 2)$; $f$ has a local minimum at $(a, b)$ if the Hessian $H(a, b)$ is positive-definite, i.e. has signature $(2, 0)$; $f$ has a saddle point at $(a, b)$ if the Hessian $H(a, b)$ is indefinite, i.e. has signature $(1, 1)$. The test is inconclusive if the Hessian $H(a, b)$ fails to have full rank.

4 Duals and double duals

(a) Suppose we have $f \in M^*$ with $\lambda f = 0$ for some nonzero $\lambda \in R$. Then $\operatorname{im} f \subseteq \ker(x \mapsto \lambda x)$. Now if $R$ is a domain and $\lambda \ne 0$, then $\ker(x \mapsto \lambda x) = 0$, and so we may conclude $f = 0$. Hence there are no nonzero torsion elements in $M^*$.
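Returning to the Gram-Schmidt computation in 2(c), the specific numbers can be re-checked numerically. A plain-Python sketch (the helper names are ours, not part of the problem set):

```python
# Re-check of 2(c): orthonormalize the basis (2,-1,0,0), (3,0,-1,0), (4,0,0,-1)
# of the hyperplane x + 2y + 3z + 4w = 0 in R^4 via Gram-Schmidt.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    ortho = []
    for u in basis:
        for v in ortho:  # subtract projections onto the earlier unit vectors
            c = dot(v, u)
            u = [ui - c * vi for ui, vi in zip(u, v)]
        n = math.sqrt(dot(u, u))
        ortho.append([ui / n for ui in u])
    return ortho

v1, v2, v3 = gram_schmidt([(2, -1, 0, 0), (3, 0, -1, 0), (4, 0, 0, -1)])

# Orthonormality:
for a in (v1, v2, v3):
    assert abs(dot(a, a) - 1) < 1e-12
assert abs(dot(v1, v2)) < 1e-12
assert abs(dot(v1, v3)) < 1e-12
assert abs(dot(v2, v3)) < 1e-12
# All three lie in the hyperplane with normal (1, 2, 3, 4):
for a in (v1, v2, v3):
    assert abs(dot(a, (1, 2, 3, 4))) < 1e-12
# v3 agrees with (2, 4, 6, -7)/sqrt(105), as computed above:
assert all(abs(a - b / math.sqrt(105)) < 1e-12 for a, b in zip(v3, (2, 4, 6, -7)))
```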

(b) We may define a map $M \to M^{**}$ by $x \mapsto e_x$, where $e_x(f) = f(x)$ (any element $x$ of $M$ is sent to the evaluation-at-$x$ homomorphism on $M^*$). This is an $R$-linear map: to check that $e_{rx + y} = r e_x + e_y$, we need only check that these maps take the same values on all $f \in M^*$. But $e_{rx+y}(f) = f(rx + y) = r f(x) + f(y) = r e_x(f) + e_y(f)$, by $R$-linearity of $f$.

(c) Given a finitely-generated free module $M \cong R^m$, let $e : M \to M^{**}$ be the map above. We already know this is an $R$-module map, so it suffices to show that it is bijective. For this, it suffices to show that the basis $\mathcal B := (e_1, \dots, e_m)$ is taken to a basis of $M^{**}$. Denote by $e_i^{**}$ the image of $e_i$ under $e$, and the set of all the $e_i^{**}$ by $\mathcal B^{**}$.

Before starting, we observe that because $M$ is free, the choice of the basis $\mathcal B$ establishes an isomorphism of $M$ with $R^m$. So elements of $M^*$ are uniquely determined by where they send the basis elements $e_i$, which, in turn, can be sent to any element of $R$ (by the universal property of free modules). That is, $M^*$ can be thought of as the module of $1 \times m$ matrices, acting on column vectors in $R^m$ by multiplication. The elements $e_i^*$ in $M^*$ corresponding to the unit rows (consisting of all zeros except for a $1$ in the $i$-th spot) are a basis for $M^*$ (since the unit rows are a basis for the $R$-module of row vectors). Note that $e_i^*(e_j) = \delta_{ij}$ (Kronecker delta). Note also, by definition of $e : M \to M^{**}$, that $e(e_i) \in M^{**}$ is the map sending $e_j^*$ to $e_j^*(e_i) = \delta_{ij}$. In particular $e_i^{**}(e_j^*) = \delta_{ij}$.

We now prove that the set $\mathcal B^{**}$ is a free generating set for $M^{**}$. First we show that $\mathcal B^{**}$ is linearly independent: if $\sum_{i=1}^m r_i e_i^{**}$ is the zero map $M^* \to R$, then in particular it sends each of the elements $e_j^* \in M^*$ to zero. But then $0 = \sum_{i=1}^m r_i e_i^{**}(e_j^*) = \sum_{i=1}^m r_i e_j^*(e_i) = \sum_{i=1}^m r_i \delta_{ij} = r_j$, establishing linear independence.

We next show that $\mathcal B^{**}$ spans $M^{**}$. Let $\Psi$ be any element of $M^{**}$. Let $r_i = \Psi(e_i^*)$. We claim that $\Psi = \sum_{i=1}^m r_i e_i^{**}$.
To check this, it suffices to check that these maps take the same value on every element $f \in M^*$, and by $R$-linearity, in fact, it is enough to check this on the generators $e_j^*$. But $\sum_{i=1}^m r_i e_i^{**}(e_j^*) = \sum_{i=1}^m r_i e_j^*(e_i) = \sum_{i=1}^m r_i \delta_{ij} = r_j$, which is equal to $\Psi(e_j^*)$ by definition. So $\Psi = \sum_{i=1}^m r_i e_i^{**}$. QED.

(d) If $R = k$ is a field, then all $R$-modules are free; hence we have already shown that $M$ finitely-generated $\Rightarrow$ $M$ reflexive in the previous part. It remains to show that $M$ reflexive $\Rightarrow$ $M$ finitely-generated in this case. Suppose $M$ is not finitely-generated as a $k$-module. We claim that the map $M \to M^{**}$ is injective, but not surjective, in this case. Let $(v_\alpha)_{\alpha \in J} \subset M$ be a basis for $M$, and let $v_\alpha^{**} \in M^{**}$ be evaluation at $v_\alpha$. Now any $v^{**} \in M^{**}$ which is not an evaluation map (at some element of $M$) is not in the span of the $v_\alpha^{**}$; we can construct such a $v^{**}$ explicitly by letting $v^{**}(v_\alpha^*) = 1$ for all $\alpha$, which is well-defined by the universal property of freeness. Here the $v_\alpha^*$ are the dual elements defined by $v_\alpha^*(v_\beta) = \delta_{\alpha\beta}$ (which are easily seen to be independent, as above, even though there are infinitely many). Now $v^{**}$ thus defined cannot be an evaluation map at any element of $M$, for any element of $M$ is a (finite) $R$-linear combination of the $v_\alpha$, and hence any evaluation map must take $v_\alpha^*$ to zero for all but finitely many $\alpha$.

(e) Consider the $\mathbb{Z}$-module $\mathbb{Q}$. Since $\mathbb{Q}^* = \operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q}, \mathbb{Z}) = 0$, we have $\mathbb{Q}^{**} = 0$, but this is clearly not isomorphic to $\mathbb{Q}$. Hence $\mathbb{Q}$ is not reflexive (as a $\mathbb{Z}$-module).

Alternatively, consider any nonzero torsion $\mathbb{Z}$-module (i.e. any finite abelian group) $\Gamma$. From (a), $\Gamma^*$ and $\Gamma^{**}$ are torsion-free, and so certainly not isomorphic to $\Gamma$ (in particular, any $\mathbb{Z}$-linear map $\varphi : \Gamma \to \mathbb{Z}$ must be zero, since there are no nonzero torsion elements in $\mathbb{Z}$, so $\Gamma^* = \operatorname{Hom}_{\mathbb{Z}}(\Gamma, \mathbb{Z}) = 0$).

More generally, consider the $R$-module $R/I$ for $I$ some nonzero proper ideal of $R$. We must have $(R/I)^* = \operatorname{Hom}_R(R/I, R) = 0$, since any $R$-linear map $\varphi : R/I \to R$ sends $1$ to some element annihilated

by $I$; but since $R$ is a domain, the only such element is $0$. So $(R/I)^* = (R/I)^{**} = 0$, and this is not isomorphic to the nonzero module $R/I$.

5 Adjoint functors

(a) We wish to show that for any $R$-modules $C, D$, there is a natural bijection $\Psi : \operatorname{Hom}_R(C, \operatorname{Hom}_R(M, D)) \to \operatorname{Hom}_R(C \otimes_R M, D)$. This is given by sending $\varphi$ to the map $c \otimes m \mapsto \varphi(c)(m)$. We verified in the previous homework that this is a well-defined bijection.

(b) Let $F(S)$ denote the free $R$-module on the set $S$, and let $U$ denote the underlying-set (forgetful) functor. Now, for any sets $S, S'$,
$$\operatorname{Hom}_{\mathrm{Set}}(S, U(F(S'))) \cong \operatorname{Hom}_R(F(S), F(S'))$$
via the bijection sending $\varphi \mapsto \tilde\varphi$, where $\tilde\varphi$ is the morphism of free $R$-modules obtained by extending $\varphi : S \to U(F(S'))$ (which is just any function of sets) linearly. Since morphisms of free $R$-modules are determined entirely by where the generators are sent, and the generators may be sent anywhere, it is clear that this map is indeed injective and surjective, and hence a bijection, as claimed. Hence the functor $S \mapsto F(S)$ and the forgetful functor $U$ are adjoint.

(c) We should have, for all $C', C \in \mathcal C$, all $D \in \mathcal D$, all $\varphi : C' \to C$, and all $\gamma \in \operatorname{Hom}(C, G(D))$, the naturality condition
$$\Psi(\gamma \circ \varphi) = \Psi(\gamma) \circ F(\varphi) \quad \text{in } \operatorname{Hom}(F(C'), D),$$
i.e. the bijections $\Psi$ commute with precomposition along morphisms of $\mathcal C$; and similarly, for all $\psi : D \to D'$ and all $\gamma \in \operatorname{Hom}(F(C), D)$,
$$\Psi^{-1}(\psi \circ \gamma) = G(\psi) \circ \Psi^{-1}(\gamma) \quad \text{in } \operatorname{Hom}(C, G(D')),$$
i.e. the bijections commute with postcomposition along morphisms of $\mathcal D$ (where the directions of the relevant arrows change if $F$ or $G$ is contravariant rather than covariant).
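The bijection in 5(a) is formally the same "currying" correspondence familiar from functions, with the product of sets playing the role of the tensor product. A set-level sketch by way of analogy (the function names here are illustrative, not from the problem set):

```python
# Currying: maps C x M -> D correspond bijectively to maps C -> (M -> D),
# a set-level analogue of the tensor-hom adjunction in 5(a).

def curry(f):
    """Hom(C x M, D) -> Hom(C, Hom(M, D))."""
    return lambda c: lambda m: f(c, m)

def uncurry(g):
    """Hom(C, Hom(M, D)) -> Hom(C x M, D)."""
    return lambda c, m: g(c)(m)

# The two maps are mutually inverse on example inputs:
f = lambda c, m: 3 * c + m
g = curry(f)
assert g(2)(5) == f(2, 5) == 11
assert uncurry(curry(f))(4, 7) == f(4, 7)
```

Naturality as in 5(c) corresponds here to the fact that currying is compatible with pre- and post-composition.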