Inequalities in Hilbert Spaces


Master Thesis

Inequalities in Hilbert Spaces

Jan Wigestrand
Master of Science in Mathematics
Supervisor: Eugenia Malinnikova, Department of Mathematical Sciences
NTNU, Norwegian University of Science and Technology
Faculty of Information Technology, Mathematics and Electrical Engineering
Trondheim, March 2008

Contents

Acknowledgments
Introduction
Notation

1 Classical inequalities
  1.1 Hilbert Spaces
    1.1.1 Hilbert spaces
    1.1.2 Examples of Hilbert spaces
    1.1.3 Riesz's representation theorem
  1.2 Cauchy-Schwarz inequality
    1.2.1 Cauchy-Schwarz inequality
    1.2.2 Examples of Cauchy-Schwarz inequality
  1.3 Bessel's inequality
    1.3.1 Bessel's inequality
    1.3.2 Examples of Bessel's inequality
    1.3.3 Application of Bessel's inequality
  1.4 Selberg's inequality
    1.4.1 Selberg's inequality
    1.4.2 Examples of Selberg's inequality

2 Positive semidefinite matrices and inequalities
  2.1 Positive semidefinite matrices
    2.1.1 Definition and basic properties of positive semidefinite matrices
    2.1.2 Further properties of positive semidefinite matrices
  2.2 Inequalities
    2.2.1 Generalized Bessel's inequality
    2.2.2 Selberg's inequality
    2.2.3 Generalized Selberg's inequality

3 Frames
  3.1 Definition and Cauchy-Schwarz upper bound
  3.2 Upper bound
  3.3 Lower bound
  3.4 Tight frames
  3.5 Examples of tight frames

Conclusion
Appendix A
References

Acknowledgments

I would like to thank Eugenia Malinnikova for her most valuable help in writing this thesis. The start date was March 6, 2007; the deadline was March 6, 2008.

Jan Wigestrand, Trondheim, March 5, 2008

Introduction

The main result in this thesis is a new generalization of Selberg's inequality in Hilbert spaces, with a proof (see Theorem 8 in Chapter 2). In Chapter 1 we define Hilbert spaces and prove the Cauchy-Schwarz inequality and the Bessel inequality. As an application of the Cauchy-Schwarz inequality and the Bessel inequality, we give an estimate for the dimension of an eigenspace of an integral operator. Next we prove Selberg's inequality, including the equality conditions, following [Furuta]. In Chapter 2 we give selected facts on positive semidefinite matrices, with proofs or references, and then use this theory to study inequalities. First we prove a generalized Bessel inequality following [Akhiezer, Glazman]; then we use the same technique to give a new proof of Selberg's inequality. We conclude with a new generalization of Selberg's inequality, with a proof. In Chapter 3 we show how the matrix approach developed in Chapters 1 and 2 can be used to obtain optimal frame bounds. We introduce a new notation for frame bounds; see the Notation section on page vii.

Notation

a, b          frame bounds
P             the set {1, 2, 3, …} of all positive integers
N             the set {0, 1, 2, 3, …} of all nonnegative integers
R             the set of all real numbers
C             the set of all complex numbers, z = a + ib (a ∈ R, b ∈ R, i = √−1)
H             Hilbert space
A*            complex conjugate transpose matrix of A
A ≥ 0         A is positive semidefinite
A > 0         A is positive definite
A^{1/2}       the square root of a positive semidefinite matrix A
I             identity matrix
U             unitary matrix, UU* = U*U = I
u ⊥ v         u and v are orthogonal vectors
λ             eigenvalue
diag(λ₁, …, λₙ)   diagonal matrix
λ_max(A)      largest eigenvalue of matrix A
λ_min>(A)     smallest positive eigenvalue of matrix A

Chapter 1

Classical inequalities

1.1 Hilbert Spaces

We will study inequalities in Hilbert spaces, and in this section we give the definitions and examples of Hilbert spaces.

1.1.1 Hilbert spaces

Definition 1. A vector space H with a map ⟨·,·⟩ : H × H → C (for real vector spaces ⟨·,·⟩ : H × H → R) is called an inner product space if the following properties are satisfied:

(I1) ⟨x, x⟩ = 0 if and only if x = 0,
(I2) ⟨x, x⟩ ≥ 0 for all x ∈ H,
(I3) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩ for all x, y, z ∈ H,
(I4) ⟨αx, y⟩ = α⟨x, y⟩ for all x, y ∈ H and α ∈ C (for real vector spaces α ∈ R),
(I5) ⟨x, y⟩ = conj(⟨y, x⟩) for all x, y ∈ H, where conj denotes complex conjugation.

If in addition H is complete, that is,

(I6) for every sequence (x_n) in H with lim_{n,m→∞} ⟨x_n − x_m, x_n − x_m⟩ = 0 there exists an x ∈ H with lim_{n→∞} ⟨x − x_n, x − x_n⟩ = 0,

then H is called a Hilbert space.

From now on H will denote a Hilbert space. The norm in H is defined by

(I7) ‖x‖ = √⟨x, x⟩ for x ∈ H.

(I8) Every Hilbert space has an orthonormal basis, see [Folland, p76]. This means that there exists an orthonormal system {e_α}_{α∈A} of elements in H, that is, ⟨e_α, e_β⟩ = 0 if α ≠ β and ‖e_α‖ = √⟨e_α, e_α⟩ = 1 for each α, such that for each x ∈ H we have x = Σ_{α∈A} ⟨x, e_α⟩ e_α (the series converges in H). If we have a separable Hilbert space, we can replace {e_α}_{α∈A} by {e_j}_{j=1}^∞, and the statement above can be reformulated in the following way: there exists an orthonormal system {e_j}_{j=1}^∞ of elements in H, that is, ⟨e_j, e_k⟩ = 0 if j ≠ k and ‖e_j‖ = √⟨e_j, e_j⟩ = 1 for each j, such that for each x ∈ H we have x = Σ_{j=1}^∞ ⟨x, e_j⟩ e_j (the series converges in H).

From now on a Hilbert space will be synonymous with a separable Hilbert space unless otherwise specified. When the basis of H is finite we say that H is finite dimensional; otherwise we say that H has infinite dimension.

1.1.2 Examples of Hilbert spaces

(a) R³ (real vector space) is a three-dimensional Hilbert space with ⟨x, y⟩ = x₁y₁ + x₂y₂ + x₃y₃ = xᵀy for vectors x, y in R³, where xᵀ = [x₁ x₂ x₃]. The vectors e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1) form an orthonormal basis for R³.

(b) Rⁿ (real vector space) is an n-dimensional Hilbert space with ⟨x, y⟩ = Σ_{j=1}^n x_j y_j = xᵀy for vectors x, y in Rⁿ, where xᵀ = [x₁ x₂ … xₙ]. An orthonormal basis for Rⁿ consists of n vectors, each of dimension n: the standard unit vectors e₁ = (1, 0, …, 0), …, eₙ = (0, …, 0, 1).

(c) ⟨x, y⟩_A = (Ax)ᵀy = xᵀAᵀy = xᵀAy for vectors x, y in Rⁿ and a positive definite matrix A ∈ R^{n×n} (xᵀAx > 0 for all x ≠ 0; for more details see Chapter 2). Properties (I3) and (I4) are clearly satisfied, and if A is positive definite then (I1) and (I2) are satisfied. A positive definite implies that A is symmetric (Aᵀ = A). We use the symmetry of A to show that (I5) is satisfied:

⟨x, y⟩_A = (Ax)ᵀy = xᵀAᵀy = (xᵀAᵀy)ᵀ = yᵀAx = (Aᵀy)ᵀx = ⟨y, x⟩_{Aᵀ} = ⟨y, x⟩_A.

It follows that (I5) is satisfied and that ⟨x, y⟩_A is an inner product. Since A is symmetric, eigenvectors from different eigenspaces are orthogonal. We can find an orthonormal basis for A by first finding a basis for each eigenspace of A and then applying the Gram-Schmidt process to each of these bases.

(d) Cⁿ (complex vector space) is an n-dimensional Hilbert space with ⟨x, y⟩ = Σ_{j=1}^n x_j conj(y_j) for vectors x, y in Cⁿ. For example, the vectors ((1 + i)/√2)·e_j, j = 1, …, n, where the e_j are the standard unit vectors, form an orthonormal basis for Cⁿ.

(e) ℓ² = { x = (ξ₁, ξ₂, …) : Σ_{j=1}^∞ |ξ_j|² < ∞, ξ_j ∈ C for all j ∈ P } is an infinite-dimensional Hilbert space with ⟨x, y⟩ = Σ_{j=1}^∞ ξ_j conj(η_j) for x = (ξ₁, ξ₂, …), y = (η₁, η₂, …), ξ_j, η_j ∈ C.

An orthonormal basis for ℓ² consists of infinitely many vectors, each of infinite dimension: the sequences e_j = (0, …, 0, 1, 0, …), with the 1 in position j, form an orthonormal basis for ℓ².

(f) L²(X, M, µ) = { f : X → C : f is measurable and (∫ |f|² dµ)^{1/2} < ∞ }, where (X, M, µ) is a measure space and f is a measurable function on X, is an infinite-dimensional Hilbert space with ⟨x, y⟩ = ∫ x(t) conj(y(t)) dµ(t) for x, y ∈ L²(µ).

(g) An orthonormal basis for L²[−π, π] is 1/√(2π), (1/√π) cos t, (1/√π) sin t, (1/√π) cos 2t, (1/√π) sin 2t, …

1.1.3 Riesz's representation theorem

We will use the Riesz representation theorem in Chapter 1.3.3, example (b). Let H* be the set of all bounded linear functionals on a Hilbert space H. For all F ∈ H* we define ‖F‖_{H*} = sup_{x∈H, ‖x‖=1} |F(x)|.

Theorem 1 (Riesz's representation theorem). For every F ∈ H* there exists a unique y ∈ H such that

F(x) = ⟨x, y⟩ for all x ∈ H. (1.1)

Moreover, we have ‖y‖ = ‖F‖_{H*}. A proof of Theorem 1 can be found in [Schechter, p3].
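The inner product axioms and the induced norm (I7) on Cⁿ are easy to check numerically. A minimal sketch in Python; the helper names `inner` and `norm` are ours, not from the thesis:

```python
import math

def inner(x, y):
    # <x, y> = sum_j x_j * conj(y_j), the standard inner product on C^n
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    # (I7): ||x|| = sqrt(<x, x>); <x, x> is real and nonnegative
    return math.sqrt(inner(x, x).real)

x = [1 + 1j, 2 - 1j]
y = [3j, 1]

# (I5): <x, y> is the complex conjugate of <y, x>
assert inner(x, y) == inner(y, x).conjugate()
# (I2): <x, x> is real and nonnegative
assert inner(x, x).real >= 0 and abs(inner(x, x).imag) < 1e-12
print(norm(x))  # sqrt(|1+i|^2 + |2-i|^2) = sqrt(7)
```

The same two helpers are reused in the sketches below for the classical inequalities.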

1.2 Cauchy-Schwarz inequality

The Cauchy-Schwarz inequality is one of the most used inequalities in mathematics, probably the most used inequality in advanced mathematical analysis. The inequality is often used without explicit reference to it. See Chapter 1.3.3 (c) for an example where the Cauchy-Schwarz inequality is used.

1.2.1 Cauchy-Schwarz inequality

Theorem 2 (Cauchy-Schwarz inequality). In an inner product space X,

|⟨x, y⟩| ≤ ‖x‖ ‖y‖ for all x, y ∈ X. (1.2)

Equality in (1.2) holds if and only if x and y are linearly dependent.

Proof. The case y = 0 is trivial. Let y ≠ 0. For any α ∈ C we have

0 ≤ ‖x − αy‖² = ⟨x − αy, x − αy⟩ = ‖x‖² − conj(α)⟨x, y⟩ − α⟨y, x⟩ + |α|²‖y‖².

Choose α = ⟨x, y⟩/‖y‖², and we have

0 ≤ ‖x‖² − |⟨x, y⟩|²/‖y‖² − |⟨x, y⟩|²/‖y‖² + |⟨x, y⟩|²/‖y‖² = ‖x‖² − |⟨x, y⟩|²/‖y‖²,

and |⟨x, y⟩| ≤ ‖x‖ ‖y‖ follows.

If |⟨x, y⟩| = ‖x‖ ‖y‖, then we can choose α ∈ C with |α| = 1 such that α⟨x, y⟩ = |⟨x, y⟩| = ‖x‖ ‖y‖, and we have

‖ ‖x‖ y − α ‖y‖ x ‖² = ‖x‖²‖y‖² − ‖x‖ ‖y‖ conj(α)⟨y, x⟩ − ‖x‖ ‖y‖ α⟨x, y⟩ + ‖x‖²‖y‖² = 2‖x‖²‖y‖² − 2‖x‖²‖y‖² = 0.

According to (I1), we must have ‖x‖ y = α ‖y‖ x, so x and y are linearly dependent. Conversely, if y = βx, β ∈ C, then ⟨x, βx⟩ = conj(β)⟨x, x⟩ = conj(β)‖x‖², and we have |⟨x, βx⟩| = |β| ‖x‖² = ‖x‖ ‖βx‖. □
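As a numerical sanity check of Theorem 2 (a sketch of our own, not part of the thesis), one can verify |⟨x, y⟩| ≤ ‖x‖·‖y‖ in Cⁿ, and that the bound is attained for linearly dependent vectors:

```python
import math

def inner(x, y):
    # standard inner product on C^n
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x).real)

x = [1 + 2j, 3 - 1j, 0.5j]
y = [2 - 1j, 1j, 4.0]

# Cauchy-Schwarz: |<x, y>| <= ||x|| * ||y||
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12

# Equality iff linearly dependent: take y = beta * x
beta = 2 - 3j
yd = [beta * a for a in x]
assert abs(abs(inner(x, yd)) - norm(x) * norm(yd)) < 1e-9
```

The equality check mirrors the last step of the proof: ⟨x, βx⟩ = conj(β)‖x‖², so its modulus is exactly ‖x‖·‖βx‖.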

1.2.2 Examples of Cauchy-Schwarz inequality

(a) In R³ we have |⟨x, y⟩| ≤ ‖x‖ ‖y‖ for vectors x, y in R³, with equality if x and y are linearly dependent. This can be seen from the Lagrange identity, which gives ⟨x, y⟩² = ‖x‖² ‖y‖² − ‖x × y‖², where x × y is the vector product.

(b) In Rⁿ we have |Σ_{j=1}^n x_j y_j| ≤ ‖x‖ ‖y‖, that is, |xᵀy| ≤ ‖x‖ ‖y‖ for vectors x, y in Rⁿ.

(c) |xᵀAy| ≤ √(xᵀAx) √(yᵀAy) for vectors x, y in Rⁿ and a positive definite matrix A ∈ R^{n×n}.

(d) In Cⁿ we have |Σ_{j=1}^n x_j conj(y_j)| ≤ (Σ_{j=1}^n |x_j|²)^{1/2} (Σ_{j=1}^n |y_j|²)^{1/2} for vectors x, y in Cⁿ.

(e) In ℓ² we have |Σ_{j=1}^∞ ξ_j conj(η_j)| ≤ (Σ_{j=1}^∞ |ξ_j|²)^{1/2} (Σ_{j=1}^∞ |η_j|²)^{1/2} for x = (ξ₁, ξ₂, …), y = (η₁, η₂, …), ξ_j, η_j ∈ C.

(f) In L²(µ) we have |∫ x(t) conj(y(t)) dµ(t)| ≤ (∫ |x(t)|² dµ(t))^{1/2} (∫ |y(t)|² dµ(t))^{1/2} for x, y ∈ L²(µ).

(g) In L²(µ), assume that µ(X) < +∞ and f ∈ L²(µ). Then the Cauchy-Schwarz inequality, applied to f and the constant function 1, implies (∫ |f| dµ)² ≤ ∫ |f|² dµ · µ(X). If µ is a probability measure, then µ(X) = 1.
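Example (c), the inner product weighted by a positive definite matrix, can also be sketched in code. The 2×2 matrix below is a hypothetical choice of ours, used only to illustrate the axioms:

```python
def mat_vec(A, x):
    # matrix-vector product for a small dense matrix given as nested lists
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def inner_A(A, x, y):
    # <x, y>_A = x^T A y for real vectors and a symmetric positive definite A
    Ay = mat_vec(A, y)
    return sum(xi * v for xi, v in zip(x, Ay))

A = [[2.0, 1.0],
     [1.0, 2.0]]  # symmetric, eigenvalues 1 and 3, hence positive definite

x, y = [1.0, -1.0], [3.0, 2.0]

# (I5) for real spaces: symmetry <x, y>_A = <y, x>_A
assert inner_A(A, x, y) == inner_A(A, y, x)
# (I1)/(I2): <x, x>_A > 0 for x != 0
assert inner_A(A, x, x) > 0
```

The symmetry assertion is exactly the computation (Ax)ᵀy = yᵀAx carried out in example (c).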

1.3 Bessel's inequality

Another widely used inequality for vectors in inner product spaces is the Bessel inequality. The Cauchy-Schwarz inequality follows from the Bessel inequality. In this section we prove the inequality and use it to give an estimate for the dimension of an eigenspace of an integral operator.

1.3.1 Bessel's inequality

Theorem 3 (Bessel's inequality). Let {e_j}_{j=1}^∞ be an orthonormal system in a Hilbert space H. Then

Σ_{j=1}^∞ |⟨x, e_j⟩|² ≤ ‖x‖² for all x ∈ H. (1.3)

Proof. Let α_k = ⟨x, e_k⟩. Then for any n ∈ P we have

0 ≤ ‖x − Σ_{k=1}^n α_k e_k‖²
= ⟨x − Σ_{k=1}^n α_k e_k, x − Σ_{k=1}^n α_k e_k⟩
= ‖x‖² − Σ_{k=1}^n conj(α_k)⟨x, e_k⟩ − Σ_{k=1}^n α_k⟨e_k, x⟩ + Σ_{k=1}^n |α_k|²
= ‖x‖² − Σ_{k=1}^n |⟨x, e_k⟩|² − Σ_{k=1}^n |⟨x, e_k⟩|² + Σ_{k=1}^n |⟨x, e_k⟩|²
= ‖x‖² − Σ_{k=1}^n |⟨x, e_k⟩|²,

so Σ_{k=1}^n |⟨x, e_k⟩|² ≤ ‖x‖². Let n → ∞ in the last inequality: we have a series of nonnegative numbers whose partial sums are bounded from above, hence (1.3) follows. □

The inner products ⟨x, e_j⟩ in (1.3) are called the Fourier coefficients of x with respect to the orthonormal system {e_j}_{j=1}^∞.

Remark 1. We will look at a more general system later.
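Theorem 3 can be illustrated with a toy example of our own in R³: with only two vectors of the standard basis, the sum of squared Fourier coefficients falls strictly short of ‖x‖², and completing the system restores equality (Parseval's identity, Theorem 4 below):

```python
def inner(x, y):
    # the standard inner product on R^3
    return sum(a * b for a, b in zip(x, y))

x = [3.0, -4.0, 12.0]
e1, e2, e3 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]

# Incomplete orthonormal system {e1, e2}: strict Bessel inequality
bessel_sum = inner(x, e1) ** 2 + inner(x, e2) ** 2   # 9 + 16 = 25
assert bessel_sum <= inner(x, x)                     # 25 <= 169

# Completing to the full basis gives Parseval's identity (Theorem 4)
parseval = bessel_sum + inner(x, e3) ** 2            # 25 + 144 = 169
assert parseval == inner(x, x)
```

The strict gap 169 − 25 = 144 is exactly ‖x − P x‖², the squared norm of the part of x outside the span of the system.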

Theorem 4. Let {e_j}_{j=1}^∞ be an orthonormal system in a Hilbert space H. Then {e_j}_{j=1}^∞ is an orthonormal basis if and only if for all x ∈ H we have

‖x‖² = Σ_{j=1}^∞ |⟨x, e_j⟩|² (Parseval's identity). (1.4)

A proof of Theorem 4 can be found in [Weidmann, p39].

If we have an orthonormal system with only one element (e₁ = y/‖y‖), then the Bessel inequality becomes the Cauchy-Schwarz inequality.

1.3.2 Examples of Bessel's inequality

(a) In R³ we have ‖x‖² = Σ_{j=1}^3 |⟨x, e_j⟩|² for every vector x in R³, where e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1) is an orthonormal basis for R³.

(b) In Rⁿ we have ‖x‖² = Σ_{j=1}^n |⟨x, e_j⟩|² for every vector x in Rⁿ, where the standard unit vectors e₁, …, eₙ form an orthonormal basis for Rⁿ.

(c) In Cⁿ we have ‖x‖² = Σ_{j=1}^n |⟨x, e_j⟩|² for every vector x in Cⁿ, where, for example, the vectors e_j = ((1 + i)/√2)·(0, …, 0, 1, 0, …, 0), with the 1 in position j, form

an orthonormal basis for Cⁿ.

(d) In ℓ² we have ‖x‖² = Σ_{j=1}^∞ |ξ_j|² for x = (ξ₁, ξ₂, …) ∈ ℓ², ξ_j ∈ C.

(e) In L²[−π, π] we have ‖x‖² = a₀² + Σ_{n=1}^∞ a_n² + Σ_{n=1}^∞ b_n² for x ∈ L²[−π, π], where

a₀ = ⟨x, e₀⟩ = (1/√(2π)) ∫_{−π}^{π} x dt, a₀ ∈ R, and
a_n = ⟨x, e_{2n−1}⟩ = (1/√π) ∫_{−π}^{π} x cos nt dt, a_n ∈ R, n ∈ P, and
b_n = ⟨x, e_{2n}⟩ = (1/√π) ∫_{−π}^{π} x sin nt dt, b_n ∈ R, n ∈ P,

and e₀ = 1/√(2π), e₁ = (1/√π) cos t, e₂ = (1/√π) sin t, e₃ = (1/√π) cos 2t, e₄ = (1/√π) sin 2t, … is an orthonormal basis for L²[−π, π]. The series a₀/√(2π) + Σ_{n=1}^∞ a_n (1/√π) cos nt + Σ_{n=1}^∞ b_n (1/√π) sin nt is the Fourier series of x.

(f) If one of the basis vectors e_j is omitted in examples (a), (b), (c), (e) above, then we do not have an orthonormal basis and we have inequality instead of equality (‖x‖² ≥ instead of ‖x‖² =).

1.3.3 Application of Bessel's inequality

(a) Let S = Σ_{n=1}^∞ sin nx / n^a, 0 < a ≤ 1/2; S converges. We will show that S cannot be the Fourier series of a Riemann-integrable function f(x). From the Bessel inequality we have Σ_{n=1}^∞ n^{−2a} ≤ (1/π) ∫_{−π}^{π} f(x)² dx, where the n^{−a} are the Fourier coefficients. This is impossible, since Σ_{n=1}^∞ n^{−2a} = ∞ when a ≤ 1/2. Hence S cannot be a Fourier series, see [Gelbaum, Olmsted, p7].

(b) Let H be a closed subspace of L²[0, 1] that is contained in C[0, 1], where C[0, 1] is defined as the space of continuous functions on [0, 1]. We will show that H is finite dimensional, see [Folland, p78 (ex66)]. H is a Hilbert space, since H is a closed subspace of L²[0, 1]. Both H with the L²-norm and C[0, 1] with ‖f‖_{[0,1]} = sup{ |f(x)| : x ∈ [0, 1] } are Banach spaces, see [Griffel, p8]. Consider the inclusion H ⊂ C[0, 1] as a linear map of Banach spaces.

This map f ↦ f is closed. We have to check that { (f, f) ∈ H × C[0, 1] : f ∈ H } is a closed subset of H × C[0, 1]. Suppose that (f_n, f_n) is a Cauchy sequence in H × C[0, 1]. Then f_n is a Cauchy sequence in C[0, 1], and there exists an f such that f_n → f (uniformly on [0, 1]). Then f_n → f in H as well, that is, ‖f_n − f‖ → 0 when n → ∞, since ‖f_n − f‖² = ∫₀¹ |f_n − f|² dt ≤ ‖f_n − f‖²_{[0,1]}; and f ∈ H since H ⊂ C[0, 1] is closed. By the closed graph theorem the inclusion is bounded; thus there exists a C such that ‖f‖_{[0,1]} ≤ C ‖f‖ for any f ∈ H, where ‖f‖_{[0,1]} = sup{ |f(x)| : x ∈ [0, 1] } and ‖f‖ = (∫₀¹ |f|² dµ)^{1/2}.

Let x ∈ [0, 1] and consider the linear functional F_x : H → C defined by F_x(f) = f(x). It is bounded, since |F_x(f)| = |f(x)| ≤ ‖f‖_{[0,1]} ≤ C ‖f‖ for all f ∈ H. Hence it is a continuous linear functional on H. By the Riesz representation theorem there exists a unique g_x ∈ H such that f(x) = ⟨f, g_x⟩ = ∫₀¹ f(t) conj(g_x(t)) dt for all f ∈ H. Further, |f(x)| = |⟨f, g_x⟩| ≤ C ‖f‖ for all f ∈ H; taking f = g_x gives ‖g_x‖² = ⟨g_x, g_x⟩ ≤ C ‖g_x‖, so ‖g_x‖ ≤ C.

Let {f_j}_{j=1}^n be an orthonormal system of functions in H. Then, using the Riesz representation theorem, Bessel's inequality, and the bound ‖g_x‖ ≤ C from the previous result, we have

Σ_{j=1}^n |f_j(x)|² = Σ_{j=1}^n |⟨f_j, g_x⟩|² = Σ_{j=1}^n |⟨g_x, f_j⟩|² ≤ ‖g_x‖² ≤ C² for all x ∈ [0, 1], (∗)

n = Σ_{j=1}^n ∫₀¹ |f_j(x)|² dx ≤ ∫₀¹ C² dx = C². (∗∗)

Thus dim H ≤ C², and H is finite dimensional.

Remark 2. C[0, 1] contains the subspace of polynomials, where 1, x, x², … are linearly independent. It is infinite dimensional and is contained in L²[0, 1], but not in H, which is a closed subspace of L²[0, 1] contained in C[0, 1]. The closure (in L²[0, 1]) of this polynomial subspace is not contained in C[0, 1].

(c) Let K(x, y) be a continuous function on [a, b] × [a, b]. A continuous function f on [a, b] is called an eigenfunction for K with respect to a real eigenvalue r if

f(y) = r ∫_a^b K(x, y) f(x) dx.

We will, without loss of generality, use [0, 1] instead of [a, b]. Let E_r = { f ∈ C[0, 1] : f(y) = r ∫₀¹ K(x, y) f(x) dx }. We will give an estimate for the dimension of E_r, see [Lang, p8 (ex7)]. Let H_r = { f ∈ L²[0, 1] : f(y) = r ∫₀¹ K(x, y) f(x) dx }. Let h ∈ H_r. We want to show that h is continuous:

|h(y) − h(y₀)| ≤ |r| ∫₀¹ |K(x, y) − K(x, y₀)| |h(x)| dx ≤ |r| ( ∫₀¹ |K(x, y) − K(x, y₀)|² dx )^{1/2} ( ∫₀¹ |h(x)|² dx )^{1/2} → 0

when y → y₀, since K(x, y) is uniformly continuous and h ∈ L²[0, 1]. So we have H_r ⊂ E_r ⊂ C[0, 1], since all functions in H_r are continuous, as shown above. Clearly E_r ⊂ H_r ⊂ L²[0, 1]. Hence E_r = H_r.

If we can show that H_r is a closed subspace of L²[0, 1], then we can use the results from example (b) above to give an estimate for the dimension of E_r. Let f_n → f in L²[0, 1] with f_n ∈ H_r, and let g(y) = r ∫₀¹ K(x, y) f(x) dx. Then we have

|f_n(y) − g(y)| ≤ |r| ∫₀¹ |K(x, y)| |f_n(x) − f(x)| dx ≤ |r| ( ∫₀¹ |K(x, y)|² dx )^{1/2} ( ∫₀¹ |f_n(x) − f(x)|² dx )^{1/2} → 0

when f_n → f. The function g is continuous and f_n → g uniformly on [0, 1]; then f = g almost everywhere and f ∈ H_r. Hence H_r is a closed subspace of L²[0, 1].

Assume that dim E_r ≥ n, where n ∈ P; then there exists an orthonormal

system {f_j}_{j=1}^n in H_r. Since f_j(y) = ⟨f_j, r K(·, y)⟩, the argument of (∗) in example (b) above, with g_y = r K(·, y), gives Σ_{j=1}^n |f_j(y)|² ≤ r² ∫₀¹ K²(x, y) dx, and integrating as in (∗∗),

n = Σ_{j=1}^n ∫₀¹ |f_j(y)|² dy ≤ r² ∫₀¹ ∫₀¹ K²(x, y) dx dy.

Thus dim E_r ≤ r² ∫₀¹ ∫₀¹ K²(x, y) dx dy.

1.4 Selberg's inequality

Selberg's inequality is not as well known as the Cauchy-Schwarz and the Bessel inequality. It is an interesting inequality, and it is also a generalization of both the Cauchy-Schwarz and the Bessel inequality.

1.4.1 Selberg's inequality

Theorem 5 (Selberg's inequality). In a Hilbert space H,

Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| ) ≤ ‖x‖² for all x ∈ H and all y_j ∈ H, y_j ≠ 0. (1.5)

Equality in (1.5) holds if and only if

(C1) x = Σ_{j=1}^n α_j y_j, α_j ∈ C, and for each pair (j, k), j ≠ k,
(C2) ⟨y_j, y_k⟩ = 0, or
(C3) |α_j| = |α_k| and ⟨α_j y_j, α_k y_k⟩ ≥ 0.

See [Furuta, p8].

Proof. For any α_j ∈ C we have

0 ≤ ‖x − Σ_{j=1}^n α_j y_j‖²
= ‖x‖² − Σ_{j=1}^n conj(α_j)⟨x, y_j⟩ − Σ_{j=1}^n α_j⟨y_j, x⟩ + Σ_{j=1}^n Σ_{k=1}^n α_j conj(α_k) ⟨y_j, y_k⟩.

From 0 ≤ (|α_j| − |α_k|)² we have |α_j conj(α_k)| ≤ (|α_j|² + |α_k|²)/2, so

Σ_{j=1}^n Σ_{k=1}^n α_j conj(α_k) ⟨y_j, y_k⟩ ≤ Σ_{j=1}^n Σ_{k=1}^n ((|α_j|² + |α_k|²)/2) |⟨y_j, y_k⟩| = Σ_{j=1}^n |α_j|² Σ_{k=1}^n |⟨y_j, y_k⟩|,

and we have

0 ≤ ‖x‖² − Σ_{j=1}^n conj(α_j)⟨x, y_j⟩ − Σ_{j=1}^n α_j⟨y_j, x⟩ + Σ_{j=1}^n |α_j|² Σ_{k=1}^n |⟨y_j, y_k⟩|.

We can choose α_j = ⟨x, y_j⟩ / Σ_{k=1}^n |⟨y_j, y_k⟩|, and we have

0 ≤ ‖x‖² − 2 Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| ) + Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| )
= ‖x‖² − Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| ),

and Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| ) ≤ ‖x‖² follows.

We will show that equality in (1.5) holds if and only if x = Σ_{j=1}^n α_j y_j and

Σ_{j=1}^n Σ_{k=1}^n α_j conj(α_k) ⟨y_j, y_k⟩ = Σ_{j=1}^n Σ_{k=1}^n ((|α_j|² + |α_k|²)/2) |⟨y_j, y_k⟩|. (∗)

Suppose (C1) holds. For each pair (j, k), j ≠ k, for which (C2) is true, the corresponding terms on both sides of (∗) vanish. For each pair for which (C3) is true we have, since |α_j| = |α_k|,

⟨α_j y_j, α_k y_k⟩ = |α_j| |α_k| |⟨y_j, y_k⟩| = ((|α_j|² + |α_k|²)/2) |⟨y_j, y_k⟩|,

so (∗) holds. Moreover, for such a pair, α_k conj(α_j) ⟨y_k, y_j⟩ = |α_j|² |⟨y_k, y_j⟩| gives α_k ⟨y_k, y_j⟩ = α_j |⟨y_k, y_j⟩|, hence

⟨x, y_j⟩ = Σ_{k=1}^n α_k ⟨y_k, y_j⟩ = α_j Σ_{k=1}^n |⟨y_j, y_k⟩|, that is, α_j = ⟨x, y_j⟩ / Σ_{k=1}^n |⟨y_j, y_k⟩|,

and therefore

Σ_{j=1}^n ( |⟨x, y_j⟩|² / Σ_{k=1}^n |⟨y_j, y_k⟩| ) = Σ_{j=1}^n conj(α_j) ⟨x, y_j⟩ = ⟨x, Σ_{j=1}^n α_j y_j⟩ = ‖x‖².

Hence (C1) together with (C2)/(C3) implies equality in (1.5).

Conversely, if equality holds in (1.5), then choose α_j = ⟨x, y_j⟩ / Σ_{k=1}^n |⟨y_j, y_k⟩|. From the proof of inequality (1.5), equality forces 0 = ‖x − Σ_{j=1}^n α_j y_j‖², which is (C1), and it forces (∗). Indeed, for each pair (j, k), j ≠ k, we have

Re ⟨α_j y_j, α_k y_k⟩ ≤ |α_j| |α_k| |⟨y_j, y_k⟩| ≤ ((|α_j|² + |α_k|²)/2) |⟨y_j, y_k⟩|,

so equality in the double sum (∗) forces equality for each pair. For a pair where (C2) is not true,

then ⟨y_j, y_k⟩ ≠ 0, and from

⟨α_j y_j, α_k y_k⟩ = |α_j| |α_k| |⟨y_j, y_k⟩| = ((|α_j|² + |α_k|²)/2) |⟨y_j, y_k⟩|

we get |α_j| |α_k| = (|α_j|² + |α_k|²)/2, that is, (|α_j| − |α_k|)² = 0. Hence |α_j| = |α_k| and ⟨α_j y_j, α_k y_k⟩ ≥ 0, which is (C3). Hence (∗) implies (C2) or (C3) for each pair. □

When we use (C3) we only need to check at most (n − 1)n/2 pairs, since we have symmetry. If we have only one element (n = 1), y = y₁, then the Selberg inequality becomes the Cauchy-Schwarz inequality. If we have an orthogonal system {y_j}_{j=1}^n with several elements (n ≥ 2), ⟨y_j, y_k⟩ = 0 if j ≠ k, and we let e_j = y_j/‖y_j‖, then the Selberg inequality becomes the Bessel inequality.

1.4.2 Examples of Selberg's inequality

(a) In R³, let x be a linear combination of pairwise orthogonal vectors y₁, y₂, y₃ (y₁ ⊥ y₂, y₁ ⊥ y₃, y₂ ⊥ y₃), so that every pair (j, k) in (1.5) satisfies (C2). By Selberg's inequality we have equality in (1.5), since (C1) is satisfied, and it follows that we have Parseval's identity for the normalized system y_j/‖y_j‖.

(b) In R³, for a choice of x, y₁, y₂, y₃ with x = α₁y₁ + α₂y₂ + α₃y₃ and with each pair (j, k) in (1.5) satisfying (C3), Selberg's inequality holds with equality, since (C1) is satisfied.

(c) In R³, for a choice of x, y₁, y₂, y₃ with x = α₁y₁ + α₂y₂ + α₃y₃, where the pairs (1, 2) and (1, 3) in (1.5) satisfy (C2), since y₁ ⊥ y₂ and y₁ ⊥ y₃, and the pair (2, 3) satisfies (C3), we again have equality in (1.5), since (C1) is satisfied.

(d) In L²[−1, 1], x = 1 + t + t², y₁ = 1, y₂ = t, y₃ = t². The pairs (1, 2) and (2, 3) in (1.5) satisfy (C2), since ⟨1, t⟩ = ⟨t, t²⟩ = 0, and the pair (1, 3) satisfies (C3), since α₁ = α₃ = 1 and ⟨1, t²⟩ = 2/3 > 0. By Selberg's inequality we have equality in (1.5), since (C1) is satisfied.

(e) In R³, if x = ay₁ + by₂ + cy₃, a, b, c ∈ R, then the Selberg inequality can be written as

(a⟨y₁, y₁⟩ + b⟨y₂, y₁⟩ + c⟨y₃, y₁⟩)² / (|⟨y₁, y₁⟩| + |⟨y₁, y₂⟩| + |⟨y₁, y₃⟩|)
+ (a⟨y₁, y₂⟩ + b⟨y₂, y₂⟩ + c⟨y₃, y₂⟩)² / (|⟨y₂, y₁⟩| + |⟨y₂, y₂⟩| + |⟨y₂, y₃⟩|)
+ (a⟨y₁, y₃⟩ + b⟨y₂, y₃⟩ + c⟨y₃, y₃⟩)² / (|⟨y₃, y₁⟩| + |⟨y₃, y₂⟩| + |⟨y₃, y₃⟩|)
≤ a²⟨y₁, y₁⟩ + 2ab⟨y₁, y₂⟩ + 2ac⟨y₁, y₃⟩ + b²⟨y₂, y₂⟩ + 2bc⟨y₂, y₃⟩ + c²⟨y₃, y₃⟩.

For the particular vectors y₁, y₂, y₃ chosen in this example, both sides agree and we have equality when a = b = c.

(f) In L²[−1, 1], x = a + bt + ct², y₁ = 1, y₂ = t, y₃ = t², a, b, c ∈ R.

We have y₂ ⊥ y₁ and y₂ ⊥ y₃, and we have the following Selberg inequality:

(2a + (2/3)c)² / (8/3) + ((2/3)b)² / (2/3) + ((2/3)a + (2/5)c)² / (16/15) ≤ 2a² + (4/3)ac + (2/3)b² + (2/5)c².

Since the b-terms are equal on both sides, this reduces to

(2a + (2/3)c)² / (8/3) + ((2/3)a + (2/5)c)² / (16/15) ≤ 2a² + (4/3)ac + (2/5)c².

If a = c, then we have equality ((56/15)a² = (56/15)a²).
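Selberg's inequality (Theorem 5) is easy to test numerically. A sketch in R³ with arbitrary non-orthogonal vectors of our own choosing, computing each denominator Σ_k |⟨y_j, y_k⟩| directly:

```python
def inner(x, y):
    # standard inner product on R^3
    return sum(a * b for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x)

x = [1.0, 2.0, 3.0]
ys = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]

# Selberg: sum_j <x, y_j>^2 / (sum_k |<y_j, y_k>|) <= ||x||^2
lhs = sum(
    inner(x, yj) ** 2 / sum(abs(inner(yj, yk)) for yk in ys)
    for yj in ys
)
assert lhs <= norm_sq(x)
print(lhs, norm_sq(x))  # 12.5 <= 14.0
```

Here every denominator is 4 (each y_j has squared norm 2 and overlaps 1 with the other two), so the left-hand side is (9 + 25 + 16)/4 = 12.5, strictly below ‖x‖² = 14: the y_j are not orthogonal and x is not tuned to the equality conditions (C1)-(C3).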

Chapter 2

Positive semidefinite matrices and inequalities

In this chapter we give a new proof of Selberg's inequality. It is based on the theory of positive semidefinite matrices. This approach gives a new generalization of Selberg's inequality.

2.1 Positive semidefinite matrices

Positive semidefinite matrices are closely related to nonnegative real numbers.

2.1.1 Definition and basic properties of positive semidefinite matrices

An n×n matrix A is normal if A*A = AA*. An n×n matrix A is Hermitian if A* = A. Hermitian matrices are normal matrices. An n×n matrix A is positive semidefinite if A is Hermitian and x*Ax ≥ 0 for all nonzero x ∈ Cⁿ; we will then write A ≥ 0. An n×n matrix A is positive definite if A is Hermitian and x*Ax > 0 for all nonzero x ∈ Cⁿ; we will then write A > 0. If A − B ≥ 0, then we will write A ≥ B.

The following is a list of some properties of positive semidefinite matrices that are needed in this chapter. Let A, B, C, F, I, S, U denote n×n matrices and x, y denote n-vectors.

(i) If A is Hermitian, then A = U diag(λ₁, …, λₙ) U*, where U is a unitary

matrix and the λ_j are real numbers on the diagonal. Proof: see [Zhang, p65].

(ii) If A is Hermitian, then S*AS is Hermitian. Proof: (S*AS)* = S*A*S = S*AS.

(iii) A ≥ 0 implies S*AS ≥ 0. Proof: x*Ax ≥ 0 implies y*S*ASy = (Sy)*A(Sy) ≥ 0.

(iv) A ≥ 0 implies det(A) ≥ 0. Proof: let Ax = λx, where λ is an eigenvalue and x an eigenvector of A corresponding to λ. Then for each λ we have x*Ax = λ x*x, so λ = x*Ax / x*x ≥ 0, and det(A) = λ₁λ₂⋯λₙ ≥ 0.

(v) A ≥ 0 and det(A) > 0 imply A > 0. Proof: by (iv) all eigenvalues of A are nonnegative, and their product det(A) is positive, so every eigenvalue is positive; with (i) this gives x*Ax > 0 for all x ≠ 0.

(vi) If A ≥ B, then S*AS ≥ S*BS. Proof: A ≥ B ⟹ S*(A − B)S ≥ 0 ⟹ S*AS ≥ S*BS.

(vii) If A ≥ 0, then there exists a matrix B ≥ 0 such that B² = A; B is denoted by A^{1/2}. Proof: see [Zhang, p6].

(viii) If A ≥ B > 0, then B⁻¹ ≥ A⁻¹. Proof: if C ≥ I, then C⁻¹ = C^{−1/2} I C^{−1/2} ≤ C^{−1/2} C C^{−1/2} = I. Now A ≥ B ⟹ B^{−1/2}AB^{−1/2} ≥ I ⟹ B^{1/2}A⁻¹B^{1/2} = (B^{−1/2}AB^{−1/2})⁻¹ ≤ I ⟹ A⁻¹ ≤ B⁻¹.

2.1.2 Further properties of positive semidefinite matrices

We also need properties of products of two and three positive semidefinite matrices. The product of two positive semidefinite matrices is not always positive semidefinite.

(ix) A ≥ 0 and B ≥ 0 do not imply AB ≥ 0. For example,

A = [1 1; 1 1] (eigenvalues 0 and 2), B = [1 0; 0 2], AB = [1 2; 1 2],

where A ≥ 0 and B ≥ 0, but AB is not Hermitian, hence AB is not positive semidefinite.
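Property (ix), that a product of two positive semidefinite matrices need not be positive semidefinite, can be checked directly; a sketch with a 2×2 example of our own choosing:

```python
def mat_mul(A, B):
    # product of two square matrices given as nested lists
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 1.0],
     [1.0, 1.0]]  # eigenvalues 0 and 2: positive semidefinite
B = [[1.0, 0.0],
     [0.0, 2.0]]  # diagonal with positive entries: positive definite

AB = mat_mul(A, B)
print(AB)  # [[1.0, 2.0], [1.0, 2.0]]

# AB is not symmetric/Hermitian (AB[0][1] != AB[1][0]),
# hence it cannot be positive semidefinite
assert AB[0][1] != AB[1][0]
```

Note that A and B here do not commute, which by (xi) below is exactly why AB fails to be Hermitian.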

(x) If A ≥ 0, then A^{1/2} ≥ 0. Proof: y*A^{1/2}y = y*A^{1/4}A^{1/4}y = (A^{1/4}y)*(A^{1/4}y) ≥ 0, where A^{1/4} = (A^{1/2})^{1/2}.

(xi) If A ≥ 0 and B ≥ 0, then AB is Hermitian if and only if AB = BA. Proof: (AB)* = B*A* = BA.

(xii) If A ≥ 0 and B ≥ 0 and AB = BA, then A^{1/2}B^{1/2} = B^{1/2}A^{1/2}. Proof: we have A ≥ 0, B ≥ 0, and since A and B commute they are diagonalized by the same unitary matrix: A = U*CU and B = U*FU, where C and F are diagonal matrices, see [Zhang, p6]. Then A^{1/2}B^{1/2} = U*C^{1/2}UU*F^{1/2}U = U*C^{1/2}F^{1/2}U = U*F^{1/2}C^{1/2}U = U*F^{1/2}UU*C^{1/2}U = B^{1/2}A^{1/2}.

(xiii) If A ≥ 0 and B ≥ 0 and AB is Hermitian, then AB ≥ 0. Proof: AB Hermitian gives AB = BA by (xi), hence A^{1/2}B = BA^{1/2} (A^{1/2} commutes with everything that commutes with A), and y*ABy = y*A^{1/2}A^{1/2}By = y*A^{1/2}BA^{1/2}y = (A^{1/2}y)*B(A^{1/2}y) = x*Bx ≥ 0, with x = A^{1/2}y.

(xiv) A ≥ 0, B ≥ 0 invertible, C ≥ 0 and ABC Hermitian imply ABC ≥ 0. Proof: let F = (B^{1/2}AB^{1/2})(B^{1/2}CB^{1/2}) = B^{1/2}(ABC)B^{1/2}. F is Hermitian since ABC is Hermitian, see (ii). We have B^{1/2}AB^{1/2} ≥ 0 and B^{1/2}CB^{1/2} ≥ 0, see (iii). From (xiii) we have F ≥ 0, and finally from (iii) we have ABC = B^{−1/2}FB^{−1/2} ≥ 0.

(xv) If A ≥ 0, then λ_max(A) I ≥ A, where λ_max(A) is the largest eigenvalue of A. Proof: let B = U*AU, where B is diagonal and U is unitary. Then λ_max(A) I ≥ B, hence λ_max(A) I = U(λ_max(A) I)U* ≥ UBU* = A.

(xvi) If A ≥ 0, then A·A ≥ λ_min>(A)·A, where λ_min>(A) is the smallest positive eigenvalue of A. Proof: let B = U*AU, where B is diagonal and U is unitary. Then B·B − λ_min>(B)·B = diag(λ₁² − λ_min>(B)λ₁, …, λₙ² − λ_min>(B)λₙ) ≥ 0, since each λ_j is either 0 or at least λ_min>(B). Hence A·A ≥ λ_min>(A)·A.

(xvii) We define the Gram matrix for {y₁, y₂, …, yₙ} in a Hilbert space H in the following way:

G = | ⟨y₁, y₁⟩  ⟨y₂, y₁⟩  …  ⟨yₙ, y₁⟩ |
    | ⟨y₁, y₂⟩  ⟨y₂, y₂⟩  …  ⟨yₙ, y₂⟩ |
    |     ⋮          ⋮               ⋮     |
    | ⟨y₁, yₙ⟩  ⟨y₂, yₙ⟩  …  ⟨yₙ, yₙ⟩ |   (2.1)

We see that G is Hermitian, and

u*Gu = ‖β₁y₁ + β₂y₂ + ⋯ + βₙyₙ‖² ≥ 0

for all nonzero u = (β₁, β₂, …, βₙ)ᵀ, β_j ∈ C; that is, G ≥ 0. If y₁, y₂, …, yₙ are linearly independent, then G > 0.

(xviii) If we have a diagonal matrix D = diag(d₁, …, dₙ), where each d_j is a nonnegative real number, then u*Du = Σ_{j=1}^n |β_j|² d_j ≥ 0 for all nonzero u = (β₁, …, βₙ)ᵀ, β_j ∈ C; that is, D ≥ 0.

Remark 3. If G = | ⟨y₁, y₁⟩  ⟨y₂, y₁⟩ ; ⟨y₁, y₂⟩  ⟨y₂, y₂⟩ |, then det(G) ≥ 0 is the same as the Cauchy-Schwarz inequality.
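The Gram matrix construction (xvii) can be sketched in code: build G from inner products and verify that u*Gu equals ‖Σ_j β_j y_j‖², hence is nonnegative, for a sample coefficient vector (the helper names and sample data are ours):

```python
def inner(x, y):
    # standard inner product on R^3
    return sum(a * b for a, b in zip(x, y))

def gram(ys):
    # G[k][j] = <y_j, y_k> for a real system {y_1, ..., y_n}
    return [[inner(yj, yk) for yj in ys] for yk in ys]

ys = [[1.0, 0.0, 1.0], [0.0, 2.0, 1.0]]
G = gram(ys)

beta = [3.0, -1.0]
# u*Gu, written out with explicit sums
quad = sum(beta[k] * sum(G[k][j] * beta[j] for j in range(2)) for k in range(2))

# it equals ||beta_1 y_1 + beta_2 y_2||^2, hence is nonnegative
z = [3.0 * a - 1.0 * b for a, b in zip(ys[0], ys[1])]
assert abs(quad - inner(z, z)) < 1e-12
assert quad >= 0
```

Since the two vectors here are linearly independent, this G is in fact positive definite, matching the last claim of (xvii).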

2.2 Inequalities

In this section we will use the theory of positive semidefinite matrices to study inequalities. First we give a proof of a generalized Bessel inequality following [Akhiezer, Glazman]; then we use the same technique to give a new proof of Selberg's inequality. We conclude this section with a new generalization of Selberg's inequality, with a proof.

2.2.1 Generalized Bessel's inequality

We will need the following generalized Bessel inequality to prove Selberg's inequality in the next subsection.

Theorem 6 (Generalized Bessel's inequality). Let {y_j}_{j=1}^n be a linearly independent system in a Hilbert space H and let G be the corresponding Gram matrix. Then

v* G⁻¹ v ≤ ‖x‖² for all x ∈ H, where v = (⟨x, y₁⟩, ⟨x, y₂⟩, …, ⟨x, yₙ⟩)ᵀ. (2.2)

See [Akhiezer, Glazman, p4].

Proof. Let x = Σ_{j=1}^n α_j y_j + h, where α_j ∈ C, h ∈ H, h ⊥ y_j for any j ∈ {1, 2, …, n}. From

⟨x, y_k⟩ = ⟨Σ_{j=1}^n α_j y_j + h, y_k⟩ = Σ_{j=1}^n α_j ⟨y_j, y_k⟩, k = 1, …, n,

we have v = Gw, where w = (α₁, α₂, …, αₙ)ᵀ, and

‖x‖² = ⟨Σ_{j=1}^n α_j y_j + h, Σ_{k=1}^n α_k y_k + h⟩ = Σ_{j=1}^n Σ_{k=1}^n α_j conj(α_k) ⟨y_j, y_k⟩ + ⟨h, h⟩

Since $h \perp y_j$ for every $j$, the two cross terms vanish, so
\[
\|x\|^2 = w^* G w + \|h\|^2 \geq w^* G w.
\]
For a linearly independent system $\{y_j\}_{j=1}^n$ the matrix $G^{-1}$ exists, and we have
\[
v^* G^{-1} v = (G w)^* G^{-1} G w = w^* G G^{-1} G w = w^* G w \leq \|x\|^2.
\]
We have equality when $h = 0$.

If $y_j \perp y_k$ for $j \neq k$ and $\|y_j\| = 1$, then $G = I$ and the generalized Bessel inequality can be written as
\[
\sum_{j=1}^n |\langle x, y_j \rangle|^2 \leq \|x\|^2.
\]

Selberg's inequality

Here we give a new proof of Selberg's inequality based on the theory of positive semidefinite matrices.

Theorem 7 (Selberg's inequality). In a Hilbert space $H$,
\[
\sum_{j=1}^n \frac{|\langle x, y_j \rangle|^2}{\sum_{k=1}^n |\langle y_j, y_k \rangle|} \leq \|x\|^2 \quad \forall x \in H,\ y_j \in H,\ y_j \neq 0. \tag{3}
\]

Proof. We have
\[
\sum_{j=1}^n \frac{|\langle x, y_j \rangle|^2}{\sum_{k=1}^n |\langle y_j, y_k \rangle|} \leq \|x\|^2 \iff v^* D^{-1} v \leq \|x\|^2,
\]
where
\[
v = \big( \langle x, y_1 \rangle, \dots, \langle x, y_n \rangle \big)^T, \quad D = \mathrm{diag}(d_1, \dots, d_n), \quad d_j = \sum_{k=1}^n |\langle y_j, y_k \rangle|;
\]
see Appendix A. We will prove that $v^* D^{-1} v \leq \|x\|^2$. Let $x = \sum_{j=1}^n \alpha_j y_j + h$, where $\alpha_j \in \mathbb{C}$, $h \in H$ and $h \perp y_j$ for every $j \in \{1, 2, \dots, n\}$. From the proof of the generalized Bessel's inequality we have $\|x\|^2 = w^* G w + \|h\|^2$ and $v = G w$. Then
\[
v^* D^{-1} v \leq \|x\|^2 \iff (G w)^* D^{-1} G w \leq w^* G w + \|h\|^2
\]

\[
\iff w^* G^* D^{-1} G w \leq w^* G w + \|h\|^2 \iff w^* G D^{-1} G w \leq w^* G w + \|h\|^2.
\]
Hence it is sufficient to show that $G \geq G D^{-1} G$. First we will show that $D \geq G$. Using $|\langle y_j, y_k \rangle| = |\langle y_k, y_j \rangle|$, we have
\[
u^* G u = \Big\| \sum_{j=1}^n \beta_j y_j \Big\|^2 = \Big| \sum_{j=1}^n \sum_{k=1}^n \beta_j \bar{\beta}_k \langle y_j, y_k \rangle \Big|
\leq \sum_{j=1}^n \sum_{k=1}^n |\beta_j| |\beta_k| |\langle y_j, y_k \rangle|
\leq \sum_{j=1}^n \sum_{k=1}^n \frac{|\beta_j|^2 + |\beta_k|^2}{2} |\langle y_j, y_k \rangle|
= \sum_{j=1}^n |\beta_j|^2 \sum_{k=1}^n |\langle y_j, y_k \rangle|
= \sum_{j=1}^n |\beta_j|^2 d_j = u^* D u
\]
for all nonzero $u = (\beta_1, \dots, \beta_n)^T$, $\beta_j \in \mathbb{C}$. Hence $G \leq D$. Further,
\[
G - G D^{-1} G = G \big( I - D^{-1} G \big) = G \big( D^{-1} (D - G) \big).
\]
We have $G \geq 0$, $D^{-1} \geq 0$, $D^{-1}$ is invertible, $D - G \geq 0$, and $G - G D^{-1} G$ is Hermitian, so (xiv) is satisfied. Hence $G \geq G D^{-1} G$. $\square$

Remark 4. The proof of Selberg's inequality is valid for all systems $\{y_j\}$, including linearly dependent systems, which have singular Gram matrices.

Generalized Selberg's inequality

Here we introduce a new inequality based on the results from the proof of Selberg's inequality above.

Theorem 8 (Generalized Selberg's inequality). Let $H$ be a Hilbert space. If $y_1, \dots, y_n \in H$, $G$ is the Gram matrix for $y_1, \dots, y_n$, $E \geq G$, and $E$ is invertible, then
\[
v^* E^{-1} v \leq \|x\|^2 \quad \forall x \in H, \tag{4}
\]

where $v = \big( \langle x, y_1 \rangle, \langle x, y_2 \rangle, \dots, \langle x, y_n \rangle \big)^T$.

Proof. Let $x = \sum_{j=1}^n \alpha_j y_j + h$, where $\alpha_j \in \mathbb{C}$, $h \in H$ and $h \perp y_j$ for every $j \in \{1, 2, \dots, n\}$. From the proof of the generalized Bessel's inequality we have $\|x\|^2 = w^* G w + \|h\|^2$ and $v = G w$. Then
\[
v^* E^{-1} v \leq \|x\|^2 \iff (G w)^* E^{-1} G w \leq w^* G w + \|h\|^2 \iff w^* G E^{-1} G w \leq w^* G w + \|h\|^2.
\]
Hence it is sufficient to show that $G \geq G E^{-1} G$. Further,
\[
G - G E^{-1} G = G \big( I - E^{-1} G \big) = G \big( E^{-1} (E - G) \big).
\]
We have $G \geq 0$, $E^{-1} \geq 0$, $E^{-1}$ is invertible, $E - G \geq 0$, and $G - G E^{-1} G$ is Hermitian, so (xiv) is satisfied. Hence $G \geq G E^{-1} G$. $\square$

Remark 5. The inequality in Theorem 8 is called the generalized Selberg's inequality because by choosing
\[
E = \mathrm{diag}(d_1, \dots, d_n), \quad d_j = \sum_{k=1}^n |\langle y_j, y_k \rangle|,
\]
we recover Selberg's inequality; the proof of Theorem 7 shows that this choice satisfies $E \geq G$.
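Both Theorem 7 and Theorem 8 can be sanity-checked numerically on random data. A minimal sketch (an illustration, not part of the proofs), assuming NumPy, the convention $\langle u, w \rangle = w^* u$ on $\mathbb{C}^m$, and an arbitrary invertible $E \geq G$ built as $E = G + \tfrac{1}{2} I$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
Y = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # columns y_j
x = rng.normal(size=m) + 1j * rng.normal(size=m)

G = Y.conj().T @ Y            # Gram matrix, G[k, j] = <y_j, y_k>
v = Y.conj().T @ x            # v[j] = <x, y_j> = y_j^* x
d = np.abs(G).sum(axis=1)     # d_j = sum_k |<y_j, y_k>|

# Selberg (Theorem 7): v^* D^{-1} v = sum_j |<x, y_j>|^2 / d_j <= ||x||^2.
selberg = np.sum(np.abs(v) ** 2 / d)
assert selberg <= np.linalg.norm(x) ** 2 + 1e-12

# Generalized Selberg (Theorem 8): v^* E^{-1} v <= ||x||^2 whenever
# E >= G and E is invertible; here E - G = (1/2) I >= 0.
E = G + 0.5 * np.eye(n)
gen = (v.conj() @ np.linalg.solve(E, v)).real
assert gen <= np.linalg.norm(x) ** 2 + 1e-12
```

Any other choice of $E$ with $E - G \geq 0$ and $E$ invertible, for instance the diagonal matrix $D$ of Remark 5, passes the same check.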

Frames

In this section we show how the matrix approach developed in the preceding chapters can be used to obtain optimal frame bounds. We introduce a new notation for frame bounds; see page vii.

Definition and Cauchy-Schwarz upper bound

We will follow [Christensen] in this subsection.

Definition 2. A countable system of elements $\{y_j\}_{j \in J}$ in a Hilbert space $H$ is a frame for $H$ if there exist constants $0 < a \leq b < \infty$ such that
\[
a \|x\|^2 \leq \sum_j |\langle x, y_j \rangle|^2 \leq b \|x\|^2 \quad \forall x \in H. \tag{5}
\]
The constants $a$, $b$ are called frame bounds: $a$ is a lower frame bound and $b$ is an upper frame bound. Frame bounds are not unique. The optimal lower frame bound is the supremum over all lower frame bounds, and the optimal upper frame bound is the infimum over all upper frame bounds.

A frame can have infinitely many elements $\{y_j\}$; we will here assume that the frame has finitely many elements $\{y_j\}_{j=1}^m$. An important task is to estimate the frame bounds. We start with the upper bound. The first estimate is quite obvious: from the Cauchy-Schwarz inequality we have
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 \leq \sum_{j=1}^m \|y_j\|^2 \, \|x\|^2,
\]
which gives $b = \sum_{j=1}^m \|y_j\|^2$. Hence it follows from the Cauchy-Schwarz inequality that an upper bound always exists.

Upper bound

We will in this subsection use the generalized Selberg inequality to find the optimal upper frame bound.

Lemma 1. Let $\{y_j\}_{j=1}^m$ be a system of elements in a Hilbert space $H$ and let

$G$ be the corresponding Gram matrix. Then for all $x \in H$ we have
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 \leq \lambda_{\max}(G) \|x\|^2. \tag{6}
\]
Moreover, we have
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 \leq b \|x\|^2 \ \ \forall x \in H \implies \lambda_{\max}(G) \leq b. \tag{7}
\]

Proof. First we will prove (6). From (xv) we have $\lambda_{\max}(G) I \geq G$, and then (6) follows by applying the generalized Selberg inequality with $E = \lambda_{\max}(G) I$.

To prove (7), assume that $\sum_{j=1}^m |\langle x, y_j \rangle|^2 \leq b \|x\|^2$ for all $x \in H$. Choose $x = \sum_{j=1}^m \alpha_j y_j$ such that $G w = \lambda_{\max}(G) w$, where $w = (\alpha_1, \dots, \alpha_m)^T$. Then
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 = (G w)^* G w \quad \text{and} \quad \|x\|^2 = w^* G w.
\]
We have
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 \leq b \|x\|^2 \implies \big( \lambda_{\max}(G) \big)^2 w^* w \leq b \, \lambda_{\max}(G) \, w^* w \implies \lambda_{\max}(G) \leq b.
\]
(6) means that $\lambda_{\max}(G)$ is an upper bound; (6) and (7) together mean that $\lambda_{\max}(G)$ is the optimal upper bound.

Remark 6. We have not used any frame properties in describing the optimal upper bound: (6) and (7) hold for any finite system $\{y_j\}_{j=1}^m$.

From the Selberg inequality we have

3 FRAMES 9 m m x, y j max y j, y k,,m x (8) k= This implies λ max (G) max,,m m y j, y k (9) k= If G = [ ], then λ max (G) < max m k= y j, y k,,m Remark 7 b = max,,m m k= y j, y k is an upper frame bound for {y j } m that always exists It may not be optimal but it is easier to compute than λ max (G) 33 Lower bound To find the optimal lower bound we use the condition that {y j } m is a frame Proposition Let {y j } m be a sequence in a Hilbert space H Then {y j } m is a frame for span{y j} m Proof See [Christensen,p4] Corollary A system of elements {y j } m in a Hilbert space H is a frame for H if and only if span{y j } m = H Proof See [Christensen,p4] Lemma Let {y j } m be a frame for a Hilbert space H and let G be the corresponding Gram matrix Then for all x H we have λ min> (G) x m x, y j ()

where $\lambda_{\min>}(G)$ is the smallest positive eigenvalue of $G$. Moreover, we have
\[
a \|x\|^2 \leq \sum_{j=1}^m |\langle x, y_j \rangle|^2 \ \ \forall x \in H \implies a \leq \lambda_{\min>}(G). \tag{11}
\]

Proof. Let $x = \sum_{j=1}^m \alpha_j y_j$; such a representation exists since, by Corollary 1, $x \in \mathrm{span}\{y_1, \dots, y_m\}$ for every $x \in H$. With $w = (\alpha_1, \dots, \alpha_m)^T$ we have
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 = (G w)^* G w \quad \text{and} \quad \|x\|^2 = w^* G w.
\]
Then
\[
\lambda_{\min>}(G) \|x\|^2 \leq \sum_{j=1}^m |\langle x, y_j \rangle|^2
\iff \lambda_{\min>}(G) \, w^* G w \leq (G w)^* G w
\iff \lambda_{\min>}(G) \, w^* G w \leq w^* G^2 w,
\]
and we have $\lambda_{\min>}(G) \, G \leq G^2$, see (xvi). This proves (10).

Next, assume $a \|x\|^2 \leq \sum_{j=1}^m |\langle x, y_j \rangle|^2$ for all $x \in H$. Choose $x = \sum_{j=1}^m \alpha_j y_j$ such that $G w = \lambda_{\min>}(G) w$, where $w = (\alpha_1, \dots, \alpha_m)^T$. Then
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 = (G w)^* G w \quad \text{and} \quad \|x\|^2 = w^* G w.
\]
We have
\[
a \|x\|^2 \leq \sum_{j=1}^m |\langle x, y_j \rangle|^2 \implies a \, \lambda_{\min>}(G) \, w^* w \leq \big( \lambda_{\min>}(G) \big)^2 w^* w
\]

\[
\implies a \leq \lambda_{\min>}(G).
\]
(10) means that $\lambda_{\min>}(G)$ is a lower bound; (10) and (11) together mean that $\lambda_{\min>}(G)$ is the optimal lower bound.

Tight frames

Definition 3. A frame is a tight frame if (5) is satisfied with $a = b$, that is, if the optimal upper frame bound and the optimal lower frame bound are equal. $a$ is then called the frame bound.

When we have a tight frame $\{y_j\}_{j=1}^m$ in a Hilbert space $H$, (5) becomes
\[
\sum_{j=1}^m |\langle x, y_j \rangle|^2 = a \|x\|^2 \quad \forall x \in H. \tag{12}
\]

Proposition 2. Assume $\{y_j\}_{j=1}^m$ is a tight frame for a Hilbert space $H$ with frame bound $a$. Then
\[
x = \frac{1}{a} \sum_{j=1}^m \langle x, y_j \rangle \, y_j \quad \forall x \in H. \tag{13}
\]

Proof. See [Christensen, p. 5].

Theorem 9 (Casazza, Fickus, Kovačević, Leon, Tremain). Given an $n$-dimensional Hilbert space $H$ and a sequence of positive scalars $\{a_j\}_{j=1}^m$, there exists a tight frame $\{y_j\}_{j=1}^m$ for $H$ of lengths $\|y_j\| = a_j$ for all $j = 1, \dots, m$ if and only if
\[
\max_{j=1,\dots,m} a_j^2 \leq \frac{1}{n} \sum_{j=1}^m a_j^2. \tag{14}
\]

Proof. See [Casazza, Fickus, Kovačević, Leon, Tremain, p. 33].

Theorem 10. $\{y_j\}_{j=1}^m$ is a tight frame for a Hilbert space $H$ if and only if $\mathrm{span}\{y_j\}_{j=1}^m = H$ and $\lambda_{\min>}(G) = \lambda_{\max}(G)$, where $G$ is the corresponding Gram matrix.

Proof. Follows from Corollary 1, Lemma 1 and Lemma 2.

Examples of tight frames

This subsection presents four examples (a)-(d) of tight frames for $\mathbb{R}^3$, consisting of three, four, six and seven vectors respectively, each listed together with its Gram matrix $G$. We see that $\mathrm{span}\{y_j\}_{j=1}^m = \mathbb{R}^3$ and $\lambda_{\min>}(G) = \lambda_{\max}(G)$ in all four examples. By Theorem 10 we therefore have tight frames in all four examples.
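The tight-frame criterion used here ($\mathrm{span}\{y_j\} = H$ together with $\lambda_{\min>}(G) = \lambda_{\max}(G)$) can also be verified numerically. A minimal sketch, assuming NumPy and using the standard three-vector "Mercedes-Benz" tight frame for $\mathbb{R}^2$ from the frame literature (an assumed example, not one of the examples above), whose frame bound is $a = 3/2$:

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles
# (the columns y_1, y_2, y_3 of Y).
Y = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5, -0.5]])
G = Y.T @ Y                                  # Gram matrix

# Criterion: span{y_j} = R^2 and the smallest positive eigenvalue of G
# equals the largest eigenvalue.
eigs = np.linalg.eigvalsh(G)
pos = eigs[eigs > 1e-9]
assert np.linalg.matrix_rank(Y) == 2
assert np.isclose(pos.min(), pos.max())

# The common value is the frame bound a: sum_j <x, y_j>^2 = a ||x||^2.
a = pos.max()
rng = np.random.default_rng(2)
x = rng.normal(size=2)
assert np.isclose(a, 1.5)
assert np.isclose(np.sum((Y.T @ x) ** 2), a * np.linalg.norm(x) ** 2)
```

The eigenvalues of $G$ here are $0$, $3/2$, $3/2$: the zero eigenvalue reflects the linear dependence of three vectors in $\mathbb{R}^2$, which is why the criterion uses the smallest positive eigenvalue.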

Conclusion

We have shown that by using positive semidefinite matrices, a generalization of the nonnegative real numbers, we obtain a nontrivial generalization of the Selberg inequality.

Appendices

Appendix A

Here we show the first equivalence in the proof of Theorem 7:
\[
v^* D^{-1} v \leq \|x\|^2
\]
\[
\iff
\begin{pmatrix} \overline{\langle x, y_1 \rangle} & \overline{\langle x, y_2 \rangle} & \cdots & \overline{\langle x, y_n \rangle} \end{pmatrix}
\begin{pmatrix} \frac{1}{d_1} & & \\ & \ddots & \\ & & \frac{1}{d_n} \end{pmatrix}
\begin{pmatrix} \langle x, y_1 \rangle \\ \langle x, y_2 \rangle \\ \vdots \\ \langle x, y_n \rangle \end{pmatrix}
\leq \|x\|^2
\]
\[
\iff \frac{1}{d_1} \overline{\langle x, y_1 \rangle} \langle x, y_1 \rangle + \frac{1}{d_2} \overline{\langle x, y_2 \rangle} \langle x, y_2 \rangle + \cdots + \frac{1}{d_n} \overline{\langle x, y_n \rangle} \langle x, y_n \rangle \leq \|x\|^2
\]
\[
\iff \frac{|\langle x, y_1 \rangle|^2}{\sum_{k=1}^n |\langle y_1, y_k \rangle|} + \frac{|\langle x, y_2 \rangle|^2}{\sum_{k=1}^n |\langle y_2, y_k \rangle|} + \cdots + \frac{|\langle x, y_n \rangle|^2}{\sum_{k=1}^n |\langle y_n, y_k \rangle|} \leq \|x\|^2
\]
\[
\iff \sum_{j=1}^n \frac{|\langle x, y_j \rangle|^2}{\sum_{k=1}^n |\langle y_j, y_k \rangle|} \leq \|x\|^2.
\]
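The expansion above amounts to the identity $v^* D^{-1} v = \sum_j |v_j|^2 / d_j$ for a positive diagonal matrix $D$. A minimal numerical sketch (assuming NumPy, with an arbitrary vector $v$ standing in for $(\langle x, y_j \rangle)_j$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
v = rng.normal(size=n) + 1j * rng.normal(size=n)   # stands in for (<x, y_j>)_j
d = rng.uniform(1.0, 2.0, size=n)                  # positive diagonal entries d_j
D = np.diag(d)

# v^* D^{-1} v equals the componentwise sum  sum_j |v_j|^2 / d_j.
quad = (v.conj() @ np.linalg.inv(D) @ v).real
assert np.isclose(quad, np.sum(np.abs(v) ** 2 / d))
```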

References

Akhiezer N. I., Glazman I. M., Theory of Linear Operators in Hilbert Space, Volume I, Pitman, London, 1981.

Casazza P. G., Fickus M., Kovačević J., Leon M. T., and Tremain J. C., A Physical Interpretation for Finite Tight Frames, preprint submitted to Elsevier Science, October 2003. http://www.math.missouri.edu/~pete/pdf/85fiftf.pdf

Christensen O., An Introduction to Frames and Riesz Bases, Birkhäuser, Boston, 2003.

Folland G. B., Real Analysis: Modern Techniques and Their Applications, 2nd ed., Wiley-Interscience, 1999.

Furuta T., Invitation to Linear Operators: From Matrices to Bounded Linear Operators on a Hilbert Space, Taylor & Francis, London, 2001.

Gelbaum B. R., Olmsted J. M. H., Counterexamples in Analysis, Dover Publications, 2003.

Griffel D. H., Applied Functional Analysis, Ellis Horwood, 1985.

Lang S., Real and Functional Analysis, 3rd ed., Springer-Verlag, New York, 1993.

Schechter M., Principles of Functional Analysis, 2nd ed., American Mathematical Society, 2001.

Weidmann J., Linear Operators in Hilbert Spaces, Springer-Verlag, New York, 1980.

Zhang F., Matrix Theory: Basic Results and Techniques, Springer-Verlag, New York, 1999.