Spectral Properties of Random Unitary Band Matrices

Size: px
Start display at page:

Download "Spectral Properties of Random Unitary Band Matrices"

Transcription

1 Spectral Properties of Random Unitary Band Matrices by Brian Z. Simanek Mihai Stoiciu, Advisor A Thesis Submitted in partial fulfillment of the requirements for the Degree of Bachelor of Arts with Honors in Mathematics WILLIAMS COLLEGE Williamstown, Massachusetts May 09, 007

2 It is a pleasure to thank Professor Stoiciu for his patience and selflessness in being my thesis advisor. Second, I would like to thank Professor Silva for being my second reader and for his helpful suggestions. Finally, I would like to thank my parents, without whom I could never have completed this thesis. i

3 Contents List of Figures iii Preliminaries. Operator Spectra CMV Matrices Applications Introduction 7 3 Poisson Statistics for a New Distribution 3. Aizenman-Molchanov Bounds The Road Map Joye Matrices 3 4. Joye s Result The Carathéodory Function Alexandrov Measures Singular Spectrum Aronszajn-Donaghue Theory Perturbations Considered by Combescure Spectral Properties of Random Joye Matrices Semi-infinite Joye Matrices Geronimus Polynomials The Uniform Distribution Localization of Eigenfunctions 4 6. Aizenman s Theorem Further Work 48 ii

4 List of Figures. A new Distribution Eigenvalue Distribution showing Poisson Statistics iii

5 Bibliography

6 Chapter Preliminaries. Operator Spectra One of the most important results in analysis is the Spectral Theorem, which shows the connection between linear operators on a Hilbert Space H and measures. This fundamental result illustrates a strong connection between algebraic objects (i.e. linear operators) and analytic objects (i.e. measures). This connection is a powerful idea in mathematics as it allows us to use tools from one area of math to make conclusions in another area. Our most powerful results will be rooted in this connection. We begin with a statement of the Spectral Theorem for unitary operators (taken from [7]). Spectral Theorem. Let U be a unitary operator on a separable Hilbert Space H. Let ϕ be a cyclic vector for U in H. There exists a unique measure µ on the unit circle (or equivalently a measure on the interval [0, π)) such that for all n Z we have π ϕ, U n ϕ = z n dµ(z) = e inz dµ(z). D Since the polynomials are dense in the space of L functions, this allows us to conclude that the same statement is true for any L function of U. That is, for any f L we have ϕ, f(u)ϕ = f(z)dµ(z). In particular, we have D ϕ, (U wi) ϕ = D 0 z w dµ(z) for w C provided the inverse exists (and we will see that sometimes it does not). We call (U wi) the resolvent of U. When we say that µ is a measure on the unit circle, this is a generalization that applies to all unitary operators. Given a particular operator unitary U 0, we can say

7 CHAPTER. PRELIMINARIES 3 that its spectral measure will be supported on the spectrum of U 0, where the spectrum of an operator is defined as follows. Definition. The spectrum of an operator U is the set of all complex numbers z C such that (U zi) is not invertible. We denote this set by σ(u). Since the spectrum of a unitary operator is always contained in the unit circle, the spectral measure of a unitary operator is always a measure on the unit circle. The Spectral Theorem essentially tells us that we can obtain information about measures on the circle by studying the spectra of unitary operators. The following theorem of Verblunsky (several proofs of which can be found in [9]) shows us that this approach is even more meaningful that we might have originally anticipated. Verblunsky s Theorem. There exists a one to one correspondence between sequences of numbers in D and nontrivial (i.e. not supported on a finite set) probability measures on the unit circle. This correspondence can be realized in a natural way by using two objects that will be of central importance to us. The first is Orthogonal Polynomials on the Unit Circle (OPUC) and the second is CMV (Cantero, Moral, and Velázquez) matrices. The way we can see this correspondence is shown here. µ OP UC {α n } n 0 CMV µ Suppose we start with a measure µ on the unit circle and the set of polynomials {, x, x, x 3,...}. We can perform the Gram-Schmidt Orthogonalization process to this set (with respect to the L norm) to get a new sequence of orthogonal polynomials {, Φ, Φ, Φ 3,...}. These polynomials satisfy the recurrence relation where if then Φ n+ (z) = zφ n (z) α n Φ n(z) Φ n (z) = Φ n(z) = n b j z j j=0 n b n j z j. j=0 Each of the complex numbers α n, which we call Verblunsky coefficients, is contained in D. Conversely, if we start with a sequence {α n } n 0 D N, then we can fill in the CMV matrix shown here.

8 CHAPTER. PRELIMINARIES 4 C = ρ k = α k. ᾱ 0 ᾱ ρ 0 ρ ρ ρ 0 ᾱ α 0 ρ α ᾱ ρ ᾱ α ᾱ 3 ρ ρ 3 ρ... 0 ρ ρ ρ α ᾱ 3 α ρ 3 α ᾱ 4 ρ 3 ᾱ 4 α The CMV matrix is a five diagonal unitary matrix meaning all of the nonzero matrix elements are located on the main diagonal, the two diagonals below it, and the two diagonals above it and it is unitary. Since it is unitary, the spectral measure associated to this matrix will be supported on D so this gives us a nontrivial measure on the circle. Notice that if one of the α j has absolute value, then the matrix decouples. The upper left corner is then an n n unitary matrix, which we denote by C (n) and the eigenvalues of C (n) are exactly the zeros of the paraorthogonal polynomial Φ n (paraorthogonal means α n = ). Verblunsky s Theorem is a beautiful result. It puts four seemingly unrelated objects - measures, orthogonal polynomials, sequences in D, and CMV matrices - in one to one correspondence with one another. Therefore, by studying one of these objects, we can gain information about the other three. Furthermore, by means of this correspondence we can use tools meant for studying one object (measures for example) to study another (such as orthogonal polynomials). The primary focus of this work will be to study the spectral properties of CMV matrices and Joye matrices, another kind of unitary band matrix.. CMV Matrices Much expository work on CMV matrices has been done in [9]. We will summarize some of the important results and give a few examples. To begin, let us derive the form of the CMV matrix. For a given measure µ, let us start with the set {, z, z, z, z,...}, which forms a basis for L ( D, dµ). We can perform the Graham-Schmidt Orthogonalization process to get a new basis of orthogonal functions, which we denote by {χ 0, χ, χ,...}. The CMV matrix is defined by C ij (dµ) = χ i, zχ j that is, the CMV matrix is the multiplication by variable operator on the space L ( D, dµ) with respect to the basis {χ j } j 0. One of the most striking properties of a CMV matrix is that it can be factored. Let us define the matrix Θ j by

9 CHAPTER. PRELIMINARIES 5 ( ᾱj ρ Θ j = j ρ j ᾱ j ). Now we can define the matrices M and L by M = Θ Θ 3..., L = Θ 0 Θ Θ One can easily verfiy that C = LM. In our original construction of the CMV matrix, we could have started with the set {, z, z, z, z,...} and proceeded in the same way. This procedure would have resulted in a different basis of orthogonal functions, which we denote by {x j } j 0. It is interesting to note that The following example is taken from [9]. M ij (dµ) = x i, χ j, L ij (dµ) = χ i, zx j. Example. If we let dµ = dθ, that is, if we consider normalized Lebesgue measure π on the circle, then all of the Verblunsky coefficients are 0 (i.e. α j = 0 for all j 0). Therefore, ( ) 0 Θ j = 0 for all j 0 and it follows that M = , L = and therefore C =

10 CHAPTER. PRELIMINARIES 6 This particular example is called the free case indicating that the spectral measure corresponding to C has no pure points..3 Applications The spectral properties of unitary and self-adjoint operators are very important in Mathematical Physics. Here we briefly discuss one particular application of Joye matrices. In [], Blatter and Browne investigate a phenomenon called Zener Tunneling. Zener Tunneling is a model of the behavior of an electron in a metal ring that is threaded with a time dependent magnetic flux, which induces an electric current that increases linearly with time. The model used in [] shows that at certain times during this ramping up of the magnetic field, an electron in a bound state can tunnel out of its state into a neighboring state with probability t <. We can interpret t as a transmission coefficient, and define a reflection coefficient r so that r + t =. Therefore, the matrix that represents the time evolution of an electron in a bound state is S + = r rt t 0 0 t r rt rt r rt t 0 t rt r rt rt r t rt which is clearly a five diagonal unitary band matrix. In [], the authors give each row a random phase (i.e. row m is multiplied by e iθ m for some random θ m ) and perform numerical simulations to show that the eigenfunctions of this operator - which represent eigenstates to a physicist - are exponentially localized in energy space. The final result of this thesis is devoted to proving that this is in fact the case provided the distribution of the phases satisfies a set of weak conditions. Another application of these methods to physical systems can be found in []. In this paper, Combescure uses methods similar to those in [3] to study time dependent Hamiltonians in quantum systems. We will return to his work in more detail in Section

11 Chapter Introduction We now define some notation that we will use throughout the first three Chapters of this thesis. Define the space Ω as in [3] by Ω = {α = (α 0, α,..., α n, α n ) D(0, R) D(0, R)... D(0, R) D} with the probability measure P obtained by taking the product of the probability measure dµ on each D(0, R) and uniform Lebesgue measure on D. We will denote by C α (n) the truncated CMV matrix C (n) corresponding to some given α Ω. We will also use F kl (z, C α (n) ) = [(C α (n) + z)(c α (n) z) ] kl and G kl (z, C α (n) ) = [(C α (n) z) ] kl. Our first new result is in the spirit of the result presented in [3] and our proof follows the road map presented there. We would like to prove results about the spectral properties of random CMV matrices when the Verblunsky coefficients are randomized in a particular fashion. In [3], Stoiciu showed that that if the Verblunsky coefficients are i.i.d. random variables uniformly distributed in the disk of radius R <, then the asymptotic distribution of the eigenvalues of the corresponding CMV matrix is almost surely Poisson (i.e. they exhibit no correlation). The difference between this result and Stoiciu s result from [3] is that in [3], the probability distribution P of the Verblunsky coefficients was the product of uniform Lebesgue measure on D(0, R) whereas here, we will use the measure γ χrdλ(z) with 0 < σ, χ z σ r indicating that the support of the measure is the disk of radius r <, and γ σ is the normalization πr σ constant to make our measure a probability measure (we will keep this definition of γ throughout). Shown here in Figure. is the distribution for σ = 0.5 and r = 0.9. We will obtain the desired result by showing (steps taken from [3]). (Fractional Moment Estimates a.k.a. Aizenman-Molchanov bounds) For the probability space Ω defined above, and for any s (0, ), there exist constants C, D > 0 that depend only on s such that for any n > 0, and k, l satisfying 0 k, l n and any e iθ D we have E( F kl (e iθ, C (n) α ) s ) C e D k l. 7

12 CHAPTER. INTRODUCTION Figure.: A new Distribution.. (Localization of Eigenfunctions) There exists a constant D > 0 and for almost every α Ω there exists a constant C α > 0 such that for any unitary eigenfunction ϕ (n) α of C α (n), there exists a point m(ϕ (n) α ) with m(ϕ (n) α ) n such that for any m satisfying m m(ϕ (n) α ) D ln(n + ), we have ϕ (n) α (m) C α e (4/D ) m m(ϕ (n) α ) where we call the point m(ϕ (n) α ) the center of localization. 3. (Decoupling the Point Process) The point process ζ (n) = n k= δ (where the z (n) k collection {z (n) k } is the collection of eigenvalues of C(n) ) can be asymptotically approximated by the direct sum of point processes [ln n] p= ζ(n,p). That is, the distribution of the eigenvalues of C (n) can be asymptotically approximated by the distribution of the eigenvalues of the direct sum of smaller matrices. The motivation for considering the distribution pictured above is as follows. In [6], Killip and Stoiciu show that if the n th Verblunsky coefficient is chosen from the uniform distribution on the disk of radius n ν with ν > 0.5 then the asymptotic distribution of the eigenvalues of the corresponding CMV matrix is not Poisson, but is called clock. That is, the eigenvalues are asymptotically evenly spaced points on the unit circle. Since the Verblunsky coefficients decay to the origin in this result from [6], this motivated the idea of considering identically distributed Verblunsky

13 CHAPTER. INTRODUCTION 9 coefficients chosen from a distribution that is highly concentrated near the origin. However, since we consider Verblunsky coefficients that are identically distributed, given any small ɛ > 0, infinitely many of the Verblunsky coefficients will have norm greater than R ɛ with probability one. Therefore, even though the distribution we consider is highly concentrated near the origin, we do not have decaying coefficients as in [6]. This leads us to the following conjecture. Conjecture. If the Verblunksy coefficients are chosen randomly from the distribution dµ = σ dλ(z) πr σ z σ on the disk of radius r < with 0 < σ <. then the asymptotic distribution of the eigenvalues of the corresponding CMV matrix will exhibit poisson statistics. We consider only the cases 0 < σ < because for σ, the distribution is no longer normalizable. We prove that this conjecture is true for 0 < σ in Chapter 3. The proof of the above conjecture for 0 < σ reveals to us the connection between the spectral properties of the CMV matrix and the way in which we randomize the Verblunsky coefficients. In general, the proofs will rely on one or many of the following properties of the distribution of the Verblunsky coefficients:. A functional form of the distribution,. The rotation invariance of the distribution, 3. The support of the distribution, 4. The lack of atoms in the distribution. The second part of this thesis (Chapters 4, 5, and 6) is an attempt to weaken the dependence of our results on any of the last three conditions listed above; in particular, we will try to eliminate Property, rotation invariance. To do this, we will use a different type of random unitary band matrices that we will call Joye matrices. These matrices will be very useful tools for us since the matrix elements are randomized in a different way than the CMV matrix elements. A result by Joye et al. (see [3] and [4]) shows that under certain conditions, the expected value of the fractional moments of the matrix elements of the resolvent decays exponentially along the rows (i.e. Aizenman-Molchanov bounds) even if the distribution of the matrix elements is not rotation invariant. Chapter 4 gives a detailed explanation of the methods used by Joye et al. to obtain these estimates. Chapter 5 contains results about the spectra of Joye matrices when the parameters are randomized in certain ways. The main attraction of Joye s results is that the angular distribution of the random parameters does not have to be constant on T (where T is the compactification of R mod π). The major shortcoming of his results is that they do not go as far as to completely characterize the distribution of the eigenvalues of the random matrices

14 CHAPTER. INTRODUCTION Figure.: Eigenvalue Distribution showing Poisson Statistics. (e.g. Poisson statistics). Numerical plots suggest that the asymptotic distribution of the eigenvalues of random Joye matrices is (almost surely) Poisson when t is small. Shown here in Figure. is the Mathematica plot of 7 eigenvalues of a random 7 7 Joye matrix when the phases are chosen from the uniform distribution on [0, 5π). The results in Chapter 6 bring us one step closer to classifying the distribution of eigenvalues. We show that Aizenman s Theorem holds true for random semi-infinite Joye matrices when the distribution of the phases is any one of an enormous class of distributions. That is, if U + (ω) is a random semi-infinite Joye matrix, then if the phases are randomized appropriately, we have that E(sup [(U + (ω)) n ] jk ) K e γ j k n for appropriately chosen constants K, γ > 0 independent of the random variable ω. Then, following the methods in [3], we can show that with probability, the eigenfunctions of U + (ω) are exponentially localized. This is step towards proving the existence of Poisson statistics (see above).

15 Chapter 3 Poisson Statistics for a New Distribution 3. Aizenman-Molchanov Bounds It is our task to show that for any s (0, ), there exist constants C, D > 0 that depend only on s such that for any n > 0, and k, l satisfying 0 k, l n and any e iθ D we have E( F kl (e iθ, C α (n) ) s ) C e D k l. We proceed as in [3]. The first lemma we need is the following, taken from [3]. Lemma 3... [3] For any s (0, ) and any k, l satisfying k, l n and any z D D we have E( F kl (z, C α (n) ) s ) C where the constant C depends only on s. Outline of Proof: After fixing ρ (0, ), Kolmogorov s Theorem gives π 0 (ϕ, (C (n) α + ρe iθ )(C α (n) ρe iθ ) ) s dθ π C = The polarization identity then gives us π 0 F kl (ρe iθ, C α (n) ) s dθ π C = s cos( πs ) cos( πs ). Since the distribution if the α k is rotationally invariant, we can consider a new set of Verblunsky coefficients α k,θ = e i(k+)θ α k and use the reasoning in [3] to conclude that the function θ E ( Fkl (ρe iθ, C α (n) ) s ) is constant and the desired conclusion follows for z D. Properties of Hardy spaces cited in [3] allow us to conclude that for Lebesgue a.e. e iθ D, the radial limit of F kl (ρe iθ, C α (n) ) exists. The desired conlcusion for z D follows from Fatou s Lemma. Next we have the following proposition, which will be useful for us later.

16 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION Proposition 3... Consider the probability measure given by dµ α (ω) = γ χ Rdλ(ω) ω σ (where γ is just a normalization constant) supported on a disk of radius R < where 0 < σ <. Then log( ω )dµ α (ω) <. D(0,) Proof. We have that D(0,) log( ω )dµ α = γ = πγ π R 0 0 R 0 log(r) rdrdθ r σ log(r) dr. r σ Clearly this integral diverges if σ >. Now, suppose σ = δ. If δ = 0 we have R 0 R log(r) log(r) dr = dr r σ 0 r = R log (r) = so the integral diverges. If δ > 0 then we make the substitution y = r 0 to get R 0 log(r) r σ dr = = < R R C 0 R y ( log(y))y δ dy y log(y) y dy +δ +δ/ dy < for some constat C 0 so the integral converges as desired. Next we will prove that D(0,R) Let us define the region A as follows: log( ω )dµ α (ω) <. A = {z : < z R}. That is, A is the annulus with inner radius and outer radius the same as the support of dµ α (ω) (if R < then we define A = ).

17 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 3 Proposition 3... Let f(z) : D(0, R) R + be a positive rotationally symmetric integrable function such that there exists a constant C with 0 < f(z) < C for a.e. z A. Also suppose that log( z )f(z)dλ(z) <. Then D(0,R) D(0,R) log( z )f(z)dλ(z) <. This proposition says that as long as the distribution from which we are drawing the Verblunsky coefficients is bounded outside the disk of radius then it suffices to check that log( z )f(z)dλ(z) < D(0,R) to show the finiteness of both integrals. Proof. We clearly have that log( z )f(z)dλ(z) = D(0,R) + log( z )f(z)dλ(z) D(0, ) log( z )f(z)dλ(z). Since f(z) and log( z ) are both bounded and positive in the region A, their product is also bounded and positive, so the integral of their product over the region A is finite. Thus we have log( z )f(z)dλ(z) <. A To deal with the other part of the integral, note that A log( z )f(z) log( z )f(z) log( z ) log( z ) z z z. Therefore, log( z )f(z)dλ(z) < log( z )f(z)dλ(z) <. D(0, ) D(0, ) It follows immediately that as desired. D(0,R) log( z )f(z)dλ(z) <

18 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 4 From this, we get the following corollary Corollary 3... Consider the measure given by dµ α (ω) = γ χ Rdλ(ω) ω σ disk of radius R < where 0 < σ <. Then log( ω )dµ α (ω) <. D(0,) supported on a Proof. It is easily seen that with 0 < σ < satisfies the conditions of Lemma z σ 3.. and by Lemma 3.. we know that log( ω )dµ α (ω) <. D(0,) Therefore, as desired. D(0,) log( ω )dµ α (ω) < We can now proceed with the proof of the following lemma. Recall the definition of G j,j+k (z, C) from Chapter. Lemma [3] Let C = C α be the random CMV matrix associated to a family of Verblunsky coefficients {α n } n 0 with α n i.i.d. random variables distributied on the disk D(0, R) according to the distribution γ χ rdλ(z) with r <. Let s (0, ), z σ z D D, and j a positive integer. Then we have lim E( G j,j+k(z, C) s ) = 0. k Proof. The proof presented here follows that presented with Lemma 3.3 in [3]. The proof for z D is easy and is given in [3]. Now consider z = e iθ D. The transfer matrices corresponding to the CMV matrix are T n (z) = A(α n, z),..., A(α 0, z) where ( z α A(α, z) = ( α ) / αz and the Lyapunov exponent is ) γ(z) = lim n n log T n(z, {α n } (provided this limit exists). Observe that the common distribution dµ α of the Verblunsky coefficients is rotationally invariant and log( ω )dµ α (ω) < D(0,)

19 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 5 and D(0,) log( ω )dµ α (ω) < by 3.. and 3... By rotation invariance, the density of eigenvalues measure is just dθ and therefore π the logarithmic potential of this measure is identically zero. The Lyapunov exponent exists for every z = e iθ D and the Thouless formula gives γ(z) = log( ω )dµ α (ω). D(0,) It is easily seen that log( ω )dµ α (ω) > D(0,) D(0,) log( ω ) dθ πr (ω) > 0 where dθ (ω) represents the uniform distribution on D(0, R) and for the last inequality πr we used the result of [3]. It follows then that the Lyapunov exponent γ(e iθ ) is positive, and using the Ruelle-Osceledec Theorem, we conclude that there exists a constant λ for which ( ) lim T n(e iθ ) = 0. n λ From here, we use the same reasoning as in [3] (i.e. the theory of subordinate solutions) to conclude that for any j and almost every e iθ D, the desired conclusion follows as in [3]. lim G j,j+k(e iθ, C) = 0. k We will also need the following lemma from [3]. Lemma [3] For any fixed j, and s (0, ), and any z D, lim E( ( ) Gj,j+k z, C (n) α s) = 0. k, k n The key to the proof of this lemma is to form the matrix C (n) by decoupling the matrix C and apply the resolvent identity to (C C (n) ), a matrix with at most eight nonzero terms. Applying the same decoupling trick, we get the following lemma from [3]. Lemma [3] For any ɛ > 0, there exists a k ɛ 0 such that for any s (0, ) and k > k ɛ and n > 0 and 0 j (n ) and for any z D D, we have E ( ( ) Gj,j+k z, C (n) α s) < ɛ. Following the proof in [3], we have the following lemma. It is interesting to note that this lemma is where we most heavily rely on the fact that the support of our distribution is the disk of radius R <.

20 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 6 Lemma [3] Let C α (n) be exactly as defined before. Then, for any e iθ D and for any α Ω (defined above) where G(e iθ, C α (n) ) = (C α (n) e iθ ) exists, we have G kl (e iθ, C α (n) ) G ij (e iθ, C α (n) ) ( ) k i + l j. R The key to the proof of this lemma is the identity provided in [9], which shows that [ (C z) ] kl = { (z) (χ l (z)p k (z), k > l, or k = l = n, (z) (π l (z)x k (z), l > k, or k = l = n, where χ l (z) and x k (z) are orthogonal polynomials obtained from applying Gram- Schmidt to {, z, z, z,...} and {, z, z, z,...} respectively and p l (z) and π k (z) are analogs of the Weyl solutions of Golinskii-Nevai ([3]). Our next lemma is in the spirit of Lemma 3.8 from [3], but modified so that we can apply it to our measure dµ. Lemma For any constant s (0, ) and any constant β C and any σ [0, ] and any y [, ], we have dx x β s (x + y ) σ/ dx. x s (x + y ) σ/ Proof. Let β = β + iβ with β, β R and let us assume without loss of generality that β 0. Then dx = x β s (x + y ) σ/ dx (x β ) + β s/ (x + y ) σ/ dx. x β s (x + y ) σ/ If β = 0 then we are done. Otherwise, consider the expression = dx x s (x + y ) σ/ x β s x s dx. x β s x s (x + y ) σ/ dx x β s (x + y ) σ/ Notice that the expression g(x) x β s x s x β is positive on the interval [, β s x s /) and negative on the interval (β /, ]. Suppose that g(x) is negative at some x 0 = β / + δ. Then at x = β / δ the expression is positive, but equal in absolute value (that is g(x 0 ) = g(x ) ). However, >, that is, our measure x +y σ/ x 0 +y σ/ applies more weight to the positive part (recalling that y is fixed). Therefore, x +y σ/ if we let ɛ = β / then ɛ x β s x s x β s x s (x + y ) σ/ dx x β s x s dx 0 x β s x s (x + y ) σ/

21 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 7 since we have assumed that ɛ <. Therefore, we conclude that dx x s (x + y ) σ/ dx x s (x + y ) σ/ dx x s (x + y ) σ/ for any complex number β as desired. dx 0 x β s (x + y ) σ/ dx x β s (x + y ) σ/ dx x β s (x + y ) σ/ We will apply this lemma by means of the following corollary. First though, we must define two functions. For any D, E C, let us define f D,E : [, ] R + { } by f D,E (y) = dx x + yd + E s (x + y ) σ/ and let us define f : [, ] R + { } by f(y) = dx. x s (x + y ) σ/ With this notation, we have the following corollary. Corollary For any s (0, ) and any σ [0, ] and any y [, ] and any D, E C we have f D,E (y) f(y). Proof. Since y, D, and E are fixed, we can apply Lemma 3..7 with β = yd + E to get the desired conclusion. From this, we get the following corollary, which we will apply directly to achieve a desired lemma. Corollary For any s (0, ) and any σ [0, ] and any D, E C and with our definitions of f(y) and f D,E (y) as before, we have f D,E (y)dy f(y)dy. Proof. Corollary 3..8 shows that f D,E (y) f(y) on the interval [, ]. The desired conclusion follows immediately. It will also be helpful to have the following, the proof of which is straightforward. Proposition For any s (0, ) we have that π/ 0 csc(x) s dx < C s where C s is a constant that depends only on the value of s.

22 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 8 Now we can prove the desired lemma, which is a new version of Lemma 3.9 from [3]. Lemma For any s (0, ), and k satisfying k n, and any choice of α 0... α k, α k+..., α n, E( F kk (z, C α (n) ) s {α i } i k ) C for some constant C that may depend on s and σ. Proof. From Lemma 3.9 of [3], we know that the diagonal elements we wish to bound are given by (δ k, (C + z)(c z) e iθ + z δ k ) = e iθ z ϕ(eiθ ) dµ(e iθ ) D where µ is the measure associated with the collection of Verblunsky coefficients {α n } n 0 and {ϕ n } n 0 are the resulting orthonormal polynomials. Using the argument in [3], the Schur function associated to the measure ϕ(e iθ ) dµ(e iθ ) is g k (z) = C α k + C + α k C where C = f(z; α k, α k,..., α 0, ) C = zf(z; α k+, α k+,...) where f is the Schur function associated to the family of Verblunsky coefficients S (recalling Verblunsky s Theorem, which associates to every measure a sequence of Verblunsky coefficients). Then, using the reasoning in [3] we see that Fkk (z, C (n) α ) zc α k +C +α k C One important thing to note is that the constants z, C, C do not depend on α k and C, C and z <. We will use the above identity to bound E( F kk (z, C α (n) ) s {α i } i k ). To do this we consider the following s α D(0,R) zc k +C α +α k C k dλ(α k). σ By the same reasoning as in [3], it suffices to bound s sup α ω,ω D D(0,R) ω k +ω +α k ω α k dλ(α k). σ Clearly (by the same reasoning as in [3]) α ω k +ω 4 +α k ω + α k ω ω (α k + ω )..

23 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 9 For α k = x + iy and the same reasoning as in [3], we have + α k ω ω (α k + ω ) = x( ω +ω )+y( iω iω )+( ω ω ) where not all of ( ω +ω ), ( iω iω ), ( ω ω ) can be small. If ( ω + ω ) ɛ then D(0,R) α ω k +ω +α k ω s α k dλ(α k) 4s σ ɛ s 4s ɛ s 4s ɛ s = 4s = 4s R R R R π dxdy x + yd + E s x + y σ/ dxdy x + yd + E s x + y σ/ dxdy x s x + y σ/ 4s r drdθ ɛ s 0 0 cos(θ) s r s+σ ( ) s σ π ɛ s s σ 0 cos(θ) dθ s ( ( ) s σ ) π/ 4 csc(θ) s dθ ɛ s s σ 0 ( ) s σ ɛ s s σ C s, < 4s+ where we used Lemma 3..3 to obtain the constant C s. We attain the same bound for (ω + ω ) ɛ (replacing y with x when we invoke Corollary 3..9). If ( ω + ω ) ɛ and (ω + ω ) ɛ then x( ω + ω ) + y( iω iω ) + ( ω ω ) ( ɛ 4ɛ) It follows that s α D(0,R) ω k +ω +α k ω α k dλ(α k) σ ( ) s s+ dxdy ɛ 4ɛ x + y σ/ ( ) s s+ π drdθ ɛ 4ɛ 0 0 = s+3 π ( ) s. ɛ 4ɛ Therefore, we get the desired result with { C = max 4 s+ ( ) s σ C ɛ s s σ s, s+3 π ( ) } s ɛ 4ɛ completing the proof.

24 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION 0 It is interesting to note that the bound derived above is the reason we insist that 0 < σ since we want to apply the lemma to any s (0, ). In [3], Stoiciu shows that the above lemmas are sufficient to prove the desired result of obtaining Aizenman-Molchanov bounds for the fractional moments of the resolvent of CMV matrices. One key step in the proof of this theorem is to apply the resolvent identity twice using two different decoupled forms of the matrix C (n). The end result is that for fixed k, there exists an m 0 such that for any s satisfying s (k + m) we can find constants C and β independent of k with β < such that E ( (C z) ks s) Cβ where C is a decoupled [ ] CMV matrix. To find a bound for E( (C z) kl s ) we can repeat this process times, each time moving (m + 3) spots to the right from k to l to obtain l k m+3 E ( (C z) s ) Cβ (l k)/(m+3), which immediately gives the desired Aizenman-Molchanov bounds. 3. The Road Map kl Now that we have established our first desired result, we can proceed with the proof of Step : Exponential localization of the eigenfunctions. By the same reasoning as in [3], we can use the results of Step and apply Aizenman s Theorem for CMV matrices to conclude that there exist positive constants C 0 and D 0 depending on s such that E(sup (δ k, (C α (n) ) j δ l ) ) C 0 e D0 k l. j Z Using this result and the methods used in [3], we have the following lemma. Lemma 3... [3] For almost every α Ω there exists a constant C α > 0 such that for any n and any k, l satisfying k, l, n and k l D 0 ln(n + ), we have sup (δ k, (C α (n) ) j δ l ) C α e D 0 k l. j Z From this, we can follow the procedure given in [3] and complete the proof of Step. Indeed we have the following theorem. Theorem 3... [3] There exists a constant D > 0 and for almost every α Ω there exists a constant C α > 0 such that for any unitary eigenfunction ϕ (n) α of C α (n), there exists a point m(ϕ α (n) ) with m(ϕ (n) α ) n such that for any m Z satisfying m m(ϕ (n) α ) D ln(n + ), we have ϕ α (n) (m) C α e (4/D ) m m(ϕ (n) α ) where we call the point m(ϕ α (n) ) the center of localization.

25 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION Outline of Proof. Let e iθα be an eigenvalue of C α (n) and let ϕ (n) α be the corresponding eigenfunction. The sequence of functions defined by f M (e iθ ) = M + M j= M e ij(θ θα) is uniformly bounded by and converges pointwise to the characteristic function of the point e iθ α. By applying Lemma 3.. and the functional calculus to C α (n) we gett that (δ k, f M (C α (n) )δ l ) C α e D 0 k l. It follows that ϕ (n) α (k)ϕ (n) α (l) C α e D 0 k l. Now we can choose as center of localization the smallest integer m such that ϕ (n) α (m(ϕ (n) )) = max α m ϕ(n) α (m) and derive the desired conclusion. Now that we have completed the proofs of Steps and, we can proceed with the proof of the next step required to prove the existence of Poisson statistics in the asymptotic distribution of the eigenvalues. To do so, we must decouple the matrix into smaller matrices and use these smaller matrices to approximate the behavior of the original matrix. We begin by considering the matrix C (n) obtained in the same way as C (n) with the additional restriction that α [ n ln(n) ] = e iη, α [ n ln(n) ] = e iη,..., α n = e iη [ln(n)] (where [x] denotes the greatest integer less than or equal to x). Due to this additional restriction, the matrix C (n) decouples into approximately [ln(n)] smaller matrices. The main idea behind the proof is that this additional restriction that we have introduced to define the matrix C (n) creates a negligible error when trying to approximate the point process define by C (n) because the center of localization for the (n) eigenfunctions of both matrices usually avoids the regions at which the matrix C decouples. Using the methods in [3], we can make a rigorous statement about what we mean by usually. Therefore, when we replace C α (N) with C (N), our estimates concerning the point process acquire an error that becomes negligible as N, which is what we wanted to show. Finally, we can proceed with the proof of the existence of Poisson statistics. The previous results show that we can estimate the point process determined by a matrix C (N) n by considering the point processes of [ln(n)] matrices of size [ ] (denoted ln(n) C (N),... C (N) [ln(n)] ). Given an interval I N = (e i(θ 0+ πa N ), e i(θ 0+ πb N ), we can show that each of the matrices C (N),... C (N) [ln(n)] has at most one eigenvalue in the interval I N up to a negligible error. Here we provide an outline of the steps taken in [3] to give this result. If we let A(m, C, I) be the event C has at least m eigenvalues in the interval I and let M(e iθ ) be the event e iθ is an eigenvalue of C then we have the following lemma.

26 CHAPTER 3. POISSON STATISTICS FOR A NEW DISTRIBUTION Lemma [3] With C (N) and I N defined as before and for any e iθ I N we have P(A(, C (N), I N M(e iθ )) (b a). Using this result, we can make a very strong statement about the probability of having or more eigenvalues in I N. To this effect we have the following theorem. Theorem [3] Using the notation defined before, we have P(A(, C (N), I N )) (b a). Using this result, it follows (as is shown in [3]) that P(A(, C (N) k, I N )) = O([ln(n)] ) as n for all k satisfying k [ln(n)]. Therefore, the probability that the direct sum of all of the matrices C (N) k has two or more eigenvalues in the interval I N is [ln(n)]o([ln(n)] ) and therefore goes to zero as n. The importance of this result for us cannot be overstated. It allows us to model the occurrence of an eigenvalue of the matrix C (N) in the interval I N as a Bernoulli trial and we can therefore conclude that the asymptotic distribution of the eigenvalues of C (N) is almost surely Poisson. Finally, using our results from before, we know that the asymptotic distribution of the eigenvalues of C (N) is the same as that of C (N) and we have achieved the desired result.

27 Chapter 4 Joye Matrices 4. Joye s Result In Section.3, we mentioned the work of Blatter and Browne in [] involving a unitary band matrix with random phases to describe the time evolution of the energy state of an electron in a ring threaded by a time dependent magnetic flux. In [3], Joye proved several meaningful results about classes of such operators, including conditions under which we almost surely have Aizenman-Molchanov bounds and pure point spectrum. He studies these matrices in the following way. Define a probability space Ω and a map θ k by θ k : Ω T, s.t. θ k (ω) = ω k, k Z where T = R mod π. With this notation we can define a unitary operator U ω by and U ω = D ω S 0 with D ω = diag{e iθ k(ω) }... rt t r rt rt r S 0 = rt t t rt r rt rt r where the r entries are along the diagonal (this construction comes from [3]). For z D define H ω (z) = U ω (U ω z). In [3], Joye proved the following result. Theorem 4... [3] Let U ω be defined as above and suppose that {θ k (ω)} k Z is a collection of i.i.d. random variables and are distributed according to the probability measure dν(θ) = τ(θ)dθ, where τ(θ) L (T). Let s (0, ). There exists a t 0 (s) > 0 t rt... 3

28 CHAPTER 4. JOYE MATRICES 4 small enough and 0 < K(s) < such that if t < t 0 (s), there exists γ(s, t) > 0 so that for any j, k Z we have E( H ω (z) jk s ) K(s)e γ(s,t) j k. (There actually exists a stronger version of this theorem presented in [4], but we choose to work with this version because the proof can easily be generalized in a way that will eventually allow us to decouple this matrix as described in Chapter.) Clearly this result gives fractional moment estimates on the resolvent of the matrix U ω. In [3], Joye goes on to show that this implies that the spectrum of U ω is almost surely pure point as follows. If we let δ 0 be a cyclic vector associated to H ω then by the Spectral Theorem, there exists a measure µ ω such that δ 0 H ω (z)δ 0 = D e iθ e iθ z dµ ω(θ) J ω (z). Now, given a nontrivial probability measure µ on D, let us define a function G(z) as in [0] by dµ(θ) G(z) = e iθ z. A simple computation shows that if z D, we can write this as G(e iγ dµ(θ) ) = D 4 sin ( θ γ ). If we recall that the Poisson integral of a measure µ is given by z P [dµ](z) = dµ(θ) 0, z <, e iθ z then for r < we can write G(re iγ ) = P [dµ ω ](re iγ ) + D D D P [dµ ω ](re iγ ) + B ω (r, γ). r + r r cos(θ γ) dµ ω(θ) An application of the Monotone Convergence Theorem shows that B ω (γ) lim r B ω (r, γ) = G(e iγ ). In [3], Joye uses Aizenmann-Molchanov bounds on the fractional moments to show that B ω (γ) is finite for a.e. γ with respect to Lebesgue measure. Now let us define a perturbation of the matrix U ω by perturbing the matrix D ω. Let us define ˆD ω to be identical to D ω except that the (0, 0) entry has been changed to a and let us define Ĥω accordingly. Then we have the relation J ω (z) = Ĵ ω (z) Ĵ ω (z)( e iθ 0(ω) ) + e iθ 0(ω). (4.)

29 CHAPTER 4. JOYE MATRICES 5 Although it is natural to think of Ûω as a rank one perturbation of U ω, we can also think of U ω as a rank one perturbation of Ûω. We will in fact use this method of thought to apply the results of []. In that paper, Combescure proves the following theorem, which is essential for our purposes. Theorem 4... (Simon-Wolff Criterion)([0]; []) The following are equivalent:. For a.e. θ 0 (ω), the matrix U ω has only pure point spectrum,. For a.e. θ T, ˆB ω (θ) <. From this theorem, we can see that to understand the measure µ ω, it is essential that we discover the properties of ˆµ ω. Since the relationship between J ω (z) and Ĵω(z) is well understood, we can use the reasoning of [3] to conclude that ˆB ω (θ) < for a.e. θ T. Therefore, by the above theorem, we can conclude that the matrix U ω has only pure point spectrum as long as ω Ω 0 where Ω 0 Ω is a set of probability one. If we apply the same reasoning using the vector δ j, then we get the same result except we replace almost all values of θ 0 (ω) with almost all values of θ j (ω), or analogously, we get the desired result as long as ω Ω j where Ω j Ω is a set of probability one. Therefore, if ω j Z Ω j, we get the desired result, but j Z Ω j is a set of probability one, so we get the desired result almost always. 4. The Carathéodory Function In [9], Simon shows that if we are given a Carathéodory function F on D with Taylor expansion at z = 0 given by then F (z) = + F (z) = D c n z n n= e iθ + z e iθ z dµ(θ) where µ is a measure on D satisfying e inθ dµ(θ) = c n. He also states that the measure of an isolated point e iγ is given by ( ) r µ(e iγ ) = lim F (re iγ ) r D and so e iγ is a pure point of the measure µ if and only if this limit is nonzero. Similarly, in [3], Joye defines a similar function as follows. Given a measure µ on D, define the Joye function, J(z), by e iθ J(z) = e iθ z dµ(θ). D

30 CHAPTER 4. JOYE MATRICES 6 By the same reasoning as above we have µ(e iγ ) = lim r ( r)j(re iγ ). To understand the relationship between these two functions, we have the following theorem. Theorem 4... Given a probability measure µ on D, define the functions F and J as above. Then for any z D we have J(z) = F (z) +. This theorem is not a tremendously surprising result. Given the above formulae for µ(e iγ ), we would expect that J(z) F (z) if these quantities were to become very large. Proof. Clearly we have F (re iγ ) = J(re iγ ) + D re iγ dµ(θ). e iθ reiγ Let us examine this last term. We have re iγ dµ(θ) D e iθ reiγ = re iγ e iθ dµ(θ) D rei(γ θ) = re iγ e iθ ( + re i(γ θ) + (re i(γ θ) ) + (re i(γ θ) ) 3 + )dµ(θ) D = re iγ (e iθ + re i(γ θ) + r e i(γ 3θ) + r 3 e i(3γ 4θ) + )dµ(θ) D = (re i(γ θ) + r e i(γ θ) + r 3 e i(3γ 3θ) + r 4 e i(4γ 4θ) + )dµ(θ) D ( ) = r n e in(γ θ) dµ(θ) = = D n= ( r n e inγ n= c n (re iγ ) n n= = F (reiγ ) D e inθ dµ(θ) ) where we used the absolute convergence of the series to justify integrating it term by term. Therefore, F (re iγ ) = J(re iγ ) + F (reiγ )

31 CHAPTER 4. JOYE MATRICES 7 and we conclude that for 0 r < F (re iγ ) + = J(re iγ ) as desired. 4.3 Alexandrov Measures In [9], Simon introduces a particular family of perturbed measures called Alexandrov Measures that will be of particular importance for us. Before we can give a formal description, we need the following definition. Definition. If F is a Carathéodory function and Q is analytic from C r = {z Re(z) > 0} to itself with Q() = then Q(F (z)) is also a Carathéodory function. Let us define functions L, R L(z) = + z z L (z) = z z + R λ (z) = λz, λ D. The function L conformally maps C r to D with to 0 and R λ maps D to itself with R λ (0) = 0. Then Q λ = L R λ L is an analytic function from C r to itself. It is easily computed that ( λ) + ( + λ)z Q λ (z) = ( + λ) + ( λ)z. Therefore, given a Carathéodory function F, F (λ) (z) = ( λ) + ( + λ)f (z) ( + λ) + ( λ)f (z) is also a Carathéodory function for any λ D and we will call the associated family of measures Alexandrov measures and denote them by dµ λ (θ). 4.4 Singular Spectrum In this section, we will state and prove a result that is essential to understanding the Aronszajn-Donaghue Theory discussed in Section 4.5. We will prove that the support of the singular part of a measure µ s on D is contained in the set of all e iθ such that lim r P [dµ s ](re iθ ) =. This is not a new theorem, but it is presented here with an original proof. Simon, in [9], cites page 43 in [8] for this result. This may be a typo. The relevant lemma we will use here is Theorem 7.5 in [8] found on page 43 and says the following.

32 CHAPTER 4. JOYE MATRICES 8 Theorem Let µ be a positive Borel measure on R. Define Dµ(x) by Dµ(x) = lim n m(i n ) µ(i n ) where m denotes Lebesgue measure and {I n } n 0 is a sequence of intervals centered at x satisfying n 0 I n = {x}. If µ m then for µ-a.e. x. Dµ(x) = We will assume that this theorem applies replacing R with T and replacing m with σ (where σ = m is a normalized version of m). Notice that this theorem applies π to a.e. x with respect to the measure µ. It is obvious that Dµ(x) = if x is a pure point of the measure µ. The above theorem says that, in some sense, the support of a singular measure can in some ways be treated as pure points. Now we can proceed with our main result, but first we need the following lemma. Lemma Given any integer n 4, there exists r n < r n < such that for all r (r n, r n) we have r ( + r r cos( > πn. )) n Furthermore, Proof. Consider the equation lim n r n =. r = πn( + r r cos(/n)). We see that when r = 0, the left side is smaller than the right side. If we take the limit as r, again we see that the left side is smaller than the right side. Thus, if there exists some r (0, ) such that the left side is larger than the right side, then there would have to be an interval (r n, r n) (0, ) such that for all r (r n, r n), the left side is larger than the right side. It is easily checked that if r = cos(/n) then r πn( + r r cos(/n)) = cos (/n) πn( + cos (/n) cos (/n)) = sin (/n) πn(sin (/n)) = sin (/n) πn sin 4 (/n) > 0 as long as n 4. Furthermore, since it must be the case that as desired. lim cos(/n) = n lim n r n =

33 CHAPTER 4. JOYE MATRICES 9 Now we have our desired result. Theorem If e iγ is in the support of the singular part of a measure µ on D then lim r P [dµ s ](e iγ ) = where P [dµ s ](re iγ ) = D r e iθ re iγ dµ s(θ). Proof. Let e iγ supp(µ s ). For a given value of r <, define the intervals I n of the form I n = (e i(γ n ), e i(γ+ n ) ). Define y n by Then P [dµ s ](re iγ ) = y n = min z I n { z re iγ }. r D e iθ re iγ dµ s(θ) r In e iθ re iγ dµ s(θ) y n ( r )dµ s (θ) I n where we will choose n later. Clearly y n will be the distance from re iγ to the endpoints of the interval I n. Therefore, y n = e i(γ n ) re iγ = ( + r r cos( n )) and σ(i n ) = ( ) =. It follows then that the following are equivalent π n πn ( r )y n > σ(i n ) r ( + r r cos( > πn )) n r (r n, r n) by Lemma Then, by what we derived before, we have P [dµ s ](re iγ ) y n ( r )dµ s (θ) I n > I n σ(i n ) dµ s(θ) = µ s(i n ) σ(i n )

34 CHAPTER 4. JOYE MATRICES 30 for every n such that r (r n, r n). As r gets closer to, this is true for larger n, so the conclusion follows from Theorem Aronszajn-Donaghue Theory Many of the results of this section are taken from [0], though an equivalent form of the result is given in [3] and the result of Section 4. will help explain the connection. The Aronszajn-Donaghue Theory is a very thorough description of the supports of the pure point, singular continuous, and absolutely continuous parts of a measure. The first result we need is the following theorem taken from [0]. Theorem [0] Given a nontrivial probability measure µ on D, we have the following:. lim r Re(F (re iγ )) r = G(e iγ ). If G(e iγ ) <, then F (e iγ ) exists and is pure imaginary. 3. If G(e iγ ) <, then lim r F (e iγ ) F (re iγ ) r = G(e iγ ). We will adapt the proof presented it [0] to achieve a similar result for the Joye function. Theorem Given a nontrivial probability measure µ on D, we have the following:. lim r Re(J(re iγ )) r = G(e iγ ). If G(e iγ ) <, then J(e iγ ) exists and is of the form + iy with y R. 3. If G(e iγ ) <, then lim r F (e iγ ) F (re iγ ) r = G(e iγ ). Proof. For part we use the relation given in Theorem 4.. and the result of Theorem 4.5. part (i). We have G(e iγ ) = Re(F (re iγ )) lim r r = ReJ(re iγ )) lim r r as desired. For part, we note that if G(e iγ ) is finite, then part shows that ReJ(e iγ )) = 0 so ReJ(e iγ )) =. Similarly, using the results of Theorem 4.5., we see that F (eiγ ) exists and is pure imaginary. Since J(e iγ ) = (F (eiγ ) + ) we get that J(e iγ ) exists and has real part equal to as desired.

35 CHAPTER 4. JOYE MATRICES 3 For part 3, we have ( ) e iθ = r e iθ re iγ e i(θ γ) (e i(θ γ) r) e iθ e iγ as r. Thus, the Dominated Convergence Theorem shows that if G(e iγ ) is finite then lim r r J(reiγ ) G(e iγ ). This implies J(e iγ ) J(re iγ ) = and this in turn implies the desired relation. Using Theorem 4.5. we have the following. r r J(ρeiγ )dρ Theorem [0] Given a nontrivial probability measure µ on D, if e iγ is a pure point of µ λ for λ, then G(e iγ ) <, and F (e iγ ) = λ + λ. Conversely, if G(e iγ ) < and λ is defined by F (e iγ ) as above then e iγ is a pure point of µ λ. Now we will again use the reasoning of [3] backwards and consider U ω as a perturbation of Ûω. Recall that in order to get Ûω from U ω, we changed the (0, 0) entry of the matrix D ω from e iθ 0(ω) to. Therefore, to get back to U ω, we simply undo this change. Using the reasoning of [3], we get or equivalently Ĵ ω (z) = Ĵ ω (z) = J ω (z)(e iθ 0 ) (e iθ 0 )Jω (z) + J ω (z) ( e iθ 0 )Jω (z) + e iθ 0. We see that this is the same relation derived in Section 4. with J ω and Ĵω switched and e iθ 0 replacing e iθ 0. Using Theorem 4.5. and a similar argument to the proof of Theorem 0..3 in [0], we get the following. Theorem Given a nontrivial probability measure µ ω on D, if e iγ is a pure point of ˆµ ω (where we perturb the measure as in Section 4., Equation 4.), then G(e iγ ) <, and J(e iγ ) = e iθ 0 e iθ 0. Conversely, if G(e iγ ) < and e iθ 0 point of ˆµ ω. is defined by J(e iγ ) as above then e iγ is a pure

36 CHAPTER 4. JOYE MATRICES 3 Proof. Suppose e iγ is a pure point of ˆµ ω. By what we said in Section 4., we have that ˆµ ω (e iγ ) = lim r ( r)ĵω(re iγ ). Clearly lim r ( r) = 0 so if the above limit is nonzero, it must be that lim r Ĵ ω (re iγ ) =. Recall also that J ω (z) Ĵ ω (z) = ( e iθ 0 )Jω (z) + e. iθ 0 From this it follows that Using this, we can write J(e iγ ) lim r J(re iγ ) = e iθ0 e iθ 0. lim( r)ĵω(re iγ J ω (re iγ ) ) = lim( r) r r ( e iθ 0 )Jω (re iγ ) + e iθ 0 e iθ 0 e iθ 0 = lim( r) r ( e iθ 0 )(Jω (re iγ ) J ω (e iθ 0 )) e iθ 0 = lim( r) r ( e iθ 0 ) (J ω (re iγ ) J ω (e iγ )) e iθ 0 = ( e iθ 0 ) G(e iγ ) using part (iii) of Theorem Since this quantity is nonzero, we get that G(e iγ ) < as desired. The proof of the converse is similar. Now we can use all of our results to get the following. Theorem Given a nontrivial probability measure µ and e iθ 0 ( D \ {}, define the perturbed measure ˆµ so that the relation Ĵ(z) = J(z) ( e iθ 0 )J(z) + e iθ 0 holds for all z D. Then ˆµ is the Alexandrov measure µ λ with λ = e iθ 0. Proof. Notice that Ĵ(z) = ˆF (z) + = J(z) ( e iθ 0 )J(z) + e iθ 0 (F (z) + ) ( e iθ 0 )(F (z) + ) + e iθ 0 ˆF (z) = F (z) + F (z) + e iθ 0 F (z) + e iθ 0 e iθ 0 ) ( + e iθ 0 ) + ( e iθ 0)F (z) ˆF (z) = ( e iθ 0 ) + ( + e iθ 0 )F (z) ( + e iθ 0 ) + ( e iθ 0)F. (z)

37 CHAPTER 4. JOYE MATRICES 33 It follows that ˆF (z) is the Carathéodory function corresponding to the perturbed measure µ λ with λ = e iθ 0. Since the correspondence between nontrivial measures on D and Carathéodory functions is one to one, the result follows. The intuition behind this result is that the approach taken in [3] can in some ways be thought of as Simon s approach in [0] done backwards. Simon uses properties of a measure µ to get information about the spectrum of a family of perturbed measures. Joye uses information about the perturbed family to characterize the spectrum of the original measure. 4.6 Perturbations Considered by Combescure In [], Combescure studies the resolvent of a unitary operator U by studying the behavior of the function C(z) defined on D by dµ(θ) C(z) = e iθ z D where µ is the spectral measure corresponding to the operator U and some cyclic vector. We would like to gain a better understanding of the spectral measure µ and would therefore like to apply the results from [0], all of which are given in terms of the Carathéodory function F associated to the measure µ. To help us understand the relationship between F (z) and C(z) we have the following proposition. Proposition Given a nontrivial probability measure µ on D, define the functions F (z) and C(z) as above. With this definition we have that for all z D, zc(z) = F (z). Proof. In the proof of Theorem 4.., we showed that for 0 r <, re iγ e iθ re dµ(θ) = F (reiγ ) iγ and it is clear that D (re iγ )C(re iγ ) = The identity follows immediately. D re iγ dµ(θ). e iθ reiγ In [9], Simon shows that if F is the Carathéodory function associated to the measure µ then F (z) = + c n z n where c n = D n= e inθ dµ(θ).

Spectral Theory of Orthogonal Polynomials

Spectral Theory of Orthogonal Polynomials Spectral Theory of Orthogonal Polynomials Barry Simon IBM Professor of Mathematics and Theoretical Physics California Institute of Technology Pasadena, CA, U.S.A. Lecture 1: Introduction and Overview Spectral

More information

Spectral Theory of Orthogonal Polynomials

Spectral Theory of Orthogonal Polynomials Spectral Theory of Orthogonal Barry Simon IBM Professor of Mathematics and Theoretical Physics California Institute of Technology Pasadena, CA, U.S.A. Lecture 2: Szegö Theorem for OPUC for S Spectral Theory

More information

Zeros of Polynomials: Beware of Predictions from Plots

Zeros of Polynomials: Beware of Predictions from Plots [ 1 / 27 ] University of Cyprus Zeros of Polynomials: Beware of Predictions from Plots Nikos Stylianopoulos a report of joint work with Ed Saff Vanderbilt University May 2006 Five Plots Fundamental Results

More information

Boundary behaviour of optimal polynomial approximants

Boundary behaviour of optimal polynomial approximants Boundary behaviour of optimal polynomial approximants University of South Florida Laval University, in honour of Tom Ransford, May 2018 This talk is based on some recent and upcoming papers with various

More information

ORTHOGONAL POLYNOMIALS WITH EXPONENTIALLY DECAYING RECURSION COEFFICIENTS

ORTHOGONAL POLYNOMIALS WITH EXPONENTIALLY DECAYING RECURSION COEFFICIENTS ORTHOGONAL POLYNOMIALS WITH EXPONENTIALLY DECAYING RECURSION COEFFICIENTS BARRY SIMON* Dedicated to S. Molchanov on his 65th birthday Abstract. We review recent results on necessary and sufficient conditions

More information

Orthogonal Polynomials on the Unit Circle

Orthogonal Polynomials on the Unit Circle American Mathematical Society Colloquium Publications Volume 54, Part 2 Orthogonal Polynomials on the Unit Circle Part 2: Spectral Theory Barry Simon American Mathematical Society Providence, Rhode Island

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information
