Random matrices and the Riemann zeros

Bruce Bartlett

Talk given in the Postgraduate Seminar on 5th March 2009.

Abstract. Random matrices and the Riemann zeta function came together during a chance meeting between Hugh Montgomery and Freeman Dyson over tea at Princeton in 1972. Dyson observed that Montgomery's conjecture for the correlation between pairs of zeros of the zeta function was precisely the same as the known correlation between eigenvalues of large random Hermitian matrices. I will try to explain how these formulas are derived, at least on the random matrix side of things.

1 The Riemann zeta function

In 1859, Riemann outlined the basic analytic properties of the zeta function ζ(s), defined for Re s > 1 as

  ζ(s) = Σ_{n=1}^∞ 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ⋯
       = (1 + 1/2^s + 1/4^s + ⋯)(1 + 1/3^s + 1/9^s + ⋯)(1 + 1/5^s + 1/25^s + ⋯)
       = Π_p (1 − p^{−s})^{−1},

and analytically continued to the entire complex plane thereafter (except for a simple pole at s = 1). The zeta function vanishes trivially at s = −2, −4, −6, ..., and all other zeros are known to lie in the critical strip 0 < Re s < 1. In this regard we have the famous Riemann hypothesis:

  All nontrivial zeros of ζ(s) occur on the critical line Re s = 1/2.

The critical strip and the position of the first few zeros of ζ(s) are shown in Figure 1.
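The equality between the Dirichlet series and the Euler product can be checked numerically. Here is a small sketch (the helper names are mine) comparing both truncations at s = 2 against the known value ζ(2) = π²/6:

```python
# Compare a truncated Dirichlet series and a truncated Euler product for zeta(s).
# Both should approach zeta(2) = pi^2/6 ~ 1.6449.
import math

def small_primes(limit):
    """Primes up to `limit`, by a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def zeta_sum(s, terms=100000):
    """Truncation of the Dirichlet series sum_{n=1}^{terms} n^{-s}."""
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_product(s, limit=100000):
    """Truncation of the Euler product over primes p <= limit of (1 - p^{-s})^{-1}."""
    value = 1.0
    for p in small_primes(limit):
        value /= 1.0 - p ** -s
    return value

print(zeta_sum(2.0), zeta_product(2.0), math.pi ** 2 / 6)
```

At s = 2 both truncations agree with π²/6 to several decimal places, which is a direct numerical echo of unique factorization.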

Figure 1: A sketch of the complex plane, showing the critical strip, the trivial zeros and the first few nontrivial zeros of the Riemann zeta function. Taken from [6].

2 Counting primes

The zeros of the zeta function are important because they encode the distribution of prime numbers. To see this, define the prime number counting function as

  π(x) = number of primes less than or equal to x.

As a young boy of 16, Gauss constructed tables of prime numbers and made the observation that the density of primes near a number x was proportional to 1/log x, leading him to conjecture that

  π(x) ~ ∫_2^x dt/log t.   (1)

This statement is known as the Prime Number Theorem, and was finally proved (independently) by de la Vallée Poussin and Hadamard in 1896. The idea of the proof is roughly as follows. Instead of counting only the primes, one finds it more convenient to listen also to their higher harmonics and count powers of primes as well, via the Chebyshev function

  ψ(x) = Σ_{p^n ≤ x} log p.

It is not hard to show that from ψ one can recover π, and vice versa, and that in terms of ψ the statement of the Prime Number Theorem (1) becomes

  ψ(x) ~ x.   (2)
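The Chebyshev function is easy to tabulate for small x, and one can watch the ratio ψ(x)/x drift toward 1. A small sketch (the sieve and function names are mine):

```python
# Compute the Chebyshev function psi(x) = sum over prime powers p^n <= x of log p
# and compare it with x, as in the statement psi(x) ~ x.
import math

def psi(x):
    """Chebyshev function: one summand log p for each prime power p^n <= x."""
    limit = int(x)
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    total = 0.0
    for p in range(2, limit + 1):
        if sieve[p]:
            pk = p
            while pk <= limit:      # add log p once for each of p, p^2, p^3, ...
                total += math.log(p)
                pk *= p
    return total

for x in (10 ** 2, 10 ** 3, 10 ** 4):
    print(x, psi(x) / x)
```

Already at x = 10^4 the ratio is within about one percent of 1.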

Using complex integration, one finds an exact formula for ψ, namely

  ψ(x) = x − Σ_n x^{s_n}/s_n − (1/2) log(1 − 1/x²) − log 2π,   (3)

where the sum runs over the nontrivial zeros s_n of the zeta function ζ(s). Since the last two terms above are negligible for large x, and since it is known that all the nontrivial zeros s_n lie in the critical strip, to prove (2) it suffices to exclude the possibility that a zero can have real part equal to 1 (for then the main term in the exact formula (3) will indeed be simply x, which is what we needed to show). This is the hard part of the proof, and it is what de la Vallée Poussin and Hadamard showed.

In any event, the exact expansion (3) gives us deeper insight into the Riemann hypothesis. If every nontrivial zero lies on the critical line Re s = 1/2, then we could write the zeros as s_n = 1/2 + i b_n where b_n is real. Thus all the correction terms in the exact formula (3) would have a common envelope of √x, and we could write the expansion schematically as the dominant term x plus correction terms which oscillate logarithmically at higher and higher frequencies:

  ψ(x) ≈ x − Σ_n (2√x / √(a_n² + b_n²)) cos(b_n log x − φ_n),

where a_n = Re s_n = 1/2 and φ_n is a phase. [Schematic figure: ψ(x) drawn as the straight line x plus successively smaller oscillating corrections.]

Roughly speaking, this means that the Chebyshev function ψ(x) vibrates only at certain fixed frequencies; in other words, the Riemann hypothesis is the statement that there is music in the primes. Were one of the zeros not to lie on the critical line Re s = 1/2, we would not be able to take out √x as a common factor for the correction terms, and ψ(x) would contain all frequencies: the primes would be noisy and not harmonious.

3 Montgomery's conjecture

Having established the importance of the zeros of the zeta function, we now ask ourselves: how are the heights b_n of these zeros distributed along the

critical line? Are the heights b_n just a random sequence of numbers, or do they follow a certain pattern? Staring at the exact formula (3), we see that knowledge of the distribution of the zeros will tell us how fast the number of primes π(x) approaches the value predicted by the Prime Number Theorem; in other words, it will tell us the error in Gauss's guess for finite x.

Now it is known that, unlike the distribution of the primes, the density of zeros of the zeta function increases logarithmically with height t up the critical line. To study their statistics, one should therefore define unfolded zeros w_n by setting

  w_n = (b_n / 2π) log(b_n / 2π).

Then it is known that

  lim_{N→∞} (1/N) #{w_n < N} = 1.

In Figure 2 the first thirty unfolded zeros of the zeta function have been plotted, together with a few other sequences of numbers, all appropriately unfolded so that the average spacing is one. (In what follows, we shall for simplicity's sake assume that the zeros do all lie on the critical line, though this is not strictly necessary.)

Figure 2: (a) Sequence of thirty random numbers. (b) Sequence of the first thirty unfolded zeros of the zeta function. (c) Sequence of the first thirty primes. (d) Sequence of thirty equally spaced numbers.

Notice how these

sequences do not look similar. For a sequence of random numbers, there are many large gaps, as well as pairs or triples of points which lie very close together. In contrast, the unfolded zeros of the zeta function are more or less equally spaced.

We would like to compute the correlations between pairs of zeros of the zeta function. Given a zero at a certain position, what is the likelihood of finding another zero at a nearby position? Define

  F_ζ^N(α, β) = (1/N) #{w_n, w_m ∈ [0, N] : α ≤ w_n − w_m < β},

so that F_ζ^N(α, β) counts the percentage of pairs of the first N unfolded zeros whose gap lies between α and β. We are interested in the limiting value

  F_ζ(α, β) = lim_{N→∞} F_ζ^N(α, β).

In 1972 a PhD student at Cambridge called Hugh Montgomery arrived at the following conjecture regarding these correlations:

  F_ζ(α, β) = ∫_α^β [1 − (sin πx / πx)²] dx.   (4)

The integrand is known as the pair correlation function and is plotted in Figure 3. Let us try to understand what the conjecture is saying. Firstly, the function goes to zero at the origin, so there is very little chance of two zeros being found close together: zeros apparently repel one another. Secondly, there are wiggles in this function that peak at integer values: zeros apparently prefer to be one unit apart. It seems that the zeros prefer to arrange themselves regularly; they have a crystal structure. In Figure 3, Montgomery's conjecture is plotted against the actual data of 70 million zeros near 10^20; clearly the curve fits extremely well. This is all in stark contrast to what would happen for a sequence of random numbers: since they are chosen independently, it would be just as likely to find a pair of them far apart as to find them close together. There would be no correlations at all.

Montgomery was only able to prove his conjecture for low frequencies; that is, that the Fourier transform of the actual pair correlation of the zeros and the Fourier transform of the conjectured pair correlation function in (4) coincide for frequencies |τ| < 1.
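As an aside, the conjectured F_ζ(α, β) is an elementary one-dimensional integral, so it is easy to evaluate by simple quadrature (the function names here are mine):

```python
# Evaluate F(alpha, beta) = integral over [alpha, beta] of 1 - (sin(pi x)/(pi x))^2 dx
# by midpoint quadrature.  Near 0 the integrand vanishes (zeros repel);
# far from 0 it is close to 1 (no residual correlation).
import math

def pair_correlation(x):
    """Montgomery's pair correlation density 1 - (sin(pi x)/(pi x))^2."""
    if x == 0.0:
        return 0.0                  # limiting value at x = 0
    s = math.sin(math.pi * x) / (math.pi * x)
    return 1.0 - s * s

def F(alpha, beta, steps=10000):
    """Midpoint-rule approximation to the integral of pair_correlation."""
    h = (beta - alpha) / steps
    return h * sum(pair_correlation(alpha + (k + 0.5) * h) for k in range(steps))

print(F(0.0, 0.1), F(5.0, 6.0))  # tiny near the origin, roughly 1 far from it
```

The value of F(0, 0.1) is of order 10^{-3}, quantifying the repulsion between nearby zeros, while F over a unit interval far from the origin is essentially 1.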
Yet it seems to be correct: not only is there excellent numerical agreement, but it can be shown to follow exactly if one assumes a conjecture of Hardy and Littlewood concerning correlations between the primes [3].
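One can also try the unfolding w_n = (b_n/2π) log(b_n/2π) by hand on the first few zeros. In the sketch below the heights b_n are quoted from standard tables (to six decimals; treat them as assumed input). This low on the critical line the mean unfolded spacing is still noticeably above one, since the density of zeros only slowly approaches its asymptotic regime:

```python
# Unfold the first ten zeta zero heights via w_n = (b_n/2pi) * log(b_n/2pi)
# and look at the resulting spacings.
import math

heights = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
           37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def unfold(b):
    t = b / (2 * math.pi)
    return t * math.log(t)

w = [unfold(b) for b in heights]
spacings = [b - a for a, b in zip(w, w[1:])]
mean_gap = sum(spacings) / len(spacings)
print(mean_gap)  # above 1 this low on the critical line
```

For zeros much higher up (Odlyzko worked near the 10^20-th zero) the unfolded spacings do average out to one.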

Figure 3: (a) Montgomery's conjectural pair correlation function and 8 billion zeros near 10^20. (b) Pair correlations for a random sequence of numbers.

4 Eigenvalues of random matrices

In 1972 Montgomery visited Princeton and gave a talk on his ideas about the distribution of the zeros. Over tea he met Freeman Dyson, a mathematician-turned-physicist who had been studying eigenvalues of random matrices. Dyson observed that Montgomery's conjectural pair correlation function for the zeros (4) was precisely the known pair correlation function for the eigenvalues of large random matrices!

Let us consider the set of all N × N complex Hermitian matrices H (that is, H_ij = H̄_ji). Suppose that the real and imaginary parts of the matrix elements are independent random variables, with the probability density for finding a matrix H given by the Gaussian distribution

  P(H) = P(H_11, H_12, H_13, ..., H_NN) = Π_{i,j} e^{−|H_ij|²} = e^{−Tr H²}.   (5)

A Hermitian matrix has real eigenvalues θ_1, θ_2, ..., θ_N, and we would like to compute their resulting joint probability density P(θ_1, θ_2, ..., θ_N). In other words, if the matrix elements are distributed as above, how are the eigenvalues distributed? To compute this, recall that every Hermitian matrix can be unitarily diagonalized, so that we can write

  H = U Θ U†,   (6)

where Θ = diag(θ_1, ..., θ_N) is the diagonal matrix of eigenvalues and U is the unitary matrix whose columns are the accompanying eigenvectors u_i. We therefore need to change variables from the matrix elements H_ij to

the eigenvalues θ_i and eigenvectors U_ij. In other words, we must compute the Jacobian of the transformation (6), for then we can integrate out the eigenvector coordinates from (5) and write

  P(θ_1, ..., θ_N) = ∫ P(H_ij) |∂H_ij / ∂(θ_i, U_ij)| dU_ij.

Calculating the Jacobian of (6) is reasonably straightforward; it turns out that one obtains

  P(θ_1, ..., θ_N) = C_N e^{−Σ_{i=1}^N θ_i²} Π_{j<k} (θ_j − θ_k)²,   (7)

where C_N is a constant. The new feature appearing is the square of the factor

  Δ = Π_{j<k} (θ_j − θ_k),

which is known as the Vandermonde determinant, because it can be thought of as the determinant of the matrix of increasing powers of θ_i:

  Δ = det [ 1          1          ⋯   1
            θ_1        θ_2        ⋯   θ_N
            θ_1²       θ_2²       ⋯   θ_N²
            ⋮                         ⋮
            θ_1^{N−1}  θ_2^{N−1}  ⋯   θ_N^{N−1} ].

This factor vanishes when any two of the eigenvalues are equal, so there is very little likelihood that eigenvalues will be found close together. Apparently, although the matrix elements were independent random variables, the eigenvalues have become correlated with each other.

5 Coulomb gas model

There is a beautiful physical way to think of the joint probability distribution (7) for the eigenvalues θ_i. Think of the eigenvalues as N positively charged particles arranged on a line in the plane. The particles all repel one another, but imagine that each particle has a spring attached to it which pulls it towards the origin. One can write the potential energy of such a configuration of particles as

  W = (1/2) Σ_i θ_i² − Σ_{i<j} log|θ_i − θ_j|.

The logarithms of the distances appear because this is how electric potential is calculated in two dimensions (in three dimensions, one would have the more familiar reciprocal-of-the-distance behaviour). In any event, one notices that

  P(θ_1, ..., θ_N) = C e^{−2W}.

In statistical mechanics terms, this equation says that the probability distribution of the eigenvalues of random Hermitian matrices can be viewed as the probability distribution of a collection of charged particles on a line, each attached to the origin by a spring. The beauty of this correspondence is that we can use physical intuition to reason about the distribution of the eigenvalues. The most probable configuration will be the one which minimizes the potential energy W. There is a beautiful lemma of Stieltjes [5] which says that the potential energy W will be minimized precisely when the particles (eigenvalues) sit at the zeros of the Nth Hermite polynomial H_N(x)!

Recall that the Hermite polynomials are defined as

  H_N(x) = (−1)^N e^{x²} (d^N/dx^N) e^{−x²},

so that the first few Hermite polynomials are H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x² − 2, and so on. Their mathematical importance is that they are the natural basis of polynomials on ℝ which are orthogonal with respect to the Gaussian measure. Their physical importance is that, after multiplying by the Gaussian factor, they give the oscillator functions

  h_N(x) = (2^N N! √π)^{−1/2} H_N(x) e^{−x²/2},

which turn out to be the eigenfunctions of the harmonic oscillator in quantum mechanics. We remark here that the quantum mechanical intuition behind these functions has proven tremendously useful in mathematics; indeed the harmonic oscillator forms the backbone of modern proofs of the Atiyah-Singer index theorem [1].

6 Statistics of the eigenvalues

Let us return to the joint probability distribution (7) for the eigenvalues of a random N × N Hermitian matrix.
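Before computing any statistics, Stieltjes's lemma from the previous section is worth a numerical check: at the zeros of H_N the force balance θ_i = Σ_{j≠i} 1/(θ_i − θ_j), i.e. ∇W = 0, holds exactly. A sketch using numpy's Hermite routines:

```python
# At the zeros of the N-th (physicists') Hermite polynomial, the electrostatic
# equilibrium condition theta_i = sum_{j != i} 1/(theta_i - theta_j) holds,
# which is exactly grad W = 0 for the Coulomb gas potential W above.
import numpy as np
from numpy.polynomial import hermite

N = 12
# The coefficient vector (0, ..., 0, 1) picks out H_N in the Hermite basis.
zeros = hermite.hermroots([0] * N + [1])

residual = []
for i, t in enumerate(zeros):
    force = sum(1.0 / (t - s) for j, s in enumerate(zeros) if j != i)
    residual.append(abs(t - force))

print(max(residual))  # essentially zero
```

The balance follows from the differential equation H_N'' − 2x H_N' + 2N H_N = 0 evaluated at a zero of H_N.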
Define the level density as

  σ_N(θ) = N ∫_{−∞}^{+∞} ⋯ ∫_{−∞}^{+∞} P(θ, θ_2, ..., θ_N) dθ_2 ⋯ dθ_N.   (8)

In other words, σ_N(θ) is the probability density for finding an eigenvalue at θ, where we don't care about where the other (N − 1) eigenvalues are (we

need to multiply by N above because we don't care whether θ is labeled as the first eigenvalue or the second, and so on). To work out the level density, we simply need to substitute the expression (7) for P(θ_1, ..., θ_N) into the above formula and perform the integrals.

There is a beautiful way to do this. Namely, we realize that the Vandermonde determinant appearing in the integrand could (by taking linear combinations of rows) just as well have been expressed as the determinant of a matrix whose rows consist of the Hermite polynomials:

  Δ ∝ det [ H_0(θ_1)      H_0(θ_2)      ⋯   H_0(θ_N)
            H_1(θ_1)      H_1(θ_2)      ⋯   H_1(θ_N)
            ⋮                               ⋮
            H_{N−1}(θ_1)  H_{N−1}(θ_2)  ⋯   H_{N−1}(θ_N) ].

The Hermite polynomials are the natural language for the problem, because they are orthogonal with respect to the Gaussian measure appearing in the integrand (8). Using these orthogonality relations, one performs the integrals and finds that the level density can be expressed in terms of the Hermite functions as

  σ_N(θ) = Σ_{i=0}^{N−1} h_i²(θ).

As N → ∞, this sum can be computed by known methods and one obtains the following beautiful result, known as Wigner's semicircle law:

  σ_N(θ) ≈ σ(θ) = { (1/π) √(2N − θ²)   for |θ| ≤ √(2N),
                    0                  otherwise. }   (9)

So: the eigenvalues of a large random Hermitian matrix are distributed like a semicircle. There are a lot of eigenvalues near zero, and extremely few that are greater than the square root of double the dimension of the matrix (see Figure 4).

The next statistical quantity to compute is the pair correlation function of the eigenvalues, defined as

  R_N(θ_1, θ_2) = [N!/(N−2)!] ∫_{−∞}^{+∞} ⋯ ∫_{−∞}^{+∞} P(θ_1, θ_2, θ_3, ..., θ_N) dθ_3 ⋯ dθ_N.

That is, R_N(θ_1, θ_2) is the probability density for finding one of the eigenvalues at θ_1 and another at θ_2, where we do not care where the other eigenvalues are (we need to multiply by N!/(N−2)! because we do not care what the labeling of these two eigenvalues is).
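The semicircle law shows up already at modest N by direct simulation. The sketch below samples Hermitian matrices entrywise; the 1/(2√2) scaling is chosen so that the entry variances match the weight Π e^{−|H_ij|²} above (normalization conventions vary between references, so treat the constant as an assumption):

```python
# Histogram the eigenvalues of many GUE matrices and check that the spectrum
# stays essentially inside [-sqrt(2N), sqrt(2N)], as the semicircle law predicts.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 40, 200
eigs = []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / (2 * np.sqrt(2))   # Hermitian; variances match e^{-Tr H^2}
    eigs.append(np.linalg.eigvalsh(H))        # real eigenvalues, ascending order
eigs = np.concatenate(eigs)

edge = np.sqrt(2 * N)                         # predicted edge of the semicircle
frac_outside = np.mean(np.abs(eigs) > 1.1 * edge)
print(eigs.min(), eigs.max(), edge, frac_outside)
```

The largest and smallest sampled eigenvalues cluster at ±√(2N), and essentially no eigenvalues fall well outside that interval.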

Figure 4: The eigenvalue distribution for a large number of 40 × 40 matrices. Taken from [4].

Substituting in the joint probability density function P(θ_1, ..., θ_N) from (7), rewriting the Vandermonde determinant in terms of the Hermite functions, and using the orthogonality of the Hermite functions as before, one obtains

  R_N(θ_1, θ_2) = σ_N(θ_1) σ_N(θ_2) − ( Σ_{i=0}^{N−1} h_i(θ_1) h_i(θ_2) )².

That is, the pair correlation function for finding one of the eigenvalues at θ_1 and the other at θ_2 is the product of the level densities at θ_1 and θ_2 respectively, minus the two-level cluster function ( Σ_{i=0}^{N−1} h_i(θ_1) h_i(θ_2) )². To understand this, one should think of the Hermite functions as being the fundamental modes of propagation on the real line. The two-level cluster function multiplies the value of the modes at θ_1 and θ_2, and sums over the modes (see Figure 5).

We are interested in the limit N → ∞. We should unfold the eigenvalues so that they have unit density. Let us consider what happens around the zero eigenvalue (this is called the bulk part of the spectrum). According to the semicircle law (9), the density of eigenvalues at zero is σ_N(0) = √(2N)/π. Let us therefore express everything in terms of the unfolded eigenvalues

  y_i = (√(2N)/π) θ_i.
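The cluster sum Σ_{i=0}^{N−1} h_i(θ_1) h_i(θ_2) can be built numerically from the standard three-term recurrence for the oscillator functions; rescaled by π/√(2N) at the unfolded points, it already matches the sine function sin π(y_1 − y_2) / π(y_1 − y_2) well at moderate N. A sketch (the recurrence coefficients follow the normalization of h_N given above):

```python
# Evaluate the rescaled kernel (pi/sqrt(2N)) * sum_{i<N} h_i(t1) h_i(t2)
# at unfolded points t = (pi/sqrt(2N)) y and compare with sin(pi d)/(pi d).
import numpy as np

def hermite_functions(n_max, x):
    """Oscillator functions h_0, ..., h_{n_max-1} at the points x, via recurrence."""
    x = np.asarray(x, dtype=float)
    h = np.zeros((n_max,) + x.shape)
    h[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n_max > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(1, n_max - 1):
        h[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * h[n]
                    - np.sqrt(n / (n + 1.0)) * h[n - 1])
    return h

N = 400
scale = np.pi / np.sqrt(2 * N)

def kernel(y1, y2):
    """Rescaled cluster kernel at the unfolded points y1, y2."""
    pts = np.array([scale * y1, scale * y2])
    h = hermite_functions(N, pts)
    return scale * np.sum(h[:, 0] * h[:, 1])

for d in (0.0, 0.5, 1.0):
    sine = 1.0 if d == 0.0 else np.sin(np.pi * d) / (np.pi * d)
    print(d, kernel(0.0, d), sine)
```

At N = 400 the agreement with the sine kernel is already at the percent level; this is the numerical face of the Christoffel-Darboux asymptotics used below.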

Figure 5: The two-level cluster function sums over the modes of propagation from θ_1 to θ_2.

As N → ∞, the pair correlation function of the unfolded eigenvalues becomes

  R(y_1, y_2) = lim_{N→∞} (π/√(2N))² [ σ_N(θ_1) σ_N(θ_2) − ( Σ_{i=0}^{N−1} h_i(θ_1) h_i(θ_2) )² ]
              = 1 − ( sin π(y_1 − y_2) / π(y_1 − y_2) )²,   (10)

where in the second step we have used a known expression for the sum of products of Hermite functions (the Christoffel-Darboux formula [5]). So: the unfolded eigenvalues repel each other, and prefer to be spaced precisely one unit apart. In other words, Montgomery's conjecture (4) for the pair correlation of the zeros of the Riemann zeta function is precisely the known pair correlation function for the eigenvalues of large random Hermitian matrices! This remarkable coincidence is what Dyson noticed in the tea room at Princeton in 1972. It has led to a great revival of interest in the idea, originally due to Hilbert and Pólya, that the Riemann zeros are the eigenvalues of some large Hermitian matrix. If this were true, of course, it would immediately give a proof of the Riemann hypothesis, for the eigenvalues of a Hermitian matrix must be real. For further thoughts on this idea, the reader is referred to [2].

References

[1] N. Berline, E. Getzler and M. Vergne, Heat Kernels and Dirac Operators, Springer-Verlag, Berlin-Heidelberg (2004).

[2] M. V. Berry and J. P. Keating, The Riemann Zeros and Eigenvalue Asymptotics, SIAM Review Vol. 41, No. 2 (1999), 236-266.

[3] J. Keating, Random Matrices and the Riemann Zeta-Function, in Highlights of Mathematical Physics (ICMP 2000), eds. A. Fokas, J. Halliwell, T. Kibble and B. Zegarlinski, AMS, 53-63.

[4] D. Rockmore and L. Snell, Chance in the Primes: Part III, Chance News (2002). Available as www.dartmouth.edu/~chance/chance news/recent news/primes part3/.

[5] M. L. Mehta, Random Matrices, Academic Press, London, second edition (1991).

[6] M. Watkins, The encoding of the distribution of prime numbers by the nontrivial zeros of the Riemann zeta function (elegant approach). Available as www.secamlocal.ex.ac.uk/people/staff/mrwatkin/zeta/encoding2.htm.