Zeros of Polynomials with Random Coefficients


Zeros of Polynomials with Random Coefficients

Igor E. Pritsker
Department of Mathematics, Oklahoma State University
Stillwater, OK 74078 USA
igor@math.okstate.edu

Aaron M. Yeager
Department of Mathematics, Oklahoma State University
Stillwater, OK 74078 USA
aaron.yeager@math.okstate.edu

September 7, 2014

Abstract

Zeros of many ensembles of polynomials with random coefficients are asymptotically equidistributed near the unit circumference. We give quantitative estimates for such equidistribution in terms of the expected discrepancy and expected number of roots in various sets. This is done for polynomials with coefficients that may be dependent, and need not have identical distributions. We also study random polynomials spanned by various deterministic bases.

Keywords: polynomials, random coefficients, expected number of zeros, uniform distribution, random orthogonal polynomials.

1 Introduction

Zeros of polynomials of the form P_n(z) = Σ_{k=0}^n A_k z^k, where {A_k} are random coefficients, have been studied by Bloch and Pólya [4], Littlewood and Offord [19], Erdős and Offord [8], Kac [18], Hammersley [12], Shparo and Shur [23], Arnold [2], and many other authors. The early history of the subject, with numerous references, is summarized in the books of Bharucha-Reid and Sambandham [6] and of Farahmand [10]. It is now well known that, under mild conditions on the probability distribution of the coefficients, the majority of zeros of these polynomials accumulate near the unit circumference, and they are also equidistributed in the angular sense. Using modern terminology, we call a collection

of random polynomials P_n(z) = Σ_{k=0}^n A_k z^k, n ∈ N, with i.i.d. coefficients, the ensemble of Kac polynomials. Let Z(P_n) = {Z_1, Z_2, ..., Z_n} be the set of complex zeros of a polynomial P_n of degree n. These zeros {Z_k}_{k=1}^n give rise to the zero counting measure

τ_n = (1/n) Σ_{k=1}^n δ_{Z_k},

which is a random unit Borel measure in C. The fact of equidistribution for the zeros of random polynomials can now be expressed via the convergence of τ_n in the weak topology to the normalized arclength measure µ_T on the unit circumference T, where dµ_T(e^{it}) := dt/(2π). Namely, we have that τ_n → µ_T with probability 1 (abbreviated as a.s. or almost surely). More recent papers on zeros of random polynomials include Hughes and Nikeghbali [13], Ibragimov and Zeitouni [14], Ibragimov and Zaporozhets [15], and Kabluchko and Zaporozhets [16, 17]. In particular, Ibragimov and Zaporozhets [15] proved that if the coefficients are independent and identically distributed, then the condition E[log^+ |A_0|] < ∞ is necessary and sufficient for τ_n → µ_T almost surely. Here, E[X] denotes the expectation of a random variable X. Furthermore, Ibragimov and Zeitouni [14] obtained asymptotic results on the expected number of zeros when the random coefficients are from the domain of attraction of a stable law, generalizing earlier results of Shepp and Vanderbei [21] for Gaussian coefficients.

Our goal is to provide estimates on the expected rate of convergence of τ_n to µ_T. A standard way to study the deviation of τ_n from µ_T is to consider the discrepancy of these measures on annular sectors of the form

A_r(α, β) = {z ∈ C : r < |z| < 1/r, α ≤ arg z < β},  0 < r < 1.

Such estimates were recently provided by Pritsker and Sola [20] using the largest order statistic Y_n = max_{k=0,...,n} |A_k|. The results of [20] require the coefficients {A_k}_{k=0}^n to be independent and identically distributed complex random variables having an absolutely continuous distribution with respect to the area measure.
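The clustering of zeros near T described above is easy to observe numerically. The following sketch (not part of the paper's argument; the degree, the Gaussian coefficient law, and the annulus width are arbitrary illustrative choices) samples one Kac polynomial and measures the fraction of its roots inside an annulus around the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Kac polynomial P_n(z) = sum_k A_k z^k with i.i.d. standard Gaussian A_k.
coeffs = rng.standard_normal(n + 1)
# numpy.roots expects coefficients ordered from highest degree to lowest.
zeros = np.roots(coeffs[::-1])
moduli = np.abs(zeros)
# Fraction of zeros in the annulus r < |z| < 1/r with r = 0.9.
r = 0.9
frac = np.mean((moduli > r) & (moduli < 1 / r))
print(len(zeros), frac)
```

Already at moderate degrees the printed fraction is close to 1, consistent with τ_n → µ_T a.s.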
This assumption excluded many important distributions, in particular discrete ones. We remove many unnecessary restrictions in this paper, and generalize the results of [20] in several directions. Section 2 develops essentially the same theory as in [20] (but uses a different approach) for coefficients that are neither independent nor identically distributed, and whose distributions only satisfy certain uniform bounds on the fractional and logarithmic moments. We also consider random polynomials spanned by general bases in Section 3, which includes random orthogonal polynomials on the unit circle and the unit disk. Section 4 shows how one can handle discrete random coefficients by methods involving the highest order statistic Y_n, augmenting the ideas of [20]. We further develop the highest order statistic approach for dependent coefficients in Section 5, under the assumption that the coefficients satisfy uniform bounds on the first two moments. All proofs are contained in Section 6.

2 Expected Number of Zeros of Random Polynomials

Let A_k, k = 0, 1, 2, ..., be complex valued random variables that are not necessarily independent nor identically distributed, and let ||P_n|| = sup_T |P_n|, where T := {z : |z| = 1}. We study the expected deviation of the normalized number of zeros from µ_T on annular sectors, which is often referred to as the discrepancy between those measures.

Theorem 2.1. Suppose that the coefficients of P_n(z) = Σ_{k=0}^n A_k z^k are complex random variables that satisfy:

1. E|A_k|^t < ∞, k = 0, ..., n, for a fixed t ∈ (0, 1);
2. E[log |A_0|] > −∞ and E[log |A_n|] > −∞.

Then we have for all large n ∈ N that

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ C_r [ (1/n) ( (1/t) log Σ_{k=0}^n E|A_k|^t − (1/2) E[log |A_0 A_n|] ) ]^{1/2},  (2.1)

where C_r := √(2π/k) + 2/(1 − r), with k := Σ_{k=0}^∞ (−1)^k/(2k + 1)^2 being Catalan's constant.

Introducing uniform bounds, we obtain rates of convergence for the expected discrepancy as n → ∞.

Corollary 2.2. Let P_n(z) = Σ_{k=0}^n A_{k,n} z^k, n ∈ N, be a sequence of random polynomials. If

M := sup{E|A_{k,n}|^t : k = 0, ..., n, n ∈ N} < ∞

and

L := inf{E[log |A_{k,n}|] : k ∈ {0, n}, n ∈ N} > −∞,

then

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ C_r [ (1/n) ( (log(n + 1) + log M)/t − L ) ]^{1/2} = O(√(log n / n))

as n → ∞.

The arguments of [20] now give quantitative results about the expected number of zeros of random polynomials in various sets. We first consider sets separated from T.

Proposition 2.3. Let E ⊂ C be a compact set such that E ∩ T = ∅, and set d := dist(E, T). If P_n is as in Theorem 2.1, then the expected number of its zeros in E satisfies

E[n τ_n(E)] ≤ (2(d + 1)/d) ( (1/t) log Σ_{k=0}^n E|A_k|^t − (1/2) E[log |A_0 A_n|] ).
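The O(√(log n / n)) decay of the expected discrepancy in Corollary 2.2 can be probed by Monte Carlo. The sketch below uses Gaussian Kac polynomials; the sector, degrees, and trial counts are arbitrary choices, and the empirical averages only approximate the expectations:

```python
import numpy as np

rng = np.random.default_rng(1)
r, alpha, beta = 0.8, 0.0, np.pi / 2  # annular sector A_r(alpha, beta)

def discrepancy(zeros):
    """|tau_n(A_r(alpha, beta)) - (beta - alpha)/(2*pi)| for one sample."""
    mod, arg = np.abs(zeros), np.angle(zeros) % (2 * np.pi)
    inside = (mod > r) & (mod < 1 / r) & (arg >= alpha) & (arg < beta)
    return abs(inside.mean() - (beta - alpha) / (2 * np.pi))

def expected_discrepancy(n, trials=50):
    """Monte Carlo estimate of the expected discrepancy for Gaussian Kac polynomials."""
    return np.mean([discrepancy(np.roots(rng.standard_normal(n + 1)[::-1]))
                    for _ in range(trials)])

d_small, d_large = expected_discrepancy(50), expected_discrepancy(400)
print(d_small, d_large)  # the estimate should shrink as n grows
```

The comparison of the two degrees only illustrates the trend; extracting the √(log n / n) rate itself would require averaging over many more degrees and trials.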

Just as in [20], the following proposition gives a bound on the number of zeros in sets that have non-tangential contact with T.

Proposition 2.4. If E is a polygon inscribed in T, and the sequence {P_n}_{n=1}^∞ is as in Corollary 2.2, then the expected number of zeros of P_n in E satisfies

E[n τ_n(E)] = O(√(n log n))  as n → ∞.

Finally, if an open set intersects T, then it must carry a positive fraction of zeros according to the normalized arclength measure on T. This is illustrated below for the disks D_r(w) = {z ∈ C : |z − w| < r}, w ∈ T.

Proposition 2.5. If w ∈ T and r < 2, and the sequence {P_n}_{n=1}^∞ is as in Corollary 2.2, then the expected number of zeros of P_n in D_r(w) satisfies

E[n τ_n(D_r(w))] = (2 arcsin(r/2)/π) n + O(√(n log n))  as n → ∞.

3 Random Polynomials Spanned by General Bases

We now analyze the behavior of random polynomials spanned by general bases. Throughout this section, let B_k(z) = Σ_{j=0}^k b_{j,k} z^j, where b_{j,k} ∈ C for all j and k, and b_{k,k} ≠ 0 for all k, be a polynomial basis, i.e., a linearly independent set of polynomials. Observe that deg B_k = k for all k ∈ N ∪ {0}. We study the zero distribution of random polynomials

P_n(z) = Σ_{k=0}^n A_k B_k(z).

Throughout this section, we assume that

lim sup_{k→∞} ||B_k||^{1/k} ≤ 1 and lim_{k→∞} |b_{k,k}|^{1/k} = 1.  (3.1)

It is well known that ||B_k|| ≥ |b_{k,k}| holds for all polynomials, so that (3.1) in fact implies lim_{k→∞} ||B_k||^{1/k} = 1. Conditions (3.1) hold for many standard bases used for representing analytic functions in the unit disk, e.g., for various sequences of orthogonal polynomials (cf. Stahl and Totik [24]). In the latter case, random polynomials spanned by such bases are called random orthogonal polynomials. Their asymptotic zero distribution was recently studied in a series of papers by Shiffman and Zelditch [22], Bloom [5], and others. Our main result of this section is the following:

Theorem 3.1. For P_n(z) = Σ_{k=0}^n A_k B_k(z), let {A_k}_{k=0}^n be random variables satisfying E|A_k|^t < ∞, k = 0, ..., n, for a fixed t ∈ (0, 1), and set D_n := A_n b_{n,n} Σ_{k=0}^n A_k b_{0,k}. If

E[log |D_n|] > −∞, then we have for all large n ∈ N that

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ C_r [ (1/n) ( (1/t) log Σ_{k=0}^n E|A_k|^t + log max_{0≤k≤n} ||B_k|| − (1/2) E[log |D_n|] ) ]^{1/2},  (3.2)

where C_r = √(2π/k) + 2/(1 − r). In particular, if E[log |A_n|] > −∞ and E[log |A_0 + z|] ≥ L > −∞ for all z ∈ C, then

E[log |D_n|] ≥ log |b_{0,0} b_{n,n}| + E[log |A_n|] + L > −∞,  (3.3)

and (3.2) holds.

An example of a typical basis satisfying (3.1) is given below by orthonormal polynomials on the unit circle. We apply Theorem 3.1 to obtain a quantitative result on the zero distribution of random orthogonal polynomials.

Corollary 3.2. Let P_n(z) = Σ_{k=0}^n A_{k,n} B_k(z), n ∈ N, be a sequence of random orthogonal polynomials. Suppose that the following uniform estimates for the coefficients hold true:

sup{E|A_{k,n}|^t : k = 0, ..., n; n ∈ N} < ∞,  t ∈ (0, 1),  (3.4)

and

min( inf_{n∈N} E[log |A_{n,n}|], inf_{n∈N, z∈C} E[log |A_{0,n} + z|] ) > −∞.  (3.5)

If the basis polynomials B_k are orthonormal with respect to a positive Borel measure µ supported on T = {e^{iθ} : 0 ≤ θ < 2π}, such that the Radon-Nikodym derivative dµ/dθ > 0 for almost every θ ∈ [0, 2π), then (3.1) is satisfied and

lim_{n→∞} E|τ_n(A_r(α, β)) − (β − α)/(2π)| = 0.  (3.6)

Furthermore, if dµ(θ) = w(θ) dθ, where w(θ) ≥ c > 0, θ ∈ [0, 2π), then

E|τ_n(A_r(α, β)) − (β − α)/(2π)| = O(√(log n / n)).  (3.7)

It is clear that if the coefficients have identical distributions, then all uniform bounds in (3.4) and (3.5) reduce to those on the single coefficient A_0. One can relax the conditions on the orthogonality measure µ while preserving the results; e.g., one can show that (3.7) also holds for the generalized Jacobi weights of the form w(θ) = v(θ) Π_{j=1}^J |θ − θ_j|^{α_j}, α_j > 0, where v(θ) ≥ c > 0, θ ∈ [0, 2π). Note that the analogs of Propositions 2.4-2.5 for the random orthogonal polynomials follow from (3.7).
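For the constant weight on T the orthonormal basis is simply B_k(z) = z^k, so the monomial (Kac) case sits inside this framework, and Proposition 2.5 predicts about (2 arcsin(r/2)/π) n zeros in a disk D_r(w) centered at w ∈ T. A rough Monte Carlo check of that leading term (a sketch; the degree, disk, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, w, rad, trials = 300, 1.0 + 0.0j, 0.5, 20
counts = []
for _ in range(trials):
    zeros = np.roots(rng.standard_normal(n + 1)[::-1])  # Gaussian Kac polynomial
    counts.append(np.sum(np.abs(zeros - w) < rad))      # zeros inside D_rad(w)
avg_count = np.mean(counts)
predicted = 2 * np.arcsin(rad / 2) / np.pi * n          # leading term of Proposition 2.5
print(avg_count, predicted)
```

The O(√(n log n)) error term means only rough agreement should be expected at moderate degrees.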

4 Discrete Random Coefficients

Let A_0, A_1, ... be independent and identically distributed (i.i.d.) complex discrete random variables. We assume as before that E|A_0|^t = µ < ∞ for a fixed real t > 0. These assumptions are certainly more restrictive than those of Sections 2 and 3. The goal of Sections 4 and 5 is to generalize the ideas of [20]. First we prove essentially the same results as in [20] in the discrete case. Furthermore, since any real random variable is the limit of an increasing sequence of discrete random variables, we extend the arguments to arbitrary random variables.

Proposition 4.1. Let A_0, A_1, ... be i.i.d. complex random variables, and let Y_n := max_{0≤k≤n} |A_k|. If µ := E|A_0|^t < ∞, where t > 0, then

E[log Y_n] ≤ (log(n + 1) + log µ)/t.

This result provides an immediate extension of Theorem 3.3 of [20] to arbitrary random variables (satisfying the moment assumption) by following the same proof. Indeed, we have that

E[log ||P_n||] = E[ log sup_{z∈T} |Σ_{k=0}^n A_k z^k| ] ≤ E[ log Σ_{k=0}^n |A_k| ] ≤ E[ log ((n + 1) max_{0≤k≤n} |A_k|) ] = log(n + 1) + E[log Y_n].

Thus, referring to the proof of Theorem 3.3 of [20] and using our bound on E[log Y_n] gives the result.

5 Dependent Coefficients

We generalize Theorem 3.7 of [20] in this section, replacing the requirement that the first and the second moments of the absolute values of all coefficients be equal with the requirement that they be uniformly bounded. More precisely, we assume that

sup_k E|A_k| =: M < ∞ and sup_k Var|A_k| =: S² < ∞.  (5.1)

Following the ideas of Arnold and Groeneveld [3] (see also [7]), we show that

Proposition 5.1. If (5.1) is satisfied, then we have for Y_n = max_{0≤k≤n} |A_k| that

E[Y_n] = O(√n) as n → ∞.

An analog of the result from [20] is obtained along the same lines as before.
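Proposition 4.1's bound on E[log Y_n] is easy to sanity-check for a concrete discrete law (a sketch; the three-point law, degree, exponent t, and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, t, trials = 100, 0.5, 2000
# |A_k| i.i.d. uniform on {1, 2, 3}; Y_n = max_{0 <= k <= n} |A_k|.
samples = rng.integers(1, 4, size=(trials, n + 1)).astype(float)
Yn = samples.max(axis=1)
mean_log_Yn = np.log(Yn).mean()                 # Monte Carlo E[log Y_n]
mu = np.mean(np.arange(1.0, 4.0) ** t)          # exact E|A_0|^t for this law
bound = (np.log(n + 1) + np.log(mu)) / t
print(mean_log_Yn, bound)
```

For this bounded law the maximum is almost surely 3 once n is moderately large, so E[log Y_n] is far below the bound; the (log(n + 1))/t term matters only for heavy-tailed coefficients.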

Theorem 5.2. If the (possibly dependent) coefficients of P_n satisfy (5.1), as well as E[log |A_0|] > −∞ and E[log |A_n|] > −∞, then

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ C_r [ ( (3/2) log(n + 1) − (1/2) E[log |A_0|] − (1/2) E[log |A_n|] + O(1) ) / n ]^{1/2}

as n → ∞. Clearly, this result has more restrictive assumptions than Theorem 2.1.

6 Proofs

6.1 Proofs for Section 2

Define the logarithmic Mahler measure (the logarithm of the geometric mean) of P_n by

m(P_n) = (1/(2π)) ∫_0^{2π} log |P_n(e^{iθ})| dθ.

It is immediate to see that m(P_n) ≤ log ||P_n||. The majority of our results are obtained with the help of the following modified version of the discrepancy theorem due to Erdős and Turán (cf. Proposition 2.1 of [20]):

Lemma 6.1. Let P_n(z) = Σ_{k=0}^n c_k z^k, c_k ∈ C, and assume c_0 c_n ≠ 0. For any r ∈ (0, 1) and 0 ≤ α < β < 2π, we have

|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ [ (2π/k) (1/n) log( ||P_n|| / √|c_0 c_n| ) ]^{1/2} + (2/(n(1 − r))) m( P_n / √|c_0 c_n| ),  (6.1)

where k = Σ_{k=0}^∞ (−1)^k/(2k + 1)^2 is Catalan's constant. This estimate shows how close the zero counting measure τ_n is to µ_T.

The following lemma is used several times below.

Lemma 6.2. If A_k, k = 0, ..., n, are complex random variables satisfying E|A_k|^t < ∞, k = 0, ..., n, for a fixed t ∈ (0, 1), then

E[ log Σ_{k=0}^n |A_k| ] ≤ (1/t) log Σ_{k=0}^n E|A_k|^t.  (6.2)

Proof. We first observe an elementary inequality. If x_i ≥ 0, i = 0, ..., n, and Σ_{i=0}^n x_i = 1, then for any t ∈ (0, 1) we have that

Σ_{i=0}^n x_i^t ≥ Σ_{i=0}^n x_i = 1.

Applying this inequality with x_i = |A_i| / Σ_{k=0}^n |A_k|, we obtain that

( Σ_{k=0}^n |A_k| )^t ≤ Σ_{k=0}^n |A_k|^t and E[ log Σ_{k=0}^n |A_k| ] ≤ (1/t) E[ log Σ_{k=0}^n |A_k|^t ].  (6.3)

Jensen's inequality and linearity of expectation now give that

E[ log Σ_{k=0}^n |A_k| ] ≤ (1/t) log E[ Σ_{k=0}^n |A_k|^t ] = (1/t) log Σ_{k=0}^n E|A_k|^t.

Proof of Theorem 2.1. Note that m(Q_n) ≤ log ||Q_n|| for all polynomials Q_n. Hence (6.1) and Jensen's inequality imply that

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ [ (2π/k) (1/n) E log( ||P_n|| / √|A_0 A_n| ) ]^{1/2} + (2/(n(1 − r))) E log( ||P_n|| / √|A_0 A_n| ) ≤ C_r [ (1/n) E log( ||P_n|| / √|A_0 A_n| ) ]^{1/2},

where the last inequality holds for all sufficiently large n ∈ N. Since ||P_n|| ≤ Σ_{k=0}^n |A_k|, we use the linearity of expectation and (6.2) to estimate

E log( ||P_n|| / √|A_0 A_n| ) ≤ E[ log Σ_{k=0}^n |A_k| ] − (1/2) E[log |A_0 A_n|] ≤ (1/t) log Σ_{k=0}^n E|A_k|^t − (1/2) E[log |A_0 A_n|].

The latter upper bound is finite by our assumptions.
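The pointwise inequality (6.3), (Σ|A_k|)^t ≤ Σ|A_k|^t for t ∈ (0, 1), is what lets heavy-tailed coefficients through in Lemma 6.2. A numerical sketch (assumed choices: Cauchy coefficients, which have no finite mean but a finite t-th absolute moment, with t = 0.4; the empirical moments only approximate the expectations):

```python
import numpy as np

rng = np.random.default_rng(4)
n, t, trials = 20, 0.4, 5000
A = np.abs(rng.standard_cauchy(size=(trials, n + 1)))
# Pointwise inequality (6.3): (sum |A_k|)^t <= sum |A_k|^t for t in (0, 1).
assert np.all(A.sum(axis=1) ** t <= (A ** t).sum(axis=1) + 1e-9)
lhs = np.log(A.sum(axis=1)).mean()             # Monte Carlo E[log sum |A_k|]
rhs = np.log((A ** t).mean(axis=0).sum()) / t  # (1/t) log sum E|A_k|^t, empirical moments
print(lhs, rhs)                                # lhs should not exceed rhs
```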

Proof of Corollary 2.2. The result follows immediately upon using the uniform bounds M and L in estimate (2.1).

Proof of Proposition 2.3. It was shown in [20] (see (5.3) in that paper) that

τ_n(C \ A_r(0, 2π)) ≤ (2/(n(1 − r))) m( P_n / √|A_0 A_n| ).

Since m(Q_n) ≤ log ||Q_n|| for all polynomials Q_n, it follows that

τ_n(C \ A_r(0, 2π)) ≤ (2/(n(1 − r))) log( ||P_n|| / √|A_0 A_n| ).

Note that for r = 1/(dist(E, T) + 1), we have E ⊂ C \ A_r(0, 2π). Estimating ||P_n|| as in the proof of Theorem 2.1, we obtain that

E[n τ_n(E)] ≤ (2/(1 − r)) E log( ||P_n|| / √|A_0 A_n| ) ≤ (2/(1 − r)) ( (1/t) log Σ_{k=0}^n E|A_k|^t − (1/2) E[log |A_0 A_n|] ) = (2(d + 1)/d) ( (1/t) log Σ_{k=0}^n E|A_k|^t − (1/2) E[log |A_0 A_n|] ).

Proof of Proposition 2.4. The proof of this proposition proceeds in the same manner as the proof of Proposition 3.5 in [20], by using our Corollary 2.2 along with Proposition 2.3.

Proof of Proposition 2.5. As in the previous proof, this result follows in direct parallel to the proof of Proposition 3.6 of [20], while taking into account our bound in Proposition 2.4.

6.2 Proofs for Section 3

Proof of Theorem 3.1. We proceed with an argument similar to the proof of Theorem 2.1. Note that the leading coefficient of P_n is A_n b_{n,n}, and its constant term is Σ_{k=0}^n A_k b_{0,k}. Using the fact that m(Q_n) ≤ log ||Q_n|| for all polynomials Q_n, we apply (6.1) and Jensen's inequality to obtain

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ [ (2π/k) (1/n) E log( ||P_n|| / √|D_n| ) ]^{1/2} + (2/(n(1 − r))) E log( ||P_n|| / √|D_n| ) ≤ C_r [ (1/n) E log( ||P_n|| / √|D_n| ) ]^{1/2}

for all sufficiently large n ∈ N. It is clear that

||P_n|| ≤ max_{0≤k≤n} ||B_k|| Σ_{k=0}^n |A_k|.

Hence (6.2) yields

E log( ||P_n|| / √|D_n| ) ≤ E[ log Σ_{k=0}^n |A_k| ] + log max_{0≤k≤n} ||B_k|| − (1/2) E[log |D_n|] ≤ (1/t) log Σ_{k=0}^n E|A_k|^t + log max_{0≤k≤n} ||B_k|| − (1/2) E[log |D_n|].

Thus (3.2) follows as a combination of the above estimates. We now proceed to the lower bound for the expectation of log |D_n| in (3.3) by estimating

E[log |D_n|] = E[ log |A_n b_{n,n} Σ_{k=0}^n A_k b_{0,k}| ] = E[log |A_n|] + log |b_{n,n}| + E[ log |Σ_{k=0}^n A_k b_{0,k}| ] = E[log |A_n|] + log |b_{n,n}| + log |b_{0,0}| + E[ log |A_0 + Σ_{k=1}^n A_k b_{0,k}/b_{0,0}| ] ≥ log |b_{0,0} b_{n,n}| + E[log |A_n|] + L,

where we used that b_{0,0} ≠ 0 and E[log |A_0 + z|] ≥ L for all z ∈ C.

Proof of Corollary 3.2. We apply (3.2) with (3.3). The uniform bounds on the expectations of the coefficients immediately give that

(1/(tn)) log Σ_{k=0}^n E|A_{k,n}|^t = O( (log n)/n ) and −(1/(2n)) E[log |D_n|] ≤ −(1/(2n)) log |b_{n,n}| + O(1/n).

The assumption dµ/dθ > 0 for a.e. θ implies (3.1) (see Corollary 4.1.2 of [24]), which in turn gives that

lim_{n→∞} (1/n) log |b_{n,n}|^{−1} = lim_{n→∞} (1/n) log max_{0≤k≤n} ||B_k|| = 0.

Hence (3.6) follows from (3.2). Recall that the leading coefficient b_{n,n} of the orthonormal polynomial B_n gives the solution of the following extremal problem [24]:

|b_{n,n}|^{−2} = inf { ∫ |Q_n|² dµ : Q_n is a monic polynomial of degree n }.

Using Q_n(z) = z^n, we obtain that |b_{n,n}| ≥ (µ(T))^{−1/2} and

−(1/n) log |b_{n,n}| ≤ (1/(2n)) log µ(T).

We now show that log ||B_n|| = O(log n) as n → ∞, provided dµ(θ) = w(θ) dθ with w(θ) ≥ c > 0, θ ∈ [0, 2π). Indeed, the Cauchy-Schwarz inequality gives for the orthonormal polynomial B_n(z) = Σ_{k=0}^n b_{k,n} z^k that

||B_n|| ≤ Σ_{k=0}^n |b_{k,n}| ≤ √(n + 1) ( Σ_{k=0}^n |b_{k,n}|² )^{1/2} = √(n + 1) ( (1/(2π)) ∫_0^{2π} |B_n(e^{iθ})|² dθ )^{1/2} ≤ √(n + 1) ( (1/(2πc)) ∫_0^{2π} |B_n(e^{iθ})|² w(θ) dθ )^{1/2} = √( (n + 1)/(2πc) ).

This estimate completes the proof of (3.7).

6.3 Proofs for Section 4

Proof of Proposition 4.1. Assume that the discrete random variable |A_0| takes values {x_k}_{k=1}^∞ arranged in increasing order, and note that the range of values of Y_n is the same. Let a_k = P(Y_n ≤ x_k) and b_k = P(|A_0| ≤ x_k), where k ∈ N. It is clear that P(Y_n = x_k) = a_k − a_{k−1} and P(|A_0| = x_k) = b_k − b_{k−1}, k ∈ N. Since the A_k's are independent and identically distributed, we have that

a_k = P(Y_n ≤ x_k) = P(|A_0| ≤ x_k, |A_1| ≤ x_k, ..., |A_n| ≤ x_k) = P(|A_0| ≤ x_k) P(|A_1| ≤ x_k) ··· P(|A_n| ≤ x_k) = P(|A_0| ≤ x_k)^{n+1} = b_k^{n+1}

holds for all k ∈ N. Thus

E[Y_n^t] := Σ_{k=1}^∞ x_k^t P(Y_n = x_k) = Σ_{k=1}^∞ x_k^t (a_k − a_{k−1}) = Σ_{k=1}^∞ x_k^t (b_k^{n+1} − b_{k−1}^{n+1}) = Σ_{k=1}^∞ x_k^t (b_k − b_{k−1})(b_k^n + b_k^{n−1} b_{k−1} + ··· + b_{k−1}^n) ≤ (n + 1) Σ_{k=1}^∞ x_k^t (b_k − b_{k−1}) = (n + 1) Σ_{k=1}^∞ x_k^t P(|A_0| = x_k) = (n + 1) E|A_0|^t,

since each b_k ≤ 1. By Jensen's inequality and the previous estimate, we have

E[log Y_n] = (1/t) E[log Y_n^t] ≤ (1/t) log E[Y_n^t] ≤ (1/t) log((n + 1) E|A_0|^t) = (1/t)(log(n + 1) + log µ).

We now show that this argument can be extended to arbitrary random variables {C_k}_{k=0}^n. Consider increasing sequences of simple (discrete) random variables {A_{k,i}}_{i=1}^∞ such that lim_{i→∞} |A_{k,i}| = |C_k|, k = 0, ..., n. For Y_{n,i} = max_{0≤k≤n} |A_{k,i}| and Z_n = max_{0≤k≤n} |C_k|, one can see that

lim_{i→∞} Y_{n,i}^t = Z_n^t and lim_{i→∞} |A_{0,i}|^t = |C_0|^t,

where t > 0. Moreover, the sequence of simple random variables Y_{n,i}^t increases to Z_n^t, so that the Monotone Convergence Theorem gives

lim_{i→∞} E[Y_{n,i}^t] = E[Z_n^t].

Using the already proven result for discrete random variables and passing to the limit as i → ∞, we obtain that E[Z_n^t] ≤ (n + 1) E|C_0|^t. Hence Jensen's inequality yields

E[log Z_n] ≤ (1/t)(log(n + 1) + log E|C_0|^t),

as before.

6.4 Proofs for Section 5

The following lemma is due to Arnold and Groeneveld [3], and is also found in [7, p. 110]. We prove it in our setting for completeness.

Lemma 6.3. Let X_i, i = 0, 1, ..., n, be possibly dependent random variables with E[X_i] = µ_i and Var[X_i] = σ_i². Then for any real constants c_i, the ordered random variables X_{0:n} ≤ X_{1:n} ≤ ··· ≤ X_{n:n} satisfy

E[ Σ_{i=0}^n c_i (X_{i:n} − µ̄) ] ≤ ( Σ_{i=0}^n (c_i − c̄)² )^{1/2} ( Σ_{i=0}^n [(µ_i − µ̄)² + σ_i²] )^{1/2},

where c̄ = (n + 1)^{−1} Σ_{i=0}^n c_i, µ̄ = (n + 1)^{−1} Σ_{i=0}^n µ_{i:n} = (n + 1)^{−1} Σ_{i=0}^n µ_i, and µ_{i:n} = E[X_{i:n}].

Proof. We use the Cauchy-Schwarz inequality in the following estimate:

Σ_{i=0}^n c_i (X_{i:n} − µ̄) = Σ_{i=0}^n (c_i − c̄)(X_{i:n} − µ̄) + c̄ Σ_{i=0}^n (X_{i:n} − µ̄) ≤ ( Σ_{i=0}^n (c_i − c̄)² )^{1/2} ( Σ_{i=0}^n (X_{i:n} − µ̄)² )^{1/2} + c̄ Σ_{i=0}^n (X_{i:n} − µ̄),

where the last term has expectation c̄ ( Σ_{i=0}^n µ_i − (n + 1)µ̄ ) = 0, since Σ_i X_{i:n} = Σ_i X_i.

Observe that E[Y] ≤ E[|Y|] for any random variable Y, and that E[Z^{1/2}] ≤ (E[Z])^{1/2} for Z ≥ 0 by Jensen's inequality. Applying these facts while taking the expectation of the previous inequality gives

E[ Σ_{i=0}^n c_i (X_{i:n} − µ̄) ] ≤ ( Σ_{i=0}^n (c_i − c̄)² )^{1/2} ( E Σ_{i=0}^n (X_{i:n} − µ̄)² )^{1/2} = ( Σ_{i=0}^n (c_i − c̄)² )^{1/2} ( Σ_{i=0}^n [E X_i² − 2µ̄ µ_i + µ̄²] )^{1/2} = ( Σ_{i=0}^n (c_i − c̄)² )^{1/2} ( Σ_{i=0}^n [σ_i² + (µ_i − µ̄)²] )^{1/2},

where we used that Σ_i X_{i:n}² = Σ_i X_i² and Σ_i X_{i:n} = Σ_i X_i.

Proof of Proposition 5.1. To obtain bounds for E[Y_n] = µ_{n:n} = E[A_{n:n}], we apply the previous lemma (to X_i = |A_i|) while choosing c_0 = c_1 = ··· = c_{n−1} = 0 and c_n = 1, so that c̄ = 1/(n + 1). This yields

E[A_{n:n}] − µ̄ ≤ ( n c̄² + (1 − c̄)² )^{1/2} ( Σ_{i=0}^n [(µ_i − µ̄)² + σ_i²] )^{1/2} = ( n/(n + 1)² + (n/(n + 1))² )^{1/2} ( Σ_{i=0}^n [µ_i² − 2µ_i µ̄ + µ̄² + σ_i²] )^{1/2} ≤ ( n/(n + 1) )^{1/2} ( (n + 1)(M² + 2M² + M² + S²) )^{1/2} ≤ (4M² + S²)^{1/2} (n + 1)^{1/2}.

It follows that

E[Y_n] = E[A_{n:n}] ≤ µ̄ + (4M² + S²)^{1/2} (n + 1)^{1/2} ≤ M + (4M² + S²)^{1/2} (n + 1)^{1/2}.

Proof of Theorem 5.2. As in the proof of Theorem 2.1, we apply (6.1) and Jensen's inequality to obtain, for all sufficiently large n ∈ N,

E|τ_n(A_r(α, β)) − (β − α)/(2π)| ≤ C_r [ (1/n) E log( ||P_n|| / √|A_0 A_n| ) ]^{1/2} = C_r [ ( E[log ||P_n||] − (1/2) E[log |A_0|] − (1/2) E[log |A_n|] ) / n ]^{1/2}.

Observe that

||P_n|| = sup_{z∈T} | Σ_{k=0}^n A_k z^k | ≤ Σ_{k=0}^n |A_k| ≤ (n + 1) max_{0≤k≤n} |A_k| = (n + 1) Y_n.

Taking the logarithm and then the expectation of the above yields

E[log ||P_n||] ≤ E[log((n + 1) Y_n)] = log(n + 1) + E[log Y_n] ≤ log(n + 1) + log E[Y_n],

where the last inequality follows from Jensen's inequality. As n → ∞, applying Proposition 5.1 gives

log(n + 1) + log E[Y_n] ≤ log(n + 1) + log O(√n) = log(n + 1) + (1/2) log n + O(1) ≤ (3/2) log(n + 1) + O(1).

Combining these bounds gives the result of Theorem 5.2.

Acknowledgements. Research of I. E. Pritsker was partially supported by the National Security Agency (grant H98230-12-1-0227) and by the AT&T Professorship. Research of A. M. Yeager was partially supported by the Vaughn Foundation and by the Jobe scholarship from the Department of Mathematics at Oklahoma State University, and it is a portion of his work towards a PhD degree.

References

[1] V. V. Andrievskii and H.-P. Blatt, Discrepancy of signed measures and polynomial approximation, Springer-Verlag, New York, 2002.

[2] L. Arnold, Über die Nullstellenverteilung zufälliger Polynome, Math. Z. 92 (1966), 12-18.

[3] B. Arnold and R. Groeneveld, Bounds on the expectations of linear systematic statistics based on dependent samples, Ann. Statistics 7 (1979), 220-223.

[4] A. Bloch and G. Pólya, On the roots of certain algebraic equations, Proc. London Math. Soc. 33 (1932), 102-114.

[5] T. Bloom, Random polynomials and (pluri)potential theory, Ann. Polon. Math. 91 (2007), 131-141.

[6] A. T. Bharucha-Reid and M. Sambandham, Random polynomials, Academic Press, Orlando, 1986.

[7] H. A. David and H. N. Nagaraja, Order statistics, John Wiley & Sons, Hoboken, 2003.

[8] P. Erdős and A. C. Offord, On the number of real roots of a random algebraic equation, Proc. London Math. Soc. 6 (1956), 139-160.

[9] P. Erdős and P. Turán, On the distribution of roots of polynomials, Ann. Math. 51 (1950), 105-119.

[10] K. Farahmand, Topics in random polynomials, Pitman Res. Notes Math. 393 (1998).

[11] T. Ganelius, Sequences of analytic functions and their zeros, Ark. Mat. 3 (1958), 1-50.

[12] J. M. Hammersley, The zeros of a random polynomial, in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954-1955, vol. II, pp. 89-111, University of California Press, Berkeley and Los Angeles, 1956.

[13] C. P. Hughes and A. Nikeghbali, The zeros of random polynomials cluster uniformly near the unit circle, Compositio Math. 144 (2008), 734-746.

[14] I. Ibragimov and O. Zeitouni, On roots of random polynomials, Trans. Amer. Math. Soc. 349 (1997), 2427-2441.

[15] I. Ibragimov and D. Zaporozhets, On distribution of zeros of random polynomials in complex plane, in Prokhorov and Contemporary Probability Theory, Springer Proc. Math. Stat. 33 (2013), 303-323.

[16] Z. Kabluchko and D. Zaporozhets, Roots of random polynomials whose coefficients have logarithmic tails, Ann. Probab. 41 (2013), 3542-3581.

[17] Z. Kabluchko and D. Zaporozhets, Universality for zeros of random analytic functions, preprint. arXiv:1205.5355

[18] M. Kac, On the average number of real roots of a random algebraic equation, Bull. Amer. Math. Soc. 49 (1943), 314-320.

[19] J. E. Littlewood and A. C. Offord, On the number of real roots of a random algebraic equation, J. Lond. Math. Soc. 13 (1938), 288-295.

[20] I. E. Pritsker and A. A. Sola, Expected discrepancy for zeros of random algebraic polynomials, Proc. Amer. Math. Soc., to appear. arXiv:1307.6202

[21] L. Shepp and R. J. Vanderbei, The complex zeros of random polynomials, Trans. Amer. Math. Soc. 347 (1995), 4365-4383.

[22] B. Shiffman and S. Zelditch, Equilibrium distribution of zeros of random polynomials, Int. Math. Res. Not. 1 (2003), 25-49.

[23] D. I. Shparo and M. G. Shur, On distribution of zeros of random polynomials, Vestnik Moskov. Univ. Ser. I Mat. Mekh. 3 (1962), 40-43.

[24] H. Stahl and V. Totik, General orthogonal polynomials, Cambridge Univ. Press, New York, 1992.