Random Bernstein-Markov factors

Igor Pritsker and Koushik Ramachandran

October 20, 2018

Abstract

For a polynomial $P_n$ of degree $n$, Bernstein's inequality states that $\|P_n'\|_p \le n \|P_n\|_p$ for all $L^p$ norms on the unit circle, $0 < p \le \infty$, with equality for $P_n(z) = cz^n$. We study this inequality for random polynomials, and show that the expected (average) and almost sure value of $\|P_n'\|/\|P_n\|$ is often different from the classical deterministic upper bound $n$. In particular, for circles of radii less than one, the ratio $\|P_n'\|/\|P_n\|$ is almost surely bounded as $n$ tends to infinity, and its expected value is uniformly bounded for all degrees under mild assumptions on the random coefficients. For $L^p$ norms on the unit circle, $0 < p < \infty$, Borwein and Lockhart showed that the asymptotic value of $\|P_n'\|_p/\|P_n\|_p$ in probability is $n/\sqrt{3}$, and we strengthen this to an almost sure limit for $p = 2$. If the radius $R$ of the circle is larger than one, then the asymptotic value of $\|P_n'\|/\|P_n\|$ in probability is $n/R$, matching the sharp upper bound of the deterministic case. We also obtain bounds for the case $p = \infty$ on the unit circle.

2010 Mathematics Subject Classification: Primary 41A17; Secondary 30E10, 60F05.

1 Introduction

Let $\mathbb{D} \subset \mathbb{C}$ denote the unit disc centered at the origin. Let $P_n(z) = \sum_{k=0}^n a_k z^k$ be a polynomial of degree $n$. The classical Bernstein inequality for the derivative of a polynomial states that
\[ \|P_n'\|_\infty \le n \|P_n\|_\infty, \qquad (1.1) \]
where $\|f\|_\infty$ denotes the sup norm of $f$ over $\partial\mathbb{D}$. Polynomials $P_n(z) = cz^n$ are extremal for inequality (1.1). It is also well known that inequality (1.1) holds when $\|\cdot\|_\infty$ is replaced by the $L^p$ norms $\|\cdot\|_p$ taken over $\partial\mathbb{D}$ (with respect to $d\theta/(2\pi)$), for any $p > 0$. More information about Bernstein's inequality and its history can be found in [8]. Bernstein's inequality is a direct analogue of Markov's inequality for the sup norm of the derivative of a polynomial on the interval $[-1,1]$, see [8]. Since Markov's inequality is apparently the first sharp estimate for the derivative of a polynomial, the ratio of norms
\[ M_n(P_n) := \frac{\|P_n'\|}{\|P_n\|} \]

is often referred to as the Markov factor of $P_n$, while in our case the Bernstein-Markov factor is a more appropriate term. If $P_n'(z) \equiv 0$ then we set $M_n(P_n) := 0$, which holds for any constant polynomial, and this convention is carried through to all other norms used in our paper.

In this article, we consider random polynomials of the form
\[ P_n(z) = \sum_{k=0}^n A_k z^k, \]
where the $\{A_k\}$ are i.i.d. complex random variables. Throughout this article we will assume that our random variables are non-trivial, meaning that $P(A_k = 0) < 1$. Our goal is to study the asymptotics of the average Bernstein-Markov factors, namely
\[ E[M_n(P_n)] = E\left[ \frac{\|P_n'\|}{\|P_n\|} \right], \]
for various norms, as well as probabilistic limits of these factors. Define the $L^p$ norm on the unit circle $\partial\mathbb{D}$ by
\[ \|P_n\|_p = \left( \frac{1}{2\pi} \int_0^{2\pi} |P_n(e^{it})|^p \, dt \right)^{1/p}, \quad p > 0. \]
The Bernstein-Markov factors of random polynomials for the $L^p$ norms on the unit circle were considered by Borwein and Lockhart, who stated an asymptotic for the case of random Littlewood polynomials. But Theorems 1 and 2 in [1] give the following more general result.

Theorem 1. Let $0 < p < \infty$. Consider random polynomials $P_n(z) = \sum_{k=0}^n A_k z^k$, where the $A_k$ are i.i.d. real valued random variables with mean $0$, variance $1$ and, additionally if $p > 2$, $E(|A_0|^{2p}) < \infty$. Then
\[ \frac{\|P_n'\|_p}{n \|P_n\|_p} \to \frac{1}{\sqrt{3}} \quad \text{as } n \to \infty, \]
where the convergence holds in probability. As a consequence, the average Bernstein-Markov factors satisfy
\[ \lim_{n\to\infty} E\left[ \frac{\|P_n'\|_p}{n \|P_n\|_p} \right] = \frac{1}{\sqrt{3}}. \]

We first show that when $p = 2$, the convergence of the Bernstein-Markov factors holds in the stronger almost sure sense.

Theorem 2. If $\{A_k\}$ are i.i.d. complex valued random variables with $E[|A_0|^2] < \infty$, then
\[ \lim_{n\to\infty} \frac{\|P_n'\|_2}{n \|P_n\|_2} = \frac{1}{\sqrt{3}} \quad \text{a.s.} \]
Hence the average Bernstein-Markov factors satisfy
\[ \lim_{n\to\infty} E\left[ \frac{\|P_n'\|_2}{n \|P_n\|_2} \right] = \frac{1}{\sqrt{3}}. \]
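Theorem 2 is easy to observe numerically through Parseval's identity, $\|P_n\|_2^2 = \sum_{k=0}^n |A_k|^2$ and $\|P_n'\|_2^2 = \sum_{k=1}^n k^2 |A_k|^2$ (these formulas are made explicit in the proof below). The following minimal simulation sketch illustrates the convergence; numpy, the standard normal coefficients, the seed, and the chosen degrees are illustrative assumptions rather than part of the paper.

```python
import numpy as np

# Illustration of Theorem 2: ||P_n'||_2 / (n ||P_n||_2) -> 1/sqrt(3) a.s.
# Parseval on the unit circle: ||P_n||_2^2  = sum |A_k|^2,
#                              ||P_n'||_2^2 = sum k^2 |A_k|^2.

rng = np.random.default_rng(0)

def bernstein_markov_factor_L2(coeffs):
    """Return ||P_n'||_2 / ||P_n||_2 on the unit circle via Parseval."""
    k = np.arange(len(coeffs))
    return np.sqrt(np.sum(k**2 * np.abs(coeffs) ** 2) / np.sum(np.abs(coeffs) ** 2))

for n in [10, 100, 1000, 10000]:
    A = rng.standard_normal(n + 1)              # one sample of i.i.d. N(0,1) coefficients
    print(n, bernstein_markov_factor_L2(A) / n, 1 / np.sqrt(3))
```

For a typical sample path the printed ratios settle near $1/\sqrt{3} \approx 0.577$ as $n$ grows.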

We next consider the average Bernstein-Markov factors for the $L^p$ norm on $D_r = \{z : |z| = r\}$, for $r > 0$. This norm is defined by
\[ \|P_n\|_{p,r} = \left( \frac{1}{2\pi} \int_0^{2\pi} |P_n(re^{it})|^p \, dt \right)^{1/p}, \quad p > 0. \]
Using the simple change of variable $z \mapsto rz$, we obtain from the Bernstein inequality on the unit circle that
\[ \|P_n'\|_{p,r} \le \frac{n}{r} \|P_n\|_{p,r}, \quad 0 < p \le \infty, \]
where equality holds for polynomials of the form $P_n(z) = cz^n$. However, we show that in the randomized model of this problem, the Bernstein-Markov factors remain bounded as $n \to \infty$ with probability one, for all $r \in (0,1)$. At a heuristic level, this is because for $z$ such that $|z| = r$, we can view $P_n(z)$ as a partial sum of a random power series $F(z)$. Hence one expects the limit of $M_n$ to be the corresponding ratio of norms for $F'$ and $F$.

Theorem 3. Suppose that $0 < p \le \infty$ and $r \in (0,1)$. If $\{A_k\}$ are i.i.d. complex random variables with $E[\log^+ |A_0|] < \infty$, then there exists a positive random variable $X_{p,r}$ such that
\[ 0 < \lim_{n\to\infty} \frac{\|P_n'\|_{p,r}}{\|P_n\|_{p,r}} = X_{p,r} < \infty \quad \text{a.s.} \]
If $E[|A_0|] < \infty$, and for some $\varepsilon > 0$ the distribution of $|A_0|$ is absolutely continuous on $[0,\varepsilon]$, with the density bounded on $[0,\varepsilon]$, then
\[ \sup_{n \in \mathbb{N}} E\left[ \frac{\|P_n'\|_{p,r}}{\|P_n\|_{p,r}} \right] < \infty. \]

The second statement above admits a stronger result when $p = 2$. This we collect as a proposition.

Proposition 4. Let $r \in (0,1)$. Let $\{A_k\}$ be i.i.d. complex valued random variables with $E(|A_0|^2) < \infty$. Then there exists a positive random variable $X_{2,r}$ such that
\[ 0 < \lim_{n\to\infty} \frac{\|P_n'\|_{2,r}}{\|P_n\|_{2,r}} = X_{2,r} < \infty \quad \text{a.s.} \]
Furthermore,
\[ \lim_{n\to\infty} E\left[ \frac{\|P_n'\|_{2,r}}{\|P_n\|_{2,r}} \right] = E[X_{2,r}] < \infty. \]
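The boundedness asserted in Theorem 3 and Proposition 4 can also be observed numerically from the coefficient formulas $\|P_n\|_{2,r}^2 = \sum_k |A_k|^2 r^{2k}$ and $\|P_n'\|_{2,r}^2 = \sum_k k^2 |A_k|^2 r^{2k-2}$ used in the proof of Proposition 4 below. In the following minimal sketch, numpy, the Gaussian coefficients, the radius $r = 1/2$, the seed, and the degrees are illustrative choices, not assumptions of the theorem.

```python
import numpy as np

# Illustration of Theorem 3 / Proposition 4: on a circle of radius r < 1,
# M_n = ||P_n'||_{2,r} / ||P_n||_{2,r} increases in n but stays bounded,
# converging to a finite random limit X_{2,r} along a fixed sample path.

rng = np.random.default_rng(1)
r = 0.5
N = 2000
A = rng.standard_normal(N + 1)                  # one fixed sample path of coefficients

def markov_factor_2r(coeffs, r):
    k = np.arange(len(coeffs))
    num = np.sum(k**2 * np.abs(coeffs) ** 2 * r ** (2 * k - 2))
    den = np.sum(np.abs(coeffs) ** 2 * r ** (2 * k))
    return np.sqrt(num / den)

for n in [10, 50, 200, 1000, 2000]:
    print(n, markov_factor_2r(A[: n + 1], r))   # bounded in n, unlike the bound n/r
```

A single sample path makes the monotone convergence to $X_{2,r}$ visible; contrast this with the deterministic bound $n/r$, which grows linearly.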

While the Bernstein-Markov factors are typically bounded on circles of radii less than one, we now show that they are asymptotic to $n/R$ (i.e., they have maximal growth) for the $L^2$ norms on circles of radii $R > 1$.

Theorem 5. Assume that $\{A_k\}$ are i.i.d. complex valued random variables with $E[|A_0|^2] < \infty$. If for some $\varepsilon > 0$ the distribution of $|A_0|^2$ is absolutely continuous on $[0,\varepsilon]$, and its density is bounded on $[0,\varepsilon]$, then for any $R > 1$ we have
\[ \frac{\|P_n'\|_{2,R}}{n \|P_n\|_{2,R}} \to \frac{1}{R} \quad \text{in probability as } n \to \infty. \]
As a consequence, we obtain that
\[ \lim_{n\to\infty} E\left[ \frac{\|P_n'\|_{2,R}}{n \|P_n\|_{2,R}} \right] = \frac{1}{R}, \quad R > 1. \]

We return to the supremum norm on the unit circle in the last two results. Results on the asymptotic behaviour of the norm of random Littlewood polynomials are proved in [5], see also [6].

Theorem 6. If $\{A_k\}$ are i.i.d. complex valued random variables with common distribution invariant under complex conjugation, then the expected Bernstein-Markov factors in the sup norm on the unit circle satisfy
\[ E\left[ \frac{\|P_n'\|_\infty}{n \|P_n\|_\infty} \right] \ge \frac{1}{2}. \]

It is clear that the result applies to all real i.i.d. random coefficients, as well as many standard complex ones. Before we state our next theorem, we recall that a random variable $X$ is said to have the standard complex normal distribution, denoted by $N_{\mathbb{C}}(0,1)$, if it has density $\frac{1}{\pi} \exp(-|z|^2)$ in the complex plane $\mathbb{C}$. We note for future reference here that if $X \sim N_{\mathbb{C}}(0,1)$, then $|X|^2 \sim E(1)$, the exponential distribution with parameter $1$.

Theorem 7. If $\{A_k\}$ are i.i.d. complex valued random variables with the $N_{\mathbb{C}}(0,1)$ distribution, then the expected Bernstein-Markov factors in the sup norm on the unit circle satisfy
\[ \frac{1}{2} \le \liminf_{n\to\infty} E\left[ \frac{\|P_n'\|_\infty}{n \|P_n\|_\infty} \right] \le \limsup_{n\to\infty} E\left[ \frac{\|P_n'\|_\infty}{n \|P_n\|_\infty} \right] \le \frac{\sqrt{2}}{\sqrt{3}}. \]

Remarks: The $L^2$ norm of polynomials on the unit circle has an explicit formula in terms of the coefficients of the polynomial, and this helps to derive almost sure convergence of the Bernstein-Markov factors as in Theorem 2. For general $p \in (0,\infty)$, $p \ne 2$, one does not have such a formula. However, by proving convergence in distribution of certain related

random variables, Borwein and Lockhart [1] derive their result. We believe that almost sure convergence holds for all $p > 0$, and it would be interesting to see a proof of such a result. In Theorem 5, we once again expect almost sure convergence, although we have not been able to prove it so far. Finally, for Theorem 7, we believe that the limiting value of the scaled average Bernstein-Markov factors exists, and that this limit is $1/\sqrt{3}$.

2 Proofs

Proof of Theorem 1. Under the conditions stated, Theorems 1 and 2 in [1] give
\[ \frac{\|P_n\|_p}{\sqrt{n}} \to \left( \Gamma(1 + p/2) \right)^{1/p} \quad \text{and} \quad \frac{\|P_n'\|_p}{\sqrt{n^3}} \to \frac{\left( \Gamma(1 + p/2) \right)^{1/p}}{\sqrt{3}}, \]
where in both cases convergence holds in probability. Combining the two displays gives the first result. Taking expectations and applying the dominated convergence theorem yields the result about convergence of the expected Bernstein-Markov factors.

Proof of Theorem 2. It is an easy computation to see that if $P_n(z) = \sum_{k=0}^n A_k z^k$, then
\[ \|P_n\|_2 = \left( \sum_{k=0}^n |A_k|^2 \right)^{1/2} \quad \text{and} \quad \|P_n'\|_2 = \left( \sum_{k=1}^n k^2 |A_k|^2 \right)^{1/2}. \]
Therefore we obtain that
\[ \frac{\|P_n'\|_2}{n \|P_n\|_2} = \frac{1}{n} \left( \frac{\sum_{k=1}^n k^2 |A_k|^2}{\sum_{k=0}^n |A_k|^2} \right)^{1/2}. \qquad (2.1) \]
By scaling, we can assume without loss of generality that $E[|A_k|^2] = 1$. With this assumption, since the $|A_k|^2$ are i.i.d., the Law of Large Numbers gives
\[ \frac{1}{n+1} \sum_{k=0}^n |A_k|^2 \to 1 \quad \text{a.s.} \qquad (2.2) \]
To deal with the numerator in (2.1) we use the following lemma, which is a strengthening of the Law of Large Numbers, cf. Section 5.2, Theorem 3 in [2].

Lemma 8. Let $\{X_n\}$ be i.i.d. random variables with finite mean. Let $\{a_n\}$ be positive constants with $S_n = \sum_{k=1}^n a_k$, such that $b_n = S_n/a_n$ satisfies $b_n^2 \sum_{k=n}^\infty b_k^{-2} = O(n)$. Then
\[ \frac{1}{S_n} \sum_{k=1}^n a_k X_k \to E[X_1] \quad \text{a.s.} \]
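As a quick illustration of Lemma 8 with the weights $a_k = k^2$ used in the next step (which satisfy its hypothesis), one can run a minimal simulation; numpy, the exponential(1) choice of $X_k$, the seed, and the sample sizes are illustrative assumptions only.

```python
import numpy as np

# Illustration of Lemma 8 (a weighted strong law) with the weights a_k = k^2
# used in the proof of Theorem 2: (1/S_n) * sum_{k<=n} k^2 X_k -> E[X_1] a.s.

rng = np.random.default_rng(2)
N = 200_000
X = rng.exponential(1.0, size=N + 1)        # i.i.d. X_k with E[X_1] = 1 (illustrative choice)

k = np.arange(N + 1)
weighted = np.cumsum(k**2 * X)              # partial sums of k^2 * X_k
S = np.cumsum(k**2).astype(float)           # S_n = sum_{k<=n} k^2

for n in [100, 1_000, 10_000, 100_000]:
    print(n, weighted[n] / S[n])            # drifts toward E[X_1] = 1 as n grows
```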

Continuing with the proof of Theorem 2, we apply Lemma 8 with $X_k = |A_k|^2$, $a_k = k^2$, and $S_n = \sum_{k=1}^n k^2$ to obtain
\[ \frac{1}{S_n} \sum_{k=1}^n k^2 |A_k|^2 \to 1 \quad \text{a.s.} \qquad (2.3) \]
Next, we write
\[ \left( \frac{\sum_{k=1}^n k^2 |A_k|^2}{\sum_{k=0}^n |A_k|^2} \right)^{1/2} = \left( \frac{S_n}{n+1} \cdot \frac{\frac{1}{S_n} \sum_{k=1}^n k^2 |A_k|^2}{\frac{1}{n+1} \sum_{k=0}^n |A_k|^2} \right)^{1/2}. \qquad (2.4) \]
Dividing equation (2.4) by $n$ and then letting $n \to \infty$, we obtain from (2.2) and (2.3) that
\[ \lim_{n\to\infty} \frac{\|P_n'\|_2}{n \|P_n\|_2} = \frac{1}{\sqrt{3}} \quad \text{a.s.} \]
A simple use of the Bounded Convergence Theorem yields
\[ \lim_{n\to\infty} E\left[ \frac{\|P_n'\|_2}{n \|P_n\|_2} \right] = \frac{1}{\sqrt{3}} \]
and finishes the proof of Theorem 2.

Proof of Theorem 3. Our assumption $E[\log^+ |A_0|] < \infty$ implies that $\limsup_{n\to\infty} |A_n|^{1/n} \le 1$ a.s., see, e.g., [7]. Therefore, with probability one, both series $\sum_{k=0}^\infty A_k z^k$ and $\sum_{k=1}^\infty k A_k z^{k-1}$ converge uniformly on compact subsets of the unit disk to the random analytic functions $F \not\equiv 0$ and $F' \not\equiv 0$, respectively. This means $P_n$ converges uniformly on $|z| = r$ to $F$, and $P_n'$ converges uniformly on $|z| = r$ to $F'$. As a consequence, we obtain that
\[ \lim_{n\to\infty} \|P_n\|_{p,r} = \|F\|_{p,r} \ne 0 \quad \text{and} \quad \lim_{n\to\infty} \|P_n'\|_{p,r} = \|F'\|_{p,r} \ne 0 \quad \text{a.s.} \]
Hence the first part of this theorem follows. For the second part, we need to show that the expected Bernstein-Markov factors are uniformly bounded for all $n \in \mathbb{N}$. We start with deterministic estimates for the norms of $P_n$ and $P_n'$. It is immediate that
\[ \|P_n'\|_{p,r} \le \|P_n'\|_{\infty,r} \le \sum_{k=1}^n k |A_k| r^{k-1}, \quad 0 < p \le \infty. \]

If $p \ge 1$ then we use the elementary estimate $\|P_n\|_1 \ge |A_k|$ to give the lower bound
\[ \|P_n\|_{p,r} \ge \|P_n\|_{1,r} = \left\| \sum_{k=0}^n A_k r^k z^k \right\|_1 \ge |A_k| r^k, \quad k = 0, 1, \ldots, n. \]
Hence we have for all $P_n$ that
\[ \|P_n\|_{p,r} \ge \frac{r}{2} \left( |A_0| + |A_1| \right), \quad p \ge 1. \]
If $p \in (0,1)$ then we use a theorem of Hardy and Littlewood (cf. Theorem 6.2 in [3, p. 95]) to estimate
\[ \|P_n\|_{p,r} = \left\| \sum_{k=0}^n A_k r^k z^k \right\|_p \ge c_p r \left( |A_0|^p + |A_1|^p \right)^{1/p} \ge c_p r \left( |A_0| + |A_1| \right), \quad 0 < p < 1, \]
where $c_p > 0$ depends only on $p$. It follows that for all $p \in (0,\infty]$
\[ \frac{\|P_n'\|_{p,r}}{\|P_n\|_{p,r}} \le \frac{\sum_{k=1}^n k |A_k| r^{k-1}}{r \min(c_p, 1/2) \left( |A_0| + |A_1| \right)} = \frac{1}{r \min(c_p, 1/2)} \left( \frac{|A_1|}{|A_0| + |A_1|} + \frac{\sum_{k=2}^n k |A_k| r^{k-1}}{|A_0| + |A_1|} \right) \le \frac{1}{r \min(c_p, 1/2)} \left( 1 + \frac{\sum_{k=2}^n k |A_k| r^{k-1}}{|A_0| + |A_1|} \right). \]
Taking expectations, and using independence of the $A_k$, we obtain that
\[ E\left[ \frac{\|P_n'\|_{p,r}}{\|P_n\|_{p,r}} \right] \le \frac{1}{r \min(c_p, 1/2)} \left( 1 + E\left[ \frac{1}{|A_0| + |A_1|} \right] E\left[ \sum_{k=2}^n k |A_k| r^{k-1} \right] \right) = \frac{1}{r \min(c_p, 1/2)} \left( 1 + E[|A_0|]\, E\left[ \frac{1}{|A_0| + |A_1|} \right] \sum_{k=2}^n k r^{k-1} \right). \]
Since $\sum_{k=2}^n k r^{k-1} \le \sum_{k=2}^\infty k r^{k-1} < \infty$ for $0 < r < 1$, it remains to show that $E\left[ \left( |A_0| + |A_1| \right)^{-1} \right]$ is finite under the additional assumption on the density $f$ of $|A_0|$. Let $Z = |A_0| + |A_1|$, and denote the density function of this random variable on $[0, \varepsilon]$ by $g$. It is known that $g$ is given by the convolution $f * f$. We finish the proof with the following estimate:
\[ E[1/Z] \le \int_0^\varepsilon \frac{1}{x} \left( \int_0^x f(x-t) f(t) \, dt \right) dx + \frac{P(Z > \varepsilon)}{\varepsilon} \le \varepsilon \sup_{x \in [0,\varepsilon]} \frac{1}{x} \int_0^x f(x-t) f(t) \, dt + 1/\varepsilon \le \varepsilon \sup_{x \in [0,\varepsilon]} f^2(x) + 1/\varepsilon < \infty. \]
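For contrast, the maximal growth on circles of radius $R > 1$ asserted in Theorem 5 can be observed with the same kind of coefficient computation. The sketch below is only an illustration; numpy, the Gaussian coefficients, the radius $R = 2$, the seed, and the degrees are assumptions made for the example, and the weights are rescaled by $R^{-2n}$ purely to avoid floating-point overflow.

```python
import numpy as np

# Illustration of Theorem 5: for R > 1, ||P_n'||_{2,R} / (n ||P_n||_{2,R}) -> 1/R
# in probability. Uses ||P_n||_{2,R}^2 = sum |A_k|^2 R^(2k) and
# ||P_n'||_{2,R}^2 = sum k^2 |A_k|^2 R^(2k-2), with both sums rescaled by R^(-2n).

rng = np.random.default_rng(3)
R = 2.0

def scaled_factor_2R(coeffs, R):
    n = len(coeffs) - 1
    k = np.arange(n + 1, dtype=float)
    w = R ** (2.0 * (k - n))                    # R^(2k) / R^(2n); small k underflow to 0 harmlessly
    num = np.sqrt(np.sum(k**2 * np.abs(coeffs) ** 2 * w)) / R
    den = np.sqrt(np.sum(np.abs(coeffs) ** 2 * w))
    return num / den                            # equals ||P_n'||_{2,R} / ||P_n||_{2,R}

for n in [10, 100, 1000, 4000]:
    A = rng.standard_normal(n + 1)
    print(n, scaled_factor_2R(A, R) / n, 1 / R) # ratio / n should approach 1/R
```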

Proof of Proposition 4. A direct computation shows that $\|P_n\|_{2,r} = \left( \sum_{k=0}^n |A_k|^2 r^{2k} \right)^{1/2}$, and similarly $\|P_n'\|_{2,r} = \left( \sum_{k=1}^n k^2 |A_k|^2 r^{2k-2} \right)^{1/2}$. Recall that the Bernstein-Markov factor is $M_n = \|P_n'\|_{2,r} / \|P_n\|_{2,r}$. It is straightforward to see, by considering the differences $M_{n+1} - M_n$, that $M_n$ increases with $n$ and that the limit $\lim_{n\to\infty} M_n = X_{2,r}$ exists with probability one, where
\[ X_{2,r} = \left( \frac{\sum_{k=1}^\infty k^2 |A_k|^2 r^{2k-2}}{\sum_{k=0}^\infty |A_k|^2 r^{2k}} \right)^{1/2}. \]
After applying the monotone convergence theorem, it remains to show that $E[X_{2,r}] < \infty$. For this, we first observe that since $A_0$ is a non-trivial random variable, there exist $\alpha > 0$ and $0 < c < 1$ such that $P(|A_0|^2 < \alpha) = c$. Applying the Cauchy-Schwarz inequality, we obtain that
\[ E(X_{2,r}) \le \left( E\left[ \sum_{k=1}^\infty k^2 |A_k|^2 r^{2k-2} \right] \right)^{1/2} \left( E\left[ \frac{1}{\sum_{k=0}^\infty |A_k|^2 r^{2k}} \right] \right)^{1/2}. \]
Since $E(|A_k|^2) < \infty$, the first term has finite expectation. Hence it is enough to show that
\[ E\left[ \frac{1}{Z_{2,r}} \right] < \infty, \]
where $Z_{2,r} = \sum_{k=0}^\infty |A_k|^2 r^{2k}$. For simplicity of notation, we henceforth just write $Z$ for this random variable. As a preparation we claim that
\[ P\left( \alpha r^{2n+2} \le Z < \alpha r^{2n} \right) \le c^{n+1}, \quad n \ge 0. \qquad (2.5) \]
Assume for the moment that the estimate (2.5) is true. Then if $\sqrt{c} < r < 1$, we have
\[ E\left[ \frac{1}{Z} \right] = E\left[ \frac{1}{Z};\ Z \ge \alpha \right] + E\left[ \frac{1}{Z};\ 0 < Z < \alpha \right] \le \frac{1}{\alpha} + \sum_{n=0}^\infty E\left[ \frac{1}{Z};\ \alpha r^{2n+2} \le Z < \alpha r^{2n} \right] \le \frac{1}{\alpha} + \sum_{n=0}^\infty \frac{1}{\alpha r^{2n+2}} P\left( \alpha r^{2n+2} \le Z < \alpha r^{2n} \right) \le \frac{1}{\alpha} + \frac{1}{\alpha} \sum_{n=0}^\infty \left( \frac{c}{r^2} \right)^{n+1} < \infty, \]
where the last sum converges because of our choice of $r$. So $E[X_{2,r}] < \infty$ for $\sqrt{c} < r < 1$. Observe that for a fixed $n$, the quantity $r \|P_n'\|_{2,r} / \|P_n\|_{2,r}$ is monotonically increasing with respect to $r$. This

property can be shown by simply computing the derivative with respect to $r$ and checking that it is nonnegative. It follows that for $s \le \sqrt{c} < r < 1$,
\[ s X_{2,s} = \lim_{n\to\infty} \frac{s \|P_n'\|_{2,s}}{\|P_n\|_{2,s}} \le \lim_{n\to\infty} \frac{r \|P_n'\|_{2,r}}{\|P_n\|_{2,r}} = r X_{2,r} \]
almost surely. The latter shows $E[X_{2,r}] < \infty$ for all $r \in (0,1)$. It now remains to prove (2.5), which is easily done as follows:
\[ P\left( \alpha r^{2n+2} \le Z < \alpha r^{2n} \right) \le P\left( Z < \alpha r^{2n} \right) \le P\left( \sum_{k=0}^n |A_k|^2 r^{2k} < \alpha r^{2n} \right) \le \prod_{k=0}^n P\left( |A_k|^2 < \alpha r^{2n-2k} \right) \le \prod_{k=0}^n P\left( |A_k|^2 < \alpha \right) = c^{n+1}. \]

Proof of Theorem 5. As before, we have
\[ \frac{R^2 \|P_n'\|_{2,R}^2}{\|P_n\|_{2,R}^2} = \frac{\sum_{k=1}^n k^2 |A_k|^2 R^{2k}}{\sum_{k=0}^n |A_k|^2 R^{2k}}. \]
Denote $a_k = |A_k|^2$. After a scaling, we may assume without loss of generality that $E[a_k] = 1$. For $j \ge 0$, let $Y_j = \sum_{k=0}^j a_k R^{2k}$. By the Abel summation formula, we obtain that
\[ \sum_{k=1}^n k^2 a_k R^{2k} = n^2 \sum_{k=0}^n a_k R^{2k} - a_0 - \sum_{j=1}^{n-1} (2j+1) Y_j. \]
Therefore
\[ \frac{R^2 \|P_n'\|_{2,R}^2}{n^2 \|P_n\|_{2,R}^2} = \frac{\sum_{k=1}^n k^2 a_k R^{2k}}{n^2 \sum_{k=0}^n a_k R^{2k}} = 1 - \frac{a_0}{n^2 Y_n} - \frac{\sum_{j=1}^{n-1} (2j+1) Y_j}{n^2 Y_n}. \qquad (2.6) \]
Since $Y_n \ge a_0$, we see that $\frac{a_0}{n^2 Y_n} \to 0$ a.s., and hence in probability. We now show that the random variables
\[ U_n = \frac{\sum_{j=1}^{n-1} (2j+1) Y_j}{n^2 Y_n} \]
also converge to $0$ in probability, which proves this theorem. Convergence in probability in turn follows from $E[U_n^{1/2}] \to 0$ as $n \to \infty$. Indeed, applying the Cauchy-Schwarz inequality we obtain

\[ E[U_n^{1/2}] \le \left( \sum_{j=1}^{n-1} (2j+1) E[Y_j] \right)^{1/2} \left( E\left[ \frac{1}{n^2 Y_n} \right] \right)^{1/2}. \qquad (2.7) \]
Since $E[a_k] = 1$, we have that $E[Y_j] = \sum_{k=0}^j R^{2k}$. It is a simple matter to now see that $\sum_{j=1}^{n-1} (2j+1) E[Y_j] = O\left( n R^{2n} \right)$. To estimate the other term we use the fact that $Y_n \ge R^{2n-2} (a_{n-1} + a_n)$ to obtain
\[ E\left[ \frac{1}{n^2 Y_n} \right] \le \frac{1}{n^2 R^{2n-2}} E\left[ \frac{1}{a_{n-1} + a_n} \right] = O\left( \frac{1}{n^2 R^{2n}} \right), \qquad (2.8) \]
where we used the same idea as at the end of the proof of Theorem 3 to conclude that $E[1/(a_{n-1} + a_n)] < \infty$. Combining the two estimates and using (2.7), we obtain that $E[U_n^{1/2}] \to 0$ as $n \to \infty$ and finish the proof.

Proof of Theorem 6. For any polynomial $p_n(z) = \sum_{k=0}^n a_k z^k$ with complex coefficients, we associate its conjugate reciprocal polynomial
\[ q_n(z) := z^n \overline{p_n(1/\bar{z})} = \sum_{k=0}^n \overline{a_{n-k}}\, z^k. \]
Obviously, $|p_n(z)| = |q_n(z)|$ holds for $|z| = 1$. It is also clear that the polynomial $p_n q_n$ is self-reciprocal, i.e., it coincides with its conjugate reciprocal. For such polynomials, the Bernstein-Markov factor in the sup norm on the unit circle is known to be equal to the degree divided by 2, see [8, p. 527]. Hence we obtain that
\[ n \|p_n q_n\|_\infty = \|(p_n q_n)'\|_\infty = \|p_n' q_n + p_n q_n'\|_\infty \le \|p_n' q_n\|_\infty + \|p_n q_n'\|_\infty. \]
Since $|p_n| = |q_n|$ on the unit circle, we have $\|p_n q_n\|_\infty = \|p_n\|_\infty \|q_n\|_\infty$, and clearly $\|p_n' q_n\|_\infty \le \|p_n'\|_\infty \|q_n\|_\infty$. It follows that
\[ \frac{\|p_n'\|_\infty}{\|p_n\|_\infty} + \frac{\|q_n'\|_\infty}{\|q_n\|_\infty} \ge n. \qquad (2.9) \]
Applying (2.9) with $p_n = P_n$ being our random polynomial, and taking into account that the expected values of both additive terms on the left of (2.9) are identical under our assumptions on random coefficients, we arrive at the required result.

Proof of Theorem 7. The inequality involving the liminf of the Bernstein-Markov factors follows immediately from Theorem 6. Thus we need to prove the inequality involving the limsup. For this we will give asymptotic bounds for $\|P_n\|_\infty$ and $\|P_n'\|_\infty$ separately. We start with a bound for $\|P_n\|_\infty$.

Proposition 9. $\displaystyle \liminf_{n\to\infty} \frac{\|P_n\|_\infty}{\sqrt{n \log(n)}} \ge 1$ a.s.

Proof. We first observe that for $z$ with $|z| = 1$, $P_n(z)$ is complex Gaussian, distributed as $N_{\mathbb{C}}(0, n+1)$. This means that for such $z$, $\frac{|P_n(z)|^2}{n+1} \sim E(1)$, where $E(1)$ denotes the exponential random variable with parameter $1$. For $0 \le l \le n$, let $z_l = \exp\left( \frac{2\pi i l}{n+1} \right)$. Note that for $0 \le j, l \le n$ with $j \ne l$ we have
\[ E\left[ P_n(z_j) \overline{P_n(z_l)} \right] = \sum_{k=0}^n (z_j \bar{z}_l)^k = 0. \qquad (2.10) \]
Hence the $P_n(z_j)$ are uncorrelated for different values of $j$. But since they are Gaussian, this means that $P_n(z_j)$, $0 \le j \le n$, are independent. This in turn implies the independence of $X_{n,j} = \frac{|P_n(z_j)|^2}{n+1}$, $0 \le j \le n$. Therefore
\[ \frac{\|P_n\|_\infty^2}{(n+1) \log(n+1)} \ge \frac{\max_{0 \le j \le n} X_{n,j}}{\log(n+1)}. \qquad (2.11) \]
We need the following lemma, whose proof follows by a simple modification of the more well known result involving extremes of i.i.d. exponential random variables, see [4], page 107.

Lemma 10. Let $Y_{n,j}$, $n \in \mathbb{N}$, $1 \le j \le n$, be a triangular array of random variables, each of which is distributed as $E(1)$, and in which every row is jointly independent. Let $T_n = \max_{1 \le j \le n} Y_{n,j}$. Then
\[ \lim_{n\to\infty} \frac{T_n}{\log(n)} = 1 \quad \text{a.s.} \]
Applying Lemma 10 with $X_{n,j}$ in place of $Y_{n,j}$ and making use of (2.11), we obtain
\[ \liminf_{n\to\infty} \frac{\|P_n\|_\infty^2}{(n+1) \log(n+1)} \ge 1 \quad \text{a.s.} \]
This concludes the proof of Proposition 9.

Our next objective is to bound $\|P_n'\|_\infty$, which we do by means of

Proposition 11. $\displaystyle \limsup_{n\to\infty} \frac{\|P_n'\|_\infty}{\sqrt{n^3 \log(n)}} \le \sqrt{\frac{2}{3}}$ a.s.

Proof. Note that for each $z$ with $|z| = 1$, $\frac{|P_n'(z)|^2}{n(n+1)(2n+1)/6}$ is distributed as $E(1)$. We claim that for every $c > 2$,
\[ \sum_{n=1}^\infty P\left( \|P_n'\|_\infty^2 > c\, \frac{n(n+1)(2n+1)}{6} \log(n) \right) < \infty. \]

By the Borel-Cantelli lemma this will prove Proposition 11. To show convergence, let us take $\varepsilon > 0$ so small that $c_1 = (1-\varepsilon)^2 c > 2$, and then cover the unit circle by a net of size $\varepsilon/(n+1)$. Let $z_{1,\varepsilon}, z_{2,\varepsilon}, \ldots, z_{k,\varepsilon}$ be these points, where $k = (n+1)/\varepsilon$. We will need the following lemma, whose proof is well known, but for the reader's benefit we present it below.

Lemma 12. Let $Q$ be a polynomial of degree $n$. Let $t \in \partial\mathbb{D}$ be such that $|Q(t)| = \|Q\|_\infty$. Let $0 < \varepsilon < 1$. Then for all $z$ such that $|z - t| < \varepsilon/n$, we have $|Q(z)| \ge (1 - \varepsilon) \|Q\|_\infty$.

Proof of Lemma 12. From the integral representation, we have
\[ |Q(z) - Q(t)| \le |z - t|\, \|Q'\|_\infty \le |z - t|\, n\, \|Q\|_\infty, \]
where we used Bernstein's inequality (1.1) in the latter estimate. Using $|z - t| < \varepsilon/n$ and combining with the previous line, the result now follows from
\[ |Q(z)| \ge |Q(t)| - |Q(z) - Q(t)| \ge \|Q\|_\infty - \varepsilon \|Q\|_\infty = (1 - \varepsilon) \|Q\|_\infty. \]

Using the lemma along with the fact that for $z$ with $|z| = 1$, $\frac{|P_n'(z)|^2}{n(n+1)(2n+1)/6}$ is distributed as $E(1)$, we obtain
\[ P\left( \|P_n'\|_\infty^2 > c\, \frac{n(n+1)(2n+1)}{6} \log(n) \right) \le \sum_{j=1}^k P\left( \frac{|P_n'(z_{j,\varepsilon})|^2}{n(n+1)(2n+1)/6} > c_1 \log(n) \right) \le k\, n^{-c_1} \le C_{c,\varepsilon}\, n^{1 - c_1}, \]
and since $c_1 > 2$, the corresponding series converges and this proves Proposition 11.

Combining Propositions 9 and 11 yields
\[ \limsup_{n\to\infty} \frac{\|P_n'\|_\infty}{n \|P_n\|_\infty} \le \frac{\sqrt{2}}{\sqrt{3}} \quad \text{a.s.} \]
An application of Fatou's Lemma gives
\[ \limsup_{n\to\infty} E\left[ \frac{\|P_n'\|_\infty}{n \|P_n\|_\infty} \right] \le \frac{\sqrt{2}}{\sqrt{3}}, \]
which is the upper bound in Theorem 7.

Acknowledgment: We thank Manjunath Krishnapur for valuable discussions and references.

References

[1] P. Borwein and R. Lockhart, The expected $L_p$ norm of random polynomials, Proc. Amer. Math. Soc. 129 (2001), 1463-1472.

[2] Y. S. Chow and H. Teicher, Probability Theory: Independence, Interchangeability, Martingales, Springer-Verlag, 1978.

[3] P. L. Duren, Theory of $H^p$ Spaces, Dover, New York, 2000.

[4] A. Gut, Probability: A Graduate Course, Springer Texts in Statistics, Springer, 2005.

[5] G. Halász, On a result of Salem and Zygmund concerning random polynomials, Studia Sci. Math. Hungar. 8 (1973), 369-377.

[6] J. P. Kahane, Some Random Series of Functions, vol. 5, Cambridge Studies in Advanced Mathematics, Cambridge, 2nd edition, 1985.

[7] I. E. Pritsker, Asymptotic zero distribution of random polynomials spanned by general bases, Contemp. Math. 661 (2016), 121-140.

[8] Q. I. Rahman and G. Schmeisser, Analytic Theory of Polynomials, Clarendon Press, Oxford, 2002.

I. P.: Department of Mathematics, Oklahoma State University, Stillwater, OK 74078, USA
Email: igor@math.okstate.edu

K. R.: Tata Institute of Fundamental Research, Centre for Applicable Mathematics, Bengaluru, India 560065
Email: koushik@tifrbng.res.in