Extrema of log-correlated random variables: Principles and Examples
1 Extrema of log-correlated random variables: Principles and Examples. Louis-Pierre Arguin, Université de Montréal & City University of New York. Introductory School, IHP Trimester, CIRM, January
2 Acknowledgements. Thank you very much to the organizers for the invitation! Much of what I know on the topic I learned from my collaborators: Anton Bovier, Nicola Kistler, Olivier Zindy, David Belius; and my students: Samuel April, Jean-Sébastien Turcotte and Frédéric Ouimet. I am grateful for all the discussions and insights on the subject. There are many outstanding papers on the subject; I will not be able to reference everybody on the slides. See my webpage arguinlp/recherche.html for the slides and detailed complementary references.
3 What is the Statistics of Extremes? The statistics of extremes, or extreme value theory, in probability deals with questions about the maxima of a collection of random variables. Consider N random variables (X_i, i = 1, ..., N) on a probability space (Ω, F, P). In the limit N → ∞:
- What can be said about the r.v. max_{i=1,...,N} X_i? (Law of the maximum)
- What can be said about the joint law of the reordered collection X_(1) ≥ X_(2) ≥ X_(3) ≥ ...? (Order statistics or extremal process)
In this mini-course, we will mostly focus on the law of the maximum.
4 Statistics of Extremes. To keep in mind: max_{i=1,...,N} X_i is a functional of the process (X_i, i = 1, ..., N), like the sum. Our objectives are similar in spirit to the limit theorems for a sum Σ_{i=1}^N X_i of random variables in the limit N → ∞:
- Order of magnitude of the maximum ↔ Law of Large Numbers
- Fluctuation of the maximum ↔ Central Limit Theorem
Ultimately, we want to answer the following question. Problem: Find a_N and b_N such that (max_{i≤N} X_i − a_N)/b_N converges in law in the limit N → ∞. Identify the limit.
5 Statistics of Extremes: A brief history. Earlier works on the theory of extreme values focused on the case where the (X_i, i ≤ N) are IID or weakly correlated r.v. There we have a complete answer to the question: there are only three possible limit laws, Fréchet, Weibull or Gumbel.
- 1925: Tippett studied the largest values from samples of Gaussians.
- 1927: Fréchet studied distributions other than the Gaussian and obtains the Fréchet limit law.
- 1928: Fisher & Tippett find the two other limit laws.
- 1936: von Mises finds sufficient conditions to converge to the 3 classes.
- 1943: Gnedenko finds necessary and sufficient conditions.
- 1958: Gumbel writes the first book, Statistics of Extremes.
Figure: Gumbel
6 Statistics of Extremes: Motivation. There are applications of the theory of extreme values for IID r.v. in meteorology (floods, droughts, etc.). One goal of today's probability theory: find other limit classes for the maximum when the r.v.'s are STRONGLY CORRELATED. What are the motivations to look at strongly correlated r.v.?
- Finance: evidence of slowly decaying correlations for volatility.
- Physics: the behavior of systems in statistical physics is determined by the states of lowest energies. States are often correlated through the environment. Ex: spin glasses, polymers, growth models (KPZ, random matrices).
- Mathematics: the distribution of prime numbers seems to exhibit features of strongly correlated r.v. (Lecture 3).
Of course, there are many correlation structures that can be studied. We will focus on one class: LOG-CORRELATED models.
7 Outline. Lecture 1: 1. Warm-up: extrema of IID r.v. 2. Log-correlated Gaussian fields (LGF): branching random walk (BRW) and 2D Gaussian free field (2DGFF). 3. Three fundamental properties. 4. First order of the maximum (≈ LLN). Lecture 2: Intermezzo: relations to statistical physics. 5. Second order of the maximum (≈ refined LLN). 6. A word on convergence and order statistics. Lecture 3: Universality class of LGF: 7. The maxima of characteristic polynomials of unitary matrices. 8. The maxima of the Riemann zeta function.
8 General Setup. When we are dealing with correlations, it is convenient to index the r.v.'s by points in a metric space, say V_n with metric d. Choice of parametrization: (X_n(v), v ∈ V_n) with
- #V_n = 2^n,
- E[X_n(v)] = 0 for all v ∈ V_n,
- E[X_n(v)²] = σ² n.
For simplicity, assume that (X_n(v), v ∈ V_n) is a Gaussian process. Technical advantages: the covariance encodes the law, and comparison arguments (Slepian's lemma) may simplify some proofs. The principles that we will discuss hold (or are expected to hold) in general.
9 1. Warm-up: The maximum of IID variables. Consider (X_i, i = 1, ..., 2^n) IID Gaussians of variance σ² n. In this case it is easy to find a_n and b_n such that P((max_i X_i − a_n)/b_n ≤ x) converges. Note that a_n and b_n are defined up to constants, additive and multiplicative respectively. We obviously have
P((max_i X_i − a_n)/b_n ≤ x) = (P(X_1 ≤ b_n x + a_n))^{2^n} = (1 − P(X_1 > b_n x + a_n))^{2^n}.
We need to establish the convergence of 2^n P(X_1 > b_n x + a_n). This is more refined than a large deviation estimate.
10 1. Warm-up: Extrema of IID variables. Proposition: Consider (X_i, i = 1, ..., 2^n) IID Gaussians of variance σ² n. Then for
a_n = cn − (σ²/(2c)) log n, with c = √(2 log 2) σ,
we have P(max_i X_i − a_n ≤ x) → exp(−e^{−cx}), a Gumbel distribution. In other words,
max_{i ≤ 2^n} X_i = cn − (σ²/(2c)) log n + G,
a deterministic order plus a fluctuation G. We refer to cn as the first order of the maximum, and to −(σ²/(2c)) log n as the second order of the maximum. Our goal: establish a general method to prove similar results for log-correlated fields.
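The two deterministic orders of the proposition can be checked numerically from the exact law of the maximum, P(max_i X_i ≤ m) = Φ(m/(σ√n))^{2^n}. A minimal sketch (standard library only; the function name and the choice n = 20 are illustration choices, not from the slides):

```python
import math

def median_of_max(n, sigma=1.0):
    """Median m of the max of 2^n IID N(0, sigma^2 n) variables, found by
    bisection on P(max <= m) = Phi(m / (sigma * sqrt(n)))^(2^n)."""
    def cdf_max(m):
        phi = 0.5 * (1.0 + math.erf(m / (sigma * math.sqrt(2.0 * n))))
        return phi ** (2 ** n)
    lo, hi = 0.0, 10.0 * sigma * n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf_max(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = math.sqrt(2.0 * math.log(2.0))  # first-order constant for sigma = 1
m20 = median_of_max(20)
# m20 / 20 sits slightly below c = 1.177..., the gap matching the
# -(sigma^2 / (2c)) log n correction, and closes as n grows.
```

The median divided by n approaches c from below, at the logarithmic rate predicted by the second order.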
11 2. Log-correlated Gaussian Fields. (Figure: binary tree with leaves V_n; two leaves v, v′ and their branching point v ∧ v′.)
12 Log-correlated Gaussian fields. A Gaussian field (X_n(v), v ∈ V_n) is log-correlated if the covariance decays slowly (logarithmically) with the distance:
E[X_n(v) X_n(v′)] ≈ log ( 2^n / d(v, v′) ).
This is to be compared with decays like d(v, v′)^{−α} or e^{−d(v,v′)}. It implies that there is an exponential number of points whose correlation with v is of the order of the variance. Precisely, for 0 < r < 1 and a given v ∈ V_n,
#{ v′ ∈ V_n : E[X_n(v) X_n(v′)] ≥ r E[X_n(v)²] } ≈ 2^{(1−r)n}.
The correlations do not have to be exactly logarithmic: approximate or asymptotic log-correlation is enough.
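For the branching-random-walk metric of Example 1 below, where the covariance is σ² times the overlap v ∧ v′, this count is exact: the leaves correlated with v at level r form the subtree of depth ⌈rn⌉ containing v. A tiny sketch (function name is an illustration, not from the slides):

```python
import math

def high_correlation_count(n, r):
    """Number of leaves v' of a depth-n binary tree whose overlap with a
    fixed leaf v satisfies v ∧ v' >= r*n; with BRW covariance
    sigma^2 * (v ∧ v'), these are exactly the leaves whose correlation
    with v is at least r times the variance sigma^2 * n."""
    k = math.ceil(r * n)   # required number of shared generations
    return 2 ** (n - k)    # leaves of the subtree rooted at depth k

# For n = 20 and r = 0.5: 2^10 = 1024 of the 2^20 leaves.
```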
13 Example 1: Branching Random Walk. V_n: leaves of a binary tree of depth n. Let (Y_l) be IID N(0, σ²) attached to the edges, and set X_n(v) = Σ_{l=1}^n Y_l(v), the sum of the edge variables on the path from the root to the leaf v.
- Variance: E[X_n(v)²] = Σ_{l=1}^n E[Y_l(v)²] = σ² n.
- Covariance: E[X_n(v) X_n(v′)] = Σ_{l=1}^{v∧v′} E[Y_l(v)²] = σ² (v ∧ v′), where v ∧ v′ is the branching time (number of common edges) of v and v′.
For any 0 ≤ r ≤ 1, #{ v′ ∈ V_n : E[X_n(v) X_n(v′)] ≥ r E[X_n(v)²] } ≈ 2^{(1−r)n}.
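The covariance formula can be verified without any sampling: each leaf value is a sum of IID edge weights, so by linearity the covariance is σ² times the number of shared edges. A minimal sketch, encoding leaves as integers whose binary expansion is the root-to-leaf path (names are illustration choices):

```python
def leaf_edges(v, n):
    """Edges on the root-to-leaf path of leaf v in a depth-n binary tree;
    the edge at depth l is labelled by the length-l bit prefix of v."""
    return {(l, v >> (n - l)) for l in range(1, n + 1)}

def brw_cov(v, w, n, sigma2=1.0):
    """Cov[X_n(v), X_n(w)] for the BRW with IID N(0, sigma2) edge
    weights: only shared edges contribute, giving
    sigma2 * (number of shared edges) = sigma2 * (v ∧ w)."""
    return sigma2 * len(leaf_edges(v, n) & leaf_edges(w, n))

# n = 3: leaves 5 = 101 and 4 = 100 share two generations.
```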
14 Example 2: 2D Gaussian Free Field. V_n: square box in Z² with 2^n points. (X_n(v), v ∈ V_n) is the Gaussian field with
E[X_n(v) X_n(v′)] = E_v[ Σ_{k=0}^{τ} 1{S_k = v′} ],
where (S_k)_{k≥0} is the SRW starting at v and τ is the first exit time of V_n. Log-correlations: for v, v′ ∈ V_n far from the boundary,
E[X_n(v)²] = σ² n + O(1),  E[X_n(v) X_n(v′)] = (1/π) log ( 2^n / ‖v − v′‖² ) + O(1),
where σ² = (log 2)/π.
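The covariance here is the Green's function of the SRW killed on exiting the box. On a small box it can be computed by fixed-point iteration on the harmonic relation G(x, y) = 1{x = y} + (1/4) Σ_{z ∼ x} G(z, y), with G = 0 outside; this is a small-box illustration under my own choices of box size and iteration count, not the slides' V_n:

```python
def green_function(L, iters=300):
    """Green's function G(x, y) = E_x[number of visits to y before the
    SRW exits the L x L box], by fixed-point iteration on
    G(x, y) = 1{x=y} + (1/4) * sum over neighbours z of x of G(z, y),
    with G = 0 once the walk leaves the box."""
    V = [(i, j) for i in range(L) for j in range(L)]
    inside = set(V)
    G = {(x, y): 0.0 for x in V for y in V}
    for _ in range(iters):
        G = {(x, y): (1.0 if x == y else 0.0) + 0.25 * sum(
                 G[(z, y)]
                 for z in [(x[0] + 1, x[1]), (x[0] - 1, x[1]),
                           (x[0], x[1] + 1), (x[0], x[1] - 1)]
                 if z in inside)
             for x in V for y in V}
    return G

G = green_function(5)
# Symmetry of the covariance, and more visits (larger variance) in the
# bulk than at the corner, as the boundary effect predicts.
```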
15 3. Fundamental Properties. There are three fundamental properties of log-correlated random variables. They are well illustrated by the case of branching random walk.
1. Multiscale decomposition: X_n(v) = Σ_{l=1}^n Y_l(v). Define X_k(v) = Σ_{l=1}^k Y_l(v), 1 ≤ k ≤ n.
2. Self-similarity of scales: for a given v, (X_n(v′) − X_l(v′), v′ : v′ ∧ v ≥ l) is a BRW on the 2^{n−l} points {v′ : v′ ∧ v ≥ l}.
3. Dichotomy of scales: E[Y_l(v) Y_l(v′)] = σ² if l ≤ v ∧ v′, and 0 if l > v ∧ v′.
16 Fundamental Properties. We now verify the properties for the 2DGFF (X_n(v), v ∈ V_n). Reminder: it is good to see (X_n(v), v ∈ V_n) as vectors in a Gaussian Hilbert space:
- E[X_n(v)²] is the square norm of the vector;
- E[X_n(v) X_n(v′)] is the inner product.
For B ⊂ V_n, the conditional expectation of X_n(v) on {X_n(v′), v′ ∈ B},
E[X_n(v) | {X_n(v′), v′ ∈ B}] = Σ_{v′∈B} a_{vv′} X_n(v′),
is the projection on the subspace spanned by the X_n(v′), v′ ∈ B. In particular, it is a linear combination of the X_n(v′), hence also Gaussian. Orthogonal decomposition:
X_n(v) = (X_n(v) − E[X_n(v) | {X_n(v′), v′ ∈ B}]) + E[X_n(v) | {X_n(v′), v′ ∈ B}].
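The projection can be computed concretely from the covariance matrix: the coefficients a solve the normal equations, and orthogonality forces the residual to be uncorrelated with the conditioning variables. A minimal sketch for conditioning X3 on (X1, X2) (function names and the example matrix are illustration choices):

```python
def condition_on_two(C):
    """Given the 3x3 covariance matrix C of a centered Gaussian vector
    (X1, X2, X3), return (a1, a2) with E[X3 | X1, X2] = a1 X1 + a2 X2,
    i.e. solve the normal equations [[C11, C12],[C21, C22]] a = [C13, C23]."""
    (c11, c12, c13), (_, c22, c23) = C[0], C[1]
    det = c11 * c22 - c12 * c12
    a1 = (c13 * c22 - c23 * c12) / det
    a2 = (c23 * c11 - c13 * c12) / det
    return a1, a2

def residual_cov(C):
    """Covariances of the residual X3 - E[X3 | X1, X2] with X1 and X2;
    orthogonality of the projection forces both to vanish."""
    a1, a2 = condition_on_two(C)
    r1 = C[0][2] - a1 * C[0][0] - a2 * C[0][1]
    r2 = C[1][2] - a1 * C[0][1] - a2 * C[1][1]
    return r1, r2

# Example: C = [[2,1,1],[1,2,1],[1,1,2]] gives a1 = a2 = 1/3 and
# residual covariances (0, 0).
```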
17 Fundamental Properties 1: Multiscale decomposition. X_n(v) = Σ_{l=1}^n Y_l(v). Consider B_l(v), a ball around v containing 2^{n−l} points. Define F_l(v) = σ{X_n(v′) : v′ ∉ B_l(v)} and X_l(v) = E[X_n(v) | F_l(v)], l < n. Then (X_l(v), l ≤ n) is a martingale.
Lemma (Multiscale): The increments Y_l(v) = X_l(v) − X_{l−1}(v), l = 1, ..., n, are independent Gaussians.
18 Fundamental Properties 2: Self-Similarity.
Lemma (Self-Similarity): For a given v, (X_n(v′) − X_l(v′), v′ : v′ ∧ v ≥ l) has the original law on {v′ : v′ ∧ v ≥ l}.
If B ⊂ V_n, write X_B(v) = E[X_n(v) | {X_n(v′), v′ ∉ B}]. Then (X_n(v) − X_B(v), v ∈ B) is a GFF on B. In our case, the B are the neighborhoods B_l(v) containing 2^{n−l} points, and
E[(X_n(v) − X_l(v))²] = σ² (n − l) + O(1).
Hence the Y_l's have variance σ² (1 + o(1)): linearity of scales!
Warning! If v′ ∈ B_l(v), it is not true that X_l(v′) = X_l(v) (as in BRW)... but close!
19 Fundamental Properties 3: Dichotomy.
E[Y_l(v) Y_l(v′)] = σ² if l ≤ v ∧ v′, and 0 if l > v ∧ v′.
Define v ∧ v′ := the greatest l such that B_l(v) ∩ B_l(v′) ≠ ∅.
Lemma (Gibbs-Markov Property): For B ⊂ V_n, E[X_n(v) | {X_n(v′), v′ ∈ B^c}] = Σ_{u ∈ ∂B} p_u(v) X_n(u).
This implies that X_n(v) − X_l(v) = Σ_{k=l+1}^n Y_k(v) is independent of Y_l(v′) for all l such that v ∧ v′ < l.
20 Fundamental Properties 3: Dichotomy.
E[Y_l(v) Y_l(v′)] = σ² if l ≤ v ∧ v′, and 0 if l > v ∧ v′.
Lemma (Markov Property): For B ⊂ V_n, E[X_n(v) | {X_n(v′), v′ ∉ B}] = Σ_{u ∈ ∂B} p_u(v) X_n(u).
This implies that X_n(v) − X_l(v) = Σ_{k=l+1}^n Y_k(v) is independent of Y_l(v′) for all l such that v ∧ v′ < l. The decoupling is not exact at the branching point, but it is for larger scales soon after.
21 Fundamental Properties 3: Splitting.
E[Y_l(v) Y_l(v′)] = σ² if l ≤ v ∧ v′, and 0 if l > v ∧ v′.
Lemma: For all l such that ‖v − v′‖² < 2^{n−l} (the l-neighborhoods touch),
E[(X_l(v) − X_l(v′))²] = O(1).
This implies E[X_l(v) X_l(v′)] = E[X_l(v)²] + O(1) = σ² l + O(1), and thus E[Y_l(v) Y_l(v′)] = σ² + o(1).
22 Lecture goals. For the remaining part of the lectures, our specific goals are to prove the deterministic orders of the maximum using the 3 properties.
Theorem:
1. First order: lim_{n→∞} (max_{v∈V_n} X_n(v)) / n = √(2 log 2) σ =: c in probability.
2. Second order: (max_{v∈V_n} X_n(v) − cn) / log n → −(3/2)(σ²/c) in probability.
In other words, with large probability,
max_{v∈V_n} X_n(v) = cn − (3/2)(σ²/c) log n + O(ε log n).
Lectures 1 and 2: 2DGFF (BRW as a guide). Lecture 3: toy model of the Riemann zeta function.
23 4. The first order of the maximum.
lim_{n→∞} (max_{v∈V_n} X_n(v)) / n = √(2 log 2) σ.
(Figure: tree with block increments W_1(v), W_2(v), W_3(v), W_3(v′), and two leaves v, v′ with (1/K)n < v ∧ v′ ≤ (2/K)n.)
24 First order of the maximum. Let (X_n(v), v ∈ V_n) be a Gaussian field with #V_n = 2^n and E[X_n(v)²] = σ² n.
Theorem (First order of the maximum): If (X_n(v), v ∈ V_n) satisfies the three properties (multiscale, self-similarity, splitting), then
lim_{n→∞} (max_{v∈V_n} X_n(v)) / n = √(2 log 2) σ =: c in probability.
This was shown by Biggins '77 for the BRW, and by Bolthausen, Deuschel & Giacomin (2001) for the GFF. We follow here the general method of Kistler (2013). The proof has two parts:
1. Upper bound: P( max_{v∈V_n} X_n(v) > (c + δ)n ) → 0.
2. Lower bound: P( max_{v∈V_n} X_n(v) > (c − δ)n ) → 1.
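The first order is already visible in simulation at modest depths. A sketch simulating one BRW level by level, duplicating each partial sum and adding independent edge weights (seed, depth and tolerance are illustration choices, not from the slides):

```python
import math
import random

def brw_max(n, sigma=1.0, seed=7):
    """Simulate one branching random walk to depth n (2^n leaves) and
    return the maximum leaf value. Each level duplicates every partial
    sum and adds an independent N(0, sigma^2) edge weight to each copy."""
    rng = random.Random(seed)
    X = [0.0]
    for _ in range(n):
        X = [x + rng.gauss(0.0, sigma) for x in X for _ in (0, 1)]
    return max(X)

c = math.sqrt(2.0 * math.log(2.0))   # first-order constant, sigma = 1
m = brw_max(18)
# m / 18 is typically a bit below c = 1.177..., consistent with the
# -(3/2)(sigma^2/c) log n second-order correction.
```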
25 Upper bound: Plain Markov. This is the easy part. Consider the number of exceedances of a level a:
N_n(a) = #{ v ∈ V_n : X_n(v) > a }.
Clearly, by Markov's inequality (or the union bound),
P( max_{v∈V_n} X_n(v) > a ) = P( N_n(a) ≥ 1 ) ≤ E[N_n(a)].
Note that correlations play no role here! By the Gaussian tail estimate, with a = (c + δ)n,
E[N_n(a)] = 2^n P(X_n(v) > a) ≤ 2^n e^{−(c+δ)² n / (2σ²)} ≤ e^{−√(2 log 2) δ n / σ},
which goes to zero exponentially fast as n → ∞. The constant c = √(2 log 2) σ is designed to exactly counterbalance the entropy: 2^n e^{−c² n / (2σ²)} = 1.
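The entropy-vs-tail balance is easy to see numerically with the exact Gaussian tail. A small sketch (function name and δ = 0.2 are illustration choices):

```python
import math

def expected_exceedances(n, delta, sigma=1.0):
    """E[N_n(a)] = 2^n * P(X > a) for a = (c + delta) * n, where
    X ~ N(0, sigma^2 n) and c = sigma * sqrt(2 log 2); the exact
    Gaussian tail is 0.5 * erfc(a / (sigma * sqrt(2 n)))."""
    c = sigma * math.sqrt(2.0 * math.log(2.0))
    a = (c + delta) * n
    tail = 0.5 * math.erfc(a / (sigma * math.sqrt(2.0 * n)))
    return (2.0 ** n) * tail

# With delta = 0.2 the expectation decays in n and stays below the
# slide's bound exp(-sqrt(2 log 2) * delta * n / sigma).
```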
26 Lower bound: Multiscale second moment. The only tool at our disposal to get a lower bound for the right tail of a nonnegative integer-valued random variable N is the Paley-Zygmund inequality:
P(N ≥ 1) ≥ E[N]² / E[N²].
We would like to show that for a = (c − δ)n,
P( N_n(a) ≥ 1 ) ≥ E[N_n(a)]² / E[N_n(a)²] → 1.
The correlations play a role in the denominator. Good news: we only need an upper bound on it. We have
E[N_n(a)²] = Σ_{v,v′∈V_n} P( X_n(v) > a, X_n(v′) > a ).
If the r.v. were independent,
E[N_n(a)²] = Σ_{v≠v′} P(X_n(v) > a)² + Σ_v P(X_n(v) > a) = E[N_n(a)]² + Σ_v P(X_n(v) > a)(1 − P(X_n(v) > a)),
with the term E[N_n(a)]² dominant for a small!
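The Paley-Zygmund inequality is elementary to check on explicit distributions, with equality exactly when N is constant on {N > 0}. A minimal sketch (function name and example pmfs are illustration choices):

```python
def paley_zygmund_gap(pmf):
    """For a nonnegative integer-valued N with pmf {value: probability},
    return (P(N >= 1), E[N]^2 / E[N^2]); Paley-Zygmund says the second
    is a lower bound on the first."""
    p_pos = sum(p for v, p in pmf.items() if v >= 1)
    m1 = sum(v * p for v, p in pmf.items())
    m2 = sum(v * v * p for v, p in pmf.items())
    return p_pos, m1 * m1 / m2

# N in {0, 1, 3} with probs (0.5, 0.3, 0.2): P(N >= 1) = 0.5, while the
# bound is 0.81 / 2.1 = 0.3857...; for N in {0, 2} the bound is sharp.
```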
27 Lower bound: Multiscale second moment. Use the multiscale decomposition (Prop. 1). K scales (K large but fixed) suffice:
X_n(v) = Σ_{k=1}^K W_k(v),  W_k(v) = Σ_{(k−1)n/K < l ≤ kn/K} Y_l(v).
By Props. 1 and 2, (W_k(v), k = 1, ..., K) are IID N(0, σ² n/K). Define a modified number of exceedances:
Ñ_n(a) = #{ v ∈ V_n : W_k(v) > a/K for k = 1, ..., K }.
Since the first order is linear in the scales, this is a good choice. Note that
P( N_n(a) ≥ 1 ) ≥ P( Ñ_n(a) ≥ 1 ).
28 Lower bound: Multiscale second moment. We do not lose much by dropping W_1: if W_k(v) > a/K for k = 2, ..., K, then
X_n(v) = W_1(v) + Σ_{k=2}^K W_k(v) > −δn + a (K−1)/K > (c − δ)n for suitable a,
and P( W_1(v) > −δn ) → 1 since Var(W_1) = σ² n/K. This step is crucial and not only technical. So redefine
Ñ_n(a) = #{ v ∈ V_n : W_k(v) > a/K, k = 2, ..., K }.
It remains to show that for a = (c − δ)n,
P( Ñ_n(a) ≥ 1 ) ≥ E[Ñ_n(a)]² / E[Ñ_n(a)²] → 1.
29 Lower bound: Multiscale second moment. The second moment of these exceedances is
E[Ñ_n(a)²] = Σ_{k=1}^K Σ_{v,v′ : (k−1)n/K < v∧v′ ≤ kn/K} P( W_j(v) > a/K, W_j(v′) > a/K, ∀ j ≥ 2 ).
We expect the dominant term to be k = 1 (most independence). For v, v′ with v ∧ v′ ≤ n/K, Prop. 3 (splitting) gives
P( W_j(v) > a/K, W_j(v′) > a/K, ∀ j ≥ 2 ) = Π_{j≥2} P( W_j(v) > a/K )²,
and #{ v, v′ : v ∧ v′ ≤ n/K } = 2^{2n} − 2^n · 2^{n−n/K} = 2^{2n} (1 + o(1)).
But E[Ñ_n(a)]² = 2^{2n} Π_{j≥2} P( W_j(v) > a/K )², so the k = 1 term is (1 + o(1)) E[Ñ_n(a)]². It remains to check whether the terms k > 1 can dominate.
30 Lower bound: Multiscale second moment. For the terms k > 1:
Σ_{k>1} Σ_{v,v′ : (k−1)n/K < v∧v′ ≤ kn/K} P( W_j(v) > a/K, W_j(v′) > a/K, ∀ j ≥ 2 ).
Since we need an upper bound, we can drop conditions in the probability. Take v ∧ v′ = l with (k−1)n/K < l ≤ kn/K, and use Prop. 3 (splitting): if j > v ∧ v′, Y_j(v) is independent of Y_j(v′). Keeping only the events on W_j(v) at the low scales gives the bound
P( W_j(v) > a/K, W_j(v′) > a/K, ∀ j ≥ 2 ) ≤ Π_{j=2}^{k} P( W_j(v) > a/K ) · Π_{j=k+1}^{K} P( W_j(v) > a/K )².
31 Lower bound: Multiscale second moment. For k > 1, take v ∧ v′ = l with (k−1)n/K < l ≤ kn/K: there are at most 2^{2n} · 2^{−(k−1)n/K} such pairs, each bounded by
Π_{j=2}^{k} P( W_j(v) > a/K ) · Π_{j=k+1}^{K} P( W_j(v) > a/K )².
So the k-th term of the sum is E[Ñ_n(a)]² times at most
2^{−(k−1)n/K} Π_{j=2}^{k} P( W_j(v) > a/K )^{−1} ≤ 2^{−(k−1)n/K} · 2^{(k−1)(n/K)(1−δ)²}.
This goes to 0 exponentially fast!
32 Outline. Lecture 1: 1. Warm-up: extrema of IID r.v. 2. Log-correlated Gaussian fields (LGF): branching random walk (BRW) and 2D Gaussian free field (2DGFF). 3. Three fundamental properties. 4. First order of the maximum (≈ LLN). Lecture 2: Intermezzo: relations to statistical physics. 5. Second order of the maximum (≈ refined LLN). 6. A word on convergence and order statistics. Lecture 3: Universality class of LGF: 7. The maxima of characteristic polynomials of unitary matrices. 8. The maxima of the Riemann zeta function.
EXTREMA OF LOG-CORRELATED RANDOM VARIABLES: PRINCIPLES AND EXAMPLES. LOUIS-PIERRE ARGUIN. arXiv:1601.00582v1 [math.pr] 4 Jan 2016. Abstract. These notes were written for the mini-course Extrema of log-correlated
Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal
More informationGaussian vectors and central limit theorem
Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables
More informationSuperconcentration inequalities for centered Gaussian stationnary processes
Superconcentration inequalities for centered Gaussian stationnary processes Kevin Tanguy Toulouse University June 21, 2016 1 / 22 Outline What is superconcentration? Convergence of extremes (Gaussian case).
More informationLarge Sample Theory. Consider a sequence of random variables Z 1, Z 2,..., Z n. Convergence in probability: Z n
Large Sample Theory In statistics, we are interested in the properties of particular random variables (or estimators ), which are functions of our data. In ymptotic analysis, we focus on describing the
More informationMaximum of the characteristic polynomial of random unitary matrices
Maximum of the characteristic polynomial of random unitary matrices Louis-Pierre Arguin Department of Mathematics, Baruch College Graduate Center, City University of New York louis-pierre.arguin@baruch.cuny.edu
More informationNotes on Gaussian processes and majorizing measures
Notes on Gaussian processes and majorizing measures James R. Lee 1 Gaussian processes Consider a Gaussian process {X t } for some index set T. This is a collection of jointly Gaussian random variables,
More informationA sequential hypothesis test based on a generalized Azuma inequality 1
A sequential hypothesis test based on a generalized Azuma inequality 1 Daniël Reijsbergen a,2, Werner Scheinhardt b, Pieter-Tjerk de Boer b a Laboratory for Foundations of Computer Science, University
More informationIntroduction to Self-normalized Limit Theory
Introduction to Self-normalized Limit Theory Qi-Man Shao The Chinese University of Hong Kong E-mail: qmshao@cuhk.edu.hk Outline What is the self-normalization? Why? Classical limit theorems Self-normalized
More information4 Sums of Independent Random Variables
4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables
More informationSpectral Continuity Properties of Graph Laplacians
Spectral Continuity Properties of Graph Laplacians David Jekel May 24, 2017 Overview Spectral invariants of the graph Laplacian depend continuously on the graph. We consider triples (G, x, T ), where G
More informationErgodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.
Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions
More information7 Convergence in R d and in Metric Spaces
STA 711: Probability & Measure Theory Robert L. Wolpert 7 Convergence in R d and in Metric Spaces A sequence of elements a n of R d converges to a limit a if and only if, for each ǫ > 0, the sequence a
More informationNegative Association, Ordering and Convergence of Resampling Methods
Negative Association, Ordering and Convergence of Resampling Methods Nicolas Chopin ENSAE, Paristech (Joint work with Mathieu Gerber and Nick Whiteley, University of Bristol) Resampling schemes: Informal
More informationIEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence
IEOR 6711: Stochastic Models I Fall 2013, Professor Whitt Lecture Notes, Thursday, September 5 Modes of Convergence 1 Overview We started by stating the two principal laws of large numbers: the strong
More informationExtreme values of two-dimensional discrete Gaussian Free Field
Extreme values of two-dimensional discrete Gaussian Free Field Marek Biskup (UCLA) Joint work with Oren Louidor (Technion, Haifa) Midwest Probability Colloquium, Evanston, October 9-10, 2015 My collaborator
More informationComputer Vision Group Prof. Daniel Cremers. 6. Mixture Models and Expectation-Maximization
Prof. Daniel Cremers 6. Mixture Models and Expectation-Maximization Motivation Often the introduction of latent (unobserved) random variables into a model can help to express complex (marginal) distributions
More information1 Sequences of events and their limits
O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For
More informationEE514A Information Theory I Fall 2013
EE514A Information Theory I Fall 2013 K. Mohan, Prof. J. Bilmes University of Washington, Seattle Department of Electrical Engineering Fall Quarter, 2013 http://j.ee.washington.edu/~bilmes/classes/ee514a_fall_2013/
More informationSVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices)
Chapter 14 SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Today we continue the topic of low-dimensional approximation to datasets and matrices. Last time we saw the singular
More informationLecture 3: Central Limit Theorem
Lecture 3: Central Limit Theorem Scribe: Jacy Bird (Division of Engineering and Applied Sciences, Harvard) February 8, 003 The goal of today s lecture is to investigate the asymptotic behavior of P N (
More informationLecture 21 Representations of Martingales
Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More informationReview (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology
Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Some slides have been adopted from Prof. H.R. Rabiee s and also Prof. R. Gutierrez-Osuna
More informationEcon 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines
Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the
More informationIntroduction to Stochastic processes
Università di Pavia Introduction to Stochastic processes Eduardo Rossi Stochastic Process Stochastic Process: A stochastic process is an ordered sequence of random variables defined on a probability space
More informationMoreover this binary operation satisfies the following properties
Contents 1 Algebraic structures 1 1.1 Group........................................... 1 1.1.1 Definitions and examples............................. 1 1.1.2 Subgroup.....................................
More informationLecture 3. Random Fourier measurements
Lecture 3. Random Fourier measurements 1 Sampling from Fourier matrices 2 Law of Large Numbers and its operator-valued versions 3 Frames. Rudelson s Selection Theorem Sampling from Fourier matrices Our
More informationLECTURE 10: REVIEW OF POWER SERIES. 1. Motivation
LECTURE 10: REVIEW OF POWER SERIES By definition, a power series centered at x 0 is a series of the form where a 0, a 1,... and x 0 are constants. For convenience, we shall mostly be concerned with the
More informationConvergence in Distribution
Convergence in Distribution Undergraduate version of central limit theorem: if X 1,..., X n are iid from a population with mean µ and standard deviation σ then n 1/2 ( X µ)/σ has approximately a normal
More informationLecture 14: SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Lecturer: Sanjeev Arora
princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 14: SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Lecturer: Sanjeev Arora Scribe: Today we continue the
More informationScaling limit of random planar maps Lecture 2.
Scaling limit of random planar maps Lecture 2. Olivier Bernardi, CNRS, Université Paris-Sud Workshop on randomness and enumeration Temuco, Olivier Bernardi p.1/25 Goal We consider quadrangulations as metric
More informationMath 576: Quantitative Risk Management
Math 576: Quantitative Risk Management Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Week 11 Haijun Li Math 576: Quantitative Risk Management Week 11 1 / 21 Outline 1
More informationVector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.
Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar
More information