A new method to bound rate of convergence


1 A new method to bound rate of convergence. Arup Bose, Indian Statistical Institute, Kolkata; Sourav Chatterjee, University of California, Berkeley.

2 Empirical Spectral Distribution. Limiting Spectral Distribution (LSD). Different types of convergence. Rates of convergence. A new technique of bounding the rates of convergence. Application to the matrices: Wigner, Sample Covariance, Reverse Circulant, Hankel, Toeplitz. of our approach. Random matrices - p. 2/33

3 A_n is an n × n Hermitian matrix with random entries. Eigenvalues (characteristic roots): λ_0, λ_1, ..., λ_{n−1}. The empirical spectral distribution (ESD) function of A_n: F_n(x) = n^{−1} Σ_{i=0}^{n−1} I{λ_i ≤ x}. The empirical spectral measure: the corresponding probability measure P_n. The expected spectral distribution function: E(F_n(·)). Expected spectral measure: the corresponding probability measure.
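The ESD defined above is straightforward to compute numerically. A minimal sketch; the symmetrized Gaussian ensemble, matrix size, and seed are illustrative choices, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random n x n Hermitian matrix A_n (any Hermitian ensemble would do).
n = 200
G = rng.standard_normal((n, n))
A = (G + G.T) / 2.0          # real symmetric, hence Hermitian

lam = np.linalg.eigvalsh(A)  # eigenvalues lambda_0, ..., lambda_{n-1}

def esd(x, eigenvalues=lam):
    """Empirical spectral distribution F_n(x) = n^{-1} #{i : lambda_i <= x}."""
    return np.mean(eigenvalues <= x)

# F_n is a genuine distribution function: 0 below the spectrum, 1 above it.
assert esd(lam.min() - 1.0) == 0.0
assert esd(lam.max() + 1.0) == 1.0
```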

4 The Limiting Spectral Distribution (or measure) (LSD): the weak limit of {P_n}, in some probabilistic sense, such as almost surely or in probability. The weak convergence of E(F_n) often serves as an intermediate step in showing the weak convergence of F_n. The LSD is known for Wigner, Sample covariance, Reverse Circulant, ... Two main tools to establish the LSD: the moment method and the Stieltjes transform method.

5 When the LSD exists, how does one obtain the rate of convergence? So far the main method to establish such rates is the use of the Stieltjes transform. 1. Establish some general results useful in establishing the probabilistic weak convergence of F_n from the convergence of E(F_n), and the corresponding rates of convergence. 2. Apply these to establish some new rates of convergence.

6 ϕ = the characteristic function of A: the characteristic function of the empirical spectral distribution of A, ϕ(t) = (1/n) tr(e^{itA}), where n = the order of A and e^M = Σ_{k=0}^∞ (1/k!) M^k. In general, for any distribution function G (which may be random) on R, its (random) characteristic function is defined as ϕ_G(t) = ∫ e^{itx} dG(x).
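The identity ϕ(t) = (1/n) tr(e^{itA}) can be checked against the definition ∫ e^{itx} dF_n(x) = n^{−1} Σ_i e^{itλ_i}. A sketch using SciPy's matrix exponential; the ensemble, size, and t are arbitrary test inputs:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 60
G = rng.standard_normal((n, n))
A = (G + G.T) / 2.0                         # Hermitian

t = 0.7
phi_trace = np.trace(expm(1j * t * A)) / n  # phi(t) = n^{-1} tr(e^{itA})
lam = np.linalg.eigvalsh(A)
phi_cf = np.mean(np.exp(1j * t * lam))      # characteristic function of the ESD

# The two expressions agree because tr(e^{itA}) = sum_i e^{it lambda_i}.
assert abs(phi_trace - phi_cf) < 1e-8
```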

7 STEP 1. Obtain bounds for the variance of ϕ(t) and its partial derivatives. (For a complex random variable X, its variance is defined to be E|X − E(X)|².) STEP 2. Deduce the concentration of the spectral measure near its mean, and also get the magnitude of concentration using the bounds.

8 Kolmogorov distance: ‖F − G‖ = sup_{x∈R} |F(x) − G(x)|. Theorem 1. Suppose F is a random distribution function on R with (random) characteristic function ϕ. Suppose Var(ϕ(t)) ≤ Ct² for each t. If G is a nonrandom distribution function on R such that sup_{x∈R} |G′(x)| ≤ λ, then E‖F − G‖ ≤ 2‖E(F) − G‖ + (8(3)^{1/2}/π) λ^{1/2} C^{1/4}. This will eventually be used to establish rates of convergence for the ESD. C determines the rate.

9 The convolution F ∗ G of F and G is defined in the usual way. That is, F ∗ G(x) = ∫ F(x − y) dG(y) = ∫ G(x − y) dF(y). Define the probability density h_L(x) = (1 − cos Lx)/(πLx²). Let H_L be the corresponding distribution function. The characteristic function of H_L is given by ψ_L(t) = (1 − |t|/L) I(|t| ≤ L). Note that ∫ |ψ_L(t)| dt = L.
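The Fourier pair (h_L, ψ_L) and the integral ∫|ψ_L| dt = L can be checked numerically; L, the grid, and the tolerances below are arbitrary choices:

```python
import numpy as np

L = 1.0
dx = 0.01
x = (np.arange(40_000) + 0.5) * dx                 # midpoint grid on (0, 400)
hL = (1.0 - np.cos(L * x)) / (np.pi * L * x**2)    # smoothing density h_L

def psi_numeric(t):
    # psi_L(t) = int e^{itx} h_L(x) dx; h_L is even, so this equals
    # 2 * int_0^infinity cos(tx) h_L(x) dx (truncated at 400 here).
    return 2.0 * np.sum(np.cos(t * x) * hL) * dx

# Compare with the closed form psi_L(t) = (1 - |t|/L) on |t| <= L.
for t in (0.0, 0.25, 0.5, 0.9):
    assert abs(psi_numeric(t) - (1.0 - abs(t) / L)) < 0.01

# int |psi_L(t)| dt = L: the area of a triangle with base 2L and height 1.
tt = (np.arange(2000) + 0.5) * (L / 2000.0)        # midpoint grid on (0, L)
assert abs(2.0 * np.sum(1.0 - tt / L) * (L / 2000.0) - L) < 1e-9
```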

10 Let F_0 = E(F), and let η be the characteristic function of F_0. Then by assumption, E|ϕ(t) − η(t)| ≤ C^{1/2}|t|. By Esseen's lemma, ‖F − G‖ ≤ 2‖F ∗ H_L − G ∗ H_L‖ + 24λ/(πL). Now ‖F ∗ H_L − G ∗ H_L‖ ≤ ‖F_0 ∗ H_L − G ∗ H_L‖ + ‖F ∗ H_L − F_0 ∗ H_L‖. This in turn is bounded by ‖F_0 − G‖ + ‖F ∗ H_L − F_0 ∗ H_L‖.

11 So by applying the inversion formula (see Feller 1966, page ) and the hypothesis on the variance of ϕ, E‖F ∗ H_L − F_0 ∗ H_L‖ ≤ (1/π) ∫ ψ_L(t) E|ϕ(t) − η(t)| / |t| dt ≤ C^{1/2}L/π. Combining all these observations, we have E‖F − G‖ ≤ 2‖F_0 − G‖ + 2C^{1/2}L/π + 24λ/(πL). Choosing L² = 12λC^{−1/2} gives the desired conclusion.
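The final optimization over L can be verified numerically: minimizing 2C^{1/2}L/π + 24λ/(πL) over L > 0 should recover the constant (8·3^{1/2}/π) λ^{1/2} C^{1/4} of Theorem 1. The values of C and λ below are arbitrary test inputs:

```python
import numpy as np

C, lam = 0.3, 1.0 / np.pi

# Brute-force minimization of the combined bound over a fine grid of L values.
Ls = np.linspace(0.01, 100.0, 200_000)
bound = 2.0 * np.sqrt(C) * Ls / np.pi + 24.0 * lam / (np.pi * Ls)
closed_form = (8.0 * np.sqrt(3.0) / np.pi) * np.sqrt(lam) * C**0.25

assert abs(bound.min() - closed_form) < 1e-4

# The minimizer satisfies L^2 = 12 * lambda * C^{-1/2}.
L_star = Ls[bound.argmin()]
assert abs(L_star**2 - 12.0 * lam / np.sqrt(C)) < 0.05
```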

12 Note the condition on the variance in Theorem 1. Note that the matrix A is a function of independent real random variables x_1, x_2, ..., x_m (note that m is large). Write ϕ(t) as a function of x_1, x_2, ..., x_m. Hoeffding (unpublished work), Efron and Stein (1981), Steele (1986) and Devroye (1991); see Györfi et al. (2002) for a proof. Theorem 2 (Efron-Stein type inequality). Suppose Z_1, ..., Z_n, Z̃_1, ..., Z̃_n are independent m-dimensional random vectors, where Z̃_i has the same distribution as Z_i for all i. Suppose that f : R^{mn} → R is a square integrable function. Then Var(f(Z_1, ..., Z_n)) ≤ (1/2) Σ_{k=1}^n E|f(Z_1, ..., Z_n) − f(Z_1, ..., Z_{k−1}, Z̃_k, Z_{k+1}, ..., Z_n)|².
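A Monte Carlo illustration of the Efron-Stein bound; the choice f = maximum of n i.i.d. uniforms, and all sizes, are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

# Efron-Stein: Var(f(Z)) <= (1/2) sum_k E[(f(Z) - f(Z with Z_k resampled))^2].
n, reps = 5, 200_000
Z = rng.random((reps, n))        # rows are i.i.d. samples of (Z_1, ..., Z_n)
Ztilde = rng.random((reps, n))   # independent copies used for resampling

f = Z.max(axis=1)
lhs = f.var()

rhs = 0.0
for k in range(n):
    Zk = Z.copy()
    Zk[:, k] = Ztilde[:, k]      # resample only the k-th coordinate
    rhs += 0.5 * np.mean((f - Zk.max(axis=1)) ** 2)

# The inequality is strict here (f is not a sum), so the gap is visible.
assert 0.0 < lhs <= rhs
```

For the maximum of 5 uniforms the exact values are Var(f) = 5/252 and an Efron-Stein bound of 5/168, so the simulation should show a clear gap.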

13 Theorem 3. Suppose f(x_1, x_2, ..., x_n) is a coordinatewise differentiable map from R^n into C and M_j, 1 ≤ j ≤ n, are constants such that sup_x |∂f/∂x_j| ≤ M_j for j = 1, 2, ..., n. Then for independent complex random variables X_1, X_2, ..., X_n, Var(f(X_1, X_2, ..., X_n)) ≤ Σ_{j=1}^n M_j² Var(X_j).

14 POIN: There exists a constant K > 0 such that if X ∼ F and g : R → R is any differentiable map, then Var(g(X)) ≤ K E|g′(X)|². Log-concave densities (i.e. densities of the form e^{U(x)} where U is a concave function) satisfy POIN. The standard normal: Chernoff (1981) with K = 1. The double exponential and uniform distributions also satisfy POIN. See also Chen (1982).
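A Monte Carlo illustration of POIN for the standard normal with K = 1 (Chernoff); the test function g = sin is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# POIN for X ~ N(0,1) with K = 1: Var(g(X)) <= E[g'(X)^2].
# With g = sin: Var(sin X) = (1 - e^{-2})/2 ~ 0.432,
#               E[cos^2 X] = (1 + e^{-2})/2 ~ 0.568.
X = rng.standard_normal(1_000_000)
lhs = np.sin(X).var()
rhs = np.mean(np.cos(X) ** 2)

assert lhs <= rhs
```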

15 The complete characterization of all absolutely continuous distributions which satisfy POIN is given in the following theorem, proved in a more general form in Muckenhoupt (1972). Theorem 4. A distribution function F with F(0) = 1/2 and density f w.r.t. Lebesgue measure satisfies POIN if and only if B = max{ sup_{x≥0} (1 − F(x)) ∫_0^x (1/f(t)) dt, sup_{x<0} F(x) ∫_x^0 (1/f(t)) dt } < ∞. Moreover, the smallest constant K admissible in POIN satisfies (1/2)B ≤ K ≤ 4B.

16 The next theorem follows easily from the Efron-Stein inequality: Theorem 5. If X_1, X_2, ..., X_n are i.i.d. F, where F has property POIN with constant K, then for any coordinatewise differentiable map f : R^n → C, Var(f(X_1, ..., X_n)) ≤ Kσ² E‖∇f(X_1, ..., X_n)‖², where ‖·‖ denotes the Euclidean norm and σ² = Var(X_1).

17 Use the identity ∂ϕ(t)/∂x_j = (1/n) tr( it (∂A/∂x_j) e^{itA} ). Use the fact that e^{itA} is a unitary matrix. Either the partial derivatives are bounded by small numbers in sup norm; then use an Efron-Stein type inequality. Or the expected value of the norm-squared of ∇ϕ(t) is small; then use a Poincaré type inequality.

18 Suppose A(u) is a matrix with complex entries. Derivative of product formula: d/du (A(u)B(u)) = A′(u)B(u) + A(u)B′(u). Lemma 1. If A(u) is an elementwise differentiable map from R or C into C^{n×n}, then d/du tr(e^{A}) = tr( (dA/du) e^{A} ). Lemma 2. If A is Hermitian and t is real, then e^{itA} is a unitary matrix. In particular, for any vector x, ‖e^{itA}x‖ = ‖x‖ (where ‖·‖ denotes the Euclidean norm), and all entries of e^{itA} have modulus at most 1.
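Both lemmas are easy to check numerically; the random ensembles, sizes, and finite-difference step below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n, t = 30, 1.3

# Lemma 2: for Hermitian A and real t, U = e^{itA} is unitary.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2.0
U = expm(1j * t * A)

assert np.allclose(U @ U.conj().T, np.eye(n), atol=1e-8)
x = rng.standard_normal(n)
assert abs(np.linalg.norm(U @ x) - np.linalg.norm(x)) < 1e-8  # norms preserved
assert np.abs(U).max() <= 1.0 + 1e-10   # entries of a unitary matrix have modulus <= 1

# Lemma 1: d/du tr(e^{M + uB}) at u = 0 equals tr(B e^M), via central differences.
M = 0.1 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, n))
eps = 1e-6
numeric = (np.trace(expm(M + eps * B)) - np.trace(expm(M - eps * B))) / (2 * eps)
assert abs(numeric - np.trace(B @ expm(M))) < 1e-6
```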

19 W_n is a Wigner matrix with independent entries (X^{(n)}_{jk}) having common variance 1. We shall drop the superscript n. In many situations, the LSD of n^{−1/2}W_n exists and is given by F(x) = (1/2π) ∫_{−2}^x (4 − y²)^{1/2} I_{[−2,2]}(y) dy. Note that sup_{−2≤x≤2} F′(x) = π^{−1}. Let F_n be the ESD of n^{−1/2}W_n.
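A simulation sketch comparing the ESD of n^{−1/2}W_n with the semicircular law; the Gaussian entries, size, and Kolmogorov-distance threshold are illustrative choices. The closed-form CDF used below follows by integrating the semicircle density:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 1000
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2.0)     # symmetric; off-diagonal entries have variance 1
lam = np.sort(np.linalg.eigvalsh(W)) / np.sqrt(n)

def semicircle_cdf(x):
    # F(x) = 1/2 + x*sqrt(4 - x^2)/(4*pi) + arcsin(x/2)/pi on [-2, 2]
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + x * np.sqrt(4.0 - x**2) / (4.0 * np.pi) + np.arcsin(x / 2.0) / np.pi

# Kolmogorov distance between the ESD F_n and the semicircular law F, on a grid.
grid = np.linspace(-2.5, 2.5, 2001)
Fn = np.searchsorted(lam, grid, side="right") / n
ks = np.abs(Fn - semicircle_cdf(grid)).max()
assert ks < 0.05
```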

20 (W1) E(X_jk) = 0, E(X_jk²) = 1. (W2) sup_{i,j,n} E X_ij⁴ < ∞. (W3) E( X_ij⁴ I{|X_ij| ≥ εn^{1/2}} ) = o(n²) for any ε > 0. Bai (1993a); Bai, Miao and Tsay (1997); Bai, Miao and Tsay (2002). (W3*) E( X_ij⁸ I{|X_ij| ≤ εn^{1/2}} ) = o(n²) for any ε > 0. Then ‖E(F_n) − F‖ = O(n^{−1/2}) and ‖F_n − F‖ = O_p(n^{−2/5}). (W3**) sup_n sup_{ij} E|X_ij|^k < ∞ for every k ≥ 1. Then ‖F_n − F‖ = O(n^{−2/5+η}) almost surely for every η > 0.

21 W_n : R^{n(n+1)/2} → R^{n×n} takes a tuple (a_jk) to the Wigner matrix with (j,k)-th entry n^{−1/2} a_jk (j ≤ k), and n^{−1/2} a_kj otherwise. Then W^{jk}_n = ∂W_n/∂a_jk is a constant matrix whose only nonzero entries are the (j,k)-th and (k,j)-th entries (= n^{−1/2}). Let ϕ^t_n = the empirical characteristic function of W_n. Then ∂ϕ^t_n/∂a_jk = n^{−1} tr( it (∂W_n/∂a_jk) e^{itW_n} ) = n^{−1} tr( it W^{jk}_n e^{itW_n} ). Let B = e^{itW_n}. Note that B is unitary (and symmetric, since W_n is real symmetric). Then ∂ϕ^t_n/∂a_jk = it(b_jk + b_kj)/n^{3/2} = 2it b_jk/n^{3/2}, and Σ_{j≤k} |∂ϕ^t_n/∂a_jk|² ≤ Σ_{j,k} 4t²|b_jk|²/n³ = 4t²/n².
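The derivative formula can be checked by finite differences; the matrix size, t, and the index pair (j, k) below are arbitrary test inputs:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
n, t, j, k = 20, 0.8, 2, 7       # j < k: an off-diagonal coordinate

def wigner(a):
    # Map an upper-triangular array of coordinates a_jk to the symmetric
    # Wigner matrix with entries a_jk / sqrt(n).
    return (a + a.T - np.diag(np.diag(a))) / np.sqrt(n)

def phi(a):
    # Empirical characteristic function of the Wigner matrix at t.
    return np.trace(expm(1j * t * wigner(a))) / n

a = np.triu(rng.standard_normal((n, n)))   # free coordinates a_jk, j <= k

eps = 1e-6
ap, am = a.copy(), a.copy()
ap[j, k] += eps
am[j, k] -= eps
numeric = (phi(ap) - phi(am)) / (2 * eps)  # central finite difference

B = expm(1j * t * wigner(a))
formula = 2j * t * B[j, k] / n**1.5        # the slide's 2it b_jk / n^{3/2}

assert abs(numeric - formula) < 1e-6
```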

22 Stronger conditions yield new and stronger results. Note that sup_{−2≤x≤2} F′(x) = π^{−1}. Theorem 6. If W_n is a random real Wigner matrix whose entries on and above the diagonal are i.i.d. with variance 1, drawn from a distribution satisfying POIN with constant K, then Var(ϕ_n(t)) ≤ 4Kt²/n². Consequently, by Theorem 1, if F denotes the semicircular law, then E‖F_n − F‖ ≤ 2‖EF_n − F‖ + (8(6)^{1/2} K^{1/4} / π^{3/2}) n^{−1/2}, where F_n denotes the empirical c.d.f. of n^{−1/2}W_n.

23 Minimal conditions yield weaker rate results. By Lemma 2, the elements of e^{itW_n} are bounded in modulus by 1. Hence |∂ϕ^t_n/∂a_jk| ≤ 2|t|/n^{3/2}. So we can invoke Theorems 3 and 1: Theorem 7. If {W_n} is a sequence of random Wigner matrices whose entries on and above the diagonal are independent with variance uniformly bounded by 1, then Var(ϕ_n(t)) ≤ (4t²/n³) Σ_{j≤k} Var(a_jk) ≤ 4t²/n. Hence if F_n denotes the empirical c.d.f. of n^{−1/2}W_n and F denotes the semicircular law, then E‖F_n − F‖ ≤ 2‖EF_n − F‖ + (8(6)^{1/2} / π^{3/2}) n^{−1/4}.

24 Suppose X is a real p × n matrix with entries x_jk, which are i.i.d. real random variables with mean zero and unit variance. Let S = (1/n) XX^T. If y_n = p/n → y ∈ (0, 1], then the ESD of S converges almost surely to the law F_y with the Marčenko-Pastur density f_y(x) = (1/(2πxy)) √((b − x)(x − a)) if a ≤ x ≤ b, and 0 otherwise, where a = a(y) = (1 − √y)² and b = b(y) = (1 + √y)². It can be easily shown that the density is bounded by λ = [π √y (1 − y)]^{−1}. In cases where y > 1, the LSD exists but has a point mass at the origin. If y = 0, then a scaling and a centering are required for the LSD of S to exist. See Bai (1999) or Bose et al. (2003) for the precise results. We do not consider these cases.
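A simulation sketch comparing the ESD of S with the Marčenko-Pastur law; the Gaussian entries, the dimensions (giving y = 1/4), and the tolerance are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

p, n = 400, 1600                      # y = p/n = 0.25
X = rng.standard_normal((p, n))
lam = np.sort(np.linalg.eigvalsh(X @ X.T / n))

y = p / n
a, b = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2

# Numerical CDF of f_y(x) = sqrt((b-x)(x-a)) / (2*pi*x*y) on (a, b).
xs = np.linspace(a, b, 20001)[1:-1]
dens = np.sqrt((b - xs) * (xs - a)) / (2 * np.pi * xs * y)
cdf = np.cumsum(dens) * (xs[1] - xs[0])

# Compare the ESD of S with the Marcenko-Pastur CDF on the bulk.
Fn = np.searchsorted(lam, xs, side="right") / p
ks = np.abs(Fn - cdf).max()
assert ks < 0.05
```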

25 Stieltjes transform method: Bai (1993b) proved that E(F_n) converges to F_y at a rate O(n^{−1/4}) or O(n^{−5/48}), depending on how close y_n is to 1. The same rates were obtained for convergence in probability of F_n to F_y in Bai, Miao and Tsay (1997). Bai, Miao and Yao (2003) prove several results under conditions like those given in the Wigner case: if y_n remains bounded away from 1 and suitable combinations of the above conditions hold, then ‖E(F_n) − F_y‖ = O(n^{−1/2}), ‖F_n − F_y‖ = O_P(n^{−2/5}) and ‖F_n − F_y‖ = O_{a.s.}(n^{−2/5+η}).

26 Let Y = ∂X/∂x_jk = e_j e_k^T, and let x_k = the kth column of X. Let ϕ^t_n = the empirical characteristic function of S evaluated at t. S^{jk} := ∂S/∂x_jk = (1/n)(Y X^T + X Y^T), so ∂ϕ^t_n/∂x_jk = p^{−1} tr( it S^{jk} e^{itS} ) = it(np)^{−1} tr( Y X^T e^{itS} + X Y^T e^{itS} ) = it(np)^{−1} tr( e_j x_k^T e^{itS} + x_k e_j^T e^{itS} ) = it(np)^{−1} ( x_k^T e^{itS} e_j + e_j^T e^{itS} x_k ) = 2it z_kj / (np), where we have written z_kj for the jth component of the vector z_k := e^{itS} x_k. Note that since e^{itS} is unitary, ‖z_k‖ = ‖x_k‖.

27 S matrix rate with POIN. Σ_{j,k} |∂ϕ^t_n/∂x_jk|² = (4t²/(n²p²)) Σ_{k=1}^n ‖z_k‖² = (4t²/(n²p²)) Σ_{k=1}^n ‖x_k‖² = (4t²/(n²p²)) Σ_{j,k} x_jk² a.s. So, if for all j, k, E x_jk² ≤ M², then E‖∇ϕ^t_n‖² ≤ 4M²t²/(np). Applying Theorems 5 and 1: Theorem 8. If {x_jk} are i.i.d. from a distribution satisfying POIN with constant K, mean 0 and variance 1, then Var(ϕ_n(t)) ≤ 4Kt²/(np). Consequently, if y_n = p/n ∈ (0, 1) and F_y denotes the Marčenko-Pastur distribution with parameter y, then E‖F_{n,p} − F_{y_n}‖ ≤ 2‖EF_{n,p} − F_{y_n}‖ + (8(6)^{1/2} K^{1/4} / (π^{3/2} [y_n(1 − y_n)]^{1/2})) n^{−1/2}, where F_{n,p} denotes the ESD of S.

28 S matrix rate without POIN. If we don't assume POIN but instead impose |x_jk| ≤ M a.s. for all j, k, then |∂ϕ^t_n/∂x_jk| ≤ 2|t|(np)^{−1}‖z_k‖ = 2|t|(np)^{−1}‖x_k‖ ≤ 2M|t|/(n√p) a.s. Thus, if the variance of each x_jk is bounded by 1, then Var(ϕ_n(t)) ≤ (4M²t²/(n²p)) · np = 4M²t²/n. Hence we get: Theorem 9. Suppose y = p/n ∈ (0, 1) and the x_jk are independent with mean zero and variance bounded by 1. Suppose M is such that P(|x_jk| ≤ M) = 1. Then E‖F_n − F_y‖ ≤ 2‖EF_n − F_y‖ + (8(6)^{1/2} M^{1/2} / (π^{3/2} y^{1/4}(1 − y)^{1/2})) n^{−1/4}, where F_n is the ESD of S.

29 LSD of the Reverse Circulant matrix A_n = ((x_{(i+j−2) mod n})). Visually,

A_n =
[ x_0      x_1   x_2   ...   x_{n−1} ]
[ x_1      x_2   ...   x_{n−1}  x_0  ]
[ x_2      ...   x_{n−1}  x_0   x_1  ]
[ ...                                ]
[ x_{n−1}  x_0   x_1   ...   x_{n−2} ]

Bose and Mitra (2002), Bose, Chatterjee and Gangopadhyay (2003): If {x_i} are i.i.d. with mean zero and variance 1, then the ESD of X_n = n^{−1/2}A_n converges (in probability) to the LSD with density f given by f(x) = |x| exp(−x²), −∞ < x < ∞.
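A simulation sketch of this convergence. The closed-form CDF below follows by integrating f(x) = |x| exp(−x²); the entry distribution, size, seed, and threshold are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(8)

# Reverse circulant A_n[i, j] = x_{(i+j-2) mod n}; with 0-based indices this
# is A[i, j] = x[(i + j) % n], which is symmetric.
n = 1000
x = rng.standard_normal(n)
i = np.arange(n)
A = x[(i[:, None] + i[None, :]) % n]
lam = np.sort(np.linalg.eigvalsh(A)) / np.sqrt(n)   # spectrum of n^{-1/2} A_n

def lsd_cdf(u):
    # CDF of the density |u| e^{-u^2}: exp(-u^2)/2 for u < 0, 1 - exp(-u^2)/2 for u >= 0.
    return np.where(u < 0, 0.5 * np.exp(-u**2), 1.0 - 0.5 * np.exp(-u**2))

grid = np.linspace(-3.0, 3.0, 2001)
Fn = np.searchsorted(lam, grid, side="right") / n
ks = np.abs(Fn - lsd_cdf(grid)).max()
assert ks < 0.08
```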

30 Rate with POIN. Let B = e^{itX_n}. Then Σ_{i,j} |b_ij|² = n. Also, for each k, there are at most 2n pairs (i, j) such that i + j − 2 = k mod n. Let ϕ^t_n = the empirical characteristic function of X_n evaluated at t. Then ∂ϕ^t_n/∂x_k = (it/n^{3/2}) Σ_{i+j−2≡k mod n} b_ji, so Σ_k |∂ϕ^t_n/∂x_k|² ≤ (t²/n³) [ 2n Σ_k Σ_{i+j−2≡k mod n} |b_ji|² ] = 2t²/n. If the x_k are i.i.d. from a density satisfying POIN, then E‖F_n − F‖ ≤ 2‖EF_n − F‖ + O(n^{−1/4}). The eigenvalues can be explicitly obtained and, using them, ‖EF_n − F‖ is of a much smaller order than n^{−1/4}.

31 H_n = ((x_{i+j−2})) is called a Hankel matrix. Bryc, Dembo and Jiang: the LSD of n^{−1/2}H_n exists. Not much is known about the LSD. If the limit has a bounded density, then we can use our technique. The computations for our method are very similar to the previous example. In fact, here ∂ϕ^t_n/∂x_k = (it/n^{3/2}) Σ_{i+j−2=k} b_ji. So similar computations show that, under POIN, E‖F_n − F‖ ≤ 2‖EF_n − F‖ + O(n^{−1/4}).

32 T_n = ((x_{|i−j|})) is called a Toeplitz matrix of order n. Bryc, Dembo and Jiang: the LSD exists and is unimodal. If the limiting distribution has a bounded density: exactly the same kind of computations as in the preceding examples show that, in this case too, under POIN, E‖F_n − F‖ ≤ 2‖EF_n − F‖ + O(n^{−1/4}).

33 Do the Hankel and the Toeplitz limits have bounded densities? How optimal are the rates? Can the method be sharpened to obtain better results? Can other characteristic function based methods be developed? Higher order estimates?


More information

Eigenvalue variance bounds for Wigner and covariance random matrices

Eigenvalue variance bounds for Wigner and covariance random matrices Eigenvalue variance bounds for Wigner and covariance random matrices S. Dallaporta University of Toulouse, France Abstract. This work is concerned with finite range bounds on the variance of individual

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

The Lindeberg central limit theorem

The Lindeberg central limit theorem The Lindeberg central limit theorem Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto May 29, 205 Convergence in distribution We denote by P d the collection of Borel probability

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Chapter 1 Vector Spaces

Chapter 1 Vector Spaces Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field

More information

Inhomogeneous circular laws for random matrices with non-identically distributed entries

Inhomogeneous circular laws for random matrices with non-identically distributed entries Inhomogeneous circular laws for random matrices with non-identically distributed entries Nick Cook with Walid Hachem (Telecom ParisTech), Jamal Najim (Paris-Est) and David Renfrew (SUNY Binghamton/IST

More information

Freeness and the Transpose

Freeness and the Transpose Freeness and the Transpose Jamie Mingo (Queen s University) (joint work with Mihai Popa and Roland Speicher) ICM Satellite Conference on Operator Algebras and Applications Cheongpung, August 8, 04 / 6

More information

MATH 5524 MATRIX THEORY Problem Set 5

MATH 5524 MATRIX THEORY Problem Set 5 MATH 554 MATRIX THEORY Problem Set 5 Posted Tuesday April 07. Due Tuesday 8 April 07. [Late work is due on Wednesday 9 April 07.] Complete any four problems, 5 points each. Recall the definitions of the

More information

The largest eigenvalues of the sample covariance matrix. in the heavy-tail case

The largest eigenvalues of the sample covariance matrix. in the heavy-tail case The largest eigenvalues of the sample covariance matrix 1 in the heavy-tail case Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia NY), Johannes Heiny (Aarhus University)

More information

Numerical Solutions to the General Marcenko Pastur Equation

Numerical Solutions to the General Marcenko Pastur Equation Numerical Solutions to the General Marcenko Pastur Equation Miriam Huntley May 2, 23 Motivation: Data Analysis A frequent goal in real world data analysis is estimation of the covariance matrix of some

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix

Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix Taras Bodnar a, Holger Dette b and Nestor Parolya c a Department of Mathematics, Stockholm University, SE-069

More information

The Dynamics of Learning: A Random Matrix Approach

The Dynamics of Learning: A Random Matrix Approach The Dynamics of Learning: A Random Matrix Approach ICML 2018, Stockholm, Sweden Zhenyu Liao, Romain Couillet L2S, CentraleSupélec, Université Paris-Saclay, France GSTATS IDEX DataScience Chair, GIPSA-lab,

More information

Lecture # 3 Orthogonal Matrices and Matrix Norms. We repeat the definition an orthogonal set and orthornormal set.

Lecture # 3 Orthogonal Matrices and Matrix Norms. We repeat the definition an orthogonal set and orthornormal set. Lecture # 3 Orthogonal Matrices and Matrix Norms We repeat the definition an orthogonal set and orthornormal set. Definition A set of k vectors {u, u 2,..., u k }, where each u i R n, is said to be an

More information

The norm of polynomials in large random matrices

The norm of polynomials in large random matrices The norm of polynomials in large random matrices Camille Mâle, École Normale Supérieure de Lyon, Ph.D. Student under the direction of Alice Guionnet. with a significant contribution by Dimitri Shlyakhtenko.

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Concentration inequalities: basics and some new challenges

Concentration inequalities: basics and some new challenges Concentration inequalities: basics and some new challenges M. Ledoux University of Toulouse, France & Institut Universitaire de France Measure concentration geometric functional analysis, probability theory,

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations Research

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Small Ball Probability, Arithmetic Structure and Random Matrices

Small Ball Probability, Arithmetic Structure and Random Matrices Small Ball Probability, Arithmetic Structure and Random Matrices Roman Vershynin University of California, Davis April 23, 2008 Distance Problems How far is a random vector X from a given subspace H in

More information

Convergence in Distribution

Convergence in Distribution Convergence in Distribution Undergraduate version of central limit theorem: if X 1,..., X n are iid from a population with mean µ and standard deviation σ then n 1/2 ( X µ)/σ has approximately a normal

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

Geometry of log-concave Ensembles of random matrices

Geometry of log-concave Ensembles of random matrices Geometry of log-concave Ensembles of random matrices Nicole Tomczak-Jaegermann Joint work with Radosław Adamczak, Rafał Latała, Alexander Litvak, Alain Pajor Cortona, June 2011 Nicole Tomczak-Jaegermann

More information

1 Glivenko-Cantelli type theorems

1 Glivenko-Cantelli type theorems STA79 Lecture Spring Semester Glivenko-Cantelli type theorems Given i.i.d. observations X,..., X n with unknown distribution function F (t, consider the empirical (sample CDF ˆF n (t = I [Xi t]. n Then

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Nonlinear equations. Norms for R n. Convergence orders for iterative methods

Nonlinear equations. Norms for R n. Convergence orders for iterative methods Nonlinear equations Norms for R n Assume that X is a vector space. A norm is a mapping X R with x such that for all x, y X, α R x = = x = αx = α x x + y x + y We define the following norms on the vector

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Random Fermionic Systems

Random Fermionic Systems Random Fermionic Systems Fabio Cunden Anna Maltsev Francesco Mezzadri University of Bristol December 9, 2016 Maltsev (University of Bristol) Random Fermionic Systems December 9, 2016 1 / 27 Background

More information

A remark on the maximum eigenvalue for circulant matrices

A remark on the maximum eigenvalue for circulant matrices IMS Collections High Dimensional Probability V: The Luminy Volume Vol 5 (009 79 84 c Institute of Mathematical Statistics, 009 DOI: 04/09-IMSCOLL5 A remark on the imum eigenvalue for circulant matrices

More information

THE EIGENVALUES AND EIGENVECTORS OF FINITE, LOW RANK PERTURBATIONS OF LARGE RANDOM MATRICES

THE EIGENVALUES AND EIGENVECTORS OF FINITE, LOW RANK PERTURBATIONS OF LARGE RANDOM MATRICES THE EIGENVALUES AND EIGENVECTORS OF FINITE, LOW RANK PERTURBATIONS OF LARGE RANDOM MATRICES FLORENT BENAYCH-GEORGES AND RAJ RAO NADAKUDITI Abstract. We consider the eigenvalues and eigenvectors of finite,

More information

On the Spectra of General Random Graphs

On the Spectra of General Random Graphs On the Spectra of General Random Graphs Fan Chung Mary Radcliffe University of California, San Diego La Jolla, CA 92093 Abstract We consider random graphs such that each edge is determined by an independent

More information

(Part 1) High-dimensional statistics May / 41

(Part 1) High-dimensional statistics May / 41 Theory for the Lasso Recall the linear model Y i = p j=1 β j X (j) i + ɛ i, i = 1,..., n, or, in matrix notation, Y = Xβ + ɛ, To simplify, we assume that the design X is fixed, and that ɛ is N (0, σ 2

More information

Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg

Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws Symeon Chatzinotas February 11, 2013 Luxembourg Outline 1. Random Matrix Theory 1. Definition 2. Applications 3. Asymptotics 2. Ensembles

More information

Random Matrices: Beyond Wigner and Marchenko-Pastur Laws

Random Matrices: Beyond Wigner and Marchenko-Pastur Laws Random Matrices: Beyond Wigner and Marchenko-Pastur Laws Nathan Noiry Modal X, Université Paris Nanterre May 3, 2018 Wigner s Matrices Wishart s Matrices Generalizations Wigner s Matrices ij, (n, i, j

More information

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31 STAT C26A / MATH C223A : Stein s method and applications Lecture 3 Lecture date: Nov. 7, 27 Scribe: Anand Sarwate Gaussian concentration recap If W, T ) is a pair of random variables such that for all

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

Problem set 2 The central limit theorem.

Problem set 2 The central limit theorem. Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly

More information

Chapter 4. Chapter 4 sections

Chapter 4. Chapter 4 sections Chapter 4 sections 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP: 4.8 Utility Expectation

More information

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition

More information