Stein's Method and Characteristic Functions
1 Stein's Method and Characteristic Functions. Alexander Tikhomirov, Komi Science Center of Ural Division of RAS, Syktyvkar, Russia. Workshop "New Directions in Stein's Method", NUS, Singapore, May 2015.
2 Topics: Central Limit Theorems for Sums of Independent and Dependent Random Variables; CLT for the Maximal Sum; CLT for Multivariate Random Variables; CLT for Hilbert Space Valued Random Variables; Semicircular Law for Random Matrices; CLT for Linear Statistics of Eigenvalues of Random Matrices.
8 Stein's Comments. "The abstract normal approximation theorem of Section 2 is elementary in the sense that it uses only the basic properties of conditional expectation and elements of analysis, including the solution of a first-order linear differential equation. It is also direct, in the sense that the expectation of a fairly arbitrary function of the random variable W in question is approximated directly, without going through the characteristic function."
9 Stein's Comments, continued. "Because of the clumsiness of the technique used on this point, it is likely that slightly better results could be obtained by using the basic identity, Lemma 2.1, to approximate the characteristic function of W and then, by standard procedures, to go from this to an approximation for the distribution of W. However, I believe that, in the long run, better results will be obtained by direct methods." — Ch. Stein, 1972
10 The Notation. Let (Ω, M, Pr) be a probability space and, for every n ≥ 1, let F_n ⊂ M be a sub-σ-algebra of M such that F_n ⊂ F_{n+1}. Let (G_n)_{n=1}^∞ be a sequence of r.v.'s with E|G_n| < ∞, and let W_n := E{G_n | F_n}. For any sequence of r.v.'s Ŵ_n with E|Ŵ_n| < ∞ define the following quantities:
D_n := W_n − Ŵ_n. (1)
11 The Notation, continued.
δ_n(t) := | E[G_n (1 − e^{−itD_n}) / (it)] − 1 |,
ε_{n1}(t) := | E G_n e^{itŴ_n} |,
ε_{n2}(t) := E | E{ G_n (1 − e^{−itD_n}) | F_n } − E G_n (1 − e^{−itD_n}) |,
ε_n(t) := ε_{n1}(t) + ε_{n2}(t),
δ̄_n(T) := sup_{|t| ≤ T} δ_n(t).
12 The Main Lemma.
Lemma. For any 0 ≤ γ < 1 and any T > 0 such that δ̄_n(T) ≤ γ, there exists an absolute constant C > 0 such that the following inequality holds:
sup_x |Pr{W_n ≤ x} − Φ(x)| ≤ C [ 1/T + δ̄_n(T)/(1 − γ) + (1/(1 − γ)) ∫_0^T (e^{−(1−γ)t²/2} / t) ∫_0^t ε_n(u) e^{(1−γ)u²/2} du dt ],
where Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−u²/2} du.
13 The Proof of the Main Lemma. Let f_n(t) := E e^{itW_n}. Since E|W_n| < ∞ and E|G_n| < ∞, we may write
f_n'(t) = i E W_n e^{itW_n} = i E G_n e^{itW_n}. (2)
Continuing (W_n = Ŵ_n + D_n, so e^{itŴ_n} = e^{itW_n} e^{−itD_n}), we get
f_n'(t) = i E G_n e^{itŴ_n} + i f_n(t) E G_n (1 − e^{−itD_n}) + i E[ ( E{ G_n (1 − e^{−itD_n}) | F_n } − E G_n (1 − e^{−itD_n}) ) e^{itW_n} ].
14 Continuing. Let θ(·) (with an index or without) denote any function such that |θ(·)| ≤ 1, and let C denote an absolute constant, which may change from line to line. We may now rewrite the previous equation for the characteristic function in the form
f_n'(t) = −t (1 − δ_n(t)) f_n(t) + θ_n(t) ε_n(t).
15 Solving this equation with the initial condition f_n(0) = 1, we get
f_n(t) = exp{ −t²/2 + ∫_0^t u δ_n(u) du } ( 1 + ∫_0^t θ_n(u) ε_n(u) exp{ u²/2 − ∫_0^u v δ_n(v) dv } du ).
Furthermore, note that
| ∫_u^t v δ_n(v) dv | ≤ (γ/2)(t² − u²), for 0 ≤ u ≤ t ≤ T. (3)
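The closed form above is the standard integrating-factor solution of f_n'(t) = −t(1 − δ_n(t)) f_n(t) + θ_n(t) ε_n(t) with f_n(0) = 1; for completeness, a sketch of that computation (a reconstruction of the routine step, not text from the slides):

```latex
% Integrating factor for  f_n'(t) + t(1-\delta_n(t))\,f_n(t) = \theta_n(t)\,\varepsilon_n(t)
\mu(t) := \exp\Big(\int_0^t u\,(1-\delta_n(u))\,du\Big)
        = \exp\Big(\tfrac{t^2}{2} - \int_0^t u\,\delta_n(u)\,du\Big),
\qquad (\mu f_n)'(t) = \mu(t)\,\theta_n(t)\,\varepsilon_n(t).
% Integrating over [0,t] and using f_n(0)=1:
f_n(t) = \mu(t)^{-1}\Big(1 + \int_0^t \mu(u)\,\theta_n(u)\,\varepsilon_n(u)\,du\Big).
```

Expanding μ(t)^{−1} and μ(u) gives exactly the displayed solution.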
16 The last relation immediately implies that, for |t| ≤ T,
f_n(t) = exp{ −(t²/2)(1 + θ_{n1}(t) δ̄_n(T)) } + θ_{n2}(t) ∫_0^t ε_n(u) exp{ −(t² − u²)(1 − γ)/2 } du.
From here it follows that
| f_n(t) − e^{−t²/2} | ≤ δ̄_n(T) t² e^{−(1−γ)t²/2} + ∫_0^t ε_n(u) e^{−(1−γ)(t²−u²)/2} du. (4)
17 Applying the famous Esseen inequality, we get
sup_x |Pr{W_n ≤ x} − Φ(x)| ≤ C [ 1/T + δ̄_n(T)/(1 − γ) + (1/(1 − γ)) ∫_0^T (e^{−(1−γ)t²/2} / t) ∫_0^t ε_n(u) e^{(1−γ)u²/2} du dt ]. (5)
18 Corollary.
Corollary. Assume that δ_n(t) → 0 and ε_n(t) → 0 as n → ∞, uniformly on every bounded interval [0, T]. Then
lim_{n→∞} sup_x |Pr{W_n ≤ x} − Φ(x)| = 0. (6)
19 Applications of the Main Lemma: Sums of Weakly Dependent R.V.'s. Let X_1, X_2, ... be a stationary sequence of random variables with E X_1 = 0 and E X_1² = 1. Let M_a^b denote the σ-algebra generated by the X_j with j ∈ [a, b]. We define the strong mixing coefficient of the sequence (X_j)_{j=1}^∞ by the equality
α(m) := sup_{k ≥ 1} sup_{A ∈ M_1^k, B ∈ M_{k+m}^∞} |Pr(AB) − Pr(A) Pr(B)|. (7)
20 Theorem.
Theorem. Let the following conditions hold for some δ > 0:
Σ_{m=1}^∞ [α(m)]^{δ/(2+δ)} < ∞, E|X_1|^{2+δ} < ∞, lim_{n→∞} n^{−1} E(Σ_{j=1}^n X_j)² = σ² > 0.
Then
lim_{n→∞} sup_x |Pr{B_n^{−1} Σ_{j=1}^n X_j ≤ x} − Φ(x)| = 0, (8)
where B_n² := E(Σ_{j=1}^n X_j)².
21 To prove this result we use our Lemma. Let I be a random variable uniformly distributed on the set {1, ..., n}, independent of the X_j. Put G = n B_n^{−1} X_I; then W = E{G | F}, and put Ŵ = B_n^{−1} Σ_{j: |j−I| > m} X_j. Then we have the estimate
ε_{n1}(t) = |E G e^{itŴ}| ≤ C n B_n^{−1} (E|X_1|^{2+δ})^{1/(2+δ)} [α(m)]^{(1+δ)/(2+δ)}. (9)
Furthermore,
δ_n(t) ≤ (1/B_n) Σ_{j=1}^n E |X_j| | (e^{itD_nj} − 1)/(it) − D_nj | ≤ C |t|^δ m^{1+δ} n B_n^{−(2+δ)} E|X_1|^{2+δ}, (10)
where D_nj = B_n^{−1} Σ_{k: |j−k| ≤ m} X_k.
22 For ε_{n2}(t) we have the estimate
ε_{n2}(t) = E | (1/B_n) Σ_{j=1}^n ( X_j (e^{itD_nj} − 1) − E X_j (e^{itD_nj} − 1) ) | ≤ C |t|^δ m^δ n^{−δ/2}. (11)
23 CLT for Quadratic Forms. Let X_1, X_2, ... be a sequence of r.v.'s such that E X_k = 0, E X_k² = 1, and let the sequence be uniformly integrable, i.e.
sup_{k ≥ 1} E X_k² I{|X_k| > M} → 0, as M → ∞. (12)
We consider the quadratic form Q_n = Σ_{j,k=1}^n a_jk^{(n)} X_j X_k, where the matrix of coefficients A_n = (a_jk^{(n)})_{j,k=1}^n is assumed symmetric with zero diagonal, a_kk^{(n)} = 0.
24 Let ‖A‖ denote the operator norm of the matrix A and let ‖A‖₂ denote its Hilbert–Schmidt norm. We shall assume that
lim_{n→∞} ‖A_n‖ / ‖A_n‖₂ = 0. (13)
This condition is equivalent to
lim_{n→∞} λ_1^{(n)} / σ_n = 0, (14)
where λ_1^{(n)} is the maximum modulus eigenvalue of A_n and σ_n² = E Q_n² = 2 ‖A_n‖₂².
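As a quick sanity check on the normalization σ_n² = E Q_n² = 2‖A_n‖₂² (for a symmetric A_n with zero diagonal and independent X_k with mean 0 and variance 1), here is a small Monte Carlo sketch in pure Python; the matrix size, sample count, seed, and the Rademacher choice for the X_k are illustrative assumptions, not part of the slides:

```python
import random

random.seed(1)
n, N = 6, 50_000

# random symmetric coefficient matrix with zero diagonal (a_kk = 0)
A = [[0.0] * n for _ in range(n)]
for j in range(n):
    for k in range(j + 1, n):
        A[j][k] = A[k][j] = random.uniform(-1.0, 1.0)

hs2 = sum(A[j][k] ** 2 for j in range(n) for k in range(n))  # ||A||_2^2

# Monte Carlo estimate of E Q^2 with Rademacher entries (mean 0, variance 1)
acc = 0.0
for _ in range(N):
    X = [random.choice((-1.0, 1.0)) for _ in range(n)]
    Q = sum(A[j][k] * X[j] * X[k] for j in range(n) for k in range(n))
    acc += Q * Q
est = acc / N
print(est, 2.0 * hs2)  # the two numbers agree up to Monte Carlo error
```

The exact identity E Q² = 2‖A‖₂² follows by expanding the square: only pairs of coinciding index sets {j, k} survive the expectation.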
25 Theorem.
Theorem. Assume conditions (12) and (13). Then
sup_x |Pr{σ_n^{−1} Q_n ≤ x} − Φ(x)| → 0 as n → ∞. (15)
26 To apply the Main Lemma, denote T_j := {1, ..., n} \ {j} and introduce the r.v.'s ξ_j = X_j Σ_{k ∈ T_j} a_jk X_k; note that Q_n = Σ_{j=1}^n ξ_j. We put G = n ξ_I and Ŵ := Q^{(I)}, where Q^{(I)} := Σ_{k,l ∈ T_I} a_kl X_k X_l (here we normalize so that σ_n = 1). Note that W − Ŵ = 2 ξ_I. First we assume that |X_j| ≤ M for all j = 1, ..., n. We may now estimate δ_n(t) and ε_n(t). For instance,
ε_{n1}(t) ≤ Σ_{l=1}^n | E ξ_l (e^{2itξ_l} − 1 − 2itξ_l) | ≤ C t² Σ_{l=1}^n E|ξ_l|³ ≤ C t² M² Σ_{l=1}^n ( Σ_{k ∈ T_l} a_kl² )^{3/2} ≤ C t² λ_n. (16)
27 Asymptotic Distribution of Quadratic Forms. Consider the quadratic form
Q_n = Σ_{1 ≤ j ≠ k ≤ n} a_jk X_j X_k + Σ_{j=1}^n a_jj (X_j² − 1).
Let V² = Σ_{j=1}^n a_jj², L_j² = Σ_{k=1}^n a_jk², ‖A‖₂² = Σ_{j,k=1}^n a_jk².
28 Let μ_k = E X_1^k. Let A^{(0)} = (a_jk)_{j,k=1}^n with zero diagonal, a_jj^{(0)} = 0, and let d = (a_11, ..., a_nn). Let Y and Y′ be two independent copies of a standard Gaussian vector in R^n. Let
G := μ_2 ⟨A^{(0)} Y, Y′⟩ + μ_3 ⟨d, Y⟩ + (μ_4 − μ_2²)^{1/2} ⟨d, Y′⟩.
29 The Characteristic Equation. Denote by R_t the resolvent matrix of A^{(0)}, R_t = (I − 2itμ_2 A^{(0)})^{−1}, and introduce the function
G(t) = μ_2 Tr(R_t A^{(0)}) − t² μ_3² ⟨d, R_t² A^{(0)} d⟩ + it (μ_4 − μ_2²) ⟨d, d⟩. (17)
30 The characteristic function f_n(t) = E exp{itQ_n} satisfies the equation
f_n'(t) = i G(t) f_n(t) + ε_n(t), (18)
where the function ε_n(t) converges to 0 uniformly on every bounded interval.
31 CLT for Generalized U-statistics. Consider statistics of the type U_2 = Σ_{1 ≤ l < k ≤ n} g_lk(X_l, X_k). Denote σ_kj² := E g_kj²(X_k, X_j) < ∞ and σ_n² = 2 Σ_{k,j} σ_kj². About the functions g_kl(x, y) we shall assume:
(a) g_lj(x, y) = g_jl(y, x);
(b) E{ g_kj(X_k, X_j) | X_j } = 0;
(c) lim_{M→∞} lim_{n→∞} σ_n^{−2} Σ_{1 ≤ k < j ≤ n} E g_kj²(X_k, X_j) I{ |g_kj(X_k, X_j)| > M } = 0;
(d) lim_{n→∞} σ_n^{−4} Σ_{l ≠ j} Σ_k E| g_lk(X_l, X_k) g_jk(X_j, X_k) |² = 0;
(e) lim_{n→∞} σ_n^{−2} max_{1 ≤ k ≤ n} Σ_j σ_jk² = 0.
32 Theorem.
Theorem. Assuming conditions (a)–(e), we have
sup_x |Pr{σ_n^{−1} U_2 ≤ x} − Φ(x)| → 0 as n → ∞.
Moreover, there exists an absolute positive constant C > 0 such that
sup_x |Pr{σ_n^{−1} U_2 ≤ x} − Φ(x)| ≤ C [ σ_n^{−3} Σ_{j,k=1}^n E|g_jk(X_j, X_k)|³ + σ_n^{−4} Σ_{j,k=1}^n E|g_jk(X_j, X_k)|⁴ + σ_n^{−4} Σ_{j,k=1}^n Σ_{l=1}^n E| g_kl(X_k, X_l) g_jl(X_j, X_l) | + σ_n^{−4} Σ_{j=1}^n ( E ( Σ_{k=1}^n E{ g_jk²(X_j, X_k) | X_j } )² )^{1/2} ].
33 Maximal Sum. It is well known that the limit distribution of the maximal sum of independent random variables is
G(x) = √(2/π) ∫_0^x e^{−u²/2} du · I{x ≥ 0}, (19)
i.e. the distribution of the modulus of a standard Gaussian r.v. Note that Φ(x) = (1 − G(−x) + G(x))/2 and f(t) = Re g(t), where f(t), g(t) are the characteristic functions of Φ(x) and G(x) respectively. This means that we may prove a CLT for the symmetrization of the maximal sum using our Lemma.
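The identity Φ(x) = (1 − G(−x) + G(x))/2 is easy to verify numerically via the error function; a minimal pure-Python sketch (the test points are arbitrary):

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def G(x):
    # CDF of |Z| for a standard normal Z (half-normal law), as in (19)
    return math.erf(x / math.sqrt(2.0)) if x >= 0 else 0.0

gap = max(abs(Phi(x) - (1.0 - G(-x) + G(x)) / 2.0)
          for x in (-2.0, -0.5, 0.0, 0.7, 1.5))
print(gap)  # ≈ 0 (machine precision)
```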
34 CLT for Hilbert-valued r.v.'s. Let X_1, X_2, ... be a sequence of random elements of a Hilbert space H with zero mean, E X_j = 0, and covariance operators A_j = cov(X_j). Suppose the sequence {X_j : j = 1, 2, ...} satisfies the uniform strong mixing condition with coefficient φ(m). Let
S_n = Σ_{j=1}^n X_j, B_n = cov(S_n), Z_n = B_n^{−1/2} S_n. (20)
35 The well-known equation for the characteristic function f(t) = E exp{it ‖ξ + a‖²}, where ξ is a Gaussian element of H with covariance operator B:
f'(t) = i Tr( B (I − 2itB)^{−1} ) f(t) + i ⟨a, (I − 2itB)^{−2} a⟩ f(t).
36 The main result is the following theorem:
Theorem. Let E‖X_j‖³ ≤ L < ∞ and let there exist constants K, β > 0 such that φ(m) ≤ K exp{−βm} for all m ≥ 1. Let also the following conditions hold:
lim_{n→∞} n^{−1} B_n = A and φ(1) < 1. (21)
Then there exists a constant c, depending on the first eigenvalues of the operator A, such that
sup_r |Pr{‖Z_n + a‖ < r} − Pr{‖Y_n + a‖ < r}| ≤ c L n^{−1/2} log{ n (1 + ‖a‖) },
where Y_n is a Gaussian r.v. with covariance operator equal to the covariance operator of Z_n and with mean zero.
37 Semicircular Law for Random Matrices. Let p(x) denote the density of the semicircular law, p(x) = (1/(2π)) √(4 − x²) I_{[−2,2]}(x), and let f(t) be the characteristic function of the semicircular law, f(t) = ∫ e^{itx} p(x) dx.
Lemma. The characteristic function of the semicircular law is the unique solution of the equation
f'(t) = −∫_0^t f(u) f(t − u) du, f(0) = 1. (22)
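With the sign convention f'(t) = −∫_0^t f(u) f(t−u) du (which the moment computation on the following slides confirms), the Lemma can be checked numerically from the moment series of the semicircular law; a pure-Python sketch, where the evaluation point, step size, truncation length, and grid size are illustrative:

```python
import math

def catalan(m):
    return math.comb(2 * m, m) // (m + 1)

def f(t, terms=25):
    # truncated series for the semicircle characteristic function:
    # f(t) = sum_m (-1)^m C_m t^(2m) / (2m)!  (even moments are Catalan numbers)
    return sum((-1) ** m * catalan(m) * t ** (2 * m) / math.factorial(2 * m)
               for m in range(terms))

t, h, m = 1.5, 1e-5, 2000
deriv = (f(t + h) - f(t - h)) / (2.0 * h)        # central-difference f'(t)
conv = sum(f((k + 0.5) * t / m) * f(t - (k + 0.5) * t / m)
           for k in range(m)) * t / m            # midpoint rule for the convolution
resid = abs(deriv + conv)
print(resid)  # ≈ 0, i.e. f'(t) = -∫_0^t f(u) f(t-u) du
```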
38 Recall that the moments {α_k} of the semicircular law satisfy the relation
α_{k+1} = Σ_{ν=0}^{k−1} α_ν α_{k−1−ν}. (23)
We have the representation
f'(t) = i Σ_{k=0}^∞ ((it)^k / k!) α_{k+1}.
On the other hand,
∫_0^t f(u) f(t−u) du = Σ_{ν=0}^∞ Σ_{μ=0}^∞ (α_ν α_μ / (ν! μ!)) ∫_0^t (iu)^ν (i(t−u))^μ du = Σ_{ν,μ=0}^∞ α_ν α_μ i^{μ+ν} t^{μ+ν+1} / (μ+ν+1)!.
39 Changing the order of summation, we get
∫_0^t f(u) f(t−u) du = Σ_{s=0}^∞ (i^s t^{s+1} / (s+1)!) Σ_{ν=0}^s α_ν α_{s−ν}.
Applying relation (23), we obtain
∫_0^t f(u) f(t−u) du = Σ_{s=0}^∞ (i^s t^{s+1} / (s+1)!) α_{s+2} = −i Σ_{s=1}^∞ ((it)^s / s!) α_{s+1} = −f'(t).
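Relation (23) says that the moments of the semicircular law (the Catalan numbers at even orders, zero at odd orders) satisfy the Catalan recursion; a minimal sketch verifying it for small k (the range of k is arbitrary):

```python
import math

def alpha(k):
    # k-th moment of the semicircle law: Catalan(k/2) for even k, 0 for odd k
    if k % 2:
        return 0
    m = k // 2
    return math.comb(2 * m, m) // (m + 1)

# check alpha_{k+1} = sum_{nu=0}^{k-1} alpha_nu * alpha_{k-1-nu}
ok = all(alpha(k + 1) == sum(alpha(v) * alpha(k - 1 - v) for v in range(k))
         for k in range(1, 16))
print(ok)  # True
```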
40 Lemma.
Lemma. Suppose that for the characteristic functions f_n(t) the following representation holds:
f_n'(t) = −∫_0^t f_n(s) f_n(t−s) ds + ε_n(t), (24)
where ε_n(t) → 0 as n → ∞ uniformly on every bounded interval. Then
lim_{n→∞} f_n(t) = f(t), (25)
where f(t) is the characteristic function of the semicircular law.
41 We represent f_n(t) in the form
f_n(t) = 1 − ∫_0^t ∫_0^s f_n(u) f_n(s−u) du ds + ∫_0^t ε_n(s) ds. (26)
If we now put g_n(t) = f_n(t) − f(t), then we get
g_n(t) = −∫_0^t ∫_0^s ( g_n(u) f_n(s−u) + f(u) g_n(s−u) ) du ds + ∫_0^t ε_n(s) ds. (27)
Let ḡ_n(t) = sup_{s ∈ [0,t]} |g_n(s)| and ε̄_n(t) = sup_{s ∈ [0,t]} |ε_n(s)|. It is straightforward to check that, for any k ≥ 1,
ḡ_n(t) ≤ ((2t)^k / k!) ḡ_n(t) + e^t ε̄_n(t). (28)
Choosing k so large that (2t)^k / k! < 1, the last inequality implies the claim.
42 How may we use the last Lemma? Let W = n^{−1/2} (X_jk)_{j,k=1}^n be an n × n Hermitian random matrix with independent entries (up to symmetry). Let U(t) = e^{itW}. Then f_n(t) = (1/n) Tr U(t). We have
E f_n'(t) = (i/n) E Tr W U(t) = (i/n) Σ_{j,k=1}^n E X_jk U_kj(t).
It is well known that
∂U_pq(t) / ∂X_jk = i (1 + δ_jk)^{−1} ( U_pj ∗ U_kq(t) + U_pk ∗ U_jq(t) ), (29)
where f ∗ g(t) := ∫_0^t f(s) g(t−s) ds.
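Formula (29) is a convolution (Duhamel-type) identity for entrywise derivatives of U(t) = e^{itW}. Below is a pure-Python numerical sketch of the underlying Duhamel formula, d/dε e^{it(W+εE)} |_{ε=0} = i ∫_0^t e^{iuW} E e^{i(t−u)W} du, for a 2×2 symmetric matrix; the matrix entries, Taylor-series length for the exponential, step size, and grid size are illustrative assumptions:

```python
def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * a for a in row] for row in A]

def expm(A, terms=40):
    # Taylor series; adequate for the small, moderate-norm matrices used here
    n = len(A)
    I = [[1.0 + 0j if i == j else 0j for j in range(n)] for i in range(n)]
    out, term = I, I
    for k in range(1, terms):
        term = mscale(1.0 / k, mmul(term, A))
        out = madd(out, term)
    return out

W = [[0.3, 0.7], [0.7, -0.2]]      # small real symmetric matrix
E = [[0.0, 1.0], [1.0, 0.0]]       # perturbation direction E_{12} + E_{21}
t, eps, m = 1.0, 1e-6, 1000
U = lambda s, M: expm(mscale(1j * s, M))

# finite-difference derivative of e^{it(W + eps E)} in the direction E
D = mscale(1.0 / (2.0 * eps),
           madd(U(t, madd(W, mscale(eps, E))),
                mscale(-1.0, U(t, madd(W, mscale(-eps, E))))))

# Duhamel integral  i * ∫_0^t e^{iuW} E e^{i(t-u)W} du  (midpoint rule)
S = [[0j, 0j], [0j, 0j]]
for k in range(m):
    u = (k + 0.5) * t / m
    S = madd(S, mmul(mmul(U(u, W), E), U(t - u, W)))
S = mscale(1j * t / m, S)

err = max(abs(D[i][j] - S[i][j]) for i in range(2) for j in range(2))
print(err)  # ≈ 0
```

Entrywise, the integrand (e^{iuW} E e^{i(t−u)W})_{pq} is exactly the convolution U_pj ∗ U_kq + U_pk ∗ U_jq appearing in (29).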
43 Using the last relations, we get
E f_n'(t) = −∫_0^t E[ (1/n²) Σ_{j,k=1}^n U_jj(s) U_kk(t−s) ] ds + ε_n(t) = −∫_0^t E f_n(s) f_n(t−s) ds + ε_n(t). (30)
44 For the Marchenko–Pastur law we may use the representation for the characteristic function of the symmetrized law:
f'(t) = −y ∫_0^t f(s) f(t−s) ds − (1 − y) ∫_0^t f(s) ds. (31)
45 Thank you for your attention!
Draft: February 17, 1998 1 Local Asymptotic Normality of Ranks and Covariates in Transformation Models P.J. Bickel 1 and Y. Ritov 2 1.1 Introduction Le Cam and Yang (1988) addressed broadly the following
More informationThe Central Limit Theorem: More of the Story
The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit
More informationProbability Background
Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second
More informationRANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA
Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,
More informationNotes on the second moment method, Erdős multiplication tables
Notes on the second moment method, Erdős multiplication tables January 25, 20 Erdős multiplication table theorem Suppose we form the N N multiplication table, containing all the N 2 products ab, where
More informationarxiv: v1 [math.pr] 3 Apr 2017
KOLMOGOROV BOUNDS FOR THE NORMAL APPROXIMATION OF THE NUMBER OF TRIANGLES IN THE ERDŐS-RÉNYI RANDOM GRAPH Adrian Röllin National University of Singapore arxiv:70.000v [math.pr] Apr 07 Abstract We bound
More informationMathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )
Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More information1 Curvature of submanifolds of Euclidean space
Curvature of submanifolds of Euclidean space by Min Ru, University of Houston 1 Curvature of submanifolds of Euclidean space Submanifold in R N : A C k submanifold M of dimension n in R N means that for
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationarxiv: v1 [math.pr] 13 Jan 2019
arxiv:1901.04086v1 [math.pr] 13 Jan 2019 Non-central limit theorem for non-linear functionals of vector valued Gaussian stationary random fields Péter Major Mathematical Institute of the Hungarian Academy
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics ON A HYBRID FAMILY OF SUMMATION INTEGRAL TYPE OPERATORS VIJAY GUPTA AND ESRA ERKUŞ School of Applied Sciences Netaji Subhas Institute of Technology
More informationBRUNO L. M. FERREIRA AND HENRIQUE GUZZO JR.
REVISTA DE LA UNIÓN MATEMÁTICA ARGENTINA Vol. 60, No. 1, 2019, Pages 9 20 Published online: February 11, 2019 https://doi.org/10.33044/revuma.v60n1a02 LIE n-multiplicative MAPPINGS ON TRIANGULAR n-matrix
More informationSupermodular ordering of Poisson arrays
Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore
More informationFree Probability Theory and Random Matrices. Roland Speicher Queen s University Kingston, Canada
Free Probability Theory and Random Matrices Roland Speicher Queen s University Kingston, Canada What is Operator-Valued Free Probability and Why Should Engineers Care About it? Roland Speicher Queen s
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationFunctions of Several Variables
Functions of Several Variables The Unconstrained Minimization Problem where In n dimensions the unconstrained problem is stated as f() x variables. minimize f()x x, is a scalar objective function of vector
More informationRadial Basis Functions I
Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered
More informationStochastic Processes
Introduction and Techniques Lecture 4 in Financial Mathematics UiO-STK4510 Autumn 2015 Teacher: S. Ortiz-Latorre Stochastic Processes 1 Stochastic Processes De nition 1 Let (E; E) be a measurable space
More informationarxiv: v1 [math.pr] 22 May 2008
THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered
More informationMulti-stage convex relaxation approach for low-rank structured PSD matrix recovery
Multi-stage convex relaxation approach for low-rank structured PSD matrix recovery Department of Mathematics & Risk Management Institute National University of Singapore (Based on a joint work with Shujun
More informationSTAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31
STAT C26A / MATH C223A : Stein s method and applications Lecture 3 Lecture date: Nov. 7, 27 Scribe: Anand Sarwate Gaussian concentration recap If W, T ) is a pair of random variables such that for all
More informationarxiv: v2 [math.pr] 16 Aug 2014
RANDOM WEIGHTED PROJECTIONS, RANDOM QUADRATIC FORMS AND RANDOM EIGENVECTORS VAN VU DEPARTMENT OF MATHEMATICS, YALE UNIVERSITY arxiv:306.3099v2 [math.pr] 6 Aug 204 KE WANG INSTITUTE FOR MATHEMATICS AND
More informationThe reference [Ho17] refers to the course lecture notes by Ilkka Holopainen.
Department of Mathematics and Statistics Real Analysis I, Fall 207 Solutions to Exercise 6 (6 pages) riikka.schroderus at helsinki.fi Note. The course can be passed by an exam. The first possible exam
More information