Multi-dimensional Central Limit Theorem


1 Multi-dimensional Central Limit Theorem: Outline

Consider a sequence of independent random processes X_1(t), X_2(t), ..., X_N(t), i.i.d. to some process X(t). Assume E[X(t)] = 0. Define the sum process

    Z(t) = X_1(t) + X_2(t) + ... + X_N(t) = Σ_{i=1}^{N} X_i(t)

As N → ∞, Z(t) becomes a Gaussian random process: Z(t_1), Z(t_2), ..., Z(t_k) are jointly Gaussian for any k and for any sampling instants t_1, t_2, ..., t_k.
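The claim above can be illustrated numerically for a single sampling instant: the normalized sum of N i.i.d. zero-mean samples should look Gaussian. A minimal NumPy sketch, where the choice of component distribution (uniform on [-1, 1]), N, and the trial count are illustrative assumptions, not from the notes:

```python
import numpy as np

# Sum N i.i.d. zero-mean samples at a fixed time instant and check that the
# normalized sum has the vanishing skewness and excess kurtosis of a Gaussian.
rng = np.random.default_rng(0)
N, trials = 30, 100_000

# Each row is one realization of X_1, ..., X_N at a fixed time instant;
# X is uniform on [-1, 1], so E[X] = 0 and var(X) = 1/3.
x = rng.uniform(-1.0, 1.0, size=(trials, N))
z = x.sum(axis=1) / np.sqrt(N * (1.0 / 3.0))   # normalize to unit variance

skew = np.mean(z**3)                  # ~ 0 for a Gaussian
excess_kurt = np.mean(z**4) - 3.0     # ~ 0 for a Gaussian

print(skew, excess_kurt)
```

Already at N = 30 both Gaussian-shape statistics are close to zero, which is the one-dimensional face of the multi-dimensional statement developed below.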

2 Joint Characteristic Function of a Random Vector

Definition. For a k-dimensional random vector X = (X_1, X_2, ..., X_k), define its joint characteristic function as

    Φ_X(ω_1, ω_2, ..., ω_k) = ∫∫...∫ e^{jω_1 x_1} e^{jω_2 x_2} ... e^{jω_k x_k} f_X(x_1, x_2, ..., x_k) dx_1 dx_2 ... dx_k

where f_X(x_1, x_2, ..., x_k) is the joint pdf of X. Using the expectation notation,

    Φ_X(ω_1, ω_2, ..., ω_k) = E[e^{j(ω_1 X_1 + ω_2 X_2 + ... + ω_k X_k)}]    (g1)

When the random variables {X_i} are statistically independent,

    Φ_X(ω_1, ω_2, ..., ω_k) = E[e^{jω_1 X_1}] E[e^{jω_2 X_2}] ... E[e^{jω_k X_k}]

In the one-dimensional case, Φ_X(ω) = E[e^{jωX}].

Our vectors are row vectors. Using matrix notation, let ω = (ω_1, ω_2, ..., ω_k) and X = (X_1, X_2, ..., X_k). Then ωX^T = ω_1 X_1 + ω_2 X_2 + ... + ω_k X_k, and eq. (g1) is written as

    Φ_X(ω) = E[e^{jωX^T}]    (g2)
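The factorization of the joint CF for independent components can be checked by Monte Carlo. In this sketch the components are independent uniform(-1, 1) variables, whose one-dimensional CF is sin(ω)/ω; the distribution, test point ω, and sample size are illustrative assumptions:

```python
import numpy as np

# Monte-Carlo check that the joint CF of independent components equals the
# product of the one-dimensional CFs.
rng = np.random.default_rng(1)
n = 200_000
X = rng.uniform(-1.0, 1.0, size=(n, 2))     # rows are realizations of (X1, X2)

w = np.array([1.0, 0.5])                    # a test point omega
cf_joint = np.mean(np.exp(1j * X @ w))      # estimate of E[e^{j w X^T}]

cf_product = np.prod(np.sin(w) / w)         # product of the exact 1-D CFs

print(abs(cf_joint - cf_product))
```

The estimated joint CF agrees with the product of marginal CFs to within Monte-Carlo error, and its imaginary part is near zero because the distribution is symmetric.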

3 Covariance Matrix of a Random Vector

Consider a k-dimensional random vector X = (X_1, X_2, ..., X_k). Define m_i = E[X_i] and

    λ_ij = cov(X_i, X_j) = E[(X_i - m_i)(X_j - m_j)]

    Λ_X = [ λ_11  λ_12  ...  λ_1k ]
          [ λ_21  λ_22  ...  λ_2k ]
          [  ...   ...  ...   ... ]
          [ λ_k1  λ_k2  ...  λ_kk ]

Λ_X is referred to as the covariance matrix of the random vector X. When X is a zero-mean random vector, that is, E[X_i] = 0 for every i = 1, 2, ..., k, then λ_ij = E[X_i X_j]. In that case, note that

    Λ_X = E[X^T X]    (the k×k outer product of the row vector X with itself, in expectation)
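The outer-product form Λ_X = E[X^T X] for a zero-mean row vector can be verified on a sample by comparing it against NumPy's own covariance routine. The synthetic data and mixing matrix below are illustrative assumptions:

```python
import numpy as np

# For centered data, the sample version of E[X^T X] must coincide with the
# (biased) sample covariance matrix computed by numpy.
rng = np.random.default_rng(2)
n = 10_000
B = np.array([[1.0, 0.4], [0.0, 0.8]])
X = rng.standard_normal((n, 2)) @ B            # correlated rows, mean ~ 0

Xc = X - X.mean(axis=0)                        # center, so the sample is zero-mean
Lambda_outer = (Xc.T @ Xc) / n                 # sample version of E[X^T X]
Lambda_numpy = np.cov(X, rowvar=False, bias=True)

print(np.max(np.abs(Lambda_outer - Lambda_numpy)))
```

`rowvar=False` tells NumPy that rows are observations (matching the row-vector convention of these notes), and `bias=True` divides by n so the two computations are identical.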

4 Covariance Matrix of the Sum Vector

Let X = (X_1, X_2, ..., X_k) be a zero-mean k-dimensional random vector. Let Λ_X denote the covariance matrix of X: Λ_X = E[X^T X]. Consider N independent vectors X^(1), X^(2), ..., X^(N), statistically identical to X. Define the sum vector as

    Z = Σ_{n=1}^{N} X^(n)

Then

    Λ_Z = N Λ_X    (g3)

Proof. Since Z is a zero-mean random vector,

    Λ_Z = E[Z^T Z] = E[(X^(1) + X^(2) + ... + X^(N))^T (X^(1) + X^(2) + ... + X^(N))] = Σ_n Σ_m E[X^(n)T X^(m)]

Noting E[X^(n)T X^(m)] = E[X^(n)]^T E[X^(m)] = 0 for n ≠ m (independence and zero mean),

    Λ_Z = Σ_n E[X^(n)T X^(n)] = N Λ_X
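The identity Λ_Z = N Λ_X can be checked by Monte Carlo: sum N i.i.d. zero-mean vectors and compare the sample covariance of the sum with N times the covariance of one term. The mixing matrix B and the sizes below are illustrative assumptions:

```python
import numpy as np

# Each term is X = U B with U a standard-normal row pair, so Lambda_X = B^T B.
rng = np.random.default_rng(3)
N, trials = 5, 200_000
B = np.array([[1.0, 0.5], [0.0, 1.0]])
Lambda_X = B.T @ B                             # covariance of one term

# Z = X^(1) + ... + X^(N), one row of Z per trial
U = rng.standard_normal((trials, N, 2))
Z = (U @ B).sum(axis=1)

Lambda_Z_emp = np.cov(Z, rowvar=False, bias=True)

print(Lambda_Z_emp)
print(N * Lambda_X)
```

The empirical covariance of the sum matches N·Λ_X to within sampling error, and the cross terms E[X^(n)T X^(m)], n ≠ m, have averaged out exactly as the proof predicts.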

5 Joint Characteristic Function of the Sum Vector

Let Z = Σ_{n=1}^{N} X^(n) and assume {X^(n)} are i.i.d. to X. Then

    ln Φ_Z(ω) = N ln Φ_X(ω)    (g4)

Proof.

    Φ_Z(ω) = E[e^{jωZ^T}] = E[e^{jω(X^(1) + X^(2) + ... + X^(N))^T}]
           = E[ Π_n e^{jωX^(n)T} ]
           = Π_n E[e^{jωX^(n)T}]    (noting {X^(n)} are independent)
           = [Φ_X(ω)]^N             (noting {X^(n)} are identical to X)
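The relation Φ_Z(ω) = [Φ_X(ω)]^N can be verified numerically when Φ_X is known in closed form. Here X has two independent uniform(-1, 1) components, so Φ_X(ω) = (sin ω_1/ω_1)(sin ω_2/ω_2); the distribution, N, and test point are illustrative assumptions:

```python
import numpy as np

# Estimate the CF of Z = X^(1) + X^(2) + X^(3) by Monte Carlo and compare it
# with the exact per-term CF raised to the N-th power.
rng = np.random.default_rng(4)
N, trials = 3, 300_000
X = rng.uniform(-1.0, 1.0, size=(trials, N, 2))
Z = X.sum(axis=1)

w = np.array([0.8, 0.4])
cf_Z_emp = np.mean(np.exp(1j * Z @ w))        # estimate of E[e^{j w Z^T}]
cf_X_exact = np.prod(np.sin(w) / w)           # exact Phi_X(w) for this choice

print(abs(cf_Z_emp - cf_X_exact ** N))
```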

6 Joint CF of a Zero-mean Random Vector

Recall, for a k-dimensional random vector X, its joint characteristic function is

    Φ_X(ω) = E[e^{jωX^T}]

where ω = (ω_1, ..., ω_k), X = (X_1, ..., X_k), and ωX^T = ω_1 X_1 + ... + ω_k X_k.

Define the random variable W as

    W = ωX^T = ω_1 X_1 + ω_2 X_2 + ... + ω_k X_k

Then

    Φ_X(ω) = E[e^{jW}] = 1 + j E[W] + (j^2/2!) E[W^2] + (j^3/3!) E[W^3] + ...    (m1)

Now assume X is a zero-mean random vector. The 2nd term of eq. (m1) contains

    E[W] = ω_1 E[X_1] + ω_2 E[X_2] + ... + ω_k E[X_k]

However, for a zero-mean vector, E[X_i] = 0 and thus E[W] = 0.

3rd term of eq. (m1):

    E[W^2] = E[(ω_1 X_1 + ... + ω_k X_k)(ω_1 X_1 + ... + ω_k X_k)]
           = Σ_i Σ_j ω_i ω_j E[X_i X_j]
           = Σ_i Σ_j ω_i λ_ij ω_j    (recalling the covariance λ_ij = E[X_i X_j] when m_i = m_j = 0)
           = ω Λ_X ω^T    (m2)

7 Multi-dimensional Central Limit Theorem

Let Z = Σ_{n=1}^{N} X^(n) and {X^(n)} be i.i.d. to X, where E[X] = 0. Then

    lim_{N→∞} Φ_Z(ω) = e^{-(1/2) ω Λ_Z ω^T}    (m3)

where Λ_Z = N Λ_X. (Strictly speaking, since Λ_Z itself grows with N, eq. (m3) should be read as a large-N approximation; convergence in the usual CLT sense holds for the normalized sum Z/√N, with Λ_X in place of Λ_Z.) Z is referred to as a zero-mean Gaussian random vector when its joint characteristic function has the form shown in eq. (m3).

Proof. From eqs. (m1) and (m2),

    Φ_X(ω) = 1 + j E[W] + (j^2/2!) E[W^2] + (j^3/3!) E[W^3] + ...
           = 1 - (1/2) ω Λ_X ω^T + f(ω)

where f(ω) collects the third- and higher-order terms. Then

    ln Φ_X(ω) = ln(1 - (1/2) ω Λ_X ω^T + f(ω))

Recalling ln(1 + u) = u - u^2/2 + ... for |u| < 1,

    ln Φ_X(ω) = -(1/2) ω Λ_X ω^T + f(ω) + other terms

From eq. (g4),

    ln Φ_Z(ω) = N ln Φ_X(ω) = -(1/2) ω (N Λ_X) ω^T + N f(ω) + other terms

Finally, with the third- and higher-order terms becoming negligible,

    lim ln Φ_Z(ω) = -(1/2) ω Λ_Z ω^T

and from eq. (g3), Λ_Z = N Λ_X.
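The theorem can be illustrated numerically: for large N, the empirical CF of the sum vector should be close to the Gaussian form exp(-½ ω Λ_Z ω^T) with Λ_Z = N Λ_X. The component distribution (independent uniforms), N, and the test point ω are illustrative assumptions:

```python
import numpy as np

# Compare the Monte-Carlo CF of Z = sum of N i.i.d. zero-mean vectors with
# the limiting Gaussian CF exp(-0.5 * w Lambda_Z w^T), Lambda_Z = N * Lambda_X.
rng = np.random.default_rng(5)
N, trials = 50, 200_000
X = rng.uniform(-1.0, 1.0, size=(trials, N, 2))
Z = X.sum(axis=1)

Lambda_X = np.diag([1.0 / 3.0, 1.0 / 3.0])     # covariance of one uniform pair
Lambda_Z = N * Lambda_X

w = np.array([0.2, 0.1])
cf_emp = np.mean(np.exp(1j * Z @ w))           # empirical CF of Z
cf_gauss = np.exp(-0.5 * w @ Lambda_Z @ w)     # limiting Gaussian CF

print(abs(cf_emp - cf_gauss))
```

Even though each summand is uniform (far from Gaussian), at N = 50 the empirical CF of the sum already agrees with the Gaussian form to within Monte-Carlo error at this ω.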

8 Joint Characteristic Function of a Non-zero-mean Gaussian

Let X be a Gaussian random vector with mean vector m and covariance matrix Λ_X. Then its joint CF is

    Φ_X(ω) = e^{-(1/2) ω Λ_X ω^T + jωm^T}

A Gaussian random vector is completely defined by its mean vector and its covariance matrix.

Proof. Define Y = X - m. Then Y is a zero-mean Gaussian random vector, and it is easy to see Λ_Y = Λ_X. From eq. (m3), Φ_Y(ω) = exp(-(1/2) ω Λ_Y ω^T). Thus

    Φ_X(ω) = E[exp(jωX^T)] = E[exp(jω(Y + m)^T)]
           = E[exp(jωY^T) exp(jωm^T)]
           = E[exp(jωY^T)] exp(jωm^T)
           = Φ_Y(ω) exp(jωm^T)
           = exp(-(1/2) ω Λ_Y ω^T + jωm^T)
           = exp(-(1/2) ω Λ_X ω^T + jωm^T)    (noting Λ_Y = Λ_X)

9 Formal Definition of a Gaussian Random Vector

X is a Gaussian random vector (or the component random variables X_1, ..., X_k are jointly Gaussian) if and only if its joint characteristic function is

    Φ_X(ω) = exp(-(1/2) ω Λ ω^T + jωm^T)

where m is the mean vector and Λ is the covariance matrix. The joint pdf f_X(x) can be found by the inverse Fourier transform:

    f_X(x) = 1 / ((2π)^{k/2} |Λ|^{1/2}) · exp(-(1/2) (x - m) Λ^{-1} (x - m)^T)
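One quick consistency check of the pdf formula: for a diagonal Λ the joint pdf must factor into a product of one-dimensional normal pdfs. A sketch in the row-vector convention of these notes, with illustrative numbers:

```python
import numpy as np

def gaussian_pdf(x, m, Lam):
    """f(x) = (2*pi)^(-k/2) |Lam|^(-1/2) exp(-0.5 (x-m) Lam^{-1} (x-m)^T)."""
    k = len(m)
    d = x - m
    norm = (2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Lam))
    return np.exp(-0.5 * d @ np.linalg.inv(Lam) @ d) / norm

def normal_1d(x, s2):
    """Zero-mean one-dimensional normal pdf with variance s2."""
    return np.exp(-0.5 * x * x / s2) / np.sqrt(2 * np.pi * s2)

m = np.zeros(2)
Lam = np.diag([4.0, 9.0])        # independent components: pdf must factor
x = np.array([1.0, 2.0])

joint = gaussian_pdf(x, m, Lam)
product = normal_1d(x[0], 4.0) * normal_1d(x[1], 9.0)

print(joint, product)
```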

10 Weighted Sum of Gaussian Random Variables

Let X be a Gaussian random vector and define Y as a transformation of X:

    Y = XA + b

where dim X = k, A is a k×M matrix, and b is an M-dimensional constant (row) vector. Then Y is also a Gaussian random vector with

    m_Y = m_X A + b    and    Λ_Y = A^T Λ_X A

Note. A sum of jointly Gaussian random variables is Gaussian. The component Gaussian random variables {X_i} don't have to be independent for the sum to be Gaussian.

Homework 1. Weighted Sum of Gaussian Random Variables
Prove that a linear transformation of a Gaussian random vector is a Gaussian random vector.
Hint: Show Φ_Y(ω) = e^{-(1/2) ω (A^T Λ_X A) ω^T + jω(m_X A + b)^T} to prove that Y is Gaussian with m_Y = m_X A + b and Λ_Y = A^T Λ_X A.
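The transformation rule (not the homework proof, just the moment formulas) can be checked by sampling a Gaussian vector and applying Y = XA + b. The mean, covariance, A, and b below are illustrative choices:

```python
import numpy as np

# Monte-Carlo check of m_Y = m_X A + b and Lambda_Y = A^T Lambda_X A
# in the row-vector convention Y = X A + b.
rng = np.random.default_rng(6)
trials = 200_000

m_X = np.array([1.0, -1.0, 0.5])
Lambda_X = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(m_X, Lambda_X, size=trials)   # rows are samples

A = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])
b = np.array([0.3, -0.2])
Y = X @ A + b

m_Y_emp = Y.mean(axis=0)
Lambda_Y_emp = np.cov(Y, rowvar=False, bias=True)

print(m_Y_emp, m_X @ A + b)
print(Lambda_Y_emp)
print(A.T @ Lambda_X @ A)
```

Note that b shifts only the mean: the covariance formula Λ_Y = A^T Λ_X A does not involve b at all.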

11 Multi-dimensional Central Limit Thm: Example

    Z(t) = Σ_{i=1}^{N} X_i(t), where {X_i(t)} are independent random telegraph signals

As N → ∞, Z(t) becomes a Gaussian random process: Z(t_1), Z(t_2), Z(t_3) are jointly Gaussian for any sampling instants t_1, t_2, t_3.

12 Covariance Matrix of the Random Telegraph Signal Samples

X(t) is a random telegraph signal with transition rate α [transitions/second]. We have shown that X(t) is a WSS random process with

    mean m_X(t) = E[X(t)] = 0;
    variance σ_X^2(t) = E[X^2(t)] = 1;
    auto-correlation R_X(τ) = e^{-2α|τ|}

In this example, we will take k = 3 time samples:

    X = [X(t_1)  X(t_2)  X(t_3)]

Sampling time instants are (t_1, t_2, t_3) = (1, 2, 3) seconds. X = (X_1, X_2, X_3) is a 3-dimensional random vector. E[X_i] = 0 for i = 1, 2, 3, or in vector notation, the mean vector m_X = 0.

    λ_ij = cov(X_i, X_j). Since m_i = m_j = 0, λ_ij = E[X_i X_j] = R_X(t_i - t_j) = e^{-2α|t_i - t_j|}

Let Λ_X be the covariance matrix of the random vector X. Then

    Λ_X = [ 1        e^{-2α}  e^{-4α} ]
          [ e^{-2α}  1        e^{-2α} ]
          [ e^{-4α}  e^{-2α}  1       ]

Covariance Matrix of the Sum Vector. Define Z = Σ_{n=1}^{N} X^(n). Then Λ_Z = N Λ_X.
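The 3×3 matrix above is easy to build directly from R_X(τ) = e^{-2α|τ|} at t = (1, 2, 3) and to sanity-check: unit diagonal, symmetry, the Markov-like structure λ_13 = λ_12², and positive definiteness. The value α = 0.5 transitions/second is an illustrative assumption:

```python
import numpy as np

# Build Lambda_X from the telegraph-signal autocorrelation at t = (1, 2, 3) s.
alpha = 0.5
t = np.array([1.0, 2.0, 3.0])

# lambda_ij = exp(-2 * alpha * |t_i - t_j|)
Lambda_X = np.exp(-2.0 * alpha * np.abs(t[:, None] - t[None, :]))
print(Lambda_X)
```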

13 Joint Characteristic Function

X = (X_1, X_2, X_3) is a 3-dimensional random vector:

    Φ_X(ω_1, ω_2, ω_3) = ∫∫∫ e^{jω_1 x_1} e^{jω_2 x_2} e^{jω_3 x_3} f_X(x_1, x_2, x_3) dx_1 dx_2 dx_3

where f_X(x_1, x_2, x_3) is the joint pdf of X. Using expectation notation,

    Φ_X(ω_1, ω_2, ω_3) = E[e^{j(ω_1 X_1 + ω_2 X_2 + ω_3 X_3)}]    (e1)

Using matrix notation, let ω = (ω_1, ω_2, ω_3) and X = (X_1, X_2, X_3). Then ωX^T = ω_1 X_1 + ω_2 X_2 + ω_3 X_3, and eq. (e1) is written as

    Φ_X(ω) = E[e^{jωX^T}]

For the sum vector Z = (Z_1, Z_2, Z_3), a 3-dimensional random vector:

    Φ_Z(ω_1, ω_2, ω_3) = ∫∫∫ e^{jω_1 z_1} e^{jω_2 z_2} e^{jω_3 z_3} f_Z(z_1, z_2, z_3) dz_1 dz_2 dz_3

where f_Z(z_1, z_2, z_3) is the joint pdf of Z. Using matrix notation,

    Φ_Z(ω) = E[e^{jωZ^T}]

We do not know Φ_Z(ω) yet. However, as N → ∞, we can find Φ_Z(ω) without knowledge of Φ_X(ω).

14 Multi-dimensional Central Limit Theorem

As N → ∞,

    Φ_Z(ω) = exp(-(1/2) ω Λ_Z ω^T)    (e2)

where

    Λ_Z = N Λ_X = N [ 1        e^{-2α}  e^{-4α} ]
                    [ e^{-2α}  1        e^{-2α} ]
                    [ e^{-4α}  e^{-2α}  1       ]

Eq. (e2) is the joint characteristic function of a zero-mean Gaussian random vector.

Joint pdf of the Gaussian Random Vector. The joint pdf f_Z(z) can be found by the inverse Fourier transform of Φ_Z(ω):

    f_Z(z) = 1 / ((2π)^{3/2} |Λ|^{1/2}) · exp(-(1/2) z Λ^{-1} z^T), with Λ = N Λ_X.
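The whole example can be checked end to end by simulation: generate N independent random telegraph signals, sample each at t = (1, 2, 3) s, form the sum vector, and compare its sample covariance with N·Λ_X. A sign-flip construction of the telegraph samples (start at ±1, flip by the Poisson transition count in each interval) is used here; α, N, and the trial count are illustrative assumptions:

```python
import numpy as np

# Simulate telegraph-signal sample vectors: X(t_{i+1}) = X(t_i) * (-1)^{count
# of transitions in (t_i, t_{i+1}]}, with counts Poisson(alpha * interval).
rng = np.random.default_rng(7)
alpha, N, trials = 0.5, 10, 100_000
t = np.array([1.0, 2.0, 3.0])

s0 = rng.choice([-1.0, 1.0], size=(trials, N))       # signal value at t = 0
flips = rng.poisson(alpha, size=(trials, N, 3))      # counts in (0,1], (1,2], (2,3]
signs = np.cumprod((-1.0) ** flips, axis=2)          # accumulated sign changes
X = s0[:, :, None] * signs                           # samples at t = 1, 2, 3
Z = X.sum(axis=1)                                    # sum over the N signals

Lambda_X = np.exp(-2.0 * alpha * np.abs(t[:, None] - t[None, :]))
Lambda_Z_emp = np.cov(Z, rowvar=False, bias=True)

print(Lambda_Z_emp)
print(N * Lambda_X)
```

The sample covariance of Z reproduces N·Λ_X, tying the telegraph-signal model, the covariance of the sum vector, and the limiting Gaussian description together.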