
L30-1 EEL 5544 Noise in Linear Systems, Lecture 30

OTHER TRANSFORMS

For a continuous, nonnegative RV X, the Laplace transform of X is

    X^*(s) = E[e^{-sX}] = \int_0^\infty f_X(x) e^{-sx} \, dx.

For a nonnegative RV, the Laplace transform of X is the same as the moment generating function of X with s replaced by -s.

Moments can be found from the Laplace transform as

    E[X^n] = (-1)^n \frac{d^n}{ds^n} X^*(s) \Big|_{s=0}.

For a discrete, integer-valued RV X, the probability generating function of X is

    G_X(z) = E[z^X] = \sum_{k=-\infty}^{\infty} p_X(k) z^k.

This is the same as the z-transform of the PMF with z replaced by z^{-1}, and it can be found from the characteristic function as

    G_X(z) = \Phi_X(\omega) \big|_{e^{j\omega} = z}.

G_X(z) can be used to find factorial moments. The nth factorial moment of a real RV X is E[X(X-1) \cdots (X-n+1)], and

    E[X(X-1) \cdots (X-n+1)] = \frac{d^n}{dz^n} G_X(z) \Big|_{z=1}.
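
A minimal sketch, not from the notes, that checks the two moment formulas above symbolically with sympy. The Exponential(lam) Laplace transform X^*(s) = lam/(lam + s) and the Poisson(lam) PGF G(z) = exp(lam (z - 1)) are assumed closed forms used only for illustration.

```python
# Check E[X^n] = (-1)^n d^n/ds^n X*(s)|_{s=0} and the factorial-moment
# formula d^n/dz^n G(z)|_{z=1} on two assumed example distributions.
import sympy as sp

s, z, lam = sp.symbols('s z lam', positive=True)
n = 3

# Laplace transform of an Exponential(lam) density f(x) = lam*exp(-lam*x)
X_star = lam / (lam + s)
moment = (-1)**n * sp.diff(X_star, s, n).subs(s, 0)
print(sp.simplify(moment))            # 6/lam**3, i.e. n!/lam**n = E[X^3]

# Probability generating function of a Poisson(lam) PMF
G = sp.exp(lam * (z - 1))
factorial_moment = sp.diff(G, z, n).subs(z, 1)
print(sp.simplify(factorial_moment))  # lam**3 = E[X(X-1)(X-2)]
```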

L30-2

Example:

RANDOM VECTORS

The joint cumulative distribution function of X_1, X_2, \ldots, X_n is

    F_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n).

We let X = (X_1, X_2, \ldots, X_n)^T be a (column) vector of the X_i's, and define

    \{X \le x\} = \{X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n\}.

Then the joint cumulative distribution function of X is

    F_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = F_X(x) = P(X \le x).

The joint probability density function of X_1, X_2, \ldots, X_n is

    f_X(x) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F_X(x).
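
A minimal sketch, assuming nothing beyond the definition above: the joint CDF F_X(x) = P(X <= x) can be estimated from samples by counting how often every component is simultaneously at or below its threshold. The three independent N(0,1) components are an arbitrary assumed example, not data from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((100_000, 3))   # assumed 3-dimensional random vector X

def empirical_joint_cdf(samples, x):
    """Fraction of samples with X_1 <= x_1, ..., X_n <= x_n."""
    return np.mean(np.all(samples <= x, axis=1))

x = np.zeros(3)
print(empirical_joint_cdf(samples, x))        # approx (1/2)**3 = 0.125 for independent N(0,1)
```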

L30-3

The marginal pdf of X_i is obtained by integrating out the other variables x_1, x_2, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n.

The conditional pdf of X_i given all of the other RVs X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n is

    f_{X_i | X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n}(x_i | x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)
        = \frac{f_{X_1, \ldots, X_n}(x_1, \ldots, x_n)}{f_{X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)}.

One common use for these types of conditional pdfs is for a series of RVs. Let \mathbf{X}_n = [X_1, X_2, \ldots, X_n]^T. Then

    f_{X_n | X_1, X_2, \ldots, X_{n-1}}(x_n | x_1, x_2, \ldots, x_{n-1}) = \frac{f_{\mathbf{X}_n}(x_1, \ldots, x_n)}{f_{\mathbf{X}_{n-1}}(x_1, \ldots, x_{n-1})}

and

    f_{\mathbf{X}_n}(x_1, \ldots, x_n) = f_{X_n | \mathbf{X}_{n-1}}(x_n | x_1, \ldots, x_{n-1}) \, f_{X_{n-1} | \mathbf{X}_{n-2}}(x_{n-1} | x_1, \ldots, x_{n-2}) \cdots f_{X_2 | X_1}(x_2 | x_1) \, f_{X_1}(x_1).

INDEPENDENCE FOR MULTIPLE RVS

RVs X_1, X_2, \ldots, X_n are statistically independent if and only if

    F_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = F_{X_1}(x_1) F_{X_2}(x_2) \cdots F_{X_n}(x_n)

or, equivalently,

    f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1) f_{X_2}(x_2) \cdots f_{X_n}(x_n).

EXPECTATION VECTORS AND COVARIANCE MATRICES

Often the joint distribution and density functions for a sequence of random variables are unknown or difficult to work with. In those cases, we often work with the random vectors in terms of their moments. The most common moments used are the first moment (mean) and the second central moment (covariance).
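
A minimal sketch of the chain-rule factorization above, under an assumed Gauss-Markov model that is not from the notes: X_1 ~ N(0, 1) and X_k given the past is N(a x_{k-1}, 1), so each conditional factor depends only on the previous value and the joint density can be sampled one factor at a time.

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 0.9, 5          # assumed parameters for the illustration

def sample_chain():
    x = np.empty(n)
    x[0] = rng.standard_normal()                     # draw from f_{X_1}
    for k in range(1, n):
        x[k] = a * x[k - 1] + rng.standard_normal()  # draw from f_{X_k | X_1,...,X_{k-1}}
    return x

print(sample_chain())
```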

L30-4

A special case of random vectors that we often use is Gaussian random vectors. For Gaussian random vectors, the mean and covariances completely specify the distribution.

The mean vector of a random vector X is a column vector whose elements \mu_1, \mu_2, \ldots, \mu_n are given by

    \mu_i = \int \cdots \int x_i f_X(x_1, x_2, \ldots, x_n) \, dx_1 \cdots dx_n.

Note that \mu_i can be found directly from the marginal density for X_i as

    \mu_i = \int x_i f_{X_i}(x_i) \, dx_i.

The covariance matrix associated with a complex random vector X is

    K = E[(X - \mu)(X - \mu)^H].

For complex random vectors,

    K_{ij} = E[(X_i - \mu_i)(X_j - \mu_j)^*] = E[(X_j - \mu_j)^*(X_i - \mu_i)] = K_{ji}^*.

A matrix M is a Hermitian matrix if M_{ij} = M_{ji}^*. Thus, for complex random vectors, the covariance matrix is a Hermitian matrix.

Note that the ith diagonal element is K_{ii} = E[(X_i - \mu_i)(X_i - \mu_i)^*] (the variance of the random variable X_i).

For real random vectors X,

    K_{ij} = E[(X_i - \mu_i)(X_j - \mu_j)] = E[(X_j - \mu_j)(X_i - \mu_i)] = K_{ji}.
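
A minimal numerical sketch with assumed complex data, not from the notes: estimate the mean vector and covariance matrix K = E[(X - \mu)(X - \mu)^H] from samples, then confirm that the estimate is Hermitian and that its diagonal holds the per-component variances.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 50_000, 3
# assumed complex random vector with independent components
X = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))

mu = X.mean(axis=0)                          # mean vector estimate
Xc = X - mu
K = (Xc.T @ Xc.conj()) / N                   # K_ij = mean of (X_i - mu_i)(X_j - mu_j)^*

print(np.allclose(K, K.conj().T))                    # True: K is Hermitian
print(np.allclose(np.diag(K).real, Xc.var(axis=0)))  # True: diagonal = variances
```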

L30-5

A matrix M is symmetric if M_{ij} = M_{ji}. So, for real random vectors, the covariance matrix is symmetric.

The correlation matrix for a vector random variable X is

    R = E[X X^H].

Note that K = R - \mu \mu^H.

The correlation matrix is also a Hermitian matrix (or, for a real vector random variable, a symmetric matrix).

CLASSIFICATION OF RANDOM VECTORS

Two random vectors X and Y are orthogonal if and only if E[X Y^H] = 0.

Two random vectors X and Y are uncorrelated if and only if E[X Y^H] = E[X] E[Y]^H (i.e., the cross-covariance matrix is 0).

Two random vectors X and Y are independent if and only if f_{X,Y}(x, y) = f_X(x) f_Y(y).
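
A minimal sketch with assumed real data, not from the notes, that checks the identity K = R - \mu \mu^H for sample moments, where R = E[X X^H] is the correlation matrix; for real vectors the Hermitian transpose reduces to the ordinary transpose.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 50_000, 3
# assumed real random vector with a nonzero mean
X = rng.standard_normal((N, n)) + np.array([1.0, -2.0, 0.5])

mu = X.mean(axis=0)
R = (X.T @ X) / N                              # correlation matrix estimate E[X X^T]
K = ((X - mu).T @ (X - mu)) / N                # covariance matrix estimate

print(np.allclose(K, R - np.outer(mu, mu)))    # True: K = R - mu mu^T
print(np.allclose(K, K.T))                     # True: symmetric (real case)
```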

L30-6

THE MULTIDIMENSIONAL GAUSSIAN LAW

If X = (X_1, X_2, \ldots, X_n)^T is a Gaussian random vector with mean \mu and covariance matrix K, then the density function for X can be written as

    f_X(x) = \frac{1}{(2\pi)^{n/2} [\det K]^{1/2}} \exp\left[ -\frac{1}{2} (x - \mu)^T K^{-1} (x - \mu) \right].

SPECIAL CASE: BIVARIATE GAUSSIAN DISTRIBUTION

X, Y are jointly Gaussian if and only if the joint density of X and Y can be written as

    f_{X,Y}(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho_{X,Y}^2}} \exp\left\{ -\frac{1}{2(1 - \rho_{X,Y}^2)} \left[ \left( \frac{x - \mu_X}{\sigma_X} \right)^2 - 2\rho_{X,Y} \left( \frac{x - \mu_X}{\sigma_X} \right) \left( \frac{y - \mu_Y}{\sigma_Y} \right) + \left( \frac{y - \mu_Y}{\sigma_Y} \right)^2 \right] \right\}.

An equivalent condition that may be easier to work with is: X and Y are jointly Gaussian if and only if aX + bY is a Gaussian random variable for any real a and b.

The pdf is bell-shaped and centered at (\mu_X, \mu_Y). Additional insight can be gained from considering contours of equal probability density. For equal probability density:

    \left( \frac{x - \mu_X}{\sigma_X} \right)^2 - 2\rho_{X,Y} \left( \frac{x - \mu_X}{\sigma_X} \right) \left( \frac{y - \mu_Y}{\sigma_Y} \right) + \left( \frac{y - \mu_Y}{\sigma_Y} \right)^2 = \text{const.}    (1)
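
A minimal sketch with assumed parameters, not from the notes: evaluate the multidimensional Gaussian density above directly and compare it with scipy.stats.multivariate_normal at one test point.

```python
import numpy as np
from scipy.stats import multivariate_normal

# assumed mean, covariance, and test point for illustration
mu = np.array([1.0, -1.0])
K = np.array([[2.0, 0.6],
              [0.6, 1.0]])
x = np.array([0.5, 0.0])

n = len(mu)
d = x - mu
quad = d @ np.linalg.solve(K, d)      # (x - mu)^T K^{-1} (x - mu)
pdf_formula = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(K)))

print(pdf_formula)
print(multivariate_normal(mean=mu, cov=K).pdf(x))   # same value
```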

L30-7

Equation (1) is the equation for an ellipse.

[Figure: equal-probability contour ellipse. From Komo, Random Signal Analysis.]

- When \rho_{X,Y} = 0, X and Y are statistically independent, and the equal-probability contour ellipse is aligned with the x- and y-axes:

[Figure: (a) \sigma_X = \sigma_Y, \rho_{X,Y} = 0; (b) \sigma_X > \sigma_Y, \rho_{X,Y} = 0; (c) \sigma_X < \sigma_Y, \rho_{X,Y} = 0. From Stark and Woods, Probability and Random Processes.]

- When \rho_{X,Y} \ne 0, the major axis is at an angle given by

    \theta = \frac{1}{2} \arctan\left( \frac{2 \rho_{X,Y} \sigma_X \sigma_Y}{\sigma_X^2 - \sigma_Y^2} \right).

Note that \sigma_X = \sigma_Y implies \theta = 45 degrees.
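
A minimal sketch with assumed parameters, not from the notes: the orientation formula above should match the direction of the principal eigenvector of the covariance matrix; with \sigma_X > \sigma_Y the arctan branch picks out the major axis.

```python
import numpy as np

sx, sy, rho = 2.0, 1.0, 0.6            # assumed values with sx > sy
K = np.array([[sx**2, rho * sx * sy],
              [rho * sx * sy, sy**2]])

theta_formula = 0.5 * np.arctan(2 * rho * sx * sy / (sx**2 - sy**2))

vals, vecs = np.linalg.eigh(K)
major = vecs[:, np.argmax(vals)]        # eigenvector of the largest eigenvalue
theta_eig = np.arctan(major[1] / major[0])   # sign-insensitive axis angle

print(np.degrees(theta_formula), np.degrees(theta_eig))   # both approx 19.33 degrees
```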

L30-8

[Figure: panels (d)-(f) equal-probability contours; (g) pdf; (h) equiprobability contours. Jointly Gaussian RVs, \mu_X = \mu_Y = 0, \sigma_X = \sigma_Y = 2, \rho_{X,Y} = 0.9. From Stark and Woods, Probability and Random Processes.]

SPECIAL CASE: JOINTLY GAUSSIAN RANDOM VARIABLES WITH ZERO MEAN AND UNIT VARIANCE

Two Gaussian random variables X and Y that each have mean 0 and variance 1 are said to be jointly Gaussian if their joint density function can be written as

    f_{XY}(x, y) = \frac{1}{2\pi \sqrt{1 - \rho^2}} \exp\left\{ -\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)} \right\}, \quad -\infty < x < \infty, \; -\infty < y < \infty.
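
A minimal sketch with an assumed \rho and test point, not from the notes: the zero-mean, unit-variance density above is the general bivariate Gaussian with \mu_X = \mu_Y = 0 and \sigma_X = \sigma_Y = 1, which can be confirmed against scipy.stats.multivariate_normal.

```python
import numpy as np
from scipy.stats import multivariate_normal

rho = 0.9
x, y = 0.3, -1.2                      # assumed test point
pdf_formula = np.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2))) \
              / (2 * np.pi * np.sqrt(1 - rho**2))
pdf_scipy = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).pdf([x, y])

print(pdf_formula, pdf_scipy)         # same value
```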

L30-9

EX: Find the marginal densities.

EX: Under what conditions are X and Y independent?

Note that X and Y can each be Gaussian without being jointly Gaussian. For example, if the joint density of X and Y is given by

    f_{XY}(x, y) = \frac{1}{2\pi} \exp\left\{ -\frac{x^2 + y^2}{2} \right\} \left( 1 + x y \exp\left\{ -\frac{x^2 + y^2 - 2}{2} \right\} \right),

then X and Y are each Gaussian but clearly not jointly Gaussian.
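
A minimal sketch (the test point is an assumption, not from the notes): numerically integrating out y from the counterexample density above recovers the standard normal marginal in x, even though the joint density is not bivariate Gaussian.

```python
import numpy as np
from scipy.integrate import quad

def f_xy(x, y):
    g = np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)
    return g * (1 + x * y * np.exp(-(x**2 + y**2 - 2) / 2))

x0 = 0.7                                              # assumed test point
marginal, _ = quad(lambda y: f_xy(x0, y), -np.inf, np.inf)

print(marginal)
print(np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi))        # same: N(0,1) density at x0
```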