Chapter 6: Random Processes

Yunghsiang S. Han, Graduate Institute of Communication Engineering, National Taipei University, Taiwan. E-mail: yshan@mail.ntpu.edu.tw. (Modified from the lecture notes by Prof. Mao-Ching Chiu.)

Definition of a Random Process

Consider a random experiment with sample space S. To every outcome ζ ∈ S, we assign a function of time according to some rule:

X(t, ζ), t ∈ I.

For fixed ζ, the graph of X(t, ζ) versus t is a sample function of the random process. For each fixed t_k from the index set I, X(t_k, ζ) is a random variable. The indexed family of random variables {X(t, ζ), t ∈ I} is called a random process or stochastic process.

A stochastic process is said to be discrete-time if the index set I is a countable set. A continuous-time stochastic process is one in which I is continuous.

Example: Let ζ be a number selected at random from the interval S = [0, 1], and let b_1 b_2 ... be the binary expansion of ζ:

ζ = Σ_{i=1}^∞ b_i 2^{-i},   b_i ∈ {0, 1}.

Define the discrete-time random process X(n, ζ) by X(n, ζ) = b_n, n = 1, 2, .... A sequence of binary numbers is obtained.
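As a quick illustration (not part of the original notes), the following Python sketch generates one sample function of this binary-expansion process; the function name and the choice of 16 bits are my own.

```python
import numpy as np

def binary_expansion_process(zeta, n_bits=16):
    """Return X(n, zeta) = b_n, the first n_bits binary digits of zeta in [0, 1)."""
    bits = []
    for _ in range(n_bits):
        zeta *= 2
        b = int(zeta)        # next binary digit b_n
        bits.append(b)
        zeta -= b
    return np.array(bits)

rng = np.random.default_rng(0)
zeta = rng.uniform(0.0, 1.0)   # outcome of the random experiment
print(zeta, binary_expansion_process(zeta))
```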

Example:

1. Let ζ ∈ S = [-1, +1] be selected at random. Define the continuous-time random process X(t, ζ) by X(t, ζ) = ζ cos(2πt), -∞ < t < ∞.

2. Let ζ ∈ S = (-π, π) be selected at random, and let Y(t, ζ) = cos(2πt + ζ).
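A short simulation sketch of these two processes (again an illustrative addition, with my own grid and sample counts) draws a few sample functions of each; every realization of ζ gives one deterministic function of t, and the randomness lies entirely in which function is drawn.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 301)

# Random-amplitude sinusoid: X(t, zeta) = zeta * cos(2*pi*t), zeta ~ Uniform[-1, 1]
X_samples = [zeta * np.cos(2 * np.pi * t) for zeta in rng.uniform(-1, 1, size=3)]

# Random-phase sinusoid: Y(t, zeta) = cos(2*pi*t + zeta), zeta ~ Uniform(-pi, pi)
Y_samples = [np.cos(2 * np.pi * t + zeta) for zeta in rng.uniform(-np.pi, np.pi, size=3)]

print(X_samples[0][:5])
print(Y_samples[0][:5])
```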

6.2 Specifying a Random Process

Joint Distributions of Time Samples

Let X_1, X_2, ..., X_k be the k random variables obtained by sampling the random process X(t, ζ) at the times t_1, t_2, ..., t_k:

X_1 = X(t_1, ζ), X_2 = X(t_2, ζ), ..., X_k = X(t_k, ζ).

The joint behavior of the random process at these k time instants is specified by the joint cdf of (X_1, X_2, ..., X_k).

A stochastic process is specified by the collection of kth-order joint cumulative distribution functions:

F_{X_1,...,X_k}(x_1, ..., x_k) = P[X_1 ≤ x_1, ..., X_k ≤ x_k]

for any k and any choice of sampling instants t_1, ..., t_k.

If the stochastic process is discrete-valued, then a collection of probability mass functions can be used to specify the stochastic process:

p_{X_1,...,X_k}(x_1, ..., x_k) = P[X_1 = x_1, ..., X_k = x_k].

If the stochastic process is continuous-valued, then a collection of probability density functions can be used instead: f_{X_1,...,X_k}(x_1, ..., x_k).

Example: Let X_n be a sequence of independent, identically distributed Bernoulli random variables with p = 1/2. The joint pmf for any k time samples is then

P[X_1 = x_1, X_2 = x_2, ..., X_k = x_k] = 2^{-k},   x_i ∈ {0, 1} for all i.

A random process X(t) is said to have independent increments if for any k and any choice of sampling instants t_1 < t_2 < ... < t_k, the random variables

X(t_2) - X(t_1), X(t_3) - X(t_2), ..., X(t_k) - X(t_{k-1})

are independent random variables.

A random process X(t) is said to be Markov if the future of the process given the present is independent of the past; that is, for any k, any choice of sampling instants t_1 < t_2 < ... < t_k, and any x_1, x_2, ..., x_k,

f_{X(t_k)}(x_k | X(t_{k-1}) = x_{k-1}, ..., X(t_1) = x_1) = f_{X(t_k)}(x_k | X(t_{k-1}) = x_{k-1})

if X(t) is continuous-valued, and

P[X(t_k) = x_k | X(t_{k-1}) = x_{k-1}, ..., X(t_1) = x_1] = P[X(t_k) = x_k | X(t_{k-1}) = x_{k-1}]

if X(t) is discrete-valued.

Independent increments imply the Markov property; the Markov property does NOT imply independent increments.

The Mean, Autocorrelation, and Autocovariance Functions

The mean m_X(t) of a random process X(t) is defined by

m_X(t) = E[X(t)] = ∫_{-∞}^{+∞} x f_{X(t)}(x) dx,

where f_{X(t)}(x) is the pdf of X(t). In general, m_X(t) is a function of time.

The autocorrelation R_X(t_1, t_2) of a random process X(t) is defined as

R_X(t_1, t_2) = E[X(t_1)X(t_2)] = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} x y f_{X(t_1),X(t_2)}(x, y) dx dy.

In general, the autocorrelation is a function of t_1 and t_2.

The autocovariance C_X(t_1, t_2) of a random process X(t) is defined as the covariance of X(t_1) and X(t_2):

C_X(t_1, t_2) = E[{X(t_1) - m_X(t_1)}{X(t_2) - m_X(t_2)}] = R_X(t_1, t_2) - m_X(t_1) m_X(t_2).

The variance of X(t) can be obtained from C_X(t_1, t_2):

VAR[X(t)] = E[(X(t) - m_X(t))^2] = C_X(t, t).

The correlation coefficient of X(t) is given by

ρ_X(t_1, t_2) = C_X(t_1, t_2) / sqrt(C_X(t_1, t_1) C_X(t_2, t_2)),   with |ρ_X(t_1, t_2)| ≤ 1.

Example: Let X(t) = A cos 2πt, where A is some random variable. The mean of X(t) is given by

m_X(t) = E[A cos 2πt] = E[A] cos 2πt.

The autocorrelation is

R_X(t_1, t_2) = E[A cos(2πt_1) A cos(2πt_2)] = E[A^2] cos(2πt_1) cos(2πt_2),

and the autocovariance is

C_X(t_1, t_2) = R_X(t_1, t_2) - m_X(t_1) m_X(t_2) = {E[A^2] - E[A]^2} cos(2πt_1) cos(2πt_2) = VAR[A] cos(2πt_1) cos(2πt_2).

Example: Let X(t) = cos(ωt + Θ), where Θ is uniformly distributed in the interval (-π, π). The mean of X(t) is given by

m_X(t) = E[cos(ωt + Θ)] = (1/2π) ∫_{-π}^{π} cos(ωt + θ) dθ = 0.

The autocorrelation and autocovariance are then

C_X(t_1, t_2) = R_X(t_1, t_2) = E[cos(ωt_1 + Θ) cos(ωt_2 + Θ)]
             = (1/2π) ∫_{-π}^{π} (1/2){cos(ω(t_1 - t_2)) + cos(ω(t_1 + t_2) + 2θ)} dθ
             = (1/2) cos(ω(t_1 - t_2)).
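A Monte Carlo check of this result (an illustrative addition; ω = 2π, the two time instants, and the sample size are my own choices) estimates the mean and autocorrelation of the random-phase sinusoid:

```python
import numpy as np

rng = np.random.default_rng(2)
omega, t1, t2 = 2 * np.pi, 0.3, 0.8
theta = rng.uniform(-np.pi, np.pi, size=200_000)   # Theta ~ Uniform(-pi, pi)

x1 = np.cos(omega * t1 + theta)
x2 = np.cos(omega * t2 + theta)

print("mean      :", x1.mean(), "(theory 0)")
print("R_X(t1,t2):", np.mean(x1 * x2), "(theory", 0.5 * np.cos(omega * (t1 - t2)), ")")
```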

Gaussian Random Processes

A random process X(t) is a Gaussian random process if the samples X_1 = X(t_1), X_2 = X(t_2), ..., X_k = X(t_k) are jointly Gaussian random variables for all k and all choices of t_1, ..., t_k:

f_{X_1,X_2,...,X_k}(x_1, ..., x_k) = exp{-(1/2)(x - m)^T K^{-1} (x - m)} / ((2π)^{k/2} |K|^{1/2}),

where the mean vector and covariance matrix are

m = [m_X(t_1), ..., m_X(t_k)]^T,
K = [C_X(t_i, t_j)],   i, j = 1, ..., k,

that is, the k x k matrix whose (i, j) entry is C_X(t_i, t_j). The joint pdf's of a Gaussian random process are completely specified by the mean function and the covariance function.

A linear operation on a Gaussian random process results in another Gaussian random process.

Example: Let the discrete-time random process X_n be a sequence of independent Gaussian random variables with mean m and variance σ^2. The covariance matrix for the times t_1, ..., t_k is

{C_X(t_i, t_j)} = {σ^2 δ_{ij}} = σ^2 I,

where δ_{ij} = 1 when i = j and 0 otherwise, and I is the identity matrix. The corresponding joint pdf is

f_{X_1,...,X_k}(x_1, ..., x_k) = (2πσ^2)^{-k/2} exp{ - Σ_{i=1}^k (x_i - m)^2 / (2σ^2) } = f_X(x_1) f_X(x_2) ... f_X(x_k).
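The following short sketch (an added illustration; m, σ, k, and the number of realizations are arbitrary choices of mine) checks empirically that the sample covariance matrix of such an iid Gaussian sequence is close to σ²I:

```python
import numpy as np

rng = np.random.default_rng(3)
m, sigma, k, n_realizations = 1.0, 2.0, 4, 100_000

# Each row is one realization (X_1, ..., X_k) of the iid Gaussian process.
X = rng.normal(m, sigma, size=(n_realizations, k))

K_hat = np.cov(X, rowvar=False)   # empirical covariance matrix
print(np.round(K_hat, 2))         # approximately sigma^2 * I = 4 * I
```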

Multiple Random Processes

The joint behavior of X(t) and Y(t) is specified by the collection of all joint distribution functions of X(t_1), ..., X(t_k) and Y(t'_1), ..., Y(t'_j) for all k, j and all choices of t_1, ..., t_k and t'_1, ..., t'_j.

X(t) and Y(t) are said to be independent if the vector random variables (X(t_1), ..., X(t_k)) and (Y(t'_1), ..., Y(t'_j)) are independent for all k, j and all choices of t_1, ..., t_k and t'_1, ..., t'_j.

The cross-correlation R_{X,Y}(t_1, t_2) of X(t) and Y(t) is defined by

R_{X,Y}(t_1, t_2) = E[X(t_1)Y(t_2)].

The processes X(t) and Y(t) are said to be orthogonal if R_{X,Y}(t_1, t_2) = 0 for all t_1 and t_2.

The cross-covariance C_{X,Y}(t_1, t_2) of X(t) and Y(t) is defined by

C_{X,Y}(t_1, t_2) = E[{X(t_1) - m_X(t_1)}{Y(t_2) - m_Y(t_2)}] = R_{X,Y}(t_1, t_2) - m_X(t_1) m_Y(t_2).

The processes X(t) and Y(t) are said to be uncorrelated if C_{X,Y}(t_1, t_2) = 0 for all t_1 and t_2. Note that C_{X,Y}(t_1, t_2) = 0 implies

R_{X,Y}(t_1, t_2) = E[X(t_1)Y(t_2)] = m_X(t_1) m_Y(t_2) = E[X(t_1)]E[Y(t_2)].

Example: Let X(t) = cos(ωt + Θ) and Y(t) = sin(ωt + Θ), where Θ is a random variable uniformly distributed in [-π, π]. Find the cross-covariance of X(t) and Y(t).

Solution: Since X(t) and Y(t) are zero-mean, the cross-covariance is equal to the cross-correlation:

R_{X,Y}(t_1, t_2) = E[cos(ωt_1 + Θ) sin(ωt_2 + Θ)]
                  = E[-(1/2) sin(ω(t_1 - t_2)) + (1/2) sin(ω(t_1 + t_2) + 2Θ)]
                  = -(1/2) sin(ω(t_1 - t_2)),

since E[sin(ω(t_1 + t_2) + 2Θ)] = 0.

Example: Suppose we observe a process Y(t) = X(t) + N(t), where X(t) is the desired signal and N(t) is noise. Find the cross-correlation between the observed signal and the desired signal, assuming that X(t) and N(t) are independent random processes.

R_{X,Y}(t_1, t_2) = E[X(t_1)Y(t_2)] = E[X(t_1){X(t_2) + N(t_2)}]
                  = E[X(t_1)X(t_2)] + E[X(t_1)N(t_2)]
                  = R_X(t_1, t_2) + E[X(t_1)]E[N(t_2)]
                  = R_X(t_1, t_2) + m_X(t_1) m_N(t_2).

6.3 Examples of Discrete-Time Random Processes

iid Random Processes

The sequence X_n is called an independent, identically distributed (iid) random process if the joint cdf for any time instants n_1, ..., n_k can be expressed as

F_{X_{n_1},...,X_{n_k}}(x_{n_1}, ..., x_{n_k}) = P[X_{n_1} ≤ x_{n_1}, ..., X_{n_k} ≤ x_{n_k}] = F_{X_{n_1}}(x_{n_1}) ... F_{X_{n_k}}(x_{n_k}).

The mean of an iid process is

m_X(n) = E[X_n] = m   for all n.

Autocovariance function: If n_1 ≠ n_2,

C_X(n_1, n_2) = E[(X_{n_1} - m)(X_{n_2} - m)] = E[X_{n_1} - m]E[X_{n_2} - m] = 0,

since X_{n_1} and X_{n_2} are independent. If n_1 = n_2 = n,

C_X(n_1, n_2) = E[(X_n - m)^2] = σ^2.

Therefore

C_X(n_1, n_2) = σ^2 δ_{n_1, n_2}.

The autocorrelation function of an iid process is

R_X(n_1, n_2) = C_X(n_1, n_2) + m^2.

Sum Processes: The Binomial Counting and Random Walk Processes

Consider a random process S_n that is the sum of a sequence of iid random variables X_1, X_2, ...:

S_n = X_1 + X_2 + ... + X_n = S_{n-1} + X_n,   n = 1, 2, ...,

where S_0 = 0. We call S_n the sum process. S_n is independent of the past when S_{n-1} is known.

Example: Let I_i be a sequence of independent Bernoulli random variables, and let S_n be the corresponding sum process. S_n is a binomial random variable with parameters n and p = P[I = 1]:

P[S_n = j] = C(n, j) p^j (1 - p)^{n-j}   for 0 ≤ j ≤ n,

and zero otherwise. Thus S_n has mean np and variance np(1 - p).
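A quick simulation sketch (added for illustration; n, p, and the number of realizations are arbitrary choices of mine) confirms the binomial mean and variance of the counting process:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, n_realizations = 50, 0.3, 100_000

# Each row is one realization of (I_1, ..., I_n); S_n is the row sum.
I = rng.random((n_realizations, n)) < p
S_n = I.sum(axis=1)

print("mean    :", S_n.mean(), "(theory", n * p, ")")
print("variance:", S_n.var(), "(theory", n * p * (1 - p), ")")
```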

The sum process S_n has independent increments in nonoverlapping time intervals. For example, take n_0 < n ≤ n_1 and n_2 < n ≤ n_3, where n_1 ≤ n_2. We have

S_{n_1} - S_{n_0} = X_{n_0 + 1} + ... + X_{n_1},
S_{n_3} - S_{n_2} = X_{n_2 + 1} + ... + X_{n_3}.

The independence of the X_n's implies that (S_{n_1} - S_{n_0}) and (S_{n_3} - S_{n_2}) are independent random variables.

For n > n', (S_n - S_{n'}) is the sum of n - n' iid random variables, so it has the same distribution as S_{n - n'}:

P[S_n - S_{n'} = y] = P[S_{n - n'} = y].

Thus, increments in intervals of the same length have the same distribution regardless of when the interval begins. We say that S_n has stationary increments.

Compute the joint pmf of S_n at times n_1, n_2, and n_3:

P[S_{n_1} = y_1, S_{n_2} = y_2, S_{n_3} = y_3]
  = P[S_{n_1} = y_1, S_{n_2} - S_{n_1} = y_2 - y_1, S_{n_3} - S_{n_2} = y_3 - y_2]
  = P[S_{n_1} = y_1] P[S_{n_2} - S_{n_1} = y_2 - y_1] P[S_{n_3} - S_{n_2} = y_3 - y_2].

The stationary increments property implies that

P[S_{n_1} = y_1, S_{n_2} = y_2, S_{n_3} = y_3] = P[S_{n_1} = y_1] P[S_{n_2 - n_1} = y_2 - y_1] P[S_{n_3 - n_2} = y_3 - y_2].

In general, we have

P[S_{n_1} = y_1, S_{n_2} = y_2, ..., S_{n_k} = y_k] = P[S_{n_1} = y_1] P[S_{n_2 - n_1} = y_2 - y_1] ... P[S_{n_k - n_{k-1}} = y_k - y_{k-1}].

If the X_n are continuous-valued random variables, then

f_{S_{n_1},...,S_{n_k}}(y_1, ..., y_k) = f_{S_{n_1}}(y_1) f_{S_{n_2 - n_1}}(y_2 - y_1) ... f_{S_{n_k - n_{k-1}}}(y_k - y_{k-1}).

Example: Find the joint pmf for the binomial counting process at times n_1 and n_2.

P[S_{n_1} = y_1, S_{n_2} = y_2] = P[S_{n_1} = y_1] P[S_{n_2 - n_1} = y_2 - y_1]
  = C(n_2 - n_1, y_2 - y_1) p^{y_2 - y_1} (1 - p)^{n_2 - n_1 - y_2 + y_1} C(n_1, y_1) p^{y_1} (1 - p)^{n_1 - y_1}
  = C(n_1, y_1) C(n_2 - n_1, y_2 - y_1) p^{y_2} (1 - p)^{n_2 - y_2}.

The mean and variance of a sum process are

m_S(n) = E[S_n] = nE[X] = nm,
VAR[S_n] = nVAR[X] = nσ^2.

The autocovariance of S_n is

C_S(n, k) = E[(S_n - E[S_n])(S_k - E[S_k])] = E[(S_n - nm)(S_k - km)]
          = E[ {Σ_{i=1}^n (X_i - m)} {Σ_{j=1}^k (X_j - m)} ]
          = Σ_{i=1}^n Σ_{j=1}^k E[(X_i - m)(X_j - m)]
          = Σ_{i=1}^n Σ_{j=1}^k C_X(i, j)
          = Σ_{i=1}^n Σ_{j=1}^k σ^2 δ_{i,j}
          = Σ_{i=1}^{min(n,k)} C_X(i, i) = min(n, k) σ^2.

The property of independent increments allows us to compute the autocovariance in another way. Suppose n ≤ k, so n = min(n, k):

C_S(n, k) = E[(S_n - nm)(S_k - km)]
          = E[(S_n - nm){(S_n - nm) + (S_k - km) - (S_n - nm)}]
          = E[(S_n - nm)^2] + E[(S_n - nm)(S_k - S_n - (k - n)m)]
          = E[(S_n - nm)^2] + E[S_n - nm] E[S_k - S_n - (k - n)m]
          = E[(S_n - nm)^2] = VAR[S_n] = nσ^2.
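As an added sanity check (not in the original notes; the step count, σ, and the pair (n, k) are my own choices), the following sketch estimates C_S(n, k) for a zero-mean random walk and compares it with min(n, k)σ²:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, n_steps, n_realizations = 1.5, 40, 200_000
n, k = 10, 25

# Zero-mean iid steps, so m = 0 and C_S(n, k) should equal min(n, k) * sigma^2.
X = rng.normal(0.0, sigma, size=(n_realizations, n_steps))
S = X.cumsum(axis=1)                          # S[:, j] = S_{j+1}

C_hat = np.mean(S[:, n - 1] * S[:, k - 1])    # E[S_n S_k]; the means are zero
print("C_S(n,k):", C_hat, "(theory", min(n, k) * sigma**2, ")")
```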

First-Order Autoregressive Random Process

Moving Average Process

6.4 Examples of Continuous-Time Random Processes

Poisson Process

Events occur at random instants of time at an average rate of λ events per second. Let N(t) be the number of event occurrences in the time interval [0, t].

Divide [0, t] into n subintervals of duration δ = t/n. Assume that the following two conditions hold:

1. The probability of more than one event occurrence in a subinterval is negligible compared to the probability of observing one or zero events (each subinterval is a Bernoulli trial).

2. Whether or not an event occurs in a subinterval is independent of the outcomes in the other subintervals (the Bernoulli trials are independent).

Then N(t) can be approximated by the binomial counting process. Let p be the probability of an event occurrence in each subinterval. The expected number of event occurrences in the interval [0, t] is then np.

The average number of events in the interval [0, t] is also λt. Thus λt = np. Let n → ∞; then p → 0 while np = λt remains fixed, and the binomial distribution approaches the Poisson distribution with parameter λt:

P[N(t) = k] = (λt)^k e^{-λt} / k!   for k = 0, 1, ....

N(t) is called the Poisson process.

We now show that if n is large and p is small, then for α = np,

p_k = C(n, k) p^k (1 - p)^{n-k} ≈ (α^k / k!) e^{-α},   k = 0, 1, ....

Consider the probability that no events occur in n trials:

p_0 = (1 - p)^n = (1 - α/n)^n → e^{-α}   as n → ∞.

Let q = 1 - p. Noting that

p_{k+1} / p_k = [C(n, k+1) p^{k+1} q^{n-k-1}] / [C(n, k) p^k q^{n-k}]
             = (n - k)p / ((k + 1)q)
             = (1 - k/n)α / ((k + 1)(1 - α/n))
             → α / (k + 1)   as n → ∞,

we obtain

p_{k+1} = (α / (k + 1)) p_k   for k = 0, 1, 2, ...,   with p_0 = e^{-α}.

A simple induction argument then shows that

p_k = (α^k / k!) e^{-α}   for k = 0, 1, 2, ....

Both the mean and the variance of N(t) are λt.
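To illustrate this limit numerically (an added sketch; the values of n, p, and the range of k are arbitrary), compare the binomial and Poisson pmf's for large n and small p:

```python
from math import comb, exp, factorial

n, p = 1000, 0.005          # large n, small p
alpha = n * p               # alpha = np = 5

for k in range(8):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = alpha**k / factorial(k) * exp(-alpha)
    print(f"k={k}: binomial={binom:.5f}  Poisson={poisson:.5f}")
```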

The Poisson process has independent and stationary increments:

P[N(t_1) = i, N(t_2) = j] = P[N(t_1) = i] P[N(t_2) - N(t_1) = j - i]
                          = P[N(t_1) = i] P[N(t_2 - t_1) = j - i]
                          = [(λt_1)^i e^{-λt_1} / i!] [(λ(t_2 - t_1))^{j-i} e^{-λ(t_2 - t_1)} / (j - i)!].

Covariance of N(t): suppose t_1 ≤ t_2; then

C_N(t_1, t_2) = E[(N(t_1) - λt_1)(N(t_2) - λt_2)]
              = E[(N(t_1) - λt_1){N(t_2) - N(t_1) - λt_2 + λt_1 + (N(t_1) - λt_1)}]
              = E[N(t_1) - λt_1] E[N(t_2) - N(t_1) - λ(t_2 - t_1)] + VAR[N(t_1)]
              = VAR[N(t_1)] = λt_1 = λ min(t_1, t_2).

Interevent Times

Consider the time T between event occurrences in a Poisson process. Divide [0, t] into n subintervals of length δ = t/n. The probability that T > t is

P[T > t] = P[no events in t seconds] = (1 - p)^n = (1 - λt/n)^n → e^{-λt}   as n → ∞.

The cdf of T is then 1 - e^{-λt}, so T is an exponential random variable with parameter λ.

Since the times between event occurrences in the underlying binomial process are independent geometric random variables, it follows that the interevent times in a Poisson process form an iid sequence of exponential random variables with mean 1/λ.
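A simulation sketch (added illustration; λ, the horizon t, and the number of realizations are my own choices) builds a Poisson process from iid exponential interevent times and checks the mean and variance of the count N(t):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, n_realizations = 2.0, 10.0, 20_000

counts = []
for _ in range(n_realizations):
    n_events, s = 0, 0.0
    while True:
        s += rng.exponential(1.0 / lam)   # iid Exp(lambda) interevent times
        if s > t:
            break
        n_events += 1
    counts.append(n_events)

counts = np.array(counts)
print("E[N(t)]  :", counts.mean(), "(theory", lam * t, ")")
print("VAR[N(t)]:", counts.var(), "(theory", lam * t, ")")
```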

Individual Arrival Times

In applications where the Poisson process models customer interarrival times, it is customary to say that arrivals occur "at random." Suppose that we are given that only one arrival occurred in an interval [0, t], and let X be the arrival time of the single customer. For 0 < x < t, let N(x) be the number of events up to time x, and let N(t) - N(x) be the increment in the interval (x, t]. Then

P[X ≤ x] = P[N(x) = 1 | N(t) = 1]
         = P[N(x) = 1 and N(t) = 1] / P[N(t) = 1]
         = P[N(x) = 1 and N(t) - N(x) = 0] / P[N(t) = 1]
         = P[N(x) = 1] P[N(t) - N(x) = 0] / P[N(t) = 1]
         = λx e^{-λx} e^{-λ(t - x)} / (λt e^{-λt})
         = x / t.

That is, given N(t) = 1, the single arrival time is uniformly distributed in [0, t]. It can be shown that if the number of arrivals in the interval [0, t] is k, then the individual arrival times are distributed independently and uniformly in the interval.
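The following sketch (an added check; λ, t, and k are arbitrary) simulates Poisson-process realizations from exponential interevent times, conditions on N(t) = k, and compares the average arrival times with the means of the order statistics of k uniform samples on [0, t]:

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t, k = 0.5, 8.0, 3
conditioned = []

for _ in range(100_000):
    # Build one Poisson-process realization on [0, t] from exponential interevent times.
    arrivals, s = [], 0.0
    while True:
        s += rng.exponential(1.0 / lam)
        if s > t:
            break
        arrivals.append(s)
    if len(arrivals) == k:                 # condition on N(t) = k
        conditioned.append(arrivals)

# The j-th smallest of k iid Uniform(0, t) samples has mean j*t/(k+1).
print(np.mean(conditioned, axis=0))
print("theory:", [j * t / (k + 1) for j in range(1, k + 1)])
```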

6.5 Stationary Random Processes

We now consider random processes whose randomness does not change with time; that is, an observation of the process in the interval (t_0, t_1) has the same behavior as an observation in (t_0 + τ, t_1 + τ).

A discrete-time or continuous-time random process X(t) is stationary if the joint distribution of any set of samples does not depend on the placement of the time origin. That is,

F_{X(t_1),...,X(t_k)}(x_1, ..., x_k) = F_{X(t_1 + τ),...,X(t_k + τ)}(x_1, ..., x_k)

for all time shifts τ, all k, and all choices of sample times t_1, ..., t_k.

Two processes X(t) and Y(t) are said to be jointly stationary if the joint cdf's of X(t_1), ..., X(t_k) and Y(t'_1), ..., Y(t'_j) do not depend on the placement of the time origin for all k and j and all choices of sampling times t_1, ..., t_k and t'_1, ..., t'_j.

The first-order cdf of a stationary random process must be independent of time:

F_{X(t)}(x) = F_{X(t + τ)}(x) = F_X(x)   for all t and τ;
m_X(t) = E[X(t)] = m   for all t;
VAR[X(t)] = E[(X(t) - m)^2] = σ^2   for all t.

The second-order cdf of a stationary random process can depend only on the time difference between the samples:

F_{X(t_1),X(t_2)}(x_1, x_2) = F_{X(0),X(t_2 - t_1)}(x_1, x_2)   for all t_1, t_2.

The autocorrelation and autocovariance of a stationary random process X(t) depend only on t_2 - t_1:

R_X(t_1, t_2) = R_X(t_2 - t_1)   for all t_1, t_2;
C_X(t_1, t_2) = C_X(t_2 - t_1)   for all t_1, t_2.

Example: Is the sum process S_n a discrete-time stationary process? We have

S_n = X_1 + X_2 + ... + X_n,

where X_i is an iid sequence. Since

m_S(n) = nm   and   VAR[S_n] = nσ^2,

the mean and variance of S_n are not constant. Thus, S_n cannot be a stationary process.

Wide-Sense Stationary Random Processes

A discrete-time or continuous-time random process X(t) is wide-sense stationary (WSS) if it satisfies

m_X(t) = m   for all t,   and
C_X(t_1, t_2) = C_X(t_1 - t_2)   for all t_1 and t_2.

Two processes X(t) and Y(t) are said to be jointly wide-sense stationary if they are both wide-sense stationary and their cross-covariance depends only on t_1 - t_2.

When X(t) is wide-sense stationary, we write

C_X(t_1, t_2) = C_X(τ)   and   R_X(t_1, t_2) = R_X(τ),

where τ = t_1 - t_2.

A stationary random process is always wide-sense stationary.

Assume that X(t) is a wide-sense stationary process.

The average power of X(t) is given by E[X(t)^2] = R_X(0) for all t.

The autocorrelation function of X(t) is an even function, since

R_X(τ) = E[X(t + τ)X(t)] = E[X(t)X(t + τ)] = R_X(-τ).

The autocorrelation function is a measure of the rate of change of a random process. Consider the change in the process from time t to t + τ:

P[|X(t + τ) - X(t)| > ε] = P[(X(t + τ) - X(t))^2 > ε^2]
                         ≤ E[(X(t + τ) - X(t))^2] / ε^2
                         = 2(R_X(0) - R_X(τ)) / ε^2,

where we apply the Markov inequality to obtain the upper bound. If R_X(0) - R_X(τ) is small, then the probability of a large change in X(t) in τ seconds is small.

The autocorrelation function is maximum at τ = 0, since

R_X(τ)^2 = E[X(t + τ)X(t)]^2 ≤ E[X^2(t + τ)]E[X^2(t)] = R_X(0)^2,

where we use the fact that E[XY]^2 ≤ E[X^2]E[Y^2].

If R_X(0) = R_X(d), then R_X(τ) is periodic with period d and X(t) is mean square periodic, i.e., E[(X(t + d) - X(t))^2] = 0. Indeed,

E[(X(t + τ + d) - X(t + τ))X(t)]^2 ≤ E[(X(t + τ + d) - X(t + τ))^2]E[X^2(t)],

which implies that

{R_X(τ + d) - R_X(τ)}^2 ≤ 2{R_X(0) - R_X(d)} R_X(0) = 0.

Therefore, R_X(τ + d) = R_X(τ). The fact that X(t) is mean square periodic follows from

E[(X(t + d) - X(t))^2] = 2{R_X(0) - R_X(d)} = 0.

Let X(t) = m + N(t), where N(t) is a zero-mean process for which R_N(τ) → 0 as τ → ∞. Then

R_X(τ) = E[(m + N(t + τ))(m + N(t))] = m^2 + 2m E[N(t)] + R_N(τ) = m^2 + R_N(τ) → m^2   as τ → ∞.

In summary, the autocorrelation function can have three types of components: (1) a component that approaches zero as τ → ∞; (2) a periodic component; and (3) a component due to a nonzero mean.

Wide-Sense Stationary Gaussian Random Processes

If a Gaussian random process is wide-sense stationary, then it is also stationary. This is due to the fact that the joint pdf of a Gaussian random process is completely determined by the mean m_X(t) and the autocovariance C_X(t_1, t_2).

Example: Let X_n be an iid sequence of Gaussian random variables with zero mean and variance σ^2, and let Y_n be

Y_n = (X_n + X_{n-1}) / 2.

The mean of Y_n is zero since E[X_i] = 0 for all i. The covariance of Y_n is

C_Y(i, j) = E[Y_i Y_j] = (1/4) E[(X_i + X_{i-1})(X_j + X_{j-1})]
          = (1/4) {E[X_i X_j] + E[X_i X_{j-1}] + E[X_{i-1} X_j] + E[X_{i-1} X_{j-1}]}
          = (1/2) σ^2,   if i = j,
          = (1/4) σ^2,   if |i - j| = 1,
          = 0,           otherwise.

Y_n is a wide-sense stationary process since it has a constant mean and a covariance function that depends only on i - j. Y_n is a Gaussian random process since any set of its samples is a linear function of jointly Gaussian random variables. Therefore, Y_n is a stationary random process.
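A short empirical check (added; σ and the sample length are my choices) estimates the covariance of this moving average at lags 0, 1, and 2:

```python
import numpy as np

rng = np.random.default_rng(8)
sigma, n = 2.0, 500_000

X = rng.normal(0.0, sigma, size=n)
Y = 0.5 * (X[1:] + X[:-1])        # Y_n = (X_n + X_{n-1}) / 2, zero mean

for lag, theory in [(0, sigma**2 / 2), (1, sigma**2 / 4), (2, 0.0)]:
    c = np.mean(Y[lag:] * Y[:len(Y) - lag])
    print(f"lag {lag}: C_Y ~ {c:.4f} (theory {theory})")
```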

Cyclostationary Random Processes

A random process X(t) is said to be cyclostationary with period T if the joint cdf's of X(t_1), X(t_2), ..., X(t_k) and X(t_1 + mT), X(t_2 + mT), ..., X(t_k + mT) are the same for all k, m, and all choices of t_1, ..., t_k:

F_{X(t_1),X(t_2),...,X(t_k)}(x_1, x_2, ..., x_k) = F_{X(t_1 + mT),X(t_2 + mT),...,X(t_k + mT)}(x_1, x_2, ..., x_k).

X(t) is said to be wide-sense cyclostationary if

m_X(t + mT) = m_X(t)   and   C_X(t_1 + mT, t_2 + mT) = C_X(t_1, t_2).

Example: Consider a random amplitude sinusoid with period T: X(t) = A cos(2πt/T). Is X(t) cyclostationary? Wide-sense cyclostationary?

We have

P[X(t_1) ≤ x_1, X(t_2) ≤ x_2, ..., X(t_k) ≤ x_k]
  = P[A cos(2πt_1/T) ≤ x_1, ..., A cos(2πt_k/T) ≤ x_k]
  = P[A cos(2π(t_1 + mT)/T) ≤ x_1, ..., A cos(2π(t_k + mT)/T) ≤ x_k]
  = P[X(t_1 + mT) ≤ x_1, X(t_2 + mT) ≤ x_2, ..., X(t_k + mT) ≤ x_k].

Thus, X(t) is a cyclostationary random process.

6.7 Time Averages of Random Processes and Ergodic Theorems

We consider the measurement of repeated random experiments and want to take the arithmetic average of the quantities of interest. To estimate the mean m_X(t) of a random process X(t, ζ), we use the ensemble average

m̂_X(t) = (1/N) Σ_{i=1}^N X(t, ζ_i),

where N is the number of repetitions of the experiment. The time average of a single realization is given by

⟨X(t)⟩_T = (1/2T) ∫_{-T}^{T} X(t, ζ) dt.

An ergodic theorem states conditions under which a time average converges as the observation interval becomes large. We are interested in ergodic theorems that state when time averages converge to the ensemble average.

The strong law of large numbers,

P[ lim_{n→∞} (1/n) Σ_{i=1}^n X_i = m ] = 1,

is one of the most important ergodic theorems; here X_n is an iid discrete-time random process with finite mean E[X_i] = m.

Example: Let X(t) = A for all t, where A is a zero-mean, unit-variance random variable. Find the limiting value of the time average.

The mean of the process is m_X(t) = E[X(t)] = E[A] = 0. The time average gives

⟨X(t)⟩_T = (1/2T) ∫_{-T}^{T} A dt = A.

The time average does not converge to m_X(t) = 0. Stationary processes need not be ergodic.

Let X(t) be a WSS process. Then

E[⟨X(t)⟩_T] = E[ (1/2T) ∫_{-T}^{T} X(t) dt ] = (1/2T) ∫_{-T}^{T} E[X(t)] dt = m.

Hence, ⟨X(t)⟩_T is an unbiased estimator for m. The variance of ⟨X(t)⟩_T is given by

VAR[⟨X(t)⟩_T] = E[(⟨X(t)⟩_T - m)^2]
             = E[ (1/2T) ∫_{-T}^{T} (X(t) - m) dt · (1/2T) ∫_{-T}^{T} (X(t') - m) dt' ]
             = (1/4T^2) ∫_{-T}^{T} ∫_{-T}^{T} E[(X(t) - m)(X(t') - m)] dt dt'
             = (1/4T^2) ∫_{-T}^{T} ∫_{-T}^{T} C_X(t, t') dt dt'.

Since the process X(t) is WSS, we have

VAR[⟨X(t)⟩_T] = (1/4T^2) ∫_{-T}^{T} ∫_{-T}^{T} C_X(t - t') dt dt'
             = (1/4T^2) ∫_{-2T}^{2T} (2T - |u|) C_X(u) du
             = (1/2T) ∫_{-2T}^{2T} (1 - |u|/2T) C_X(u) du.

⟨X(t)⟩_T will approach m in the mean square sense, that is, E[(⟨X(t)⟩_T - m)^2] → 0, if VAR[⟨X(t)⟩_T] approaches zero.

Theorem: Let X(t) be a WSS process with m_X(t) = m. Then

lim_{T→∞} ⟨X(t)⟩_T = m   in the mean square sense

if and only if

lim_{T→∞} (1/2T) ∫_{-2T}^{2T} (1 - |u|/2T) C_X(u) du = 0.
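As an added numerical illustration of this theorem (the choice of process and all parameters are mine), the random-phase sinusoid X(t) = cos(ωt + Θ) has C_X(u) = (1/2)cos(ωu), which satisfies the condition, so its time average should converge to m = 0 as T grows:

```python
import numpy as np

rng = np.random.default_rng(9)
omega = 2 * np.pi
theta = rng.uniform(-np.pi, np.pi)       # one realization of the phase

for T in [1, 10, 100, 1000]:
    t = np.linspace(-T, T, 200 * T + 1)
    x = np.cos(omega * t + theta)
    time_avg = x.mean()                  # approximates (1/2T) * integral of X(t) over [-T, T]
    print(f"T={T:5d}: <X(t)>_T = {time_avg:+.6f}  (ensemble mean m = 0)")
```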

The time-average estimate for the autocorrelation function is given by

⟨X(t + τ)X(t)⟩_T = (1/2T) ∫_{-T}^{T} X(t + τ)X(t) dt.

If X(t) is a WSS random process, then E[⟨X(t + τ)X(t)⟩_T] = R_X(τ). The time-average autocorrelation converges to R_X(τ) in the mean square sense if VAR[⟨X(t + τ)X(t)⟩_T] converges to zero.

If the random process is discrete-time, then the corresponding time averages are

⟨X_n⟩_T = (1/(2T + 1)) Σ_{n=-T}^{T} X_n;
⟨X_{n+k} X_n⟩_T = (1/(2T + 1)) Σ_{n=-T}^{T} X_{n+k} X_n.

If X_n is a WSS random process, then E[⟨X_n⟩_T] = m, and the variance of ⟨X_n⟩_T is given by

VAR[⟨X_n⟩_T] = (1/(2T + 1)) Σ_{k=-2T}^{2T} (1 - |k|/(2T + 1)) C_X(k).

⟨X_n⟩_T approaches m in the mean square sense, and X_n is said to be mean ergodic, if

(1/(2T + 1)) Σ_{k=-2T}^{2T} (1 - |k|/(2T + 1)) C_X(k) → 0   as T → ∞.
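To close with a discrete-time illustration (an added sketch; it reuses the moving-average process from the earlier Gaussian example, and σ, T, and the number of realizations are my own choices), the following code verifies the variance formula for ⟨X_n⟩_T, which decays to zero as T grows, so the process is mean ergodic:

```python
import numpy as np

rng = np.random.default_rng(10)
sigma, T, n_realizations = 1.0, 50, 50_000
N = 2 * T + 1                              # number of samples in the time average

# Moving-average process Y_n = (X_n + X_{n-1}) / 2 with X_n iid N(0, sigma^2),
# so C_Y(0) = sigma^2/2, C_Y(+-1) = sigma^2/4, and C_Y(k) = 0 otherwise.
X = rng.normal(0.0, sigma, size=(n_realizations, N + 1))
Y = 0.5 * (X[:, 1:] + X[:, :-1])

time_avg = Y.mean(axis=1)                  # <Y_n>_T for each realization
var_hat = time_avg.var()

# Theory: (1/N) * sum_k (1 - |k|/N) C_Y(k), which here has only the k = 0, +-1 terms.
var_theory = (sigma**2 / 2 + 2 * (1 - 1 / N) * sigma**2 / 4) / N
print(var_hat, var_theory)                 # both around 0.0099 for T = 50; -> 0 as T grows
```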