CHAPTER 2 RANDOM PROCESSES IN DISCRETE TIME


Shri Mata Vaishno Devi University (SMVDU), 2013

When characterizing or modeling a random variable, estimates of its statistical parameters must be made, since the available data form a finite sample sequence of the random process [18]. This chapter gives a brief review of relevant properties of discrete-time stochastic processes.

2.1 Autocorrelation Function

Just as for deterministic signals, it is of course perfectly possible to sample random processes. The discrete-time version of a continuous-time signal x(t) is here denoted x[n], which corresponds to x(t) sampled at the time instant t = nT. The sampling rate is F_s = 1/T in Hz, and T is the sampling interval in seconds. The index n can be interpreted as time normalized to the sampling rate. Similar to the deterministic case, it is in principle possible to reconstruct a bandlimited continuous-time stochastic process x(t) from its samples {x[n]}. In the random case, the reconstruction must be given a more precise statistical meaning, such as convergence in the mean-square sense. To simplify matters, it will throughout be assumed that all processes are (wide-sense) stationary, real-valued and zero-mean, i.e. E{x[n]} = 0. The autocorrelation function is defined as

r_x[k] = E{x[n] x[n−k]}    (2-1)

In principle, r_x[k] coincides with a sampled version of the continuous-time autocorrelation r_x(τ), but one should keep in mind that x(t) needs to be (essentially) bandlimited to make the analogy meaningful. Note that r_x[k] does not depend on the absolute time n, due to the stationarity assumption. Also note that the autocorrelation is symmetric, r_x[k] = r_x[−k] (conjugate symmetric in the complex case), and that |r_x[k]| ≤ r_x[0] = σ_x² for all k (by the Cauchy–Schwarz inequality). For two stationary processes x[n] and y[n], we define the cross-correlation function as

r_xy[k] = E{x[n] y[n−k]}    (2-2)
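Definitions (2-1) and (2-2) suggest simple sample estimates when only a finite data record is available. The sketch below is a minimal Python illustration (not part of the original text); the helper name `sample_corr`, the record length and the delay value are arbitrary choices, and the estimator shown is the common biased one that divides by the full record length N.

```python
import numpy as np

def sample_corr(x, y, k):
    """Biased sample estimate of r_xy[k] = E{x[n] y[n-k]} (use y = x for r_x[k])."""
    N = len(x)
    if k >= 0:
        return np.dot(x[k:], y[:N - k]) / N
    return np.dot(x[:N + k], y[-k:]) / N

rng = np.random.default_rng(0)
N, l = 50_000, 7
x = rng.standard_normal(N)                   # zero-mean, unit-power process
y = np.concatenate([np.zeros(l), x[:-l]])    # y[n] = x[n-l], a delayed copy

r0 = sample_corr(x, x, 0)                    # close to sigma_x^2 = 1
r5 = sample_corr(x, x, 5)                    # close to 0 (uncorrelated samples)

# With the convention r_xy[k] = E{x[n] y[n-k]}, we get r_xy[k] = r_x[k + l]
# when y[n] = x[n-l], so the delay shows up as a peak at lag k = -l:
lags = list(range(-15, 16))
rxy = [sample_corr(x, y, k) for k in lags]
k_hat = lags[int(np.argmax(rxy))]            # estimated lag of the peak
```

Locating the peak of the estimated cross-correlation in this way is exactly the time-delay estimation use of r_xy[k] discussed in the text.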

The cross-correlation measures how related the two processes are. This is useful, for example, for determining how well one can predict a desired but unmeasurable signal y[n] from the observed x[n]. If r_xy[k] = 0 for all k, we say that x[n] and y[n] are uncorrelated. It should be noted that the cross-correlation function is not necessarily symmetric, and it does not necessarily have its maximum at k = 0. For example, if y[n] is a time-delayed version of x[n], y[n] = x[n−l], then r_xy[k] = r_x[k + l], which peaks at lag k = −l. Thus, the cross-correlation can be used to estimate the time delay between two measurements of the same signal, perhaps taken at different spatial locations. This can be used, for example, for synchronization in a communication system.

2.2 Power Spectrum

While several different definitions can be made, the power spectrum is here simply introduced as the Discrete-Time Fourier Transform (DTFT) of the autocorrelation function:

P_x(e^jω) = Σ_n r_x[n] e^−jωn    (2-3)

The variable ω is the digital (or normalized) angular frequency, in radians per sample. This corresponds to the physical frequency Ω = ω/T rad/s. The spectrum is a real-valued function by construction, and it is also non-negative if r_x[n] corresponds to a proper stochastic process. Clearly, the DTFT is periodic in ω, with period 2π. The visible range of frequencies is ω ∈ (−π, π), corresponding to Ω ∈ (−πF_s, πF_s). The frequency Ω_N = πF_s, or F_N = F_s/2, is recognized as the Nyquist frequency. Thus, in order not to lose information when sampling a random process, its spectrum must be confined to |Ω| ≤ Ω_N. If this is indeed the case, then we know that the spectrum of x[n] coincides with that of x(t) (up to a scaling by T). This implies that we can estimate the spectrum of the original continuous-time signal from the discrete samples. The inverse transform is

r_x[n] = (1/2π) ∫_−π^π P_x(e^jω) e^jωn dω    (2-4)

So the autocorrelation function and the spectrum form a Fourier transform pair. In particular, at lag 0 we obtain the signal power,

r_x[0] = E{x²[n]} = (1/2π) ∫_−π^π P_x(e^jω) dω    (2-5)

which is the integral of the spectrum. Imagine that we could filter x[n] through a narrow ideal band-pass filter with passband (ω, ω + Δω). Denote the filtered signal x_w[n]. According to the above, its power is given by

E{x_w²[n]} = (1/2π) ∫_ω^(ω+Δω) P_x(e^jη) dη ≈ (Δω/2π) P_x(e^jω)    (2-6)

where the approximation becomes exact as Δω → 0 (assuming P_x(e^jω) is smooth). Hence, with Δω = 2πΔf, we get

P_x(e^jω) = E{x_w²[n]} / Δf    (2-7)

This leads to the interpretation of P_x(e^jω) as a power spectral density, with unit power (e.g. voltage squared) per normalized frequency f. Or, better yet, T·P_x(e^jω), which is the spectrum of the corresponding bandlimited continuous-time signal, has the unit power per Hz.

2.3 White Noise

The basic building block when modeling smooth stationary spectra is discrete-time white noise. This is simply a sequence of uncorrelated random variables, usually of zero mean and time-invariant power. Letting e[n] denote the white noise process, the autocorrelation function is therefore

r_e[k] = E{e[n] e[n−k]} = σ_e² for k = 0, and 0 for k ≠ 0    (2-8)

where σ_e² = E{e²[n]} is the noise power. Such noise naturally arises in electrical circuits, for example due to thermal noise. The most commonly used noise model is AWGN (Additive White Gaussian Noise); the noise samples are then drawn from an N(0, σ_e²) distribution. However, it is a mistake to believe that white noise is always Gaussian distributed. For example, the impulsive noise defined by

e[n] drawn from N(0, σ_e²/ε) with probability ε, and e[n] = 0 with probability 1 − ε    (2-9)

where 0 < ε ≪ 1, is a stationary white noise whose realizations bear very little resemblance to AWGN. Two examples are shown in Figure 2.1. Yet, both are white noises with identical second-order properties. This illustrates that the second-order properties do not contain all information about a random signal; only for a Gaussian process do they give a complete characterization.

Fig. (2.1) Realizations of an AWGN and an impulsive noise with ε = 0.1. Both are stationary white noises with identical second-order properties.

The spectrum of a white noise is given by

P_e(e^jω) = Σ_n r_e[n] e^−jωn = σ_e²    (2-10)

In other words, the white noise spectrum is flat over the region (−π, π). One can infer that, in order for a discrete-time noise to be white, the spectrum of the corresponding continuous-time signal should obey

P_e(Ω) = σ_e²/F_s for |Ω| ≤ πF_s, and 0 for |Ω| > πF_s    (2-11)

This is usually called critical sampling. If the sampling rate is higher than necessary, the discrete-time noise samples are in general correlated, even though the spectrum of the continuous-time noise is flat over its bandwidth.

2.4 AR, MA and ARMA Processes

Similar to the continuous-time case, it is very easy to carry the spectral properties over to a filtered signal. Let the discrete-time filter H(z) (an LTI system) have impulse response {h[n]}. Assume that the stationary process e[n] is applied as input to the filter (Fig. 2.2).

Fig. (2.2) Filtering a discrete-time random process by a digital filter

The output x[n] is then given by

x[n] = h[n] * e[n] = Σ_k h[k] e[n−k]    (2-12)
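Equation (2-12) can be exercised numerically. Inserting (2-12) into (2-1) and using the white-noise correlation (2-8) gives r_x[k] = σ_e² Σ_n h[n] h[n−k] for a white input. The sketch below (an illustration, not from the original text; the three-tap impulse response is an arbitrary choice) convolves unit-power white noise with a short FIR impulse response and compares the estimated output correlations with those predicted values.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, 0.25])       # example FIR impulse response (assumed)
N = 400_000
e = rng.standard_normal(N)           # white noise with sigma_e^2 = 1
x = np.convolve(e, h)[:N]            # x[n] = sum_k h[k] e[n-k], eq. (2-12)

# Predicted output correlations for a white input:
#   r_x[0] = 1^2 + 0.5^2 + 0.25^2 = 1.3125
#   r_x[1] = 1*0.5 + 0.5*0.25     = 0.625
r0_hat = np.mean(x**2)
r1_hat = np.mean(x[1:] * x[:-1])
```

The filtered sequence is no longer white: neighbouring samples share noise terms through h, which is exactly why r_x[1] is nonzero.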

Provided the filter is stable, it is easy to show that the output signal is also a stationary stochastic process, and that the input and output spectra are related by

P_x(e^jω) = |H(e^jω)|² P_e(e^jω)    (2-13)

Also, the cross-spectrum, which is the DTFT of the cross-correlation, is given by

P_xe(e^jω) = Σ_n r_xe[n] e^−jωn = H(e^jω) P_e(e^jω)    (2-14)

In particular, when the input signal e[n] is a white noise, the output spectrum is given by

P_x(e^jω) = σ_e² |H(e^jω)|²    (2-15)

In other words, if we can shape the amplitude characteristic |H(e^jω)| of a digital filter at will, we can use it to model the spectrum of a stationary stochastic process. Indeed, the spectral factorization theorem tells us that every smooth spectrum can be well approximated by filtered white noise, provided the filter has a sufficiently high order. We will here only consider finite orders. Thus, let the transfer function of the filter be

H(z) = B(z)/A(z)    (2-16)

where the numerator and denominator polynomials (in z^-1) are given by

B(z) = 1 + b_1 z^-1 + … + b_q z^-q
A(z) = 1 + a_1 z^-1 + … + a_p z^-p    (2-17)
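Combining (2-15) with the rational filter (2-16) gives the spectrum σ_e² |B(e^jω)|² / |A(e^jω)|², which is easy to evaluate on a frequency grid. The sketch below is illustrative only; the second-order coefficient values are hypothetical (chosen so the poles lie inside the unit circle), not taken from the text.

```python
import numpy as np

# Hypothetical stable filter: A(z) = 1 - 1.2 z^-1 + 0.72 z^-2 (poles at 0.6 +/- 0.6j),
# B(z) = 1 + 0.4 z^-1 + 0.2 z^-2, driven by white noise of power sigma_e^2 = 1.
a = np.array([1.0, -1.2, 0.72])
b = np.array([1.0, 0.4, 0.2])
sigma2 = 1.0

w = np.linspace(-np.pi, np.pi, 1001)
# On the unit circle, B(e^jw) = sum_k b_k e^{-jwk}; likewise for A(e^jw).
E = np.exp(-1j * np.outer(w, np.arange(3)))
B = E @ b
A = E @ a
Px = sigma2 * np.abs(B)**2 / np.abs(A)**2    # sigma_e^2 |B|^2 / |A|^2
```

As required of a proper spectrum of a real process, the computed Px is non-negative and an even function of ω.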

Let the input to the filter be a white noise process e[n]. The filter output is given by the difference equation

x[n] + a_1 x[n−1] + … + a_p x[n−p] = e[n] + b_1 e[n−1] + … + b_q e[n−q]    (2-18)

Thus, the signal x[n] is an auto-regression onto its own past values, combined with a moving average (perhaps "weighted average" would have been a better name) of the noise. Therefore, x[n] is called an ARMA (Auto-Regressive Moving Average) process. Sometimes the orders are emphasized by writing ARMA(p, q). The spectrum of an ARMA process is obtained as

P_x(e^jω) = σ_e² |B(e^jω)|² / |A(e^jω)|²    (2-19)

Thus, shaping the spectrum of a process reduces to appropriately selecting the filter coefficients and the noise power. For the important special case q = 0, H(z) = 1/A(z) is an all-pole filter. The process x[n] is then called an AR process, defined by

x[n] + a_1 x[n−1] + … + a_p x[n−p] = e[n]    (2-20)

The spectrum of an AR process is given by

P_x(e^jω) = σ_e² / |A(e^jω)|²    (2-21)

Similarly, when p = 0, H(z) = B(z) has only zeros, and we get the MA process

x[n] = e[n] + b_1 e[n−1] + … + b_q e[n−q]    (2-22)

with

P_x(e^jω) = σ_e² |B(e^jω)|²    (2-23)

In general, it is difficult to say when an AR model is preferred over an MA or ARMA model. However, it is easier to obtain peaky spectra by using poles close to the unit circle. Similarly, spectra with narrow dips (notches) are better modeled with zeros. See Figure 2.3 for some examples.

Fig. (2.3) Examples of spectra for the case of AR(4) and MA(4) models. The AR part better models narrow peaks, whereas the MA part is better with notches and broad peaks.

The plots are produced with σ_e² = 1 and

B(z) = A(z) = 1 − 0.16 z^-1 − 0.44 z^-2 + 0.12 z^-3 + 0.52 z^-4    (2-24)

The roots of this polynomial, which are the zeros and poles, respectively, are located at 0.9 exp(±2π·0.1j) and 0.8 exp(±2π·0.4j). This explains why we see a notch (MA process) and a peak (AR process) at the normalized frequencies 0.1 and 0.4 revolutions per sample, with the notch (peak) at 0.1 being more pronounced, because the zero (pole) there is closer to the unit circle. In practice, AR models are often preferred because they lead to simpler estimation algorithms than MA or ARMA models, and also because we are more often interested in peaks and peak locations than in other properties of the spectrum.
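The pole/zero locations quoted for (2-24) can be checked directly by factoring the polynomial numerically; the short sketch below is illustrative, not part of the original text. Because the printed coefficients are rounded, the computed roots match the stated radii and normalized frequencies only to about two decimals.

```python
import numpy as np

# Coefficients of B(z) = A(z) = 1 - 0.16 z^-1 - 0.44 z^-2 + 0.12 z^-3 + 0.52 z^-4,
# read as a polynomial in z (highest power first), as np.roots expects.
c = [1.0, -0.16, -0.44, 0.12, 0.52]
roots = np.roots(c)

moduli = np.sort(np.abs(roots))                         # close to [0.8, 0.8, 0.9, 0.9]
freqs = np.sort(np.abs(np.angle(roots)) / (2 * np.pi))  # close to [0.1, 0.1, 0.4, 0.4]
```

The two conjugate pairs recover the stated locations 0.9 exp(±2π·0.1j) and 0.8 exp(±2π·0.4j), confirming which root pair produces the sharper feature at normalized frequency 0.1.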